Search Results

Search found 1015 results on 41 pages for 'mongodb csharp'.

Page 8/41

  • meteor mongodb _id changing after insert (and UUID property as well)

    - by lommaj
    I have meteor method that does an insert. Im using Regulate.js for form validation. I set the game_id field to Meteor.uuid() to create a unique value that I also route to /game_show/:game_id using iron router. As you can see I'm logging the details of the game, this works fine. (image link to log below) Meteor.methods({ create_game_form : function(data){ Regulate.create_game_form.validate(data, function (error, data) { if (error) { console.log('Server side validation failed.'); } else { console.log('Server side validation passed!'); // Save data to database or whatever... //console.log(data[0].value); var new_game = { game_id: Meteor.uuid(), name : data[0].value, game_type: data[1].value, creator_user_id: Meteor.userId(), user_name: Meteor.user().profile.name, created: new Date() }; console.log("NEW GAME BEFORE INSERT: ", new_game); GamesData.insert(new_game, function(error, new_id){ console.log("GAMES NEW MONGO ID: ", new_id) var game_data = GamesData.findOne({_id: new_id}); console.log('NEW GAME AFTER INSERT: ', game_data); Session.set('CURRENT_GAME', game_data); }); } }); } }); All of the data coming out of the console.log at this point works fine After this method call the client routes to /game_show/:game_id Meteor.call('create_game_form', data, function(error){ if(error){ return alert(error.reason); } //console.log("post insert data for routing variable " ,data); var created_game = Session.get('CURRENT_GAME'); console.log("Session Game ", created_game); Router.go('game_show', {game_id: created_game.game_id}); }); On this view, I try to load the document with the game_id I just inserted Template.game_start.helpers({ game_info: function(){ console.log(this.game_id); var game_data = GamesData.find({game_id: this.game_id}); console.log("trying to load via UUID ", game_data); return game_data; } }); sorry cant upload images... :-( https://www.evernote.com/shard/s21/sh/c07e8047-de93-4d08-9dc7-dae51668bdec/a8baf89a09e55f8902549e79f136fd45 As you can see from the image of the console log below, everything matches the id logged before insert the id logged in the insert callback using findOne() the id passed in the url However the mongo ID and the UUID I inserted ARE NOT THERE, the only document in there has all the other fields matching except those two! Not sure what im doing wrong. Thanks!
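
    One thing worth checking here (a sketch, not a verified fix): Meteor methods run both as a client-side stub and on the server, and Meteor.uuid() returns a different value in each run, so the simulated document and the server document can end up with different game_id values. Generating the id once on the client and passing it into the method keeps both in sync; the extra gameId argument below is an illustrative change to the original code.

        // client: create the id once, pass it to the method, and route with it
        var gameId = Meteor.uuid();   // or Random.id(), if available
        Meteor.call('create_game_form', data, gameId, function (error) {
          if (error) return alert(error.reason);
          Router.go('game_show', { game_id: gameId });
        });

        // method: use the id that was passed in instead of generating a new one
        Meteor.methods({
          create_game_form: function (data, gameId) {
            // ...validation as before...
            GamesData.insert({ game_id: gameId, name: data[0].value /* ... */ });
          }
        });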

    Read the article

  • MongoDB lists with paginations?

    - by Timmy
    For documents with paginated lists, is it better to embed or to use references? I'm reading about the custom type "SONManipulator" and it appears to transform everything on retrieval, even the subdocuments. I want to keep the list in the document sorted; should this impact anything?
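
    If the list stays embedded, MongoDB's $slice projection can page through the array on the server, so only the requested slice is returned and the sorted order of the array is preserved. A sketch, with the collection and field names assumed:

        // return elements 20..29 of the (pre-sorted) embedded comments array
        db.items.find(
          { _id: someId },
          { comments: { $slice: [20, 10] } }
        )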

    Read the article

  • MongoDB with OR and Range Indexes

    - by LMH
    I have a query: {"$query"=>{"user_id"=>"512f7960534dcda22b000491", "$or"=>[{"when_tz"=>{"$gte"=>2010-06-24 04:00:00 UTC, "$lt"=>2010-06-25 04:00:00 UTC}}, {"when_tz"=>{"$gte"=>2011-06-24 04:00:00 UTC, "$lt"=>2011-06-25 04:00:00 UTC}}, {"when_tz"=>{"$gte"=>2012-06-24 04:00:00 UTC, "$lt"=>2012-06-25 04:00:00 UTC}}], "_type"=>{"$in"=>["FacebookImageItem", "FoursquareImageItem", "InstagramItem", "TwitterImageItem", "Image"]}}, "$explain"=>true, "$orderby"=>{"when_tz"=>1}} And an index: { user_id: 1, _type: 1, when_tz: 1 } Explain: {"cursor"="BtreeCursor user_id_1__type_1_facebook_id_1 multi", "isMultiKey"=false, "n"=28, "nscannedObjects"=15094, "nscanned"=15098, "nscannedObjectsAllPlans"=181246, "nscannedAllPlans"=241553, "scanAndOrder"=true, "indexOnly"=false, "nYields"=12, "nChunkSkips"=0, "millis"=2869, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "facebook_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}, "allPlans"=[{"cursor"="BtreeCursor user_id_1__type_1_facebook_id_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15098, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "facebook_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_twitter_id_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "twitter_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_instagram_id_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "instagram_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_foursquare_id_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "foursquare_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_phash_1", "n"=21, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "phash"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_aperature_1_shutter_speed_1_when_tz_1", "n"=25, "nscannedObjects"=35, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "aperature"=[[{"$minElement"=1}, {"$maxElement"=1}]], "shutter_speed"=[[{"$minElement"=1}, {"$maxElement"=1}]], 
"when_tz"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_image_hash_1", "n"=22, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "image_hash"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_time_zone_guessed_1_when_tz_-1", "n"=23, "nscannedObjects"=32, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "time_zone_guessed"=[[{"$minElement"=1}, {"$maxElement"=1}]], "when_tz"=[[{"$maxElement"=1}, {"$minElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_time_zone_guessed_1_when_tz_1", "n"=24, "nscannedObjects"=33, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "time_zone_guessed"=[[{"$minElement"=1}, {"$maxElement"=1}]], "when_tz"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_time_zone_guessed_1_when_utc_-1", "n"=23, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "time_zone_guessed"=[[{"$minElement"=1}, {"$maxElement"=1}]], "when_utc"=[[{"$maxElement"=1}, {"$minElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_time_zone_guessed_1_when_utc_1", "n"=24, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "time_zone_guessed"=[[{"$minElement"=1}, {"$maxElement"=1}]], "when_utc"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1_original_shared_item_id_1", "n"=24, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "original_shared_item_id"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_s3_tmp_file_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "s3_tmp_file"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_processed_-1_uploaded_-1_image_device_1 multi", "n"=28, "nscannedObjects"=15094, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "processed"=[[{"$maxElement"=1}, {"$minElement"=1}]], "uploaded"=[[{"$maxElement"=1}, {"$minElement"=1}]], "image_device"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BtreeCursor user_id_1__type_1_when_tz_1 multi", "n"=28, "nscannedObjects"=28, "nscanned"=15097, "indexBounds"={"user_id"=[["512f7960534dcda22b000491", "512f7960534dcda22b000491"]], "_type"=[["FacebookImageItem", "FacebookImageItem"], ["FoursquareImageItem", "FoursquareImageItem"], ["Image", "Image"], ["InstagramItem", "InstagramItem"], ["TwitterImageItem", "TwitterImageItem"]], "when_tz"=[[{"$minElement"=1}, {"$maxElement"=1}]]}}, {"cursor"="BasicCursor", "n"=0, "nscannedObjects"=15097, "nscanned"=15097, "indexBounds"={}}], "server"=""} Any idea how to get it to hit the indexes?

    Read the article

  • Removing an object in a child collection in MongoDB

    - by Jeremy B.
    I've got a collection of content. Each content document has a collection of responses, like so: Content : [{ 'id' : '1234', 'Responses' : [{ 'id' : '12345' }, ... ] }] and so on. Now, I want to remove response 12345, but I don't want to remove all of the responses. I can't seem to find the command to do so. I'm getting the impression that the correct action is to grab the object, rebuild the Responses without the one I want removed, and then save the Content object as a whole. Given that there may be many responses, having to load the entire object that way seems like bad practice.
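
    There is an update operator for exactly this: $pull removes matching elements from an embedded array in place, without loading or rewriting the rest of the document. A sketch, assuming the collection is called content and the fields are named as in the excerpt above:

        // remove only the response with id '12345' from the Responses array
        db.content.update(
          { 'id' : '1234' },
          { $pull : { 'Responses' : { 'id' : '12345' } } }
        )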

    Read the article

  • when to index on multiple keys in mongodb

    - by Evan
    Say I have an Item document with :price and :qty fields. I sometimes want to find all documents matching a given :price AND :qty, and at other times I query on either :price on its own or :qty on its own. I have already indexed the :price and :qty keys individually, but do I also need to create a compound index on both together, or are the single-key indexes enough?
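
    A compound index can serve queries on its leftmost prefix, so one common arrangement (sketched below, collection name assumed) is a compound index plus a single-key index on the non-prefix field; the standalone :price index then becomes redundant.

        db.items.ensureIndex({ price: 1, qty: 1 })  // serves price-only and price+qty queries
        db.items.ensureIndex({ qty: 1 })            // still needed for qty-only queries
        // a separate { price: 1 } index is redundant once the compound index exists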

    Read the article

  • MongoDB map reduce count giving more results than a query

    - by giorgiosironi
    I have a collection users in Mongo and I execute this map reduce which I believe is the equivalent of a COUNT(*) GROUP BY origin: > m = function() { for (i in this.membership) { ... emit( this.membership[i].platform_profile.origin, 1 ); ... } } function () { for (i in this.membership) { emit(this.membership[i].platform_profile.origin, 1); } } > r = function( id, values ) { var result = 0; ... for ( var i = 0; i < values.length; i ++ ) { result += values[i]; } ... return result; } function (id, values) { var result = 0; for (var i = 0; i < values.length; i++) { result += values[i]; } return result; } > db.users.mapReduce(m, r, {out : { inline: 1}}); { "results" : [ { "_id" : 0, "value" : 15 }, { "_id" : 1, "value" : 449 }, ... } But if I try to count how many documents have this field set to a specific value like 1, I get fewer results: db.users.count({"membership.platform_profile.origin": 1}); 424 What am I missing?
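
    A likely explanation: the map function emits once per element of this.membership, so a user with several memberships of origin 1 contributes more than one to the map/reduce total, while count() counts each matching document exactly once. A sketch to compare the two figures, assuming MongoDB 2.2+ for the aggregation framework:

        // counts membership elements, like the map/reduce does
        db.users.aggregate([
          { $unwind: "$membership" },
          { $match: { "membership.platform_profile.origin": 1 } },
          { $group: { _id: null, n: { $sum: 1 } } }
        ])

        // counts documents, like the query above
        db.users.count({ "membership.platform_profile.origin": 1 })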

    Read the article

  • Mongodb, simple IN problem

    - by afvasd
    Hi everyone, I am new to Mongo. This is my db design: product := { name: str, group: ref, comments: [ ref, ref, ref, ref ] } comments := { ... a bunch of comment fields } tag := { _id: int, #Need this for online requests tag: str, products: [ {product: ref, score: float}, ... ], comments: [ {comment: ref, score: float}, ...], } My usage pattern is: GIVEN a product, find the comments that have a certain tag and sort them accordingly. My current approach is: (1) look up the tag object where tag=myTag; (2) pull all of its comments out, sorted; (3) look up the product where product.name=myProduct; (4) pull all of its comments out (which are DBRefs, by the way); (5) loop through the result of (2), checking whether each one is in (4), with a limit of 10. It's pretty inefficient. Any better methods?
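
    Since the current pattern intersects two lists in application code, one common alternative (a sketch only; every name below is assumed) is to denormalize the tag onto each comment, so a single indexed query does the product + tag filtering. The per-tag score can live on the comment as well, or stay in the tag document with the final sorting done in the application.

        // comments carry their product and an array of {tag, score} pairs
        db.comments.ensureIndex({ product: 1, 'tags.tag': 1 })

        db.comments.find({ product: productId, 'tags.tag': 'mytag' }).limit(10)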

    Read the article

  • Get _id from MongoDB using Scala

    - by user2438383
    For a given JSON how do I get the _id to use it as an id for inserting in another JSON? Tried to get the ID as shown below but does not return correct results. private def getModelRunId(): List[String] = { val resultsCursor: List[DBObject] = modelRunResultsCollection.find(MongoDBObject.empty, MongoDBObject(FIELD_ID -> 1)).toList println("resultsCursor >>>>>>>>>>>>>>>>>> " + resultsCursor) resultsCursor.map(x => (Json.parse(x.toString()) \ FIELD_ID).asOpt[String]).flatten } { "_id": ObjectId("5269723bd516ec3a69f3639e"), "modelRunId": ObjectId("5269723ad516ec3a69f3639d"), "results": [ { "ClaimId": "526971f5b5b8b9148404623a", "pricingResult": { "TxId": 0, "ClaimId": "Large_Batch_1", "Errors": [ ], "Disposition": [ { "GroupId": 1, "PriceAmt": 20, "Status": "Priced Successfully", "ReasonCode": 0, "Reason": "RmbModel(PAM_DC_1):ProgramNode(Validation CPG):ServiceGroupNode(Medical Services):RmbTerm(RT)", "PricingMethodologyId": 2, "Lines": [ { "Id": 1 } ] } ] } },

    Read the article

  • How to find all circles that overlap a central circle, given its radius?

    - by roza
    How to do an intersection or overlap query in mongo shell - what circles overlap my search region? Within relate only to the center position but doesn't include radius of the other circles in searched scope. Mongo: # My bad conception: var search = [[30, 30], 10] db.places.find({circle : {"$within" : {"$center" : [search]}}}) Now I can obtain only this circles within central point lies in searched area of circle: Ruby: # field :circle, type: Circle # eg. [ [ 30, 30 ], 10 ] field :radius, type: Integer field :location, :type => Array, :spatial => true spatial_index :location Places.within_circle(location: [ [ 30, 30 ], 10 ]) # {"$query"=>{"location"=>{"$within"=>{"$center"=>[[30, 30], 10]}}} I created example data with additional location (special index) and radius instead circle because circle isn't supported by mongodb geo index: { "_id" : 1, "name" : "a", "circle" : [ [ 5, 5 ], 40 ], "latlng" : [ 5, 5 ], "radius" : 40 } { "_id" : 2, "name" : "b", "circle" : [ [ 10, 10 ], 5 ], "latlng" : [ 10, 10 ], "radius" : 5 } { "_id" : 3, "name" : "c", "circle" : [ [ 20, 20 ], 5 ], "latlng" : [ 20, 20 ], "radius" : 5 } { "_id" : 4, "name" : "d", "circle" : [ [ 30, 30 ], 50 ], "latlng" : [ 30, 30 ], "radius" : 50} { "_id" : 5, "name" : "e", "circle" : [ [ 80, 80 ], 30 ], "latlng" : [ 80, 80 ], "radius" : 30} { "_id" : 6, "name" : "f", "circle" : [ [ 80, 80 ], 20 ], "latlng" : [ 80, 80 ], "radius" : 20} Desired query result: { "_id" : 1, "name" : "a", "circle" : [ [ 5, 5 ], 40 ], "latlng" : [ 5, 5 ], "radius" : 40 } { "_id" : 3, "name" : "c", "circle" : [ [ 20, 20 ], 5 ], "latlng" : [ 20, 20 ], "radius" : 5 } { "_id" : 4, "name" : "d", "circle" : [ [ 30, 30 ], 50 ], "latlng" : [ 30, 30 ], "radius" : 50} { "_id" : 5, "name" : "e", "circle" : [ [ 80, 80 ], 30 ], "latlng" : [ 80, 80 ], "radius" : 30} Solution below assumes that I get all rows and then filter on the ruby side my radius but it returns only: { "_id" : 4, "name" : "d", "circle" : [ [ 30, 30 ], 50 ], "latlng" : [ 30, 30 ], "radius" : 50}
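
    Since $center only tests the stored center points, one workaround (a sketch; it assumes a 2d index on latlng, flat geometry, and that the largest stored radius is known) is to widen the search radius by that maximum to get candidates, then keep the candidates whose center distance is at most the sum of the two radii:

        var search = { center: [30, 30], radius: 10 };
        var maxRadius = 50;   // largest radius stored in the collection

        db.places.find({
          latlng: { $within: { $center: [ search.center, search.radius + maxRadius ] } }
        }).toArray().filter(function (p) {
          var dx = p.latlng[0] - search.center[0];
          var dy = p.latlng[1] - search.center[1];
          // two circles overlap when the distance between centers <= sum of the radii
          return Math.sqrt(dx * dx + dy * dy) <= search.radius + p.radius;
        });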

    Read the article

  • Why, after deleting a 110+ GB collection, does my /var/lib/mongodb directory still have the same size?

    - by tunnuz
    I am having some trouble with MongoDB and space usage. In particular, I used to have a large collection of about 600 million records totaling 110+ GB on disk. Recently I decided to drop it because the data was outdated, and I did so through rockmongo's web interface. Rockmongo no longer shows the collection, yet my disk usage hasn't changed at all. Is there a cleanup operation I am not aware of that must be run to synchronize the database with the data files on disk? I have tried to perform a "repair", but the system complains that there's not enough space on disk ... because it is all used by MongoDB.
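
    For context: dropping a collection frees space inside MongoDB's data files, but the files themselves are never shrunk, so OS-level usage stays the same. The usual options (sketched below; names and paths are assumed) are dropping the whole database, which does delete its files, running a repair with --repairpath pointed at a volume with enough scratch space, or resyncing from a replica.

        // from the mongo shell: if everything in the database can go, this removes its files
        use mydb
        db.dropDatabase()

        // from the command line: rewrite the data files compactly, using another volume
        // for the temporary copy (needs roughly the size of the remaining data)
        // mongod --dbpath /var/lib/mongodb --repair --repairpath /mnt/spare/mongo_repair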

    Read the article

  • What hardware makes a good MongoDB Server ? Where to get it ?

    - by João Pinto Jerónimo
    Suppose you're on dell.com right now and you're buying a server to run the MongoDB database for your small startup. You will have to handle literally tens of thousands of writes and reads per minute (but small objects). Would you go for 2 processors? Invest more in RAM? I've heard (correct me if I'm wrong) that MongoDB keeps as much as it can in RAM and then flushes everything to disk; in that case I should invest in a CPU with a large L2 cache, probably 40 GB of RAM, and a solid-state drive... right? Would I be better off with one high-end server (~$11,309, 2 expensive processors, 96 GB of RAM) or 2 servers (~$6,419 each, 2 expensive processors, 12 GB of RAM)? Is Dell OK, or do you have better suggestions? (I'm outside the US, in Portugal.)

    Read the article

  • Why doesn't MongoDb store my slashes in this string?

    - by Rob Dudley
    Can anyone tell me why this command doesn't work from the MongoDB shell client: db.coll.update({'live':true},{$set:{'mask':"\D\D\D\D\D\D\D\D"}},false,true) but db.coll.findOne({'id':'someId'}) returns the mask field as: "mask" : "DDDDDDDD", Where are the slashes going? I've tried "double escaping" with \\D and that inserts both slashes: "mask" : "\\D\\D\\D\\D\\D\\D\\D\\D", MongoDB shell version: 2.0.6, MongoDB version: 2.0.5, OSX Lion Thanks
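
    What is probably happening: in the shell's JavaScript string, \D is not a recognized escape, so only D reaches the server, while \\D stores one real backslash per pair and findOne() re-escapes it on output. A quick sketch to check what is actually stored:

        var doc = db.coll.findOne({ 'id' : 'someId' });
        doc.mask.length         // 16 for eight "\\D" pairs: one backslash + one D each
        doc.mask.charCodeAt(0)  // 92, i.e. a single literal backslash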

    Read the article

  • data source does not support server-side data paging using ASP.NET C#

    - by Aamir Hasan
    Yesterday someone mailed me asking about "data source does not support server-side data paging", so I am writing the solution here. If you have this problem, read this article and see the example code; it will help you a lot. The only change you have to make is in DataBind(). Here you have used a SqlDataReader to read the data retrieved from the database, but a SqlDataReader is forward-only; you cannot traverse back and forth over it. The solution is to use a SqlDataAdapter and a DataSet, so your function changes to something like this:

        private void DataBind()
        {
            // for the grid view
            SqlCommand cmdO;
            string SQL = "select * from TABLE ";
            conn.Open();
            cmdO = new SqlCommand(SQL, conn);
            SqlDataAdapter da = new SqlDataAdapter(cmdO);
            DataSet ds = new DataSet();
            da.Fill(ds);
            GridView1.Visible = true;
            GridView1.DataSource = ds;
            GridView1.DataBind();
            ds.Dispose();
            da.Dispose();
            conn.Close();
        }

    This surely works. The rest of your code is fine. Enjoy coding.

    Read the article

  • Shuffle tiles position in the beginning of the game XNA Csharp

    - by GalneGunnar
    Im trying to create a puzzlegame where you move tiles to certain positions to make a whole image. I need help with randomizing the tiles startposition so that they don't create the whole image at the beginning. There is also something wrong with my offset, that's why it's set to (0,0). I know my code is not good, but Im just starting to learn :] Thanks in advance My Game1 class: { public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; Texture2D PictureTexture; Texture2D FrameTexture; // Offset för bildgraff Vector2 Offset = new Vector2(0,0); //skapar en array som ska hålla delar av den stora bilden Square[,] squareArray = new Square[4, 4]; // Random randomeraBilder = new Random(); //Width och Height för bilden int pictureHeight = 95; int pictureWidth = 144; Random randomera = new Random(); int index = 0; MouseState oldMouseState; int WindowHeight; int WindowWidth; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; //scalar Window till 800x 600y graphics.PreferredBackBufferWidth = 800; graphics.PreferredBackBufferHeight = 600; graphics.ApplyChanges(); } protected override void Initialize() { IsMouseVisible = true; base.Initialize(); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); PictureTexture = Content.Load<Texture2D>(@"Images/bildgraff"); FrameTexture = Content.Load<Texture2D>(@"Images/framer"); //Laddar in varje liten bild av den stora bilden i en array for (int x = 0; x < 4; x++) { for (int y = 0; y < 4; y++) { Vector2 position = new Vector2(x * pictureWidth, y * pictureHeight); position = position + Offset; Rectangle square = new Rectangle(x * pictureWidth, y * pictureHeight, pictureWidth, pictureHeight); Square frame = new Square(position, PictureTexture, square, Offset, index); squareArray[x, y] = frame; index++; } } } protected override void UnloadContent() { } protected override void Update(GameTime gameTime) { if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); MouseState ms = Mouse.GetState(); if (oldMouseState.LeftButton == ButtonState.Pressed && ms.LeftButton == ButtonState.Released) { // ta reda på vilken position vi har tryckt på int col = ms.X / pictureWidth; int row = ms.Y / pictureHeight; for (int x = 0; x < squareArray.GetLength(0); x++) { for (int y = 0; y < squareArray.GetLength(1); y++) { // kollar om rutan är tom och så att indexet inte går utanför för "col" och "row" if (squareArray[x, y].index == 0 && col >= 0 && row >= 0 && col <= 3 && row <= 3) { if (squareArray[x, y].index == 0 * col) { //kollar om rutan brevid mouseclick är tom if (col > 0 && squareArray[col - 1, row].index == 0 || row > 0 && squareArray[col, row - 1].index == 0 || col < 3 && squareArray[col + 1, row].index == 0 || row < 3 && squareArray[col, row + 1].index == 0) { Square sqaure = squareArray[col, row]; Square hal = squareArray[x, y]; squareArray[x, y] = sqaure; squareArray[col, row] = hal; for (int i = 0; i < 4; i++) { for (int j = 0; j < 4; j++) { Vector2 goalPosition = new Vector2(x * pictureWidth, y * pictureHeight); squareArray[x, y].Swap(goalPosition); } } } } } } } } //if (oldMouseState.RightButton == ButtonState.Pressed && ms.RightButton == ButtonState.Released) //{ // for (int x = 0; x < 4; x++) // { // for (int y = 0; y < 4; y++) // { // } // } //} oldMouseState = ms; base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); WindowHeight = 
Window.ClientBounds.Height; WindowWidth = Window.ClientBounds.Width; Rectangle screenPosition = new Rectangle(0,0, WindowWidth, WindowHeight); spriteBatch.Begin(); spriteBatch.Draw(FrameTexture, screenPosition, Color.White); //Ritar ut alla brickorna förutom den som har index 0 for (int x = 0; x < 4; x++) { for (int y = 0; y < 4; y++) { if (squareArray[x, y].index != 0) { squareArray[x, y].Draw(spriteBatch); } } } spriteBatch.End(); base.Draw(gameTime); } } } My square class: class Square { public Vector2 position; public Texture2D grafTexture; public Rectangle square; public Vector2 offset; public int index; public Square(Vector2 position, Texture2D grafTexture, Rectangle square, Vector2 offset, int index) { this.position = position; this.grafTexture = grafTexture; this.square = square; this.offset = offset; this.index = index; } public void Draw(SpriteBatch spritebatch) { spritebatch.Draw(grafTexture, position, square, Color.White); } public void RandomPosition() { } public void Swap(Vector2 Goal ) { if (Goal.X > position.X) { position.X = position.X + 144; } else if (Goal.X < position.X) { position.X = position.X - 144; } else if (Goal.Y < position.Y) { position.Y = position.Y - 95; } else if (Goal.Y > position.Y) { position.Y = position.Y + 95; } } } }

    Read the article

  • MongoDB and GridFS. What are the best storage options in the range of 1 TB?

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB of file storage per paid user. I am going to use GridFS for storing files. I am pondering the different options for hosting the database, but since I am inexperienced at deployment and this is my first time with MongoDB, I need your experience. Criteria: I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer and I do not like to mess with server configuration, so I would like a fully managed hosting solution, but I would like to know about any other option if you think it is worth it. It should be able to scale, cloud style, pay as you go. The lower the price, the better. So far I know of these services: https://mongohq.com/pricing https://mongomachine.com/pricing https://mongolab.com/about/pricing/ http://cloudcontrol.com/add-ons/mongodb/ They seem to be OK for common needs, that is, no file storage. But I am going to use GridFS, so size matters, and these services seem to scale quite poorly in price. MongoHQ: the largest plan's max storage is 20 GB, which seems like very little storage for GridFS. MongoMachine: flat price, $2.5 per GB; I didn't find the limit. Seems like a good price compared to the others. MongoLab: 3.984 GB max, which I don't think I will hit, so perfect; $8 per GB, quite costly. CloudControl: the largest plan is 20 GB; the custom service starts at 250€ plus some unspecified charge per GB. What is your experience with these services? Any downtime? Other possibilities?

    Read the article

  • Learning AngularJS by Example – The Customer Manager Application

    - by dwahlin
    I’m always tinkering around with different ideas and toward the beginning of 2013 decided to build a sample application using AngularJS that I call Customer Manager. It’s not exactly the most creative name or concept, but I wanted to build something that highlighted a lot of the different features offered by AngularJS and how they could be used together to build a full-featured app. One of the goals of the application was to ensure that it was approachable by people new to Angular since I’ve never found overly complex applications great for learning new concepts. The application initially started out small and was used in my AngularJS in 60-ish Minutes video on YouTube but has gradually had more and more features added to it and will continue to be enhanced over time. It’ll be used in a new “end-to-end” training course my company is working on for AngularjS as well as in some video courses that will be coming out. Here’s a quick look at what the application home page looks like: In this post I’m going to provide an overview about how the application is organized, back-end options that are available, and some of the features it demonstrates. I’ve already written about some of the features so if you’re interested check out the following posts: Building an AngularJS Modal Service Building a Custom AngularJS Unique Value Directive Using an AngularJS Factory to Interact with a RESTful Service Application Structure The structure of the application is shown to the right. The  homepage is index.html and is located at the root of the application folder. It defines where application views will be loaded using the ng-view directive and includes script references to AngularJS, AngularJS routing and animation scripts, plus a few others located in the Scripts folder and to custom application scripts located in the app folder. The app folder contains all of the key scripts used in the application. There are several techniques that can be used for organizing script files but after experimenting with several of them I decided that I prefer things in folders such as controllers, views, services, etc. Doing that helps me find things a lot faster and allows me to categorize files (such as controllers) by functionality. My recommendation is to go with whatever works best for you. Anyone who says, “You’re doing it wrong!” should be ignored. Contrary to what some people think, there is no “one right way” to organize scripts and other files. As long as the scripts make it down to the client properly (you’ll likely minify and concatenate them anyway to reduce bandwidth and minimize HTTP calls), the way you organize them is completely up to you. Here’s what I ended up doing for this application: Animation code for some custom animations is located in the animations folder. In addition to AngularJS animations (which are defined using CSS in Content/animations.css), it also animates the initial customer data load using a 3rd party script called GreenSock. Controllers are located in the controllers folder. Some of the controllers are placed in subfolders based upon the their functionality while others are placed at the root of the controllers folder since they’re more generic:   The directives folder contains the custom directives created for the application. The filters folder contains the custom filters created for the application that filter city/state and product information. The partials folder contains partial views. This includes things like modal dialogs used in the application. 
The services folder contains AngularJS factories and services used for various purposes in the application. Most of the scripts in this folder provide data functionality. The views folder contains the different views used in the application. Like the controllers folder, the views are organized into subfolders based on their functionality:   Back-End Services The Customer Manager application (grab it from Github) provides two different options on the back-end including ASP.NET Web API and Node.js. The ASP.NET Web API back-end uses Entity Framework for data access and stores data in SQL Server (LocalDb). The other option on the back-end is Node.js, Express, and MongoDB.   Using the ASP.NET Web API Back-End To run the application using ASP.NET Web API/SQL Server back-end open the .sln file at the root of the project in Visual Studio 2012 or higher (the free Express 2013 for Web version is fine). Press F5 and a browser will automatically launch and display the application. Using the Node.js Back-End To run the application using the Node.js/MongoDB back-end follow these steps: In the CustomerManager directory execute 'npm install' to install Express, MongoDB and Mongoose (package.json). Load sample data into MongoDB by performing the following steps: Execute 'mongod' to start the MongoDB daemon Navigate to the CustomerManager directory (the one that has initMongoCustData.js in it) then execute 'mongo' to start the MongoDB shell Enter the following in the mongo shell to load the seed files that handle seeding the database with initial data: use custmgr load("initMongoCustData.js") load("initMongoSettingsData.js") load("initMongoStateData.js") Start the Node/Express server by navigating to the CustomerManager/server directory and executing 'node app.js' View the application at http://localhost:3000 in your browser. Key Features The Customer Manager application certainly doesn’t cover every feature provided by AngularJS (as mentioned the intent was to keep it as simple as possible) but does provide insight into several key areas: Using factories and services as re-useable data services (see the app/services folder) Creating custom directives (see the app/directives folder) Custom paging (see app/views/customers/customers.html and app/controllers/customers/customersController.js) Custom filters (see app/filters) Showing custom modal dialogs with a re-useable service (see app/services/modalService.js) Making Ajax calls using a factory (see app/services/customersService.js) Using Breeze to retrieve and work with data (see app/services/customersBreezeService.js). Switch the application to use the Breeze factory by opening app/services.config.js and changing the useBreeze property to true. Intercepting HTTP requests to display a custom overlay during Ajax calls (see app/directives/wcOverlay.js) Custom animations using the GreenSock library (see app/animations/listAnimations.js) Creating custom AngularJS animations using CSS (see Content/animations.css) JavaScript patterns for defining controllers, services/factories, directives, filters, and more (see any JavaScript file in the app folder) Card View and List View display of data (see app/views/customers/customers.html and app/controllers/customers/customersController.js) Using AngularJS validation functionality (see app/views/customerEdit.html, app/controllers/customerEditController.js, and app/directives/wcUnique.js) More… Conclusion I’ll be enhancing the application even more over time and welcome contributions as well. 
Tony Quinn contributed the initial Node.js/MongoDB code which is very cool to have as a back-end option. Access the standard application here and a version that has custom routing in it here. Additional information about the custom routing can be found in this post.

    Read the article

  • How to know whether mongodb is running on 64 bit mode or 32 bit mode

    - by Jim Thio
    My programmer install mongodb. Then somehow it doesn't work. I run C:\mongod\bin>mongod mongod --help for help and startup options Sat Aug 11 22:57:50 Sat Aug 11 22:57:50 warning: 32-bit servers don't have journaling enabled by def ault. Please use --journal if you want durability. Sat Aug 11 22:57:50 Sat Aug 11 22:57:50 [initandlisten] MongoDB starting : pid=3800 port=27017 dbpat h=/data/db 32-bit host=haryantoi5 Sat Aug 11 22:57:50 [initandlisten] Sat Aug 11 22:57:50 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data Sat Aug 11 22:57:50 [initandlisten] ** see http://blog.mongodb.org/post/13 7788967/32-bit-limitations Sat Aug 11 22:57:50 [initandlisten] ** with --journal, the limit is lower Sat Aug 11 22:57:50 [initandlisten] Sat Aug 11 22:57:50 [initandlisten] db version v2.0.7-rc1, pdfile version 4.5 Sat Aug 11 22:57:50 [initandlisten] git version: 9efe4cce272373b52b96de1309c1fbf 0c984305f Sat Aug 11 22:57:50 [initandlisten] build info: windows sys.getwindowsversion(ma jor=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB _VERSION=1_42 Sat Aug 11 22:57:50 [initandlisten] options: {} ************** Unclean shutdown detected. Please visit http://dochub.mongodb.org/core/repair for recovery instructions. ************* Sat Aug 11 22:57:50 [initandlisten] exception in initAndListen: 12596 old lock f ile, terminating Sat Aug 11 22:57:50 dbexit: Sat Aug 11 22:57:50 [initandlisten] shutdown: going to close listening sockets.. . Sat Aug 11 22:57:50 [initandlisten] shutdown: going to flush diaglog... Sat Aug 11 22:57:50 [initandlisten] shutdown: going to close sockets... Sat Aug 11 22:57:50 [initandlisten] shutdown: waiting for fs preallocator... Sat Aug 11 22:57:50 [initandlisten] shutdown: closing all files... Sat Aug 11 22:57:50 [initandlisten] closeAllFiles() finished Sat Aug 11 22:57:50 dbexit: really exiting now It seems that mongod is running on 32 bit. I have a 64 bit computer and I want to run mongodb in 64 bit enviroment. How do I do so?
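
    The startup log above already answers part of this: "32-bit host" and the 2 GB warning mean the 32-bit mongod build is installed, so installing the 64-bit download is the fix (the "old lock file" error is a separate, unclean-shutdown issue). A quick sketch to confirm the build from the mongo shell:

        db.serverBuildInfo().bits      // 32 or 64
        db.serverBuildInfo().version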

    Read the article

  • JPA and NoSQL using EclipseLink - MongoDB supported

    - by arungupta
    EclipseLink 2.4 has added JPA support for NoSQL databases; MongoDB and Oracle NoSQL are the first ones to make the cut. Support for other NoSQL databases can be added by providing an EclipseLink EISPlatform class and a JCA adapter. A Java class can be mapped to a NoSQL datasource using the @NoSQL annotation or the <no-sql> XML element. Even a subset of JPQL and the Criteria API is supported, depending on the NoSQL database's query support. The connection properties are specified in "persistence.xml". A complete sample showing how the JPA annotations are mapped and how @NoSQL is used is explained here. The MongoDB version of the source code can also be checked out from the SVN repository. EclipseLink 2.4 is scheduled to be released with Eclipse Juno in June 2012, and the complete set of supported features is described on their wiki. The milestone and nightly builds are already available. Do you want to try it with GlassFish and let us know?

    Read the article

  • Random MongoDb Syntax: Updates

    - by Liam McLennan
    I have a MongoDb collection called tweets. Each document has a property system_classification. If the value of system_classification is ‘+’ I want to change it to ‘positive’. For a regular relational database the query would be: update tweets set system_classification = 'positive' where system_classification = '+' The MongoDb equivalent is: db.tweets.update({system_classification: '+'}, {$set: {system_classification:'positive'}}, false, true) Parameter by parameter:
        { system_classification: '+' } : the first parameter identifies the documents to select
        { $set: { system_classification: 'positive' } } : the second parameter is an operation ($set) and the parameter to that operation, {system_classification: 'positive'}
        false : the third parameter indicates if this is a regular update or an upsert (true for upsert)
        true : the final parameter indicates if the operation should be applied to all selected documents (or just the first)

    Read the article

  • Using MongoDB + Redis + Apache on the same server in production?

    - by Dayson
    I intend to launch my web app on an 8 GB VPS. It uses MongoDB + Redis for storage/caching and Apache + PHP-FPM for serving requests. Could there be any issues with running Mongo + Redis + Apache on the same server? Would it make more sense to set up 2 x 4 GB VPS servers and keep Mongo on one and Redis + Apache on the other? Or should I just start with one server and worry about scaling horizontally later, by dedicating the existing server to Mongo in the future (due to its large RAM) and moving the web servers onto multiple smaller VPSes?

    Read the article
