Search Results

Search found 42428 results on 1698 pages for 'database query'.

  • Using MongoDB with Ruby On Rails and the Mongomapper plugin

    - by Micke
    Hello, I am currently trying to learn Ruby on Rails, as I am a long-time PHP developer, and I am building my own community-style site. I have come pretty far and have made the user models and such using MySQL. Then I heard of MongoDB, looked into it a bit more, and found it quite nice, so I have set it up and am using MongoMapper for the connection between Rails and MongoDB. I am now using it for the News page on the site. I also have a profile page for every user, which includes a guestbook so other users can visit their profile and write a short message to them. My thought now is to change the user models from MySQL to MongoDB. Here is how the models for each user are currently set up.

    The User model:

        class User < ActiveRecord::Base
          has_one :guestbook, :class_name => "User::Guestbook"
        end

    The Guestbook model:

        class User::Guestbook < ActiveRecord::Base
          belongs_to :user
          has_many :posts, :class_name => "User::Guestbook::Posts", :foreign_key => "user_id"
        end

    And then the Guestbook posts model:

        class User::Guestbook::Posts < ActiveRecord::Base
          belongs_to :guestbook, :class_name => "User::Guestbook"
        end

    I divided it like this for my own convenience, but now that I am going to migrate to MongoDB I am not sure how to lay out the collections. I would like each user to be stored as a single document, with all the guestbook entries embedded inside it, since MongoDB supports EmbeddedDocument. That way I would have just one place to look for each user, instead of the three tables I need now just to have a guestbook. So my thought is to set it up like this:

    The User model:

        class User
          include MongoMapper::Document
          one :guestbook, :class_name => "User::Guestbook"
        end

    The Guestbook model:

        class User::Guestbook
          include MongoMapper::EmbeddedDocument
          belongs_to :user
          many :posts, :class_name => "User::Guestbook::Posts", :foreign_key => "user_id"
        end

    And then the Guestbook posts model:

        class User::Guestbook::Posts
          include MongoMapper::EmbeddedDocument
          belongs_to :guestbook, :class_name => "User::Guestbook"
        end

    But I can think of one problem: when I only want to fetch basic user information, such as a nickname and a birth date, the system will still have to fetch all of that user's guestbook posts. And if each user has, say, a thousand posts in the guestbook, that is a lot to load. Or am I wrong? Do you think I should do it another way? Thanks in advance, and sorry if I am hard to understand; English is not my first language. :)

  • MySql, InnoDB & Null Values

    - by pws5068
    Formerly I was using the MyISAM storage engine for MySQL, and I had defined the combination of three fields as unique. Now I have switched to InnoDB, which I assume caused this problem, and now NULL != NULL. So for the following table: ID (Auto) | Field_A | Field_B | Field_C, I can insert (Field_A, Field_B, Field_C) values (1, 2, NULL), (1, 2, NULL), (1, 2, NULL) infinitely many times. How can I prevent this behavior?
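
    One way to get the uniqueness described above, sketched below with hypothetical column types, is to forbid NULL in the indexed columns and store a sentinel value (for example 0 or an empty string) instead, so that every row participates fully in the unique index:

        -- MySQL / InnoDB sketch (hypothetical types): make the third column
        -- NOT NULL with a sentinel default so "missing" values still collide
        -- in the unique index.
        CREATE TABLE example_rows (
            ID      INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            Field_A INT NOT NULL,
            Field_B INT NOT NULL,
            Field_C INT NOT NULL DEFAULT 0,   -- sentinel instead of NULL
            UNIQUE KEY uq_abc (Field_A, Field_B, Field_C)
        );

        -- The second attempt now fails with a duplicate-key error:
        INSERT INTO example_rows (Field_A, Field_B, Field_C) VALUES (1, 2, DEFAULT);
        INSERT INTO example_rows (Field_A, Field_B, Field_C) VALUES (1, 2, DEFAULT);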

  • Oracle Trigger to persist value

    - by Jose Jose
    I have a column that I need to guarantee never gets set to anything other than "N". I thought a trigger would be the perfect solution for this, but I can't seem to figure out how to make it so that any time the column gets set to something other than "N", I reset it back to "N". Any pointers? EDIT: I wouldn't want to use a constraint, because the application that will potentially change the value to "Y" is outside of my control, and I don't want it to get errors when it sets the column to "Y"; I just want to passively set it back to "N" without fanfare.
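
    If a trigger is acceptable, a minimal sketch (with a hypothetical table my_table and column flag_col) is a BEFORE INSERT OR UPDATE row trigger that quietly overwrites whatever value is coming in, without raising any error back to the application:

        -- Oracle PL/SQL sketch (hypothetical names): silently force the flag
        -- back to 'N' before the row is written.
        CREATE OR REPLACE TRIGGER trg_force_flag_n
        BEFORE INSERT OR UPDATE OF flag_col ON my_table
        FOR EACH ROW
        BEGIN
            IF :NEW.flag_col IS NULL OR :NEW.flag_col <> 'N' THEN
                :NEW.flag_col := 'N';
            END IF;
        END;
        /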

  • Modifying SQL Server Schema Collection

    - by Mevdiven
    SQL Server's XML Schema Collection is an interesting concept, and I find it very useful when designing dynamic data content. However, as I work my way through implementing schema collections, I find them very difficult to maintain. The schema collection DDL only allows you to CREATE a collection and to ALTER it by adding nodes to existing schemas:

        CREATE XML SCHEMA COLLECTION [ <relational_schema>. ]sql_identifier AS 'XSD Content'
        ALTER XML SCHEMA COLLECTION [ <relational_schema>. ]sql_identifier ADD 'Schema Component'

    When you want to remove any node from a schema, and the schema collection is assigned to a table column, you have to issue the following DDL:

    1. Alter the table to remove the schema collection association from that column.
    2. Drop the schema collection object.
    3. Re-create the schema collection.
    4. Alter the table column to re-associate the schema collection with that column.

    This is a pain when it comes to 100+ schemas in a collection. You also have to re-create the XML indexes all over again, if there are any. Any solutions, suggestions, or tricks to make editing schema collection objects easier?
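
    For reference, this is the cycle described above sketched in T-SQL with hypothetical object names (the XSD body is elided); any XML indexes on the column have to be dropped first and rebuilt at the end:

        -- T-SQL sketch, hypothetical names.
        -- 0. Drop any XML indexes on the column; they must be rebuilt at the end.
        -- 1. Detach the collection from the typed xml column.
        ALTER TABLE dbo.Orders ALTER COLUMN OrderXml xml;
        -- 2. Drop the collection.
        DROP XML SCHEMA COLLECTION dbo.OrderSchemas;
        -- 3. Re-create it, with the unwanted components removed from the XSD.
        CREATE XML SCHEMA COLLECTION dbo.OrderSchemas AS N'<xsd:schema ...> ... </xsd:schema>';
        -- 4. Re-associate the collection with the column, then rebuild the XML indexes.
        ALTER TABLE dbo.Orders ALTER COLUMN OrderXml xml(dbo.OrderSchemas);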

  • What are best practices for collecting, maintaining and ensuring accuracy of a huge data set?

    - by Kyle West
    I am posing this question looking for practical advice on how to design a system. Sites like amazon.com and Pandora have and maintain huge data sets to run their core business. For example, Amazon (and every other major e-commerce site) has millions of products for sale, along with images of those products, pricing, specifications, and so on. Ignoring the data coming in from third-party sellers and the user-generated content, all that "stuff" had to come from somewhere and is maintained by someone, and it is incredibly detailed and accurate. How do they do it? Is there just an army of data-entry clerks, or have they devised systems to handle the grunt work? My company is in a similar situation. We maintain a huge (tens of millions of records) catalog of automotive parts and the cars they fit. We've been at it for a while now and have come up with a number of programs and processes to keep our catalog growing and accurate; however, it seems that to grow the catalog to x items we need to grow the team to y. I need to find some ways to increase the efficiency of the data team, and hopefully I can learn from the work of others. Any suggestions are appreciated; even better would be links to content I could spend some serious time reading. Thanks! Kyle

  • Replication: SQL Server 2008 Publisher with SQL Server Express 2005 Subscriber

    - by Jeremy
    Here is the setup: a SQL Server 2008 Enterprise server with a merge publication, and a SQL Server 2005 Express subscriber with a pull subscription. There is no web or FTP setup; this is direct merge replication. Using the RMO objects from C#, I get a "class cannot be found" COM error when accessing the MergePullSubscription.SynchronizationAgent property. I've tried both the 2008 RMO DLLs (version 10) and the 2005 RMO DLLs (version 9). When trying to use replmerge.exe, I get the following:

        2010-04-10 04:12:05.263 Microsoft SQL Server Merge Agent 9.00.1399.06
        2010-04-10 04:12:05.294 Copyright (c) 2000 Microsoft Corporation
        2010-04-10 04:12:05.294
        2010-04-10 04:12:05.294 The timestamps prepended to the output lines are expressed in terms of UTC time.
        2010-04-10 04:12:05.294 User-specified agent parameter values:
            -Publisher SUN -PublisherDB PRIMROSE -PublisherSecurityMode 1
            -Publication PRIMROSE -Distributor SUN -DistributorSecurityMode 1
            -Subscriber PVILLE\SQLEXPRESS -SubscriberSecurityMode 1
            -SubscriberDB PRIMROSE -SubscriptionType 1
            -DistributorLogin sa -DistributorPassword ********** -DistributorSecurityMode 0
            -PublisherLogin sa -PublisherPassword ********** -PublisherSecurityMode 0
            -SubscriberLogin sa -SubscriberPassword ********** -SubscriberSecurityMode 0
        2010-04-10 04:12:05.325 Connecting to Subscriber 'PVILLE\SQLEXPRESS'
        2010-04-10 04:12:05.481 Connecting to Distributor 'SUN'
        2010-04-10 04:12:05.513 The version of SQL Server running at the Distributor (10.0.2531) is not compatible with the version of SQL Server running at the Subscriber (9.00.1399).
        2010-04-10 04:12:05.513 Category:NULL
        Source: Merge Process
        Number: -2147200979
        Message: The version of SQL Server running at the Distributor (10.0.2531) is not compatible with the version of SQL Server running at the Subscriber (9.00.1399).

    Any ideas?

  • Starting out NLP - Python + large data set

    - by pencilNero
    Hi, I've been wanting to learn Python and do some NLP, so I have finally gotten around to starting. I downloaded the English Wikipedia mirror for a nice, chunky dataset to start on, and I have been playing around a bit; at this stage I am just getting some of it into a SQLite database (I haven't worked with databases in the past, unfortunately). But I'm guessing SQLite is not the way to go for a full-blown NLP project (or experiment :). What sort of things should I look at? HBase (and Hadoop) seem interesting; I guess I could run them in Java, prototype in Python, and maybe migrate the really slow bits to Java. Alternatively I could just run MySQL, but the dataset is 12 GB, and I wonder if that will be a problem. I also looked at Lucene, but I am not sure how I would get that to work (other than breaking the wiki articles into chunks). What comes to mind for a really flexible NLP platform? (I don't really know at this stage WHAT I want to do; I just want to learn large-scale language analysis, to be honest.) Many thanks.

  • apostrophe in mysql/php

    - by fusion
    I'm trying to learn PHP/MySQL. Inserting data into MySQL works fine, but inserting data that contains an apostrophe generates an error. I tried using mysql_real_escape_string, yet this doesn't work. I would appreciate any help.

        <?php
        include 'config.php';
        echo "Connected <br />";

        $auth = $_POST['author'];
        $quo  = $_POST['quote'];

        $author = mysql_real_escape_string($auth);
        $quote  = mysql_real_escape_string($quo);

        //**************************
        // inserting data
        $sql = "INSERT INTO Quotes (vauthor, cquotes) VALUES ($author, $quote)";

        if (!mysql_query($sql, $conn)) {
            die('Error: ' . mysql_error());
        }
        echo "1 record added";
        ...

    What am I doing wrong?
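
    One likely cause, noted here as an assumption since the table definition is not shown, is that the VALUES list has no single quotes around the strings, so MySQL tries to parse the text itself as SQL. With quoting (and the escaping already done in PHP), the statement MySQL receives should look roughly like this, with made-up sample data:

        -- Made-up example: string values wrapped in single quotes, with any
        -- embedded apostrophe escaped so it does not terminate the literal.
        INSERT INTO Quotes (vauthor, cquotes)
        VALUES ('Flann O\'Brien', 'A pint of plain is your only man.');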

  • Invoice Discount: Negative line items vs Internal properties

    - by FreshCode
    Should discounts on invoice items and on entire invoices be negative line items, or separate properties of an invoice? In a similar question, "Should I incorporate list of fees/discounts into an order class or have them be itemlines", the asker focuses more on orders than invoices (which are a slightly different business entity). There, discount is proposed to be kept separate from the order items, since it is not equivalent to a fee or a product and may have different reporting requirements. Hence, discount should not simply be a negative line item. Previously I have successfully used negative line items to clearly indicate and calculate discounts, but this feels inflexible and inaccurate from a business perspective. Now I am opting to add a discount to each line item, along with an invoice-wide discount. Is this the right way to do it? Should each item have its own discount amount and percentage?

    Domain Model Code Sample

    This is what my domain model, which maps to a SQL repository, looks like:

        public class Invoice
        {
            public int ID { get; set; }
            public Guid JobID { get; set; }
            public string InvoiceNumber { get; set; }
            public Guid UserId { get; set; }             // user who created it
            public DateTime Date { get; set; }
            public decimal DiscountPercent { get; set; } // all-lines discount %?
            public decimal DiscountAmount { get; set; }  // all-lines discount $?
            public LazyList<InvoiceLine> InvoiceLines { get; set; }
            public LazyList<Payment> Payments { get; set; } // for payments received
            public bool IsVoided { get; set; }           // Invoices are immutable.
                                                         // To change: void -> new invoice.
            public decimal Total
            {
                get
                {
                    return (1.0M - DiscountPercent) * InvoiceLines.Sum(i => i.LineTotal) - DiscountAmount;
                }
            }
        }

        public class InvoiceLine
        {
            public int ID { get; set; }
            public int InvoiceID { get; set; }
            public string Title { get; set; }
            public decimal Quantity { get; set; }
            public decimal LineItemPrice { get; set; }
            public decimal DiscountPercent { get; set; } // line discount %?
            public decimal DiscountAmount { get; set; }  // line discount amount?
            public decimal LineTotal
            {
                get
                {
                    return (1.0M - DiscountPercent) * (this.Quantity * this.LineItemPrice) - DiscountAmount;
                }
            }
        }

  • How to implement Object Databases in Asp.net MVC(c#)

    - by amexn
    I started my project in ASP.NET MVC (C#) with MSSQL 2005, and I want to implement an object database in it. While searching on Google I found "MongoDB". I don't have enough knowledge of object databases or of which one is best suited to use alongside MSSQL 2005. Please suggest a good example or reference covering an object database implementation in an ASP.NET MVC application.

  • can't use db:seed in rails

    - by Stacia
    I updated my Rails gem to 2.3.5, but I keep getting this error when running db:seed:

        $ rake db:seed --trace
        (in c:/Documents and Settings/Owner/workspace/thepatstudio)
        rake aborted!
        Don't know how to build task 'db:seed'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1728:in `[]'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2050:in `invoke_task'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31
        c:/Ruby/bin/rake:19:in `load'
        c:/Ruby/bin/rake:19

        ~/workspace/thepatstudio (master)
        $ rails --version
        Rails 2.3.5

    My environment.rb has the correct Rails version in it, and I also ran rake rails:update. What can I do?

  • Insert into a star-schema

    - by shaun
    I've read a lot about star schemas, fact/dimension tables, and SELECT statements for quickly reporting on the data, but the matter of data entry into a star schema remains unclear to me. How does one "theoretically" enter data into a star-schema database while maintaining the fact table? Is a series of INSERT INTO statements inside a giant stored procedure with 20 parameters my only option (and how does the fact table get populated)? Many thanks.
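
    For what it's worth, here is a minimal sketch of the usual load pattern, using hypothetical dimension and fact tables for a sales schema: each dimension row is inserted (or located) first, and the fact row is then inserted by looking up the surrogate keys. The exact upsert syntax for dimensions varies by DBMS (MERGE, INSERT ... ON DUPLICATE KEY UPDATE, etc.); a plain insert is shown for brevity.

        -- Sketch with hypothetical tables: load the dimension member first.
        INSERT INTO dim_customer (customer_code, customer_name)
        VALUES ('C042', 'Acme Ltd');

        -- The fact row stores only surrogate keys plus the measures, so each
        -- dimension key is resolved with a lookup at load time.
        INSERT INTO fact_sales (customer_key, date_key, product_key, quantity, amount)
        SELECT c.customer_key, d.date_key, p.product_key, 3, 59.97
        FROM dim_customer c
        JOIN dim_date     d ON d.calendar_date = '2010-04-10'
        JOIN dim_product  p ON p.product_code  = 'P-100'
        WHERE c.customer_code = 'C042';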

  • Where can I download a free, text-rich dataset?

    - by blee
    I want to do a bit of lightweight testing and benchmarking for full-text search, so the dataset should have these qualities:

    - 10,000 to 100,000 records
    - a good dispersion of English words
    - CSV or Excel format, i.e. I don't want to access it via an API

    Something like books or movies with title and description fields would be perfect. I browsed the UCI Machine Learning Repo, but it was too number-oriented. Thanks!

  • Don't Understand Sql Server Error

    - by Jonathan Wood
    I have a table of users (User) and need to create a new table to track which users have referred other users. So, basically, I'm creating a many-to-many relationship between rows in the same table. I'm trying to create a table UserReferrals with the columns UserId and UserReferredId. I made both columns a compound primary key, and both columns are foreign keys that link to User.UserID. Since deleting a user should also delete the relationship, I set both foreign keys to cascade deletes: when a user is deleted, any related rows in UserReferrals should also be deleted. But this gives me the message:

        'User' table saved successfully
        'UserReferrals' table
        Unable to create relationship 'FK_UserReferrals_User'.
        Introducing FOREIGN KEY constraint 'FK_UserReferrals_User' on table 'UserReferrals' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints.
        Could not create constraint. See previous errors.

    I don't get this error. A cascading delete only deletes the row with the foreign key, right? So how can it cause cycling cascade paths? Thanks for any tips.
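
    The error comes from the two foreign keys together: deleting a User row could reach UserReferrals along two cascade paths (via UserId and via UserReferredId), and SQL Server refuses that even when no real cycle exists. A common workaround, sketched below assuming the table names given above, is to cascade on one key only and clear the other side yourself (in a trigger or in application code) before deleting the user:

        -- T-SQL sketch: cascade on one path, leave the other as NO ACTION.
        CREATE TABLE dbo.UserReferrals (
            UserId         INT NOT NULL,
            UserReferredId INT NOT NULL,
            CONSTRAINT PK_UserReferrals PRIMARY KEY (UserId, UserReferredId),
            CONSTRAINT FK_UserReferrals_Referrer FOREIGN KEY (UserId)
                REFERENCES dbo.[User] (UserID) ON DELETE CASCADE,
            CONSTRAINT FK_UserReferrals_Referred FOREIGN KEY (UserReferredId)
                REFERENCES dbo.[User] (UserID)   -- NO ACTION (the default)
        );

        -- Before deleting a user (@UserId being the user to delete), remove the
        -- rows the non-cascading key points at, then delete the user row:
        DELETE FROM dbo.UserReferrals WHERE UserReferredId = @UserId;
        DELETE FROM dbo.[User] WHERE UserID = @UserId;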

  • Large scale storage for incrementally-appended documents?

    - by Ben Dilts
    I need to store hundreds of thousands (right now, potentially many millions) of documents that start out empty and are appended to frequently, but are never otherwise updated or deleted. These documents are not interrelated in any way and just need to be accessed by some unique ID. Read accesses cover some subset of the document, which almost always starts midway through at some indexed location (e.g. "document #4324319, save #53 to the end"). These documents start very small, at several KB. They typically reach a final size around 500 KB, but many reach 10 MB or more. I'm currently using MySQL (InnoDB) to store these documents. Each of the incremental saves is just dumped into one big table with the document ID it belongs to, so reading part of a document looks like "select * from saves where document_id=14 and save_id >= 53 order by save_id", and the results are then manually concatenated together in code. Ideally, I'd like the storage solution to be easily horizontally scalable, with redundancy across servers (e.g. each document stored on at least 3 nodes) and easy recovery of crashed servers. I've looked at CouchDB and MongoDB as possible replacements for MySQL, but I'm not sure that either of them makes a whole lot of sense for this particular application, though I'm open to being convinced. Any input on a good storage solution?
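
    For reference, a sketch of the MySQL layout described above (column names assumed): a composite primary key on (document_id, save_id) keeps the "from save N to the end" reads as a single index range scan.

        -- Sketch of the current layout, column names assumed.
        CREATE TABLE saves (
            document_id BIGINT UNSIGNED NOT NULL,
            save_id     INT UNSIGNED    NOT NULL,
            body        MEDIUMBLOB      NOT NULL,
            PRIMARY KEY (document_id, save_id)   -- supports range reads by save_id
        ) ENGINE=InnoDB;

        -- "document #4324319, save #53 to the end":
        SELECT body FROM saves
        WHERE document_id = 4324319 AND save_id >= 53
        ORDER BY save_id;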

  • Product table, many kinds of product, each product has many parameters

    - by StoneHeart
    Hi, I don't have much experience with table design. My goal is a product table (or tables) that must be designed to meet the requirements below:

    - It supports many kinds of products (TV, Phone, PC, ...).
    - Each kind of product has a different set of parameters. For example, a Phone will have Color, Size, Weight, OS..., while a PC will have CPU, HDD, RAM...
    - The set of parameters must be dynamic: you can add or edit any parameter you like.

    I don't want to make a separate table for each kind of product, so I need help finding a correct solution. Thanks.
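
    One common way to keep the parameter set dynamic without a table per product kind is an entity-attribute-value style layout, sketched below with assumed table and column names; the trade-off is that validation and reporting queries become more work:

        -- EAV-style sketch (assumed names): parameters are rows, not columns,
        -- so new parameters can be added without altering any table.
        CREATE TABLE product_type (
            type_id   INT PRIMARY KEY,
            type_name VARCHAR(50) NOT NULL          -- 'TV', 'Phone', 'PC', ...
        );

        CREATE TABLE parameter (
            parameter_id   INT PRIMARY KEY,
            type_id        INT NOT NULL REFERENCES product_type (type_id),
            parameter_name VARCHAR(50) NOT NULL     -- 'Color', 'CPU', 'RAM', ...
        );

        CREATE TABLE product (
            product_id INT PRIMARY KEY,
            type_id    INT NOT NULL REFERENCES product_type (type_id),
            name       VARCHAR(100) NOT NULL
        );

        CREATE TABLE product_parameter_value (
            product_id   INT NOT NULL REFERENCES product (product_id),
            parameter_id INT NOT NULL REFERENCES parameter (parameter_id),
            value        VARCHAR(255),
            PRIMARY KEY (product_id, parameter_id)
        );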

  • How to implement Object Databases in Asp.net MVC

    - by amexn
    I started my project in ASP.NET MVC (C#) with SQL Server 2005, and I want to implement an object database in it. While searching on Google I found "MongoDB" and db4o. I don't have enough knowledge of object databases or of which one is best suited to use alongside SQL Server 2005. Please suggest a good example or reference covering an object database implementation in an ASP.NET MVC application.

  • Invoice & Invoice lines: How do you store customer address information?

    - by elviejo
    Hi, I'm developing an invoicing application. The general idea is to have two tables:

        Invoice (ID, Date, CustomerAddress, CustomerState, CustomerCountry, VAT, Total);
        InvoiceLine (Invoice_ID, ID, Concept, Units, PricePerUnit, Total);

    As you can see, this basic design leads to a lot of repetition across records, since the same client will have the same address, state, and country on every invoice. So the alternative is to have an Address table and then a relationship Address <- Invoice. However, I think an invoice is an immutable document and should be stored just the way it was first issued. Customers sometimes change their addresses or states, and if the address comes from an Address catalog, changing it there would change all the previously issued invoices. So what is your experience? Where is the customer address stored for an invoice: in the Invoice table, an Address table, or something else? Can you provide pointers to a book, article, or document where this is discussed in further detail?
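
    One common compromise, sketched below with assumed column types and an assumed Customer catalog table, is to keep the address in the catalog for data entry but copy the relevant fields onto the invoice row at the moment it is issued, so the invoice remains an immutable snapshot while the catalog stays editable:

        -- Sketch (assumed names/types): the address lives in a catalog for
        -- editing, but is copied onto the invoice when it is issued, so later
        -- changes to the customer record never alter an already-issued invoice.
        CREATE TABLE Invoice (
            ID              INT PRIMARY KEY,
            CustomerID      INT NOT NULL REFERENCES Customer (ID), -- who it was for
            InvoiceDate     DATE NOT NULL,
            CustomerName    VARCHAR(100) NOT NULL,   -- snapshot columns, frozen
            CustomerAddress VARCHAR(200) NOT NULL,
            CustomerState   VARCHAR(50)  NOT NULL,
            CustomerCountry VARCHAR(50)  NOT NULL,
            VAT             DECIMAL(5,2)  NOT NULL,
            Total           DECIMAL(12,2) NOT NULL
        );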

  • Synchronizing an ERWin model with a Visual Studio 2008 GDR 2/2010 db project

    - by Grant Back
    I am looking for options to get our vast collection of DB objects across many databases into source control (TFS 2010). Once we succeed there, we will work toward generating our alter scripts for a particular DB change via TFS Build. The problem is that our data architecture group is responsible for maintaining the DB objects (excluding stored procedures), and they work within a model-centric process, via ERWin. What this means is that they maintain the DBs via ERWin models and generate alter scripts from them that are used to release changes. In order to achieve our goal of getting the DB objects (not just the ERWin models) into TFS, I believe the best option is to do this via Visual Studio DB projects. From what I can tell, there is very little urgency for CA to continue supporting an integration between ERWin and Visual Studio, which no longer works as of Visual Studio 2008 DB Ed. GDR. If I have been misled in this regard, please feel free to set me straight. One potential solution is to:

    1. Perform changes in the ERWin model.
    2. Take the alter script generated from ERWin and import it into the appropriate Visual Studio DB project, updating the objects in the DB project.
    3. Check the changed objects in the DB project into TFS.
    4. Let TFS Build execute to generate the alter scripts that will be used to push the changes through our release process.

    My question is: is this solution viable, or are there any other options?
