Search Results

Search found 61944 results on 2478 pages for 'text database'.


  • Exceptions in ASP.NET MVC

    - by George
    Hello guys, I'm here again with another question about MVC. Here is the deal: I have a simple table/class with an Id and a Name. Names are supposed to be unique, and are modeled that way in the DB. I created my controller and everything works fine. But if I try to insert a name that already exists, an exception should be thrown. I just can't find the correct kind of exception and its namespace. The error must be coming from the DB, so... Any ideas? Thanks

    Read the article

  • Syntax Error with MySQL 5.1 Server

    - by Mr.Z
    I am trying to connect to a server remotely using the command-line client window. I am using MySQL 5.1 and I do not know why I am getting a syntax error. If you can help me, that would be much appreciated. The username is user, the password is pass, and the hostname is example.com. I have tried:

        mysql> -u user -h example.com -p ;
        mysql> -h example.com -u user -p ;

    I have looked at the reference manual and other versions of remote connection with Server 5.1, but I can't see the syntax error.

    Read the article

  • How to figure out which record has been deleted in an efficient way?

    - by janetsmith
    Hi, I am working on an in-house ETL solution, from db1 (Oracle) to db2 (Sybase). We need to transfer data incrementally (Change Data Capture?) into db2. I have only read access to the tables, so I can't create any table or trigger in the Oracle db1. The challenge I am facing is: how do I detect record deletion in Oracle? The solution I can think of is to use an additional standalone/embedded db (e.g. Derby, H2, etc.). This db contains 2 tables, namely old_data and new_data. old_data contains the primary key field from the table of interest in Oracle. Every time the ETL process runs, the new_data table is populated with the primary key field from the Oracle table. After that, I run the following SQL command to get the deleted rows:

        SELECT old_data.id FROM old_data
        WHERE old_data.id NOT IN (SELECT new_data.id FROM new_data)

    I think this will be a very expensive operation when the volume of data becomes very large. Do you have any better idea of doing this? Thanks.
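
    A hedged sketch of one alternative to the NOT IN check, assuming the old_data/new_data staging tables described above (ideally with an index on each id column): an anti-join is often cheaper than NOT IN over a large subquery.

        -- Same "deleted rows" check written as a LEFT JOIN anti-join:
        -- rows in old_data with no matching id in new_data
        SELECT o.id
        FROM old_data o
        LEFT JOIN new_data n ON n.id = o.id
        WHERE n.id IS NULL;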

    Read the article

  • Optimizing an embedded SELECT query in MySQL

    - by Crazy Serb
    Ok, here's a query that I am running right now on a table that has 45,000 records and is 65MB in size... and is just about to get bigger and bigger (so I gotta think of the future performance as well here):

        SELECT count(payment_id) as signup_count, sum(amount) as signup_amount
        FROM payments p
        WHERE tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND completed > 0
          AND tm_completed IS NOT NULL
          AND member_id NOT IN (SELECT p2.member_id FROM payments p2
                                WHERE p2.completed = 1
                                  AND p2.tm_completed < '2009-05-01'
                                  AND p2.tm_completed IS NOT NULL
                                GROUP BY p2.member_id)

    And as you might or might not imagine - it chokes the MySQL server to a standstill... What it does is - it simply pulls the number of new users who signed up, have at least one "completed" payment, tm_completed is not empty (as it is only populated for completed payments), and (the embedded SELECT) that member has never had a "completed" payment before - meaning he's a new member (just because the system does rebills and whatnot, and this is the only way to sort of differentiate between an existing member who just got rebilled and a new member who got billed for the first time). Now, is there any possible way to optimize this query to use fewer resources, and to stop bringing my MySQL server to its knees...? Am I missing any info to clarify this any further? Let me know... EDIT: Here are the indexes already on that table:

        PRIMARY       PRIMARY  46757  payment_id
        member_id     INDEX    23378  member_id
        payer_id      INDEX    11689  payer_id
        coupon_id     INDEX    1      coupon_id
        tm_added      INDEX    46757  tm_added, product_id
        tm_completed  INDEX    46757  tm_completed, product_id
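
    A hedged sketch of one common rewrite, assuming the payments table described above: the NOT IN subquery is expressed as a self anti-join, which MySQL often plans better; an index covering (member_id, completed, tm_completed) would help either form.

        -- Anti-join version: keep only payments whose member has no earlier completed payment
        SELECT COUNT(p.payment_id) AS signup_count,
               SUM(p.amount)       AS signup_amount
        FROM payments p
        LEFT JOIN payments prev
               ON prev.member_id = p.member_id
              AND prev.completed = 1
              AND prev.tm_completed IS NOT NULL
              AND prev.tm_completed < '2009-05-01'
        WHERE p.tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND p.completed > 0
          AND p.tm_completed IS NOT NULL
          AND prev.member_id IS NULL;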

    Read the article

  • Are there any free online movie information APIs?

    - by Gary Willoughby
    For music there is the Gracenote CDDB SDK, etc., but does an online service exist for getting information about movies? The only solution I can see at the minute is querying IMDB and scraping the page. The problem I have is that I have a list of film titles and I want to retrieve things like the plot, director, cast, release date, DVD cover art, etc.

    Read the article

  • SQL Server: pad a string column value to 5 characters

    - by mrp
    Scenario: I have a table1 (col1 char(5)). A value in table1 may be '001', '01' or '1'. Requirement: whatever the value in col1, I need to retrieve it as a 5-character string, padded with leading '0's to make it 5 characters long. Technique I applied:

        select right(('00000' + col1), 5) from table1;

    I don't see any reason why it shouldn't work, but it didn't. Can anyone help me achieve the desired result?
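
    A hedged explanation and sketch: char(5) values are blank-padded on the right, so '00000' + col1 ends in those trailing spaces and RIGHT(..., 5) simply returns the original value. Trimming before padding usually gives the intended result.

        -- Trim the trailing spaces of the char(5) value before padding
        SELECT RIGHT('00000' + RTRIM(col1), 5) FROM table1;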

    Read the article

  • T-SQL MERGE - finding out which action it took

    - by IanC
    I need to know if a MERGE statement performed an INSERT. In my scenario, the insert is either 0 or 1 rows. Test code:

        DECLARE @t table (C1 int, C2 int)
        DECLARE @C1 INT, @C2 INT
        SET @C1 = 1
        SET @C2 = 1
        MERGE @t AS tgt
        USING (SELECT @C1, @C2) AS src (C1, C2)
           ON (tgt.C1 = src.C1)
        WHEN MATCHED AND tgt.C2 != src.C2 THEN
            UPDATE SET tgt.C2 = src.C2
        WHEN NOT MATCHED BY TARGET THEN
            INSERT VALUES (src.C1, src.C2)
        OUTPUT deleted.*, $action, inserted.*;
        SELECT inserted.*

    The last line doesn't compile (no scope, unlike a trigger). I can't get access to $action or the OUTPUT rows. Actually, I don't want any output metadata. How can I do this?
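
    A hedged sketch of one way to read the action without returning result sets to the client, built on the test code above (the @changes table variable and its column names are illustrative): send the OUTPUT rows INTO a table variable and query it afterwards.

        DECLARE @t TABLE (C1 int, C2 int);
        DECLARE @C1 int = 1, @C2 int = 1;
        -- illustrative holder for the OUTPUT rows
        DECLARE @changes TABLE (merge_action nvarchar(10), C1 int, C2 int);

        MERGE @t AS tgt
        USING (SELECT @C1, @C2) AS src (C1, C2)
           ON (tgt.C1 = src.C1)
        WHEN MATCHED AND tgt.C2 != src.C2 THEN
            UPDATE SET tgt.C2 = src.C2
        WHEN NOT MATCHED BY TARGET THEN
            INSERT VALUES (src.C1, src.C2)
        OUTPUT $action, inserted.C1, inserted.C2 INTO @changes (merge_action, C1, C2);

        -- 1 if the MERGE inserted a row, 0 if it only updated or did nothing
        SELECT COUNT(*) AS inserted_rows FROM @changes WHERE merge_action = 'INSERT';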

    Read the article

  • Django: Serializing models in a nested data structure?

    - by Rosarch
    It's easy to serialize models in an iterable:

        def _toJSON(models):
            return serializers.serialize("json", models, ensure_ascii=False)

    What about when I have something more complicated:

        [
            (Model_A_1, [Model_B_1, Model_B_2, Model_B_3]),
            (Model_A_2, [Model_B_3, Model_B_4, Model_B_5, Model_B_59]),
            (Model_A_3, [Model_B_6, Model_B_7]),
        ]

    I tried serializing each model as it was added to the structure, then serializing the whole thing with simplejson.dumps, but that causes the JSON defining each model to be escaped. Is there a better way to do this?

    Read the article

  • Why is there an extra 'Using where' in the execution plan of this query?

    - by user366534
    I am looking at the plan of this query:

        EXPLAIN SELECT * FROM `subscribers`
        WHERE state = 4 AND date_added < '2010-12-23 11:47:45'

    It shows:

        id  select_type  table        type   possible_keys     key               key_len  ref   rows  Extra
        1   SIMPLE       subscribers  range  state_date_added  state_date_added  9        NULL  8     Using where

    Here are the indexes of the table:

        Table        Non_unique  Key_name          Seq_in_index  Column_name    Collation  Cardinality  Sub_part  Packed  Null  Index_type  Comment
        subscribers  0           PRIMARY           1             subscriber_id  A          382039       NULL      NULL          BTREE
        subscribers  0           email_list_id     1             email_address  A          191019       NULL      NULL          BTREE
        subscribers  0           email_list_id     2             list_id        A          382039       NULL      NULL          BTREE
        subscribers  1           FK_list_id        1             list_id        A          10           NULL      NULL          BTREE
        subscribers  1           state_date_added  1             state          A          12           NULL      NULL          BTREE
        subscribers  1           state_date_added  2             date_added     A          8128         NULL      NULL          BTREE

    The last two lines describe the index that is supposed to serve the query. Why does the Extra column say 'Using where'? Even if I fetch only the state and date_added columns, the Extra column says: Using where; Using index. I understand why it says 'Using index', but I don't understand the 'Using where' here.

    Read the article

  • Fluent nHibernate - How to map a non-key column on an association table?

    - by The Matt
    Taking an example that is provided on the Fluent NHibernate website, I need to extend it slightly: I need to add a 'Quantity' column to the StoreProduct table. How would I map this using NHibernate? An example mapping is provided for the given scenario above, but I'm not sure how I would get the Quantity column to map:

        public class StoreMap : ClassMap<Store>
        {
            public StoreMap()
            {
                Id(x => x.Id);
                Map(x => x.Name);
                HasMany(x => x.Employee)
                    .Inverse()
                    .Cascade.All();
                HasManyToMany(x => x.Products)
                    .Cascade.All()
                    .Table("StoreProduct");
            }
        }

    Read the article

  • Django related_name for field clashes

    - by Absolute0
    I am getting a field clash in my models:

        class Visit(models.Model):
            user = models.ForeignKey(User)
            visitor = models.ForeignKey(User)

    Error:

        One or more models did not validate:
        profiles.visit: Accessor for field 'user' clashes with related field 'User.visit_set'. Add a related_name argument to the definition for 'user'.
        profiles.visit: Accessor for field 'visitor' clashes with related field 'User.visit_set'. Add a related_name argument to the definition for 'visitor'.

    What would be a sensible related_name to use on the visitor field? This model basically represents the visits that take place to a particular user's profile. Also, should I replace any of the ForeignKeys with a ManyToManyField? The logic is a bit confusing. Edit: This seems to fix it, but I am unsure if it's what I want. :)

        class Visit(models.Model):
            user = models.ForeignKey(User)
            visitor = models.ForeignKey(User, related_name='visitors')

    Read the article

  • MySQL: storing arbitrary data

    - by Hailwood
    Background: I was asking a question on Stack Overflow about creating tables on the fly, where this conversation ensued:

        "This smells like a terrible idea! In fact, it smells just like this one. What in the world do you want to use this for?" – deceze

        "@deceze: very true. However, how else would you store the contents of these CSV files? They must be stored in MySQL for indexing. The only solid fact about them is that they all have a mobile column with a standard format. The CSV can have an arbitrary number of columns with an arbitrary number of rows. They can (with no exaggeration) range from a single-row, 35-column CSV to an 80k-row, single-column CSV. I am open to other ideas." – Hailwood

        "There are many solutions for this, from attribute-value schemas to JSON storage and NoSQL storage. Open a new question about it. Whatever you do though, don't dynamically create tables!" – deceze

    Question: So my question is, what would you say is the best way to store this data? Do you agree with deceze about not creating dynamic tables?
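
    A hedged sketch of the attribute-value layout mentioned in that exchange, with illustrative table and column names: one row per CSV cell, keyed by upload, row number and column name, so any mix of columns fits without creating tables on the fly.

        -- One record per imported CSV file (illustrative names)
        CREATE TABLE csv_upload (
            upload_id   INT AUTO_INCREMENT PRIMARY KEY,
            uploaded_at DATETIME NOT NULL
        );

        -- One row per cell of the CSV
        CREATE TABLE csv_cell (
            upload_id   INT NOT NULL,
            row_num     INT NOT NULL,
            column_name VARCHAR(64) NOT NULL,
            value       TEXT,
            PRIMARY KEY (upload_id, row_num, column_name),
            FOREIGN KEY (upload_id) REFERENCES csv_upload (upload_id)
        );

    Since the mobile column is the one guaranteed field, it could also be promoted to a real, indexed column rather than living only as cell rows.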

    Read the article

  • Non-normalized association with legacy tables in Rails and ActiveRecord

    - by Thomas Holmström
    I am building a Rails application accessing a legacy system. The data model contains Customers which can have one or more Subscriptions. A Subscription always belongs to one and only one Customer. Though not needed, this association is represented through a join table "subscribes", which does not have an id column:

              Column     |  Type   | Modifiers
        -----------------+---------+-----------
         customer_id     | integer | not null
         subscription_id | integer | not null

    I have this coded as has_and_belongs_to_many declarations in both Customer and Subscription:

        class Customer < ActiveRecord::Base
          has_and_belongs_to_many :subscriptions,
            :join_table => "subscribes",
            :foreign_key => "customer_id",
            :association_foreign_key => "subscription_id"
        end

        class Subscription < ActiveRecord::Base
          has_and_belongs_to_many :customers,
            :join_table => "subscribes",
            :foreign_key => "subscription_id",
            :association_foreign_key => "customer_id"
        end

    The problem I have is that there can only ever be one customer for each subscription, not many, and the join table will always contain at most one row with a given subscription_id. Thus I don't want the association "customers" on a Subscription, which returns an array of (at most one) Customer; I really want the relation "customer", which returns the associated Customer. Is there any way to force ActiveRecord to make this a 1-to-N relation even though the join table itself seems to make it an N-to-M relation? --Thomas

    Read the article

  • How to get the count of another table in a LEFT JOIN

    - by Sinan
    I have multiple tables:

        post
        id  Name
        1   post-name1
        2   post-name2

        user
        id  username
        1   user1
        2   user2

        post_user
        post_id  user_id
        1        1
        2        1

        post_comments
        post_id  comment_id
        1        1
        1        2
        1        3

    I am using a query like this:

        SELECT post.id, post.title, user.id AS uid, username
        FROM `post`
        LEFT JOIN post_user ON post.id = post_user.post_id
        LEFT JOIN user ON user.id = post_user.user_id
        ORDER BY post_date DESC

    It works as intended. However, I would like to get the number of comments for each post too. How can I modify this query to get the count of comments? Any ideas?
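
    A hedged sketch of one common approach, assuming the tables above: join a pre-aggregated comment count per post, so the extra rows in post_comments don't multiply the rest of the result.

        SELECT post.id, post.title, user.id AS uid, username,
               COALESCE(pc.comment_count, 0) AS comment_count
        FROM `post`
        LEFT JOIN post_user ON post.id = post_user.post_id
        LEFT JOIN user ON user.id = post_user.user_id
        LEFT JOIN (
            SELECT post_id, COUNT(*) AS comment_count
            FROM post_comments
            GROUP BY post_id
        ) pc ON pc.post_id = post.id
        ORDER BY post_date DESC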

    Read the article

  • In this example, would Customer or AccountInfo properly be the entity group parent?

    - by Badhu Seral
    In this example, the Google App Engine documentation makes the Customer the entity group parent of the AccountInfo entity. Wouldn't AccountInfo encapsulate Customer rather than the other way around? Normally I would think of an AccountInfo class as including all of the information about the Customer.

        import javax.jdo.annotations.IdGeneratorStrategy;
        import javax.jdo.annotations.PersistenceCapable;
        import javax.jdo.annotations.Persistent;
        import javax.jdo.annotations.PrimaryKey;

        import com.google.appengine.api.datastore.Key;
        import com.google.appengine.api.datastore.KeyFactory;

        @PersistenceCapable
        public class AccountInfo {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;

            public void setKey(Key key) {
                this.key = key;
            }
        }

        // ...

        KeyFactory.Builder keyBuilder = new KeyFactory.Builder(Customer.class.getSimpleName(), "custid985135");
        keyBuilder.addChild(AccountInfo.class.getSimpleName(), "acctidX142516");
        Key key = keyBuilder.getKey();

        AccountInfo acct = new AccountInfo();
        acct.setKey(key);
        pm.makePersistent(acct);

    Read the article

  • When using a HiLo ID generation strategy, what types should be used to hold Ids?

    - by UpTheCreek
    I'm asking this from a C#/NHibernate perspective, but it's generally applicable. The concern is that the HiLo strategy goes through ids pretty quickly, and, for example, a low-record-count table (such as Users) shares the same set of ids as a high-record-count table (such as Comments). So you can potentially get to high numbers quicker than with other strategies. So what do people recommend? Code side: int/uint/long/ulong? DB side: int/bigint? My feeling is to go with longs and bigints, but I would like a sanity check :)

    Read the article

  • MySQL foreign key constraint disappearing

    - by Bramjam
    This is my table:

        /* oefenreeks leerplan */
        CREATE TABLE leerplan_oefenreeks (
            leerplan_oefenreeks_id INT PRIMARY KEY AUTO_INCREMENT NOT NULL,
            leerplan_id            INT NOT NULL,
            oefenreeks_id          INT NOT NULL,
            plaats                 INT NOT NULL
        );

        /* fk */
        ALTER TABLE leerplan_oefenreeks
            ADD CONSTRAINT fk_leerp_oefenr_leerplan FOREIGN KEY (leerplan_id)
            REFERENCES leerplan (leerplan_id) ON DELETE CASCADE;

        ALTER TABLE leerplan_oefenreeks
            ADD CONSTRAINT fk_leerp_oefenr_oefenreeks FOREIGN KEY (oefenreeks_id)
            REFERENCES oefenreeks (oefenreeks_id) ON DELETE CASCADE;

        /* when I execute the next line, my fk_leerp_oefenr_leerplan constraint vanishes/disappears */
        ALTER TABLE leerplan_oefenreeks
            ADD CONSTRAINT un_leerp_oefenr UNIQUE (leerplan_id, oefenreeks_id);

        ALTER TABLE leerplan_oefenreeks
            ADD CONSTRAINT un_leerp_oefenr_plaats UNIQUE (leerplan_id, plaats);

    When I go and check, only 3 constraints exist; fk_leerp_oefenr_leerplan has disappeared. I don't understand why this happens.
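
    A hedged diagnostic sketch, with an illustrative schema name: listing the foreign keys MySQL actually kept makes it easier to see exactly which ALTER statement drops the constraint.

        -- Foreign keys currently defined on the table
        SELECT constraint_name, column_name,
               referenced_table_name, referenced_column_name
        FROM information_schema.key_column_usage
        WHERE table_schema = 'your_schema'              -- illustrative schema name
          AND table_name = 'leerplan_oefenreeks'
          AND referenced_table_name IS NOT NULL;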

    Read the article

  • How to connect 2 MySQL tables with 2 connection strings

    - by denonth
    Hi all, I need to connect 2 tables from 2 MySQL databases that have 2 different connection strings, each on a different server. I have this query:

        cmd = new MySqlCommand(String.Format("INSERT INTO {0} (a,b,c,d) SELECT (a,b,c,d) FROM {1}",
            ConfigSettings.ReadSetting("main_table"),
            ConfigSettings.ReadSetting("main_table")), con);

    Both tables have the same columns. That's why I have only one ConfigSettings.ReadSetting("main_table") for both of them, as they are the same. I have 2 connection strings, each pointing to its own server:

        con.ConnectionString = ConfigurationManager.ConnectionStrings["con1"].ConnectionString;
        con2.ConnectionString = ConfigurationManager.ConnectionStrings["con2"].ConnectionString;

    How can I make this cmd work with 2 different connection strings and the same table name? The table name will change, which is why it is saved in the config.

    Read the article

  • SQLite - ON DUPLICATE KEY UPDATE

    - by Alix Axel
    MySQL has something like this:

        INSERT INTO visits (ip, hits) VALUES ('127.0.0.1', 1)
        ON DUPLICATE KEY UPDATE hits = hits + 1;

    As far as I know this feature doesn't exist in SQLite. What I want to know is whether there is any way to achieve the same effect without having to execute two queries. Also, if this is not possible, which do you prefer: SELECT + (INSERT or UPDATE), or UPDATE (+ INSERT if the UPDATE fails)?
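
    A hedged sketch, assuming ip has a unique index: newer SQLite releases (3.24+) support an UPSERT clause that mirrors MySQL's ON DUPLICATE KEY UPDATE; on older versions, INSERT OR REPLACE is the usual single-statement fallback, though it rewrites the whole row.

        -- SQLite 3.24+ upsert
        INSERT INTO visits (ip, hits) VALUES ('127.0.0.1', 1)
        ON CONFLICT (ip) DO UPDATE SET hits = hits + 1;

        -- Older SQLite: single-statement fallback
        INSERT OR REPLACE INTO visits (ip, hits)
        VALUES ('127.0.0.1',
                COALESCE((SELECT hits FROM visits WHERE ip = '127.0.0.1'), 0) + 1);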

    Read the article

  • Deploying and hosting Scala in the cloud?

    - by TiansHUo
    I am starting a web app with scalability as one of the top priorities. What would be the benefits of a Cassandra + Scala + Lift stack vs. the traditional LAMP stack on the cloud? From what I've read (please correct me), the cloud itself is scalable. I have never seen anyone deploy Scala on the cloud before. Is it worth the effort to learn the platform? Is it ready for production use?

    Read the article

  • How do I connect to SQL Server with VB?

    - by Wayne Werner
    Hi, I'm trying to connect to a SQL Server from VB. The SQL Server is across the network and uses my Windows login for authentication. I can access the server using the following Python code:

        import odbc
        conn = odbc.odbc('SignInspection')
        c = conn.cursor()
        c.execute("SELECT * FROM list_domain")
        c.fetchone()

    This code works fine, returning the first result of the SELECT. However, I've been trying to use SqlClient.SqlConnection in VB, and it fails to connect. I've tried several different connection strings, but this is the current code:

        Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
            Dim conn As New SqlClient.SqlConnection
            conn.ConnectionString = "data source=signinspection;initial catalog=signinspection;integrated security=SSPI"
            Try
                conn.Open()
                MessageBox.Show("Sweet Success")
                'Insert some code here, woo
            Catch ex As Exception
                MessageBox.Show("Failed to connect to data source.")
                MessageBox.Show(ex.ToString())
            Finally
                conn.Close()
            End Try
        End Sub

    It fails miserably, and it gives me an error that says "A network-related or instance-specific error occurred... (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)". I'm fairly certain it's my connection string, but nothing I've found has given me any solid examples (server=mySQLServer is not a solid example) of what I need to use. Thanks! -Wayne

    Read the article

  • Question about joins and tables with millions of rows

    - by xRobot
    I have to create 2 tables:

        Magazine (10 million rows, columns: id, title, genres, printing, price)
        Author   (180 million rows, columns: id, name, magazine_id)

    Every author writes for ONLY ONE magazine, and every magazine has many authors. So if I want to know all authors of the Motors magazine, I have to use this query:

        SELECT * FROM Author, Magazine
        WHERE ( Author.magazine_id = Magazine.id ) AND ( genres = 'Motors' )

    The same applies to the printing and price columns. To avoid these joins on tables with millions of rows, I thought of using these tables instead:

        Magazine (10 million rows, columns: id, title, genres, printing, price)
        Author   (180 million rows, columns: id, name, magazine_id, genres, printing, price)

    and this query:

        SELECT * FROM Author WHERE genres = 'Motors'

    Is it a good approach? I can use PostgreSQL or MySQL.
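
    A hedged sketch of the normalized alternative, with illustrative index names: with an index on the join key and on the filter column, the join usually stays cheap even at these row counts, and it avoids duplicating genres, printing and price across 180 million author rows.

        CREATE INDEX idx_author_magazine ON Author (magazine_id);
        CREATE INDEX idx_magazine_genres ON Magazine (genres);

        -- Authors of all magazines in the 'Motors' genre
        SELECT a.*
        FROM Author a
        JOIN Magazine m ON a.magazine_id = m.id
        WHERE m.genres = 'Motors';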

    Read the article
