Search Results

Search found 16059 results on 643 pages for 'global temp tables'.


  • How to use text columns in a trigger

    - by Jeremy
    I am trying to use an update trigger in SQL Server 2000 so that when an update occurs, I insert a row into a history table, so I retain all history on a table:

        CREATE TRIGGER trUpdate_MyTable ON MyTable
        FOR UPDATE
        AS
        INSERT INTO [MyTableHistory] (
            [AuditType], [MyTable_ID], [Inserted], [LastUpdated], [LastUpdatedBy],
            [Vendor_ID], [FromLocation], [FromUnit], [FromAddress], [FromCity],
            [FromProvince], [FromContactNumber], [Comment])
        SELECT [AuditType] = 'U', D.*
        FROM deleted D
        JOIN inserted I ON I.[ID] = D.[ID]
        GO

    Of course, I get the error "Cannot use text, ntext, or image columns in the 'inserted' and 'deleted' tables." I tried joining to MyTable instead of deleted, but because the trigger fires after the update, it ends up inserting the new record into the history table, when I want the original record. How can I do this and still use text columns?
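
    One possible way around this in SQL Server 2000: INSTEAD OF triggers, unlike AFTER triggers, are allowed to read text, ntext, and image columns from the inserted and deleted tables. A minimal sketch, with the column list cut down to [Comment] for brevity (the trigger must then apply the update itself):

        CREATE TRIGGER trUpdate_MyTable ON MyTable
        INSTEAD OF UPDATE
        AS
        BEGIN
            -- INSTEAD OF triggers may read text columns from deleted
            INSERT INTO MyTableHistory (AuditType, MyTable_ID, Comment)
            SELECT 'U', D.ID, D.Comment
            FROM deleted D

            -- re-issue the update that the trigger intercepted
            UPDATE T
            SET T.Comment = I.Comment   -- repeat for the remaining columns
            FROM MyTable T
            JOIN inserted I ON I.ID = T.ID
        END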

    Read the article

  • Annotate and Aggregate function in django

    - by thesteve
    In Django I have the following tables and am trying to count the number of votes per item:

        class Votes(models.Model):
            user = models.ForeignKey(User)
            item = models.ForeignKey(Item)

        class Item(models.Model):
            name = models.CharField()
            description = models.TextField()

    I have the following queryset:

        queryset = Votes.objects.values('item__name').annotate(Count('item'))

    It returns a list with the item name and vote count, but not the item object. How can I set it up so that the object is returned instead of just the string value? I have been messing around with Manager and QuerySet methods; am I on the right track? Any advice would be appreciated.
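
    A sketch of annotating from the Item side instead, which yields full objects; this assumes the default reverse lookup name votes from Votes to Item:

        from django.db.models import Count

        # each result is a full Item instance carrying an extra
        # vote_count attribute
        items = Item.objects.annotate(vote_count=Count('votes'))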

    Read the article

  • Entity Framework Performance Problem

    - by Steve Horn
    I'm hoping that someone can help me understand how to overcome a performance problem I'm running into with the latest version of the Entity Framework. In my test, I created my model from a database consisting of around 80 tables. The problem that I'm running into is that the cost of the very first query I run on a thread is very expensive. If I run without pre-compiling views the first query takes anywhere from 5800 to 6600 milliseconds. If I pre-compile the views (see this article) I can get the initial query cost down to about 2800 to 3200 milliseconds. 3 seconds for each request is still unacceptable for my needs. Subsequent queries are very fast. Can you please help me understand how to eliminate the poor performance of the initial query? I'm using the version of entity framework that ships with Visual Studio 2010 RC.
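
    Beyond pre-compiled views, one common mitigation is to pay the initialization cost at application start instead of on the first request. A sketch, where Entities and SomeEntities stand in for the generated context and one of its entity sets:

        using System.Linq;
        using System.Threading;

        // warm-up: run any trivial query once so metadata loading and view
        // generation happen here rather than in the first user request
        ThreadPool.QueueUserWorkItem(_ =>
        {
            using (var context = new Entities())
            {
                context.SomeEntities.FirstOrDefault();
            }
        });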

    Read the article

  • rails: include statement with two ON conditions

    - by Markus
    Hi, I have three tables, books, bookmarks, and users, where there is an n-to-m relation from books to users through bookmarks. I'm looking for a query where I get all the books of a certain user, including the bookmarks. If no bookmarks are there, there should be a null included... My SQL statement looks like:

        SELECT *
        FROM `books`
        LEFT OUTER JOIN `bookmarks`
          ON bookmarks.book_id = books.id AND bookmarks.user_id = ?

    In Rails I only know the :include statement, but how can I add the second bookmarks.user_id = ? condition in the ON section of this query? If I put it in the :conditions part, no null results get returned! Thanks! Markus
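
    A sketch of one way around it, since :include cannot take extra join conditions: hand-write the join with find_by_sql and bind the user id (Rails 2.x style):

        books = Book.find_by_sql([
          "SELECT books.*, bookmarks.id AS bookmark_id
             FROM books
             LEFT OUTER JOIN bookmarks
               ON bookmarks.book_id = books.id AND bookmarks.user_id = ?",
          user.id])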

    Read the article

  • Access current_user in model

    - by LearnRails
    I have 3 tables:

        items   (columns: name, type)
        history (columns: date, username, item_id)
        users   (columns: username, password)

    When a user, say "ABC", logs in and creates a new item, a history record gets created with the following after_create filter. How do I assign the username 'ABC' to the username field in the history table through this filter?

        class Item < ActiveRecord::Base
          has_many :histories
          after_create :update_history

          def update_history
            histories.create(:date => Time.now, :username => ?) # what goes here?
          end
        end

    My login method in session_controller:

        def login
          if request.post?
            user = User.authenticate(params[:username])
            if user
              session[:user_id] = user.id
              redirect_to(:action => 'home')
              flash[:message] = "Successfully logged in"
            else
              flash[:notice] = "Incorrect user/password combination"
              redirect_to(:action => "login")
            end
          end
        end

    I am not using any authentication plugin. I would appreciate it if someone could tell me how to achieve this without using a plugin (like userstamp etc.) if possible.
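
    A sketch of the plugin-free route: models cannot see the session, so pass the logged-in user in from the controller instead of relying on an after_create filter (method names are illustrative):

        class Item < ActiveRecord::Base
          has_many :histories

          # called explicitly by the controller, which knows who is logged in
          def record_creation_by(user)
            histories.create(:date => Time.now, :username => user.username)
          end
        end

        # in the controller, after the item is created:
        current_user = User.find(session[:user_id])
        @item.record_creation_by(current_user)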

    Read the article

  • mysql: select from a table depending on which table the data is in

    - by user253530
    I have 3 tables holding products for a restaurant: products from the bar, food, and ingredients. I use PHP and MySQL. I have another table that holds information about the orders made so far. Its two most important fields hold the id of the product and the type (from the bar, from the kitchen, or from the ingredients). I was thinking of writing the SQL query below so that it uses either the bar, kitchen, or ingredients table, but it doesn't work. Basically the second table in the join must be either "bar", "produse", or "stoc".

        SELECT K.nume, COUNT(K.cantitate) AS cantitate, SUM(K.pret) AS pret,
               P.nume AS NumeProduse
        FROM `clienti_fideli` AS K
        JOIN if(P.tip, bar, produse) AS P ON K.produs = P.id_prod
        WHERE K.masa = 18 AND K.nume LIKE 'livrari-la-domiciliu'
        GROUP BY NumeProduse
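
    A table name cannot be chosen per row, but the three product tables can be merged into one derived table and matched against the stored type. A sketch, assuming the orders table's type field is named tip and holds the source table's name:

        SELECT K.nume, COUNT(K.cantitate) AS cantitate, SUM(K.pret) AS pret,
               P.nume AS NumeProduse
        FROM `clienti_fideli` AS K
        JOIN (
            SELECT id_prod, nume, 'bar'     AS tip FROM bar
            UNION ALL
            SELECT id_prod, nume, 'produse' AS tip FROM produse
            UNION ALL
            SELECT id_prod, nume, 'stoc'    AS tip FROM stoc
        ) AS P ON K.produs = P.id_prod AND K.tip = P.tip
        WHERE K.masa = 18 AND K.nume LIKE 'livrari-la-domiciliu'
        GROUP BY NumeProduse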

    Read the article

  • MySQL and general database normalization question

    - by Sinan
    I have a question about normalization. Suppose I have an application dealing with songs. First I thought about doing it like this:

        Songs table:      id | song_title | album_id | publisher_id | artist_id
        Albums table:     id | album_title | etc...
        Publishers table: id | publisher_name | etc...
        Artists table:    id | artist_name | etc...

    Then, thinking about normalization, I thought I should get rid of album_id, publisher_id, and artist_id in the songs table and put them in intermediate tables like this:

        Table song_album:     song_id, album_id
        Table song_publisher: song_id, publisher_id
        Table song_artist:    song_id, artist_id

    Now I can't decide which is the better way. I'm not an expert on database design, so if someone would point out the right direction, it would be awesome. Are there any performance issues between the two approaches? Thanks
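
    For what it's worth, the deciding question is cardinality: plain foreign-key columns already satisfy the normal forms when each song has exactly one album, publisher, and artist; a junction table only buys something where the link is genuinely many-to-many. A sketch of a mixed design under that assumption:

        CREATE TABLE songs (
            id           INT PRIMARY KEY,
            song_title   VARCHAR(255),
            album_id     INT,
            publisher_id INT,
            FOREIGN KEY (album_id)     REFERENCES albums (id),
            FOREIGN KEY (publisher_id) REFERENCES publishers (id)
        );

        -- junction table only where the link is many-to-many,
        -- e.g. a song credited to several artists
        CREATE TABLE song_artist (
            song_id   INT,
            artist_id INT,
            PRIMARY KEY (song_id, artist_id),
            FOREIGN KEY (song_id)   REFERENCES songs (id),
            FOREIGN KEY (artist_id) REFERENCES artists (id)
        );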

    Read the article

  • Problems with contenttypes when loading a fixture in Django

    - by gerdemb
    I am having trouble loading Django fixtures into my MySQL database because of contenttypes conflicts. First I tried dumping the data from only my app, like this:

        ./manage.py dumpdata escola > fixture.json

    but I kept getting missing-foreign-key problems, because my app "escola" uses tables from other applications. I kept adding additional apps until I got to this:

        ./manage.py dumpdata contenttypes auth escola > fixture.json

    Now the problem is the following constraint violation when I try to load the data as a test fixture:

        IntegrityError: (1062, "Duplicate entry 'escola-t23aluno' for key 2")

    It seems the problem is that Django is trying to dynamically recreate contenttypes with different primary key values that conflict with the primary key values from the fixture. This appears to be the same as the bug documented here: http://code.djangoproject.com/ticket/7052 The problem is that the recommended workaround is to dump the contenttypes app, which I'm already doing!? What gives? If it makes any difference, I do have some custom model permissions, as documented here: http://docs.djangoproject.com/en/dev/ref/models/options/#permissions
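
    A sketch of the inverse approach that is often suggested, assuming a Django version whose dumpdata supports --exclude: leave content types and permissions out of the fixture entirely and let Django recreate them at load time:

        ./manage.py dumpdata --exclude contenttypes --exclude auth.permission auth escola > fixture.json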

    Read the article

  • A good design pattern for almost similar objects

    - by Sam
    Hello, I have two websites that have almost identical database schemas. The only difference is that some tables in one website have 1 or 2 extra fields that the other doesn't, and vice versa. I want the same database-access-layer classes to manipulate both websites. What would be a good design pattern to handle that little difference? For example, I have a method createAccount(Account account) in my DAO class, but the implementation will be slightly different between website A and website B. I know design patterns don't depend on the language, but FYI I'm working with Perl. Thanks
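
    One candidate is the Template Method pattern: a shared DAO defines the common flow and exposes a hook that each site's subclass overrides for its extra fields. A minimal Perl sketch (package, method, and field names are all illustrative):

        package DAO::Base;
        use strict;
        use warnings;

        sub new { my ($class, %args) = @_; return bless {%args}, $class; }

        sub create_account {
            my ($self, $account) = @_;
            my %row = (name => $account->{name}, email => $account->{email});
            $self->add_site_fields(\%row, $account);  # hook for the differences
            $self->insert_row(accounts => \%row);     # shared INSERT logic
        }

        sub add_site_fields { }  # default: no extra fields

        package DAO::SiteA;
        our @ISA = ('DAO::Base');

        sub add_site_fields {
            my ($self, $row, $account) = @_;
            $row->{referrer} = $account->{referrer};  # SiteA-only column
        }

        1;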

    Read the article

  • Program to find canonical cover or minimum number of functional dependencies

    - by Sev
    I would like to know if there is a program or algorithm to find the canonical cover (minimum set of functional dependencies). For example, if you have:

        R = (A, B, C)   <-- these are attributes: A, B, C

    and the dependencies:

        A → BC
        B → C
        A → B
        AB → C

    the canonical cover (minimum set of dependencies) is:

        A → B
        B → C

    Is there a program that can accomplish this? If not, any code/pseudocode to help me write one would be appreciated. Python or Java preferred.
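
    No standard tool is apparent, but the textbook algorithm is short enough to sketch in Python: repeatedly drop extraneous left-hand-side attributes, then drop any dependency implied by the rest, with both tests based on attribute closure:

        def closure(attrs, fds):
            """Attribute closure of attrs under the dependencies fds."""
            result = set(attrs)
            changed = True
            while changed:
                changed = False
                for lhs, rhs in fds:
                    if lhs <= result and not rhs <= result:
                        result |= rhs
                        changed = True
            return result

        def canonical_cover(fds):
            # split composite right-hand sides: A -> BC becomes A -> B, A -> C
            fds = [(frozenset(l), frozenset(a)) for l, r in fds for a in r]
            changed = True
            while changed:
                changed = False
                # drop extraneous attributes from left-hand sides
                for i, (lhs, rhs) in enumerate(fds):
                    for a in list(lhs):
                        smaller = lhs - {a}
                        if smaller and rhs <= closure(smaller, fds):
                            fds[i] = (smaller, rhs)
                            lhs = smaller
                            changed = True
                # drop any dependency implied by the remaining ones
                for i, fd in enumerate(fds):
                    rest = fds[:i] + fds[i + 1:]
                    if fd[1] <= closure(fd[0], rest):
                        fds = rest
                        changed = True
                        break
            return fds

        fds = [('A', 'BC'), ('B', 'C'), ('A', 'B'), ('AB', 'C')]
        for lhs, rhs in canonical_cover(fds):
            print(''.join(sorted(lhs)), '->', ''.join(sorted(rhs)))

    On the example above this prints A -> B and B -> C, matching the expected cover.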

    Read the article

  • Teradata equivalent of persisted computed column (in SQL Server)

    - by Cade Roux
    We have a few tables with persisted computed columns in SQL Server. Is there an equivalent of this in Teradata? And, if so, what is the syntax, and are there any limitations? The particular computed columns I am looking at conform some account numbers by removing leading zeros; an index is also created on this conformed account number:

        ACCT_NUM_std AS ISNULL(
            CONVERT(varchar(39),
                SUBSTRING(LTRIM(RTRIM([ACCT_NUM])),
                    PATINDEX('%[^0]%', LTRIM(RTRIM([ACCT_NUM])) + '.'),
                    LEN(LTRIM(RTRIM([ACCT_NUM]))))),
            '') PERSISTED

    With the Teradata TRIM function, the trimming part would be a little simpler:

        ACCT_NUM_std AS COALESCE(
            CAST(TRIM(LEADING '0' FROM TRIM(BOTH FROM ACCT_NUM)) AS varchar(39)),
            '')

    I guess I could just make this a normal column and put the code to standardize the account numbers in all the processes which insert into the table. We used the computed column to keep the standardization code in one place.
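
    If it turns out there is no direct equivalent, one fallback (a sketch with an invented table name; it keeps the logic in one place but does not satisfy the index requirement) is to expose the standardized value from a view:

        CREATE VIEW account_std_v AS
        SELECT t.*,
               COALESCE(CAST(TRIM(LEADING '0' FROM TRIM(BOTH FROM ACCT_NUM))
                             AS VARCHAR(39)), '') AS ACCT_NUM_std
        FROM account_table t;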

    Read the article

  • Sql Server - INSERT INTO SELECT to avoid duplicates

    - by Ashish Gupta
    I have the following two tables:

        Table1          Table2
        --------        --------
        ID  Name        ID  Name
        1   A           1   Z
        2   B
        3   C

    I need to insert data from Table1 into Table2, and I can use the following syntax for that:

        INSERT INTO Table2 (Id, Name)
        SELECT Id, Name
        FROM Table1

    However, in my case duplicate Ids might exist in Table2 (in my case it's just "1"), and I don't want to copy that again, as it would throw an error. I can write something like this:

        IF NOT EXISTS (SELECT 1 FROM Table2 WHERE Id = 1)
            INSERT INTO Table2 (Id, Name)
            SELECT Id, Name FROM Table1
        ELSE
            INSERT INTO Table2 (Id, Name)
            SELECT Id, Name FROM Table1 WHERE Table1.Id <> 1

    Is there a better way to do this without using IF-ELSE? I want to avoid two INSERT INTO-SELECT statements based on some condition. Any help is appreciated.
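
    A sketch of the usual single-statement form: filter the duplicates inside the SELECT itself, so no branching is needed and every pre-existing Id is skipped:

        INSERT INTO Table2 (Id, Name)
        SELECT t1.Id, t1.Name
        FROM Table1 t1
        WHERE NOT EXISTS (SELECT 1 FROM Table2 t2 WHERE t2.Id = t1.Id)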

    Read the article

  • How to prevent GetOleDbSchemaTable from returning duplicate sheet names from Excel workbook

    - by Richard Bysouth
    Hi. I have a function that returns a DataView containing info on the sheets in an Excel workbook, as follows:

        Public Function GetSchemaInfo() As DataView
            Using connection As New OleDbConnection(GetConnectionString())
                connection.Open()
                Dim schemaTable As DataTable = connection.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, Nothing)
                connection.Close()
                Return New DataView(schemaTable)
            End Using
        End Function

    This works fine, except that if the workbook has linked data (i.e. pulls its data from another workbook), duplicate sheet names are returned. For example, Workbook1 has a single worksheet, Sheet1, yet I get 2 rows in the DataView, with the TABLE_NAME field being "Sheet1$" and "Sheet1$_". OK, I could use a RowFilter, but I wondered whether there is a better way, and why this extra row is returned? thanks Richard
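
    As for the filter itself, a sketch of the one usually applied: genuine worksheets end in "$", so the trailing-underscore twins can be excluded before the DataView is returned:

        Dim view As New DataView(schemaTable)
        ' keep only names ending in "$"; the "_"-suffixed entries are artifacts
        view.RowFilter = "TABLE_NAME LIKE '*$'"
        Return view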

    Read the article

  • Table and Column names causing problems

    - by craig
    I have an issue when the T4 LINQ templates generate the classes for my MySQL db using SubSonic 3. It looks like one of our table names, "operator", is causing problems in the generated Context.cs class. In the following line of code in Context.cs, Visual Studio sees operator as a C# keyword and generates the compilation error "Type expected":

        public Query<operator> operators { get; set; }

    Is there any way I can work around this without having to rename my database table and column names? For example, hard-coding something in Settings.ttinclude to use or map different names to specific db tables and columns?
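
    One possibility, assuming your template version exposes the CleanUp name-sanitizing hook in Settings.ttinclude: add a rule there that remaps the reserved word, so only the generated class names change while the SQL still targets the real table:

        // a sketch: inside Settings.ttinclude's CleanUp(), remap the keyword
        string CleanUp(string tableName)
        {
            string result = tableName;
            if (result == "operator")
                result = "OperatorTable";   // any non-keyword name will do
            return result;
        }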

    Read the article

  • How do I create a check constraint?

    - by Zack Peterson
    Please imagine this small database...

    Tables:

        Volunteer      Event          Shift          EventVolunteer
        =========      =====          =====          ==============
        Id             Id             Id             EventId
        Name           Name           EventId        VolunteerId
        Email          Location       VolunteerId
        Phone          Day            Description
        Comment        Description    Start
                                      End

    Associations:

    Volunteers may sign up for multiple events. Events may be staffed by multiple volunteers. An event may have multiple shifts. A shift belongs to only a single event. A shift may be staffed by only a single volunteer. A volunteer may staff multiple shifts.

    Check constraints:

    Can I create a check constraint to enforce that no shift is staffed by a volunteer that's not signed up for that shift's event? Can I create a check constraint to enforce that two overlapping shifts are never staffed by the same volunteer?
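
    A sketch of the usual workaround (SQL Server flavored; the function name is invented): a CHECK constraint cannot look at other rows or tables directly, but it can call a scalar function that does:

        CREATE FUNCTION dbo.ShiftVolunteerIsSignedUp (@ShiftId int)
        RETURNS int
        AS
        BEGIN
            DECLARE @ok int
            SELECT @ok = COUNT(*)
            FROM Shift s
            JOIN EventVolunteer ev
              ON ev.EventId = s.EventId AND ev.VolunteerId = s.VolunteerId
            WHERE s.Id = @ShiftId
            RETURN @ok
        END
        GO

        -- rejects any shift whose volunteer is not signed up for its event
        ALTER TABLE Shift ADD CONSTRAINT chkVolunteerSignedUp
            CHECK (dbo.ShiftVolunteerIsSignedUp(Id) = 1)

    The overlapping-shifts rule can be expressed the same way; note, though, that such constraints are only re-checked when Shift rows change, which is why many designs use triggers instead.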

    Read the article

  • How can I best geocode a table of addresses in SQL Server?

    - by ess
    I've got a SQL Server 2008 table with addresses. I've got some C# code that can individually geocode the addresses. I've got a Google Maps API for geocoding. Now I'm trying to figure out the most efficient way to use these resources. I could write a console app that manually updates the tables using my C# library, but the data I have is updated periodically. I will be performing an import routine of some sort and I'm thinking it would be 'simplest' to perform the geocoding as the import occurs. I'm not so strong on SQL Server capabilities, so I'm looking for advice. I've considered letting the import call an assembly I create that would be referenced in SQL Server, but read that Sql Server 2008 has made it virtually impossible to reference your own DLL. So my next guess is having the import call a web service to pass in the address and update the table with the results, but I've not had much luck in finding info on this method. Any advice?
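
    For what it's worth, a sketch of keeping the geocoding in the application tier: after each import, a small C# batch job fills in whatever is missing (table, column, and Geocoder names are stand-ins):

        using System.Data;
        using System.Data.SqlClient;

        // after each import, geocode only rows that still lack coordinates
        using (var conn = new SqlConnection(connectionString))
        {
            var pending = new DataTable();
            new SqlDataAdapter(
                "SELECT Id, Address FROM Addresses WHERE Latitude IS NULL", conn)
                .Fill(pending);

            conn.Open();
            foreach (DataRow row in pending.Rows)
            {
                var result = Geocoder.Geocode((string)row["Address"]); // existing C# library
                var update = new SqlCommand(
                    "UPDATE Addresses SET Latitude = @lat, Longitude = @lng WHERE Id = @id",
                    conn);
                update.Parameters.AddWithValue("@lat", result.Latitude);
                update.Parameters.AddWithValue("@lng", result.Longitude);
                update.Parameters.AddWithValue("@id", row["Id"]);
                update.ExecuteNonQuery();
            }
        }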

    Read the article

  • GORM ID generation and belongsTo association?

    - by fabien-barbier
    I have two domains:

        class CodeSetDetail {
            String id
            String codeSummaryId

            static hasMany = [codes: CodeSummary]
            static constraints = {
                id(unique: true, blank: false)
            }
            static mapping = {
                version false
                id column: 'code_set_detail_id', generator: 'assigned'
            }
        }

    and:

        class CodeSummary {
            String id
            String codeClass
            String name
            String accession

            static belongsTo = [codeSetDetail: CodeSetDetail]
            static constraints = {
                id(unique: true, blank: false)
            }
            static mapping = {
                version false
                id column: 'code_summary_id', generator: 'assigned'
            }
        }

    I get two tables with these columns:

        code_set_detail:
            code_set_detail_id
            code_summary_id

        code_summary:
            code_summary_id
            code_set_detail_id (should not exist)
            code_class
            name
            accession

    I would like to link the code_set_detail and code_summary tables by 'code_summary_id' (and not by 'code_set_detail_id'). Note: 'code_summary_id' is defined as a column in the code_set_detail table, and as the primary key in the code_summary table. To sum up, I would like to define 'code_summary_id' as the primary key in the code_summary table and map 'code_summary_id' in the code_set_detail table. How do I define a primary key in one table and also map that key to another table?

    Read the article

  • How do I delete a foreign key in SQLAlchemy?

    - by Travis
    I'm using SQLAlchemy Migrate to keep track of database changes, and I'm running into an issue with removing a foreign key. I have two tables: t_new is a new table, and t_exists is an existing table. I need to add t_new, then add a foreign key to t_exists. Then I need to be able to reverse the operation (which is where I'm having trouble).

        t_new = sa.Table("new", meta.metadata,
            sa.Column("new_id", sa.types.Integer, primary_key=True)
        )

        t_exists = sa.Table("exists", meta.metadata,
            sa.Column("exists_id", sa.types.Integer, primary_key=True),
            sa.Column(
                "new_id",
                sa.types.Integer,
                sa.ForeignKey("new.new_id", onupdate="CASCADE", ondelete="CASCADE"),
                nullable=False
            )
        )

    This works fine:

        t_new.create()
        t_exists.c.new_id.create()

    But this does not:

        t_exists.c.new_id.drop()
        t_new.drop()

    Trying to drop the foreign key column gives the error:

        1025, "Error on rename of '.\my_db_name\#sql-1b0_2e6' to '.\my_db_name\exists' (errno: 150)"

    If I do this with raw SQL, I can remove the foreign key manually and then remove the column, but I haven't been able to figure out how to remove the foreign key with SQLAlchemy. How can I remove the foreign key, and then the column?
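
    A sketch of what is usually done with sqlalchemy-migrate: drop the constraint through its changeset API before dropping the column (the constraint is named by its columns; behavior can vary by version and backend):

        from migrate.changeset.constraint import ForeignKeyConstraint

        # drop the FK constraint first, then the column, then the table
        fk = ForeignKeyConstraint([t_exists.c.new_id], [t_new.c.new_id])
        fk.drop()

        t_exists.c.new_id.drop()
        t_new.drop()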

    Read the article

  • ObjectContext ConnectionString Sqlite

    - by codegarten
    I need to connect to an SQLite database, so I downloaded and installed System.Data.SQLite and dragged in all my tables with the designer. The designer created a .cs file with public class Entities : ObjectContext and 3 constructors. The first,

        public Entities() : base("name=Entities", "Entities")

    loads the connection string from App.config and works fine. App.config:

        <connectionStrings>
            <add name="Entities"
                 connectionString="metadata=res://*/Db.TracModel.csdl|res://*/Db.TracModel.ssdl|res://*/Db.TracModel.msl;provider=System.Data.SQLite;provider connection string=&quot;data source=C:\Users\Filipe\Desktop\trac.db&quot;"
                 providerName="System.Data.EntityClient" />
        </connectionStrings>

    The second:

        public Entities(string connectionString) : base(connectionString, "Entities")

    The third:

        public Entities(EntityConnection connection) : base(connection, "Entities")

    Here is the problem: I have already tried many configurations, and already used EntityConnectionStringBuilder to make the connection string, with no luck. Can you please point me in the right direction? EDIT(1): How can I construct a valid connection string?
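
    A sketch of building the same string the working App.config entry carries, via EntityConnectionStringBuilder (the metadata resource paths must match the model's exactly):

        using System.Data.EntityClient;
        using System.Data.SQLite;

        var sqlite = new SQLiteConnectionStringBuilder
        {
            DataSource = @"C:\Users\Filipe\Desktop\trac.db"
        };

        var entity = new EntityConnectionStringBuilder
        {
            Metadata = "res://*/Db.TracModel.csdl|res://*/Db.TracModel.ssdl|res://*/Db.TracModel.msl",
            Provider = "System.Data.SQLite",
            ProviderConnectionString = sqlite.ToString()
        };

        var context = new Entities(entity.ConnectionString);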

    Read the article

  • asp.net Dynamic Data site with custom metadata

    - by loviji
    Hello, I'm searching for info about configuring my own metadata in an asp.net Dynamic Data site. For example, I have a table in MS SQL Server with the structure shown below:

        CREATE TABLE [dbo].[someTable](
            [id] [int] NOT NULL,
            [pname] [nvarchar](20) NULL,
            [FullName] [nvarchar](50) NULL,
            [age] [int] NULL)

    and there are 2 MS SQL tables (which I've created), sysTables and sysColumns.

    sysTables:

        ID | sysTableName | TableName | TableDescription
        1  | someTable    | Persons   | All data about persons in the system

    sysColumns:

        ID | TableName | sysColumnName      | ColumnName | ColumnDesc                   | ColumnType   | MUnit
        1  | someTable | sometable_pname    | Name       | Person name (ex. John)       | nvarchar(20) | null
        2  | someTable | sometable_Fullname | Full Name  | Person name (ex. John Black) | nvarchar(50) | null
        3  | someTable | sometable_age      | age        | Person age                   | int          | null

    I want the Details/Edit/Insert/List/ListDetails pages to use sysColumns and sysTables as their metadata, because, for example, in the Details page "FullName" is not as beautiful as "Full Name". Any idea if this is possible? Thanks

    Read the article

  • Data sync solution?

    - by user321088
    For security reasons I'm in an environment where third-party apps can't access my DB. For this reason I need some service/tool/script (dunno what yet... I'm open to the best option, still reading to see what I'm gonna do...) which enables me to generate, on a regular basis (daily, weekly, monthly), a CSV file with all new/modified records for a certain application. I should be able to automate this process and also export a new file at any time. So it should keep track, for each application, of which records it still needs. Each application will need the data in some format (csv/xls/sql); also, some fields will be needed for some applications and some won't... It should be fairly flexible... What is the best option for me? Creating some custom tables for each application, and extracting the modified data based on those?
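
    One common foundation, sketched below for MySQL with invented table names: stamp rows with a last-modified time and keep a per-application high-water mark, so each export can pull only what that application has not yet seen:

        ALTER TABLE records ADD last_modified TIMESTAMP
            DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;

        CREATE TABLE export_state (
            app_name      VARCHAR(100) PRIMARY KEY,
            last_exported DATETIME NOT NULL
        );

        -- each export pulls everything newer than the app's high-water mark,
        -- then advances last_exported
        SELECT r.*
        FROM records r
        JOIN export_state s ON s.app_name = 'app1'
        WHERE r.last_modified > s.last_exported;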

    Read the article

  • Optimization of running total calculation in SQL for multiple values per join condition

    - by Kiril
    I have the following table (test_table):

        date    value
        -------------
        d1       10.0
        d1       20.0
        d2       60.0
        d2       10.0
        d2      -20.0
        d3       40.0

    I calculate the running total as follows. I use the same subquery twice: first to sum the values per date, and then to calculate the running total over those per-date sums. Joining the table to itself directly, where date is not unique, would produce too many rows:

        SELECT t1.date, SUM(t2.value) AS total
        FROM (SELECT date, SUM(value) AS value
              FROM test_table
              GROUP BY date) AS t1
        JOIN (SELECT date, SUM(value) AS value
              FROM test_table
              GROUP BY date) AS t2
          ON t1.date >= t2.date
        GROUP BY t1.date
        ORDER BY t1.date

    This gives me (which is fine):

        date    total
        -------------
        d1       30.0
        d2       80.0
        d3      120.0

    BUT this query isn't very efficient, and I need to change conditions in two places whenever they change. In production, test_table is a lot bigger (over 4 million rows), and the query takes too much time to complete. Question: how can I avoid using the same subquery twice?
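
    A sketch of a single-pass alternative, assuming MySQL (session variables; it relies on the ORDER BY in the derived table, the usual caveat of this idiom — engines with SQL:2003 window functions can use SUM(...) OVER instead):

        SELECT date,
               @running := @running + value AS total
        FROM (SELECT date, SUM(value) AS value
              FROM test_table
              GROUP BY date
              ORDER BY date) AS t
        CROSS JOIN (SELECT @running := 0) AS init;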

    Read the article

  • What happens when auto_increment on integer column reaches the max_value in databases?

    - by Sanoj
    I am implementing a database application and I will use both JavaDB and MySQL as databases. I have an ID column in my tables that has integer as its type, and I use the databases' auto_increment function for the value. But what happens when I get more than 2 (or 4) billion posts and integer is not enough? Does the integer overflow and continue, or is an exception thrown that I can handle? Yes, I could change to long as the datatype, but how do I check when that is needed? And I think there would be a problem with the last_inserted_id() functions if I use long as the datatype for the ID column.
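
    In MySQL, at least, once the column's maximum is reached the insert fails with a duplicate-key error (AUTO_INCREMENT stops advancing past the maximum), so headroom is worth monitoring. A sketch of a check against information_schema (MySQL only; JavaDB would need its own query):

        -- next AUTO_INCREMENT value per table; compare against the column
        -- type's ceiling (2147483647 for signed INT) to see the headroom
        SELECT table_name, auto_increment
        FROM information_schema.tables
        WHERE table_schema = DATABASE()
        ORDER BY auto_increment DESC;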

    Read the article

  • Providing multi-version databases for backward compatibility with production applications/databases

    - by JavaRocky
    How can I manage multiple versions of a database easily? I have some data (views built as selects over data originating in tables from other schemas) which other databases may reference by various means, including database synonyms and links. I wish to provide a sort of interface/guarantee for the applications/databases which use this data, in case I need to update the views in the future for correctness or applicability inside my database. How can I achieve this in a maintained, controlled, and easy way? I am using Oracle 10g, if that matters.
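
    One convention that fits Oracle, sketched here with invented names: publish version-suffixed views as the public interface, freeze old versions, and let a synonym track the current one, so consumers migrate at their own pace:

        -- v1 stays frozen for existing consumers
        CREATE OR REPLACE VIEW customer_v1 AS
            SELECT cust_id, cust_name FROM base.customers;

        -- v2 carries the corrected/extended definition
        CREATE OR REPLACE VIEW customer_v2 AS
            SELECT cust_id, cust_name, cust_region FROM base.customers;

        -- the unversioned name points at the currently recommended version
        CREATE OR REPLACE SYNONYM customer FOR customer_v2;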

    Read the article

  • Fixtures and inheritance in Symfony

    - by Tere
    Hi! I have a database schema in Symfony like this:

        Persona:
          actAs: { Timestampable: ~ }
          columns:
            primer_nombre:    { type: string(255), notnull: true }
            segundo_nombre:   { type: string(255) }
            apellido:         { type: string(255), notnull: true }
            rut:              { type: string(255) }
            email:            { type: string(255) }
            email2:           { type: string(255) }
            direccion:        { type: string(400) }
            ciudad:           { type: string(255) }
            region:           { type: string(255) }
            pais:             { type: string(255) }
            telefono:         { type: string(255) }
            telefono2:        { type: string(255) }
            fecha_nacimiento: { type: date }

        Alumno:
          inheritance:
            type: concrete
            extends: Persona
          columns:
            comentario:  { type: string(255) }
            estado_pago: { type: string(255) }

        Alumno_Beca:
          columns:
            persona_id: { type: integer, primary: true }
            beca_id:    { type: integer, primary: true }
          relations:
            Alumno: { onDelete: CASCADE, local: persona_id, foreign: id }
            Beca:   { onDelete: CASCADE, local: beca_id, foreign: id }

        Beca:
          columns:
            nombre:      { type: string(255) }
            monto:       { type: double }
            porcentaje:  { type: double }
            descripcion: { type: string(5000) }

    As you can see, "Alumno" has concrete inheritance from "Persona". Now I'm trying to create fixtures for these two tables, and I can't make Doctrine load them. It gives me this error:

        SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update
        a child row: a foreign key constraint fails (eat/alumno__beca, CONSTRAINT
        alumno__beca_persona_id_alumno_id FOREIGN KEY (persona_id) REFERENCES
        alumno (id) ON DELETE CASCADE)

    Does someone know how to write a fixture for a table inherited from another? Thanks!
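
    For what it's worth, a sketch of how such a fixture is usually laid out in Doctrine YAML: rows are declared under the concrete child class (so they land in the alumno table that the alumno__beca FK references) and linked by label; all values here are invented:

        Alumno:
          alumno_juan:
            primer_nombre: Juan
            apellido: Perez
            comentario: primer semestre
            estado_pago: pagado

        Beca:
          beca_completa:
            nombre: Beca Completa
            porcentaje: 100

        Alumno_Beca:
          ab_1:
            Alumno: alumno_juan
            Beca: beca_completa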

    Read the article
