Search Results

Search found 9929 results on 398 pages for 'azure tables'.

Page 140/398 | < Previous Page | 136 137 138 139 140 141 142 143 144 145 146 147  | Next Page >

  • MS Access: Permission problems with views

    - by Keith Williams
    "I'll use an Access ADP" I said, "it's only a tiny project and I've got better things to do", I said, "I can build an interface really quickly in Access" I said. </sarcasm> Sorry for the rant, but it's Friday, I have a date in just under two hours, and I'm here late because this just isn't working - so, in despair, I turn to SO for help. Access ADP front-end, linked to a SQL Server 2008 database Using a SQL Server account to log into the database (for testing); this account is a member of the role, "Api"; this role has SELECT, EXECUTE, INSERT, UPDATE, DELETE access to the "Api" schema The "Api" schema is owned by "dbo" All tables have a corresponding view in the Api schema: e.g. dbo.Customer -- Api.Customers The rationale is that users don't have direct table access, but can deal with views as if they were tables I can log into SQL using my test login, and it works fine: no access to the tables, but I can select, insert, update and delete from the Api views. In Access, I see the views, I can open them, but whenever I try to insert or update, I get the following error: The SELECT permission was denied on the object '[Table name which the view is using]', database '[database name]', schema 'dbo' Crazy as it sounds, Access seems to be trying to access the underlying table rather than the view. Any ideas?

    Read the article

  • .NET 3.5 DataGridView Won't Save To Database

    - by Jimbo
    I have created a test Winforms application in Visual Studio 2008 (SP1) to see just how "RAD" C# and .NET 3.5 can be. So far I have mixed emotions.

    1. Added a service-based database to my application (MyDB.mdf) and added two tables - Contact (id [identity], name [varchar] and number [varchar] columns) and Group (id [identity] and name [varchar] columns)
    2. Added a DataSource, selected "Database" as the source, used the default connection string as the connection (which points to my database), selected "All Tables" to be included in the data source and saved it as MyDBDataSet
    3. Expanded the data source showing my two tables, selected the "Group" table, chose to display it as a DataGridView (from the dropdown option on the right of each entity) and dragged it onto Form1, thus creating a groupBindingNavigator, groupBindingSource, groupTableAdapter, tableAdapterManager, myDBDataSet and groupDataGridView
    4. Pressed F5 to test the application, entered the name "Test" under the DataGridView's "name" column and clicked "Save" on the navigator, which has auto-generated code to save the data that looks like this:

        private void groupBindingNavigatorSaveItem_Click(object sender, EventArgs e)
        {
            this.Validate();
            this.groupBindingSource.EndEdit();
            this.tableAdapterManager.UpdateAll(this.myDBDataSet);
        }

    Stop the application and have a look at the database data: you will see no data saved in the "Group" table. I don't know why and can't find out how to fix it! Googled for about 30 minutes with no luck. The code is auto-generated with the controls, so you'd think that it would work too :)

    Read the article

  • Export DataGrid data to a text file in ASP.NET + C#

    - by SRIRAM
    Problem: the compiler complains that there is no assembly reference/namespace for Database.

        Database db = DatabaseFactory.CreateDatabase();
        DBCommandWrapper selectCommandWrapper = db.GetStoredProcCommandWrapper("sp_GetLatestArticles");
        DataSet ds = db.ExecuteDataSet(selectCommandWrapper);

        StringBuilder str = new StringBuilder();
        for(int i=0;i<=ds.Tables[0].Rows.Count - 1; i++)
        {
            for(int j=0;j<=ds.Tables[0].Columns.Count - 1; j++)
            {
                str.Append(ds.Tables[0].Rows[i][j].ToString());
            }
            str.Append("<BR>");
        }

        Response.Clear();
        Response.AddHeader("content-disposition", "attachment;filename=FileName.txt");
        Response.Charset = "";
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.ContentType = "application/vnd.text";
        System.IO.StringWriter stringWrite = new System.IO.StringWriter();
        System.Web.UI.HtmlTextWriter htmlWrite = new HtmlTextWriter(stringWrite);
        Response.Write(str.ToString());
        Response.End();

    Read the article

  • Snapshot agent obliterates conflicts

    - by mwolfe02
    We are using merge replication in SQL Server 2000. We have a snapshot agent that runs every night that updates the publication snapshot. About six months ago we updated from SQL Server 7.0 to 2000 (that's not a typo). We noticed a sharp decline in conflicts at that time but could not track down the reason. We finally found that the daily snapshot agent is recreating the conflict tables every night. This seems to be a change in functionality from SQL Server 7.0. We were running the snapshot agent before and the conflicts would accumulate.

    Is there some way to prevent the data in the conflict tables from being lost when the snapshot runs? Can anyone confirm a change in behavior between 7.0 and 2000? Our current plan is to simply stop automatically updating the publication snapshot. Is that a reasonable workaround?

    Here is the line from the script that is adding the snapshot:

        exec sp_addpublication_snapshot
            @publication = N'MyPub'
          , @frequency_type = 4
          , @frequency_interval = 1
          , @frequency_relative_interval = 1
          , @frequency_recurrence_factor = 0
          , @frequency_subday = 1
          , @frequency_subday_interval = 5
          , @active_start_date = 0
          , @active_end_date = 0
          , @active_start_time_of_day = 500
          , @active_end_time_of_day = 235959

    Here is the step that runs in the agent job:

        Step Name: Run agent.
        Type: Replication Snapshot
        Command: -Publisher [WCDBS02] -PublisherDB [TaxDB] -Distributor [WCDBS02] -Publication [TaxDB] -ReplicationType 2 -DistributorSecurityMode 1

    This appears to be running the Replication Snapshot Agent Utility. There is no mention on that link about dropping and recreating system conflict tables, nor is there any flag that can be set to alter this behavior.
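
    While the root cause is being chased down, the conflict tables can at least be located and archived (SELECT INTO a holding table) before each snapshot run. A hedged sketch; the name patterns are an assumption and differ between versions and publications:

        -- List merge-conflict tables so they can be copied out ahead of the nightly snapshot.
        SELECT name
        FROM   sysobjects
        WHERE  type = 'U'
          AND  (name LIKE 'conflict[_]%' OR name LIKE 'MSmerge[_]conflict%');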

    Read the article

  • Importing Excel spreadsheet data into existing Access DB

    - by Keeb13r
    I've designed an Access 2003 DB with 3 tables: APPLICATIONS, SERVERS, and INSTALLATIONS. Records in the APPLICATIONS and SERVERS tables are uniquely identified by a synthetic primary key (in Access, an "auto number"). The INSTALLATIONS table is essentially a mapping table between APPLICATIONS and SERVERS: it's a list of records of which applications are installed on which servers. A record in the INSTALLATIONS table is also identified by a synthetic primary key, and it consists of an APPLICATION_ID and SERVER_ID for the records in their respective tables.

    I have an Excel 2003 spreadsheet I would like to import into this database, but it's proving difficult. The spreadsheet is made up of several tabs/worksheets, each one representing a server with its own listing of installed applications. I'm not sure how to proceed with an import - the "Get External Data -- Import" feature in Access has an import "In an Existing Table" option, but it's greyed out. I'm also unsure how I build the relationships between applications and servers for importing records into the INSTALLATIONS table. I had previously fooled around with adding some security to the Access DB file. I think I removed everything, but perhaps I didn't and that's causing the problem?

    Some sample data from the Excel spreadsheet:

        SERVER101
        * Adobe Reader 9
        * BMC Remedy User 7.0
        * HostExplorer 2008
        * Microsoft Office 2003
        * Microsoft Office 2007
        * Notepad++

        SERVER102
        * Adobe Reader 9
        * DameWare Mini Remote Control
        * Microsoft Office 2003
        * Microsoft .NET Framework 3.5 SP1
        * Oracle 9.2

        SERVER103
        * AWDView
        * EXTRA! Personal Client 32-bit
        * Microsoft Office 2003
        * Microsoft .NET Framework 3.5 SP1
        * Snagit 9.1
        * WinZip 12.1

    The Access DB design is very simple:

        APPLICATION
        * APPLICATION_ID (autonumber)
        * APPLICATION_NAME (varchar)

        SERVER
        * SERVER_ID (autonumber)
        * SERVER_NAME (varchar)

        INSTALLATION
        * INSTALLATION_ID (autonumber)
        * APPLICATION_ID (number)
        * SERVER_ID (number)
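
    One way this kind of import is sometimes tackled (a hedged sketch; the file path, sheet name and the Staging table are all placeholders, not part of the original database): pull each worksheet into a staging table, make sure the names already exist in APPLICATION and SERVER, then resolve the IDs with joins when filling INSTALLATION.

        -- Pull one worksheet into a staging table (Jet/Access SQL; Excel 8.0 = Excel 97-2003 format).
        SELECT "SERVER101" AS ServerName, F1 AS ApplicationName
        INTO Staging
        FROM [Excel 8.0;HDR=NO;Database=C:\data\installs.xls].[SERVER101$];

        -- Resolve names to surrogate keys and load the mapping table.
        INSERT INTO INSTALLATION (APPLICATION_ID, SERVER_ID)
        SELECT a.APPLICATION_ID, s.SERVER_ID
        FROM (Staging AS st
        INNER JOIN APPLICATION AS a ON a.APPLICATION_NAME = st.ApplicationName)
        INNER JOIN [SERVER] AS s ON s.SERVER_NAME = st.ServerName;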

    Read the article

  • Flexible forms and supporting database structure

    - by sunwukung
    I have been tasked with creating an application that allows administrators to alter the content of the user input form (i.e. add arbitrary fields) - the contents of which get stored in a database. Think Modx/Wordpress/Expression Engine template variables. The approach I've been looking at is implementing concrete tables where the specification is consistent (i.e. user profiles, user content etc) and some generic field data tables (i.e. text, boolean) to store non-specific values. Forms (and model fields) would be generated by first querying the tables and retrieving the relevant columns - although I've yet to think about how I would set up validation. I've taken a look at this problem, and it seems to be indicating an EAV-type approach - which, from my brief research, looks like it could be a greater burden than the blessings its flexibility would bring. I've read a couple of posts here, however, which suggest this is a dangerous route:

    - How to design a generic database whose layout may change over time?
    - Dynamic Database Schema

    I'd appreciate some advice on this matter if anyone has some to give.

    Regards, SWK
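
    For what it's worth, a minimal sketch of the "concrete tables plus typed field tables" idea described above (T-SQL flavour; every name here is illustrative, not taken from the project):

        -- Administrators define extra fields per form.
        CREATE TABLE field_def (
            field_id   INT IDENTITY PRIMARY KEY,
            form_name  VARCHAR(50) NOT NULL,
            field_name VARCHAR(50) NOT NULL,
            field_type VARCHAR(10) NOT NULL  -- 'text', 'double', 'datetime', 'boolean'
        );

        -- One narrow value table per supported type, keyed by the concrete record.
        CREATE TABLE field_value_text (
            record_id  INT NOT NULL,         -- FK to the concrete row (e.g. user profile)
            field_id   INT NOT NULL REFERENCES field_def (field_id),
            value_text VARCHAR(4000) NULL,
            PRIMARY KEY (record_id, field_id)
        );
        -- ...plus sibling tables for double, datetime and boolean values.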

    Read the article

  • Multiple "ObjectChangeTracker" getting created, can it be avoided?

    - by user555937
    Hi, we are working on a POC with the following architecture (MVVM): WPF (client) + WCF + Model (data access) + ADO.NET Entity Framework 4.0 (with SQL Server 2008 R2 as the DB). All are separate projects. In the data access layer we have created different entity models (edmx) based on functionality: the tables under a particular flow are grouped into their own entity model. We are using self-tracking entities to communicate back and forth with the WPF client through the WCF service. For a single model everything works fine, but when we created multiple models a few issues started coming up. Multiple models have a few duplicate tables/entities. The two problems are:

    1) When we try to access entities from different models, multiple "ObjectChangeTracker" objects are getting created. E.g.

        CompanyModel (edmx) - Company (entity)  - ObjectChangeTracker,  ObjectState
        ProductModel (edmx) - Customer (entity) - ObjectChangeTracker1, ObjectState1
        OrderModel (edmx)   - Order (entity)    - ObjectChangeTracker2, ObjectState2

    Is there any way to avoid this?

    2) A few tables are shared across the models, e.g. Company (entity) is used in all the models above. At compile time this does not throw any error, but at run time it fails with: "Schema specified is not valid. Errors: The mapping of CLR type to EDM type is ambiguous because multiple CLR types match the EDM type 'Company'". To resolve this we renamed the entities with a prefix to make them unique. Is there any other way to resolve this without changing the name of the entity in the same assembly?

    Thanks in advance; I'd appreciate it if anyone has an approach for these issues. Thanks, Kiran

    Read the article

  • How can I determine when an InnoDB table was last changed?

    - by David M
    I've had success in the past storing the (heavily) processed results of a database query in memcached, using the last update time of the underlying table(s) as part of the cache key. For MyISAM tables, that last changed time is available in SHOW TABLE STATUS. Unfortunately, that's usually NULL for InnoDB tables. In MySQL 4.1, the ctime for an InnoDB table in its SHOW TABLE STATUS line was usually its actual last update time, but that doesn't seem to be true for MySQL 5.1. There is a DATETIME field in the table, but it only shows when a row has been modified - it cannot show the deletion time of a row that's not there anymore! So, I really cannot use MAX(update_time). Here's the really tricky part. I have a number of replicas that I do reads from. Can I figure out the state of the table in a way that doesn't rely on when the changes were actually applied? My conclusion after working on this for a while is that it's not going to be possible to get this information as cheaply as I'd like. I'm probably going to cache data until the time that I expect the table to change (it's updated once a day), and let the query cache help out where it can.
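
    One approach people sometimes fall back on (a hedged sketch only; the table and trigger names are invented, and it assumes MySQL 5.0+ so triggers are available) is to maintain the "last changed" timestamp explicitly, so that deletes are captured too:

        CREATE TABLE table_versions (
            table_name VARCHAR(64) NOT NULL PRIMARY KEY,
            changed_at DATETIME    NOT NULL
        ) ENGINE=InnoDB;

        -- One trigger per write path on the tracked table; DELETE shown here,
        -- with matching AFTER INSERT / AFTER UPDATE triggers alongside it.
        CREATE TRIGGER orders_touch_delete AFTER DELETE ON orders
        FOR EACH ROW REPLACE INTO table_versions VALUES ('orders', NOW());

        -- The cache key then comes from:
        SELECT changed_at FROM table_versions WHERE table_name = 'orders';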

    Read the article

  • Does Microsoft Access use the PK fields for anything?

    - by chrismay
    Ok, this is going to sound strange, but I have inherited an app that is an Access front end with a SQL Server backend. I am in the process of writing a new front end for it, but... we need to continue using the Access front end for a while even after we deploy my new front end, for reasons I won't go into. So both the existing Access app and my new app will need to be able to access and work with the data. The problem is that the database design is a nightmare. For example, some simple parent-child table relationships have 4- and 5-part composite primary keys. I would REALLY like to remove these PKs and replace them with unique constraints or whatever, and add a new column to each of these tables called ID that is just an identity. If I change the PK and FKs on these tables to more manageable fields, will the Access app have problems? What I mean is, does Access use the metadata from the tables (PK and FK info) in such a way that changing these would break the app?
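
    Whatever the answer on the Access side turns out to be, one low-risk option is to add the surrogate key alongside the existing keys rather than replacing them. A hedged sketch; the table name is a placeholder:

        ALTER TABLE dbo.OrderLine
            ADD ID INT IDENTITY(1,1) NOT NULL;

        ALTER TABLE dbo.OrderLine
            ADD CONSTRAINT UQ_OrderLine_ID UNIQUE (ID);

        -- New foreign keys (and the new front end) can point at ID - a unique
        -- constraint is enough for a FK target - while the legacy composite PK
        -- and existing FKs stay untouched for the Access app.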

    Read the article

  • What should I use to increase performance: view, query, or temporary table?

    - by Shantanu Gupta
    I want to know the relative performance of using views, temp tables and direct queries in a stored procedure. I have a table that gets created every time a trigger fires. I know this trigger will fire very rarely, and only once at setup time. Now I have to use that trigger-created table in many places for fetching data, and I can confirm that no one makes any changes to it afterwards - i.e. it is a read-only table. I have to join its data with multiple other tables to fetch results for further queries, say:

        select * from triggertable

    Using a temp table:

        select ... into #tx from triggertable join t2 join t3 -- and so on
        select a, b, c from #tx  -- do something
        select d, e, f from #tx  -- do something
        -- and so on: around 6-7 queries in a row in a stored procedure.

    Using a view:

        create view viewname as
        (select ... from triggertable join t2 join t3 -- and so on
        )
        select a, b, c from viewname  -- do something
        select d, e, f from viewname  -- do something
        -- and so on: around 6-7 queries in a row in a stored procedure.

    This view could be used in other places as well, so I would create it in the database rather than in the stored procedure.

    Using a direct query:

        select a, b, c from (select ... from triggertable join t2 join t3 join ...)  -- do something
        select a, b, c from (select ... from triggertable join t2 join t3 join ...)  -- do something
        -- and so on: around 6-7 queries in a row in a stored procedure.

    I can use a view, a temporary table, or a direct query in all the upcoming queries. Which would be the best to use in this case?

    Read the article

  • Shaping EF LINQ Query Results Using Multi-Table Includes

    - by sisdog
    I have a simple LINQ EF query below using the method syntax. I'm using my Include statement to join four tables: Event and Doc are the two main tables, EventDoc is a many-to-many link table, and DocUsage is a lookup table. My challenge is that I'd like to shape my results by only selecting specific columns from each of the four tables. But the compiler is giving me the following error:

        'System.Data.Objects.DataClasses.EntityCollection' does not contain a definition for 'Doc' and no extension method 'Doc' accepting a first argument of type 'System.Data.Objects.DataClasses.EntityCollection' could be found.

    I'm sure this is something easy but I'm not figuring it out. I haven't been able to find an example of someone using the multi-table Include but also shaping the projection. Thx, Mark

        var qry = context.Event
            .Include("EventDoc.Doc.DocUsage")
            .Select(n => new {
                n.EventDate,
                n.EventDoc.Doc.Filename,       // <= COMPILER ERROR HERE
                n.EventDoc.Doc.DocUsage.Usage
            })
            .ToList();

        EventDoc ed;
        Doc d = ed.Doc;                        // <= NO COMPILER ERROR, SO I KNOW MY MODEL'S CORRECT
        DocUsage du = d.DocUsage;

    Read the article

  • GridView ObjectDataSource LINQ Paging and Sorting using multiple table query.

    - by user367426
    I am trying to create a paging and sorting object data source that, before execution, returns all results, then sorts on these results before filtering, and then uses the Take and Skip methods with the aim of retrieving just a subset of results from the database (saving on database traffic). This is based on the following article: http://www.singingeels.com/Blogs/Nullable/2008/03/26/Dynamic_LINQ_OrderBy_using_String_Names.aspx

    Now, I have managed to get this working, even creating lambda expressions to reflect the sort expression returned from the grid, and even finding out the data type to sort for DateTime and Decimal:

        public static string GetReturnType<TInput>(string value)
        {
            var param = Expression.Parameter(typeof(TInput), "o");
            Expression a = Expression.Property(param, "DisplayPriceType");
            Expression b = Expression.Property(a, "Name");
            Expression converted = Expression.Convert(Expression.Property(param, value), typeof(object));
            Expression<Func<TInput, object>> mySortExpression = Expression.Lambda<Func<TInput, object>>(converted, param);
            UnaryExpression member = (UnaryExpression)mySortExpression.Body;
            return member.Operand.Type.FullName;
        }

    The problem I have now is that many of the queries return joined tables, and I would like to sort on fields from the other tables. So when executing a query you can create a function that will assign the properties from other tables to properties created in the partial class:

        public static Account InitAccount(Account account)
        {
            account.CurrencyName = account.Currency.Name;
            account.PriceTypeName = account.DisplayPriceType.Name;
            return account;
        }

    So my question is: is there a way to assign the value from the joined table to the property of the current table's partial class? I have tried using

        from a in dc.Accounts
        where a.CompanyID == companyID && a.Archived == null
        select new { PriceTypeName = a.DisplayPriceType.Name }

    but this seems to mess up my SortExpression. Any help on this would be much appreciated; I do understand that this is complex stuff.

    Read the article

  • Heroku only initializes some of my models.

    - by JayX
    So I ran heroku db:push and it returned:

        Sending schema
        Schema: 100% |==========================================| Time: 00:00:08
        Sending indexes
        schema_migrat: 100% |==========================================| Time: 00:00:00
        projects: 100% |==========================================| Time: 00:00:00
        tasks: 100% |==========================================| Time: 00:00:00
        users: 100% |==========================================| Time: 00:00:00
        Sending data
        8 tables, 70,551 records
        groups: 100% |==========================================| Time: 00:00:00
        schema_migrat: 100% |==========================================| Time: 00:00:00
        projects: 100% |==========================================| Time: 00:00:00
        tasks: 100% |==========================================| Time: 00:00:02
        authenticatio: 100% |==========================================| Time: 00:00:00
        articles: 100% |==========================================| Time: 00:08:27
        users: 100% |==========================================| Time: 00:00:00
        topics: 100% |==========================================| Time: 00:01:22
        Resetting sequences

    And when I went to heroku console, this worked:

        >> Task
        => Task(id: integer, topic: string, content: string,

    This worked:

        >> User
        => User(id: integer, name: string, email: string,

    But the rest only returned something like:

        >> Project
        NameError: uninitialized constant Project
        /home/heroku_rack/lib/console.rb:150
        /home/heroku_rack/lib/console.rb:150:in `call'
        /home/heroku_rack/lib/console.rb:28:in `call'
        >> Authentication
        NameError: uninitialized constant Authentication
        /home/heroku_rack/lib/console.rb:150
        /home/heroku_rack/lib/console.rb:150:in `call'

    Update 1: When I typed

        >> ActiveRecord::Base.connection.tables

    it returned:

        => ["projects", "groups", "tasks", "topics", "articles", "schema_migrations", "authentications", "users"]

    Using heroku's SQL console plugin I got:

        SQL> show tables
        +-------------------+
        | table_name        |
        +-------------------+
        | authentications   |
        | topics            |
        | groups            |
        | projects          |
        | schema_migrations |
        | tasks             |
        | articles          |
        | users             |
        +-------------------+

    So I think they already exist in heroku's database. There is probably something wrong with rake db:migrate.

    Update 2: I ran rake db:migrate locally in both production and development modes and nothing went wrong. But when I ran it on heroku it only returned:

        $ heroku rake db:migrate
        (in /disk1/home/slugs/389817_1c16250_4bf2-f9c9517b-bdbd-49d9-8e5a-a87111d3558e/mnt)
        $

    Also, I am using sqlite3.

    Update 3: So I opened up heroku console and typed in the following command:

        class Authentication < ActiveRecord::Base; end

    Amazingly, I was able to call the Authentication class, but once I exited, nothing was changed.

    Read the article

  • Database source control with Oracle

    - by borjab
    I have been looking for hours for a way to check a database into source control. My first idea was a program for calculating database diffs and asking all the developers to implement their changes as new diff scripts. Now I find that if I can dump a database into a file, I could check it in and use it as just another type of file. The main conditions are:

    - Works for Oracle 9iR2
    - Human readable, so we can use diff to see the differences (.dmp files don't seem readable)
    - All tables in a batch. We have more than 200 tables.
    - It stores BOTH STRUCTURE AND DATA
    - It supports CLOB and RAW types
    - It stores procedures, packages and their bodies, functions, tables, views, indexes, constraints, sequences and synonyms
    - It can be turned into an executable script to rebuild the database on a clean machine
    - Not limited to really small databases (supports at least 200,000 rows)

    It is not easy. I have downloaded a lot of demos that fail in one way or another.

    EDIT: I wouldn't mind alternative approaches, provided that they allow us to check a working system against our release DATABASE STRUCTURE AND OBJECTS + DATA in a batch mode. By the way, our project has been developed for years. Some approaches can be easily implemented when you make a fresh start but seem hard at this point.

    EDIT: To understand the problem better, let's say that some users can sometimes make changes to the config data in the production environment, or developers might create a new field or alter a view without notice in the release branch. I need to be aware of these changes or it will be complicated to merge the changes into production.
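
    One building block that might help (a hedged sketch; it covers object DDL only, using 9iR2's DBMS_METADATA package, and data would still need a separate step such as spooled INSERT scripts or an external tool):

        -- Run from SQL*Plus, spooling the output to a file kept under source control.
        SET LONG 2000000
        SET PAGESIZE 0
        SET LINESIZE 200

        SELECT DBMS_METADATA.GET_DDL(REPLACE(object_type, ' ', '_'), object_name)
        FROM   user_objects
        WHERE  object_type IN ('TABLE', 'VIEW', 'INDEX', 'SEQUENCE', 'SYNONYM',
                               'PROCEDURE', 'FUNCTION', 'PACKAGE', 'PACKAGE BODY', 'TRIGGER');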

    Read the article

  • Selecting More Than 1 Table in A Single Query

    - by Kamran
    I have 5 tables in MS Access:

        Logs [Title, ID, Date, Author]
        Tape [Title, ID, Date, Author]
        Maps [Title, ID, Date, Author]
        VCDs [Title, ID, Date, Author]
        Book [Title, ID, Date, Author]

    I tried my level best with this code:

        SELECT Logs.[Author], Tape.[Author], Maps.[Author], VCDs.[Author], Book.[Author]
        FROM Logs, Tape, Maps, VCDs, Book
        WHERE ((([Author] & " " & [Author] & " " & [Author] & " " & [Author] & " " & [Author]) Like "*" & [Type the Title or Any Part of the Title and Press Ok] & "*"));

    I want to select all of these fields in a single query. Suppose Adam is the author of works in all tables; when I put Adam in the search box it should return results from all tables. I know this could be done by having a single table or renaming field names, but that's not required. Please help.
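
    For what it's worth, a hedged sketch of the usual UNION ALL approach (the saved query name qryAllItems is made up, and the filter matches on Author to fit the Adam example):

        -- Saved as qryAllItems:
        SELECT Title, ID, [Date], Author, "Logs" AS Source FROM Logs
        UNION ALL SELECT Title, ID, [Date], Author, "Tape" FROM Tape
        UNION ALL SELECT Title, ID, [Date], Author, "Maps" FROM Maps
        UNION ALL SELECT Title, ID, [Date], Author, "VCDs" FROM VCDs
        UNION ALL SELECT Title, ID, [Date], Author, "Book" FROM Book;

        -- The search query then runs over the union:
        SELECT *
        FROM qryAllItems
        WHERE Author Like "*" & [Type the Author or Any Part of the Name and Press Ok] & "*";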

    Read the article

  • What are the Limitations for Connecting to an Access Query in Excel

    - by thornomad
    I have an Access 2007 database that has a number of tables, some fairly large (100,000+ records). I have created a union query to pull some of the same types of data from multiple tables into one large query for pivot table manipulation and reporting. For example:

        SELECT Language FROM Table1
        UNION ALL SELECT Language FROM Table2
        UNION ALL SELECT Language FROM Table3;

    This works. I found quickly, however, that a union query will not show up when connecting to the data source from Excel 2007. So I created a second query to reference the union query, like so:

        SELECT * FROM [The Above Union Query];

    This query works and it was initially accessible from Excel. Time passed, and I've added more data. Suddenly, when I connect to my Access database from Excel, my query referencing the union has disappeared. MS Access shows no signs of an issue (data displays in Access) and my other non-union queries are showing up in Excel 2007... but not the one that references the union. What could be going on? Why did it disappear? I noticed that if I switch some of the referenced tables in the union query to a smaller table (with fewer rows), all of a sudden the query appears in Excel again. At least, I think that's what the difference is. I really can't put my finger on why some of the union queries won't show up and some will. I am stumped and need some guidance. Thanks.

    Read the article

  • SQL Server 2005 Check Constraint not being applied in execution when using variables

    - by DarylS
    Here is some SQL sample code:

        --Create 2 Sales tables with constraints based on the saledate
        create table Sales1(SaleDate datetime, Amount money)
        ALTER TABLE dbo.Sales1 ADD CONSTRAINT CK_Sales1 CHECK (([SaleDate]>='01 May 2010'))
        GO
        create table Sales2(SaleDate datetime, Amount money)
        ALTER TABLE dbo.Sales2 ADD CONSTRAINT CK_Sales2 CHECK (([SaleDate]<'01 May 2010'))
        GO

        --Insert some data into Sales1
        insert into Sales1 (SaleDate, Amount) values ('02 May 2010', 50)
        insert into Sales1 (SaleDate, Amount) values ('03 May 2010', 60)
        GO

        --Insert some data into Sales2
        insert into Sales2 (SaleDate, Amount) values ('30 Mar 2010', 10)
        insert into Sales2 (SaleDate, Amount) values ('31 Mar 2010', 20)
        GO

        --Create a view that combines these 2 tables
        create VIEW [dbo].[Sales]
        AS
        SELECT SaleDate, Amount FROM Sales1
        UNION ALL
        SELECT SaleDate, Amount FROM Sales2
        GO

        --Get the results
        --Query 1
        select * from Sales where SaleDate < '31 Mar 2010'
        -- if you look at the execution plan this query only looks at Sales2 (which is good)

        --Query 2
        DECLARE @SaleDate datetime
        SET @SaleDate = '31 Mar 2010'
        select * from Sales where SaleDate < @SaleDate
        -- if you look at the execution plan this query looks at Sales1 and Sales2 (which is NOT good)

    Looking at the execution plan you will see that the two queries are different. For Query 1 the only table that is accessed is Sales2 (which is good). For Query 2 both tables are accessed (which is bad). Why are these execution plans different, and how do I get Query 2 to only access the relevant table when variables are used? I have tried adding indexes on the SaleDate column and that does not seem to help.
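
    One hedged thing worth trying (not guaranteed to help on every 2005 build, so check the plan): with a local variable the statement is compiled for an unknown value, so the optimizer keeps both members of the view; OPTION (RECOMPILE) compiles the statement with the variable's actual value, which can bring back the elimination seen in Query 1.

        DECLARE @SaleDate datetime
        SET @SaleDate = '31 Mar 2010'

        SELECT *
        FROM Sales
        WHERE SaleDate < @SaleDate
        OPTION (RECOMPILE)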

    Read the article

  • database table design

    - by e.b.white
    I designed the tables below for a system that works like a package delivery system. For example, after the user receives the package, the postman records it in the system: the state (history table) is "delivered", the operator is this postman, and the current state (state table) is of course "delivered".

    history table:

        +---------------+--------------------------+
        | Field         | Desc                     |
        +---------------+--------------------------+
        | id            | PRIMARY KEY              |
        +---------------+--------------------------+
        | package_id    | package_tracking_id      |
        +---------------+--------------------------+
        | state         | package_state            |
        +---------------+--------------------------+
        | operators     | operators                |
        +---------------+--------------------------+
        | create_time   | create_time              |
        +---------------+--------------------------+

    state table:

        +---------------+--------------------------+
        | Field         | Desc                     |
        +---------------+--------------------------+
        | id            | PRIMARY KEY              |
        +---------------+--------------------------+
        | package_id    | package_tracking_id      |
        +---------------+--------------------------+
        | state         | latest_package_state     |
        +---------------+--------------------------+

    The above is just the basic information to record; some other information (like invoice, destination, ...) should be recorded as well. But there are different service types, like s1 and s2: for s1 the invoice does not need to be recorded but for s2 it does, and one type may need some other information recorded (like the telephone of the end user). On top of that, at delivery way stations there is additional information to record, and for different service types the information is different. My questions are:

    1. For the different service types, should I declare different tables (option A) or just one big table which can record all information for all types (option B)?
    2. If option A, since the basic information above is a MUST, how can I avoid declaring these duplicate fields in different tables?
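
    On question 2, a hedged sketch of option A that avoids repeating the shared columns (T-SQL flavour shown; every name and type is illustrative): keep the common fields in one table and hang a per-service detail table off it.

        CREATE TABLE history (
            id          INT IDENTITY PRIMARY KEY,
            package_id  VARCHAR(32) NOT NULL,
            state       VARCHAR(20) NOT NULL,
            operators   VARCHAR(50) NOT NULL,
            create_time DATETIME    NOT NULL
        );

        -- Extra fields that only one service type (say s2) needs:
        CREATE TABLE history_s2_detail (
            history_id  INT PRIMARY KEY REFERENCES history (id),
            invoice_no  VARCHAR(30) NULL,
            user_tel    VARCHAR(20) NULL
        );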

    Read the article

  • How can I design a DB where the user can define the fields and types of a detail table in an M-D relationship?

    - by Simon
    My application has one table called 'events', and each event has approx 30 standard fields, but also user-defined fields that could be any name or type, in an 'eventdata' table. Users can define these event data tables by specifying x number of fields (either text/double/datetime/boolean) and the names of these fields. This 'eventdata' table can be different for each 'event'. My current approach is to create a lookup table for the definitions. So if I need to query all 'event' and 'eventdata' per record, I do so in an M-D relationship using two queries (i.e. select * from events, then for each record in 'events', select * from 'some table'). Is there a better approach to doing this? I have implemented this so far, but most of my queries require two distinct calls to the DB - I cannot simply join my master 'events' table with different 'eventdata' tables for each record in 'events'. I guess my main question is: can I join my master table with different detail tables for each record? E.g.

        SELECT E.*, E.Tablename
        FROM events E
        LEFT JOIN 'E.tablename' T ON E._ID = T.ID

    If not, is there a better way to design my database, considering I have no idea how many user-defined fields there may be or what type they will be?
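
    As far as I know, a join can't take its table name from a column in static SQL; a hedged sketch of the dynamic-SQL fallback, one event at a time (T-SQL shown; the variable names and the example ID are placeholders):

        DECLARE @eventId INT, @tbl SYSNAME, @sql NVARCHAR(MAX);
        SET @eventId = 42;  -- placeholder

        SELECT @tbl = Tablename FROM events WHERE _ID = @eventId;

        SET @sql = N'SELECT E.*, T.* FROM events E JOIN ' + QUOTENAME(@tbl) +
                   N' T ON T.ID = E._ID WHERE E._ID = @id';
        EXEC sp_executesql @sql, N'@id INT', @id = @eventId;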

    Read the article

  • Can a T-SQL variable represent an entire row?

    - by elbillaf
    I'm coding for MS SQL Server 10. I have two databases that contain dozens of tables. Each table in one database has a counterpart with the same name in the other database. Tables with the same name have an identical format (fields and data types). The contents of the two tables are similar but not identical. I need to update one based on changes made to the other, but only under certain circumstances. I think I want to use a cursor for this, but I can't find a good example to go by. So far, the MSDN examples read one field at a time into a variable. I do need to be able to read/modify two fields which are identical in each table, but I have to believe there's something less tedious than declaring variables for every field of every table. I would like to be able to FETCH an entire row, check a couple of fields and then decide whether I want to write the entire row to the other table after changing two fields - but do I have to declare variables for EVERY field I want to fetch/write? Is there no way to just FETCH an entire row and write an entire row?
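
    For what it's worth, a hedged sketch of how this is often done without a cursor or per-field variables (all database, table and column names here are placeholders): the "check a couple of fields, then copy the whole row" logic can usually be expressed set-based.

        -- Copy rows that are missing from the target database.
        -- (If the table has an identity column, list the columns explicitly instead of SELECT s.*.)
        INSERT INTO DB2.dbo.Customer
        SELECT s.*
        FROM DB1.dbo.Customer AS s
        LEFT JOIN DB2.dbo.Customer AS t ON t.CustomerID = s.CustomerID
        WHERE t.CustomerID IS NULL;

        -- Update the two fields of interest where the source row is newer.
        UPDATE t
        SET    t.Status = s.Status,
               t.LastModified = s.LastModified
        FROM   DB2.dbo.Customer AS t
        JOIN   DB1.dbo.Customer AS s ON s.CustomerID = t.CustomerID
        WHERE  s.LastModified > t.LastModified;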

    Read the article

  • What is the best way to lay out elements in GWT?

    - by KutaBeach
    What is the best practice to specify the positions of elements in GWT widget? Imagine you have a task: place a set of widgets in page layout. What would you use to position all your buttons and inputs in some order? standart HTML markup with tables/divs + CSS styles for positioning GWT widgets: panels, grids, tables + CSS styles for positioning GWT widgets: panels, grids, tables + their native properties for positioning If 2 or 3 - what would you use to reproduce a standart HTML table with colspans, fixed width columns and paddings? ps UIBinder and XML markup. GWT 2.4 My opinion: one of the biggest advantages of GWT is the ability to prevent programmer from writing HTML markup and add cross-browser support for interfaces. We shouldn't drop these points so its better to choose p.3 and try to use CSS ONLY for decoration - i.e. colors, fonts etc. Another point of view: its a bad idea to place any styles inline. By specifying properties of the widgets in XML markup we are literally doing exactly this. Also, GWT doesn't have enough widgets to produce a normal layout. For example you need to create a simple table with collspans and fixed column width. How would you go about this? Looks like you have to embed several HorizontalPanels into VerticalPanels, specify width/height in everyone of them and produce a great paper of XML by this. So whats your opinion?

    Read the article

  • Determining Best Table Structure for MySQL Performance

    - by Joe Majewski
    I'm working on a browser-based RPG for one of my websites, and right now I'm trying to determine the best way to organize my SQL tables for performance and maintenance. Here's my question: does the number of columns in an SQL table affect the speed at which it can be queried? I am not a newbie when it comes to PHP or MySQL. I used to develop things with the common goal of getting them to work, but I've recently advanced to the stage where a functional program is not good enough unless it's fast and reliable. Anyways, right now I have a members table that has around 15 columns. It contains information such as the player's username, password, email, logins, page views, etcetera. It doesn't contain any information on the player's progress in the game, however. If I added columns for things such as army size, gold, turns, and whatnot, then it could easily rise to around 40 or 50 total columns. Oh, and my database structure IS normalized. Will a table with 50 columns that gets constantly queried be a bad idea? Should I split it into two tables: one for the user's general information and one for the user's game statistics? I know I could check the query time myself, but I haven't actually created the tables yet and I think I'd be better off with some professional advice on this important decision for my game. Thank you for your time! :)
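
    A hedged sketch of the split the question is weighing up (column names are examples, and it assumes members.id is an unsigned INT primary key): the hot, frequently-updated game state sits in its own narrow table and is joined back to members only when the profile data is actually needed.

        CREATE TABLE member_stats (
            member_id INT UNSIGNED NOT NULL PRIMARY KEY,
            army_size INT UNSIGNED NOT NULL DEFAULT 0,
            gold      BIGINT       NOT NULL DEFAULT 0,
            turns     INT UNSIGNED NOT NULL DEFAULT 0,
            CONSTRAINT fk_stats_member FOREIGN KEY (member_id) REFERENCES members (id)
        ) ENGINE=InnoDB;

        -- A typical page load then touches only the narrow table:
        SELECT army_size, gold, turns FROM member_stats WHERE member_id = 123;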

    Read the article

  • List all foreign key constraints that refer to a particular column in a specific table

    - by Sid
    I would like to see a list of all the tables and columns that refer (either directly or indirectly) to a specific column in the 'main' table via a foreign key constraint that is missing the ON DELETE CASCADE setting. The tricky part is that there may be indirect relationships buried up to 5 levels deep (example: ... great-grandchild - FK3 = grandchild = FK2 = child = FK1 = main table). We need to dig up the leaf tables and columns, not just the very first level. The 'good' part about this is that execution speed isn't a concern; it'll be run on a backup copy of the production db to fix any relational issues for the future.

    I did SELECT * FROM sys.foreign_keys but that gives me the name of the constraint - not the names of the child-parent tables and the columns in the relationship (the juicy bits). Plus, the previous designer used short, non-descriptive/random names for the FK constraints, unlike our practice below.

    The way we're adding constraints into SQL Server:

        ALTER TABLE [dbo].[UserEmailPrefs] WITH CHECK
            ADD CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId] FOREIGN KEY([UserId])
            REFERENCES [dbo].[UserMasterTable] ([UserId])
            ON DELETE CASCADE
        GO
        ALTER TABLE [dbo].[UserEmailPrefs] CHECK CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId]
        GO

    The comments in this SO question inspire this question.
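
    A hedged starting point for the direct (level-1) references, using the SQL Server 2005+ catalog views; walking the chain to the deeper levels would mean repeating this per level or wrapping it in a recursive CTE. The UserMasterTable/UserId names are taken from the example above.

        SELECT fk.name                                                       AS constraint_name,
               OBJECT_NAME(fkc.parent_object_id)                             AS child_table,
               COL_NAME(fkc.parent_object_id, fkc.parent_column_id)          AS child_column,
               OBJECT_NAME(fkc.referenced_object_id)                         AS parent_table,
               COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id)  AS parent_column,
               fk.delete_referential_action_desc                             AS on_delete
        FROM   sys.foreign_keys fk
        JOIN   sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
        WHERE  OBJECT_NAME(fkc.referenced_object_id) = 'UserMasterTable'
          AND  COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id) = 'UserId'
          AND  fk.delete_referential_action_desc <> 'CASCADE';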

    Read the article

  • Load Empty Database table

    - by john White
    I am using SQL Express and VS2008. I have a DB with a table named "A", which has an identity (Identity Specification) column named ID. The ID is auto-incremented. Even if a row is deleted, the ID still increases. After several data manipulations, the current ID has reached 15, for example. When I run the application and there's at least 1 row: if I add a new row, the new ID is 16. Everything is fine. If the table is empty (no rows): if I add a new row, the new ID is 0, which is an error (I think), and further data manipulation (e.g. delete or update) results in an unhandled exception. Has anyone encountered this?

    PS. In my table definition, the ID has been set up as follows: Identity Increment = 1; Identity Seed = 1.

    The DB load code is:

        dataSet = gcnew DataSet();
        dataAdapter->Fill(dataSet, "A");
        dataTable = dataSet->Tables["A"];
        dbConnection->Open();

    The Update button method:

        dataAdapter->Update(dataSet, "tblInFlow");
        dataSet->AcceptChanges();
        dataTable = dataSet->Tables["tblInFlow"];
        dataGrid->DataSource = dataTable;

    If I press Update: if there's at least a row, the datagrid view updates and shows the table correctly. If there's nothing in the table (no data row), the Add method will add a new row, but starting from ID 0. If I close the program and restart it, the ID is 16, which is correct.

    This is the Add method:

        row = dataTable->NewRow();
        row["column1"] = "something";
        dataTable->Rows->Add(row);
        dataAdapter->Update(dataSet, "A");
        dataSet->AcceptChanges();
        dataTable = dataSet->Tables["A"];
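
    A hedged check on the server side: the identity seed in SQL Server survives deletes, so if new rows really do show up with ID 0 it suggests the value was generated client-side by the DataSet's AutoIncrement settings rather than by the database. The DBCC line is commented out and only needed if the seed itself ever has to be reset.

        SELECT IDENT_CURRENT('A') AS last_identity_issued,
               (SELECT MAX(ID) FROM dbo.A) AS max_id_in_table;

        -- Only if the seed genuinely needs resetting after clearing the table:
        -- DBCC CHECKIDENT ('dbo.A', RESEED, 15);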

    Read the article

  • Adding linked table to Access 2003 while keeping ODBC connection info in MDB

    - by webworm
    I have an Access 2003 database MDB where all of the tables exist as linked tables within SQL Server 2005. The MDB file contains all of the ODBC information that points to the correct SQL Server and log-on credentials (trusted connection). What I would like to do is add a new linked table to the MDB file; however, I am not sure how to go about specifying the ODBC connection information. When I try to add a new linked table I keep getting prompted to locate or create a DSN file. I don't want to have to create a new DSN entry on every machine; rather, I would like all that information stored within the Access MDB file itself. In the existing database I can hover over the table names and see the ODBC connection info as a tooltip. All I need to do is add another linked table using the same connection information. I do have access to the SQL Server that the tables are linked to. I have already created the new table I wanted to add; I just need to find a way to link to it.

    Read the article
