Search Results

Search found 27248 results on 1090 pages for 'table adapter'.

Page 48 of 1090

  • Should foreign keys become the table's primary key?

    - by Carvell Fenton
    Hello again, I have a table (session_comments) with the following field structure: student_id (foreign key to the students table), session_id (foreign key to the sessions table), session_subject_id (foreign key to the session_subjects table), user_id (foreign key to the users table), comment_date_time, comment. Now, the combination of student_id, session_id, and session_subject_id will uniquely identify a comment about that student for that session subject. Given that they are unique in combination, even though they are foreign keys, is there an advantage to making them the combined primary key for that table? Thanks again.
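
    A minimal T-SQL sketch of the composite-key option being asked about (the column names come from the question; the referenced key columns and data types are assumptions for illustration):

        CREATE TABLE session_comments
        (
            student_id         int NOT NULL REFERENCES students (student_id),              -- assumed referenced column
            session_id         int NOT NULL REFERENCES sessions (session_id),              -- assumed referenced column
            session_subject_id int NOT NULL REFERENCES session_subjects (session_subject_id),
            user_id            int NOT NULL REFERENCES users (user_id),
            comment_date_time  datetime NOT NULL,
            comment            varchar(max) NULL,
            -- the three foreign keys together act as the combined, natural primary key
            CONSTRAINT PK_session_comments
                PRIMARY KEY (student_id, session_id, session_subject_id)
        );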

    Read the article

  • LLBLGen: Copy table from one database to another

    - by StreamT
    I have two databases (SQL Server 2005) with the same table schemes. I need to copy data from a source table to a destination table, with some modification of the data along the way. And if the destination table already contains some data, then rows from the source table should not override it, but be added to the destination table. In our project we use LLBLGen and LINQ to LLBLGen as the ORM solution. Example (Table 1 is the destination, Table 2 the source, and the result is Table 1 after the copy):

        Table 1:        Table 2:        Table 1 (result):
        Key  Value      Key  Value      Key  Value
        1    One        1    T2_One     1    One
        2    Two        2    T2_Two     2    Two
        3    Three                      3    Three
                                        4    T2_One
                                        5    T2_Two
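
    The set-based equivalent of the desired copy, as a hedged T-SQL sketch (the project actually wants this through LLBLGen/LINQ, and the table and column names here are placeholders):

        -- Append the source rows and let the destination generate fresh keys,
        -- so existing destination rows are never overwritten.
        INSERT INTO DestinationTable ([Value])
        SELECT s.[Value]              -- apply any per-row modification here
        FROM   SourceTable AS s;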

    Read the article

  • Windows XP Home Edition SP3 can't recognise PCMCIA SD Card

    - by Pozo
    System Specifications: Laptop: Dell Inspiron 6000. OS: Windows XP Home Edition SP3. SD Adapter: Hagiwara Smart Media Adapter. I inserted the card into the slot; Windows XP recognises the device, lists the PCMCIA controller in Device Manager, and an entry appears under the IDE ATA/ATAPI category in Device Manager as well. However, the device does not show up under My Computer and the drive does not get assigned a letter. I checked the system logs from Device Manager and there were no logged errors. Checking the Hagiwara support website, the manufacturer indicates that the adapter driver is the same as the Windows XP PCMCIA controller driver. Checking Dell's website, no specific drivers were listed for it either. A general search on the web indicates that multiple people face similar problems with their SD cards, yet none actually spell out the root cause. Please let me know if you have any suggestions for further debugging. Thanks in advance

    Read the article

  • .NET Data Adapter Timeout SP Issue

    - by A-B
    We have a SQL Server stored procedure that runs fine in SQL Manager directly; it does a rather large calculation but only takes 5-10 seconds max to run. However, when we call this from the .NET app via a data adapter it times out. The timeout, however, happens before the timeout period should expire: we set it to 60 seconds and it still times out in about 20 seconds or less. I've Googled the issue and seen others note cases where an SP works fine directly but is slow via a data adapter call. Any ideas on how to resolve this?
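
    One commonly cited culprit for "fast in Management Studio, times out from ADO.NET" is parameter sniffing; a hedged T-SQL sketch of the usual mitigation (the procedure, table and parameter names are placeholders, not taken from the question):

        ALTER PROCEDURE dbo.RunLargeCalculation
            @AccountId int
        AS
        BEGIN
            -- Ask for a plan compiled for the actual parameter value on each run,
            -- instead of reusing a plan sniffed for an unrepresentative value.
            SELECT SUM(Amount)
            FROM   dbo.Ledger
            WHERE  AccountId = @AccountId
            OPTION (RECOMPILE);
        END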

    Read the article

  • Mysql adapter for Zend_Translate

    - by Peter
    Hi everybody, I'm currently in the planning phase of a rather large project that I'll develop in the Zend Framework. One of the problems I'm facing is that the customers will want to translate not only the content but also the interface. I'm currently using gettext and poedit to manage my language files, but this is not an option for the customer as they, for one, won't have FTP access to the site. Hence, I'm thinking of a MySQL back end with an interface in the front end for the customer to manage his own translations of the interface. There is, however, still no MySQL adapter for Zend_Translate. So, does anybody know of an adapter script for Zend_Translate so it can work with a MySQL table? Or any arguments against using MySQL, and possible other solutions for this problem?
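
    A minimal sketch of the kind of MySQL table such a custom adapter could read from (the schema is an assumption for illustration; it is not part of Zend_Translate):

        CREATE TABLE interface_translation
        (
            id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            locale      VARCHAR(10)  NOT NULL,   -- e.g. 'en_US', 'de_DE'
            message_id  VARCHAR(255) NOT NULL,   -- the source string or message key
            translation TEXT         NOT NULL,
            UNIQUE KEY uq_locale_message (locale, message_id)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;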

    Read the article

  • Setting an ImageView to invisible inside a custom Adapter

    - by iyad al aqel
    I'm defining my own list adapter and I want an image inside it to be shown OR hidden based on a value. What I've noticed is that it is always invisible or visible, disregarding the value. Here's my code; it is inside the getView method:

        singleRow = data.get(position);
        readit = singleRow.getRead();
        Log.i("readit", "" + readit);
        //NotificationID = singleRow.getId();
        holder.title.setText(singleRow.getAttach_title());
        holder.date.setText(singleRow.getAttach_created());
        holder.dueDate.setVisibility(ImageView.INVISIBLE);
        holder.course.setText(singleRow.getCourse_title());
        if (readit == 1) {
            //holder.read.setImageResource(IGNORE_ITEM_VIEW_TYPE);
            holder.read.setVisibility(ImageView.INVISIBLE);
        } else {
            holder.read.setImageResource(R.drawable.unread);
        }

    Read the article

  • Update a tableView with a plist taken from another table

    - by Pheel
    Background: I have a tab bar application, which has a tableView as the "heart" of the app. It loads data from a plist and, through a button that checks if there are any updates to the remote plist file, updates the local plist with the remote contents. Then, I have another tableView that should display only those plist items that have a bool value set to YES. Now I want to add a button to the second table that reloads the plist taken from the first table.

    Expected: When I update the local plist from the first table and then press the button on the second table, the 2nd table is supposed to update and show the cells with that bool value set to YES. (Note: I set YES as default for some items in the plist.)

    What happens: The first table updates its content from remote. The second table shows the old items with the value set to YES. When I press the button to refresh data, it reads the plist fine (by logging it, it has the same contents as the first table -only those set to YES-), but it doesn't update data even though I have [self.tableView reloadData];. When I close the app and open it again, the second table is filled with the right items. :\ Code I'm using:

        //Reading Plist
        {
            NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
            NSString *plistPath = [[documentPaths lastObject] stringByAppendingPathComponent:@"myPlist.plist"];
            NSFileManager *fMgr = [NSFileManager defaultManager];
            if (![fMgr fileExistsAtPath:plistPath]) {
                plistPath = [[NSBundle mainBundle] pathForResource:@"myPlist" ofType:@"plist"];
            }
            NSMutableArray *returnArr = [NSMutableArray arrayWithContentsOfFile:plistPath];
            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"isFav == YES"];
            for (NSDictionary *sect in returnArr) {
                NSArray *arr = [sect objectForKey:@"Rows"];
                [sect setValue:[arr filteredArrayUsingPredicate:predicate] forKey:@"Rows"];
            }
            [self.tableView reloadData];
        }

        //Refresh data button
        - (void) refreshTable:(id)sender {
            NSLog(@"plist read");
            [self readPlist];
            NSLog(@"refreshed plist:%@", [self readPlist]);
            [self.tableView reloadData];
        }

    Does anyone know why the table is not updating?

    Read the article

  • Upgraded activerecord-sqlserver-adapter from 2.2.22 to 2.3.8 and now getting an ODBC error

    - by stuartc
    I have been using MSSQL 2005 with Rails for quite a while now, and decided to bump my gems up on one of my projects and ran into a problem. I moved from 2.2.22 to 2.3.8 (latest as of writing) and all of a sudden I got this: ODBC::Error: S1090 (0) [unixODBC][Driver Manager]Invalid string or buffer length. I'm using a DSN connection with FreeTDS; my database.yml looks like this:

        adapter: sqlserver
        mode: ODBC
        dsn: 'DRIVER=FreeTDS;TDSVER=7.0;SERVER=10.0.0.5;DATABASE=db;Port=1433;UID=user;PWD=pwd;'

    Now in the meantime I have moved back to 2.2.22, and there are no deprecation warnings and everything seems fine; but obviously, for the sake of being up to date, any ideas what could have changed in the adapter that could cause this?

    Read the article

  • Capistrano 3, Rails 4, database configuration does not specify adapter

    - by Kazmin
    When I start cap production deploy it fails like this:

        DEBUG [4ee8fa7a] Command: cd /home/deploy/myapp/releases/releases/20131025212110 && (RVM_BIN_PATH=~/.rvm/bin RAILS_ENV= ~/.rvm/bin/myapp_rake assets:precompile )
        DEBUG [4ee8fa7a] rake aborted!
        DEBUG [4ee8fa7a] database configuration does not specify adapter

    You can see that "RAILS_ENV=" is actually empty and I'm wondering why that might be happening? I assume that this is the reason for the latter error that I don't have a database configuration. The deploy.rb file is below:

        set :application, 'myapp'
        set :repo_url, '[email protected]:developer/myapp.git'
        set :branch, :master
        set :deploy_to, '/home/deploy/myapp/releases'
        set :scm, :git
        set :devpath, "/home/deploy/myapp_development"
        set :user, "deploy"
        set :use_sudo, false
        set :default_env, { rvm_bin_path: '~/.rvm/bin' }
        set :keep_releases, 5

        namespace :deploy do
          desc 'Restart application'
          task :restart do
            on roles(:app), in: :sequence, wait: 5 do
              # Your restart mechanism here, for example:
              within release_path do
                execute " bundle exec thin restart -O -C config/thin/production.yml"
              end
            end
          end

          after :restart, :clear_cache do
            on roles(:web), in: :groups, limit: 3, wait: 10 do
              within release_path do
              end
            end
          end

          after :finishing, 'deploy:cleanup'
        end

    Read the article

  • forcing Management Studio to use alter table instead of drop/recreate

    - by marco
    Hi! I'm wondering whether there is a way to force MSSQL Management Studio to produce a script like this:

        ALTER TABLE Mytable ADD
            MyCol bit NOT NULL CONSTRAINT MyColDefault DEFAULT 0 WITH VALUES

        ALTER TABLE [dbo].Mytable
            ALTER COLUMN MyCol2 int NULL
        GO

    when I alter a very simple property of a column on a table. If I do this in the designer and ask for the produced script, the script doesn't do such simple tasks; instead it copies all the data into a tmp table, drops the original table, and renames the tmp table with the original table name. And, of course, it drops and recreates every constraint and relationship. Is there any option I can change to change this behaviour? Or, and this may well be the case, is there some danger I don't see in using the simple ALTER TABLE above? Thanks.

    Read the article

  • Making a table in a scrolling div resizable?

    - by Mason Jones
    I've got a table in a div, with a vertical scrollbar on the div to allow the table to be longer than the div can hold. Works fine. But I'd like to allow the user to resize the div vertically if they want to be able to view more of the table. I've been playing with the jQueryUI resizable interaction, but it doesn't seem to quite do what I want; at least, not so far. I've tried making the wrapper div resizable, but the behavior's erratic. If I have the style "height:20em; overflow:auto;" on it, then I can resize the table horizontally, but not vertically. If I remove the overflow, then the table flows outside the div of course. If I remove the height, then the table is actually resizable, but it is initially drawn at full height. Anyone know of a way to specify an initial height, but allow it to be resized larger than that? If I make the table resizable rather than the div, then I can resize the table horizontally within the div but I can't increase the height of the displayed table. Which makes sense, of course, but I thought I'd mention it. Also, is there a way to make the resize "handle" the corner between the horizontal and vertical scrollbars? Right now it's a sort of invisible handle in the bottom-right of the table. Thanks for any thoughts.

    Read the article

  • Database abstraction/adapters for ruby

    - by Stiivi
    What are the database abstractions/adapters you are using in Ruby? I am mainly interested in data-oriented features, not in those with object mapping (like ActiveRecord or DataMapper). I am currently using Sequel. Are there any other options? I am mostly interested in:

        - a simple, clean and non-ambiguous API
        - data selection (obviously), filtering and aggregation
        - raw value selection without field mapping: SELECT col1, col2, col3 => [val1, val2, val3], not a hash of { :col1 => val1, ... }
        - an API that takes table schemas ('some_schema.some_table') into account in a consistent (and working) way; also reflection for this (get schema from table)
        - database reflection: get the list of table columns, their database storage types and perhaps the adapter's abstracted types
        - table creation and deletion
        - being able to work with other tables (insert, update) in a loop enumerating a selection from another table, without requiring all records to be fetched from the table being enumerated

    The purpose is to manipulate data with unknown structure at the time of writing the code, which is the opposite of object mapping, where the structure, or most of it, is usually well known. I do not need the object mapping overhead. What are the options, including back-ends for object-mapping libraries?

    Read the article

  • Issue with creating index organized table

    - by mtim
    I'm having a weird problem with an index organized table. I'm running Oracle 11g Standard. I have a table src_table:

        SQL> desc src_table;
         Name            Null?    Type
         --------------- -------- ----------------------------
         ID              NOT NULL NUMBER(16)
         HASH            NOT NULL NUMBER(3)
         ........

        SQL> select count(*) from src_table;

          COUNT(*)
        ----------
          21108244

    Now let's create another table and copy 2 columns from src_table:

        SQL> set timing on
        SQL> create table dest_table(id number(16), hash number(20), type number(1));

        Table created.
        Elapsed: 00:00:00.01

        SQL> insert /*+ APPEND */ into dest_table (id,hash,type) select id, hash, 1 from src_table;

        21108244 rows created.
        Elapsed: 00:00:15.25

        SQL> ALTER TABLE dest_table ADD ( CONSTRAINT dest_table_pk PRIMARY KEY (HASH, id, TYPE));

        Table altered.
        Elapsed: 00:01:17.35

    It took Oracle < 2 min. Now the same exercise, but with an IOT table:

        SQL> CREATE TABLE dest_table_iot (
               id   NUMBER(16) NOT NULL,
               hash NUMBER(20) NOT NULL,
               type NUMBER(1)  NOT NULL,
               CONSTRAINT dest_table_iot_PK PRIMARY KEY (HASH, id, TYPE)
             ) ORGANIZATION INDEX;

        Table created.
        Elapsed: 00:00:00.03

        SQL> INSERT /*+ APPEND */ INTO dest_table_iot (HASH,id,TYPE) SELECT HASH, id, 1 FROM src_table;

    The "insert" into the IOT takes 18 hours!!! I have tried it on 2 different instances of Oracle running on Windows and Linux and got the same results. What is going on here? Why is it taking so long?
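
    A commonly suggested mitigation, sketched here as an assumption to test rather than a guaranteed fix: feed the IOT in primary-key order, so the underlying B-tree is filled left to right with far fewer block splits:

        -- TYPE is the constant 1, so ordering by (hash, id) already matches the PK order.
        INSERT INTO dest_table_iot (hash, id, type)
        SELECT hash, id, 1
        FROM   src_table
        ORDER  BY hash, id;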

    Read the article

  • SSIS - SharePoint to SQL without Adapter Addin?

    - by Mark
    Hey all, I'm looking to extract a SharePoint list (WSS 2.0) to a SQL (2005) table using SQL Server Integration Services. First off, I am aware of the "adapter" that does this, from http://msdn.microsoft.com/en-us/library/dd365137.aspx; however, I'm just wondering, for compatibility purposes, if it can't just be done "out of the box". There are only a limited number of "Data Flow Sources" to select as alternatives, and I am unsure if any of these would be able to work in a similar way, either directly against SharePoint or via SharePoint's web services (e.g. http://server_name/_vti_bin/Lists.asmx). From the list of these sources it looks like the best option would be the OLE DB connector, but I'm not sure how it would do this. Any help you have would be great, Mark

    Read the article

  • Does Table.InsertOnSubmit create a copy of the original table?

    - by Bryan
    Using InsertOnSubmit seems to have some memory overhead. I have a System.Data.Linq.Table<User> table. When I do table.InsertOnSubmit(user) and then int count = table.Count(), the memory usage of my application increases by roughly the size of the User table, but the count is the number of items before user was inserted. So I'm guessing an enumeration after InsertOnSubmit will create a copy of the table. Is that true?

    Read the article

  • hibernate modeling relationships managed through an intermediate table

    - by shikarishambu
    I have a datamodel that has an intermediate table to manage relationships between entities. For example, the tables Person and Organization are related through the Relationship table:

        Party (table)
          - ID

        Person (table)
          - ID (references Party.ID)
          - name

        Organization (table)
          - ID (references Party.ID)
          - name

        Relationship (table)
          - ID (PK)
          - type (references relationshiptype lookup)
          - fromID (references Party.ID)
          - ToID (references Party.ID)
          - fromDate
          - ToDate

    Type+fromID+ToID+fromDate+ToDate is guaranteed to be unique. How do I manage this using Hibernate? TIA

    Read the article

  • Table index design

    - by Swoosh
    I would like to add index(es) to my table. I am looking for general ideas on how to add more indexes to a table, other than the clustered PK, and I would like to know what to look for when I am doing this. So, my example: this table (let's call it the TASK table) is going to be the biggest table of the whole application, expecting millions of records. IMPORTANT: a massive bulk-insert is adding data to this table. The table has 27 columns (so far, and counting :D ): int x 9 columns (IDs), varchar x 10 columns, bit x 2 columns, datetime x 5 columns.

    INT COLUMNS: all of these are int IDs, but from tables that are usually much smaller than the Task table (10-50 records max), for example a Status table (with values like "open", "closed") or a Priority table (with values like "important", "not so important", "normal"). There is also a column like "parent-ID" (a self-referencing ID). All the "small" tables have a PK, the usual way ... clustered.

    STRING COLUMNS: there is a Company column (a string!) that is "5 characters long all the time", and every user will be restricted using this one. If there are 15 different "Companies" in Task, the logged-in user would only see one. So there's always a filter on this one. Might it be a good idea to add an index to this column?

    DATE COLUMNS: I think they don't index these ... right? Or can / should they be indexed?
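
    A hedged T-SQL sketch of the kind of index the Company question points at (the table and column names are assumptions; whether it helps, and what to INCLUDE, depends on the real queries and on the bulk-insert pattern):

        -- Support the ever-present "WHERE Company = @company" filter.
        CREATE NONCLUSTERED INDEX IX_Task_Company
            ON dbo.Task (Company)
            INCLUDE (StatusId, PriorityId, CreatedDate);  -- hypothetical covering columns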

    Read the article

  • Is it possible to use the bluetooth adapter from my logitech keyboard to connect a headset?

    - by King Chan
    So, as titled. I have a Bluetooth wireless keyboard that came with a Bluetooth adapter from Logitech two years ago. Recently I bought a Bluetooth headset from another company, and I wonder if I can reuse the Bluetooth adapter from Logitech to use the headset. I don't seem to find an option in my Control Panel that allows me to add the headset device.... Or is the Bluetooth adapter that comes with Logitech only able to connect to the Logitech keyboard? In that case, if I buy a Bluetooth adapter, will it be possible to share 1 Bluetooth adapter with two devices? (1 to 2? not 1 to 1?)

    Read the article

  • StreamTokenizer Iterator Adapter help

    - by alpdog14
    I have this StreamTokenizer Iterator Adapter that is supposed to create a Tokenizer Iterator Index Builder and then build the index from a STIA wrapped around a StreamTokenizer. I am having trouble implementing hasNext and next for my STIA; can anyone help me? Here is my class:

        public class StreamTokenizerIteratorAdapter implements Iterator<Token> {
            DefaultIndexImpl index;
            StreamTokenizer source;

            public StreamTokenizerIteratorAdapter(final StreamTokenizer source) {
                if (source == null) throw new IllegalArgumentException("source == null");
            }

            @Override
            public boolean hasNext() {
                return !index.isEmpty();
            }

            public Token next() {
                if (!index.isEmpty()) return next();
                else return null;
            }

            @Override
            public void remove() {
                throw new UnsupportedOperationException();
            }
        }

    Should I be using the source element in hasNext() and next()?

    Read the article

  • Why I can't use template table in dynamic query SQL SERVER 2005

    - by StuffHappens
    Hello! I have the following T-SQL code which generates an error:

        DECLARE @table TABLE
        (
            ID1 int,
            ID2 int
        )

        INSERT INTO @table VALUES(1, 1);
        INSERT INTO @table VALUES(2, 2);
        INSERT INTO @table VALUES(3, 3);

        DECLARE @field varchar(50);
        SET @field = 'ID1'

        DECLARE @query varchar(MAX);
        SET @query = 'SELECT * FROM @table WHERE ' + @field + ' = 1'
        EXEC (@query)

    The error is: Must declare the table variable "@table". What's wrong with the query? How do I fix it?
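
    For context, EXEC runs the string in its own batch scope, where the outer table variable is not visible; a minimal sketch of one common workaround (switching to a temp table, which the dynamic SQL can see):

        CREATE TABLE #t (ID1 int, ID2 int);
        INSERT INTO #t VALUES(1, 1);
        INSERT INTO #t VALUES(2, 2);
        INSERT INTO #t VALUES(3, 3);

        DECLARE @field sysname;
        SET @field = 'ID1';

        DECLARE @query nvarchar(MAX);
        SET @query = 'SELECT * FROM #t WHERE ' + QUOTENAME(@field) + ' = 1';
        EXEC (@query);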

    Read the article

  • Table Variables in SSIS

    - by aceinthehole
    In one SQL Task, can I create a table variable

        DECLARE @TableVar TABLE (...)

    and then, in another SQL Task or Data Source destination, select from or insert into the table variable? The other option I have considered is using a temp table:

        CREATE TABLE #TempTable (...)

    I would prefer to use a table variable so that it remains in memory, but I can use a temp table if it is not possible to use a table variable. Also, I cannot use the recordset destination, as I need to perform straight SQL tasks on it later on.
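
    If the data has to be visible from more than one task (and therefore potentially more than one connection), one plain-SQL fallback sometimes used is a global temporary table, which other connections can see for as long as the creating connection stays open; a hedged sketch with placeholder columns:

        CREATE TABLE ##SsisStaging
        (
            Id      int          NOT NULL,
            Payload varchar(100) NULL
        );
        -- Later Execute SQL Tasks or destinations can INSERT INTO / SELECT FROM ##SsisStaging.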

    Read the article

  • Zend Database Adapter - Complex MySQL Query

    - by Sonny
    I have defined a function in my Navigation model that executes a query, and I was wondering if there's a more "Zendy" way of generating/executing the query. The query I'm using was proposed by Bill Karwin on another thread here, for setting arbitrary record order. I tried using a prepared statement, but the values in the SIGN() function got quoted. I'm using the PDO adapter for MySQL.

        /**
         *
         */
        public function setPosition($parentId, $oldPosition, $newPosition)
        {
            $parentId = intval($parentId);
            $oldPosition = intval($oldPosition);
            $newPosition = intval($newPosition);

            $this->getAdapter()->query("
                UPDATE `navigation`
                SET `position` = CASE `position`
                    WHEN $oldPosition THEN $newPosition
                    ELSE `position` + SIGN($oldPosition - $newPosition)
                END
                WHERE `parent_id` = $parentId
                  AND `position` BETWEEN LEAST($oldPosition, $newPosition)
                                     AND GREATEST($oldPosition, $newPosition)
            ");

            return $this;
        }

    Read the article

  • Clustered index on frequently changing reference table of one or more foreign keys

    - by Ian
    My specific concern is related to the performance of a clustered index on a reference table that has many rapid inserts and deletes.

        Table 1 "Collection": collection_pk int (among other fields)
        Table 2 "Item": item_pk int (among other fields)
        Reference table "Collection_Items": collection_pk int, item_pk int (combined primary key)

    Because the primary key is composed of both PKs, a clustered index is created and the data is physically ordered in the table according to the combined keys. I have many users creating and deleting collections and adding and removing items in those collections very frequently, affecting the "Collection_Items" table and its clustered index. QUESTION PART: Since the "Collection_Items" table is so dynamic, wouldn't there be a big performance hit from constantly re-sorting the table rows because of the clustered index? If yes, what should I do to minimize this?
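
    A minimal sketch of the junction table as described, plus the secondary index such designs usually pair with it (the secondary index and the exact referenced column names are assumptions for illustration, not part of the question):

        CREATE TABLE dbo.Collection_Items
        (
            collection_pk int NOT NULL,
            item_pk       int NOT NULL,
            CONSTRAINT PK_Collection_Items
                PRIMARY KEY CLUSTERED (collection_pk, item_pk),   -- rows stored in this key order
            CONSTRAINT FK_CI_Collection FOREIGN KEY (collection_pk) REFERENCES dbo.Collection (collection_pk),
            CONSTRAINT FK_CI_Item       FOREIGN KEY (item_pk)       REFERENCES dbo.Item (item_pk)
        );

        -- Lookups from the item side would otherwise have to scan the clustered index.
        CREATE NONCLUSTERED INDEX IX_Collection_Items_Item
            ON dbo.Collection_Items (item_pk, collection_pk);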

    Read the article

  • C# SQL Data Adapter Fill on existing typed Dataset

    - by René
    I have an option to choose between local data storage (an XML file) or SQL Server based storage. A long time ago I created a typed dataset for my application to save data locally in the XML file. Now I have a bool that switches between the server-based version and the local version; if true, my application gets the data from SQL Server. I'm not sure, but it seems that the SqlDataAdapter's Fill method can't fill the data into my existing schema:

        SqlCommand cmd = new SqlCommand("Select * FROM dbo.Categories WHERE CatUserId = 1", _connection);
        cmd.CommandType = CommandType.Text;
        _sqlAdapter = new SqlDataAdapter(cmd);
        _sqlAdapter.TableMappings.Add("Categories", "dbo.Categories");
        _sqlAdapter.Fill(Program.Dataset);

    This should fill my data from dbo.Categories into Categories (in my local, typed dataset), but it doesn't. It creates a new table with the name "Table". It looks like it can't handle the existing schema. I can't figure it out. Where is the problem? btw. of course the database request I do isn't very useful that way. It's just a simplified version for testing...

    Read the article

  • Query performs poorly unless a temp table is used

    - by Paul McLoughlin
    The following query takes about 1 minute to run, and has the following IO statistics:

        SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits
        FROM dbo.TRANS AS T
        JOIN dbo.TRANS AS T2
          ON T2.RGN=T.RGN AND T2.CD=T.CD AND T2.FUND_CD=T.FUND_CD AND T2.TRDT<=T.TRDT
        JOIN TASK_REQUESTS AS T3
          ON T3.CD=T.CD AND T3.RGN=T.RGN AND T3.TASK = 'UPDATE_MEM_BAL'
        GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT

        (4447 row(s) affected)
        Table 'TRANSACTIONS'. Scan count 5977, logical reads 7527408, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'TASK_REQUESTS'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        SQL Server Execution Times: CPU time = 58157 ms, elapsed time = 61437 ms.

    If I instead introduce a temporary table, then the query returns quickly and performs fewer logical reads:

        CREATE TABLE #MyTable(RGN VARCHAR(20) NOT NULL, CD VARCHAR(20) NOT NULL, PRIMARY KEY([RGN],[CD]));
        INSERT INTO #MyTable(RGN, CD)
        SELECT RGN, CD FROM TASK_REQUESTS WHERE TASK='UPDATE_MEM_BAL';

        SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits
        FROM dbo.TRANS AS T
        JOIN dbo.TRANS AS T2
          ON T2.RGN=T.RGN AND T2.CD=T.CD AND T2.FUND_CD=T.FUND_CD AND T2.TRDT<=T.TRDT
        JOIN #MyTable AS T3
          ON T3.CD=T.CD AND T3.RGN=T.RGN
        GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT

        (4447 row(s) affected)
        Table 'Worktable'. Scan count 5974, logical reads 382339, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'TRANSACTIONS'. Scan count 4, logical reads 4547, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#MyTable________________________________________________________________000000000013'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        SQL Server Execution Times: CPU time = 1420 ms, elapsed time = 1515 ms.

    The interesting thing for me is that TASK_REQUESTS is a small table (3 rows at present) and statistics are up to date on the table. Any idea why such different execution plans and execution times are occurring? And ideally, how can I change things so that I don't need the temp table to get decent performance? The only real difference in the execution plans is that the temp table version introduces an index spool (eager spool) operation.
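
    One direction sometimes tried before resorting to the temp table, shown here only as a hedged sketch (it may or may not produce the better plan on this data): keep the query as written but pin the join order, so the tiny TASK_REQUESTS table is processed first, the way the temp-table version effectively arranges:

        SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits
        FROM TASK_REQUESTS AS T3
        JOIN dbo.TRANS AS T
          ON T.CD = T3.CD AND T.RGN = T3.RGN
        JOIN dbo.TRANS AS T2
          ON T2.RGN = T.RGN AND T2.CD = T.CD AND T2.FUND_CD = T.FUND_CD AND T2.TRDT <= T.TRDT
        WHERE T3.TASK = 'UPDATE_MEM_BAL'
        GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT
        OPTION (FORCE ORDER);   -- join order follows the FROM clause order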

    Read the article
