Search Results

Search found 24675 results on 987 pages for 'table'.


  • (mySQL) Unable to query 2 tables properly for data

    - by Devner
    I have 2 tables. One is 'page_links' and the other is 'rpp'. Table page_links is the superset of table rpp. The following is the schema of my tables: -- Table structure for table `page_links` -- CREATE TABLE IF NOT EXISTS `page_links` ( `page` varchar(255) NOT NULL, `page_link` varchar(100) NOT NULL, `heading_id` tinyint(3) unsigned NOT NULL, PRIMARY KEY (`page`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -- Dumping data for table `page_links` -- INSERT INTO `page_links` (`page`, `page_link`, `heading_id`) VALUES ('a1.php', 'A1', 8), ('b1.php', 'B1', 8), ('c1.php', 'C1', 5), ('d1.php', 'D1', 5), ('e1.php', 'E1', 8), ('f1.php', 'F1', 8), ('g1.php', 'G1', 8), ('h1.php', 'H1', 1), ('i1.php', 'I1', 1), ('j1.php', 'J1', 8), ('k1.php', 'K1', 8), ('l1.php', 'L1', 8), ('m1.php', 'M1', 8), ('n1.php', 'N1', 8), ('o1.php', 'O1', 8), ('p1.php', 'P1', 4), ('q1.php', 'Q1', 5), ('r1.php', 'R1', 4); -- Table structure for table `rpp` -- CREATE TABLE IF NOT EXISTS `rpp` ( `role_id` tinyint(3) unsigned NOT NULL, `page` varchar(255) NOT NULL, `is_allowed` tinyint(1) NOT NULL, PRIMARY KEY (`role_id`,`page`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -- Dumping data for table `rpp` -- INSERT INTO `rpp` (`role_id`, `page`, `is_allowed`) VALUES (3, 'a1.php', 1), (3, 'b1.php', 1), (3, 'c1.php', 1), (3, 'd1.php', 1), (3, 'e1.php', 1), (3, 'f1.php', 1), (3, 'h1.php', 1), (3, 'i1.php', 1), (3, 'l1.php', 1), (3, 'm1.php', 1), (3, 'n1.php', 1), (4, 'a1.php', 1), (4, 'b1.php', 1), (4, 'q1.php', 1), (5, 'r1.php', 1); WHAT I AM TRYING TO DO: I am trying to query both the above tables (in a single query) in such a way that all the pages from page_links are displayed along with the is_allowed value from rpp for a particular role. For example, I want to get the is_allowed value of all the pages from rpp for role_id = 3 and at the same time, list all the available pages from page_links. A clear example of my expected result would be: page is_allowed role_id ---------------------------------------- a1.php 1 3 b1.php 1 3 c1.php 1 3 d1.php 1 3 e1.php 1 3 f1.php 1 3 g1.php NULL NULL h1.php 1 3 i1.php 1 3 j1.php NULL NULL k1.php NULL NULL l1.php 1 3 m1.php 1 3 n1.php 1 3 o1.php NULL NULL p1.php NULL NULL q1.php NULL NULL r1.php NULL NULL One more example of my desired result could be achieved by doing a LEFT JOIN rpp ON page_links.page = rpp.page but we need to omit using role_id = 3 (or any value) to be able to get that. But I do want to specify the role_id as well and get the results. I need the query to be able to get this result. I would appreciate any replies that could help me with this. If you can suggest me any changes as well to the table(s) design to be able to achieve the desired result, that's good as well. Thanks in advance.
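    One way to get the expected output described above is to keep the LEFT JOIN but move the role filter into the join condition rather than the WHERE clause, so unmatched pages still come back with NULLs. A minimal sketch against the tables shown (role_id = 3 is just the example value from the question):

        SELECT pl.page, rpp.is_allowed, rpp.role_id
        FROM page_links pl
        LEFT JOIN rpp
               ON rpp.page = pl.page
              AND rpp.role_id = 3   -- filtering here, not in WHERE, keeps the unmatched rows
        ORDER BY pl.page;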

    Read the article

  • How to implement best matching logic in TSQL (SQL Server 2000)

    - by sanjay-kumar1911
    I have two tables X and Y: Table X C1 C2 C3 1 A 13 2 B 16 3 C 8 Table Y C1 C2 C3 C4 1 A 2 N 2 A 8 N 3 A 12 N 4 A 5 N 5 B 7 N 6 B 16 N 7 B 9 N 8 B 5 N 9 C 8 N 10 C 2 N 11 C 8 N 12 C 6 N Records in Table Y can be n number CREATE TABLE X(C1 INT, C2 CHAR(1), C3 INT); CREATE TABLE Y(C1 INT, C2 CHAR(1), C3 INT, C4 CHAR(1)); with following data: INSERT INTO X VALUES (1 'A',13 ); INSERT INTO X VALUES (2 'B',16 ); INSERT INTO X VALUES (3 'C',8 ); INSERT INTO Y VALUES (1,'A', 2,'N'); INSERT INTO Y VALUES (2,'A', 8,'N'); INSERT INTO Y VALUES (3,'A', 12,'N'); INSERT INTO Y VALUES (4,'A', 5,'N'); INSERT INTO Y VALUES (5,'B', 7,'N'); INSERT INTO Y VALUES (6,'B', 16,'N'); INSERT INTO Y VALUES (7,'B', 9,'N'); INSERT INTO Y VALUES (8,'B', 5,'N'); INSERT INTO Y VALUES (9,'C', 8,'N'); INSERT INTO Y VALUES (10,'C', 2,'N'); INSERT INTO Y VALUES (11,'C', 8,'N'); INSERT INTO Y VALUES (12,'C', 6,'N'); EXPECTED RESULT Table Y C1 C2 C3 C4 1 A 2 N 2 A 8 Y 3 A 12 N 4 A 5 Y 5 B 7 N 6 B 16 Y 7 B 9 N 8 B 5 N 9 C 8 Y 10 C 2 N 11 C 8 N 12 C 6 N How do I compare value of column C3 in Table X with all possible matches of column C3 of Table Y and to mark records as matched and unmatched in column C4 of Table Y? Possible matches for A (i.e. value of column C2 in Table X) would be (where R is row number i.e. value of column C1 in Table Y): R1, R2, R3, R4, R1+R2, R1+R3, R1+R4, R2+R3, R2+R4, R3+R4, R4+R5, R1+R2+R3, R1+R2+R4, R2+R3+R4, R1+R2+R3+R4

    Read the article

  • Disregard a particular TD element

    - by RussellDias
    Below I have the code that allows me to edit a table row inline. However it edits ALL of the TDs within that row. My problem, along with the code, are stated below. Any help is appreciated. <tbody> <tr> <th scope="row">Test</th> <td class="amount">$124</td> <td class="amount" id="" >$154</td> <td class="diff">- 754</td> </tr> </tbody> The above table is just a sample. What I have been trying to accomplish is, to simply edit the TDs within that particular row, but I need it to disregard the diff TD. I'm fairly new to jQuery and have got the following code via the help of a jQuery book. $(document).ready(function() { TABLE.formwork('#current-expenses'); }); var TABLE = {}; TABLE.formwork = function(table){ var $tables = $(table); $tables.each(function () { var _table = $(this); _table.find('thead tr').append($('<th class="edit">&nbsp;</th>')); _table.find('tbody tr').append($('<td class="edit"><input type="button" value="Edit"/></td>')) }); $tables.find('.edit :button').live('click', function(e) { TABLE.editable(this); e.preventDefault(); }); } TABLE.editable = function(button) { var $button = $(button); var $row = $button.parents('tbody tr'); var $cells = $row.children('td').not('.edit'); if($row.data('flag')) { // in edit mode, move back to table // cell methods $cells.each(function () { var _cell = $(this); _cell.html(_cell.find('input').val()); }) $row.data('flag',false); $button.val('Edit'); } else { // in table mode, move to edit mode // cell methods $cells.each(function() { var _cell = $(this); _cell.data('text', _cell.html()).html(''); if($('td.diff')){ var $input = $('<input type="text" />') .val(_cell.data('text')) .width(_cell.width() - 16); _cell.append($input); } }) $row.data('flag', true); $button.val('Save'); } } I have attempted to alter the code so that it would disregard the diff class TD, but have had no luck so far.

    Read the article

  • IF statement within WHILE not working

    - by Ds.109
    I am working on a basic messaging system. This is to get all the messages and to make the row of the table that has an unread message Green. In the table, there is a column called 'msgread'. this is set to '0' by default. Therefore it should make any row with the msgread = 0 - green. this is only working for the first row of the table with the code i have - i verified that it is always getting a 0 value, however it only works the first time through in the while statement .. require('./connect.php'); $getmessages = "SELECT * FROM messages WHERE toperson = '" . $userid . "'"; echo $getmessages; $messages = mysql_query($getmessages); if(mysql_num_rows($messages) != 0) { $table = "<table><tr><th>From</th><th>Subject</th><th>Message</th></tr>"; while($results = mysql_fetch_array($messages)) { if(strlen($results[message]) < 30){ $message = $results[message]; } else { $message = substr($results[message], 0 ,30) . "..."; } if($results[msgread] == 0){ $table .= "<tr style='background:#9CFFB6'>"; $table .= "<td>" . $results[from] . "</td><td>" . $results[subject] . "</td><td><a href='viewmessage.php?id=" . $results[message_id] ."'>" . $message . "</a></td></tr>"; } else { $table .= "<tr>"; $table .= "<td>" . $results[from] . "</td><td>" . $results[subject] . "</td><td><a href='viewmessage.php?id=" . $results[message_id] ."'>" . $message . "</a></td></tr>"; } } echo $table ."</table>"; } else { echo "No Messages Found"; } There's all the code, including grabbing the info from the database. Thanks.

    Read the article

  • Something for the weekend - What's the most complex query?

    - by simonsabin
    Whenever I teach about SQL Server performance tuning I try can get across the message that there is no such thing as a table. Does that sound odd, well it isn't, trust me. Rather than tables you need to consider structures. You have 1. Heaps 2. Indexes (b-trees) Some people split indexes in two, clustered and non-clustered, this I feel confuses the situation as people associate clustered indexes with sorting, but don't associate non clustered indexes with sorting, this is wrong. Clustered and non-clustered indexes are the same b-tree structure(and even more so with SQL 2005) with the leaf pages sorted in a linked list according to the keys of the index.. The difference is that non clustered indexes include in their structure either, the clustered key(s), or the row identifier for the row in the table (see http://sqlblog.com/blogs/kalen_delaney/archive/2008/03/16/nonclustered-index-keys.aspx for more details). Beyond that they are the same, they have key columns which are stored on the root and intermediary pages, and included columns which are on the leaf level. The reason this is important is that this is how the optimiser sees the world, this means it can use any of these structures to resolve your query. Even if your query only accesses one table, the optimiser can access multiple structures to get your results. One commonly sees this with a non-clustered index scan and then a key lookup (clustered index seek), but importantly it's not restricted to just using one non-clustered index and the clustered index or heap, and that's the challenge for the weekend. So the challenge for the weekend is to produce the most complex single table query. For those clever bods amongst you that are thinking, great I will just use lots of xquery functions, sorry these are the rules. 1. You have to use a table from AdventureWorks (2005 or 2008) 2. You can add whatever indexes you like, but you must document these 3. You cannot use XQuery, Spatial, HierarchyId, Full Text or any open rowset function. 4. You can only reference your table once, i..e a FROM clause with ONE table and no JOINs 5. No Sub queries. The aim of this is to show how the optimiser can use multiple structures to build the results of a query and to also highlight why the optimiser is doing that. How many structures can you get the optimiser to use? As an example create these two indexes on AdventureWorks2008 create index IX_Person_Person on Person.Person (lastName, FirstName,NameStyle,PersonType) create index IX_Person_Person on Person.Person(BusinessentityId,ModifiedDate)with drop_existing    select lastName, ModifiedDate   from Person.Person  where LastName = 'Smith' You will see that the optimiser has decided to not access the underlying clustered index of the table but to use two indexes above to resolve the query. This highlights how the optimiser considers all storage structures, clustered indexes, non clustered indexes and heaps when trying to resolve a query. So are you up to the challenge for the weekend to produce the most complex single table query? The prize is a pdf version of a popular SQL Server book, or a physical book if you live in the UK.  

    Read the article

  • Oracle SQL Developer Data Modeler: What Tables Aren’t In At Least One SubView?

    - by thatjeffsmith
    Organizing your data model makes the information easier to consume. One of the organizational tools provided by Oracle SQL Developer Data Modeler is the ‘SubView.’ In a nutshell, a SubView is a subset of your model. The Challenge: I’ve just created a model which represents my entire ____________ application. We’ll call it ‘residential lending.’ Instead of having all 100+ tables in a single model diagram, I want to break out the tables by module, e.g. appraisals, credit reports, work histories, customers, etc. I’ve spent several hours breaking out the tables to one or more SubViews, but I think i may have missed a few. Is there an easy way to see what tables aren’t in at least ONE subview? The Answer Yes, mostly. The mostly comes about from the way I’m going to accomplish this task. It involves querying the SQL Developer Data Modeler Reporting Schema. So if you don’t have the Reporting Schema setup, you’ll need to do so. Got it? Good, let’s proceed. Before you start querying your Reporting Schema, you might need a data model for the actual reporting schema…meta-meta data! You could reverse engineer the data modeler reporting schema to a new data model, or you could just reference the PDFs in \datamodeler\reports\Reporting Schema diagrams directory. Here’s a hint, it’s THIS one The Query Well, it’s actually going to be at least 2 queries. We need to get a list of distinct designs stored in your repository. For giggles, I’m going to get a listing including each version of the model. So I can query based on design and version, or in this case, timestamp of when it was added to the repository. We’ll get that from the DMRS_DESIGNS table: SELECT DISTINCT design_name, design_ovid, date_published FROM DMRS_designs Then I’m going to feed the design_ovid, down to a subquery for my child report. select name, count(distinct diagram_id) from DMRS_DIAGRAM_ELEMENTS where design_ovid = :dESIGN_OVID and type = 'Table' group by name having count(distinct diagram_id) < 2 order by count(distinct diagram_id) desc Each diagram element has an entry in this table, so I need to filter on type=’Table.’ Each design has AT LEAST one diagram, the master diagram. So any relational table in this table, only having one listing means it’s not in any SubViews. If you have overloaded object names, which is VERY possible, you’ll want to do the report off of ‘OBJECT_ID’, but then you’ll need to correlate that to the NAME, as I doubt you’re so intimate with your designs that you recognize the GUIDs So I’m going to cheat and just stick with names, but I think you get the gist. My Model Of my almost 90 tables, how many of those have I not added to at least one SubView? Now let’s run my report! Voila! My ‘BEER2′ table isn’t in any SubView! It says ’1′ because the main model diagram counts as a view. So if the count came back as ’2′, that would mean the table was in the main model diagram and in 1 SubView diagram. And I know what you’re thinking, what kind of residential lending program would have a table called ‘BEER2?’ Let’s just say, that my business model has some kinks to work out!
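    If you prefer a single statement, the two reporting-schema queries above can be combined with a join, assuming the column names shown in the reporting-schema diagrams (DMRS_DESIGNS.DESIGN_OVID matching DMRS_DIAGRAM_ELEMENTS.DESIGN_OVID). This is only a sketch; overloaded object names would still need the OBJECT_ID-based variant mentioned above:

        SELECT d.design_name, e.name AS table_name, COUNT(DISTINCT e.diagram_id) AS diagram_count
        FROM dmrs_designs d
        JOIN dmrs_diagram_elements e ON e.design_ovid = d.design_ovid
        WHERE e.type = 'Table'
        GROUP BY d.design_name, e.name
        HAVING COUNT(DISTINCT e.diagram_id) < 2
        ORDER BY d.design_name, e.name;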

    Read the article

  • SQL SERVER – How to Ignore Columnstore Index Usage in Query

    - by pinaldave
    Earlier I wrote about SQL SERVER – Fundamentals of Columnstore Index and very first question I received in email was as following. “We are using SQL Server 2012 CTP3 and so far so good. In our data warehouse solution we have created 1 non-clustered columnstore index on our large fact table. We have very unique situation but your article did not cover it. We are running few queries on our fact table which is working very efficiently but there is one query which earlier was running very fine but after creating this non-clustered columnstore index this query is running very slow. We dropped the columnstore index and suddenly this one query is running fast but other queries which were benefited by this columnstore index it is running slow. Any workaround in this situation?” In summary the question in simple words “How can we ignore using columnstore index in selective queries?” Very interesting question – you can use I can understand there may be the cases when columnstore index is not ideal and needs to be ignored the same. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the columnstore index. SQL Server Engine will use any other index which is best after ignoring the columnstore index. Here is the quick script to prove the same. We will first create sample database and then create columnstore index on the same. Once columnstore index is created we will write simple query. This query will use columnstore index. We will then show the usage of the query hint. USE AdventureWorks GO -- Create New Table CREATE TABLE [dbo].[MySalesOrderDetail]( [SalesOrderID] [int] NOT NULL, [SalesOrderDetailID] [int] NOT NULL, [CarrierTrackingNumber] [nvarchar](25) NULL, [OrderQty] [smallint] NOT NULL, [ProductID] [int] NOT NULL, [SpecialOfferID] [int] NOT NULL, [UnitPrice] [money] NOT NULL, [UnitPriceDiscount] [money] NOT NULL, [LineTotal] [numeric](38, 6) NOT NULL, [rowguid] [uniqueidentifier] NOT NULL, [ModifiedDate] [datetime] NOT NULL ) ON [PRIMARY] GO -- Create clustered index CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail] ( [SalesOrderDetailID]) GO -- Create Sample Data Table -- WARNING: This Query may run upto 2-10 minutes based on your systems resources INSERT INTO [dbo].[MySalesOrderDetail] SELECT S1.* FROM Sales.SalesOrderDetail S1 GO 100 -- Create ColumnStore Index CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore] ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID) GO Now we have created columnstore index so if we run following query it will use for sure the same index. -- Select Table with regular Index SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID GO We can specify Query Hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX as described in following query and it will not use columnstore index. -- Select Table with regular Index SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX) GO Let us clean up the database. -- Cleanup DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] GO TRUNCATE TABLE dbo.MySalesOrderDetail GO DROP TABLE dbo.MySalesOrderDetail GO Again, make sure that you use hint sparingly and understanding the proper implication of the same. 
Make sure that you test it with and without the hint and select the best option after reviewing it with your administrator. Here is the question for you – have you started to use SQL Server 2012 for your validation and development (not on production)? It will be interesting to know the answer. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • A temporary disagreement

    - by Tony Davis
    Last month, Phil Factor caused a furore amongst some MVPs with an article that attempted to offer simple advice to developers regarding the use of table variables, versus local and global temporary tables, in their code. Phil makes clear that the table variables do come with some fairly major limitations.no distribution statistics, no parallel query plans for queries that modify table variables.but goes on to suggest that for reasonably small-scale strategic uses, and with a bit of due care and testing, table variables are a "good thing". Not everyone shares his opinion; in fact, I imagine he was rather aghast to learn that there were those felt his article was akin to pulling the pin out of a grenade and tossing it into the database; table variables should be avoided in almost all cases, according to their advice, in favour of temp tables. In other words, a fairly major feature of SQL Server should be more-or-less 'off limits' to developers. The problem with temp tables is that, because they are scoped either in the procedure or the connection, it is easy to allow them to hang around for too long, eating up precious memory and bulking up the shared tempdb database. Unless they are explicitly dropped, global temporary tables, and local temporary tables created within a connection rather than within a stored procedure, will persist until the connection is closed or, with connection pooling, until the connection is reused. It's also quite common with ASP.NET applications to have connection leaks, as Bill Vaughn explains in his chapter in the "SQL Server Deep Dives" book, meaning that the web page exits without closing the connection object, maybe due to an error condition. This will then hang around in the heap for what might be hours before picked up by the garbage collector. Table variables are much safer in this regard, since they are batch-scoped and so are cleaned up automatically once the batch is complete, which also means that they are intuitive to use for the developer because they conform to scoping rules that are closer to those in procedural code. On the surface then, an ideal way to deal with issues related to tempdb memory hogging. So why did Phil qualify his recommendation to use Table Variables? This is another of those cases where, like scalar UDFs and table-valued multi-statement UDFs, developers can sometimes get into trouble with a relatively benign-looking feature, due to way it's been implemented in SQL Server. Once again the biggest problem is how they are handled internally, by the SQL Server query optimizer, which can make very poor choices for JOIN orders and so on, in the absence of statistics, especially when joining to tables with highly-skewed data. The resulting execution plans can be horrible, as will be the resulting performance. If the JOIN is to a large table, that will hurt. Ideally, Microsoft would simply fix this issue so that developers can't get burned in this way; they've been around since SQL Server 2000, so Microsoft has had a bit of time to get it right. As I commented in regard to UDFs, when developers discover issues like with such standard features, the database becomes an alien planet to them, where death lurks around each corner, and they continue to avoid these "killer" features years after the problems have been eventually resolved. In the meantime, what is the right approach? 
Is it to say "hammers can kill, don't ever use hammers", or is it to try to explain, as Phil's article and follow-up blog post have tried to do, what the feature was intended for, why care must be applied in its use, and so enable developers to make properly-informed decisions, without requiring them to delve deep into the inner workings of SQL Server? Cheers, Tony.
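    For readers following the temp-table-versus-table-variable discussion above, a minimal T-SQL sketch of the two scoping models (object names are illustrative only):

        -- Table variable: batch-scoped, cleaned up automatically when the batch ends,
        -- but the optimizer has no distribution statistics on it.
        DECLARE @RecentOrders TABLE (OrderID int PRIMARY KEY, OrderDate datetime);

        -- Local temporary table: visible to the creating procedure/connection and,
        -- unless explicitly dropped, it persists in tempdb until the connection is
        -- closed or reused; it does get statistics.
        CREATE TABLE #RecentOrders (OrderID int PRIMARY KEY, OrderDate datetime);
        DROP TABLE #RecentOrders;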

    Read the article

  • GoldenGate 12c - MySQL Active-Active Replication Setup

    - by Jinyu Wang-Oracle
    Active-active  (also called Master-Master or Bi-Directional) replication captures data changes from two or more systems and replicat the changes to synchronize the data.  Active-Active replication is often needed for high availability, load balancing and scaling out purposes.   Oracle GoldenGate is known to be one of the first and the best replication tool handling active-active replications. As of Oracle GoldenGate 12c, it provides (Refer to Oracle GoldenGate 12.1.2 Documentation - Configuring Oracle GoldenGate for Active-Active High Availability for more information) the followings: Robust loop-back prevention Comprehensive conflict resolution and detection support Heterogeneous support across different database versions and operation systems.  Oracle GoldenGate supports active-active configurations for DB2 on z/OS, LUW, and IBM i, MySQL, Oracle, SQL/MX,SQL Server, Sybase, and Teradata. However, the setup is different from database to database. In this example, I will show you how to setup an active-active data replication between two MySQL database instances. The example setup below is to have active-active replication between MySQL 5.5 and MySQL 5.6 instances and is shown as follows: MySQL 5.5 (Manager Port: 15105)  Extract EXTRACT demoex01 SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock') DBOPTIONS CONNECTIONPORT 3305 DBOPTIONS HOST oraclelinux6.localdomain SOURCEDB test USERID root, PASSWORD mysql EXTTRAIL ./dirdat/extract/de TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl REPORTROLLOVER AT 05:30 ON saturday TABLE test.TCUSTMER; TABLE test.TCUSTORD; Pump EXTRACT demopm01 RMTHOST localhost, MGRPORT 15106, COMPRESS, TIMEOUT 30 RMTTRAIL ./dirdat/replicat/ps PASSTHRU TABLE test.TCUSTMER; TABLE test.TCUSTORD; Replicat replicat demorp01 setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock') dboptions host oraclelinux6.localdomain, connectionport 3305 targetdb test, userid root, password mysql sourcedefs ./dirdat/replicat/democust.def discardfile ./dirrpt/demprp01.dsc, purge REPERROR (DEFAULT, ABEND) REPERROR(1062, IGNORE) map test.TCUSTMER, target test.TCUSTMER,colmap(usedefaults, region_code="region code"); map test.TCUSTORD, target test.TCUSTORD; MySQL 5.6 (Manager Port: 15106) Replicat replicat demorp01 setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock') dboptions host oraclelinux6.localdomain, connectionport 3306 targetdb test, userid root, password mysql --assumetargetdefs sourcedefs ./dirdat/replicat/democust.def discardfile ./dirrpt/demprp01.dsc, purge map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, "region code"=region_code); map test.TCUSTORD, target test.TCUSTORD; Extract EXTRACT demoex01 SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock') DBOPTIONS CONNECTIONPORT 3306 DBOPTIONS HOST oraclelinux6.localdomain SOURCEDB test USERID root, USERID mysql EXTTRAIL ./dirdat/extract/de TRANLOGOPTIONS ALTLOGDEST "/usr/local/mysql56/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl TABLE test.TCUSTMER; TABLE test.TCUSTORD; Pump EXTRACT demopm01 RMTHOST localhost, MGRPORT 15105, COMPRESS, TIMEOUT 30 RMTTRAIL ./dirdat/replicat/ps PASSTHRU TABLE test.TCUSTMER; TABLE test.TCUSTORD; The setup parameters are quite self-explanatory. The key setup is to avoid the replication data  looping. 
Oracle GoldenGate for MySQL uses the information in the replication checkpoint table to identify the transactions already applied by the replicats, so that the Oracle GoldenGate extracts avoid capturing those transactions again. The example setup in the extract on the MySQL 5.5 instance is shown as follows.  TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl Setting up an active-active replication is often more complicated than this and requires further considerations, which I will elaborate on in follow-up discussions. 

    Read the article

  • How can I add a link with a UID from a User Reference from a table in Drupal Views

    - by pduncan
    I have the following in Drupal 6: A Master CCK type which contains a User reference field and other fields. There will only be one record per user here. A View of this CCK, shown as a table, with one of the fields being the user ref from the CCK type. This field is initially shown as a user name, linking to the user profile. A Second CCK type which can have several pieces of data about a particular user. A View for this CCK type, displaying information as a table. It takes a user id as an argument (an integer) I want to click on the user name in the master view, and be directed to the detail view for this user. To do this, I tried selecting 'Output this field as a link' on the user field. The thing available for me to replace are: Fields * [field_my_user_ref_uid_1] == Content: User (field_my_user_ref) Arguments * %1 == User: Uid However, the [field_my_user_ref_uid_1] element is replaced by the user name, and %1 seems to get replaced with an empty string. How can I put the user id in here?

    Read the article

  • Cascading Deletes in SQL Server 2008 not working.

    - by Vaccano
    I have the following table setup. Bag | +-> BagID (Guid) +-> BagNumber (Int) BagCommentRelation | +-> BagID (Int) +-> CommentID (Guid) BagComment | +-> CommentID (Guid) +-> Text (varchar(200)) BagCommentRelation has Foreign Keys to Bag and BagComment. So, I turned on cascading deletes for both those Foreign Keys, but when I delete a bag, it does not delete the Comment row. Do need to break out a trigger for this? Or am I missing something? (I am using SQL Server 2008) Note: Posting requested SQL. This is the defintion of the BagCommentRelation table. (I had the type of the bagID wrong (I thought it was a guid but it is an int).) CREATE TABLE [dbo].[Bag_CommentRelation]( [Id] [int] IDENTITY(1,1) NOT NULL, [BagId] [int] NOT NULL, [Sequence] [int] NOT NULL, [CommentId] [int] NOT NULL, CONSTRAINT [PK_Bag_CommentRelation] PRIMARY KEY CLUSTERED ( [BagId] ASC, [Sequence] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO ALTER TABLE [dbo].[Bag_CommentRelation] WITH CHECK ADD CONSTRAINT [FK_Bag_CommentRelation_Bag] FOREIGN KEY([BagId]) REFERENCES [dbo].[Bag] ([Id]) ON DELETE CASCADE GO ALTER TABLE [dbo].[Bag_CommentRelation] CHECK CONSTRAINT [FK_Bag_CommentRelation_Bag] GO ALTER TABLE [dbo].[Bag_CommentRelation] WITH CHECK ADD CONSTRAINT [FK_Bag_CommentRelation_Comment] FOREIGN KEY([CommentId]) REFERENCES [dbo].[Comment] ([CommentId]) ON DELETE CASCADE GO ALTER TABLE [dbo].[Bag_CommentRelation] CHECK CONSTRAINT [FK_Bag_CommentRelation_Comment] GO The row in this table deletes but the row in the comment table does not.
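    The cascade as defined only removes Bag_CommentRelation rows when a Bag is deleted; it never touches the Comment table. If a trigger were used, as the question wonders, one possible shape for it (a sketch only, deleting comments that no longer have any relation rows) might be:

        -- Hedged sketch: after relation rows are removed, delete any referenced
        -- comments that are no longer linked to any bag.
        CREATE TRIGGER TR_Bag_CommentRelation_DeleteComments
        ON dbo.Bag_CommentRelation
        AFTER DELETE
        AS
        BEGIN
            DELETE c
            FROM dbo.Comment AS c
            INNER JOIN deleted AS d ON d.CommentId = c.CommentId
            WHERE NOT EXISTS (SELECT 1
                              FROM dbo.Bag_CommentRelation AS r
                              WHERE r.CommentId = c.CommentId);
        END
        GO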

    Read the article

  • Problem after adding <form>?

    - by Mahmoud
    When i added <form> to my web page, all my javascript stopped working, and when i put the form at the begining of my table submit wont work, what i am doing wrong. below is my code <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Sabay Afrah.Inc | Contact Us</title> <script src="js/clear.js" language="javascript" type="text/javascript"></script> <script src="js/SpryValidationSelect.js" type="text/javascript"></script> <script type="text/jscript"> function Checking(form){ if(empty(form.fname.value){ alert("do nothing"); } } </script> <style type="text/css"> <!-- body { background-color: #000; } body,td,th { color: #FFF; font-size: 14px; } .address { font-family: "Comic Sans MS", cursive; font-weight: bold; } --> </style> <link href="theme/style.css" rel="stylesheet" type="text/css" /> <link href="theme/SpryValidationSelect.css" rel="stylesheet" type="text/css" /> </head> <body> <form action="enterdb.php" method="post"> <table width="1000" border="0" align="center" cellpadding="0" cellspacing="0"> <tr> <td align="center">&nbsp;</td> </tr> <tr> <td><table width="1006" border="0" cellspacing="0" cellpadding="0"> <tr> <td width="4">&nbsp;</td> <td width="93" align="right">&nbsp;</td> <td width="4">&nbsp;</td> <td width="374" ><img src="images/logo.png" width="230" height="114" /></td> <td width="426" align="right" class="address"> 10 GlenLake parkway<br /> Suite 130, mailbox # 76<br /> Atlanta, GA 30328<br /> Phone #: + 678-222-3442<br /> Fax #: +678-222-3401<br /> Office hours: M-F 8:30 a.m. to 5:00 p.m.<br /> </td> <td width="99">&nbsp;</td> </tr> <tr> <td colspan="5"><table width="600" border="0" cellspacing="0" cellpadding="0"> <tr> <td>&nbsp;</td> <td class="title">&nbsp;</td> </tr> <tr> <td width="84"><br /></td> <td width="516" class="title">Contact Us</td> </tr> </table></td> <td>&nbsp;</td> </tr> </table></td> </tr> <tr> <td> <table width="883" border="0" align="center" cellpadding="0" cellspacing="0"> <tr class="table"> <td width="27" rowspan="10" bgcolor="#330099" class="textable">&nbsp;</td> <td colspan="2" bgcolor="#330099" class="textable">&nbsp;</td> <td width="29" rowspan="8" bgcolor="#330099" class="textable">&nbsp;</td> <td colspan="3" class="textable">&nbsp;</td> </tr> <tr > <td width="139" height="31" bgcolor="#330099" class="textable">First Name:</td> <td> <input id="fname" name="fname" type="text" size="40" /> </td> <td width="150" class="textable">Last Name:</td> <td width="265" class="textable"><table width="200" border="0" cellspacing="0" cellpadding="0"> <tr> <td ><table width="200" border="0" cellspacing="0" cellpadding="0"> <tr> <td ><input id="lname" name="lname" type="text" size="40" /></td> </tr> </table></td> </tr> </table></td> <td width="32" class="textable">&nbsp;</td> </tr> <tr> <td height="30" class="textable">Subject:</td> <td> <span id="spryselect1"> <label> <select name="sub" id="sub"> <option> Choose a Subject</option> <option> General Question</option> <option> MemberShip Area</option> <option> Others</option> </select> </label> <span class="selectRequiredMsg">Please select a Subject.</span></span> </td> <td colspan="3" class="textable">&nbsp;</td> </tr> <tr> <td height="33" class="textable">Company Name:</td> <td> <input id="cname" name="cname" type="text" size="40" /></td> <td class="textable">Company Address:</td> <td class="textable"><table 
width="200" border="0" cellspacing="0" cellpadding="0"> <tr> <td><input id="cadd" name="cadd" type="text" size="40" onclick="" /></td> </tr> </table></td> <td class="textable">&nbsp;</td> </tr> <tr> <td height="31" class="textable">Phone Number:</td> <td><input id="phonen" name="phonen" type="text" size="40" /> </td> <td colspan="3" rowspan="4" class="textable">&nbsp;</td> </tr> <tr> <td height="31" class="textable">Fax Number:</td><td> <input id="faxn" name="faxn" type="text" size="40" /></td> </tr> <tr> <td height="32" class="textable">Email Address:</td><td><input id="email" name="email" type="text" size="40" /></td> </tr> <tr> <td colspan="2" class="textable">&nbsp;</td> </tr> <tr> <td valign="top" class="textable">Additional Information:</td> <td colspan="5" class="textable"><table width="600" border="0" align="left" cellpadding="0" cellspacing="0"> <tr> <td colspan="2" align="center"> <textarea id="add" name="add" cols="70" rows="10" /></textarea> </td> </tr> <tr> <td align="center" class="textable"> <input name="Submit" type="submit" value="Submit" onclick="Checking()"/> </td> <td align="center" class="textable"> <input type="reset" value="Clear" /> </td> </tr> </table></td> </tr> <tr> <td colspan="6" class="textable">&nbsp;</td> </tr> </table></td> </tr> </table> </form> <script type="text/javascript"> <!-- var spryselect1 = new Spry.Widget.ValidationSelect("spryselect1"); //--> </script> </body> </html>

    Read the article

  • Database design for credit based purchases

    - by FreshCode
    I need an elegant way to implement credit-based purchases for an online store with a small variety of products which can be purchased using virtual credit or real currency. Alternatively, products could only be priced in credits. Previous work I have implemented credit-based purchasing before using different product types (eg. Credit, Voucher or Music) with post-order processing to assign purchased credit to users in the form of real currency, which could subsequently be used to discount future orders' charge totals. This worked fairly well as a makeshift solution, but did not succeed in disconnecting the virtual currency from the real currency, which is what I'd like to do, since spending credits is psychologically easier for customers than spending real currency. Design I need guidance on designing the database correctly with support for the simultaneous bulk purchase of credits at a discount along with real currency products. Alternatively, should all products be priced in credits and only credit have a real currency value? Existing Database Design Partial Products table: ProductId Title Type UnitPrice SalePrice Partial Orders table: OrderId UserId (related to Users table, not shown) Status Value Total Partial OrderItems table (similar to CartItems table): OrderItemId OrderId (related to Orders table) ProductId (related to Products table) Quantity UnitPrice SalePrice Prospective UserCredits table: CreditId UserId (related to Users table, not shown) Value (+/- value. Summed over time to determine saldo.) Date I'm using ASP.NET MVC and LINQ-to-SQL on a SQL Server database.
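    A hedged sketch of the prospective UserCredits table in T-SQL, with the balance derived by summing the signed Value column (the column types here are assumptions, not part of the original design):

        CREATE TABLE UserCredits (
            CreditId INT IDENTITY(1,1) PRIMARY KEY,
            UserId   INT NOT NULL,            -- references Users (not shown)
            Value    DECIMAL(10, 2) NOT NULL, -- positive for credit purchases, negative for spending
            [Date]   DATETIME NOT NULL DEFAULT GETDATE()
        );

        -- Current credit balance (saldo) per user
        SELECT UserId, SUM(Value) AS Balance
        FROM UserCredits
        GROUP BY UserId;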

    Read the article

  • Moses v1.0 multi language ini file

    - by Milan Kocic
    I was working with mosesserver 0.91 and everything works fine but now there is version 1.0 and nothing is same as before. Here is my situation: I want to have multi language translation from arabic to english and from english to arabic. All data and configuration file I have works with 0.91 version of mosesserver. Here is my config file: ------------------------------------------------- ######################### ### MOSES CONFIG FILE ### ######################### # D - decoding path, R - reordering model, L - language model [translation-systems] ar-en D 0 R 0 L 0 en-ar D 1 R 1 L 1 # input factors [input-factors] 0 # mapping steps [mapping] 0 T 0 1 T 1 # translation tables: table type (hierarchical(0), textual (0), binary (1)), source-factors, target-factors, number of scores, file # OLD FORMAT is still handled for back-compatibility # OLD FORMAT translation tables: source-factors, target-factors, number of scores, file # OLD FORMAT a binary table type (1) is assumed [ttable-file] 1 0 0 5 /mnt/models/ar-en/phrase-table/phrase-table 1 0 0 5 /mnt/models/en-ar/phrase-table/phrase-table # no generation models, no generation-file section # language models: type(srilm/irstlm), factors, order, file [lmodel-file] 1 0 5 /mnt/models/ar-en/language-model/en.qblm.mm 1 0 5 /mnt/models/en-ar/language-model/ar.lm.d1.blm.mm # limit on how many phrase translations e for each phrase f are loaded # 0 = all elements loaded [ttable-limit] 20 # distortion (reordering) files [distortion-file] 0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/ar-en/reordering-table/reordering-table.wbe-msd-bidirectional-fe.gz 0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/en-ar/reordering-model/reordering-table.wbe-msd-bidirectional-fe.gz # distortion (reordering) weight [weight-d] 0.3 0.3 # lexicalised distortion weights [weight-lr] 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.3 # language model weights [weight-l] 0.5000 0.5000 # translation model weights [weight-t] 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 # no generation models, no weight-generation section # word penalty [weight-w] -1 -1 [distortion-limit] 12 --------------------------------------------------------- So please can someone help me and rewrite this config file so it can work in version 1.0. And i need some python sample code of translation. I am using xmlrpc in python and earler I sent http request with: import xmlrpclib client = xmlrpclib.ServerProxy('http://localhost:8080') client.translate({'text': 'some text', 'system': 'en-ar'}) but now seems there is no more 'system' parameter and moses use always default settings.

    Read the article

  • How to build a GridView like DataBound Templated Custom ASP.NET Server Control

    - by Deyan
    I am trying to develop a very simple templated custom server control that resembles GridView. Basically, I want the control to be added in the .aspx page like this: <cc:SimpleGrid ID="SimpleGrid1" runat="server"> <TemplatedColumn>ID: <%# Eval("ID") %></ TemplatedColumn> <TemplatedColumn>Name: <%# Eval("Name") %></ TemplatedColumn> <TemplatedColumn>Age: <%# Eval("Age") %></ TemplatedColumn> </cc:SimpleGrid> and when providing the following datasource: DataTable table = new DataTable(); table.Columns.Add("ID", typeof(int)); table.Columns.Add("Name", typeof(string)); table.Columns.Add("Age", typeof(int)); table.Rows.Add(1, "Bob", 35); table.Rows.Add(2, "Sam", 44); table.Rows.Add(3, "Ann", 26); SimpleGrid1.DataSource = table; SimpleGrid1.DataBind(); the result should be a HTML table like this one. ------------------------------- | ID: 1 | Name: Bob | Age: 35 | ------------------------------- | ID: 2 | Name: Sam | Age: 44 | ------------------------------- | ID: 3 | Name: Ann | Age: 26 | ------------------------------- The main problem is that I cannot define the TemplatedColumn. When I tried to do it like this ... private ITemplate _TemplatedColumn; [PersistenceMode(PersistenceMode.InnerProperty)] [TemplateContainer(typeof(TemplatedColumnItem))] [TemplateInstance(TemplateInstance.Multiple)] public ITemplate TemplatedColumn { get { return _TemplatedColumn; } set { _TemplatedColumn = value; } } .. and subsequently instantiate the template in the CreateChildControls I got the following result which is not what I want: ----------- | Age: 35 | ----------- | Age: 44 | ----------- | Age: 26 | ----------- I know that what I want to achieve is pointless and that I can use DataGrid to achieve it, but I am giving this very simple example because if I know how to do this I would be able to develop the control that I need. Thank you.

    Read the article

  • What encoding does InstallShield expect non-latin-alphabet string table entries to use?

    - by DNS
    I work on an app that gets distributed via a single installer containing multiple localizations. The build process includes a script that updates the .ism string table with translations for each supported language. This works fine for languages like French and German. But when testing the installer in, i.e. Japanese, the text shows up as a series of squares. It's unlikely to be a font problem, since the InstallShield-supplied strings show up fine; only the string table entries are mangled. So the problem seems to be that the strings are in the wrong encoding. The .ism is in XML format, with UTF-8 declared as its encoding, so I assumed the strings needed to be UTF-8 encoded as well. Do they actually need to use the encoding of the target platform? Is there any concern, then, about targets having different encodings, i.e. Chinese systems using one GB-encoding versus another? What is the right thing to do here?

    Read the article

  • Postgres update rule returning number of rows affected

    - by Lithium
    I have a view table which is a union of two separate tables (say Table _A and Table _B). I need to be able to update a row in the view table, and it seems the way to do this is through a 'view rule'. All entries in the view table have separate ids, so an id that exists in table _A won't exist in table _B. I created the following rule: CREATE OR REPLACE RULE view_update AS ON UPDATE TO viewtable DO INSTEAD ( UPDATE _A SET foo = false WHERE old.id = _A.id; UPDATE _B SET foo = false WHERE old.id = _B.id; ); If I do an update on table _B it returns the correct number of rows affected (1). However, if I update table _A it returns (0) rows affected even though the data was changed. If I swap the order of the updates then the same thing happens, but in reverse. How can I solve this problem so that it returns the correct number of rows affected? Thanks.

    Read the article

  • XPath with optional tbody element

    - by Phrogz
    As in this Stack Overflow answer imagine that you need to select a particular table and then all the rows of it. Due to the permissiveness of HTML, all three of the following are legal markup: <table id="foo"><tr>...</tr></table> <table id="foo"><tbody><tr>...</tr></tbody></table> <table id="foo"><tr>...</tr><tbody><tr>...</tr></tbody></table> You are worried about tables nested in tables, and so don't want to use an XPath like table[@id="foo"]//tr. If you could specify your desired XPath as a regex, it might look something like: table[@id="foo"](/tbody)?/tr In general, how can you specify an XPath expression that allows an optional element in the hierarchy of a selector? To be clear, I'm not trying to solve a real-world problem or select a specific element of a specific document. I'm asking for techniques to solve a class of problems.

    Read the article

  • Doing large updates against indexed view

    - by user217136
    We have an indexed view that runs across three large tables. Two of these tables (A & B) are constantly getting updated with user transactions, and the other table (C) contains product info that needs to be updated once a week. This product table contains over 6 million records. We need this view across these three tables for our core business process and unfortunately we cannot change this aspect. We even had a SQL Server MVP come in to help test under load to make sure we have the most efficient configuration. There is one column in the product table that gets utilized in the view and has to be updated each week. The problem we are now encountering is that as volume increases on our transactions against tables A & B, the update to table C is causing deadlocks. I have tried several different methods to no avail: 1) I was hoping that we could change the view so that table C could be a dirty read "WITH (NOLOCK)", but apparently that functionality is not available with indexed views. 2) I thought about updating a new column in table C and then just renaming it when the process is done, but you cannot do that due to the dependency in the view. 3) I also entertained the idea of writing this value to a temporary product table and then running an ALTER statement against the view to have it point to my new table. However, when I did that the indexes on my view were dropped and it took quite a bit of time to recreate them. 4) We tried to do the weekly update in small chunks (as small as 100 records at a time) but we still run into deadlocks. Questions: a) We are using SQL Server 2005. Does SQL Server 2008 have new functionality with its indexed views that would help us? Is there now a way to do dirty reads with an indexed view? b) Is there a better approach to altering an existing view to point to a new table? Thanks!

    Read the article

  • Database design problem: intermediate table between 2 tables may end up with too many results.

    - by SK.
    I have to design a database to handle forms. Basically, a form needs to go through (exactly) 7 people, one by one. Each person can either agree or decline a form. If one declines, the chain stops and the following people don't even get notified that there is a form. Right now I have thought of these 3 tables: FORM, PERSON, and a RESPONSE table in between. However, my first solution sounds too heavy because each form could have up to 7 responses. In the first option we have the table in between, which means that each successful form has 7 rows in the RESPONSE table. In the second option we have the responding information directly inside the form. It looks ugly but at least keeps everything as singular as possible. On the bad side I can't track the response dates, but I don't think that is crucial. What is your opinion on this? I feel like both of them are wrong and I don't know how to fix that. If it matters, I'll be using Oracle 9.
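    A sketch of the first option in Oracle-style DDL, with the RESPONSE table holding at most seven rows per form; the names, types, and check constraint are illustrative assumptions, not part of the original design:

        CREATE TABLE response (
            form_id      NUMBER     NOT NULL REFERENCES form (form_id),
            person_id    NUMBER     NOT NULL REFERENCES person (person_id),
            step_no      NUMBER(1)  NOT NULL,           -- position 1 through 7 in the chain
            agreed       CHAR(1)    NOT NULL,           -- 'Y' or 'N'
            responded_on DATE       DEFAULT SYSDATE,
            CONSTRAINT pk_response PRIMARY KEY (form_id, step_no),
            CONSTRAINT ck_response_step CHECK (step_no BETWEEN 1 AND 7)
        );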

    Read the article

  • MySQL & PHP - Creating Multiple Parent Child Relations

    - by Ashok
    Hi, I'm trying to build a navigation system using a categories table with hierarchies. Normally, the table would be defined as follows: id (int) - Primary key name (varchar) - Name of the Category parentid (int) - Parent ID of this Category, referenced to the same table (self join). But the catch is that I require that a category can be a child of multiple parent categories, just like a Has And Belongs To Many (HABTM) relation. I know that if there are two tables, categories & items, we use a join table categories_items to list the HABTM relations. But here I don't have two tables, only one table that should somehow hold HABTM relations to itself. Is this possible using a single table? If yes, how? If not, what rules (table naming, fields) should I follow while creating the additional join table? I'm trying to achieve this using CakePHP; if someone can provide a CakePHP solution for this problem, that would be awesome. Even if that's not possible, any solution for creating the join table is appreciated. Thanks for your time.
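    One common way to model this in MySQL is a single categories table plus a join table that points back at it twice. The table and column names below are only illustrative, and CakePHP's exact naming convention for a self-joined HABTM association should be checked against its documentation:

        CREATE TABLE categories (
            id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(255) NOT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE categories_parents (
            category_id INT UNSIGNED NOT NULL,     -- the child category
            parent_id   INT UNSIGNED NOT NULL,     -- one of its parents
            PRIMARY KEY (category_id, parent_id),
            FOREIGN KEY (category_id) REFERENCES categories (id),
            FOREIGN KEY (parent_id)   REFERENCES categories (id)
        ) ENGINE=InnoDB;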

    Read the article

  • Updating nullability of columns in SQL 2008

    - by Shaul
    I have a very wide table, containing lots and lots of bit fields. These bit fields were originally set up as nullable. Now we've just made a decision that it doesn't make sense to have them nullable; the value is either Yes or No, default No. In other words, the schema should change from: create table MyTable( ID bigint not null, Name varchar(100) not null, BitField1 bit null, BitField2 bit null, ... BitFieldN bit null ) to create table MyTable( ID bigint not null, Name varchar(100) not null, BitField1 bit not null, BitField2 bit not null, ... BitFieldN bit not null ) alter table MyTable add constraint DF_BitField1 default 0 for BitField1 alter table MyTable add constraint DF_BitField2 default 0 for BitField2 alter table MyTable add constraint DF_BitField3 default 0 for BitField3 So I've just gone in through the SQL Management Studio, updating all these fields to non-nullable, default value 0. And guess what - when I try to update it, SQL Mgmt studio internally recreates the table and then tries to reinsert all the data into the new table... including the null values! Which of course generates an error, because it's explicitly trying to insert a null value into a non-nullable column. Aaargh! Obviously I could run N update statements of the form: update MyTable set BitField1 = 0 where BitField1 is null update MyTable set BitField2 = 0 where BitField2 is null but as I said before, there are n fields out there, and what's more, this change has to propagate out to several identical databases. Very painful to implement manually. Is there any way to make the table modification just ignore the null values and allow the default rule to kick in when you attempt to insert a null value?
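    A scripted alternative to the table designer, sketched per column, avoids the rebuild-and-reinsert problem entirely: backfill the NULLs first, then alter the column in place and add the default. The last statement shows one way to generate those commands for every nullable bit column, which helps when there are many fields and several identical databases (the table name is the example one from the question):

        UPDATE dbo.MyTable SET BitField1 = 0 WHERE BitField1 IS NULL;
        ALTER TABLE dbo.MyTable ALTER COLUMN BitField1 bit NOT NULL;
        ALTER TABLE dbo.MyTable ADD CONSTRAINT DF_BitField1 DEFAULT 0 FOR BitField1;

        -- Generate the same three statements for every nullable bit column:
        SELECT 'UPDATE dbo.MyTable SET ' + COLUMN_NAME + ' = 0 WHERE ' + COLUMN_NAME + ' IS NULL; ' +
               'ALTER TABLE dbo.MyTable ALTER COLUMN ' + COLUMN_NAME + ' bit NOT NULL; ' +
               'ALTER TABLE dbo.MyTable ADD CONSTRAINT DF_' + COLUMN_NAME + ' DEFAULT 0 FOR ' + COLUMN_NAME + ';'
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'MyTable' AND DATA_TYPE = 'bit' AND IS_NULLABLE = 'YES';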

    Read the article

  • Is there a tag in XHTML that you can put anywhere in the body - even inside TABLE elements?

    - by Iain Fraser
    I would like to be able to place an empty tag anywhere in my document as a marker that can be addressed by jQuery. However, it is important that the XHTML still validates. To give you a bit of background as to what I'm doing: I've compared the current and previous versions of a particular document and I'm placing markers in the html where the differences are. I'm then intending to use jQuery to highlight the parent block-level elements when highlightchanges=true is in the URL's query string. At the moment I'm using <span> tags but it occurred to me that this sort of thing wouldn't validate: <table> <tr> <td>Old row</td> </tr> <span class="diff"></span><tr> <td>Just added</td> </tr> </table> So is there a tag I can use anywhere? Meta tag maybe? Thanks for your help! Iain

    Read the article

  • SQL: Is it possible to set up a column that will contain a value dependent on another column?

    - by Wesley
    I have a table (A) that lists all bundles created off a machine in a day. It lists the date created and the weight of the bundle. I have an ID column, a date column, and a weight column. I also have a table (B) that holds the details related to that machine for the day. In that table (B), I want a column that lists a sum of weights from the other table (A) that the dates match on. So if the machine runs 30 bundles in a day, I'll have 30 rows in table (A) all dated the same day. In table (B) I'll have 1 row detailing other information about the machine for the day plus the column that holds the total bundle weight created for the day. Is there a way to make the total column in table (B) automatically adjust itself whenever a row is added to table (A)? Is this possible to do in the table schema itself rather than in an SQL statement each time a bundle is added? If it's not, what sort of SQL statement do I need? Wes
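    One common approach is to derive the daily total with a view instead of storing it in table B, so it adjusts automatically as bundle rows are added. This is only a sketch; the table and column names below are hypothetical stand-ins for the real schema:

        -- Assumed names: BundleLog is table A (BundleDate, Weight); MachineDay is table B (RunDate, ...)
        CREATE VIEW MachineDayWithTotals AS
        SELECT b.*,
               COALESCE(a.TotalWeight, 0) AS TotalBundleWeight
        FROM MachineDay AS b
        LEFT JOIN (
            SELECT BundleDate, SUM(Weight) AS TotalWeight
            FROM BundleLog
            GROUP BY BundleDate
        ) AS a
            ON a.BundleDate = b.RunDate;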

    Read the article

  • How can I create a SQL table using excel columns?

    - by Phsika
    I need to help to generate column name from excel automatically. I think that: we can do below codes: CREATE TABLE [dbo].[Addresses_Temp] ( [FirstName] VARCHAR(20), [LastName] VARCHAR(20), [Address] VARCHAR(50), [City] VARCHAR(30), [State] VARCHAR(2), [ZIP] VARCHAR(10) ) via C#. How can I learn column name from Excel? private void Form1_Load(object sender, EventArgs e) { ExcelToSql(); } void ExcelToSql() { string connectionString = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Source\MPD.xlsm;Extended Properties=""Excel 12.0;HDR=YES;"""; // if you don't want to show the header row (first row) // use 'HDR=NO' in the string string strSQL = "SELECT * FROM [Sheet1$]"; OleDbConnection excelConnection = new OleDbConnection(connectionString); excelConnection.Open(); // This code will open excel file. OleDbCommand dbCommand = new OleDbCommand(strSQL, excelConnection); OleDbDataAdapter dataAdapter = new OleDbDataAdapter(dbCommand); // create data table DataTable dTable = new DataTable(); dataAdapter.Fill(dTable); // bind the datasource // dataBingingSrc.DataSource = dTable; // assign the dataBindingSrc to the DataGridView // dgvExcelList.DataSource = dataBingingSrc; // dispose used objects if (dTable.Rows.Count > 0) MessageBox.Show("Count:" + dTable.Rows.Count.ToString()); dTable.Dispose(); dataAdapter.Dispose(); dbCommand.Dispose(); excelConnection.Close(); excelConnection.Dispose(); }

    Read the article
