Search Results

Search found 10101 results on 405 pages for 'temporary tables'.


  • Backup Azure Tables, schedule Azure scripts… and more

    - by Herve Roggero
    Well – months of effort are now officially over… or should I say it’s just the beginning? Enzo Cloud Backup 2.0 (beta) is now officially out!!! This tool will let you do the following:
    * Backup SQL Database (and SQL Server to a limited extent)
    * Backup Azure Tables
    * Restore SQL backups into another SQL environment
    * Restore Azure Tables into Azure Storage or a SQL environment
    * Manage and schedule database maintenance scripts
    * Drop database schema containers (with preview) for SaaS environments
    * Receive alerts (SMTP) when operations complete or fail
    That’s it at a high level… but you need to see the flexibility around these features. For example, you can select a specific backup strategy for Azure Tables, allowing faster backup operations when partition keys use GUIDs. You can also call custom stored procedures during the restore operation of Azure Tables, allowing you to transform the data along the way. You can also set a performance threshold during Azure Table backup operations to help you control possible throttling conditions in your Storage Account. Regarding database scripts, you can now define T-SQL scripts and schedule them for execution in a specific order. You can also tell Enzo to execute a pre and post script during Azure Table restore operations against a SQL environment. The backup operation now supports backing up to multiple devices at the same time, so you can execute a backup request to both a local file and a blob at the same time, guaranteeing that both will contain the exact same data. And due to the level of options that are available, you can save backup definitions for later reuse. The screenshot below backs up Azure Tables to two devices (a blob and a SQL Database). You can also manage your database schemas for SaaS environments that use schema containers to separate customer data. This new edition allows you to see how many objects you have in each schema, back up specific schemas, and even drop all objects in a given schema. For example, the screenshot below shows that the EnzoLog database has 4 user-defined schemas, and the AFA schema has 5 tables and 1 module (stored proc, function, view…). Selecting the AFA schema and trying to delete it will prompt another screen to show which objects will be deleted. As you can see, Enzo Cloud Backup provides amazing capabilities that can help you safeguard your data in SQL Database and Azure Tables, and give you advanced management functions for your Azure environment. Download a free trial today at http://www.bluesyntax.net.

    Read the article

  • Foreign key problem linking tables in phpMyAdmin

    - by alan
    I'm using phpMyAdmin (PHP & MySQL) and I'm having a lot of trouble linking the tables using foreign keys. I'm getting negative values for the field countyId (which is the foreign key). However, it is linking to my other table and cascading fine. When I go to add data there will be a drop-down selection for the CountyId and the values will look something like this: " -1 1- " Here is my ALTER statement: ALTER TABLE Baronies ADD FOREIGN KEY (CountyId) REFERENCES Counties (CountyId) ON DELETE CASCADE
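
    MySQL foreign keys require both tables to use InnoDB and the two columns to have identical definitions. A minimal sketch of how the two tables might be declared so the relationship is explicit from the start; the UNSIGNED types and the CountyName/BaronyName columns are illustrative assumptions, not a diagnosis of the negative values:

        CREATE TABLE Counties (
            CountyId   INT UNSIGNED NOT NULL AUTO_INCREMENT,
            CountyName VARCHAR(100) NOT NULL,
            PRIMARY KEY (CountyId)
        ) ENGINE=InnoDB;

        CREATE TABLE Baronies (
            BaronyId   INT UNSIGNED NOT NULL AUTO_INCREMENT,
            CountyId   INT UNSIGNED NOT NULL,   -- same type as Counties.CountyId
            BaronyName VARCHAR(100) NOT NULL,
            PRIMARY KEY (BaronyId),
            FOREIGN KEY (CountyId) REFERENCES Counties (CountyId) ON DELETE CASCADE
        ) ENGINE=InnoDB;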

    Read the article

  • How do I resize tables in Visio 2010?

    - by Thomas
    Create a Database Model Diagram, then reverse engineer a database (Database tab, Reverse Engineer). Once the diagram is created, how do you resize the tables? I've tried: enabling Developer mode, choosing Protection, then choosing None. When I do that, I'm given the impression that I should be able to resize a given table, but I cannot actually do it. I've also tried enabling Developer mode, right-clicking on a table, choosing Show ShapeSheet, and setting all Lock values in the Protection section to 0.

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
Well, it behaves as if it were doing a recompile, and an explicit recompile adds no value at all. (We just go up to 45,000 rows, since we know the bigger picture now.)

Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the table variable and the temporary table. The table variable is faster up to 8,000 rows, and then there isn’t much in it up to 100,000 rows. Past the 8,000-row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that OPTION (RECOMPILE) hint.

Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable heap, and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’, i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey. If both tables contain a primary key on the columns we join on, and both are Table Variables, it took 33 ms. If one table contains a primary key, and the other is a heap, and both are Table Variables, it took 46 ms. If both Table Variables use a unique constraint, then the query takes 36 ms. If neither table contains a primary key and both are Table Variables, it took 116,383 ms. Yes, nearly two minutes!! If both tables contain a primary key, one is a Table Variable and the other is a temporary table, it took 113 ms. If one table contains a primary key, and both are temporary tables, it took 56 ms. If both tables are temporary tables and both have primary keys, it took 46 ms. Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly the query drops its time down to 76 ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis.
If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble, well, slow queries, unless you give the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context).

Kevin’s Skew. In describing the table size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a table consisted of 100,000 rows in which the key column was one number (1). To this was added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2 it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows, it adopted exactly the same execution plan as for the temporary table and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real, live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range.

Conclusions. Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in the TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘Heaps’, unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled with very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
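
    A minimal sketch of the sort of join being measured in the first experiment, assuming two integer-keyed table variables; the names, row count and fill method are invented for illustration rather than taken from the article:

        DECLARE @t1 TABLE (id INT PRIMARY KEY, val INT);
        DECLARE @t2 TABLE (id INT PRIMARY KEY, val INT);

        -- Fill both table variables with pseudo-random integers (illustrative only).
        DECLARE @i INT = 1;
        WHILE @i <= 30000
        BEGIN
            INSERT INTO @t1 (id, val) VALUES (@i, ABS(CHECKSUM(NEWID())) % 10000);
            INSERT INTO @t2 (id, val) VALUES (@i, ABS(CHECKSUM(NEWID())) % 10000);
            SET @i = @i + 1;
        END;

        -- Without the hint the optimizer has to guess at the row counts;
        -- OPTION (RECOMPILE) lets it see the actual cardinality at execution time.
        SELECT COUNT(*)
        FROM @t1 AS a
        JOIN @t2 AS b ON a.val = b.val
        OPTION (RECOMPILE);

    Removing the hint, or swapping the two table variables for #temp tables, reproduces the kind of comparisons described in the article.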

    Read the article

  • 'ALTER table' for all tables in a database

    - by BassKozz
    How can I run the following for every table in a database: ALTER table [table_name] type=innodb; I don't want to have to manually run it for each table, but rather run it for all tables in a database. As an aside: if you're curious as to why I am running this: http://bugs.mysql.com/bug.php?id=1341 & http://bugs.mysql.com/bug.php?id=1287
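
    One approach is to let information_schema generate the statements and then execute the output; a minimal sketch, assuming the schema is called mydb (note that current MySQL versions spell the option ENGINE=InnoDB rather than TYPE=innodb):

        -- Generate one ALTER statement per non-InnoDB table in the target schema.
        SELECT CONCAT('ALTER TABLE `', table_name, '` ENGINE=InnoDB;') AS stmt
        FROM information_schema.tables
        WHERE table_schema = 'mydb'
          AND engine <> 'InnoDB';

        -- Copy the generated statements and run them, or pipe them back into the
        -- mysql command-line client from a shell script.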

    Read the article

  • Does Win 7 still require copying all files over before burning to a DVD-R or BD-R?

    - by Jian Lin
    It seems that Win 7 still needs to copy all files over to a folder before it burns them to the DVD-R or BD-R? I think since XP or Vista, Windows has always copied everything over to a temporary folder before it will burn to an empty DVD-R. So if you just want to burn a 4GB file to an empty DVD-R, it will first make a copy of that file and then burn it, instead of just burning it without making a copy first. And now on Win 7 it seems to be the case as well? Most other 3rd party burning tools won't make an extra copy of the files first... Win 7 is the exception. Is there a way around it? (To avoid copying over 25GB or 50GB of data before burning.)

    Read the article

  • How do I set the TEMP environment variable for the "Network Service" user?

    - by Chris Phillips
    We have a system that uses Path.GetTempFileName and Path.GetTempPath calls to work with temporary files fairly frequently. This system also runs as the "Network Service" user. We're finding that we're running out of room on the C drive (due to other issues; our temp files are cleaned up correctly) and would like to be able to move the temp directory to a different drive. The easiest solution to this seems to be to change the TMP or TEMP environment variables for the Network Service user, but I only seem to be able to set my own user variables or the "system" variables that are overwritten by the Network Service user profile. How do I set these variables for the Network Service user?

    Read the article

  • mysqldump trigger crashed tables

    - by m4rc
    We had a database crash this morning, starting at 1 minute past midnight (when the database backup runs). The exception emails I was getting said "Table './core/content' is marked as crashed and should be repaired". My question is basically: can mysqldump cause tables to crash, and if so, how and why? And are there any tools which can detect a crashed table and run a repair on it? Thanks in advance.
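
    For the detection/repair part of the question, MySQL's built-in table maintenance statements can be pointed at the table named in the error message (the schema and table below come from that message):

        -- Check the table the error names, and repair it if the check fails.
        -- REPAIR TABLE applies to MyISAM tables, which are the ones that get
        -- "marked as crashed".
        CHECK TABLE core.content;
        REPAIR TABLE core.content;

        -- To sweep an entire schema from the shell, mysqlcheck does the same job:
        --   mysqlcheck --check --auto-repair core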

    Read the article

  • Windows xp - recover document opened directly from IE

    - by Thingfish
    Hi. I'm attempting to help a family member recover a document. The Word 2007 document was downloaded and opened directly from a webmail interface using Internet Explorer running on Windows XP. The user saved the document multiple times while working on it for the good part of a day. After closing Word 2007 the user is not able to locate the document, and I have so far not been able to help. The computer has not been turned off, and the user has not attempted to open the document directly from the mail again. Recreating the events on Vista/Windows 7, it's easy enough to locate the document under the Temporary Internet Files folder. I have, however, not been able to do the same on Windows XP. Any suggestions for how to locate this document, or whether it's even possible? Thanks

    Read the article

  • Excel pivot tables stopped working after upgrade to Office 2007

    - by some random guy
    An Excel document with several pivot and lookup tables that previously worked under Office XP and 2003 stopped working after an upgrade to Office 2007 (linked stuff doesn't update). I originally assumed there's something disabled in 2007 that I need to turn back on, but after having opened it in Excel 2007 it no longer works in previous versions either. Any idea what I'm missing/what Excel 2007 did?

    Read the article

  • How to include multiple tables programmatically into a Sweave document using R

    - by PaulHurleyuk
    Hello, I want to have a sweave document that will include a variable number of tables in. I thought the example below would work, but it doesn't. I want to loop over the list foo and print each element as it's own table. % \documentclass[a4paper]{article} \usepackage[OT1]{fontenc} \usepackage{longtable} \usepackage{geometry} \usepackage{Sweave} \geometry{left=1.25in, right=1.25in, top=1in, bottom=1in} \listfiles \begin{document} <<label=start, echo=FALSE, include=FALSE>>= startt<-proc.time()[3] library(RODBC) library(psych) library(xtable) library(plyr) library(ggplot2) options(width=80) #Produce some example data, here I'm creating some dummy dataframes and putting them in a list foo<-list() foo[[1]]<-data.frame(GRP=c(rep("AA",10), rep("Aa",10), rep("aa",10)), X1=rnorm(30), X2=rnorm(30,5,2)) foo[[2]]<-data.frame(GRP=c(rep("BB",10), rep("bB",10), rep("BB",10)), X1=rnorm(30), X2=rnorm(30,5,2)) foo[[3]]<-data.frame(GRP=c(rep("CC",12), rep("cc",18)), X1=rnorm(30), X2=rnorm(30,5,2)) foo[[4]]<-data.frame(GRP=c(rep("DD",10), rep("Dd",10), rep("dd",10)), X1=rnorm(30), X2=rnorm(30,5,2)) @ \title{Docuemnt to test putting a variable number of tables into a sweave Document} \author{"Paul Hurley"} \maketitle \section{Text} This document was created on \today, with \Sexpr{print(version$version.string)} running on a \Sexpr{print(version$platform)} platform. It took approx \input{time} sec to process. <<label=test, echo=FALSE, results=tex>>= cat("Foo") @ that was a test, so is this <<label=table1test, echo=FALSE, results=tex>>= print(xtable(foo[[1]])) @ \newpage \subsection{Tables} <<label=Tables, echo=FALSE, results=tex>>= for(i in seq(foo)){ cat("\n") cat(paste("Table_",i,sep="")) cat("\n") print(xtable(foo[[i]])) cat("\n") } #cat("<<label=endofTables>>= ") @ <<label=bye, include=FALSE, echo=FALSE>>= endt<-proc.time()[3] elapsedtime<-as.numeric(endt-startt) @ <<label=elapsed, include=FALSE, echo=FALSE>>= fileConn<-file("time.tex", "wt") writeLines(as.character(elapsedtime), fileConn) close(fileConn) @ \end{document} Here, the table1test chunk works as expected, and produced a table based on the dataframe in foo[[1]], however the loop only produces Table(underscore)1.... Any ideas what I'm doing wrong ?

    Read the article

  • Modify MySQL INSERT statement to omit the insertion of certain rows

    - by dave
    I'm trying to expand a little on a statement that I received help with last week. As you can see, I'm setting up a temporary table and inserting rows of student data from a recently administered test for a few dozen schools. When the rows are inserted, they are sorted by the score (totpct_stu, high to low) and the row_number is added, with 1 representing the highest score, etc. I've learned that there were some problems at school #9999 in SMITH's class (every student made a perfect score and they were the only students in the district to do so). So, I do not want to import SMITH's class. As you can see, I DELETED SMITH's class, but this messed up the row numbering for the remainder of students at the school (e.g., high score row_number is now 20, not 1). How can I modify the INSERT statement so as to not insert this class? Thanks! DROP TEMPORARY TABLE IF EXISTS avgpct ; CREATE TEMPORARY TABLE avgpct_1 ( sch_code VARCHAR(3), schabbrev VARCHAR(75), teachername VARCHAR(75), totpct_stu DECIMAL(5,1), row_number SMALLINT, dummy VARCHAR(75) ); -- ---------------------------------------- INSERT INTO avgpct SELECT sch_code , schabbrev , teachername , totpct_stu , @num := IF( @GROUP = schabbrev, @num + 1, 1 ) AS row_number , @GROUP := schabbrev AS dummy FROM sci_rpt WHERE grade = '05' AND totpct_stu >= 1 -- has a valid score ORDER BY sch_code, totpct_stu DESC ; -- --------------------------------------- -- select * from avgpct ; -- --------------------------------------- DELETE FROM avgpct_1 WHERE sch_code = '9999' AND teachername = 'SMITH' ;
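
    A minimal sketch of one way to handle this: filter the class out in the SELECT itself, so the row numbers are assigned after the exclusion; the table and column names are the ones from the statement above:

        INSERT INTO avgpct
        SELECT sch_code
             , schabbrev
             , teachername
             , totpct_stu
             , @num := IF( @GROUP = schabbrev, @num + 1, 1 ) AS row_number
             , @GROUP := schabbrev AS dummy
        FROM sci_rpt
        WHERE grade = '05'
          AND totpct_stu >= 1                                     -- has a valid score
          AND NOT ( sch_code = '9999' AND teachername = 'SMITH' ) -- skip the bad class up front
        ORDER BY sch_code, totpct_stu DESC ;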

    Read the article

  • Getting the avg of the top 10 students from each school

    - by dave
    Hi all -- We have a school district with 38 elementary schools. The kids took a test. The averages for the schools are widely dispersed, but I want to compare the averages of JUST THE TOP 10 students from each school. Requirement: use temporary tables only. I have done this in a very work-intensive, error-prone sort of way as follows. (sch_code = e.g., 9043; -- schabbrev = e.g., "Carter"; -- totpct_stu = e.g., 61.3) DROP TEMPORARY TABLE IF EXISTS avg_top10 ; CREATE TEMPORARY TABLE avg_top10 ( sch_code VARCHAR(4), schabbrev VARCHAR(75), totpct_stu DECIMAL(5,1) ); INSERT INTO avg_top10 SELECT sch_code , schabbrev , totpct_stu FROM test_table WHERE sch_code IN ('5489') ORDER BY totpct_stu DESC LIMIT 10; -- I do that last query for EVERY school, so the total -- length of the code is well in excess of 300 lines. -- Then, finally... SELECT schabbrev, ROUND( AVG( totpct_stu ), 1 ) AS top10 FROM avg_top10 GROUP BY schabbrev ORDER BY top10 ; -- OUTPUT: ----------------------------------- schabbrev avg_top10 ---------- --------- Goulding 75.4 Garth 77.7 Sperhead 81.4 Oak_P 83.7 Spring 84.9 -- etc... Question: So this works, but isn't there a lot better way to do it? Thanks! PS -- Looks like homework, but this is, well...real.
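
    A minimal sketch of a more compact approach, reusing the user-variable ranking trick from the previous question so that a single temporary table covers every school; the table and column names are the ones given above, while the rn/dummy columns are added for illustration:

        SET @rn := 0, @grp := '';

        DROP TEMPORARY TABLE IF EXISTS ranked;
        CREATE TEMPORARY TABLE ranked
        SELECT sch_code
             , schabbrev
             , totpct_stu
             , @rn  := IF( @grp = sch_code, @rn + 1, 1 ) AS rn
             , @grp := sch_code AS dummy
        FROM test_table
        ORDER BY sch_code, totpct_stu DESC;

        -- Average only the 10 highest scores per school.
        SELECT schabbrev, ROUND( AVG( totpct_stu ), 1 ) AS top10
        FROM ranked
        WHERE rn <= 10
        GROUP BY schabbrev
        ORDER BY top10;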

    Read the article

  • Loading Dimension Tables - Methodologies

    - by Nev_Rahd
    Hello, I've recently been working on a project where I need to populate Dim tables from EDW tables. The EDW tables are Type II, so they maintain historical data. When it comes to loading a Dim table, the source may be multiple EDW tables, or a single table with multi-level pivoting (on attributes). Meaning: there would be 10 records, one for each attribute, which need to be pivoted on domain_code to make a single row in the Dim. Out of these 10 records there would be some attributes with the same domain_code but different sub_domain_code, which need further pivoting on the subdomain code. For example: domain codes 01, 02, 03 are a straight pivot on domain code, but I would also have domain code 10 with subdomain code / version values of 2006, 2007, 2008, 2009. That means I need to split my source table with the above attributes into two: one for domain code, and the other for domain_code + version. So far so good. When it comes to loading the Dim table: as per the design specs for the dimensions (originally written by a third party), what they want is that for every single change in the EDW (attribute), it should assemble all the related records (for that NK), meaning the new one plus the other attribute values which are current, process them to create a new Dim record and insert it. That means if a single extract contains 100 updated records (one for each NK), it should assemble 100 + (100*9) records to insert into / update the Dim table. How good is this approach? The other way I tried is to just do a lookup into the Dim table for that NK, get the values of the recent record (the attributes which haven't changed), insert the new record and update the current one. Which would be the better approach: assembling the records on the source side for one attribute change, or looking up the Dim table's recent record and processing it? If this doesn't make sense, I'd be happy to elaborate further. Thanks
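
    For the pivoting step itself, the usual pattern is conditional aggregation; a minimal sketch with hypothetical names (edw_attributes, nk and attr_value are invented for illustration), showing a straight pivot on domain_code plus one attribute split out by subdomain/version:

        SELECT nk
             , MAX(CASE WHEN domain_code = '01' THEN attr_value END) AS attr_01
             , MAX(CASE WHEN domain_code = '02' THEN attr_value END) AS attr_02
             , MAX(CASE WHEN domain_code = '03' THEN attr_value END) AS attr_03
             , MAX(CASE WHEN domain_code = '10' AND sub_domain_code = '2009'
                        THEN attr_value END)                         AS attr_10_2009
        FROM edw_attributes
        GROUP BY nk;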

    Read the article

  • Missing tables on Scaffold Dynamic Data page

    - by Ben Amada
    I created a Linq-to-SQL DBML for the first time. I dragged and dropped all my tables over to the designer. The tables all appear in the designer.cs file. In Global.asax, I also have model.RegisterContext() with the ScaffoldAllTables = true option. The routes are also setup. I can pull up the Scaffolding page, but there's at least one table that is missing that I'm trying to get to show up. This missing table has a relationship with a child table that references it. The child table appears. When viewing data for the child table, the column that references the missing/parent table shows the numeric PK int value, rather than showing the "name". So instead of showing "Cars" it shows 1, and instead of showing "Planes" it shows 2, etc. There's another table in the DB that has the same type of structure as the missing table, and it is correctly appearing in the scaffolded tables. For this missing table, I've tried explicitly adding the ScaffoldTable attribute to no avail. Does anyone know what would cause a table like this to not appear in the list Scaffolded tables? Thanks very much.

    Read the article

  • How do I execute a SQL statement through a variable (dynamic SQL) that tries to do an insert into a table variable?

    - by Testifier
    If I do what I wanna do with a TEMPORARY TABLE, it works fine: DECLARE @CTRFR VARCHAR(MAX) SET @CTRFR = 'select blah blah blah' -- <-- very long select statement. this returns a 0 or some greater number. Please note! --> I NEED THIS NUMBER. IF EXISTS ( SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo][#CTRFRResult]') AND type IN ( N'U' ) ) DROP TABLE [dbo].[#CTRFRResult] CREATE TABLE #CTRFRResult ( CTRFRResult VARCHAR(MAX) ) SET @CTRFR = 'insert into #CTRFRResult ' + @CTRFR EXEC(@CTRFR) The above works fine. The problem is that several databases are using the same TEMP table. Therefore I need to use a VARIABLE table (instead of a temporary table). What I have below is not working because it says that the table must be declared. DECLARE @CTRFRResult TABLE ( CTRFRResult VARCHAR(MAX) ) SET @CTRFR = 'insert into @CTRFRResult ' + @CTRFR -- I think the issue is here. EXEC(@CTRFR) Setting @CTRFR to 'insert into...' is not working because I'm assuming the table name is out of scope. How would I go about mimicking the temporary table code using a variable table?
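
    One common workaround is to keep the INSERT outside the dynamic SQL and let EXEC feed its result set into the table variable; a minimal sketch built on the variables from the question, assuming the dynamic SELECT returns a single column:

        DECLARE @CTRFR VARCHAR(MAX);
        SET @CTRFR = 'select blah blah blah';   -- the long SELECT from above, unchanged

        DECLARE @CTRFRResult TABLE
        (
            CTRFRResult VARCHAR(MAX)
        );

        -- INSERT ... EXEC captures the result set of the dynamic statement,
        -- so the table variable never needs to be visible inside the dynamic batch.
        INSERT INTO @CTRFRResult (CTRFRResult)
        EXEC (@CTRFR);

        SELECT CTRFRResult FROM @CTRFRResult;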

    Read the article

  • Collapse Tables with specific Table ID? JavaScript

    - by medoix
    I have the below JS at the top of my page and it successfully collapses ALL tables on load. However, I am trying to figure out how to only collapse tables with the ID of "ctable", or is there some other way of specifying the tables to make collapsible, etc.? <script type="text/javascript"> var ELpntr=false; function hideall() { locl = document.getElementsByTagName('tbody'); for (i=0;i<locl.length;i++) { locl[i].style.display='none'; } } function showHide(EL,PM) { ELpntr=document.getElementById(EL); if (ELpntr.style.display=='none') { document.getElementById(PM).innerHTML=' - '; ELpntr.style.display='block'; } else { document.getElementById(PM).innerHTML=' + '; ELpntr.style.display='none'; } } onload=hideall; </script>

    Read the article

  • MS Access 2003 - VBA for altering a table after a "SELECT * INTO tblTemp FROM tblMain" statement

    - by Justin
    Hi. I use functions like the following to make temporary tables out of crosstab queries. Function SQL_Tester() Dim sql As String If DCount("*", "MSysObjects", "[Name]='tblTemp'") Then DoCmd.DeleteObject acTable, "tblTemp" End If sql = "SELECT * INTO tblTemp from TblMain;" Debug.Print (sql) Set db = CurrentDb db.Execute (sql) End Function I do this so that I can then use more VBA to take the temporary table to Excel, use some of Excel's functionality (formulas and such) and then return the values to the original table (tblMain). The simple spot where I am getting tripped up is that after the SELECT INTO statement I need to add a brand new additional column to that temporary table, and I do not know how to do this: sql = "Create Table..." is about the only way I know how to do this, and of course this doesn't work too well with the above approach, because I can't create a table that has already been created after the fact, and I cannot create it before because the SELECT INTO statement approach will return a "table already exists" message. Any help? Thanks, guys!
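
    Jet/Access SQL can add a column to an existing table with ALTER TABLE ... ADD COLUMN, so a statement like the sketch below can be executed through db.Execute right after the SELECT INTO; the column name and type are just placeholders:

        ALTER TABLE tblTemp ADD COLUMN ExtraValue TEXT(255);

    This avoids re-creating the table: the SELECT INTO builds it, and the ALTER then bolts the extra column on.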

    Read the article

  • Advanced SQL Data Compare through multiple tables

    - by podosta
    Hello, Consider the situation below: two tables (A & B), in two environments (DEV & TEST), with records in those tables. If you look at the content of the tables, you can see that the functional data are identical. I mean that, except for the PK and FK values, the name Roger is still connected to Fruit & Vegetable.
    In the DEV environment:
    Table A
      1  Roger
      2  Kevin
    Table B (the middle field is the FK to table A)
      1  1  Fruit
      2  1  Vegetable
      3  2  Meat
    In the TEST environment:
    Table A
      4  Roger
      5  Kevin
    Table B (the middle field is the FK to table A)
      7  4  Fruit
      8  4  Vegetable
      9  5  Meat
    I'm looking for a SQL Data Compare tool which will tell me there is no difference in the above case. Or, if there is a difference, it will generate insert & update scripts in the right order (insert into A first, then B). Thanks a lot guys, Grégoire
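
    In the meantime, a comparison on the functional (natural) key rather than the surrogate ids can at least report whether the two environments differ; a minimal sketch, where the DEV/TEST prefixes and the Name/Label/Id column names are hypothetical stand-ins for the real schemas:

        -- Functional rows present in DEV but missing from TEST; swap the two halves
        -- to check the other direction. Surrogate PK/FK values are deliberately ignored.
        SELECT a.Name, b.Label
        FROM   DEV.dbo.TableA AS a
        JOIN   DEV.dbo.TableB AS b ON b.TableA_Id = a.Id

        EXCEPT

        SELECT a.Name, b.Label
        FROM   TEST.dbo.TableA AS a
        JOIN   TEST.dbo.TableB AS b ON b.TableA_Id = a.Id;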

    Read the article

  • Creating Tables in Word Programmatically

    - by Ben
    I am generating tables and writing them to Word on the fly. I do not know how many tables there will be each time I write the data to Word, and the problem I am having is that the second table is written inside the first cell of my first table. If there was a third table, it was put inside the first cell of my second table. Is there a way to move the cursor out of the table? I have tried creating a new range with each table also, but the same thing happens. I have also tried things like tbl.Range.InsertParagraphAfter(). The closest I came was using the Relocate method, but this only worked for two tables. Thanks Ben

    Read the article

  • VS 2010 Server Explorer Database Showing No Tables

    - by Andy
    I'm working on a .Net application that needs to read from an Oracle 10g database behind Siebel. In VS 2010 Server Explorer, I've created a connection using the OracleClient type connector with a reference to the Oracle TNS service name as the "server name." The "Test Connection" button shows that the connection is successful. However, in the Server Explorer, when I go to expand the Tables, no tables are shown. I know for a fact that there are 3000+ tables in the database (thanks Siebel). Anyone know what's happening here? I'd like to create an Entity Framework 4.0 Entity Data Model... Thanks for the help! Andy

    Read the article
