Search Results

Search found 10006 results on 401 pages for 'symbol tables'.


  • Backup Azure Tables, schedule Azure scripts… and more

    - by Herve Roggero
    Well – months of effort are now officially over… or should I say it's just the beginning? Enzo Cloud Backup 2.0 (beta) is now officially out!!! This tool will let you do the following:
    * Backup SQL Database (and SQL Server, to a limited extent)
    * Backup Azure Tables
    * Restore SQL backups into another SQL environment
    * Restore Azure Tables into Azure Storage or a SQL environment
    * Manage and schedule database maintenance scripts
    * Drop database schema containers (with preview) for SaaS environments
    * Receive alerts (SMTP) when operations complete or fail
    That's it at a high level… but you need to see the flexibility around these features. For example, you can select a specific backup strategy for Azure Tables, allowing faster backup operations when partition keys use GUIDs. You can also call custom stored procedures during the restore operation of Azure Tables, allowing you to transform the data along the way. You can also set a performance threshold during Azure Table backup operations to help you control possible throttling conditions in your Storage Account. Regarding database scripts, you can now define T-SQL scripts and schedule them for execution in a specific order. You can also tell Enzo to execute a pre and post script during Azure Table restore operations against a SQL environment. The backup operation now supports backing up to multiple devices at the same time, so you can execute a backup request to both a local file and a blob at the same time, guaranteeing that both will contain the exact same data. And due to the level of options that are available, you can save backup definitions for later reuse. The screenshot in the original post shows Azure Tables being backed up to two devices (a blob and a SQL Database). You can also manage your database schemas for SaaS environments that use schema containers to separate customer data. This new edition allows you to see how many objects you have in each schema, back up specific schemas, and even drop all objects in a given schema. For example, a screenshot in the original post shows that the EnzoLog database has 4 user-defined schemas, and the AFA schema has 5 tables and 1 module (stored proc, function, view…). Selecting the AFA schema and trying to delete it will prompt another screen to show which objects will be deleted. As you can see, Enzo Cloud Backup provides amazing capabilities that can help you safeguard your data in SQL Database and Azure Tables, and gives you advanced management functions for your Azure environment. Download a free trial today at http://www.bluesyntax.net.

    Read the article

  • Foreign key problem linking tables in phpMyAdmin

    - by alan
    I'm using phpMyAdmin (PHP & MySQL) and I'm having a lot of trouble linking the tables using foreign keys. I'm getting negative values for the field CountyId (which is the foreign key). However, it is linking to my other table and cascading fine. When I go to add data there is a drop-down selection for the CountyId, and the values look something like this: " -1 1- " Here is my alter statement: ALTER TABLE Baronies ADD FOREIGN KEY (CountyId) REFERENCES Counties (CountyId) ON DELETE CASCADE
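
    Negative or garbled values in a foreign-key drop-down are often a sign that the referencing and referenced columns don't have exactly the same type and signedness. A minimal sketch of one possible fix, assuming the keys are meant to be unsigned integers (the constraint name fk_baronies_county is made up; the table and column names come from the question):

        -- Make both key columns the same type and signedness before adding the FK.
        -- INT UNSIGNED is an assumption; match whatever Counties.CountyId really is.
        ALTER TABLE Counties MODIFY CountyId INT UNSIGNED NOT NULL AUTO_INCREMENT;
        ALTER TABLE Baronies MODIFY CountyId INT UNSIGNED NOT NULL;

        ALTER TABLE Baronies
          ADD CONSTRAINT fk_baronies_county
          FOREIGN KEY (CountyId) REFERENCES Counties (CountyId)
          ON DELETE CASCADE;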

    Read the article

  • How do I resize tables in Visio 2010?

    - by Thomas
    Create a Database Model Diagram and reverse engineer a database (Database tab, Reverse Engineer). Once the diagram is created, how do you resize the tables? I've tried: enabling Developer mode, choosing Protection, then None; when I do that, I'm given the impression that I should be able to resize a given table, but I cannot actually do it. I've also tried enabling Developer mode, right-clicking on a table, choosing Show ShapeSheet, and setting all Lock values in the Protection section to 0.

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform.

    Archive Format Details

    Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris:
    * Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n".
    * The magic number is followed by 1 or more members.
    * A member starts with a fixed header, defined by the ar_hdr structure in /usr/include/ar.h.
    * Immediately following the header comes the data for the member.
    * Members must be padded at the end with newline characters so that they have even length.

    The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format.

    The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld).

    Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB.

    When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86.

    The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise.

    The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow.

    Factors That Limit Solaris Archive Size

    As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow.
    * The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs, which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB.
    * Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers.
    * The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive.

    In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB.

    Archive Format Differences Among Unix Variants

    While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new.

    The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem.

    I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).

    AIX: AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems:
    * These formats use a different magic number than the standard one used by Solaris and other Unix variants.
    * They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted.
    * They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file.
    * Their symbol table members are quite similar to those from other systems though.
    * Their member headers are doubly linked, containing offsets to both the previous and next members.
    Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives.

    BSD: BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.

    GNU/Linux: The GNU toolchain uses the SVR4 format, and is compatible with Solaris.

    HP-UX: HP-UX seems to follow the SVR4 model, and is compatible with Solaris.

    IRIX: IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.

    Tru64 Unix (Digital/Compaq/HP): Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them.

    Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.

    The Implementation of 64-bit Solaris Archives

    As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default. This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member.

    We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that. Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf.

    As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon, however, to have a plan for that eventuality. When the time comes that this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.

    Read the article

  • 'ALTER table' for all tables in a database

    - by BassKozz
    How can I run the following for every table in a database: ALTER table [table_name] type=innodb; I don't want to have to run it manually for each table, but rather run it for all tables in a database. As an aside, if you're curious as to why I am running this: http://bugs.mysql.com/bug.php?id=1341 & http://bugs.mysql.com/bug.php?id=1287
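
    One common approach is to let MySQL generate the statements from information_schema and then execute the output. A sketch, assuming MySQL 5.0 or later and a placeholder database name 'mydb':

        -- Produces one ALTER statement per MyISAM table in the chosen schema;
        -- run this SELECT, then execute the statements it returns.
        SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name, '` ENGINE=InnoDB;')
        FROM information_schema.tables
        WHERE table_schema = 'mydb'   -- placeholder: your database name
          AND engine = 'MyISAM';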

    Read the article

  • mysqldump trigger crashed tables

    - by m4rc
    We had a database crash this morning starting at 1 minute past midnight (when the database backup runs). The exception emails I was getting said "Table './core/content' is marked as crashed and should be repaired". My question is basically: can mysqldump cause tables to crash? If so, how and why? And are there any tools which can detect a crashed table and run a repair on it? Thanks in advance.
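
    For the second part of the question, MyISAM tables can be checked and repaired from plain SQL; a sketch using the table named in the error message (whether the backup itself caused the crash is a separate question):

        -- Check the table the error names, then repair it if the check reports problems.
        CHECK TABLE core.content;
        REPAIR TABLE core.content;

        -- From the shell, mysqlcheck --all-databases --auto-repair does the same in bulk.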

    Read the article

  • Excel pivot tables stopped working after upgrade to Office 2007

    - by some random guy
    An Excel document with several pivot and lookup tables that previously worked under Office XP and 2003 stopped working after an upgrade to Office 2007 (linked content doesn't update). I originally assumed there was something disabled in 2007 that I needed to turn back on, but after having opened it in Excel 2007, it no longer works in previous versions either. Any idea what I'm missing/what Excel 2007 did?

    Read the article

  • How to include multiple tables programmatically into a Sweave document using R

    - by PaulHurleyuk
    Hello, I want to have a sweave document that will include a variable number of tables in. I thought the example below would work, but it doesn't. I want to loop over the list foo and print each element as it's own table. % \documentclass[a4paper]{article} \usepackage[OT1]{fontenc} \usepackage{longtable} \usepackage{geometry} \usepackage{Sweave} \geometry{left=1.25in, right=1.25in, top=1in, bottom=1in} \listfiles \begin{document} <<label=start, echo=FALSE, include=FALSE>>= startt<-proc.time()[3] library(RODBC) library(psych) library(xtable) library(plyr) library(ggplot2) options(width=80) #Produce some example data, here I'm creating some dummy dataframes and putting them in a list foo<-list() foo[[1]]<-data.frame(GRP=c(rep("AA",10), rep("Aa",10), rep("aa",10)), X1=rnorm(30), X2=rnorm(30,5,2)) foo[[2]]<-data.frame(GRP=c(rep("BB",10), rep("bB",10), rep("BB",10)), X1=rnorm(30), X2=rnorm(30,5,2)) foo[[3]]<-data.frame(GRP=c(rep("CC",12), rep("cc",18)), X1=rnorm(30), X2=rnorm(30,5,2)) foo[[4]]<-data.frame(GRP=c(rep("DD",10), rep("Dd",10), rep("dd",10)), X1=rnorm(30), X2=rnorm(30,5,2)) @ \title{Docuemnt to test putting a variable number of tables into a sweave Document} \author{"Paul Hurley"} \maketitle \section{Text} This document was created on \today, with \Sexpr{print(version$version.string)} running on a \Sexpr{print(version$platform)} platform. It took approx \input{time} sec to process. <<label=test, echo=FALSE, results=tex>>= cat("Foo") @ that was a test, so is this <<label=table1test, echo=FALSE, results=tex>>= print(xtable(foo[[1]])) @ \newpage \subsection{Tables} <<label=Tables, echo=FALSE, results=tex>>= for(i in seq(foo)){ cat("\n") cat(paste("Table_",i,sep="")) cat("\n") print(xtable(foo[[i]])) cat("\n") } #cat("<<label=endofTables>>= ") @ <<label=bye, include=FALSE, echo=FALSE>>= endt<-proc.time()[3] elapsedtime<-as.numeric(endt-startt) @ <<label=elapsed, include=FALSE, echo=FALSE>>= fileConn<-file("time.tex", "wt") writeLines(as.character(elapsedtime), fileConn) close(fileConn) @ \end{document} Here, the table1test chunk works as expected, and produced a table based on the dataframe in foo[[1]], however the loop only produces Table(underscore)1.... Any ideas what I'm doing wrong ?

    Read the article

  • Loading Dimension Tables - Methodologies

    - by Nev_Rahd
    Hello, recently I have been working on a project where I need to populate Dim tables from EDW tables. The EDW tables are Type II, so they maintain historical data. When it comes to loading a Dim table, the source may be multiple EDW tables, or a single table with multi-level pivoting (on attributes). That is: there would be 10 records, one for each attribute, which need to be pivoted on domain_code to make a single row in the Dim. Out of these 10 records there would be some attributes with the same domain_code but with a different sub_domain_code, which need further pivoting on the subdomain code. For example, domain codes 01, 02, 03 are a straight pivot on domain code, but I would also have domain code 10 with a subdomain code / version of 2006, 2007, 2008, 2009. That means I need to split my source table with the above attributes into two: one for domain code, and one for domain_code + version. So far so good. When it comes to loading the Dim table, the design specs for the dimensions (originally written by a third party) say that for every single change in an EDW attribute, it should assemble all the related records for that NK (meaning the new one plus the other attribute values which are current), process them to create a new dim record, and insert it. That means if a single extract contains 100 updated records (one for each NK), it should assemble 100 + (100*9) records to insert/update the dim table. How good is this approach? The other way I tried is to just do a lookup into the dim table for that NK, get the values from the most recent record (the attributes which have not changed), insert the new record and update the current one. What would be the better approach: assembling records on the source side for one attribute change, or looking up the dim table's most recent record and processing it? If this doesn't make sense, I'd be happy to elaborate further. Thanks
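
    For the pivoting step itself, conditional aggregation is one common pattern: group the attribute-level rows by the natural key and turn each domain_code into a column. A hedged sketch with made-up table and column names (edw_attributes, natural_key, attr_value), since the real schema isn't shown:

        -- Hypothetical sketch: collapse one row per attribute into one row per natural key.
        SELECT natural_key,
               MAX(CASE WHEN domain_code = '01' THEN attr_value END) AS attr_01,
               MAX(CASE WHEN domain_code = '02' THEN attr_value END) AS attr_02,
               MAX(CASE WHEN domain_code = '03' THEN attr_value END) AS attr_03,
               MAX(CASE WHEN domain_code = '10' AND sub_domain_code = '2009'
                        THEN attr_value END)                         AS attr_10_2009
        FROM edw_attributes
        GROUP BY natural_key;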

    Read the article

  • Missing tables on Scaffold Dynamic Data page

    - by Ben Amada
    I created a Linq-to-SQL DBML for the first time. I dragged and dropped all my tables over to the designer. The tables all appear in the designer.cs file. In Global.asax, I also have model.RegisterContext() with the ScaffoldAllTables = true option. The routes are also set up. I can pull up the scaffolding page, but there's at least one table that is missing that I'm trying to get to show up. This missing table has a relationship with a child table that references it. The child table appears. When viewing data for the child table, the column that references the missing/parent table shows the numeric PK int value, rather than showing the "name". So instead of showing "Cars" it shows 1, and instead of showing "Planes" it shows 2, etc. There's another table in the DB that has the same type of structure as the missing table, and it is correctly appearing in the scaffolded tables. For this missing table, I've tried explicitly adding the ScaffoldTable attribute to no avail. Does anyone know what would cause a table like this to not appear in the list of scaffolded tables? Thanks very much.

    Read the article

  • TFS 2010: how to set up a corporate source server?

    - by bwerks
    Hi all, I'm looking for guidance in setting up a corporate source server, but when I google this topic the best I can come up with is articles and walkthroughs concerned with configuring VS to use Microsoft's public symbol servers for debugging .NET assemblies. For background, the environment I'm concerned with is VS 2010/TFS 2010. Basically, the workflow I'm looking to facilitate is this: 1) customer reports a problem with the application; 2) the appropriate version of the application is installed on a virtual machine; 3) the developer repros the bug by attaching to the process on the virtual machine and leveraging the source server (symbol server?) on the corporate domain — this is the step I'm concerned with; 4) the developer pinpoints the problem and fixes the bug in a workspace; 5) the developer performs a DLL swap on the VM to test the changes? (side topic, not sure on this); 6) normal development/source control workflows. Any advice is welcome! Edit: since writing this, I have stumbled on this article, which is a nice writeup on the configuration of a source server for TFS 2008. Has anyone adapted this for TFS 2010?

    Read the article

  • Collapse Tables with specific Table ID? JavaScript

    - by medoix
    I have the below JS at the top of my page and it successfully collapses ALL tables on load. However i am trying to figure out how to only collapse tables with the ID of "ctable" or is there some other way of specifying the tables to make collapsible etc? <script type="text/javascript"> var ELpntr=false; function hideall() { locl = document.getElementsByTagName('tbody'); for (i=0;i<locl.length;i++) { locl[i].style.display='none'; } } function showHide(EL,PM) { ELpntr=document.getElementById(EL); if (ELpntr.style.display=='none') { document.getElementById(PM).innerHTML=' - '; ELpntr.style.display='block'; } else { document.getElementById(PM).innerHTML=' + '; ELpntr.style.display='none'; } } onload=hideall; </script>

    Read the article

  • Advanced SQL Data Compare through multiple tables

    - by podosta
    Hello, consider the situation below: two tables (A & B), in two environments (DEV & TEST), with records in those tables. If you look at the content of the tables, you understand that the functional data are identical. I mean that except for the PK and FK values, the name Roger is still connected to Fruit & Vegetable.
    In the DEV environment:
    Table A:
    1  Roger
    2  Kevin
    Table B (the second column is the FK to table A):
    1  1  Fruit
    2  1  Vegetable
    3  2  Meat
    In the TEST environment:
    Table A:
    4  Roger
    5  Kevin
    Table B (the second column is the FK to table A):
    7  4  Fruit
    8  4  Vegetable
    9  5  Meat
    I'm looking for a SQL Data Compare tool which will tell me there is no difference in the above case, or if there is, will generate insert & update scripts in the right order (insert first into A, then B). Thanks a lot guys, Grégoire
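
    If no off-the-shelf tool fits, the comparison can be written directly by joining on the functional columns and ignoring the surrogate keys. A sketch, assuming the two environments are reachable as schemas dev and test, and using hypothetical column names (id, name, a_id, label):

        -- Functional rows present in DEV but missing from TEST, ignoring PK/FK values.
        -- dev / test are placeholder schema names for the two environments.
        SELECT a.name, b.label
        FROM dev.TableA a
        JOIN dev.TableB b ON b.a_id = a.id
        WHERE NOT EXISTS (
            SELECT 1
            FROM test.TableA ta
            JOIN test.TableB tb ON tb.a_id = ta.id
            WHERE ta.name = a.name
              AND tb.label = b.label
        );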

    Read the article

  • Creating Tables in Word Programmatically

    - by Ben
    I am generating tables and writing them to Word on the fly. I do not know how many tables there will be each time I write the data to Word, and the problem I am having is that the second table is written inside the first cell of my first table. If there is a third table, it is put inside the first cell of my second table. Is there a way to move the cursor out of the table? I have tried creating a new range with each table, but the same thing happens. I have also tried things like tbl.Range.InsertParagraphAfter(). The closest I came was using the Relocate method, but this only worked for two tables. Thanks, Ben

    Read the article

  • VS 2010 Server Explorer Database Showing No Tables

    - by Andy
    I'm working on a .Net application that needs to read from an Oracle 10g database behind Siebel. In VS 2010 Server Explorer, I've created a connection using the OracleClient type connector with a reference to the Oracle TNS service name as the "server name." The "Test Connection" button shows that the connection is successful. However, in the Server Explorer, when I go to expand the Tables, no tables are shown. I know for a fact that there are 3000+ tables in the database (thanks Siebel). Anyone know what's happening here? I'd like to create an Entity Framework 4.0 Entity Data Model... Thanks for the help! Andy
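
    One thing worth ruling out from any SQL client is whether the Siebel tables simply live in a different schema from the login account, since Server Explorer tends to list only the objects owned by the connected user. A hedged check (SIEBEL as the owner and S_CONTACT as a sample table name are assumptions):

        -- Which schemas actually own the tables?
        SELECT owner, COUNT(*) AS table_count
        FROM all_tables
        GROUP BY owner
        ORDER BY table_count DESC;

        -- If they belong to another schema, qualify the name (or create synonyms):
        SELECT * FROM SIEBEL.S_CONTACT WHERE ROWNUM <= 10;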

    Read the article

  • How to create multiple tables with the same schema using SQLite JDBC

    - by Space_C0wb0y
    I want to split a large table horizontally, and I would like to make sure that all three of the resulting tables have the same schema. Currently I am using this piece of code to create the tables: statement .executeUpdate("CREATE TABLE AnnotationsMolecularFunction (Id INTEGER PRIMARY KEY ASC AUTOINCREMENT, " + "ProteinId NOT NULL, " + "GOId NOT NULL, " + "UNIQUE (ProteinId, GOId)" + "FOREIGN KEY(ProteinId) REFERENCES Protein(Id))"); There is one such statement for each table. This is bad, because if I decide to change the schema later (which will most certainly happen), I will have to change it three times, which invites errors, so I would like a way to make sure that the other tables have the same schema without explicitly writing it again. I can use: statement .executeUpdate("CREATE TABLE AnnotationsBiologicalProcess AS SELECT * FROM AnnotationsMolecularFunction"); to create the other tables with the same columns, but the constraints are not applied. I could of course just generate the same query string three times with different table names in Java, but I would like to know if there is an SQL way of achieving this.
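
    CREATE TABLE ... AS SELECT drops the constraints, as noted, but SQLite keeps the full original DDL in sqlite_master, so one workaround is to read it back and reuse it with a new table name; the name substitution would happen in the Java code. A sketch:

        -- The complete original CREATE TABLE statement, constraints included.
        SELECT sql
        FROM sqlite_master
        WHERE type = 'table'
          AND name = 'AnnotationsMolecularFunction';

        -- Application code can replace the table name in that string and execute it,
        -- yielding e.g. CREATE TABLE AnnotationsBiologicalProcess (Id INTEGER PRIMARY KEY ...).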

    Read the article

  • How to merge two tables based on a common column and sort the results by date

    - by techiepark
    Hello friends, I have two MySQL tables and I want to merge the results of these two tables based on the common column rev_id. The merged results should be sorted by the date columns of the two tables. Please help me. CREATE TABLE `reply` ( `id` int(3) NOT NULL auto_increment, `name` varchar(25) NOT NULL default '', `member_id` varchar(45) NOT NULL, `rev_id` int(3) NOT NULL default '0', `description` text, `post_date` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP, `flag` char(2) NOT NULL default 'N', PRIMARY KEY (`id`), KEY `member_id` (`member_id`) ) ENGINE=MyISAM; CREATE TABLE `comment` ( `com_id` int(8) NOT NULL auto_increment, `rev_id` int(5) NOT NULL default '0', `member_id` varchar(50) NOT NULL, `comm_desc` text NOT NULL, `date_created` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP, PRIMARY KEY (`com_id`), KEY `member_id` (`member_id`) ) ENGINE=MyISAM;
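
    A UNION ALL over the shared columns, with the two timestamp columns aliased to a single name, lets one ORDER BY sort the merged result. A sketch based on the schemas shown (the 'reply'/'comment' source label is only there to tell the rows apart):

        -- Merge replies and comments, newest first.
        SELECT rev_id, member_id, description AS body, post_date    AS created, 'reply'   AS source
        FROM reply
        UNION ALL
        SELECT rev_id, member_id, comm_desc   AS body, date_created AS created, 'comment' AS source
        FROM comment
        ORDER BY created DESC;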

    Read the article

  • Union on two tables with a where clause on one of them

    - by Lostdrifter
    Currently I have 2 tables; both tables have the same structure and are going to be used in a web application. The two tables are production and temp. The temp table contains one additional column called [signed up]. Currently I generate a single list using two columns that are found in each table (recno and name). Using these two fields I'm able to support my web application's search function. Now what I need to do is limit the items from the second table that can be used in the search. The reason for this is that once a person is "signed up", a similar record is created in the production table and will have its own recno. Doing: Select recno, name from production UNION ALL Select recno, name from temp ...will show me everyone. I have tried: Select recno, name from production UNION ALL Select recno, name from temp WHERE signup <> 'Y' But this returns nothing. Can anyone help?
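
    Two things worth checking: the column is described above as [signed up] rather than signup, and rows where the flag is NULL are filtered out by <> 'Y'. A hedged sketch (the bracketed name assumes SQL Server-style quoting, since the question doesn't say which RDBMS is in use):

        -- All production rows, plus only the temp rows that are not yet signed up.
        -- [signed up] and the NULL handling are assumptions based on the question text.
        SELECT recno, name FROM production
        UNION ALL
        SELECT recno, name
        FROM temp
        WHERE [signed up] IS NULL
           OR [signed up] <> 'Y';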

    Read the article

  • SQL joining 3 tables when 1 table is empty

    - by AdRock
    I am trying to write a query that connects 3 tables. The first table is info about each festival. The second table is the number of votes for each festival. The third table is reviews for each festival. I want to join all 3 tables so I get all the fields from table 1, join table 1 with table 2 on the festival id, but I also need to count the number of records in table 3 that apply to each festival. The first 2 tables give me a result because they both have data in them, but table 3 is empty because there are no reviews yet, so adding that to my query gives me no results. SELECT f.*, v.total, v.votes, v.festivalid, r.reviewcount as count FROM festivals f INNER JOIN vote v ON f.festivalid = v.festivalid INNER JOIN (SELECT festivalid, count(*) as reviewcount FROM reviews GROUP BY festivalid) as r on r.festivalid = v.festivalid
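
    Because an INNER JOIN against an empty (or review-less) derived table filters every festival out, switching the reviews side to a LEFT JOIN and defaulting the count to zero keeps the rows. A sketch based on the query above:

        -- Keep every festival that has votes, even when it has no reviews yet.
        SELECT f.*, v.total, v.votes, COALESCE(r.reviewcount, 0) AS reviewcount
        FROM festivals f
        INNER JOIN vote v ON f.festivalid = v.festivalid
        LEFT JOIN (
            SELECT festivalid, COUNT(*) AS reviewcount
            FROM reviews
            GROUP BY festivalid
        ) AS r ON r.festivalid = f.festivalid;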

    Read the article

  • Storing Tables of Information on the Android Platform.

    - by Tarmon
    I have about twenty pages of information, stored in tables, that needs to be stored in my Android application. Each column is a designated stop on a bus route, and the column is filled with times that the bus will be at the stop. There is also certain information that needs to be associated with some times, such as whether the bus is handicap accessible at a certain time. Here is an example of one of the tables: Bus Times. I have thought about using SQLite, as it seems it would be able to store these tables quite easily; but when I think of SQL I think of dynamic data storage, and this data shouldn't change more than once a year. Is SQL appropriate for this application? Is there a better way to do this? Thanks, Rob
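
    SQLite handles static reference data like this comfortably, and shipping it as a prepopulated database (or inserting it on first run) keeps the lookups simple. A sketch of one possible schema; the table and column names are made up for illustration:

        -- Hypothetical schema: one row per scheduled stop time, with per-time flags.
        CREATE TABLE stop (
            stop_id INTEGER PRIMARY KEY,
            route   TEXT NOT NULL,
            name    TEXT NOT NULL
        );

        CREATE TABLE stop_time (
            stop_id             INTEGER NOT NULL REFERENCES stop(stop_id),
            departs             TEXT    NOT NULL,  -- 'HH:MM' sorts correctly as text
            handicap_accessible INTEGER NOT NULL DEFAULT 0
        );

        -- All times for one stop, in order:
        SELECT departs, handicap_accessible
        FROM stop_time
        WHERE stop_id = 1
        ORDER BY departs;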

    Read the article
