Search Results

Search found 36762 results on 1471 pages for 'common table expression'.


  • What is the easiest way to find out whether a SQL query returns a result or not?

    - by bala3569
    Consider the following SQL Server query:

        DECLARE @Table TABLE( Wages FLOAT )
        INSERT INTO @Table SELECT 20000
        INSERT INTO @Table SELECT 15000
        INSERT INTO @Table SELECT 10000
        INSERT INTO @Table SELECT 45000
        INSERT INTO @Table SELECT 50000

        SELECT *
        FROM (
            SELECT *, ROW_NUMBER() OVER(ORDER BY Wages DESC) RowID
            FROM @Table
        ) sub
        WHERE RowID = 3

    The result of the query would be 20000. That's fine. Now I have to find the result of this query:

        SELECT *
        FROM (
            SELECT *, ROW_NUMBER() OVER(ORDER BY Wages DESC) RowID
            FROM @Table
        ) sub
        WHERE RowID = 6

    It will not give any result, because there are only 5 rows in the table. So now my question is: what is the easiest way to find out whether a SQL query returns a result or not?
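
    One hedged sketch of the usual way to test this in T-SQL, offered as an illustration rather than the accepted answer, is to wrap the query in EXISTS (reusing the question's @Table):

        IF EXISTS (
            SELECT 1
            FROM (
                SELECT *, ROW_NUMBER() OVER(ORDER BY Wages DESC) RowID
                FROM @Table
            ) sub
            WHERE RowID = 6
        )
            PRINT 'the query returns at least one row'
        ELSE
            PRINT 'the query returns no rows'

    Running the SELECT as-is and then checking IF @@ROWCOUNT = 0 works just as well after the fact.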

    Read the article

  • Disabling foreign key constraints on all tables didn't work?

    - by Space Cracker
    I tried a lot of commands to disable the table constraints in my database so that I can truncate all tables, but I still get the same error: Cannot truncate table '' because it is being referenced by a FOREIGN KEY constraint. I tried

        EXEC sp_msforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"
        EXEC sp_MSforeachtable "TRUNCATE TABLE ?"

    and I tried this for each table:

        ALTER TABLE [Table Name] NOCHECK CONSTRAINT ALL
        TRUNCATE TABLE [Table Name]
        ALTER TABLE [Table Name] CHECK CONSTRAINT ALL

    Every time I get the same error message. Could anyone please help me solve such a problem?
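
    For readers hitting the same wall: SQL Server refuses to TRUNCATE any table referenced by a FOREIGN KEY constraint even while that constraint is disabled with NOCHECK, which is why the commands above keep failing. A hedged sketch of the usual DELETE-based workaround (the CHECKIDENT line assumes the table has an identity column; drop it otherwise):

        EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
        EXEC sp_MSforeachtable 'DELETE FROM ?'
        EXEC sp_MSforeachtable 'DBCC CHECKIDENT (''?'', RESEED, 0)'  -- only for tables with an identity column
        EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL'

    The alternative is to drop the foreign keys, truncate, and recreate them.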

    Read the article

  • Can you add identity to an existing column in SQL Server 2008?

    - by bmutch
    In all my searching I see that for pre-2008 you essentially have to copy the existing table to a new table to change a column to an identity column. Does this apply to 2008 also? Thanks. The most concise solution I have found so far:

        CREATE TABLE Test (
            id int identity(1,1),
            somecolumn varchar(10)
        );
        INSERT INTO Test VALUES ('Hello');
        INSERT INTO Test VALUES ('World');

        -- copy the table: same schema, but no identity
        CREATE TABLE Test2 (
            id int NOT NULL,
            somecolumn varchar(10)
        );
        ALTER TABLE Test SWITCH TO Test2;

        -- drop the original (now empty) table
        DROP TABLE Test;

        -- rename new table to old table's name
        EXEC sp_rename 'Test2','Test';

        -- see same records
        SELECT * FROM Test;

    Read the article

  • OBIA on Teradata - Part 1 Loader and Monitoring

    - by Mohan Ramanuja
    The out-of-the-box (OOB) OBIA Informatica mappings come with the TPump loader.

        TPUMP                                             FASTLOAD
        TPump does not lock the table.                    FastLoad applies an exclusive lock on the table.
        The table that TPump is loading can have data.    The table that FastLoad is loading needs to be empty.
        TPump is not efficient with lookups.              FastLoad is more efficient in the absence of lookups.

    Because the OOB mappings use TPump, there is a chance of a bottleneck in the writer thread. The out-of-the-box tables supplied with OBAW in Teradata use ROW_WID as the primary index key on all dimension and fact tables, and integration_id as the primary index key on all staging tables. This reduces skewing of data across Teradata AMPs. You can use an SQL statement similar to the following to determine whether data for a given table is distributed evenly across all AMP vprocs.
    The SQL statement displays the AMP with the most used space through the AMP with the least used space, here investigating data distribution of the w_party_per_d table in database PRJ_CRM_STGC:

        SELECT vproc, CurrentPerm
        FROM DBC.TableSize
        WHERE Databasename = 'PRJ_CRM_STGC'
          AND Tablename = 'w_party_per_d'
        ORDER BY 2 DESC;

    If you suspect distribution problems (skewing) among AMPs, the following is a sample of what you might enter for a three-column PI:

        SELECT HASHAMP (HASHBUCKET (HASHROW (col_x, col_y, col_z))), COUNT(*)
        FROM hash15
        GROUP BY 1
        ORDER BY 2 DESC;

    ETL Error Monitoring

    Error tables - tables whose names start with ET. Their location and name can be specified in the Informatica session as well as in the loader connection.
    Loader log - available on the Informatica server under the session log folder. It gives feedback on loader parameters such as the packing factor to use. These need to be monitored in the production environment; the recommendations made in one environment may not carry over to another.
    Log tables - tables whose names start with TL. These are sparse on information.
    Bad file - the file Informatica generates when there are data quality issues.
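
    As an aside, a commonly used Teradata variant of the same check computes a skew factor from the same view; this is a hedged sketch under the assumption that DBC.TableSize reports per-AMP CurrentPerm, and it is not part of the original post:

        -- 0 = perfectly even distribution across AMPs; values near 100 = heavy skew
        SELECT DatabaseName, TableName,
               (100 - (AVG(CurrentPerm) / MAX(CurrentPerm)) * 100) AS SkewFactor
        FROM DBC.TableSize
        WHERE DatabaseName = 'PRJ_CRM_STGC'
          AND TableName = 'w_party_per_d'
        GROUP BY DatabaseName, TableName;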

    Read the article

  • Order of calls to set functions when invoking a flex component

    - by Jason
    I have a component called a TableDataViewer that contains the following pieces of data and their associated set functions:

        [Bindable] private var _dataSetLoader:DataSetLoader;
        public function get dataSetLoader():DataSetLoader { return _dataSetLoader; }
        public function set dataSetLoader(dataSetLoader:DataSetLoader):void {
            trace("setting dSL");
            _dataSetLoader = dataSetLoader;
        }

        [Bindable] private var _table:Table = null;
        public function set table(table:Table):void {
            trace("setting table");
            _table = table;
            _dataSetLoader.load(_table.definition.id, "viewData", _table.definition.id);
        }

    This component is nested in another component as follows:

        <ve:TableDataViewer width="100%" height="100%" paddingTop="10"
            dataSetLoader="{_openTable.dataSetLoader}"
            table="{_openTable.table}"/>

    Looking at the trace in the logs, the call to set table comes before the call to set dataSetLoader. Which is a real shame, because set table() needs dataSetLoader to already be set in order to call its load() function. So my question is: is there a way to enforce an order on the calls to the set functions when declaring a component?

    Read the article

  • Is it considered good SQL practice to use a GUID to link multiple tables to the same id field?

    - by Mallow
    I want to link several tables to a many-to-many (m2m) table. One table, location, would always be on one side of the m2m table, but the other side could be any of several tables, for example: Cards, Photographs, Illustrations, Vectors. Would using GUIDs across these tables, linking them to a single column in another table, be considered good practice? Will MySQL let me have it automatically cascade updates and deletes? If so, would multiple cascades lead to any issues? UPDATE: I've read that a GUID (a hex number) generally takes up more space in a database and slows queries down. However, I could still generate 'unique' ids by including the table's initial as part of the id, so that the Cards table's ids would be c0001 and the Illustrations ids would be I001. Regardless of this change, the question still stands.
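
    For illustration, a hedged sketch of the design being asked about; every name here is hypothetical, and the closing comment addresses the cascade question:

        -- Each linkable table carries a GUID primary key that is unique
        -- across cards, photographs, illustrations and vectors.
        CREATE TABLE location (
            id CHAR(36) PRIMARY KEY
        );
        CREATE TABLE cards (
            id CHAR(36) PRIMARY KEY
        );
        CREATE TABLE location_item (
            location_id CHAR(36) NOT NULL,
            item_id     CHAR(36) NOT NULL,  -- points into cards, photographs, ...
            PRIMARY KEY (location_id, item_id),
            FOREIGN KEY (location_id) REFERENCES location(id)
                ON UPDATE CASCADE ON DELETE CASCADE
            -- No foreign key is possible on item_id: a MySQL foreign key must
            -- reference exactly one table, so updates and deletes cannot cascade
            -- automatically from the many linked tables to this one.
        );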

    Read the article

  • What is the MM/DD/YYYY regular expression and how do I use it in php?

    - by zeckdude
    I found the regular expression for MM/DD/YYYY at http://www.regular-expressions.info/regexbuddy/datemmddyyyy.html but I don't think I am using it correctly. Here's my code:

        $date_regex = '(0[1-9]|1[012])[- /.](0[1-9]|[12][0-9]|3[01])[- /.](19|20)\d\d';
        $test_date = '03/22/2010';
        if (preg_match($date_regex, $test_date)) {
            echo 'this date is formatted correctly';
        } else {
            echo 'this date is not formatted correctly';
        }

    When I run this, it still echoes 'this date is not formatted correctly', when it should be saying the opposite. How do I set this regular expression up in PHP?

    Read the article

  • Can a repeated piece of regular expression create multiple groups? Such as this example...

    - by Yousui
    Hi guys, I'm using Ruby's regular expressions to deal with text such as:

        ${1:aaa|bbbb}
        ${233:aaa | bbbb | ccc ccccc }
        ${34: aaa | bbbb | cccccccc |d}
        ${343: aaa | bbbb | cccccccc |dddddd ddddddddd}
        ${3443:a aa|bbbb|cccccccc|d}
        ${353:aa a| b b b b | c c c c c c c c | dddddd}

    I want to get the trimmed text between the pipes. For example, for the first line above I want to get the results aaa and bbbb; for the second line I want aaa, bbbb and ccc ccccc. I have written a piece of regular expression and some Ruby code to test it:

        array = "${33:aaa|bbbb|cccccccc}".scan(/\$\{\s*(\d+)\s*:(\s*[^\|]+\s*)(?:\|(\s*[^\|]+\s*))+\}/)
        puts array

    My problem is that the (?:\|(\s*[^\|]+\s*))+ part can't create multiple groups. I don't know how to solve this, because the number of pieces of text I need from each line is variable. Can anyone help? Great thanks.

    Read the article

  • Is it possible to "learn" a regular expression by user-provided examples?

    - by DR
    Is it possible to "learn" a regular expression by user-provided examples? To clarify: I do not want to learn regular expressions. I want to create a program which "learns" a regular expression from examples which are interactively provided by a user, perhaps by selecting parts from a text or selecting begin or end markers. Is it possible? Are there algorithms, keywords, etc. which I can Google for? EDIT: Thank you for the answers, but I'm not interested in tools which provide this feature. I'm looking for theoretical information, like papers, tutorials, source code, names of algorithms, so I can create something for myself.

    Read the article

  • Database design for a Fantasy league

    - by Samidh T
    Here's the basic schema for my database:

        Table user {
            userid number primary key,
            count  number
        }

        Table player {
            pid number primary key
        }

        Table user_player {
            userid number primary key foreign key(user),
            pid    number primary key foreign key(player)
        }

        Table temp {
            pid    number primary key,
            points number
        }

    Here's what I intend to do: after every match, the temp table is updated; it holds the ids of the players that played the last match and the points they earned. Next, run a procedure that matches each pid from the temp table with every userid in the user_player table having the same pid, adds the points from the temp table to the count in the user table for every matching userid, and then empties the temp table. My question is: considering 200 players and 10,000 users, will this method be efficient? I am going to be using MySQL for this. A sketch of the update step appears below.
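
    A minimal sketch of the procedure body being described, under the stated schema (MySQL syntax; user_player is spelled with an underscore because a bare hyphen is not valid in an unquoted identifier):

        -- Add each player's points from temp to every owning user's count,
        -- then clear temp for the next match.
        UPDATE user u
        JOIN user_player up ON up.userid = u.userid
        JOIN temp t         ON t.pid     = up.pid
        SET u.count = u.count + t.points;

        TRUNCATE TABLE temp;

    With roughly 200 players and 10,000 users the join touches at most a few hundred thousand rows, which is trivial for MySQL as long as user_player is indexed on pid.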

    Read the article

  • Can I compile and execute a C# expression without saving the assembly to disk?

    - by Sasha
    I can compile, get an instance of, and invoke a method of any C# type programmatically. There's lots of info on that, including on StackOverflow (http://stackoverflow.com/questions/53844/how-can-i-evaluate-a-c-expression-dynamically). My problem is that I'm in the web environment and cannot save anything to the /bin directory. I can compile "in-memory" as the above-mentioned link suggests, but then I won't be able to "unload" my custom assembly from the current AppDomain. After a while, that will become a huge memory problem. Is it possible to open a new AppDomain, compile a new assembly "in-memory", evaluate some expression or access some member of that assembly inside of that new AppDomain, and kill that AppDomain safely when done, all without saving anything to the hard drive? Thanks in advance for any links, suggestions, etc.

    Read the article

  • Select records by comparing subsets

    - by devnull
    Given two tables (the rows in each table are distinct):

        1)  x | y      z        2)  x | y      z
            -----      ---          -----      ---
            1 | a      a            1 | a      a
            1 | b      b            1 | b      b
            2 | a                   1 | c
            2 | b                   2 | a
            2 | c                   2 | b
                                    2 | c

    Is there a way to select the values in the x column of the first table for which all the values in the y column (for that x) are found in the z column of the second table? In case 1), the expected result is 1. If c is added to the second table, then the expected result is 2. In case 2), the expected result is no record, since neither of the subsets in the first table matches the subset in the second table. If c is added to the second table, then the expected result is 1, 2. I've tried using EXCEPT and INTERSECT to compare subsets of the first table with the second table, which works fine, but it takes too long on the INTERSECT part and I can't figure out why (the first table has about 10,000 records and the second has around 10). EDIT: I've updated the question to provide an extra scenario.
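
    One standard shape for this set-equality test, offered as a hedged sketch (the names a(x, y) and b(z) are hypothetical stand-ins for the two tables):

        -- x qualifies when every y matched a z AND the y-set has the same
        -- cardinality as the whole z-set, i.e. the two sets are identical.
        SELECT a.x
        FROM a
        LEFT JOIN b ON b.z = a.y
        GROUP BY a.x
        HAVING COUNT(b.z) = COUNT(*)
           AND COUNT(*) = (SELECT COUNT(*) FROM b);

    Because this is a single pass plus one small subquery, it tends to behave better at 10,000 rows than repeated EXCEPT/INTERSECT comparisons per x.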

    Read the article

  • SQL (partial) search in a list and get matched fields

    - by qods
    I have two tables. I want to search for each TermID from Table-A within the TermIDs of Table-B, and wherever a Table-B TermID contains the Table-A TermID, I want to get a result table as shown below. The TermIDs are of different lengths, so there is no fixed pattern to search with LIKE '%'; the TermIDs in Table-A are part of the TermIDs in Table-B. Regards,

        Table-A
        ID         TermID
        101256666  126006230
        101256586  126006231
        101256810  126006233
        101256841  126006238
        101256818  126006239
        101256734  1190226408
        101256809  1190226409
        101256585  1200096999
        101256724  1200096997
        101256748  1200097005

        Table-B
        TermNo  TermID
        14      8990010901190226366F
        16      8990010901190226374F
        15      8990010901190226382F
        18      8990010901190226408F
        19      8990010901190226416F
        11      8990010901200096981F
        10      8990010901200096999F
        12      8990010901200097005F
        13      8990010901200097013F
        17      8990010901260062337F

    As a result I want to get this table:

        Result table (TableA.ID, TableA.TermID, TableB.TermNo)
        A.ID       A.TermID    B.TermNo
        101256734  1190226408  18
        101256585  1200096999  10
        101256748  1200097005  12
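
    Given that in the sample every matching Table-A TermID sits immediately before the trailing 'F' of a Table-B TermID, one hedged way to express the join (T-SQL; assumes both TermID columns are character types, and the table names are hypothetical):

        SELECT a.ID, a.TermID, b.TermNo
        FROM TableA AS a
        JOIN TableB AS b
          ON b.TermID LIKE '%' + a.TermID + 'F';  -- the TermID must end right before the final F

    A bare '%' + a.TermID + '%' would also pull in prefix matches (for example 126006233 inside 8990010901260062337F), which the expected result excludes.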

    Read the article

  • How do we match any single character including line feed in Perl regular expression?

    - by bobo
    I would like to use an UltraEdit regular expression (Perl style) to replace the following text with some other text in a bunch of HTML files:

        <style type="text/css">
        #some-id{}
        .some-class{}
        //many other css styles follow
        </style>

    I tried to use <style type="text/css">.*</style>, but of course it wouldn't match anything, because the dot matches any character except line feed. I would like to match line feeds as well, and the line feed may be either \r\n or \n. What should the regular expression look like? Many thanks to you all.

    Read the article

  • How can I run some common code from both (a) scheduled via Windows Task & (b) manually from within WinForms?

    - by Greg
    Hi, QUESTION - How can I run some common code both (a) scheduled via a Windows Task and (b) manually from within a WinForms app? BACKGROUND: This follows on from the http://stackoverflow.com/questions/2489999/how-can-i-schedule-tasks-in-a-winforms-app thread. REQUIREMENTS: a C# .NET v3.5 project using VS2008; there is an existing function which I want to run both (a) manually from within the WinForms application and (b) scheduled via a Windows Task. APPROACHES: What I'm trying to understand is what options there are to make this work, e.g.:

    - Is it possible for a Windows task to trigger a function to run within a running/existing WinForms application? (doesn't sound solid, I guess)
    - Split the code out into two projects and duplicate it, for both a console application that the Task Scheduler would run and the code that the WinForms app would run.
    - Create a common library and re-use it for both of the above-mentioned projects.
    - Create a service with an interface that both the Task Scheduler can access and the WinForms app can manage.

    Actually, each of these approaches sounds quite messy/complex. It would be really nice to drop back to having the code only once, within the one project in VS2008. The only reason I ask about this is that I need a scheduling function, and the suggestion has been to use http://taskscheduler.codeplex.com/ as the means to do this, which takes the scheduling out of my VS2008 project... thanks

    Read the article

  • How can I find the common ancestor of two nodes in a binary tree?

    - by Siddhant
    The binary tree here is not a binary search tree; it's just a binary tree. The structure could be taken as:

        struct node {
            int data;
            struct node *left;
            struct node *right;
        };

    The best solution I could work out with a friend was something of this sort. Consider this binary tree (from http://lcm.csa.iisc.ernet.in/dsa/node87.html): the inorder traversal yields 8, 4, 9, 2, 5, 1, 6, 3, 7, and the postorder traversal yields 8, 9, 4, 5, 2, 6, 7, 3, 1. So, for instance, if we want to find the common ancestor of nodes 8 and 5, we make a list of all the nodes which lie between 8 and 5 in the inorder traversal, which in this case happens to be [4, 9, 2]. Then we check which node in this list appears last in the postorder traversal, which is 2. Hence the common ancestor of 8 and 5 is 2. The complexity of this algorithm, I believe, is O(n) (O(n) for the inorder/postorder traversals, the remaining steps again being O(n) since they are nothing more than simple iterations over arrays). But there is a strong chance that this is wrong. :-) This is a very crude approach, and I'm not sure whether it breaks down for some case. Is there any other (possibly more optimal) solution to this problem?

    Read the article

  • What do you call a set of Javascript closures that share a common context?

    - by Ed Stauff
    I've been trying to learn closures (in JavaScript), which kind of hurts my brain after way too many years with C# and C++. I think I now have a basic understanding, but one thing bothers me: I've visited lots of websites in this quest for knowledge, and nowhere have I seen a word (or even a simple two-word phrase) that means "a set of JavaScript closures that share a common execution context". For example:

        function CreateThingy (name, initialValue) {
            var myName = name;
            var myValue = initialValue;
            var retObj = new Object;
            retObj.getName = function() { return myName; }
            retObj.getValue = function() { return myValue; }
            retObj.setValue = function(newValue) { myValue = newValue; }
            return retObj;
        };

    From what I've read, this seems to be one common way of implementing data hiding. The value returned by CreateThingy is, of course, an object, but what would you call the set of functions which are properties of that object? Each one is a closure, but I'd like a name I can use to describe (and think about) all of them together as one conceptual entity, and I'd rather use a commonly accepted name than make one up. Thanks! -- Ed

    Read the article

  • RHCS: GFS2 in A/A cluster with common storage. Configuring GFS with rgmanager

    - by Pavel A
    I'm configuring a two-node A/A cluster with common storage attached via iSCSI, which uses GFS2 on top of clustered LVM. So far I have prepared a simple configuration, but am not sure which is the right way to configure the gfs resource. Here is the rm section of /etc/cluster/cluster.conf:

        <rm>
            <failoverdomains>
                <failoverdomain name="node1" nofailback="0" ordered="0" restricted="1">
                    <failoverdomainnode name="rhc-n1"/>
                </failoverdomain>
                <failoverdomain name="node2" nofailback="0" ordered="0" restricted="1">
                    <failoverdomainnode name="rhc-n2"/>
                </failoverdomain>
            </failoverdomains>
            <resources>
                <script file="/etc/init.d/clvm" name="clvmd"/>
                <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs" device="/dev/vg-cs/lv-gfs"/>
            </resources>
            <service name="shared-storage-inst1" autostart="0" domain="node1" exclusive="0" recovery="restart">
                <script ref="clvmd">
                    <clusterfs ref="gfs"/>
                </script>
            </service>
            <service name="shared-storage-inst2" autostart="0" domain="node2" exclusive="0" recovery="restart">
                <script ref="clvmd">
                    <clusterfs ref="gfs"/>
                </script>
            </service>
        </rm>

    This is what I mean: when using the clusterfs resource agent to handle a GFS partition, it is not unmounted by default (unless the force_unmount option is given). This way, when I issue

        clusvcadm -s shared-storage-inst1

    clvm is stopped, but GFS is not unmounted, so the node cannot alter the LVM structure on shared storage anymore, but can still access data. And even though a node can do that quite safely (dlm is still running), this seems rather inappropriate to me, since clustat reports that the service on that node is stopped. Moreover, if I later try to stop cman on that node, it will find a dlm lock produced by GFS and fail to stop. I could simply add force_unmount="1", but I would like to know the reason behind the default behavior. Why is it not unmounted? Most of the examples out there silently use force_unmount="0", some don't, but none of them give any clue about how the decision was made. Apart from that, I have found sample configurations where people manage GFS partitions with the gfs2 init script - https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Defining_The_Resources - or even simply enable services such as clvm and gfs2 to start automatically at boot (http://pbraun.nethence.com/doc/filesystems/gfs2.html), like:

        chkconfig gfs2 on

    If I understand the latter approach correctly, such a cluster only controls whether nodes are still alive and can fence errant ones, but it has no control over the status of its resources. I have some experience with Pacemaker, and I'm used to all resources being controlled by the cluster, so that action can be taken not only when there are connectivity issues, but also when any of the resources misbehave. So, which is the right way for me to go:

    - Leave the GFS partition mounted (any reasons to do so?)
    - Set force_unmount="1". Won't this break anything? Why is this not the default?
    - Use a script resource <script file="/etc/init.d/gfs2" name="gfs"/> to manage the GFS partition.
    - Start it at boot and don't include it in cluster.conf (any reasons to do so?)

    This may be a sort of question that cannot be answered unambiguously, so it would also be of much value to me if you shared your experience or expressed your thoughts on the issue. How does, for example, /etc/cluster/cluster.conf look when configuring gfs with Conga or ccs (they are not available to me, since for now I have to use Ubuntu for the cluster)? Thank you very much!

    Read the article

  • Is it possible to have multiple sets of key columns in a table?

    - by Peter Larsson
    Filtered indexes are one of my new favorite things with SQL Server 2008. I am currently working on designing a new data warehouse. There are two restrictions in doing this:

    - It has to be fed from the old legacy system with both historical data and new data.
    - It has to be fed from the new business system with new data.

    When we incorporate the new business system, we are going to do that for one market only. That means the old legacy business system will still produce new data for the other markets (together with historical data for all markets), and the new business system will produce new data for that one market only. Sound interesting so far? To accomplish this I did thorough research into the business requirements and the business intelligence needs. Then I went on to design the sucker. How does this relate to filtered indexes, you ask? I'll give one example: the stock transaction table. Well, the key columns for the old legacy system are different from the key columns in the new business system. The old legacy system has a key of 5 columns:

    - Movement date
    - Movement time
    - Product code
    - Order number
    - Sequence number within shipment

    And on top of everything, I found out that the Movement Time column is not really a time. It starts out like a time, HH:MM:SS, but seconds are added for each delivery within the shipment, so a Movement Time can look like "12:11:68". The sequence number is ordered over the distributors for the shipment. As I said, it is a legacy system. The new business system has one key column, the Movement DateTime (accuracy down to 100 nanoseconds). So how to deal with this? One option would be to have two stock transaction tables, one for the legacy system and one for the new business system. But that would lead to a maintenance overhead and to using partitioned views for getting data out of the warehouse. Filtered indexes are of great use here:

        MovementDate   DATETIME2(7)
        MovementTime   CHAR(8)     NULL
        ProductCode    VARCHAR(15) NOT NULL
        OrderNumber    VARCHAR(30) NULL
        SequenceNumber INT         NULL

    The sequence number is not even used in the new system, so I created a new IDENTITY column, with the clustered index on it, which can be shared by both systems. Then I created one unique filtered index for the old system like this:

        CREATE UNIQUE NONCLUSTERED INDEX IX_Legacy (MovementDate, MovementTime, ProductCode, SequenceNumber)
        INCLUDE (OrderNumber, Col5, Col6, ... )
        WHERE SequenceNumber IS NOT NULL

    And then I created a unique filtered index for the new business system like this:

        CREATE UNIQUE NONCLUSTERED INDEX IX_Business (MovementDate)
        INCLUDE (ProductCode, OrderNumber, Col12, ... )
        WHERE SequenceNumber IS NULL

    This way I can have multiple sets of key columns on the same base table, shared by both systems.
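
    To make the idea concrete, here is a hedged reconstruction of the base table the post implies; the table name, constraint name, and ON clauses are assumptions, since the post shows only the column list and the two filtered indexes:

        -- Hypothetical DDL for the shared stock transaction table
        CREATE TABLE StockTransaction (
            StockTransactionID INT IDENTITY(1,1) NOT NULL,  -- surrogate key shared by both systems
            MovementDate       DATETIME2(7) NOT NULL,
            MovementTime       CHAR(8)      NULL,           -- legacy pseudo-time, e.g. '12:11:68'
            ProductCode        VARCHAR(15)  NOT NULL,
            OrderNumber        VARCHAR(30)  NULL,
            SequenceNumber     INT          NULL,           -- NULL marks rows from the new system
            CONSTRAINT PK_StockTransaction PRIMARY KEY CLUSTERED (StockTransactionID)
        );

        -- Legacy rows carry a SequenceNumber
        CREATE UNIQUE NONCLUSTERED INDEX IX_Legacy
            ON StockTransaction (MovementDate, MovementTime, ProductCode, SequenceNumber)
            INCLUDE (OrderNumber)
            WHERE SequenceNumber IS NOT NULL;

        -- New-system rows: MovementDate alone is the key
        CREATE UNIQUE NONCLUSTERED INDEX IX_Business
            ON StockTransaction (MovementDate)
            INCLUDE (ProductCode, OrderNumber)
            WHERE SequenceNumber IS NULL;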

    Read the article
