Search Results

Search found 12215 results on 489 pages for 'identity column'.

Page 107/489 | < Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >

  • Filtering option list values based on security in UCM

    - by kyle.hatlestad
    Fellow UCM blog writer John Sim recently posted a comment asking about filtering values based on the user's security. I had never dug into that detail before, but thought I would take a look. It ended up being trickier than I originally thought and required a bit of insider knowledge, so I thought I would share. The first step is to create the option list table in Configuration Manager. You want to define the column for the option list value and any other columns desired. You then want to have a column which will store the security attribute to apply to the option list value. In this example, we'll name the column 'dGroupName'. The next step is to create a View based on the new table. For the Internal and Visible column, you can select the option list column name. Then click on the Security tab, uncheck the 'Publish view data' checkbox and select the 'Use standard document security' radio button. Click on the 'Edit Values...' button and add the values for the option list. In the dGroupName field, enter the Security Group (or Account, if you use Accounts for security) to apply to that value. Create the custom metadata field and apply the View just created. The next step requires file system access to the server. Open the file [ucm directory]\data\schema\views\[view name].hda in a text editor. Below the line '@Properties LocalData', add the line: schSecurityImplementorColumnMap=dGroupName:dSecurityGroup The 'dGroupName' value designates the column in the table which stores the security value. 'dSecurityGroup' indicates the type of security to check against; it would be 'dDocAccount' if using Accounts. Save the file and restart UCM. Now when a user goes to the check-in page, they will only see the options for which they have read and write privileges to the associated Security Group. And on the Search page, they will see the options for which they have just read access. One thing to note: if a value that a user normally can't view on Check-in or Search is applied to a document, and the document is viewable by the user, the user will be able to see the value on the Content Information screen.
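
    For orientation, the edited section of the view's .hda file would look roughly like this. This is a minimal sketch: the other properties in a real view definition will vary, and the '@end' terminator is assumed from the standard HDA property-section layout.

        @Properties LocalData
        ...
        schSecurityImplementorColumnMap=dGroupName:dSecurityGroup
        @end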

    Read the article

  • Tips for adapting Date table to Power View forecasting #powerview #powerbi

    - by Marco Russo (SQLBI)
    During the keynote of the PASS Business Analytics Conference, Amir Netz presented the new forecasting capabilities in Power View for Office 365. I immediately tried the new feature (which was immediately available, a welcome surprise in a Microsoft announcement for a new release) and I had several issues trying to use existing data models. The forecasting has a few requirements that are not compatible with the “best practices” commonly used for a calendar table until this announcement. For example, if you have a Year-Month-Day hierarchy and you want to display a line chart aggregating data at the month level, you use a column containing month and year as a string (e.g. May 2014) sorted by a numeric column (such as 201405). Such a column cannot be used in the x-axis of a line chart for forecasting, because you need a date or numeric column. There are also other requirements and I wrote the article Prepare Data for Power View Forecasting in Power BI on SQLBI, describing how to create columns that can be used with the new forecasting capabilities in Power View for Office 365.
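
    To make this concrete, here is a minimal T-SQL sketch of adding a month-granularity date column to a calendar table; the table and column names are hypothetical, and in a Power Pivot model the same idea can be expressed as a calculated column instead.

        -- Hypothetical calendar table with integer year and month columns.
        -- A real date at month granularity can sit on the forecasting x-axis,
        -- unlike a "May 2014" text column.
        ALTER TABLE dbo.Calendar
            ADD MonthStart AS DATEFROMPARTS(CalendarYear, CalendarMonth, 1);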

    Read the article

  • Why to avoid SELECT * from tables in your Views

    - by Jeff Smith
    -- clean up any messes left over from before:
    if OBJECT_ID('AllTeams') is not null
        drop view AllTeams
    go
    if OBJECT_ID('Teams') is not null
        drop table Teams
    go

    -- sample table:
    create table Teams
    (
        id int primary key,
        City varchar(20),
        TeamName varchar(20)
    )
    go

    -- sample data:
    insert into Teams (id, City, TeamName)
    select 1, 'Boston', 'Red Sox' union all
    select 2, 'New York', 'Yankees'
    go

    create view AllTeams
    as
        select * from Teams
    go

    select * from AllTeams

    --Results:
    --
    --id          City                 TeamName
    ------------- -------------------- --------------------
    --1           Boston               Red Sox
    --2           New York             Yankees

    -- Now, add a new column to the Teams table:
    alter table Teams add League varchar(10)
    go

    -- put some data in there:
    update Teams set League = 'AL'

    -- run it again
    select * from AllTeams

    --Results:
    --
    --id          City                 TeamName
    ------------- -------------------- --------------------
    --1           Boston               Red Sox
    --2           New York             Yankees

    -- Notice that League is not displayed!

    -- Here's an even worse scenario, when the table gets altered in ways beyond adding columns:
    drop table Teams
    go

    -- recreate table putting the League column before the City:
    -- (i.e., simulate re-ordering and/or inserting a column)
    create table Teams
    (
        id int primary key,
        League varchar(10),
        City varchar(20),
        TeamName varchar(20)
    )
    go

    -- put in some data:
    insert into Teams (id, League, City, TeamName)
    select 1, 'AL', 'Boston', 'Red Sox' union all
    select 2, 'AL', 'New York', 'Yankees'

    -- Now, select again from our view:
    select * from AllTeams

    --Results:
    --
    --id          City       TeamName
    ------------- ---------- --------------------
    --1           AL         Boston
    --2           AL         New York

    -- The column labeled "City" in the View is actually the League,
    -- and the column labeled "TeamName" is actually the City!
    go

    -- clean up:
    drop view AllTeams
    drop table Teams
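
    The fix the title points at is simply to name the columns. A minimal sketch of the same view with an explicit column list (using only the columns from the sample table above): the view then keeps returning exactly those columns, in that order, no matter how the underlying table is later rearranged.

        create view AllTeams
        as
            select id, City, TeamName
            from Teams
        go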

    Read the article

  • SQL SERVER – Finding Different ColumnName From Almost Identical Tables

    - by pinaldave
    I have mentioned earlier on this blog that I love social media – Facebook and Twitter. I receive so many interesting questions that sometimes I wonder how I have never faced them in a real life scenario. Well, let us look at one such situation. Here is one of the questions which I received on my social media handle. "Pinal, I have a large database. I did not develop this database, but I have inherited it. In our database we have many tables, and all the tables are in pairs: we have one archive table and one current table. Now here is the interesting situation. For a while, due to some reason, our organization stopped paying attention to archive data. We did not archive anything for a while. If this was not enough, we even changed the schema of the current table but did not change the corresponding archive table. This is now becoming a huge problem. We know for sure that we have added a few columns to the current table, but we do not know which ones. Is there any way we can figure out which new columns were added to the current table and do not exist in the archive table? We cannot use any third party tool. Would you please guide us?" Well, here is an interesting example of how we can use the sys.columns catalog view to get the details of the newly added columns. I have previously written about EXCEPT over here, which is very similar to Oracle's MINUS. In the following example we are going to create two tables; one of the tables has an extra column. In our resultset we will get the name of the extra column, because we are comparing the column names from the catalog view.

    USE AdventureWorks2012
    GO
    CREATE TABLE ArchiveTable (ID INT, Col1 VARCHAR(10), Col2 VARCHAR(100), Col3 VARCHAR(100));
    CREATE TABLE CurrentTable (ID INT, Col1 VARCHAR(10), Col2 VARCHAR(100), Col3 VARCHAR(100), ExtraCol INT);
    GO
    -- Columns in ArchiveTable but not in CurrentTable
    SELECT name ColumnName
    FROM sys.columns
    WHERE OBJECT_NAME(OBJECT_ID) = 'ArchiveTable'
    EXCEPT
    SELECT name ColumnName
    FROM sys.columns
    WHERE OBJECT_NAME(OBJECT_ID) = 'CurrentTable'
    GO
    -- Columns in CurrentTable but not in ArchiveTable
    SELECT name ColumnName
    FROM sys.columns
    WHERE OBJECT_NAME(OBJECT_ID) = 'CurrentTable'
    EXCEPT
    SELECT name ColumnName
    FROM sys.columns
    WHERE OBJECT_NAME(OBJECT_ID) = 'ArchiveTable'
    GO
    DROP TABLE ArchiveTable;
    DROP TABLE CurrentTable;
    GO

    The queries above return the name of the extra column (ExtraCol). I hope this solves the problem. It is not the most elegant solution ever possible, but it works. Here is the puzzle back to you – what native T-SQL solution would you have provided in this situation? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Positive results, negative results and a current balance: how do you make the starting balance show the current result?

    - by Tine
    I have 3 columns. Column A shows the result when it is positive; if the result is negative, it goes in column B instead. Column B shows the result when it is negative; if the result is positive, it goes in column A instead (meaning either column can hold 0.00 in the cell, an empty zero cell). Column C has the starting assets and should also show the current balance: as the results in A and B add up, column C should show the current result. What is the proper formula for this? I hope I was clear with my problem. Please help. Thanks in advance!

    Read the article

  • Ruby on Rails - How to migrate code from float to decimal?

    - by user1723110
    So I've got some Ruby on Rails code which uses floats a lot (lots of "to_f"). It uses a database with some numbers also stored as the "float" type. I would like to migrate this code and the database to decimal only. Is it as simple as migrating the database columns to decimal (adding a decimal column, copying the float column into the decimal one, deleting the float column, and renaming the decimal column to the old float column's name), and replacing "to_f" with "to_d" in the code? Or do I need to do more than that? Thanks a lot everyone. Raphael
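
    For reference, the column swap described in the question looks roughly like this when written as raw SQL. This is a hedged sketch with a hypothetical Prices table, using SQL Server syntax; a Rails migration would wrap the same steps, and the exact rename syntax differs by database.

        -- Hypothetical table Prices with a float column Amount.
        ALTER TABLE Prices ADD AmountDecimal DECIMAL(12, 4) NULL;    -- add the decimal column
        GO
        UPDATE Prices SET AmountDecimal = Amount;                    -- copy the float data across
        ALTER TABLE Prices DROP COLUMN Amount;                       -- drop the old float column
        EXEC sp_rename 'Prices.AmountDecimal', 'Amount', 'COLUMN';   -- reuse the old column name
        GO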

    Read the article

  • Lining things up while using columns

    - by Charles
    I have a request that may not be possible. I'd like to line up the elements of a form so that the inputs all start at the same place:

    Name:                            [ ]
    Company:                         [ ]
    Some question with a long name:  [ ]

    But my list is (somewhat) long and I would like to show them in multiple columns on screens that are wide enough. Ideally, I'd find a POSH method (table-free is semantically appropriate, I think) that works on a reasonable number of browsers. My current page uses a table. I tried CSS with

    columns: auto;
    -moz-column-count: auto;
    -moz-column-width: auto;
    -webkit-column-count: auto;
    -webkit-column-width: auto;

    but Firefox (at least) won't break a table across columns.

    Read the article

  • Conditional formatting & vlookup

    - by zorama
    Please help me with a formula. The main sheet is Sheet2, column B. I want to compare Sheet1 columns A and B with Sheet2 columns A and B within one workbook: if a Sheet2 column A value is the same as a Sheet1 column A value, and the corresponding Sheet2 column B value is the same as that Sheet1 row's column B value, how do I highlight the Sheet2 column B cell? In other words, highlight it only when the Sheet1 A and B pair and the Sheet2 A and B pair are exactly equal.

    EXAMPLE:

    SHEET 1          SHEET 2          SHEET 2 Result
    A      B         A      B        A      B
    CODE   NO        CODE   NO       CODE   NO
    A      12        B      205      B      205    (highlight to red)
    B      105       B      20       B      20     (highlight to red)
    A      45        B      100      B      100
    A      56        A      56       A      56     (highlight to red)
    A      78        B      25       B      25
    A      100       A      12       A      12     (highlight to red)
    B      77        A      45       A      45     (highlight to red)
    B      108       A      20000    A      20000
    B      20
    B      205

    Read the article

  • Checking All Checkboxes in a GridView Using jQuery

    In May 2006 I wrote two articles that showed how to add a column of checkboxes to a GridView and offer the ability for users to check (or uncheck) all checkboxes in the column with a single click of the mouse. The first article, Checking All CheckBoxes in a GridView, showed how to add "Check All" and "Uncheck All" buttons to the page above the GridView that, when clicked, checked or unchecked all of the checkboxes. The second article, Checking All CheckBoxes in a GridView Using Client-Side Script and a Check All CheckBox, detailed how to add a checkbox to the checkbox column in the grid's header row that would check or uncheck all checkboxes in the column.

    Read the article

  • Listing SQL Columns

    - by Bunch
    When I am writing up stored procedures in SSMS, sometimes I need to know what column types are used in a table. For instance, I will know the table name but I might not remember exactly the length of a varchar column, or whether a column stores its data as an integer or a varchar. And I may not want to scroll through all the tables in Object Explorer to find the one I want. A lot of times it is easier if I can just write a quick query to pull up the information I need. The syntax to do something like this is pretty easy:

    SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
    FROM yourdbname.information_schema.columns
    WHERE TABLE_NAME = 'yourtablename'

    After running that you will get a listing in the Results pane, just like any other query, with the column name, data type and length (if any). Technorati Tags: SQL
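
    As a concrete (hypothetical) example, listing the columns of a Customers table in the current database; the ORDER BY is an optional addition that keeps the columns in table order:

        SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'Customers'
        ORDER BY ORDINAL_POSITION;  -- optional: list columns in the order they appear in the table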

    Read the article

  • New .Net Authentication in 4.5.1

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/11/05/new-.net-authentication-in-4.5.1.aspx
    There has been a lot of traffic on my post about the Simple Membership that came with the File > New Project MVC 4 template in 2012. I was reading the release notes for Visual Studio 2013 and .NET 4.5.1 and they mention a new/updated authentication approach: "ASP.NET Identity is the new membership system for ASP.NET applications. ASP.NET Identity makes it easy to integrate user-specific profile data with application data. ASP.NET Identity also allows you to choose the persistence model for user profiles in your application. You can store the data in a SQL Server database or another data store, including NoSQL data stores such as Windows Azure Storage Tables." There's a great page on the asp.net site that gives an introduction, an overview, how to use it, and how to migrate to it. I won't be doing a new project for a while at work, but I'll definitely be looking into this more when I get the time.

    Read the article

  • [Japanese-language overview of Oracle identity and access management components]

    - by ???02
    The Japanese text of this article is mis-encoded and cannot be recovered. The fragments that survive show it is an overview of Oracle's access management stack: Oracle Adaptive Access Manager (risk-aware, adaptive authentication), Oracle Identity Federation (federated single sign-on supporting SAML, ID-FF, WS-Federation and Windows CardSpace), Oracle Entitlements Server (fine-grained authorization based on the OASIS XACML standard) and Oracle OpenSSO Security Token Service (issuance, renewal and validation of web service security tokens per OASIS WS-Trust). The article closes with a pointer to Oracle Direct.

    Read the article

  • JSF command button attribute is transferred incorrectly

    - by Oleg Rybak
    I have the following code in a JSF page, backed by a JSF managed bean:

    <h:dataTable value="#{poolBean.pools}" var="item">
      <h:column>
        <f:facet name="header">
          <h:outputLabel value="Id"/>
        </f:facet>
        <h:outputText value="#{item.id}"/>
      </h:column>
      <h:column>
        <f:facet name="header">
          <h:outputLabel value="Start Range"/>
        </f:facet>
        <h:inputText value="#{item.startRange}" required="true"/>
      </h:column>
      <h:column>
        <f:facet name="header">
          <h:outputText value="End Range"/>
        </f:facet>
        <h:inputText value="#{item.endRange}" required="true"/>
      </h:column>
      <h:column>
        <f:facet name="header">
          <h:outputText value="Pool type"/>
        </f:facet>
        <h:selectOneMenu value="#{item.poolType}" required="true">
          <f:selectItems value="#{poolBean.poolTypesMenu}"/>
        </h:selectOneMenu>
      </h:column>
      <h:column>
        <f:facet name="header"/>
        <h:commandButton id="ModifyPool" actionListener="#{poolBean.updatePool}" image="img/update.gif" title="Modify Pool">
          <f:attribute name="pool" value="#{item}"/>
        </h:commandButton>
      </h:column>
    </h:dataTable>

    This code fragment is dedicated to editing some collection of items. Each row of the table contains an "edit" button that submits the changed values of the row to the server; it carries the item itself as an attribute. The submit is performed by calling the actionListener method in the backing managed bean. This code runs correctly on GlassFish v2.1, but when the server was updated to GlassFish v2.1.1 the attribute stopped being passed correctly. Instead of passing the edited item (when we change the values in the table row, we are actually changing the underlying object's fields), the source item is submitted to the server, i.e. the item that was previously given to the page. All the changes that were made on the page are discarded. I tried updating the JSF version from 1.2_02 to 1.2_14 (we are using the JSF RI), but it had no effect. Has anyone come across the same problem? Any help and suggestions will be appreciated.

    Read the article

  • Can printf change its parameters??

    - by martani_net
    EDIT: the complete code with main is here: http://codepad.org/79aLzj2H and once again, this is where the weird behaviour is happening:

    for (i = 0; i < tab_size; i++)
    {
        //CORRECT OUTPUT
        printf("%s\n", tableau[i].capitale);
        printf("%s", tableau[i].pays);
        printf("%s\n", tableau[i].commentaire);

        //WRONG OUTPUT
        //printf("%s --- %s --- %s |\n", tableau[i].capitale, tableau[i].pays, tableau[i].commentaire);
    }

    I have an array of the following structure:

    struct T_info
    {
        char capitale[255];
        char pays[255];
        char commentaire[255];
    };

    struct T_info *tableau;

    This is how the array is populated:

    int advance(FILE *f)
    {
        char c;
        c = getc(f);
        if (c == '\n')
            return 0;
        while (c != EOF && (c == ' ' || c == '\t'))
        {
            c = getc(f);
        }
        return fseek(f, -1, SEEK_CUR);
    }

    int get_word(FILE *f, char *buffer)
    {
        char c;
        int count = 0;
        int space = 0;
        while ((c = getc(f)) != EOF)
        {
            if (c == '\n')
            {
                buffer[count] = '\0';
                return -2;
            }
            if ((c == ' ' || c == '\t') && space < 1)
            {
                buffer[count] = c;
                count++;
                space++;
            }
            else
            {
                if (c != ' ' && c != '\t')
                {
                    buffer[count] = c;
                    count++;
                    space = 0;
                }
                else /* more than one space */
                {
                    advance(f);
                    break;
                }
            }
        }
        buffer[count] = '\0';
        if (c == EOF)
            return -1;
        return count;
    }

    void fill_table(FILE *f, struct T_info *tab)
    {
        int line = 0, column = 0;
        fseek(f, 0, SEEK_SET);
        char buffer[MAX_LINE];
        char c;
        int res;
        int i = 0;
        while ((res = get_word(f, buffer)) != -999)
        {
            switch (column)
            {
            case 0:
                strcpy(tab[line].capitale, buffer);
                column++;
                break;
            case 1:
                strcpy(tab[line].pays, buffer);
                column++;
                break;
            default:
                strcpy(tab[line].commentaire, buffer);
                column++;
                break;
            }
            /* if I printf each one alone here, everything works ok */
            // last word in line
            if (res == -2)
            {
                if (column == 2)
                {
                    strcpy(tab[line].commentaire, " ");
                }
                // wrong output here
                printf("%s -- %s -- %s\n", tab[line].capitale, tab[line].pays, tab[line].commentaire);
                column = 0;
                line++;
                continue;
            }
            column = column % 3;
            if (column == 0)
            {
                line++;
            }
            /* EOF reached */
            if (res == -1)
                return;
        }
        return;
    }

    Edit: trying this

    printf("%s -- ", tab[line].capitale);
    printf("%s --", tab[line].pays);
    printf("%s --\n", tab[line].commentaire);

    gives me this result:

    -- --abi -- Emirats arabes unis

    I expect to get:

    Abu Dhabi -- Emirats arabes unis --

    Am I missing something?

    Read the article

  • Problem with evaluating XPath expression in Java

    - by JSteve
    Can somebody help me find the mistake I am making in evaluating the following XPath expression? I want to get all "DataTable" nodes under the node "Model" in my XML through XPath. Here is my XML doc:

    <?xml version="1.0" encoding="UTF-8"?>
    <Root>
      <Application>
        <Model>
          <DataSet name="ds" primaryTable="Members" openRows="1">
            <DataTable name="Members" openFor="write">
              <DataColumn name="id" type="String" mandatory="true" primaryKey="true" valueBy="user"/>
              <DataColumn name="name" type="String" mandatory="true" valueBy="user"/>
              <DataColumn name="address" type="String" mandatory="false" valueBy="user"/>
              <DataColumn name="city" type="String" mandatory="false" valueBy="user"/>
              <DataColumn name="country" type="String" mandatory="false" valueBy="user"/>
            </DataTable>
          </DataSet>
        </Model>
        <View>
          <Composite>
            <Grid>
              <Label value="ID:" row="0" column="0" />
              <Label value="Name:" row="1" column="0" />
              <Label value="Address:" row="2" column="0" />
              <Label value="City:" row="3" column="0" />
              <Label value="Country:" row="4" column="0" />
              <TextField name="txtId" row="0" column="1" />
              <TextField name="txtName" row="1" column="1" />
              <TextField name="txtAddress" row="2" column="1" />
              <TextField name="txtCity" row="3" column="1" />
              <TextField name="txtCountry" row="4" column="1" />
            </Grid>
          </Composite>
        </View>
      </Application>
    </Root>

    Here is the Java code to extract the required node list:

    try {
        DocumentBuilderFactory domFactory = DocumentBuilderFactory.newInstance();
        domFactory.setNamespaceAware(true);
        domFactory.setIgnoringComments(true);
        domFactory.setIgnoringElementContentWhitespace(true);
        DocumentBuilder builder = domFactory.newDocumentBuilder();
        Document dDoc = builder.parse("D:\\TEST\\myFile.xml");
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList nl = (NodeList) xpath.evaluate("//Model//DataTable", dDoc, XPathConstants.NODESET);
        System.out.println(nl.getLength());
    } catch (Exception ex) {
        System.out.println(ex.getMessage());
    }

    There is no problem in loading and parsing the XML file, and I can see the correct nodes in dDoc. The problem is with the XPath expression, which returns nothing when evaluated. I tried many other expressions for testing purposes, but every time the resulting NodeList "nl" does not have any items.

    Read the article

  • Prolog Cut Not Working

    - by user2295607
    I'm having a problem with Prolog, since cut is not doing what (I believe) it's supposed to do:

    % line-column handlers
    checkVallEle(_, _, 6, _):-
        write('FAIL'), !, fail.
    checkVallEle(TABULEIRO, VALUE, LINE, COLUMN):-
        COLUMN>5,
        NL is LINE+1,
        checkVallEle(TABULEIRO, VALUE, NL, 0).
    % if this fails, it goes to the next
    checkVallEle(TABULEIRO, VALUE, LINE, COLUMN):-
        (checkHorizontal(TABULEIRO, VALUE, LINE, COLUMN, 0), write('HORIZONTAL ');
         checkVertical(TABULEIRO, VALUE, LINE, COLUMN, 0), write('VERTICAL');
         checkDiagonalRight(TABULEIRO, VALUE, LINE, COLUMN, 0), write('DIAGONALRIGHT');
         checkDiagonalLeft(TABULEIRO, VALUE, LINE, COLUMN, 0), write('DIAGONALLEFT')),
        write('WIN').
    % goes to the next if above fails
    checkVallEle(TABULEIRO, VALUE, LINE, COLUMN):-
        NC is COLUMN+1,
        checkVallEle(TABULEIRO, VALUE, LINE, NC).

    What I wish to do is that if the code ever reaches the first clause, that is, if the line is ever 6, it fails (since it went out of range), without checking any more possibilities. But what happens is that when it reaches the first clause, it keeps going to the clauses below and ignores the cut symbol, and I don't see why. I just want the call to fail when it reaches that first clause. I also ran an experiment...

    run(6):- write('done'), !, fail.
    run(X):- X1 is X+1, run(X1).

    And this is what I get from tracing:

    | ?- run(0).
     1  1  Call: run(0) ?
     2  2  Call: _1079 is 0+1 ?
     2  2  Exit: 1 is 0+1 ?
     3  2  Call: run(1) ?
     4  3  Call: _3009 is 1+1 ?
     4  3  Exit: 2 is 1+1 ?
     5  3  Call: run(2) ?
     6  4  Call: _4939 is 2+1 ?
     6  4  Exit: 3 is 2+1 ?
     7  4  Call: run(3) ?
     8  5  Call: _6869 is 3+1 ?
     8  5  Exit: 4 is 3+1 ?
     9  5  Call: run(4) ?
    10  6  Call: _8799 is 4+1 ?
    10  6  Exit: 5 is 4+1 ?
    11  6  Call: run(5) ?
    12  7  Call: _10729 is 5+1 ?
    12  7  Exit: 6 is 5+1 ?
    13  7  Call: run(6) ?
    14  8  Call: write(done) ?
    done
    14  8  Exit: write(done) ?
    13  7  Fail: run(6) ?
    11  6  Fail: run(5) ?
     9  5  Fail: run(4) ?
     7  4  Fail: run(3) ?
     5  3  Fail: run(2) ?
     3  2  Fail: run(1) ?
     1  1  Fail: run(0) ?
    no

    What are all those Fails after the write? Is it still backtracking through previous calls? Is this behaviour the reason why the cut is failing in my first code? Please enlighten me.

    Read the article

  • Can I select 0 columns in SQL Server?

    - by Woody Zenfell III
    I am hoping this question fares a little better than the similar Create a table without columns. Yes, I am asking about something that will strike most as pointlessly academic. It is easy to produce a SELECT result with 0 rows (but with columns), e.g. SELECT a = 1 WHERE 1 = 0. Is it possible to produce a SELECT result with 0 columns (but with rows)? e.g. something like SELECT NO COLUMNS FROM Foo. (This is not valid T-SQL.) I came across this because I wanted to insert several rows without specifying any column data for any of them. e.g. (SQL Server 2005) CREATE TABLE Bar (id INT NOT NULL IDENTITY PRIMARY KEY) INSERT INTO Bar SELECT NO COLUMNS FROM Foo -- Invalid column name 'NO'. -- An explicit value for the identity column in table 'Bar' can only be specified when a column list is used and IDENTITY_INSERT is ON. One can insert a single row without specifying any column data, e.g. INSERT INTO Foo DEFAULT VALUES. One can query for a count of rows (without retrieving actual column data from the table), e.g. SELECT COUNT(*) FROM Foo. (But that result set, of course, has a column.) I tried things like INSERT INTO Bar () SELECT * FROM Foo -- Parameters supplied for object 'Bar' which is not a function. -- If the parameters are intended as a table hint, a WITH keyword is required. and INSERT INTO Bar DEFAULT VALUES SELECT * FROM Foo -- which is a standalone INSERT statement followed by a standalone SELECT statement. I can do what I need to do a different way, but the apparent lack of consistency in support for degenerate cases surprises me. I read through the relevant sections of BOL and didn't see anything. I was surprised to come up with nothing via Google either.

    Read the article

  • Manipulating columns of numbers in elisp

    - by ~unutbu
    I have text files with tables like this: Investment advisory and related fees receivable (161,570 ) (71,739 ) (73,135 ) Net purchases of trading investments (93,261 ) (30,701 ) (11,018 ) Other receivables 61,216 (10,352 ) (69,313 ) Restricted cash 20,658 (20,658 ) - Other current assets (39,643 ) 14,752 64 Other non-current assets 71,896 (26,639 ) (26,330 ) Since these are accounting numbers, parenthesized numbers indicate negative numbers. Dashes represent 0 or no number. I'd like to be able to mark a rectangular region such as third column above, call a function (format-column), and automatically have (-73135-11018-69313+64-26330)/1000 sitting in my kill-ring. Even better would be -73.135-11.018-69.313+0.064-26.330 but I couldn't figure out a way to transform 64 -- 0.064. This is what I've come up with: (defun format-column () "format accounting numbers in a rectangular column. format-column puts the result in the kill-ring" (interactive) (let ((p (point)) (m (mark)) ) (copy-rectangle-to-register 0 (min m p) (max m p) nil) (with-temp-buffer (insert-register 0) (goto-char (point-min)) (while (search-forward "-" nil t) (replace-match "" nil t)) (goto-char (point-min)) (while (search-forward "," nil t) (replace-match "" nil t)) (goto-char (point-min)) (while (search-forward ")" nil t) (replace-match "" nil t)) (goto-char (point-min)) (while (search-forward "(" nil t) (replace-match "-" nil t) (just-one-space) (delete-backward-char 1) ) (goto-char (point-min)) (while (search-forward "\n" nil t) (replace-match " " nil t)) (goto-char (point-min)) (kill-new (mapconcat 'identity (split-string (buffer-substring (point-min) (point-max))) "+")) (kill-region (point-min) (point-max)) (insert "(") (yank 2) (goto-char (point-min)) (while (search-forward "+-" nil t) (replace-match "-" nil t)) (goto-char (point-max)) (insert ")/1000") (kill-region (point-min) (point-max)) ) ) ) (global-set-key "\C-c\C-f" 'format-column) Although it seems to work, I'm sure this function is poorly coded. The repetitive calls to goto-char, search-forward, and replace-match and the switching from buffer to string and back to buffer seems ugly and inelegant. My entire approach may be wrong-headed, but I don't know enough elisp to make this more beautiful. Do you see a better way to write format-column, and/or could you make suggestions on how to improve this code?

    Read the article

  • Cross join (pivot) with n-n table containing values

    - by Styx31
    I have 3 tables : TABLE MyColumn ( ColumnId INT NOT NULL, Label VARCHAR(80) NOT NULL, PRIMARY KEY (ColumnId) ) TABLE MyPeriod ( PeriodId CHAR(6) NOT NULL, -- format yyyyMM Label VARCHAR(80) NOT NULL, PRIMARY KEY (PeriodId) ) TABLE MyValue ( ColumnId INT NOT NULL, PeriodId CHAR(6) NOT NULL, Amount DECIMAL(8, 4) NOT NULL, PRIMARY KEY (ColumnId, PeriodId), FOREIGN KEY (ColumnId) REFERENCES MyColumn(ColumnId), FOREIGN KEY (PeriodId) REFERENCES MyPeriod(PeriodId) ) MyValue's rows are only created when a real value is provided. I want my results in a tabular way, as : Column | Month 1 | Month 2 | Month 4 | Month 5 | Potatoes | 25.00 | 5.00 | 1.60 | NULL | Apples | 2.00 | 1.50 | NULL | NULL | I have successfully created a cross-join : SELECT MyColumn.Label AS [Column], MyPeriod.Label AS [Period], ISNULL(MyValue.Amount, 0) AS [Value] FROM MyColumn CROSS JOIN MyPeriod LEFT OUTER JOIN MyValue ON (MyValue.ColumnId = MyColumn.ColumnId AND MyValue.PeriodId = MyPeriod.PeriodId) Or, in linq : from p in MyPeriods from c in MyColumns join v in MyValues on new { c.ColumnId, p.PeriodId } equals new { v.ColumnId, v.PeriodId } into values from nv in values.DefaultIfEmpty() select new { Column = c.Label, Period = p.Label, Value = nv.Amount } And seen how to create a pivot in linq (here or here) : (assuming MyDatas is a view with the result of the previous query) : from c in MyDatas group c by c.Column into line select new { Column = line.Key, Month1 = line.Where(l => l.Period == "Month 1").Sum(l => l.Value), Month2 = line.Where(l => l.Period == "Month 2").Sum(l => l.Value), Month3 = line.Where(l => l.Period == "Month 3").Sum(l => l.Value), Month4 = line.Where(l => l.Period == "Month 4").Sum(l => l.Value) } But I want to find a way to create a resultset with, if possible, Month1, ... properties dynamic. Note : A solution which results in a n+1 query : from c in MyDatas group c by c.Column into line select new { Column = line.Key, Months = from l in line group l by l.Period into period select new { Period = period.Key, Amount = period.Sum(l => l.Value) } }

    Read the article

  • Mapping issue with multi-field primary keys using hibernate/JPA annotations

    - by Derek Clarkson
    Hi all, I'm stuck with a database which is using multi-field primary keys. I have a situation where I have a master and details table, where the details table's primary key contains fields which are also the foreign key's the the master table. Like this: Master primary key fields: master_pk_1 Details primary key fields: master_pk_1 details_pk_2 details_pk_3 In the Master class we define the hibernate/JPA annotations like this: @Id @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "idGenerator") @Column(name = "master_pk_1") private long masterPk1; @OneToMany(cascade=CascadeType.ALL) @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1") private List<Details> details = new ArrayList<Details>(); And in the details class I have defined the id and back reference like this: @EmbeddedId @AttributeOverrides( { @AttributeOverride( name = "masterPk1", column = @Column(name = "master_pk_1")), @AttributeOverride(name = "detailsPk2", column = @Column(name = "details_pk_2")), @AttributeOverride(name = "detailsPk2", column = @Column(name = "details_pk_2")) }) private DetailsPrimaryKey detailsPrimaryKey = new DetailsPrimaryKey(); @ManyToOne @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1", insertable=false) private Master master; The goal of all of this was that I could create a new master, add some details to it, and when saved, JPA/Hibernate would generate the new id for master in the masterPk1 field, and automatically pass it down to the details records, storing it in the matching masterPk1 field in the DetailsPrimaryKey class. At least that's what the documentation I've been looking at implies. What actually happens is that hibernate appears to corectly create and update the records in the database, but not pass the key to the details classes in memory. Instead I have to manually set it myself. I also found that without the insertable=true added to the back reference to master, that hibernate would create sql that had the master_pk_1 field listed twice in the insert statement, resulting in the database throwing an exception. My question is simply is this arrangement of annotations correct? or is there a better way of doing it?

    Read the article

  • Database not completely updated in rails migration

    - by Aatish Sai
    I am new to Ruby on Rails. I have a migration called CreateUsers:

    class CreateUsers < ActiveRecord::Migration
      def change
        create_table :users do |t|
          t.column :username, :string, :limit => 25, :default => "", :null => false
          t.column :hashed_password, :string, :limit => 40, :default => "", :null => false
          t.column :first_name, :string, :limit => 25, :default => "", :null => false
          t.column :last_name, :string, :limit => 40, :default => "", :null => false
          t.column :email, :string, :limit => 50, :default => "", :null => false
          t.column :display_name, :string, :limit => 25, :default => "", :null => false
          t.column :user_level, :integer, :limit => 3, :default => 0, :null => false
        end
        User.create(:username=>'test', :hashed_password=>'test', :first_name=>'test', :last_name=>'test', :email=>'[email protected]', :display_name=>'test', :user_level=>9)
      end
    end

    When I run rake db:migrate the table is created with the columns as mentioned above, but the test data is not there:

    mysql> select * from users;
    Empty set (0.00 sec)

    EDIT: I just dropped the whole database and restarted the migration, and now it is showing the following error:

    rake aborted!
    An error has occurred, all later migrations canceled:
    Can't mass-assign protected attributes: username, hashed_password, first_name, last_name, email, display_name, user_level

    What am I doing wrong? Please help. Thank you.

    Read the article

  • Importing fixtures with foreign keys and SQLAlchemy?

    - by Chris Reid
    I've been experimenting with using fixture to load test data sets into my Pylons / PostgreSQL app. This works great except that it fails to properly create foreign keys if they reference an auto increment id field. My fixture looks like this: class AuthorsData(DataSet): class frank_herbert: first_name = "Frank" last_name = "Herbert" class BooksData(DataSet): class dune: title = "Dune" author_id = AuthorsData.frank_herbert.ref('id') And the model: t_authors = sa.Table("authors", meta.metadata, sa.Column("id", sa.types.Integer, primary_key=True), sa.Column("first_name", sa.types.String(100)), sa.Column("last_name", sa.types.String(100)), ) t_books = sa.Table("books", meta.metadata, sa.Column("id", sa.types.Integer, primary_key=True), sa.Column("title", sa.types.String(100)), sa.Column("author_id", sa.types.Integer, sa.ForeignKey('authors.id')) ) When running "paster setup-app development.ini", SQLAlchemey reports the FK value as "None" so it's obviously not finding it: 15:59:48,683 INFO [sqlalchemy.engine.base.Engine.0x...9eb0] INSERT INTO books (title, author_id) VALUES (%(title)s, %(author_id)s) RETURNING books.id 15:59:48,683 INFO [sqlalchemy.engine.base.Engine.0x...9eb0] {'author_id': None, 'title': 'Dune'} The fixture docs actually warn that this might be a problem: "However, in some cases you may need to reference an attribute that does not have a value until it is loaded, like a serial ID column. (Note that this is not supported by the SQLAlchemy data layer when using sessions.)" http://farmdev.com/projects/fixture/using-dataset.html#referencing-foreign-dataset-classes Does this mean that this is just not supported with SQLAlchemy? Or is it possible to load the data without using SA "sessions"? How are other people handling this issue?

    Read the article

  • Why use INCLUDE in a SQL index

    - by StarLite
    I recently encountered an index in a database I maintain that was of the form:

    CREATE INDEX [IX_Foo] ON [Foo] ( Id ASC ) INCLUDE ( SubId )

    In this particular case, the performance problem that I was encountering (a slow SELECT filtering on both Id and SubId) could be fixed by simply moving the SubId column into the index proper rather than leaving it as an included column. This got me thinking, however, that I don't understand the reasoning behind included columns at all, when generally they could simply be part of the index itself. Even if I don't particularly care about the items being in the index itself, is there any downside to having a column in the index rather than simply being included? After some research, I am aware that there are a number of restrictions on what can go into an indexed column (the maximum width of the index, and some column types that can't be indexed, like 'image'). In those cases I can see that you would be forced to include the column in the index page data. The only thing I can think of is that if there are updates on SubId, the row will not need to be relocated if the column is included (though the value in the index would still need to be changed). Is there something else that I'm missing? I'm considering going through the other indexes in the database and shifting included columns into the index proper where possible. Would this be a mistake? I'm primarily interested in MS SQL Server, but information on other DB engines is welcome also.
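
    For concreteness, the change described at the top of this question (moving SubId from INCLUDE into the index key) would look roughly like this; a hedged sketch, assuming the index can simply be dropped and recreated:

        -- Original form: key on Id only, SubId carried as an included (leaf-level) column.
        DROP INDEX [IX_Foo] ON [Foo];

        -- Replacement: SubId becomes part of the key, which is the change that
        -- fixed the slow SELECT filtering on both Id and SubId.
        CREATE INDEX [IX_Foo] ON [Foo] ( Id ASC, SubId ASC );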

    Read the article

  • CSS Design question, I've got myself completely turned around.

    - by Matt Dawdy
    Okay, I have a couple of other questions out there, but I think I'd better just ask from the beginning how you CSS experts would do this. The client's page is split into 2 rows -- the header has some info, some aligned to the left of the page, some to the right, some in the middle. This is currently done using a table. I'm fine with leaving this alone or changing it. My real question is that I need a page layout to handle the following: 2 columns - the column on the left is 200px, but can be "closed" down to 10px (not a slider, it's either 200 or 10 px). The column on the right needs to be as big as it needs to be -- which might be larger than the width of the page. When the left column is "closed", the right column slides over, of course. Again, this right column might be 300px or it might be 4000 pixels (it's a reporting interface). Now, to add another wrinkle, SOME pages have 3 columns. The first 2 columns are each exactly 200px, and both can be "closed" down to 10px each. But the user might close both columns, just one of them, or neither. The third column needs to act just like I described above, being able to be larger than the page width, and sliding over to take advantage of any of the "closed" left columns. Whew! I'm pretty confused as to how to go about this, as either I get it right but I can't scroll over to the right at all (overflow: hidden) and information is lost, or the right column jumps down below the left 2 columns and just looks plain stupid. My minimum browser requirements are IE8, FF3.5, Chrome and Safari (latest versions of all). Any and all pointers are gladly accepted.

    Read the article

< Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >