Search Results

Search found 7127 results on 286 pages for 'calculated columns'.

Page 219/286

  • Return multiple results using dynamic SQL (PostgreSQL 8.2)

    - by precose
    I want to loop through schemas and get a result set that looks like this:

        Count
        5
        834
        345
        34
        984

    However, I can't get it to return anything using dynamic sql...I've tried everything but 8.2 is being a real pain. Here is my function:

        CREATE OR REPLACE FUNCTION dwh.adam_test4()
        RETURNS void
        LANGUAGE plpgsql
        AS $function$
        DECLARE
            myschema text;
            rec RECORD;
        BEGIN
            FOR myschema IN
                select distinct c.table_schema, d.p_id
                from information_schema.tables t
                inner join information_schema.columns c
                    on (t.table_schema = t.table_schema and t.table_name = c.table_name)
                join dwh.sgmt_clients d
                    on c.table_schema = lower(d.userid)
                where c.table_name = 'fact_members'
                  and c.column_name = 'debit_card'
                  and t.table_schema NOT LIKE 'pg_%'
                  and t.table_schema NOT IN ('information_schema', 'ad_delivery', 'dwh', 'users', 'wand', 'ttd')
                order by table_schema
            LOOP
                EXECUTE 'select count(ucic) from '|| myschema || '.' ||'fact_members where debit_card = ''yes''' into rec;
                RETURN rec;
            END LOOP;
        END
        $function$
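
    On 8.2 the usual way to get rows back out of a loop like this is to declare the function RETURNS SETOF and emit each count with RETURN NEXT instead of RETURN. A minimal, hedged sketch (the schema-selection query is trimmed to its essentials, so treat the filtering as an assumption):

        CREATE OR REPLACE FUNCTION dwh.adam_test4()
        RETURNS SETOF bigint
        LANGUAGE plpgsql
        AS $function$
        DECLARE
            r   RECORD;
            cnt bigint;
        BEGIN
            FOR r IN
                select distinct c.table_schema
                from information_schema.columns c
                where c.table_name = 'fact_members'
                  and c.column_name = 'debit_card'
                  and c.table_schema NOT LIKE 'pg_%'
                order by 1
            LOOP
                -- EXECUTE ... INTO a scalar, then emit it as one result row
                EXECUTE 'select count(ucic) from ' || quote_ident(r.table_schema)
                        || '.fact_members where debit_card = ''yes''' INTO cnt;
                RETURN NEXT cnt;
            END LOOP;
            RETURN;
        END
        $function$;

    Called as select * from dwh.adam_test4(); it returns one count per schema.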

    Read the article

  • ggplot: showing % instead of counts in charts of categorical variables

    - by wishihadabettername
    I'm plotting a categorical variable and, instead of showing the counts for each category value, I'm looking for a way to get ggplot to display the percentage of values in that category. Of course, it is possible to create another variable with the calculated percentage and plot that one, but I have to do it several dozen times and I hope to achieve that in one command. I was experimenting with something like

        qplot(mydataf) +
            stat_bin(aes(n = nrow(mydataf), y = ..count../n)) +
            scale_y_continuous(formatter = "percent")

    but I must be using it incorrectly, as I got errors. To easily reproduce the setup, here's a simplified example:

        mydata <- c("aa", "bb", null, "bb", "cc", "aa", "aa", "aa", "ee", null, "cc");
        mydataf <- factor(mydata);
        qplot(mydataf); # this shows the count; I'm looking to see % displayed.

    In the real case I'll probably use ggplot instead of qplot, but the right way to use stat_bin still eludes me. Thank you. UPDATE: I've also tried these four approaches:

        ggplot(mydataf, aes(y = (..count..)/sum(..count..))) + scale_y_continuous(formatter = 'percent');
        ggplot(mydataf, aes(y = (..count..)/sum(..count..))) + scale_y_continuous(formatter = 'percent') + geom_bar();
        ggplot(mydataf, aes(x = levels(mydataf), y = (..count..)/sum(..count..))) + scale_y_continuous(formatter = 'percent');
        ggplot(mydataf, aes(x = levels(mydataf), y = (..count..)/sum(..count..))) + scale_y_continuous(formatter = 'percent') + geom_bar();

    but all four give:

        Error: ggplot2 doesn't know how to deal with data of class factor

    The same error appears for the simple case of

        ggplot(data = mydataf, aes(levels(mydataf))) + geom_bar()

    so it's clearly something about how ggplot interacts with a single vector. I'm scratching my head; googling for that error gives a single result.

    Read the article

  • How to pull and display range (min-max) data for each page in pagination?

    - by Ty W
    I have a table of data that is searchable and sortable, but likely to produce hundreds or thousands of results for broad searches. Assuming the user searches for "foo" and sorts the foos in descending price order, I'd like to show a quick-jump select menu like so:

        <option value="1">Page 1 ($25,000,000 - $1,625,000)</option>
        <option value="2">Page 2 ($1,600,000 - $1,095,000)</option>
        <option value="3">Page 3 ($1,095,000 - $815,000)</option>
        <option value="4">Page 4 ($799,900 - $699,000)</option>
        ...

    Is there an efficient way of querying for this information directly from the DB? I've been grabbing all of the matching records and using PHP to calculate the min and max value for each page, which seems inefficient and likely to cause scaling problems. The only possible technique I've been able to come up with is some way of having a calculated variable that increments every X records (X records to a page), grouping by that, and selecting MIN/MAX for each page grouping... unfortunately I haven't been able to come up with a way to generate that variable.
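
    A hedged sketch of that "calculated variable" idea in MySQL, using a user variable to number the sorted rows and integer-dividing by the page size; the listings(title, price) table, the LIKE filter, and the 25-row page size are all assumptions, and variable-evaluation order is version-dependent, so test before relying on it:

        SELECT page_no + 1 AS page,
               MAX(price)  AS page_max,
               MIN(price)  AS page_min
        FROM (
            SELECT price,
                   (@row := @row + 1) DIV 25 AS page_no   -- 25 = rows per page (assumed)
            FROM (
                SELECT price
                FROM listings                             -- hypothetical table
                WHERE title LIKE '%foo%'
                ORDER BY price DESC
            ) AS sorted,
            (SELECT @row := -1) AS init
        ) AS numbered
        GROUP BY page_no
        ORDER BY page_no;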

    Read the article

  • Dynamic SQL and Functions

    - by Unlimited071
    Hi all, is there any way of accomplishing something like the following:

        CREATE FUNCTION GetQtyFromID
        (
            @oricod varchar(15),
            @ccocod varchar(15),
            @ocmnum int,
            @oinnum int,
            @acmnum int,
            @acttip char(2),
            @unisim varchar(15)
        )
        AS
        BEGIN
            DECLARE @Result decimal(18,8)
            DECLARE @SQLString nvarchar(max);
            DECLARE @ParmDefinition nvarchar(max);

            -- I need to execute a query stored in a cell which returns the calculated qty.
            -- i.e. of AcuQry: select @cant = sum(smt) from table where oricod = @oricod and ...
            SELECT @SQLString = AcuQry
            FROM OinActUni
            WHERE (OriCod = @oricod) AND (ActTipCod = @acttip) AND (UniSim = @unisim) AND (AcuEst > 0)

            SET @ParmDefinition = N'
                @oricod varchar(15),
                @ccocod varchar(15),
                @ocmnum int,
                @oinnum int,
                @acmnum int,
                @cant decimal(18,8) output';

            EXECUTE sp_executesql @SQLString, @ParmDefinition,
                @oricod = @oricod,
                @ccocod = @ccocod,
                @ocmnum = @ocmnum,
                @oinnum = @oinnum,
                @acmnum = @acmnum,
                @cant = @result OUTPUT;

            RETURN @Result
        END

    The problem with this approach is that it is prohibited to execute sp_executesql in a function... What I need is to do something like:

        select id, getQtyFromID(id) as qty from table

    The main idea is to execute a query stored in a table cell. This is because the qty of something depends on its unit: the unit can be days or it can be metric tons, so there is no relation between the units, therefore the need for a specific query for each unit.
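
    sp_executesql is indeed blocked inside functions; the usual workaround is to move the lookup-and-execute into a stored procedure with an OUTPUT parameter and call that per row (from a cursor or from application code) instead of select id, getQtyFromID(id). A hedged sketch with the parameter list trimmed; the names mirror the question and everything else is an assumption:

        CREATE PROCEDURE GetQtyFromID_sp
            @oricod varchar(15),
            @acttip char(2),
            @unisim varchar(15),
            @Result decimal(18,8) OUTPUT
        AS
        BEGIN
            DECLARE @SQLString nvarchar(max);
            DECLARE @ParmDefinition nvarchar(max);

            -- pull the unit-specific query text from the table, as in the question
            SELECT @SQLString = AcuQry
            FROM OinActUni
            WHERE OriCod = @oricod AND ActTipCod = @acttip AND UniSim = @unisim AND AcuEst > 0;

            SET @ParmDefinition = N'@oricod varchar(15), @cant decimal(18,8) OUTPUT';

            EXECUTE sp_executesql @SQLString, @ParmDefinition,
                @oricod = @oricod,
                @cant = @Result OUTPUT;
        END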

    Read the article

  • PHP --> MSSQL + Excel file upload from a specific row

    - by lucky
    Hello, I have a requirement in which I need to upload an Excel file into an MSSQL database. It is working well up to this point. But sometimes I get an Excel file in which the first 2 rows and columns are empty, and even that is not fixed for every file, so the format of the table after upload is not as expected. It assigns F1, F2, etc. as column names for the table in MSSQL, and it varies from file to file. I want to program it in such a way that the user can enter the row number and the column number where the actual data starts, so that the upload reads from, say, row 3 and column 2. I don't know how to specify that row and column while uploading. Please help me to solve this.

    Read the article

  • How to create a MySQL query for time based elements with a 'safe window'?

    - by pj4533
    I am no SQL expert, far from it. I am writing a Rails application, and I am new at that as well. I come from a desktop programming background. My application has a table of data, and one of the columns is the time at which the data was logged. I want to create a query with a 'safe window' around EACH row. By that I mean: it returns the first row, then for X minutes (based on the TimeLogged column) it won't return any data; once X minutes are up, it will return the next row. For example:

        ID | TimeLogged
        1  | 3/5/2010 12:01:01
        2  | 3/5/2010 12:01:50
        3  | 3/5/2010 12:02:03
        4  | 3/5/2010 12:10:30
        5  | 3/5/2010 01:30:03
        6  | 3/5/2010 01:31:05

    With a 'safe window' of 5 minutes I want to create a query to return:

        1  | 3/5/2010 12:01:01
        4  | 3/5/2010 12:10:30
        5  | 3/5/2010 01:30:03

    (It skipped the 12:01:50 and 12:02:03 items because they occurred within 5 minutes of the first item.) Another example, with a 'safe window' of 15 minutes, I want to return:

        1  | 3/5/2010 12:01:01
        5  | 3/5/2010 01:30:03

    Perhaps I have to just return all data and parse it myself?
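
    Because each "safe window" starts from the previously kept row, plain set-based SQL gets awkward; one hedged option, if parsing in Rails is off the table, is a MySQL stored procedure that walks the rows with a cursor. The log_data table and column names below are assumptions:

        DELIMITER //
        CREATE PROCEDURE safe_window_rows(IN window_minutes INT)
        BEGIN
            DECLARE done INT DEFAULT 0;
            DECLARE v_id INT;
            DECLARE v_ts DATETIME;
            DECLARE last_kept DATETIME DEFAULT NULL;
            DECLARE cur CURSOR FOR
                SELECT id, time_logged FROM log_data ORDER BY time_logged;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

            DROP TEMPORARY TABLE IF EXISTS kept;
            CREATE TEMPORARY TABLE kept (id INT, time_logged DATETIME);

            OPEN cur;
            read_loop: LOOP
                FETCH cur INTO v_id, v_ts;
                IF done = 1 THEN
                    LEAVE read_loop;
                END IF;
                -- keep a row only if it falls outside the window of the last kept row
                IF last_kept IS NULL OR v_ts >= last_kept + INTERVAL window_minutes MINUTE THEN
                    INSERT INTO kept VALUES (v_id, v_ts);
                    SET last_kept = v_ts;
                END IF;
            END LOOP;
            CLOSE cur;

            SELECT id, time_logged FROM kept ORDER BY time_logged;
        END //
        DELIMITER ;

    CALL safe_window_rows(5); would then return rows 1, 4 and 5 from the example above.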

    Read the article

  • (WinForm/.net) Databind List Of Classes To A DataGridView. But Not Show Certain Public Properties

    - by Pyronaut
    I'm not even sure if I'm doing this correctly, but basically I have a list of objects that are built out of a class. From there, I am binding the list to a DataGridView that is on a Windows Form (C#). It shows all the public properties of the object in the DataGridView. However, there are some properties that I still need accessible from other parts of my application but that aren't really required to be visible in the DataGridView. So is there an attribute or something similar that I can write above the property to exclude it from being shown? P.S. I'm binding at runtime, so I cannot edit the columns via the designer. P.P.S. Please no answers of just making public variables (although if that is the only way, let me know :)).

    Read the article

  • How do I switch the table that is queried with linq-to-sql

    - by Ian Ringrose
    We have two tables with the same set of columns; depending on the "type" of object, the value is stored in one of the two tables. I wish to use common code to access these two tables. If I was using "raw SQL" I could just use String.Format() to change the table name (likewise for updates etc.). The two separate tables are needed as the data access patterns are very different for the common queries on the two tables, and therefore different indexes are needed. "Views" and "instead of triggers" etc. to make the tables look like a single table are not liked here. A lot of our customers use a low-end version of SQL Server, so we cannot use partitioned tables.

    Read the article

  • Easy way to compute how close an auto_increment is to its maximum value?

    - by David M
    So yesterday we had a table that has an auto_increment PK for a smallint that reached its maximum. We had to alter the table on an emergency basis, which is definitely not how we like to roll. Is there an easy way to report on how close each auto_increment field that we use is to its maximum? The best way I can think of is to do a SHOW CREATE TABLE statement, parse out the size of the auto-incremented column, then compare that to the AUTO_INCREMENT value for the table. On the other hand, given that the schema doesn't change very often, should I store information about the columns' maximum values and get the current AUTO_INCREMENT with SHOW TABLE STATUS?
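
    A hedged sketch of a one-query report against information_schema (assumes MySQL 5.0+ and that unsigned columns carry "unsigned" in COLUMN_TYPE); it lists every auto_increment column with the fraction of its range already consumed:

        SELECT t.table_schema,
               t.table_name,
               c.column_name,
               c.column_type,
               t.auto_increment,
               t.auto_increment / CASE
                   WHEN c.data_type = 'tinyint'   THEN IF(c.column_type LIKE '%unsigned%', 255, 127)
                   WHEN c.data_type = 'smallint'  THEN IF(c.column_type LIKE '%unsigned%', 65535, 32767)
                   WHEN c.data_type = 'mediumint' THEN IF(c.column_type LIKE '%unsigned%', 16777215, 8388607)
                   WHEN c.data_type = 'int'       THEN IF(c.column_type LIKE '%unsigned%', 4294967295, 2147483647)
                   WHEN c.data_type = 'bigint'    THEN IF(c.column_type LIKE '%unsigned%', 18446744073709551615, 9223372036854775807)
               END AS fraction_used
        FROM information_schema.tables t
        JOIN information_schema.columns c
          ON c.table_schema = t.table_schema
         AND c.table_name   = t.table_name
        WHERE c.extra LIKE '%auto_increment%'
          AND t.auto_increment IS NOT NULL
        ORDER BY fraction_used DESC;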

    Read the article

  • Is it always a bad idea to use inline CSS for a used-once property?

    - by user93422
    I have a table with 10 columns, and I want to control the width of each column. Each column is unique; right now I create an external CSS style for each column:

        div#my-page table#members th.name-col { width: 40px; }

    I know there is a best practice to avoid inline style, and I do approve of using external CSS for anything look'n'feel related: fonts, colors, images. But is it really better to use external CSS in this case? It does not incur extra maintenance cost. It is easier to produce. Cons I can think of:

        - If you have separate designers and a development team, using inline styles will force designers to modify the content file (aspx in my case).
        - It might use more bandwidth.

    Anything else I've missed?

    Read the article

  • Pushing an array into a vector.

    - by Sunil
    I have a 2D array, say A[2][3] = {{1,2,3},{4,5,6}}, and I want to push it into a 2D vector (a vector of vectors). I know you can use two for loops to push the elements one by one onto the first vector and then push that into another vector, which makes it a 2D vector, but I was wondering if there is any way in C++ to do this in a single loop. For example, I want to do something like this: myvector.push_back(A[1] + 3); // where 3 is the size or number of columns in the array. I understand this is not correct code, but I put it here just for understanding purposes. Thanks

    Read the article

  • JPA Native Query (SQL View)

    - by Uchenna
    I have two entities, Customer and Account.

        @Entity
        @Table(name="customer")
        public class Customer {
            private Long id;
            private String name;
            private String accountType;
            private String accountName;
            ...
        }

        @Entity
        @Table(name="account")
        public class Account {
            private Long id;
            private String accountName;
            private String accountType;
            ...
        }

    I have an SQL query:

        select a.id as account_id, a.account_name, a.account_type, d.id, d.name
        from account a, customer d

    Assumption: the account and customer tables are created during application startup. The accountType and accountName fields of the Customer entity should not be created as columns; that is, only the id and name columns will be created. Question: how do I run the above SQL query and return a Customer entity object with the accountType and accountName properties populated with the SQL query's account_name and account_type values? Thanks

    Read the article

  • Handling null values with PowerShell dates

    - by Tim Ferrill
    I'm working on a module to pull data from Oracle into a PowerShell data table, so I can automate some analysis and perform various actions based on the results. Everything seems to be working, and I'm casting columns into specific types based on the column type in Oracle. The problem I'm having has to do with null dates. I can't seem to find a good way to capture that a date column in Oracle has a null value. Is there any way to cast a [datetime] as null or empty?

    Read the article

  • MYSQL sum() for distinct rows

    - by makeee
    I'm looking for help using SUM() in my SQL query (not posting the full query since the scenario is fairly simple). I have COUNT(DISTINCT conversions.id) in my query. I use DISTINCT because I'm doing a "group by" on multiple columns, and this ensures the same row is not counted more than once. Now I want to add:

        SUM(conversions.value) as conversion_value

    The problem is that the "value" for each row is counted more than once (due to the multiple group bys). I basically want to do SUM(conversions.value) for each DISTINCT conversions.id. Is that possible?
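
    One hedged sketch: collapse to one row per conversions.id in a derived table first, then aggregate over that, so the fan-out from the joins can't double-count the value (the commented placeholder stands in for whatever joins and filters the original query uses):

        SELECT COUNT(*)     AS conversion_count,
               SUM(d.value) AS conversion_value
        FROM (
            SELECT DISTINCT c.id, c.value   -- one row per conversion id
            FROM conversions c
            -- ... same joins and WHERE clause as the original query ...
        ) AS d;

    If the original GROUP BY columns are still needed, carry them through the derived table and group on them in the outer query.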

    Read the article

  • Reading variable with double float precision from a text file with gnuplot

    - by user3636322
    I have a text file containing data in 3 columns, like below:

        0.0100000000 | 0.0058077299 | -0.0000000288
        0.0110000000 | 0.0075128707 | -0.0000000373
        0.0120000000 | 0.0093579693 | -0.0000000465

    I want to get the variables from this file in gnuplot and use them to draw graphs. What I do is like below (e.g. to pick the variable from row 2, column 3):

        ii = 2
        a_0 = system("awk '{ if (NR == " . ii . ") printf \"%f\", $3}' " . datafile)
        a_0 = a_0 + 0.

    But what is written as a_0 is zero! How can I increase the precision to get the exact value?

    Read the article

  • Getting a query to index seek (rather than scan)

    - by PaulB
    Running the following query (SQL Server 2000), the execution plan shows that it used an index seek, and Profiler shows it's doing 71 reads with a duration of 0:

        select top 1 id from table where name = '0010000546163' order by id desc

    Contrast that with the following, which uses an index scan with 8500 reads and a duration of about a second:

        declare @p varchar(20)
        select @p = '0010000546163'
        select top 1 id from table where name = @p order by id desc

    Why is the execution plan different? Is there a way to change the second method to seek? Thanks. EDIT: the table looks like

        CREATE TABLE [table] (
            [Id] [int] IDENTITY (1, 1) NOT NULL,
            [Name] [varchar] (13) COLLATE Latin1_General_CI_AS NOT NULL
        )

    Id is the primary clustered key. There is a non-unique index on Name and a unique composite index on Id/Name. There are other columns -- left them out for brevity.
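
    Two hedged things worth trying on SQL Server 2000: declare the variable with the column's exact type (varchar(13)) so no implicit conversion gets in the way, and, if the optimizer still scans, force the index on Name with a table hint. The index name below is an assumption:

        declare @p varchar(13)
        select @p = '0010000546163'

        select top 1 id
        from [table] with (index (IX_table_Name))   -- hypothetical index name
        where name = @p
        order by id desc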

    Read the article

  • get data from database by manually giving the server name

    - by syedsaleemss
    I'm using a C# .NET Windows Forms application. I have created many databases, with many tables in each. I have a DataGridView and a display button. When I click on this button, the system must prompt me to enter the server name, and after typing the server name, it should display all the databases related to that server in some combo box. Again, if I select a database it should show all the tables present in that database in a combo box, and if I select a table, it should give an option to select only the required columns into the DataGridView. How can I do this?

    Read the article

  • Problem using OleDbCommandBuilder.

    - by Lullly
    So, here it goes: I need to copy data from a table in one Access database into another table in another Access database. The column names in the tables are the same, except that the FROM table has 5 columns and the TO table has 6. Here is my code:

        dsFrom.Clear()
        dsTO.Clear()
        daFrom = Nothing
        daTO = Nothing
        conn_string1 = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=etc.mdb;"
        conn_string2 = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=database.mdb;"
        query1 = "Select * from nomenclator_produse"
        query2 = "Select * from nomenclator_produse"
        Conn1 = New OleDbConnection(conn_string1)
        conn2 = New OleDbConnection(conn_string2)
        Conn1.Open()
        conn2.Open()
        daFrom = New OleDbDataAdapter(query1, Conn1)
        daTO = New OleDbDataAdapter(query2, conn2)
        daFrom.AcceptChangesDuringFill = False
        dsFrom.HasChanges()
        daFrom.Fill(dsFrom, "nomenclator_Produse")
        dsFrom.HasChanges()
        Dim cb = New OleDbCommandBuilder(daFrom)
        dsTO = dsFrom.Copy
        daTO.UpdateCommand = cb.GetUpdateCommand
        daTO.InsertCommand = cb.GetInsertCommand
        daTO.Update(dsTO, "nomenclator_produse")

    Because the FROM table has 5 columns and the other has 6, I'm trying to use the InsertCommand generated by the DataAdapter of the first table. It works, only it inserts the data from the FROM table back into the same FROM table instead of the TO table. :| Please help me :(

    Read the article

  • Best Approach for Checking and Inserting Records

    - by nevets1219
    In one of our existing C programs, the purpose is:

        Open connection to DB
        for record in all_record:
            if record contains certain data:
                if record is NOT in table A:                      // see #1
                    insert record information into table A and B  // see #2
        Close connection to DB

        #1: select field from table where field=XXX
        #2: 2 inserts

    This is typically done every X months to sync everything up, or so I'm told. I've also been told that this process takes roughly a couple of days. There are (currently) at most 2.5 million records (though not necessarily all 2.5M will be inserted). One of the tables contains 10 fields and the other 5 fields. There isn't much to be done about iterating through the records, since that part can't be changed at the moment. What I would like to do is speed up the part where I query MySQL. I'm not sure if I have left out any important details -- please let me know! I'm also no SQL expert, so feel free to point out the obvious. I thought about:

        - Putting all the inserts into a transaction (at the moment I'm not sure how important it is for the transaction to be all-or-none or if this affects performance)
        - Using Insert X Where Not Exists Y
        - LOAD DATA INFILE (but that would require I create a (possibly) large temp file)
        - I read that (hopefully someone can confirm) I should drop indexes so they aren't re-calculated.

        mysql Ver 14.7 Distrib 4.1.22, for sun-solaris2.10 (sparc) using readline 4.3
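
    The "Insert X Where Not Exists Y" idea from the list above, as a hedged sketch for MySQL 4.1 (table and column names are placeholders): it folds the existence check and the insert into one statement per table and wraps both in a transaction, inserting into table B first so the guard still sees the record as "not yet in A":

        START TRANSACTION;

        INSERT INTO table_b (key_col, b_col)
        SELECT 'abc123', 'y'
        FROM DUAL
        WHERE NOT EXISTS (SELECT 1 FROM table_a WHERE key_col = 'abc123');

        INSERT INTO table_a (key_col, a_col)
        SELECT 'abc123', 'x'
        FROM DUAL
        WHERE NOT EXISTS (SELECT 1 FROM table_a WHERE key_col = 'abc123');

        COMMIT;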

    Read the article

  • SQLite: Simple DELETE statement did not work

    - by user186446
    I have a table MRU that has 3 columns (VALUE varchar(255), TYPE varchar(20), DT_ADD datetime). This is a table simply storing an entry and recording the date and time it was recorded. What I want to do is delete the oldest entry whenever I add a new entry that exceeds a certain number. Here is my query:

        delete from MRU where type = 'FILENAME' ORDER BY DT_ADD limit 1;

    The query returns an error. Thanks
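
    DELETE ... ORDER BY ... LIMIT only works when SQLite is compiled with SQLITE_ENABLE_UPDATE_DELETE_LIMIT, which most stock builds aren't, hence the error. A portable sketch is to pick the victim row in a subquery by its rowid:

        DELETE FROM MRU
        WHERE rowid = (
            SELECT rowid
            FROM MRU
            WHERE TYPE = 'FILENAME'
            ORDER BY DT_ADD ASC
            LIMIT 1
        );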

    Read the article

  • LINQ to SQL - Get only substring from a field

    - by domanokz
    I'm studying ASP.NET MVC and I use LINQ to SQL for the model. I have a table named "Note" with the fields "Title" and "Content". The "Content" field can contain thousands of characters. What I want to do is display the LIST of notes on a page. I use a table with two columns, for the "Title" and a substring of the "Content" (50 characters). My problem is, I don't know how to edit the model so that it will display only the substring of the "Content". Thanks in advance!

    Read the article

  • Multiple column Union Query without duplicates

    - by Adam Halegua
    I'm trying to write a union query with multiple columns from two different tables (duh), but for some reason the second column of the second Select statement isn't showing up in the output. I don't know if that painted the picture properly, but here is my code:

        Select empno, job
        From EMP
        Where job = 'MANAGER'
        Union
        Select empno, empstate
        From EMPADDRESS
        Where empstate = 'NY'
        Order By empno

    The output looks like:

        EMPNO  JOB
        4600   NY
        5300   MANAGER
        5300   NY
        7566   MANAGER
        7698   MANAGER
        7782   MANAGER
        7782   NY
        7934   NY
        9873   NY

    Instead of 5300 and 7782 appearing twice, I thought empstate would appear next to job in the output. For all other empnos I thought the values in the fields would be (null). Am I not understanding unions correctly, or is this how they are supposed to work? Thanks for any help in advance.
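
    UNION stacks the two result sets vertically, so this is exactly how it is supposed to work. To get job and empstate side by side, with (null) where one side has no match, a hedged sketch using a full outer join instead (assumes Oracle-style NVL and that EMPADDRESS has an empno column to join on):

        Select NVL(m.empno, a.empno) As empno, m.job, a.empstate
        From (Select empno, job From EMP Where job = 'MANAGER') m
        Full Outer Join
             (Select empno, empstate From EMPADDRESS Where empstate = 'NY') a
          On m.empno = a.empno
        Order By 1;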

    Read the article

  • Horizontal scroll in rich:panel to default right to left

    - by TaylorSmolik
    I have a rich:panel with a style="overflow: scroll" tag inside. By default, the scroll slides left to right. I am constantly adding a new dataTable to a dataGrid with the click of a button and I want the user to always see the most recent one, and since I have it set up so that each dataTable is added as a column to the dataGrid, the most recent one will always be on the right side of the dataGrid. Is there a way I can default the scroll to go right to left? Or maybe creating the columns from right to left?

    Read the article

  • Timestamps and Intervals: NUMTOYMINTERVAL SYSDATE CALCULATION SQL QUERY

    - by MeachamRob
    I am working on a homework problem. I'm close, but need some help with a data conversion, I think, or with the sysdate - start_date calculation. The question is: using the EX schema, write a SELECT statement that retrieves the date_id and start_date from the Date_Sample table (format below), followed by a column named Years_and_Months_Since_Start that uses an interval function to retrieve the number of years and months that have elapsed between the start_date and the sysdate. (Your values will vary based on the date you do this lab.) Display only the records with start dates having the month and day equal to Feb 28 (of any year).

        DATE_ID  START_DATE                    YEARS_AND_MONTHS_SINCE_START
        2        Sunday , February 28, 1999    13-8
        4        Monday , February 28, 2005    7-8
        5        Tuesday , February 28, 2006   6-8

    The EX schema that refers to this question is simply a Date_Sample table with two columns:

        DATE_ID     NUMBER NOT NULL
        START_DATE  DATE

    I have written this code:

        SELECT date_id,
               TO_CHAR(start_date, 'Day, MONTH DD, YYYY') AS start_date,
               NUMTOYMINTERVAL((SYSDATE - start_date), 'YEAR') AS years_and_months_since_start
        FROM date_sample
        WHERE TO_CHAR(start_date, 'MM/DD') = '02/28';

    But my Years_and_Months_Since_Start column is not working properly. It's getting very high numbers for years and months when the calculated date is from 1999-ish; i.e., it should be 13-8 and I'm getting 5027-2, so I know it's not correct. I used NUMTOYMINTERVAL, which should be correct, but I don't think the sysdate - start_date part is working. The data type for start_date is simply DATE. I tried ROUND but maybe need some help to get it right. Something is wrong with my calculation and I'm trying to figure out how to get the correct interval there. Not sure if I have provided enough information to everyone, but I will let you know if I figure it out before you do. It's a question from Murach's Oracle and SQL/PL book, chapter 17, if anyone else is trying to learn that chapter. Page 559.
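
    The root cause: SYSDATE - start_date yields a number of days, and NUMTOYMINTERVAL(n, 'YEAR') then reads those days as years, which is where values like 5027-2 come from. A hedged sketch that converts elapsed whole months instead:

        SELECT date_id,
               TO_CHAR(start_date, 'Day, Month DD, YYYY') AS start_date,
               NUMTOYMINTERVAL(TRUNC(MONTHS_BETWEEN(SYSDATE, start_date)), 'MONTH')
                   AS years_and_months_since_start
        FROM date_sample
        WHERE TO_CHAR(start_date, 'MM/DD') = '02/28';

    NUMTOYMINTERVAL(164, 'MONTH'), for example, displays as 13-8.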

    Read the article

  • Finding records within a 5 min time interval in SQL

    - by Mellonjollie
    I have a table with over 100,000 rows and the following columns: ID, Time, and Boolean. The Time column tracks time down to the second. I need a query that will find all instances of Boolean = 1 for every 5-minute interval of time from the start of the table to the end, then group the count by time interval. The table represents 4 hours of data, so I should get 48 rows of results. I'm using MS SQL Server. I've tried a few approaches, but the time interval logic is giving me a hard time.
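
    A hedged sketch of the usual T-SQL bucketing trick: snap each timestamp to the start of its 5-minute bucket and group on that expression (the dbo.Samples table name is an assumption; the column names mirror the description):

        SELECT DATEADD(minute, (DATEDIFF(minute, 0, [Time]) / 5) * 5, 0) AS interval_start,
               COUNT(*) AS boolean_true_count
        FROM dbo.Samples
        WHERE [Boolean] = 1
        GROUP BY DATEADD(minute, (DATEDIFF(minute, 0, [Time]) / 5) * 5, 0)
        ORDER BY interval_start;

    Intervals with no Boolean = 1 rows won't appear; to force all 48 rows, left-join this against a generated list of the 48 interval start times.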

    Read the article
