Search Results

Search found 7127 results on 286 pages for 'calculated columns'.

Page 216/286

  • Insert consecutive numbers

    - by Markus
    Hi. I have a table A (Acons, A1, A2, A3) into which I need to insert information from another table B with columns (B1, B2, B3). Acons is a column that should contain consecutive numbers (it is not an identity column and I cannot make it one). I know xmin, the starting number from which the sequence has to be computed. How can I insert the rows into table A using a single INSERT statement? I tried the following, but it didn't work: DECLARE @i AS INT; SET @i = xmin; INSERT INTO A(Acons, A1, A2, A3) SELECT @i = (Bcons = (@i + 1)), B1, B2, B3 FROM B Unfortunately, the above does not work.
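
    A minimal sketch of one common workaround, assuming SQL Server (the DECLARE syntax suggests it) and that the starting value is already known: let ROW_NUMBER() generate the consecutive offsets instead of incrementing a variable inside the SELECT.

        DECLARE @xmin INT;
        SET @xmin = 100;   -- hypothetical starting value

        INSERT INTO A (Acons, A1, A2, A3)
        SELECT @xmin + ROW_NUMBER() OVER (ORDER BY B1) - 1,
               B1, B2, B3
        FROM B;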

    Read the article

  • In which document do file specifications belong?

    - by Andrew
    In which document would a file specification belong? Perhaps this file is used as an input to a third-party system. Would it belong in its own document? Or would it be better to put it in the functional or design spec? Or somewhere else? When I say file specification, I mean a description of the file's format (CSV, fixed width, etc.), its columns, data types, and so on. Also, where should you document how the file is generated, i.e. the business rules/algorithms used to generate it?

    Read the article

  • How to use SqlDataSource for filling combobox as well as datatable or Dataset

    - by Shantanu Gupta
    I am trying to fetch a column value from a data source when a value is selected in a dropdown list, on its change event. <asp:DropDownList ID="ddlCityName" runat="server" DataSourceID="dsCity" DataTextField="CityName" DataValueField="CityID" AutoPostBack="True" OnTextChanged="CityName_OnTextChanged"> </asp:DropDownList> <asp:SqlDataSource ID="dsCity" runat="server" ConnectionString="<%$ ConnectionStrings:GmapConnectionString %>" SelectCommand="SELECT * FROM [vcity]" ></asp:SqlDataSource> Here I want to fetch the value of another column from the SqlDataSource, one that is not bound to ddlCityName. I have four columns in the data source, i.e. name, id, address, phno. I want to fetch the address that corresponds to the value the user selects in the dropdown.
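
    A minimal sketch of one way to do it, assuming dsCity runs in its default DataSet mode (so Select() returns a DataView) and that the view exposes an address column; the column name used here is hypothetical.

        protected void CityName_OnTextChanged(object sender, EventArgs e)
        {
            // Re-run the data source's select and filter it down to the chosen city.
            DataView cities = (DataView)dsCity.Select(DataSourceSelectArguments.Empty);
            cities.RowFilter = "CityID = " + ddlCityName.SelectedValue;
            if (cities.Count > 0)
            {
                string address = cities[0]["address"].ToString();   // hypothetical column name
                // use the address as needed
            }
        }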

    Read the article

  • "Othello" game needs some clarification

    - by pappu
    I am trying to see whether my understanding of the game "Othello" is correct. According to the rules, we flip the dark/light pieces when we get a sequence like X000X, turning it into XXXXX. My question is: in the process of flipping 0 to X (or X to 0), do we also need to consider the rows/columns/diagonals of the newly flipped pieces? E.g. consider the board state shown in the above image (a new X is placed at 2,3). When we update the board, we mark the elements from 2,3 to 6,3 as X, but in the process, elements such as the horizontal run 4,3 to 4,5 and the diagonal 2,3 to 4,5 also become eligible for update. So do we update those elements as well, or only the lines that start at 2,3 (i.e. update only the rows/columns/diagonals whose starting point is the element we are dealing with, in our case 2,3)? Please help me understand this.
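
    For reference, a minimal sketch of the standard flipping rule (not tied to the board in the question): only the eight lines radiating from the newly placed disc are scanned, and discs flipped as a result do not trigger further scans of their own rows, columns or diagonals.

        DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

        def flips_for_move(board, row, col, me, opponent):
            """Discs flipped by placing `me` at (row, col); nothing cascades from them."""
            flipped = []
            for dr, dc in DIRECTIONS:
                run = []
                r, c = row + dr, col + dc
                while 0 <= r < 8 and 0 <= c < 8 and board[r][c] == opponent:
                    run.append((r, c))
                    r, c = r + dr, c + dc
                # a run of opponent discs only flips if it is capped by one of our own discs
                if run and 0 <= r < 8 and 0 <= c < 8 and board[r][c] == me:
                    flipped.extend(run)
            return flipped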

    Read the article

  • Subquery vs Traditional join with WHERE clause?

    - by BradC
    When joining to a subset of a table, is there any reason to prefer one of these formats over the other? Subquery version: SELECT ... FROM Customers AS c INNER JOIN (SELECT * FROM Classification WHERE CustomerType = 'Standard') AS cf ON c.TypeCode = cf.Code INNER JOIN SalesReps s ON cf.SalesRepID = s.SalesRepID vs the WHERE clause at the end: SELECT ... FROM Customers AS c INNER JOIN Classification AS cf ON c.TypeCode = cf.Code INNER JOIN SalesReps AS s ON cf.SalesRepID = s.SalesRepID WHERE cf.CustomerType = 'Standard' The WHERE clause at the end feels more "traditional", but the first is arguably clearer, especially as the joins get increasingly complex. The only other reason I can think of to prefer the second is that the "SELECT *" in the first might return columns that aren't used later (in this case, I'd probably only need to return cf.Code and cf.SalesRepID).

    Read the article

  • Python - How to wake up a sleeping process (multiprocessing)?

    - by user1162512
    I need to wake up a sleeping process. The time (t) for which it sleeps is calculated as t = D/S. Since S varies (it can increase or decrease), I need to increase or decrease the sleeping time as well. The speed is received over a UDP protocol. So, how do I change the sleeping time of a process, keeping in mind the following: if, at the previous speed S1, the time to sleep was D/S1, and the speed then changes to S2, the process should now sleep for the new time, i.e. D/S2. Since it has already slept for D/S1 time, it should now sleep for D/S2 - D/S1. How would I do it? As of right now, I'm just assuming that the speed remains constant throughout the program, and hence I am not notifying the process. But how would I do it under the above condition? def process2(): p = multiprocessing.current_process() time.sleep(secs1) # send some packet1 via UDP time.sleep(secs2) # send some packet2 via UDP time.sleep(secs3) # send some packet3 via UDP Also, as in threads: 1) threading.activeCount(): returns the number of Thread objects that are active. 2) threading.currentThread(): returns the current Thread object, corresponding to the caller's thread of control. 3) threading.enumerate(): returns a list of all Thread objects that are currently active. What are the similar functions in multiprocessing for getting the active count and enumerating processes?
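
    A minimal sketch of one way to make the sleep interruptible (this is not the poster's code; the distance constant and the recomputation formula are placeholders): wait on a multiprocessing.Event with a timeout instead of calling time.sleep(), and set the event whenever a new speed arrives so the worker can recompute its remaining delay.

        import multiprocessing
        import time

        def sender(wake_event, speed, distance=100.0):      # distance D is a hypothetical constant
            deadline = time.time() + distance / speed.value
            while True:
                remaining = deadline - time.time()
                if remaining <= 0:
                    break
                # wait() returns True if the event was set before the timeout expired.
                if wake_event.wait(timeout=remaining):
                    wake_event.clear()
                    # Placeholder: recompute the deadline from the new speed here,
                    # using whatever formula applies to the remaining distance.
                    deadline = time.time() + distance / speed.value
            # send the UDP packet here

        if __name__ == "__main__":
            wake = multiprocessing.Event()
            speed = multiprocessing.Value("d", 10.0)         # shared speed, updated by the UDP listener
            p = multiprocessing.Process(target=sender, args=(wake, speed))
            p.start()
            speed.value = 20.0                               # a new speed arrives ...
            wake.set()                                       # ... so wake the sender to recompute its delay
            p.join()

    For the bookkeeping part, multiprocessing.current_process() is the counterpart of threading.currentThread(), multiprocessing.active_children() plays roughly the role of threading.enumerate(), and len(multiprocessing.active_children()) gives the equivalent of threading.activeCount().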

    Read the article

  • Using Rails, how can I set my primary key to not be an integer-typed column?

    - by Rudd Zwolinski
    I'm using Rails migrations to manage a database schema, and I'm creating a simple table where I'd like to use a non-integer value as the primary key (in particular, a string). To abstract away from my problem, let's say there's a table employees where employees are identified by an alphanumeric string, e.g. "134SNW". I've tried creating the table in a migration like this: create_table :employees, {:primary_key => :emp_id} do |t| t.string :emp_id t.string :first_name t.string :last_name end The result seems to completely ignore the line t.string :emp_id and creates an integer primary key column instead. Is there some other way to have Rails generate the PRIMARY KEY constraint (I'm using PostgreSQL) for me, without having to write the SQL in an execute call? NOTE: I know it's not best to use string columns as primary keys, so please no answers that just say to add an integer primary key. I may add one anyway, but this question is still valid.
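
    A minimal sketch of the usual workaround for Rails of that era (an assumption on my part; it covers the ActiveRecord side, while the database-level PRIMARY KEY constraint still has to be added separately, e.g. with execute): skip the automatic id column and tell the model which column is the key.

        create_table :employees, :id => false do |t|
          t.string :emp_id
          t.string :first_name
          t.string :last_name
        end

        # app/models/employee.rb
        class Employee < ActiveRecord::Base
          set_primary_key :emp_id   # newer Rails versions use: self.primary_key = 'emp_id'
        end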

    Read the article

  • Sum of distinct rows after a 1-many table join

    - by Lock
    I have 2 tables that I am joining. Table 1 has a 1-many relationship with table 2, that is, table 2 can return multiple rows for a single row of table 1. Because of this, each record of table 1 is duplicated for as many rows as there are in table 2, which is expected. Now, I have a sum on one of the columns from table 1, but because of the multiple rows returned by the join, the sum is obviously inflated. Is there a way to get this number back to its original value? I tried dividing by the count of rows from table 2, but this didn't quite give me the expected result. Are there any analytic functions that could do this? I almost want something like "if this row has not yet been counted in the sum, add it to the sum".
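
    A minimal sketch of one common fix, with hypothetical table and column names: collapse the child table to distinct keys before the join, so each parent row is summed exactly once.

        SELECT SUM(t1.amount) AS total
        FROM   table1 AS t1
        JOIN  (SELECT DISTINCT parent_id FROM table2) AS t2
               ON t2.parent_id = t1.id;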

    Read the article

  • How to prevent badly formatted data input in a DataGridViewCell

    - by JuanNunez
    I have an automatically bound DataGridView that reads and updates data directly through a strongly typed DataSet and its TableAdapter. The DataGridView allows data editing, but I'm having issues dealing with badly formatted input. For example, one of the columns is a date, stored in the database as datetime, e.g. 11/05/2010. When you edit the date, the DataGridView opens a TextBox in which you can enter letters, symbols and other invalid characters. When you finish editing the cell, if it contains such bad data, a System.FormatException is thrown. How can I prevent this data from being entered? Is there a way to "filter" the data before it is sent back to the DataGridView?
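
    A minimal sketch of one approach, assuming the date sits at a hypothetical column index: validate the text in the CellValidating event and cancel the commit when it does not parse, and optionally swallow binding errors through the DataError event.

        private void dataGridView1_CellValidating(object sender, DataGridViewCellValidatingEventArgs e)
        {
            if (e.ColumnIndex == 2)   // hypothetical index of the date column
            {
                DateTime parsed;
                if (!DateTime.TryParse(e.FormattedValue.ToString(), out parsed))
                {
                    dataGridView1.Rows[e.RowIndex].ErrorText = "Please enter a valid date.";
                    e.Cancel = true;   // keeps the cell in edit mode until the value is fixed
                }
            }
        }

        // Wire-up, e.g. in the form constructor after InitializeComponent():
        // dataGridView1.CellValidating += dataGridView1_CellValidating;
        // dataGridView1.DataError += (s, args) => args.ThrowException = false;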

    Read the article

  • MySQLi String comparisons using keys

    - by asdasd
    I have a table with, let's say, 2 columns: id (a number) and value (a varchar string). Let's say I have a number x and a list of numbers a1, a2, a3, a4, a5, ... where x is not in the list. All of these numbers correspond to a unique row in the table. I want to know whether the string value for x is contained in one of the string values of any table entry for a1, a2, a3, a4, ... Say I have these rows: x, aaa; a1, bbb; a2, ccc; a3, ddd; a4, aaabbbcc. Then I want a confirmation that yes, the value for x is included in one of the values in my list of numbers (a4 contains x). I know I can do this in a couple of queries and shove it through some PHP to get my answer, but can I do it with one query?
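
    A minimal sketch of a single-query approach, with a hypothetical table name and hypothetical ids standing in for x and the list: self-join the table and use LIKE (or INSTR) for the containment test.

        SELECT EXISTS (
            SELECT 1
            FROM   t AS a
            JOIN   t AS x ON x.id = 10                -- 10 stands in for the id called "x"
            WHERE  a.id IN (1, 2, 3, 4)               -- the list a1, a2, a3, a4, ...
              AND  a.value LIKE CONCAT('%', x.value, '%')
        ) AS found;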

    Read the article

  • Grouping and retrieving most recent entry in a table for each group

    - by Lisa
    First off, please bear with me if I don't state the SQL question correctly. I have a table with multiple columns of data. The selection criteria for my query groups on column 1 (order #). There can be multiple items on each order, but the item #'s are not grouped together. Example:

        Order   Customer  Order Date  Order Time  Item  Quantity
        123456  45        01/02/2010  08:00       140   4
        123456  45        01/02/2010  08:30       270   29
        123456  45        03/03/2010  09:00       140   6
        123456  45        04/02/2010  09:30       140   10
        123456  45        04/02/2010  10:00       270   35

    What I need is a result like:

        Order   Customer  Order Date  Order Time  Item  Quantity
        123456  45        04/02/2010  09:30       140   10
        123456  45        04/02/2010  10:00       270   35

    This result shows that, after all the changes, the final order includes 10 of item 140 and 35 of item 270. Is this possible?
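
    A minimal sketch of one way to get that result on a database with window functions; the table and column names here are hypothetical: rank the rows within each order/item pair by date and time, and keep only the latest one.

        SELECT OrderNo, Customer, OrderDate, OrderTime, Item, Quantity
        FROM (
            SELECT o.*,
                   ROW_NUMBER() OVER (PARTITION BY OrderNo, Item
                                      ORDER BY OrderDate DESC, OrderTime DESC) AS rn
            FROM Orders AS o
        ) AS ranked
        WHERE rn = 1;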

    Read the article

  • How can I neatly clean my R workspace while preserving certain objects?

    - by briandk
    Suppose I'm messing about with some data by binding vectors together, as I'm wont to do on a lazy Sunday afternoon. x <- rnorm(25, mean = 65, sd = 10) y <- rnorm(25, mean = 75, sd = 7) z <- 1:25 dd <- data.frame(mscore = x, vscore = y, caseid = z) I've now got my new dataframe dd, which is wonderful. But there's also still the detritus from my prior slicings and dicings: > ls() [1] "dd" "x" "y" "z" What's a simple way to clean up my workspace if I no longer need my "source" columns, but I want to keep the dataframe? That is, now that I'm done manipulating data I'd like to just have dd and none of the smaller variables that might inadvertently mask further analysis: > ls() [1] "dd" I feel like the solution must be of the form rm(ls[ -(dd) ]) or something, but I can't quite figure out how to say "please clean up everything BUT the following objects."
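
    A minimal sketch of the usual idiom: list everything in the workspace, subtract the names you want to keep, and remove the rest.

        keep <- c("dd")                      # objects to preserve
        rm(list = setdiff(ls(), keep))       # remove everything else
        ls()
        # [1] "dd"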

    Read the article

  • Reasons for sticking with TEXT, NTEXT and IMAGE instead of (N)VARCHAR(max) and VARBINARY(max)

    - by John Assymptoth
    TEXT, NTEXT and IMAGE were deprecated a long time ago and will eventually be removed from SQL Server. However, they are not going to be discontinued right away, not even in the next version of SQL Server, so it's not convenient for my enterprise to convert thousands of columns right away, even if it is using SQL Server 2012. What arguments can I use to postpone this migration? I know there are some advantages to using the new types, but I'm strictly looking for reasons not to migrate data that is already functioning pretty well in the old types.

    Read the article

  • Selecting data effectively in SQL

    - by learner135
    Hi, I have a very large table with over 1000 records and 200 columns. When I try to retrieve records matching some criteria in the WHERE clause of a SELECT statement, it takes a lot of time. But most of the time I just want to select a single record that matches the criteria rather than all the records. I guess there should be a way to select just a single record and stop, which would minimize the retrieval time. I tried ROWNUM=1 in the WHERE clause, but it didn't really work, because I guess the engine still checks all the records even after finding the first one matching the WHERE criteria. Is there a way to optimize for the case where I only want to select a few records? Thanks in advance. Edit: I am using Oracle 10g.
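
    A minimal sketch of the usual Oracle pattern, with hypothetical names: apply the filter in an inline view and cut the result with ROWNUM; an index on the filtered column is what actually lets the engine stop early instead of scanning the whole table.

        SELECT *
        FROM  (SELECT * FROM big_table WHERE some_col = :value)
        WHERE ROWNUM = 1;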

    Read the article

  • Error "Index exceeds matrix dimensions"

    - by Mola
    Hi experts, I am trying to read an Excel 2003 file which consists of 62 columns and 2000 rows, and then draw a 2D dendrogram from the 2000 patterns of 2 categories of data as my plot in MATLAB. When I run the script, it gives me the above error and I don't know why. Does anybody have any idea why I get this error? My data is here: http://rapidshare.com/files/383549074/data.xls Please delete the 2001 column if you want to use the data for testing. And my code is here: % Script file: cluster_2d_data.m d=2000; n1=22; n2=40; N=62 Data=xlsread('data.xls','A1:BJ2000'); X=Data'; R=1:2000; C=1:2; clustergram(X,'Pdist','euclidean','Linkage','complete','Dimension',2,... 'ROWLABELS',R,'COLUMNLABELS',C,'Dendrogram',{'color',5})

    Read the article

  • How to parse mathematical expressions involving parentheses

    - by Rob P.
    Please forgive my title, I really don't know how to phrase it better. This isn't a school assignment or anything, but I realize it's a mostly academic question. What I've been struggling to do is parse 'math' text and come up with an answer. For example, I can figure out how to parse '5 + 5' or '3 * 5', but I fail when I try to correctly chain operations together, as in (5 + 5) * 3. It's mostly just bugging me that I can't figure it out. If anyone can point me in a direction, I'd really appreciate it. EDIT Thanks for all of the quick responses. I'm sorry I didn't do a better job of explaining. First, I'm not using regular expressions. I also know there are already libraries available that will take a mathematical expression as a string and return the correct value. So I'm mostly looking at this because, sadly, I don't "get it". Second, what I tried doing (probably misguided) was counting '(' and ')' and evaluating the deepest items first. In simple examples this worked, but my code is not pretty and more complicated expressions crash it. When I 'calculated' the lowest level, I was modifying the string, so (5 + 5) * 3 would turn into 10 * 3, which would then evaluate to 30. But it just felt 'wrong'. I hope that helps clarify things. I'll certainly check out the links provided.
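
    A minimal sketch of one of the two standard approaches (recursive descent; the other common one is the shunting-yard algorithm), written here in Python with hypothetical function names: each grammar level (expression, term, factor) calls the level below it, and a parenthesised group is just a nested expression, so operator precedence and chained operations fall out of the structure.

        import re

        def tokenize(text):
            # numbers (with an optional fractional part) and the five punctuation tokens
            return re.findall(r"\d+\.?\d*|[()+\-*/]", text)

        def evaluate(tokens):
            def expr(i):                     # expr := term (('+' | '-') term)*
                value, i = term(i)
                while i < len(tokens) and tokens[i] in "+-":
                    op = tokens[i]
                    rhs, i = term(i + 1)
                    value = value + rhs if op == "+" else value - rhs
                return value, i

            def term(i):                     # term := factor (('*' | '/') factor)*
                value, i = factor(i)
                while i < len(tokens) and tokens[i] in "*/":
                    op = tokens[i]
                    rhs, i = factor(i + 1)
                    value = value * rhs if op == "*" else value / rhs
                return value, i

            def factor(i):                   # factor := NUMBER | '(' expr ')'
                if tokens[i] == "(":
                    value, i = expr(i + 1)
                    return value, i + 1      # skip the closing ')'
                return float(tokens[i]), i + 1

            value, _ = expr(0)
            return value

        print(evaluate(tokenize("(5 + 5) * 3")))   # 30.0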

    Read the article

  • Handling null values with PowerShell dates

    - by Tim Ferrill
    I'm working on a module to pull data from Oracle into a PowerShell data table, so I can automate some analysis and perform various actions based on the results. Everything seems to be working, and I'm casting columns into specific types based on the column type in Oracle. The problem I'm having has to do with null dates. I can't seem to find a good way to capture that a date column in Oracle has a null value. Is there any way to cast a [datetime] as null or empty?
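
    A minimal sketch of one approach ($row stands for one row of the data being read, and the column name is hypothetical): nulls come back from the data layer as [System.DBNull], so test for that before casting and leave the value as $null when the date is missing.

        $value = $row["HIRE_DATE"]                 # hypothetical date column
        if ($value -is [System.DBNull]) {
            $hireDate = $null                      # no date in Oracle, keep it empty
        } else {
            $hireDate = [datetime]$value           # safe to cast now
        }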

    Read the article

  • Getting started with MIT Proto

    - by Charles
    MIT Proto lacks a basic getting started guide. How do I find a shell that accepts commands like (def foo...) and proto -n 1000 -l -m ...? http://groups.csail.mit.edu/stpg/proto.html I can run in my bash shell: ./proto -n 1000 -s 0.1 -T -l "(red (gradient (= (mid) 0)))" I can't figure out how to run e.g. channel.proto: (def channel (src dst width) (let* ((d (distance src dst)) (trail (<= (+ (gradient src) (gradient dst)) (+ d 0.01))) ;; float error ;; (trail (= (+ (gradient src) (gradient dst)) d)) ) (dilate trail width))) ;; To see a channel calculated from geometric primitives, run: ;; proto -n 1000 -l -m -s 0.5 "(blue (channel (sense 1) (sense 2) 10))" ;; click on a device and hit 't' to set up the source, then click on ;; another device and hit 'y' to designate the destination. At first ;; every device will be blue, but then it should clear and you should ;; see a thick blue path connecting the two devices you selected. Thanks! P.S. Somebody please tag this mit-proto. I can't.

    Read the article

  • Conditional PIVOT/transform problem

    - by IanC
    Hi folks, I have a table with three columns, which we'll call ID, ID1, and Value. Sample data:

        ID  ID1  Value
        1   1    0
        1   2    1
        1   3    1
        1   3    2
        1   4    0
        1   4    1
        1   5    0
        1   5    2
        2   1    2

    Value is limited to 0, 1, or 2. What I need to do is pivot/transform this data into a column-based count of how many times each possible Value appears, grouped by ID, ID1. The output of the above should be:

        ID  ID1  Val0  Val1  Val2
        1   1    1     0     0
        1   2    0     2     0
        1   3    0     1     1
        1   4    1     1     0
        1   5    1     0     1
        2   1    0     0     1

    I'm using SQL Server 2008. How do I do this?
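
    A minimal sketch of one way to produce that shape (the table name is hypothetical): conditional aggregation with CASE, which on SQL Server 2008 is often simpler than the PIVOT operator for a fixed, small set of values.

        SELECT ID, ID1,
               SUM(CASE WHEN [Value] = 0 THEN 1 ELSE 0 END) AS Val0,
               SUM(CASE WHEN [Value] = 1 THEN 1 ELSE 0 END) AS Val1,
               SUM(CASE WHEN [Value] = 2 THEN 1 ELSE 0 END) AS Val2
        FROM   dbo.MyTable
        GROUP BY ID, ID1
        ORDER BY ID, ID1;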

    Read the article

  • How does the dataset determine the return type of a scalar query?

    - by Tobias Funke
    I am attempting to add a scalar query to a dataset. The query is pretty straightforward: it just adds up some decimal values in a few columns and returns the total. I am 100% confident that only one row and one column is returned, and that it is of decimal type (SQL money type). The problem is that, for some reason, the generated method (in the .designer.cs code file) returns a value of type object when it should return decimal. What's strange is that there's another scalar query with the exact same SQL that returns decimal as it should. How does the dataset designer determine the data type, and how can I tell it to return decimal?

    Read the article

  • Automatically Add a Prefix to Column Names for @Embeddable Classes

    - by VeeArr
    I am developing a project in which I am persisting some POJOs by adding Hibernate annotations. One problem I am running into is that code like this fails, as Hibernate tries to map the sub-fields within the Time_T onto the same column (i.e. startTime.sec and stopTime.sec both try to map to the column sec, causing an error). @Entity public class ExampleClass { @Id long eventId; Time_T startTime; Time_T stopTime; } @Embeddable public class Time_T { int sec; int nsec; } As there will be many occurrences like this throughout the system, it would be nice if there were an option to automatically append a prefix to the column name (e.g. make the columns startTime_sec, startTime_nsec, stopTime_sec, stopTime_nsec), without having to apply overrides on a per-field basis. Does Hibernate have this capability, or is there any other reasonable work-around?
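
    For reference, a minimal sketch of the per-field override the poster is hoping to avoid (a component-safe naming strategy, where the Hibernate version in use offers one, is the usual way to have the prefixes added automatically instead):

        @Entity
        public class ExampleClass {
            @Id
            long eventId;

            @Embedded
            @AttributeOverrides({
                @AttributeOverride(name = "sec",  column = @Column(name = "startTime_sec")),
                @AttributeOverride(name = "nsec", column = @Column(name = "startTime_nsec"))
            })
            Time_T startTime;

            @Embedded
            @AttributeOverrides({
                @AttributeOverride(name = "sec",  column = @Column(name = "stopTime_sec")),
                @AttributeOverride(name = "nsec", column = @Column(name = "stopTime_nsec"))
            })
            Time_T stopTime;
        }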

    Read the article

  • C# MVVM Calculating Total

    - by LnDCobra
    I need to calculate a trade value based on the selected price and quantity. The following is my ViewModel: class ViewModel : ViewModelBase { public Trade Trade { get { return _trade; } set { SetField(ref _trade, value, () => Trade); } } private Trade _trade; public decimal TradeValue { get { return Trade.Amount * Trade.Price; } } } ViewModelBase inherits INotifyPropertyChanged and contains SetField(). The following is the Trade class: public class Trade : INotifyPropertyChanged { public virtual Decimal Amount { get { return _amount; } set { SetField(ref _amount, value, () => Amount); } } private Decimal _amount; public virtual Decimal Price { get { return _price; } set { SetField(ref _price, value, () => Price); } } private Decimal _price; ...... } I know that, due to the design, my TradeValue only gets calculated once (when it's first requested), and the UI doesn't get updated when the amount/price changes. What is the best way of achieving this? Any help greatly appreciated.
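
    A minimal sketch of one common fix (it assumes ViewModelBase exposes an OnPropertyChanged(string) helper, which is a guess): subscribe to the wrapped Trade's PropertyChanged event and re-raise a change notification for TradeValue whenever Amount or Price changes.

        public Trade Trade
        {
            get { return _trade; }
            set
            {
                if (_trade != null)
                    _trade.PropertyChanged -= OnTradeChanged;   // detach from the old instance
                SetField(ref _trade, value, () => Trade);
                if (_trade != null)
                    _trade.PropertyChanged += OnTradeChanged;
            }
        }

        private void OnTradeChanged(object sender, PropertyChangedEventArgs e)
        {
            if (e.PropertyName == "Amount" || e.PropertyName == "Price")
                OnPropertyChanged("TradeValue");    // assumed helper on ViewModelBase
        }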

    Read the article

  • Sorting records date-wise in a DataTable

    - by Harikrishna
    I have a DataTable and I am storing records in it. One of the columns in the DataTable is of type DateTime. I added that column in code: Table.Columns.Add(new DataColumn("TradingDate",System.Type.GetType("System.DateTime"))); Now I want to sort the records by date in ascending order. I have used the following code, but the records are not sorted: DataView view = new DataView(Table); view.Sort = "TradingDate ASC"; dataGridView1.DataSource=view.Table; The records are still not sorted, so how do I do it?
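
    A minimal sketch of the likely fix: view.Table is just the original, unsorted DataTable, so bind the DataView itself (or materialise its sorted rows).

        DataView view = new DataView(Table);
        view.Sort = "TradingDate ASC";

        dataGridView1.DataSource = view;              // bind the sorted view, not view.Table
        // or copy the sorted rows into a new table:
        // dataGridView1.DataSource = view.ToTable();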

    Read the article

  • How do you calculate expanding mean on time series using pandas?

    - by mlo
    How would you create column(s) in the below pandas DataFrame where the new columns are the expanding mean/median of 'val' for each 'Mod_ID_x'? Imagine this as if it were time series data, with 'ID' 1-2 on Day 1 and 'ID' 3-4 on Day 2. I have tried every way I could think of but just can't seem to get it right. left4 = pd.DataFrame({'ID': [1,2,3,4],'val': [10000, 25000, 20000, 40000],'Mod_ID': [15, 35, 15, 42], 'car': ['ford','honda', 'ford', 'lexus']}) right4 = pd.DataFrame({'ID': [3,1,2,4],'color': ['red', 'green', 'blue', 'grey'], 'wheel': ['4wheel','4wheel', '2wheel', '2wheel'], 'Mod_ID': [15, 15, 35, 42]}) df1 = pd.merge(left4, right4, on='ID').drop('Mod_ID_y', axis=1)
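
    A minimal sketch with a recent pandas (the question predates the current API, so this is an assumption): group on Mod_ID_x, apply an expanding mean/median to val, and realign the result to the original index.

        import pandas as pd

        left4 = pd.DataFrame({'ID': [1, 2, 3, 4], 'val': [10000, 25000, 20000, 40000],
                              'Mod_ID': [15, 35, 15, 42], 'car': ['ford', 'honda', 'ford', 'lexus']})
        right4 = pd.DataFrame({'ID': [3, 1, 2, 4], 'color': ['red', 'green', 'blue', 'grey'],
                               'wheel': ['4wheel', '4wheel', '2wheel', '2wheel'], 'Mod_ID': [15, 15, 35, 42]})
        df1 = pd.merge(left4, right4, on='ID').drop('Mod_ID_y', axis=1)

        grouped = df1.groupby('Mod_ID_x')['val']
        df1['expanding_mean'] = grouped.expanding().mean().reset_index(level=0, drop=True)
        df1['expanding_median'] = grouped.expanding().median().reset_index(level=0, drop=True)
        print(df1)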

    Read the article

  • SQL Select between two fields depending on the value of one field

    - by Filip
    Hi. I am using a PostgreSQL database, and in a table representing some measurements I have two columns: measurement and interpolated. The first holds the observation (measurement), and the second holds a value interpolated from nearby values. Every record with an original value also has an interpolated value. However, there are a lot of records without "original" observations (NULL); for those, the values are interpolated and stored only in the second column. So basically there are just two cases in the database: Value/Value and NULL/Value. Of course, it is preferable to use the value from the first column when it is available, so I need to build a query that selects the data from the first column and, where it is not available (NULL), returns the value from the second column for the record in question. I have no idea how to build the SQL query. Please help. Thanks.
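
    A minimal sketch, with a hypothetical table name: COALESCE returns its first non-NULL argument, which is exactly this fall-back rule.

        SELECT COALESCE(measurement, interpolated) AS value
        FROM   measurements;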

    Read the article
