Search Results

Search found 6407 results on 257 pages for 'reorder columns'.


  • Doctrine: Mixing YAML markup and db manager (navicat) editing?

    - by ropstah
    I think the answer to this question should be: no. However, I hope to be corrected.
    I'd like to edit our database using a mixture of YAML markup + Doctrine
    createTables() and Navicat editing. Can I maintain the inheritance which is marked
    up? Example (4 steps; at step 4, Doctrine is in no way able to re-create the
    inheritance schema... or is it?):

    Step 1: Create YAML with inheritance:

        ---
        Entity:
          columns:
            username: string(20)
            password: string(16)
            created_at: timestamp
            updated_at: timestamp
        User:
          inheritance:
            extends: Entity
            type: column_aggregation
            keyField: type
            keyValue: 1
        Group:
          inheritance:
            extends: Entity
            type: column_aggregation
            keyField: type
            keyValue: 2

    Step 2: Create tables using Doctrine (and drop/create the database if necessary).
    Generated SQL:

        CREATE TABLE entity (id BIGINT AUTO_INCREMENT, username VARCHAR(20),
            password VARCHAR(16), created_at DATETIME, updated_at DATETIME,
            type VARCHAR(255), PRIMARY KEY(id)) ENGINE = INNODB

    Step 3: Edit the table using Navicat.

    Step 4: Refresh the YAML file because of the 'external' edits...


  • Read the contents of a SQL table into a DataGridView

    - by syedsaleemss
    I'm using a C# .NET Windows Forms application. I have created a database named
    "resources" in SQL Server 2005, and in it a table named "resourcetable" which has
    two columns. I need to read the data in that table into a DataGridView. I can do
    this using a connection string and a query, given the table name. But my problem
    is: I have a button, and when I click it an open dialog should appear in which I
    can select one database among many databases, and within the selected database
    select one table among many tables. That is, select a table of a particular
    database, and display the contents of that table in a DataGridView.
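
    The C#/WinForms plumbing aside, the database legwork here is three queries. A
    minimal sketch in Python with pyodbc (the server name, driver string, and the
    stand-ins for the dialog choices are placeholders, not from the question):

        # List databases, list tables in the chosen database, then pull the
        # chosen table's rows (in the app, db/table come from the dialog).
        import pyodbc

        con = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                             "SERVER=localhost;Trusted_Connection=yes")
        cur = con.cursor()

        dbs = [r.name for r in cur.execute("SELECT name FROM sys.databases")]
        db = dbs[0]                          # stand-in for the user's choice
        cur.execute(f"USE [{db}]")
        tables = [r.TABLE_NAME for r in cur.execute(
            "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
            "WHERE TABLE_TYPE = 'BASE TABLE'")]
        rows = cur.execute(f"SELECT * FROM [{tables[0]}]").fetchall()
        # 'rows' is what would back the DataGridView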


  • Grouping and retrieving most recent entry in a table for each group

    - by Lisa
    First off, please bear with me if I don't state the SQL question correctly. I
    have a table that has multiple columns of data. The selection criteria for my
    table group on column 1 (order #). There can be multiple items on each order,
    but the item #'s are not grouped together. Example:

        Order   Customer  Order Date  Order Time  Item  Quantity
        123456  45        01/02/2010  08:00       140   4
        123456  45        01/02/2010  08:30       270   29
        123456  45        03/03/2010  09:00       140   6
        123456  45        04/02/2010  09:30       140   10
        123456  45        04/02/2010  10:00       270   35

    What I need is a result like:

        Order   Customer  Order Date  Order Time  Item  Quantity
        123456  45        04/02/2010  09:30       140   10
        123456  45        04/02/2010  10:00       270   35

    This result shows that after all the changes the final order includes 10 of item
    140 and 35 of item 270. Is this possible?
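
    A hedged sketch of one way to get that result, using Python's sqlite3 as a
    stand-in for the real database (table and column names are invented;
    ROW_NUMBER() needs SQLite 3.25+ and works the same way in SQL Server, Oracle,
    and MySQL 8):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE orders
                       (ord INTEGER, customer INTEGER, odate TEXT, otime TEXT,
                        item INTEGER, quantity INTEGER)""")
        con.executemany("INSERT INTO orders VALUES (?,?,?,?,?,?)", [
            (123456, 45, "2010-01-02", "08:00", 140, 4),   # dates stored ISO
            (123456, 45, "2010-01-02", "08:30", 270, 29),  # so text sort works
            (123456, 45, "2010-03-03", "09:00", 140, 6),
            (123456, 45, "2010-04-02", "09:30", 140, 10),
            (123456, 45, "2010-04-02", "10:00", 270, 35),
        ])

        # Rank rows within each (order, item) group, newest first, and keep
        # only the top-ranked row per group.
        latest = con.execute("""
            SELECT ord, customer, odate, otime, item, quantity FROM (
                SELECT *, ROW_NUMBER() OVER (
                           PARTITION BY ord, item
                           ORDER BY odate DESC, otime DESC) AS rn
                FROM orders)
            WHERE rn = 1 ORDER BY otime""").fetchall()
        for r in latest:
            print(r)   # the two 2010-04-02 rows, items 140 and 270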


  • Reasons for sticking with TEXT, NTEXT and IMAGE instead of (N)VARCHAR(max) and VARBINARY(max)

    - by John Assymptoth
    TEXT, NTEXT and IMAGE were deprecated a long time ago and will eventually be
    removed from SQL Server. However, they are not going to be discontinued right
    away, not even in the next version of SQL Server, so it's not convenient for my
    enterprise to convert thousands of columns right away, even though it is using
    SQL Server 2012. What arguments can I use to postpone this migration? I know
    there are some advantages to using the new types, but I'm strictly looking for
    reasons not to migrate data that is already functioning pretty well with the old
    types.


  • Database migration

    - by graham.reeds
    We are migrating a client's database schema to our own (both SQL Server). Most
    of the mappings from their schema to ours have been identified, and rules have
    been agreed on where the columns don't exactly align (default values etc.).
    Previously, depending on who was assigned the task, this has been done with
    either a mixture of SQL scripts or one-off VB apps. I was thinking there must be
    an application (commercial or otherwise) where you can assign these
    mappings/rules and have it stream the data across. Surely setting up and
    configuring such a tool would take less effort than creating ad-hoc scripts...
    Is there such an app? Apart from the obvious "be careful", any tips to mitigate
    the stress of a non-DBA porting one schema to another?
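
    For what it's worth, dedicated ETL tools (SSIS ships with SQL Server) do exactly
    this declaratively. The core of the "mappings + rules" idea is small, though; a
    hypothetical sketch in Python/pyodbc, with invented DSNs, table and column names:

        # A dict drives which source column feeds which target column, with a
        # per-column default for when the schemas don't align.
        import pyodbc

        MAPPING = {            # target column -> (source column, default)
            "cust_name":  ("CustomerName", None),
            "cust_since": ("CreatedOn", "1900-01-01"),
            "region":     (None, "UNKNOWN"),   # no source column: default only
        }

        src = pyodbc.connect("DSN=client_db").cursor()
        dst = pyodbc.connect("DSN=our_db").cursor()

        cols = list(MAPPING)
        insert = (f"INSERT INTO customers ({', '.join(cols)}) "
                  f"VALUES ({', '.join('?' * len(cols))})")

        for row in src.execute("SELECT * FROM Clients"):
            rec = dict(zip([d[0] for d in src.description], row))
            dst.execute(insert, [
                rec.get(srccol, default) if srccol else default
                for srccol, default in MAPPING.values()])
        dst.connection.commit()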


  • How can I efficiently retrieve a large number of database settings as PHP variables?

    - by Steven
    Currently all of my script's settings are located in a PHP file which I include.
    I'm in the process of moving these settings (about 100) to a database table
    called 'settings'. However, I'm struggling to find an efficient way of
    retrieving all of them into the file. The settings table has 3 columns: id
    (auto-increments), name, value. Two example rows might be:

        1  admin_user           john
        2  admin_email_address  [email protected]

    The only way I can think of to retrieve each setting is like this:

        $result = mysql_query("SELECT value FROM settings WHERE name = 'admin_user'");
        $row = mysql_fetch_array($result);
        $admin_user = $row['value'];

        $result = mysql_query("SELECT value FROM settings WHERE name = 'admin_email_address'");
        $row = mysql_fetch_array($result);
        $admin_email_address = $row['value'];

    etc., etc. Doing it this way will take up a lot of code and will likely be slow.
    Is there a better way? Thanks.
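
    The usual answer is to load every setting in one query and read them out of an
    associative array. A sketch of the idea in Python, with sqlite3 standing in for
    PHP/MySQL and made-up sample values:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE settings "
                    "(id INTEGER PRIMARY KEY, name TEXT, value TEXT)")
        con.executemany("INSERT INTO settings (name, value) VALUES (?, ?)",
                        [("admin_user", "john"),
                         ("admin_email_address", "john@example.com")])

        # One round trip for all ~100 settings instead of one query each.
        settings = dict(con.execute("SELECT name, value FROM settings"))
        print(settings["admin_user"])   # -> john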


  • PHP CodeIgniter MySQL search query

    - by kalafun
    I want to create a search query on a MySQL database built from 5 different
    strings typed in by the user, querying 5 different table columns with these
    strings. For example, the input fields might be: first name, last name, address,
    post number, city. How should I query the database so that I don't always get
    all the rows? My query is something like this:

        SELECT user_id, username FROM users
        WHERE a LIKE %?% AND b LIKE %?% AND c LIKE %?%
          AND d LIKE %?% AND e LIKE %?%;

    When I exchange the ANDs for ORs I always get all the results, which makes
    sense, and when I use AND I get only the exact matches... Is there any function
    or statement that would help me with this?
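
    One common fix, sketched here in Python: build the WHERE clause only from the
    fields the user actually filled in, and AND those together, so empty inputs stop
    matching everything. (Column names below are invented.)

        fields = {"first_name": "john", "last_name": "", "address": "",
                  "post_number": "1000", "city": ""}

        clauses, params = [], []
        for column, value in fields.items():
            if value.strip():                    # skip empty inputs
                clauses.append(f"{column} LIKE ?")
                params.append(f"%{value}%")

        sql = "SELECT user_id, username FROM users"
        if clauses:
            sql += " WHERE " + " AND ".join(clauses)
        print(sql, params)
        # SELECT user_id, username FROM users
        #   WHERE first_name LIKE ? AND post_number LIKE ?   ['%john%', '%1000%']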


  • Which index is used in select and why?

    - by Lukasz Lysik
    I have a table of ZIP codes with the following columns:

        id   - PRIMARY KEY
        code - NONCLUSTERED INDEX
        city

    When I execute the query:

        SELECT TOP 10 * FROM ZIPCodes

    I get the results sorted by the id column. But when I change the query to:

        SELECT TOP 10 id FROM ZIPCodes

    I get the results sorted by the code column. Again, when I change the query to:

        SELECT TOP 10 code FROM ZIPCodes

    I get the results sorted by the code column again. And finally, when I change it
    to:

        SELECT TOP 10 id, code FROM ZIPCodes

    I get the results sorted by the id column. My question is in the title: I know
    which indexes are used in the queries, but why are those indexes used? In the
    second query (SELECT TOP 10 id FROM ZIPCodes), wouldn't it be faster if the
    clustered index were used? How does the query engine choose which index to use?


  • Example of user-defined integrity rule in database systems?

    - by Pavel
    Hey everyone. I'm currently preparing for my exams and would like to know some
    examples of user-defined integrity rules in database systems. As far as I
    understand, it means that I can set up certain conditions on the columns, and
    when data is inserted it needs to fulfil these conditions. For example: if I set
    up a rule that an ID needs to consist of 5 integers only, then when I insert a
    row whose ID is made up of integers and some characters, it won't be accepted
    and an error will be returned. Could someone confirm this and give me some
    opinion on it? Thank you very much in advance!
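
    That understanding is essentially right; in SQL the textbook mechanism is a
    CHECK constraint (or a trigger, for more complex rules). A small sketch, using
    SQLite syntax with invented table and column names:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE student (
                           id   TEXT CHECK (id GLOB '[0-9][0-9][0-9][0-9][0-9]'),
                           name TEXT)""")
        con.execute("INSERT INTO student VALUES ('12345', 'ok')")       # accepted
        try:
            con.execute("INSERT INTO student VALUES ('12a45', 'bad')")  # rejected
        except sqlite3.IntegrityError as e:
            print("rejected:", e)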


  • Selecting data effectively in SQL

    - by learner135
    Hi, I have a very large table with over 1000 records and 200 columns. When I try
    to retrieve records matching some criteria in the WHERE clause using a SELECT
    statement, it takes a lot of time. But most of the time I just want to select a
    single record that matches the criteria in the WHERE clause rather than all the
    records. I guess there should be a way to select just a single record and exit,
    which would minimize the retrieval time. I tried ROWNUM=1 in the WHERE clause,
    but it didn't really work because I guess the engine still checks all the
    records even after finding the first record matching the WHERE criteria. Is
    there a way to optimize the case where I want to select just a few records?
    Thanks in advance. Edit: I am using Oracle 10g.


  • Return cell reference as result of if statement with vlookups.

    - by EMJ
    I have two sets of data in Excel. One contains records representing the initial
    step of a process. The other represents the additional steps which take place
    after the first step is completed. Each record in the "additional step data" has
    an id in a column. I need to find the identifying codes of the "additional step
    data" which correspond to the initial step data records. The problem is that I
    have to match the data in 4 columns between the two data sets and return the id
    of the "additional step data". I started with a combination of IF and VLOOKUP
    functions, but I got stuck when I tried to figure out how to get the IF
    statement to reference the id of the matching "additional step data". Basically
    I am trying to avoid having to find corresponding records by manually filtering
    between the two sets of data. Does anyone have any idea how to do this?
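
    Outside of Excel formulas, this is a four-column join. A pandas sketch of the
    same lookup (column names invented for the example):

        import pandas as pd

        initial = pd.DataFrame({"a": [1, 2], "b": ["x", "y"],
                                "c": [10, 20], "d": ["p", "q"]})
        additional = pd.DataFrame({"a": [2, 1], "b": ["y", "x"],
                                   "c": [20, 10], "d": ["q", "p"],
                                   "step_id": ["S7", "S3"]})

        # Left-join on all four matching columns; each initial-step row picks
        # up the id of its corresponding additional-step record (NaN if none).
        matched = initial.merge(additional, on=["a", "b", "c", "d"], how="left")
        print(matched)

    In Excel proper, the analogous trick is a multi-criteria INDEX/MATCH entered as
    an array formula.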


  • Using regular expressions / removing special characters with LINQ to SQL

    - by Prasad
    How can I use regular expressions with LINQ to SQL in an ASP.NET MVC (C#)
    application? The columns in my MSSQL table (Products) contain some special
    characters like (, - % ' etc. When searching for a product I need to search
    without those special characters. For example: I have a product called
    (ABC-Camp's / One); when I search for "abccamp", it should find the product. I
    am trying a query like:

        from p in _context.pu_Products
        where p.User_Id == userId
            && p.Is_Deleted == false
            && p.Product_Name.ToLower().Contains(text.ToLower())
        select new Product
        {
            ProductId = p.Product_Id,
            ProductName = p.Product_Name.Replace("’", "").Replace("\"", ""),
            RetailPrice = p.Retail_Price ?? 0M,
            LotSize = p.Lot_Size > 0 ? p.Lot_Size ?? 1 : 1,
            QuantityInHand = p.Quantity_In_Hand ?? 0
        }

    But I need to search without any special characters...
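
    LINQ to SQL won't translate Regex calls into T-SQL, so the usual workaround is
    to normalize both sides, either with nested REPLACEs in the query (as above) or
    in memory. A sketch of the normalization idea in Python:

        import re

        def normalize(s: str) -> str:
            """Lower-case and drop everything except letters and digits."""
            return re.sub(r"[^a-z0-9]", "", s.lower())

        products = ["ABC-Camp's / One", "X-Ray, 2% Plus"]
        needle = normalize("abccamp")
        hits = [p for p in products if needle in normalize(p)]
        print(hits)   # -> ["ABC-Camp's / One"]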


  • Use LINQ to insert data from dataset to SQL

    - by Mayo
    Let's say I have a DataSet in an ASP.NET website (.NET 3.5) with 5 tables, each
    with roughly 30,000 rows and an average of 12 columns. I want to insert all of
    the data from the DataSet into 5 very-similar-but-not-quite-identical tables in
    SQL Server 2008. I also want to use LINQ (personal preference; trying to learn
    something new). Is it as simple as iterating through the DataSet and, for each
    row, creating a new instance of the associated class, initializing its data with
    the DataSet's row, adding it to the data model, and then doing one giant
    SubmitChanges at the end? Are there better ways of doing this with LINQ? Or is
    this the de facto standard?


  • JPA @Version - can it be used to calculate the version of a table entry?

    - by OpenSource
    Hi, please consider the following table (created using a corresponding entity):

        request
        -------
        id  requestor  type  version  items
        1   a          t1    1        5
        2   a          t1    2        3
        3   b          t1    1        2
        4   a          t2    1        4
        5   a          t1    3        9

    The above is what I want to achieve. The version field is a calculated field;
    the others are user-provided. Basically the request's version needs to be
    calculated based on the combination of requestor and type: the first occurrence
    of a given combination gets version 1, then version 2, and so on. I tried
    various things using @Version on a different entity with just the three columns
    and joining the two entities using @ManyToOne etc., but I'm not able to get the
    desired outcome. I don't want to confuse you with the things I tried; since the
    objective is simple, there should be an easier way, I suppose? Can you please
    help? Any help greatly appreciated! Thanks in advance.
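
    JPA's @Version is for optimistic-lock counters, not business version numbers,
    so it won't produce this. One common alternative is to compute the next number
    at insert time; a sketch with Python's sqlite3 standing in for the real
    database:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE request
                       (id INTEGER PRIMARY KEY, requestor TEXT, type TEXT,
                        version INTEGER, items INTEGER)""")

        def add_request(requestor, rtype, items):
            # next version = 1 + highest existing version for this
            # (requestor, type) pair; COALESCE handles the first occurrence
            con.execute("""INSERT INTO request (requestor, type, version, items)
                           SELECT ?, ?, COALESCE(MAX(version), 0) + 1, ?
                           FROM request WHERE requestor = ? AND type = ?""",
                        (requestor, rtype, items, requestor, rtype))

        for args in [("a", "t1", 5), ("a", "t1", 3), ("b", "t1", 2),
                     ("a", "t2", 4), ("a", "t1", 9)]:
            add_request(*args)
        print(con.execute("SELECT * FROM request").fetchall())
        # versions come out 1, 2, 1, 1, 3 as in the table above

    Under concurrent inserts this pattern needs a unique (requestor, type, version)
    constraint or equivalent locking to stay safe.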


  • Why does this happen with Excel sheet reading?

    - by Lalit
    Hi, I am reading an Excel sheet cell by cell via the C# interop services. The
    sheet has date cells, which come back as double values that I convert to dates
    like this:

        double dbl = Convert.ToDouble(
            ((Excel.Range)worksheet.Cells[iRowindex, colIndex_q17]).Value2);
        string strDate3 = DateTime.FromOADate(dbl).ToShortDateString();
        drRow[dtSourceEXLData.Columns[constants.VisualDate]] = strDate3;

    But sometimes the value of ((Excel.Range)worksheet.Cells[iRowindex,
    colIndex_q17]).Value2 comes back already in date format. Why is this happening?
    It throws an exception of "input string not in correct format". Why is it not
    producing a double value like the other cells of the same column? Please guide
    me.
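
    For reference, the doubles Excel hands back are OLE Automation dates: days since
    1899-12-30, with the fraction as time of day, which is exactly what
    DateTime.FromOADate decodes. A sketch of the same conversion in Python:

        from datetime import datetime, timedelta

        def from_oadate(serial: float) -> datetime:
            # OADate 0 is 1899-12-30; fractional days carry the time of day
            return datetime(1899, 12, 30) + timedelta(days=serial)

        print(from_oadate(40299.5))   # -> 2010-05-01 12:00:00

    Cells formatted as text rather than as dates come through Value2 as strings,
    which is the likely source of the "input string not in correct format"
    exception.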


  • Replacing Text which does not match a pattern in Oracle

    - by kutekrish
    I have the text below in a CLOB in a table.

        Table name: tbl1
        Columns:
          col1 - number (primary key)
          col2 - clob (as below)

        Row #1
        ------
        col1 = 1
        col2 = 1331882981,ab123456,Some text here which can run multiple lines
               and have a lot of text... ~1331890329,pqr123223,Some more text...

        Row #2
        ------
        col1 = 2
        col2 = 1331882981,abc333,Some text here which can run multiple lines
               and have a lot of text... ~1331890329,pqrs23,Some more text...

    Now I need to know how to get the output below:

        Col1  Value
        ----  ---------------------
        1     1331882981,ab123456
        1     1331890329,pqr123223
        2     1331882981,abc333
        2     1331890329,pqrs23

    ([0-9]{10},[a-z 0-9]+.), is the regular expression to match "1331890329,pqrs23".
    I need to know how I can remove the text that does not match this regex and then
    split the matches into multiple rows.
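
    In Oracle this is typically done with REGEXP_SUBSTR pulling out the Nth match
    and CONNECT BY LEVEL fanning the matches into rows. The matching/splitting logic
    itself, sketched in Python against the sample CLOB:

        import re

        clob = ("1331882981,ab123456,Some text here which can run multiple "
                "lines and have a lot of text... ~1331890329,pqr123223,"
                "Some more text...")

        # Keep only the "timestamp,code" head of each '~'-separated chunk.
        pairs = re.findall(r"\d{10},[a-z0-9]+", clob, flags=re.IGNORECASE)
        print(pairs)   # -> ['1331882981,ab123456', '1331890329,pqr123223']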


  • How to detect the length of a numpy array with only one element?

    - by mishaF
    I am reading in a file using numpy.genfromtxt, which brings in columns of both
    strings and numeric values. One thing I need to do is detect the length of the
    input. This is all fine provided more than one value is read into each array.
    But... if there is only one element in the resulting array, the logic fails. I
    can recreate an example here:

        import numpy as np
        a = np.array(2.3)
        len(a)

    returns an error saying:

        TypeError: len() of unsized object

    However, if a has 2 or more elements, len() behaves as one would expect:

        import numpy as np
        a = np.array([2.3, 3.6])
        len(a)

    returns 2. My concern is that, short of some strange exception handling, I
    can't distinguish between a being empty and a having length 1.
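
    np.array(2.3) builds a 0-d array, and len() is only defined for arrays with
    ndim >= 1; size and atleast_1d distinguish all three cases without any exception
    handling:

        import numpy as np

        for a in (np.array(2.3), np.array([2.3]), np.array([])):
            a1 = np.atleast_1d(a)      # promotes 0-d scalars to shape (1,)
            print(a.ndim, a.size, len(a1))
        # 0 1 1   <- scalar read: one element, not empty
        # 1 1 1   <- one-element vector
        # 1 0 0   <- genuinely empty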


  • Transform LINQ Dataset into a Matrix for export

    - by Mad Halfling
    Hi folks, I've got a data table whose columns include Item, Category and Value
    (and others, but those are the only ones relevant to this problem), which I
    access via LINQ in a C# ASP.NET MVC app. I want to transform this into a matrix
    and output it as a CSV file to pull into Excel: items down the side, categories
    across the top, and the values in the cells. However, I don't know how many
    categories there will be in this table, or what they are, nor will there always
    be a record for each item/category combination. I've written this by looping
    round: getting my "master category" list, then looking again for each item and
    filling in either a blank or the Value, depending on whether the item/category
    record exists. But as there are currently 27,000 records in the table, this
    isn't as fast as I'd like. Is there a slicker and faster way to do this, maybe
    via LINQ (firing a smarter SQL statement so the DB server can do the leg-work),
    or will any method essentially come back to what I am doing? Thx, MH
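
    The reshaping being described is a pivot. A sketch of the target computation in
    Python/pandas (a hypothetical frame stands in for the LINQ result; missing
    item/category combinations come out blank in the CSV):

        import pandas as pd

        records = pd.DataFrame({"Item":     ["i1", "i1", "i2"],
                                "Category": ["c1", "c2", "c1"],
                                "Value":    [10, 20, 30]})

        # Items down the side, categories across the top, values in the cells.
        matrix = records.pivot(index="Item", columns="Category", values="Value")
        matrix.to_csv("matrix.csv")
        print(matrix)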


  • How to change data structure in mysql using mysqldump without deleting files

    - by Don Quixote
    Essentially what I'm trying to do is sync a production server with a sandbox
    server, but only the table structures and stored procedures. The procedures
    aren't any problem since they can be overridden; the problem is the tables. I
    want to sync and alter their structures on the production server using mysqldump
    (or any other way you can propose) without altering any existing data. If it
    helps, I only want to add more columns, not remove any existing ones. Also, I am
    using SQLyog. Is there any way to do this?
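
    mysqldump does have a schema-only mode: --no-data dumps just the CREATE
    statements (and --routines brings the stored procedures along). Note that the
    dump still contains DROP TABLE statements by default, so in practice it is
    diffed against the target and turned into ALTERs rather than replayed verbatim.
    A sketch with hypothetical host and user names:

        import subprocess

        with open("prod_schema.sql", "w") as out:
            subprocess.run(
                ["mysqldump", "--no-data", "--routines",
                 "-h", "prod-host", "-u", "sync_user", "-p",  # -p prompts
                 "production_db"],
                stdout=out, check=True)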


  • Error " Index exceeds Matrix dimensions"

    - by Mola
    Hi experts, I am trying to read an Excel 2003 file which consists of 62 columns
    and 2000 rows, and then draw a 2-D dendrogram from the 2000 patterns of 2
    categories of data as my plot in MATLAB. When I run the script, it gives me the
    above error, and I don't know why. Does anybody have any idea why I get this
    error? My data is here: http://rapidshare.com/files/383549074/data.xls (please
    delete the 2001 column if you want to use the data for testing). My code is
    here:

        % Script file: cluster_2d_data.m
        d = 2000; n1 = 22; n2 = 40; N = 62;
        Data = xlsread('data.xls', 'A1:BJ2000');
        X = Data';
        R = 1:2000;
        C = 1:2;
        clustergram(X, 'Pdist', 'euclidean', 'Linkage', 'complete', ...
            'Dimension', 2, 'ROWLABELS', R, 'COLUMNLABELS', C, ...
            'Dendrogram', {'color', 5})


  • How to coerce type of ActiveRecord attribute returned by :select phrase on joined table?

    - by tribalvibes
    Having trouble with ActiveRecord 2.3.5, e.g.:

        users = User.all(
          :select => "u.id, c.user_id",
          :from => "users u, connections c",
          :conditions => ...
        )

    This returns, e.g.:

        => [#<User id: 1000>]
        >> users.first.attributes
        => {"id"=>1000, "user_id"=>"1000"}

    Note that AR returns the id of the model searched on as numeric, but the
    selected user_id of the joined model as a String, although both are int(11) in
    the database schema. How could I better form this type of query so as to select
    columns from the tables backing multiple models and retrieve their natural
    types rather than String? It seems like AR is punting on this somewhere. How
    could I coerce the returned types at AR load time and not have to tack .to_i
    (etc.) onto every post-hoc access?


  • Oracle: call a stored procedure inside a SELECT

    - by CC
    Hi everybody. I'm working on a query (a SELECT) whose result I need to insert
    into a table. Before doing the insert I have some checking to do, and if all the
    columns are valid, I will do the insert. The checking is done in a stored
    procedure that is also used somewhere else, so I'm thinking of using the same
    procedure for my checks; it does the checking and inserts the values if all is
    OK. I tried to call the procedure inside my SELECT, but it does not work:

        SELECT field1, field2, myproc(field1, field2) FROM MYTABLE;

    This kind of code does not work. I think it can be done using a cursor, but I
    would like to avoid cursors. I'm looking for the easiest solution. Anybody have
    any ideas? Thanks a lot. C.C.


  • How to store data to Excel from DataSet without going cell-by-cell?

    - by Jason Barnwell
    Duplicate of: What’s the simplest way to import a System.Data.DataSet into
    Excel? Using C# under VS2008, we can create an Excel app, workbook, and then
    worksheet fine by doing this:

        Application excelApp = new Application();
        Workbook excelWb = excelApp.Workbooks.Add(template);
        Worksheet excelWs = (Worksheet)this.Application.ActiveSheet;

    Then we can access each cell via excelWs.Cells[i,j] and write/save without
    problems. However, with large numbers of rows/columns, we are expecting a loss
    in efficiency. Is there a way to "data bind" from a DataSet object into the
    worksheet without using the cell-by-cell approach? Most of the methods we have
    seen at some point revert to the cell-by-cell approach. Thanks for any
    suggestions.


  • SQL: joining multiple tables into one

    - by Graveen
    I have 4 tables: r1, r2, r3 and r4. The table columns are the following:

        rId | rName

    I want to end up with a single table; let's call it R. Obviously, R will have
    the following structure:

        rTableName | rId | rName

    I'm looking for a solution, and the most natural one for me is to:

        1. add a single column to every rX table
        2. insert into this column the name of the table I'm processing
        3. generate the SQL statements and concatenate them all

    Although I see exactly how to perform steps 1 and 3 with batching, editing, etc.
    (I only have to perform this once and for all), I don't see how to do step 2:
    getting the table's own name to insert into the SQL. Do you have an idea, or a
    different approach that could solve my problem? Note: in fact, there are 250+ rX
    tables; that's why I can't do this manually. Note 2: precisely, this is with
    MySQL.
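
    With 250+ tables, the usual approach is to generate the statements rather than
    write them: each SELECT supplies the table's own name as a string literal for
    rTableName, so no extra column on the rX tables is needed. A sketch in Python
    (in real use the table list would come from information_schema.tables):

        tables = [f"r{i}" for i in range(1, 251)]   # r1 .. r250

        selects = [f"SELECT '{t}' AS rTableName, rId, rName FROM {t}"
                   for t in tables]
        sql = ("INSERT INTO R (rTableName, rId, rName)\n"
               + "\nUNION ALL\n".join(selects))
        print(sql.splitlines()[1])
        # SELECT 'r1' AS rTableName, rId, rName FROM r1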


  • How can I neatly clean my R workspace while preserving certain objects?

    - by briandk
    Suppose I'm messing about with some data by binding vectors together, as I'm
    wont to do on a lazy Sunday afternoon:

        x <- rnorm(25, mean = 65, sd = 10)
        y <- rnorm(25, mean = 75, sd = 7)
        z <- 1:25
        dd <- data.frame(mscore = x, vscore = y, caseid = z)

    I've now got my new data frame dd, which is wonderful. But there's also still
    the detritus from my prior slicings and dicings:

        > ls()
        [1] "dd" "x"  "y"  "z"

    What's a simple way to clean up my workspace if I no longer need my "source"
    columns, but I want to keep the data frame? That is, now that I'm done
    manipulating data I'd like to just have dd and none of the smaller variables
    that might inadvertently mask further analysis:

        > ls()
        [1] "dd"

    I feel like the solution must be of the form rm(ls[ -(dd) ]) or something, but I
    can't quite figure out how to say "please clean up everything BUT the following
    objects."

