Search Results

Search found 28052 results on 1123 pages for 't sql tuesday'.


  • Oracle SQLPlus: How to display the output of a sqlplus command without having to first issue the spool off command

    - by ziggy
    Is there a way to display the output of a sqlplus command without having to first issue the spool off command? I am spooling the results of a sqlplus session to a file while at the same time tailing the file. The reason for this is that for tables with very long rows the output is easier to read from a file. The problem is that to see the output I have to issue the spool off command every time I run a command in sqlplus. Can I configure sqlplus so that, after I have issued the spool command, all the output is viewable in the file straight away? (Changing the way the rows are displayed on the screen is not an option.) Thanks
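
    For what it's worth, one workaround is to close and immediately reopen the spool file after each statement, so the buffered output is flushed to disk for the tail session. A minimal sketch, assuming a SQL*Plus release that supports SPOOL ... APPEND (10g and later); the file name and query are illustrative:

        SPOOL /tmp/session.lst
        SELECT * FROM wide_table;          -- hypothetical query with long rows
        SPOOL OFF                          -- flushes the buffer so "tail -f /tmp/session.lst" sees the output
        SPOOL /tmp/session.lst APPEND      -- reopen the same file without truncating it
        SELECT * FROM wide_table WHERE id = 1;
        SPOOL OFF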

    Read the article

  • How do I create a check constraint?

    - by Zack Peterson
    Please imagine this small database...

    Diagram

    Tables

        Volunteer       Event           Shift           EventVolunteer
        =========       =====           =====           ==============
        Id              Id              Id              EventId
        Name            Name            EventId         VolunteerId
        Email           Location        VolunteerId
        Phone           Day             Description
        Comment         Description     Start
                                        End

    Associations

    Volunteers may sign up for multiple events. Events may be staffed by multiple volunteers. An event may have multiple shifts. A shift belongs to only a single event. A shift may be staffed by only a single volunteer. A volunteer may staff multiple shifts.

    Check Constraints

    Can I create a check constraint to enforce that no shift is staffed by a volunteer that's not signed up for that shift's event? Can I create a check constraint to enforce that two overlapping shifts are never staffed by the same volunteer?
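
    For what it's worth, a plain CHECK constraint can only see columns of its own row, but SQL Server (assumed here, since the question doesn't name the DBMS) lets a CHECK call a scalar user-defined function that queries other tables. A minimal sketch of the first rule; the function and constraint names are illustrative:

        CREATE FUNCTION dbo.fn_VolunteerSignedUpForEvent (@EventId int, @VolunteerId int)
        RETURNS bit
        AS
        BEGIN
            IF EXISTS (SELECT 1 FROM EventVolunteer
                       WHERE EventId = @EventId AND VolunteerId = @VolunteerId)
                RETURN 1;
            RETURN 0;
        END;
        GO

        ALTER TABLE Shift
        ADD CONSTRAINT CK_Shift_VolunteerSignedUp
        CHECK (dbo.fn_VolunteerSignedUpForEvent(EventId, VolunteerId) = 1);

    Note that such a constraint is only checked when Shift rows are inserted or updated, not when rows disappear from EventVolunteer, which is why triggers are the more common tool for both of these rules (including the overlapping-shift check).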

    Read the article

  • Wiki Database, is there one?

    - by Faiz
    I was searching the net for something like a wiki database - just like Wikipedia, but one that stores structured content, editable by users. What I was looking for was an online database accessible by everyone, where people can design the schema and the data, with proper versioning of both. I couldn't find any such site. I am not sure if it is my search skills or if there really is no wiki database as of now. Does anyone out there know of anything like this? I think there is great potential for something like this. A possible example would be a website with a GUI for querying a MySQL DB, where any website visitor can create DB objects and populate data.

    Read the article

  • Is it possible to "merge" the values of multiple records into a single field without using a stored procedure?

    - by j0rd4n
    A co-worker posed this question to me, and I told them, "No, you'll need to write a sproc for that". But I thought I'd give them a chance and put this out to the community. Essentially, they have a table with keys mapping to multiple values. For a report, they want to aggregate on the key and "mash" all of the values into a single field. Here's a visual:

        Key | Value
        ----+------
        1   | A
        1   | B
        1   | C
        2   | X
        2   | Y

    The result would be as follows:

        Key | Value
        ----+------
        1   | A,B,C
        2   | X,Y

    They need this in SQL Server 2005. Again, I think they need to write a stored procedure, but if anyone knows a magic out-of-the-box function that does this, I'd be impressed.
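
    For what it's worth, SQL Server 2005 can do this without a stored procedure by using FOR XML PATH('') for string concatenation. A minimal sketch, assuming the table is called KeyValue with columns [Key] and [Value]:

        SELECT k.[Key],
               STUFF((SELECT ',' + v.[Value]
                      FROM KeyValue v
                      WHERE v.[Key] = k.[Key]
                      FOR XML PATH('')), 1, 1, '') AS [Value]
        FROM (SELECT DISTINCT [Key] FROM KeyValue) k;

    STUFF removes the leading comma; be aware that FOR XML PATH('') will XML-encode characters such as & and < if they appear in the values.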

    Read the article

  • PL/pgSQL: How to return a record from a function executed by an INSERT/UPDATE rule?

    - by seas
    I have the following schema in my database:

        create sequence data_sequence;

        create table data_table (
          id integer primary key,
          field varchar(100)
        );

        create view data_view as select id, field from data_table;

        create function data_insert(_new data_view) returns data_view as $$
        declare
          _id integer;
          _result data_view%rowtype;
        begin
          _id := nextval('data_sequence');
          insert into data_table(id, field) values(_id, _new.field);
          select * into _result from data_view where id = _id;
          return _result;
        end;
        $$ language plpgsql;

        create rule "insert" as on insert to data_view do instead select data_insert(new);

    Then I type in psql:

        insert into data_view(field) values('abc');

    I would like to see something like:

         id | field
        ----+-------
          1 | abc

    Instead I see:

         data_insert
        -------------
         (1,"abc")

    Is it possible to fix this somehow? Thanks for any ideas. The ultimate idea is to use this in other functions, so that I can obtain the id of a just-inserted record without selecting for it from scratch. Something like:

        insert into data_view(field) values('abc') returning id into my_variable;

    would be nice, but doesn't work, with the error:

        ERROR:  cannot perform INSERT RETURNING on relation "data_view"
        HINT:   You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause.

    I don't really understand that HINT. I use PostgreSQL 8.4.
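
    For what it's worth, the HINT is asking for a plain rewriting rule instead of a call to a function. A minimal sketch of an unconditional DO INSTEAD rule with a RETURNING clause, which should make both the tabular psql output and insert ... returning work against the view (the rule name is illustrative):

        create or replace rule data_view_insert as
        on insert to data_view do instead
          insert into data_table (id, field)
          values (nextval('data_sequence'), new.field)
          returning id, field;

    This replaces the original rule (drop it first), and the data_insert() function is then no longer needed for plain inserts.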

    Read the article

  • javax.sql.DataSource.getConnection() locks system

    - by Ryan Elkins
    I'm using the Apache Commons DBCP library for connection pooling in a desktop application. I've done this before and never had a problem, but the latest application has started sometimes locking up on the call to getConnection() on my DataSource. The application just hangs after that call. I'm closing my resources when I'm done with them. Is there any known reason why this might happen? I'm not even sure where to begin troubleshooting this, now that I've got it narrowed down to this method. It doesn't always hang - sometimes it happens fairly quickly, sometimes it takes a long time. Sometimes it doesn't happen at all, although lately I can get it to happen within a few minutes.

    Read the article

  • Entity Framework self-referencing entity deletion

    - by Viktor
    Hello. I have a structure of folders like this:

        Folder1
          Folder1.1
          Folder1.2
        Folder2
          Folder2.1
            Folder2.1.1

    and so on. The question is how to cascade delete them (i.e. when Folder2 is removed, all of its children are also deleted). I can't set an ON DELETE action because MSSQL does not allow it on a self-referencing foreign key. Can you give some suggestions?

    UPDATE: I wrote this stored procedure - can I just leave it as is, or does it need some modifications?

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE PROCEDURE sp_DeleteFoldersRecursive
            @parent_folder_id int
        AS
        BEGIN
            SET NOCOUNT ON;

            IF @parent_folder_id = 0 RETURN;

            CREATE TABLE #temp (fid INT);
            DECLARE @Count INT;

            INSERT INTO #temp (fid)
            SELECT FolderId FROM Folders WHERE FolderId = @parent_folder_id;
            SET @Count = @@ROWCOUNT;

            WHILE @Count > 0
            BEGIN
                INSERT INTO #temp (fid)
                SELECT FolderId FROM Folders
                WHERE EXISTS (SELECT FolderId FROM #temp WHERE Folders.ParentId = #temp.fid)
                  AND NOT EXISTS (SELECT FolderId FROM #temp WHERE Folders.FolderId = #temp.fid);
                SET @Count = @@ROWCOUNT;
            END

            DELETE Folders
            FROM Folders
            INNER JOIN #temp ON Folders.FolderId = #temp.fid;

            DROP TABLE #temp;
        END
        GO
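
    For what it's worth, on SQL Server 2005 and later the WHILE loop can be replaced by a recursive common table expression that collects the subtree in one statement. A minimal sketch using the table and column names from the question; it is a plain SQL rewrite, not Entity Framework specific:

        WITH folder_tree AS (
            SELECT FolderId FROM Folders WHERE FolderId = @parent_folder_id
            UNION ALL
            SELECT f.FolderId
            FROM Folders f
            INNER JOIN folder_tree t ON f.ParentId = t.FolderId
        )
        DELETE FROM Folders
        WHERE FolderId IN (SELECT FolderId FROM folder_tree);

    The entities still have to be removed from (or refreshed in) the Entity Framework context afterwards, since the delete happens outside its change tracking.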

    Read the article

  • Is there a PHP library that performs MySQL Data Validation and Sanitization According to Column Type

    - by JW
    Do you know of any open source library or framework that can perform some basic validation and escaping functionality for a MySQL DB? I envisage something along the lines of:

        // give it something to perform the quote() / quoteInto() methods
        $lib->setSanitizor($MyZend_DBAdaptor);

        // tell it the structure of the table - column names/types etc.
        $lib->setTableDescription($tableDescArray);

        // use it to validate and escape according to column type
        foreach ($prospectiveData as $colName => $rawValue) {
            if ($lib->isValid($colName, $rawValue)) {
                // add it to the set clause
                $setValuesArray[$lib->escapeIdentifier($colName)] = $lib->getEscapedValue($colName, $rawValue);
            } else {
                throw new Exception($colName->getErrorMessage());
            }
        }
        // etc...

    I have looked into Zend_Db_Table (which knows about a table's description) and Zend_Db_Adaptor (which knows how to escape/sanitize values depending on TYPE), but they do not automatically do any clever stuff during updates/inserts. Does anyone know of a good PHP library to perform this kind of validation that I could use rather than writing my own? I envisage a lot of this kind of stuff:

        ...
        } elseif (eregi('^INT|^INTEGER', $dataset_element_arr['col_type'])) {
            $datatype = 'int';
            if (eregi('unsigned', $dataset_element_arr['col_type'])) {
                $int_max_val = 4294967296;
                $int_min_val = 0;
            } else {
                $int_max_val = 2147483647;
                $int_min_val = -2147483648;
            }
        }

    (p.s. I know eregi is deprecated - it's just an example of laborious code)

    Read the article

  • Database with users design

    - by 1110
    I am in the database design phase of development. The application will work with a large number of users (LARGE :)). I have designed 80% of the database, but I have one Users table which is connected to everything else:

        Users {UserId, FirstName, LastName, Username, Password, PasswordQuestion, PasswordAnswer, Gender, RoleId, LastLoginDate, etc.}

    I saw the asp.net membership database structure, where Users and Membership are two tables. My questions are:

    1. Should I use one users table with all user data in it, or more tables?
    2. If the answer is 'more tables', which tables should I use?
    3. Any advice on how to structure the relation between those tables?

    This is a sample relation that I have and am trying to improve. I don't understand why user and userChild are separate tables?
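
    For what it's worth, the asp.net membership layout mentioned above essentially splits profile data from authentication data. A minimal sketch of that shape - all table, column and type choices here are illustrative, not the actual membership schema:

        -- profile data the application reads and joins against frequently
        CREATE TABLE Users (
            UserId     int IDENTITY PRIMARY KEY,
            FirstName  nvarchar(50),
            LastName   nvarchar(50),
            Gender     char(1),
            RoleId     int
        );

        -- credential data, one row per user, touched mainly at login time
        CREATE TABLE Membership (
            UserId            int PRIMARY KEY REFERENCES Users(UserId),
            Username          nvarchar(50) NOT NULL UNIQUE,
            PasswordHash      varbinary(64) NOT NULL,
            PasswordQuestion  nvarchar(200),
            PasswordAnswer    nvarchar(200),
            LastLoginDate     datetime
        );

    The main argument for the split is that the rarely-read credential columns stay out of the row every other table joins to; a single Users table is also perfectly workable at moderate scale.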

    Read the article

  • Sharepoint OLE DB - cannot insert records? "Field not updateable" error

    - by Pandincus
    I need to write a simple C# .NET application to retrieve, update, and insert some data in a Sharepoint list. I am NOT a Sharepoint developer, and I don't have control over our Sharepoint server. I would prefer not to have to develop this in a proper sharepoint development environment simply because I don't want to have to deploy my application on the Sharepoint server -- I'd rather just access data externally. Anyway, I found out that you can access Sharepoint data using OLE DB, and I tried it successfully using some ADO.NET:

        var db = DatabaseFactory.CreateDatabase();
        DataSet ds = new DataSet();
        using (var command = db.GetSqlStringCommand("SELECT * FROM List"))
        {
            db.LoadDataSet(command, ds, "List");
        }

    The above works. However, when I try to insert:

        using (var command = db.GetSqlStringCommand(
            "INSERT INTO List ([HeaderName], [Description], [Number]) VALUES ('Blah', 'Blah', 100)"))
        {
            db.ExecuteNonQuery(command);
        }

    I get this error:

        Cannot update 'HeaderName'; field not updateable.

    I did some Googling and apparently you cannot insert data through OLE DB! Does anyone know if there are some possible workarounds? I could try using Sharepoint Web Services, but I tried that initially and was having a heck of a time authenticating. Is that my only option?

    Read the article

  • Sync a WinForm with a DataGridView

    - by Ruben Trancoso
    I have a Form with a DataGridView whose DataSource is a BindingSource to a table. This view will have single-row selection, a button to delete or edit the currently selected row in a popup Form, and an insert button that will use the same Form as well. My question is: how can I sync the popup Form with the current row? I tried to use the RowStateChanged event to get and store the currently selected row to be used in the Form, but I couldn't - after the event I get the row that was selected before. Another thing I don't yet understand in C# is how to have a single record set and know which is the current record, even if it is a new one being inserted, so that once in the Form all data being entered shows up at the same time in the DataGridView.

    Read the article

  • Historical / auditable database

    - by Mark
    Hi all,

    This question is related to the schema that can be found in one of my other questions here. Basically, in my database I store users, locations, and sensors, amongst other things. All of these things are editable in the system by users, and deletable. However, when an item is edited or deleted I need to store the old data; I need to be able to see what the data was before the change.

    There are also non-editable items in the database, such as "readings". They are more of a log, really. Readings are logged against sensors, because each reading is for a particular sensor. If I generate a report of readings, I need to be able to see what the attributes of a location or sensor were at the time of the reading. Basically, I should be able to reconstruct the data for any point in time.

    Now, I've done this before and got it working well by adding the following columns to each editable table:

        valid_from
        valid_to
        edited_by

    If valid_to = 9999-12-31 23:59:59 then that's the current record. If valid_to equals valid_from, then the record is deleted. However, I was never happy with the triggers I needed to use to enforce foreign key consistency.

    I can possibly avoid triggers by using the extension to the PostgreSQL database that provides a column type called "period", which allows you to store a period of time between two dates and then add CHECK constraints to prevent overlapping periods. That might be an answer.

    I am wondering, though, if there is another way. I've seen people mention using special historical tables, but I don't really like the thought of maintaining 2 tables for almost every 1 table (though it still might be a possibility). Maybe I could cut down my initial implementation to not bother checking the consistency of records that aren't "current" - i.e. only bother to check constraints on records where the valid_to is 9999-12-31 23:59:59. After all, the people who use historical tables do not seem to have constraint checks on those tables (for the same reason, you'd need triggers).

    Does anyone have any thoughts about this?

    PS - the title also mentions an auditable database. In the previous system I mentioned, there is always the edited_by field. This allowed all changes to be tracked so we could always see who changed a record. Not sure how much difference that might make. Thanks.
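
    A minimal sketch of the valid_from/valid_to layout described above, assuming PostgreSQL and an illustrative sensors table (all names are made up):

        CREATE TABLE sensors (
            sensor_id   integer      NOT NULL,
            name        varchar(100) NOT NULL,
            location_id integer      NOT NULL,
            valid_from  timestamp    NOT NULL,
            valid_to    timestamp    NOT NULL DEFAULT '9999-12-31 23:59:59',
            edited_by   integer      NOT NULL,
            PRIMARY KEY (sensor_id, valid_from)
        );

        -- the current version of every sensor
        SELECT * FROM sensors WHERE valid_to = '9999-12-31 23:59:59';

        -- the version of sensor 42 that was in force when a reading was taken
        SELECT *
        FROM sensors
        WHERE sensor_id = 42
          AND TIMESTAMP '2010-03-01 12:00:00' >= valid_from
          AND TIMESTAMP '2010-03-01 12:00:00' <  valid_to;

    The awkward part, as the question says, is referential integrity: a plain foreign key can only point at the whole history, not at "the version valid at time X", which is what pushes people towards triggers, period-style constraints, or separate history tables.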

    Read the article

  • How can we secure our data from the DBA?

    - by KoolKabin
    Hi guys, I have very confidential data in my database and I am trying to secure it from the DBA. I am a member of the development team. We develop our software and deploy it on a server which has its own DBA, and we have only limited control over that server. In this scenario, how can I prevent the DBA of the server from looking at my data or making changes to it? Is it possible?

    Read the article

  • MYSQL Select and group by date

    - by Gil
    I'm not sure how to create the right query to get the result I'm looking for. What I have is 2 tables: the first has ID and Name columns, and the second has a date and an adminID, which references the ID column of table 1. Now, what I want to get is basically the number of times each admin logged in per day during the month. From a structure like the one below I want to get per-day and per-month data, so the result would be something like 1, 2, 2 for March, total 5, for admin 4.

        ID | Date
        ---+------------
        4  | 2010/03/01
        4  | 2010/03/04
        4  | 2010/03/04
        4  | 2010/03/05
        4  | 2010/03/05
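
    For what it's worth, a minimal sketch of the per-day count, assuming the second table is called logins and has the adminID and Date columns shown in the sample:

        -- logins per admin per day
        SELECT adminID, DATE(`Date`) AS day, COUNT(*) AS logins
        FROM logins
        GROUP BY adminID, DATE(`Date`);

        -- restrict to one month and add per-admin totals
        SELECT adminID, DATE(`Date`) AS day, COUNT(*) AS logins
        FROM logins
        WHERE `Date` >= '2010-03-01' AND `Date` < '2010-04-01'
        GROUP BY adminID, DATE(`Date`) WITH ROLLUP;

    Joining the first table on adminID = ID adds the admin's name to the result.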

    Read the article

  • How can I adjust the CommandTimeout in DbFit for long-running queries?

    - by Ben Farmer
    Is there any way to increase the CommandTimeout for DbFit queries? I have a long running stored procedure that times out when running it in a DbFit Test. It's possible for the procedure to run for a really long time (processing millions of records) and would like to have DbFit wait until it's completed, even if it takes several minutes. We are using the latest version of FitSharp (downloaded it yesterday) and use the version of DbFit that is included with FitSharp.

    Read the article

  • Calculate Percentage help

    - by DAVID
    Hi, this query gives me employee salaries alongside their manager's salary:

        SELECT E.EMP_FNAME AS MANAGER, E.EMP_SALARY,
               D.DEPT_NO,
               A.EMP_FNAME AS EMPLOYEE, A.EMP_SALARY
        FROM EMPLOYEE E, EMPLOYEE A, DEPARTMENT D
        WHERE E.EMP_NIN = A.EMP_MANAGER
          AND A.EMP_MANAGER = D.EMP_MANAGER;

    How can I show only the employees whose salary is within 10% of their manager's salary?
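
    For what it's worth, one way - sketched under the assumption that "within 10%" means no more than 10% above or below the manager's salary - is to add a predicate comparing the two salaries:

        SELECT E.EMP_FNAME AS MANAGER, E.EMP_SALARY AS MANAGER_SALARY,
               D.DEPT_NO,
               A.EMP_FNAME AS EMPLOYEE, A.EMP_SALARY AS EMPLOYEE_SALARY
        FROM EMPLOYEE E, EMPLOYEE A, DEPARTMENT D
        WHERE E.EMP_NIN = A.EMP_MANAGER
          AND A.EMP_MANAGER = D.EMP_MANAGER
          AND ABS(E.EMP_SALARY - A.EMP_SALARY) <= E.EMP_SALARY * 0.10;

    If only salaries up to the manager's (but within 10% of it) should qualify, replace the last condition with AND A.EMP_SALARY BETWEEN E.EMP_SALARY * 0.90 AND E.EMP_SALARY.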

    Read the article

  • Django query: Count and Group BY

    - by Tyler Lane
    I have a query that I'm trying to figure out the "django" way of doing. I want to take the last 100 calls from Call, which is easy:

        calls = Call.objects.all().order_by('-call_time')[:100]

    However, for the next part I can't find the way to do it via Django's ORM. I want to get a list of the call_types and the number of calls each one has WITHIN that previous queryset I just did. Normally I would do a query like this:

        SELECT COUNT(id), calltype
        FROM call
        WHERE id IN (
            SELECT id FROM call ORDER BY call_time DESC LIMIT 100
        )
        GROUP BY calltype;

    I can't seem to find the django way of doing this particular query. Here are my 2 models:

        class Call(models.Model):
            call_time = models.DateTimeField("Call Time", auto_now=False, auto_now_add=False)
            description = models.CharField(max_length=150)
            response = models.CharField(max_length=50)
            event_num = models.CharField(max_length=20)
            report_num = models.CharField(max_length=20)
            address = models.CharField(max_length=150)
            zip_code = models.CharField(max_length=10)
            geom = models.PointField(srid=4326)
            calltype = models.ForeignKey(CallType)
            objects = models.GeoManager()

        class CallType(models.Model):
            name = models.CharField(max_length=50)
            description = models.CharField(max_length=150)
            active = models.BooleanField()
            time_init = models.DateTimeField("Date Added", auto_now=False, auto_now_add=True)
            objects = models.Manager()

    Read the article

  • Integer comparison as string

    - by J Pollack
    Hi, I have an integer column and I want to find numbers that start with specific digits. For example, if I look for '123' these match:

        1234567
        123456
        1234

    and these do not:

        23456
        112345
        0123445

    Is the only way to handle the task to convert the integers into strings before doing a string comparison? At the moment I am using PostgreSQL's regexp_replace(text, pattern, replacement) on the numbers, which is a very slow and inefficient way of doing it. The thing is that I have a large amount of data to handle this way, and I am looking for the most economical way of doing it.
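
    For what it's worth, two alternatives are sketched below, assuming a table t with an integer column num. The first is the straightforward cast; the second rewrites the prefix test as range checks (one range per possible digit count), which a plain b-tree index on num can use:

        -- simple: compare as text
        SELECT * FROM t WHERE CAST(num AS text) LIKE '123%';

        -- index-friendly: one range per number of digits, extended to cover the largest value in the column
        SELECT * FROM t
        WHERE num = 123
           OR num BETWEEN 1230    AND 1239
           OR num BETWEEN 12300   AND 12399
           OR num BETWEEN 123000  AND 123999
           OR num BETWEEN 1230000 AND 1239999;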

    Read the article

  • LINQ: Create persistable Associations in Code, Without Foreign Key

    - by Alex
    Hello, I know that I can create LINQ associations without a foreign key. The problem is that I've been doing this by adding the [Association] attribute in the DBML file (the same as doing it through the designer), which gets erased again whenever I refresh my database (and reload the entire table structure). I know that there is the MyData.cs file (as part of the DBML) in which I can place my partial extensions etc. to the domain objects (so that they persist after I refresh the DBML), but I don't know how to create an association there.

    Read the article

  • Why is index_merge not used here?

    - by user198729
    Setup:

        mysql> create table t(a integer unsigned, b integer unsigned);
        mysql> insert into t(a,b) values (1,2),(1,3),(2,4);
        mysql> create index i_t_a on t(a);
        mysql> create index i_t_b on t(b);
        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL |    3 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Is there something I'm missing?
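
    For what it's worth, with only three rows in the table the optimizer will normally judge a full scan cheaper than an index_merge union, so the plan above is expected; try it again after loading a realistic amount of data (and running ANALYZE TABLE t). A sketch of an equivalent query that uses the two single-column indexes explicitly by splitting the OR into a UNION:

        SELECT * FROM t WHERE a = 1
        UNION
        SELECT * FROM t WHERE b = 4;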

    Read the article

  • MsSQL 2005 query performance

    - by Max
    I have the following query:

        select .............
        from //one table and about 20 left joins//
        where
        (
            ( this_.driverName like 'blah*' or this_.renterName like 'blah*' )
            or exists
            (
                select this0__.id as y0_
                from ThirdParty this0__
                where this0__.name like 'blah*' and this0__.claim_id = this_.id
            )
        )
        order by this_.id asc

    And I have two environments: one with 175 000 records in table "this_" and a second with 25 000 records in table "this_". The query runs in about 2 seconds on the 175k database, but on the 25k database it freezes. If I drop either of the following items from the WHERE clause:

        ( this_.driverName like 'blah*' or this_.renterName like 'blah*' )

    or

        exists
        (
            select this0__.id as y0_
            from ThirdParty this0__
            where this0__.name like 'blah*' and this0__.claim_id = this_.id
        )

    the query runs normally. How can I increase the performance of this query?
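
    For what it's worth, a common rewrite when an OR combines a plain predicate with an EXISTS is to run the two halves separately and UNION them, which lets the optimizer pick a suitable plan for each half. A rough sketch that keeps the placeholders from the question (the outer table name is not given, so the elided parts stay elided):

        select .............
        from //one table and about 20 left joins//
        where ( this_.driverName like 'blah*' or this_.renterName like 'blah*' )

        union

        select .............
        from //one table and about 20 left joins//
        where exists
        (
            select this0__.id as y0_
            from ThirdParty this0__
            where this0__.name like 'blah*' and this0__.claim_id = this_.id
        )
        order by id asc

    Comparing the execution plans of the two environments (and updating statistics on the 25k database) would also show why the smaller database behaves worse.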

    Read the article

  • Non-Linear Database Retrieval

    - by pws5068
    I'm building an article system, and each article will have one or more Tags associated with it (similar to the Tags on this site). The tables are set up something like this:

        Article_Table
        Article_ID | Title | Author_ID | Content | Date_Posted | IP ...

        Tag_Table
        Tag_ID | Name ...

        Tag_Intersect_Table
        Tag_ID | Article_ID

    Is it possible to query an article and all of its associated tags in one database call? If so, how is this done?
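
    For what it's worth, a minimal sketch of one way to do it, joining through the intersect table; one row comes back per article/tag pair, and the article id 42 is illustrative:

        SELECT a.Article_ID, a.Title, t.Name AS Tag
        FROM Article_Table a
        LEFT JOIN Tag_Intersect_Table ti ON ti.Article_ID = a.Article_ID
        LEFT JOIN Tag_Table t ON t.Tag_ID = ti.Tag_ID
        WHERE a.Article_ID = 42;

    On MySQL the tags can also be collapsed into a single column per article with GROUP_CONCAT(t.Name), at the cost of splitting the string back apart in application code.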

    Read the article
