Search Results

Search found 10719 results on 429 pages for 'temp tables'.

Page 248/429 | < Previous Page | 244 245 246 247 248 249 250 251 252 253 254 255  | Next Page >

  • Create a trigger on an Oracle database that updates a field in one table when a field in another table is updated

    - by GigaPr
    Hi, I have two tables, Order(id, date, note) and Delivery(id, note, date). I want to create a trigger that updates the date in Delivery whenever the date is updated in Order. I was thinking of something like CREATE OR REPLACE TRIGGER your_trigger_name BEFORE UPDATE ON Order DECLARE BEGIN UPDATE Delivery set date = ??? where id = ??? END; How do I get the date and the row id? Thanks.
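
    For reference, a minimal sketch of the usual approach is a row-level trigger that reads the :NEW pseudo-record. The table and column names below are illustrative only (ORDER is a reserved word in Oracle, so the table is assumed to be called ORDERS):

        CREATE OR REPLACE TRIGGER trg_sync_delivery_date
        AFTER UPDATE OF order_date ON orders
        FOR EACH ROW
        BEGIN
          -- :NEW holds the updated ORDERS row, so it supplies both the new date and the id
          UPDATE delivery
             SET delivery_date = :NEW.order_date
           WHERE id = :NEW.id;
        END;
        /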

    Read the article

  • SQL design question regarding schema, and whether a name/value pair is the best solution

    - by Aur
    I am having a small problem trying to decide on a database schema for a current project. I am by no means a DBA. The application parses through a file based on user input and enters that data into the database. The number of fields that can be parsed is between 1 and 42 at the moment. The current design of the database is entirely flat, with 42 columns; some are repeated columns such as address1, address2, address3, etc., which suggests I should normalize the data. However, data integrity is not needed at this moment, and the way the data is shaped I am looking at several joins. Not a bad thing, but the data is still in a 1-to-1 relationship and I still see a lot of empty fields per row. So my concern is that this does not allow the database or the application to be very extendable: if they want to add more fields to be parsed (which they do), I would need to create another table and add another foreign key to the linking table. The third option is to have a table where the fields are defined and a table for each record, and then a table that stores the values and links to those two tables. The problem is that I can picture the size of that table growing large depending on the input size: a file with 300,000 records would mean 300,000 x 40 = 12 million rows, so I have some reservations, although if I get to that point I should probably be happy it is being used. This option also allows more customized display of information, at the cost of a bit more work, but with little rework even if more fields are added. So the problem boils down to: 1. The current design is a flat table, which makes extending it hard, and it is not normalized. 2. Normalize the tables, although there is no real benefit at the moment, but requirements change. 3. Normalize it down into name/value pairs and hope the size doesn't hurt. There is a large number of inserts, updates, and selects against that table, so performance is a worry, but I believe the saying is design now, performance-test later? I'm probably just missing something practical, so any comments would be appreciated, even if it's a quick sanity check. Thank you for your time.
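
    For reference, option 3 (the name/value-pair, or entity-attribute-value, layout) is usually sketched roughly as below. This is only an illustration in T-SQL flavour; all table and column names are assumptions, not the poster's actual schema:

        CREATE TABLE field_definition (
            field_id   INT IDENTITY PRIMARY KEY,
            field_name VARCHAR(100) NOT NULL        -- e.g. 'address1', 'address2'
        );

        CREATE TABLE record (
            record_id  INT IDENTITY PRIMARY KEY,
            created_at DATETIME NOT NULL DEFAULT GETDATE()
        );

        CREATE TABLE record_value (
            record_id INT NOT NULL REFERENCES record(record_id),
            field_id  INT NOT NULL REFERENCES field_definition(field_id),
            value     VARCHAR(500) NULL,            -- one row per parsed field per record
            PRIMARY KEY (record_id, field_id)
        );

    Adding a new parseable field then only requires a new field_definition row, at the cost of the wide record_value table the poster is worried about.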

    Read the article

  • SQL Complex Select - Trouble forming query

    - by JoshSpacher
    I have three tables: Customers, Sales, and Products. Sales links a CustomerID with a ProductID and has a SalePrice. select Products.Category, AVG(SalePrice) from Sales inner join Products on Products.ProductID = Sales.ProductID group by Products.Category This lets me see the average price for all sales by category. However, I only want to include customers that have three or more sales records in the DB. I am not sure of the best way, or any way, to go about this. Ideas?
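
    One way to sketch it is to restrict the aggregation to customers who have at least three rows in Sales, using a grouped subquery (column names follow the question; the threshold is the poster's):

        SELECT p.Category, AVG(s.SalePrice) AS AvgSalePrice
        FROM Sales s
        INNER JOIN Products p ON p.ProductID = s.ProductID
        WHERE s.CustomerID IN (
            -- customers with three or more sales records
            SELECT CustomerID
            FROM Sales
            GROUP BY CustomerID
            HAVING COUNT(*) >= 3
        )
        GROUP BY p.Category;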

    Read the article

  • Get data using a junction table with LINQ to SQL (confused)

    - by raklos
    I'm using LINQ to SQL. I have three tables, e.g. Project, People, and a ProjectsPeople junction table (with foreign keys ProjectID and PeopleID). Given a set peopleIDArray (an array of ints holding people IDs), how can I get only the Projects that have at least one of the people IDs associated with them? I.e., there will be at least one (maybe more) record in the ProjectsPeople table with that ProjectID and an ID from peopleIDArray. Thanks.
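
    A minimal LINQ to SQL sketch of one way to express this (db is an assumed DataContext, and the ProjectsPeoples association name is a guess at how the designer pluralizes the junction table; Contains translates to a SQL IN clause):

        int[] peopleIDArray = { 1, 5, 9 };

        // Projects that have at least one junction row whose PeopleID is in the array
        var projects = db.Projects
            .Where(p => p.ProjectsPeoples
                .Any(pp => peopleIDArray.Contains(pp.PeopleID)))
            .ToList();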

    Read the article

  • MSSQL SQL Building Software

    - by TheGambler
    What are the options in terms of applications that help build SQL statements against an MSSQL database? We have some users who need to build SQL statements, preferably by dragging and dropping or linking up tables, etc., against an MSSQL database, but who don't have any experience in this area. Any ideas?

    Read the article

  • How to get a stream of an "in-memory" database created via H2DB?

    - by Reynevan
    I have to create the following mechanism: create an in-memory (H2DB) database; create tables and fill them with some data; get a stream of that database; send that stream via WebDAV or something else. I know how to do everything except "how to get a stream of an 'in-memory' database created via H2DB". Some explanations: I can't create a file because of server restrictions, and I need that stream in order to create a file.
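
    One possible sketch: H2's SCRIPT statement returns the database's DDL and data as a result set, so it can be collected into an in-memory stream without ever touching the file system (the JDBC URL and table are placeholders):

        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.PrintWriter;
        import java.sql.*;

        public class H2ToStream {
            public static void main(String[] args) throws Exception {
                Class.forName("org.h2.Driver");
                try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");
                     Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE item(id INT PRIMARY KEY, name VARCHAR(50))");
                    st.execute("INSERT INTO item VALUES (1, 'example')");

                    // SCRIPT returns the SQL script as rows; write each line into a byte buffer
                    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
                    try (PrintWriter out = new PrintWriter(buffer);
                         ResultSet rs = st.executeQuery("SCRIPT")) {
                        while (rs.next()) {
                            out.println(rs.getString(1));
                        }
                    }

                    // the buffer can now be sent via WebDAV, e.g. wrapped as an InputStream
                    ByteArrayInputStream toSend = new ByteArrayInputStream(buffer.toByteArray());
                    System.out.println("script size: " + toSend.available() + " bytes");
                }
            }
        }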

    Read the article

  • Java: Looking for hack to deal with Windows file paths in Linux

    - by Chase Seibert
    Say you have a large legacy ColdFusion-on-Java application running on Windows. File access is done both via java.io.File and by CFFILE (which in turn also uses java.io.File), but is not centralised in any way into a single file access library. Further, say you have file paths both hard-coded in the code and stored in a database; in other words, assume the file paths themselves cannot change. They could be either local or remote Windows file paths: c:\temp\file.txt or \\server\share\file.txt. Is there a way to run this application on Linux with minimal code changes? I'm looking for creative solutions that do not involve touching the legacy code. Some ideas: Run it on WINE. This actually works, because WINE will translate the local paths and has a Samba client for the remote paths. Or is there a way to override java.io.File to perform the file path translation with custom code? In that case, I would translate the remote paths to a mount point.
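
    As a rough illustration of the second idea, the translation itself (however it ends up being hooked in, whether through a patched java.io.File, AspectJ, or a wrapper) could look something like the sketch below; the /mnt mount layout is an assumption:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public final class WindowsPathTranslator {

            private static final Pattern DRIVE = Pattern.compile("^([A-Za-z]):\\\\");
            private static final Pattern UNC = Pattern.compile("^\\\\\\\\([^\\\\]+)\\\\([^\\\\]+)\\\\");

            /** Translate c:\temp\file.txt or \\server\share\file.txt to a Linux path. */
            public static String translate(String windowsPath) {
                Matcher drive = DRIVE.matcher(windowsPath);
                if (drive.find()) {
                    // c:\temp\file.txt -> /mnt/c/temp/file.txt (assumed mount layout)
                    return "/mnt/" + drive.group(1).toLowerCase()
                            + "/" + windowsPath.substring(3).replace('\\', '/');
                }
                Matcher unc = UNC.matcher(windowsPath);
                if (unc.find()) {
                    // \\server\share\file.txt -> /mnt/server/share/file.txt (assumed Samba mounts)
                    return "/mnt/" + unc.group(1) + "/" + unc.group(2)
                            + "/" + windowsPath.substring(unc.end()).replace('\\', '/');
                }
                return windowsPath; // already a Unix-style path
            }

            public static void main(String[] args) {
                System.out.println(translate("c:\\temp\\file.txt"));
                System.out.println(translate("\\\\server\\share\\file.txt"));
            }
        }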

    Read the article

  • Questions about SqlBulkCopy

    - by chobo2
    Hi, I am wondering how I can do a mass insert and a bulk copy at the same time. I have two tables that should both be affected by the bulk copy, as they depend on each other. I want it so that if a record fails while inserting into table 1, the insert is rolled back and table 2 is never updated; likewise, if table 1 inserts fine but an update on table 2 fails, table 1 is rolled back. Can this be done with bulk copy?
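
    A hedged sketch of one approach: run both bulk copies inside a single external SqlTransaction, so either both commit or both roll back. The connection string, destination table names, and DataTable sources are placeholders:

        using System.Data;
        using System.Data.SqlClient;

        public static class BulkLoader
        {
            // dataForTable1/dataForTable2 are assumed pre-populated DataTables whose
            // columns match dbo.Table1 / dbo.Table2 (placeholder names)
            public static void BulkCopyBothOrNothing(string connectionString,
                                                     DataTable dataForTable1,
                                                     DataTable dataForTable2)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (SqlTransaction tx = conn.BeginTransaction())
                    {
                        try
                        {
                            using (var bulk1 = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, tx))
                            {
                                bulk1.DestinationTableName = "dbo.Table1";
                                bulk1.WriteToServer(dataForTable1);
                            }
                            using (var bulk2 = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, tx))
                            {
                                bulk2.DestinationTableName = "dbo.Table2";
                                bulk2.WriteToServer(dataForTable2);
                            }
                            tx.Commit();   // both copies succeeded
                        }
                        catch
                        {
                            tx.Rollback(); // any failure undoes both tables
                            throw;
                        }
                    }
                }
            }
        }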

    Read the article

  • iOS Downloading Videos and saving in Application Support folder

    - by Satyam svv
    In my application, I have to download around 10 videos and play them accordingly. Each video is around 50 MB. I'm using the following code, and after downloading a video I save it to the Application Support folder to avoid iCloud sync. The problem is that the app crashes while the videos are downloading. [NSURLConnection sendAsynchronousRequest:req queue:[[NSOperationQueue alloc] init] completionHandler:^(NSURLResponse *response, NSData *rcvdDat, NSError *err) { . . . } What I think is happening is that while a video is downloading it resides in memory, so the total memory occupied by the app keeps increasing until iOS closes the app. I would like to download the video so that whenever a chunk of data is received it is written to a temp file, and when the download completes the file is moved to the Application Support folder. Can someone help me with how to write the data to a file as it arrives and save it at the end? I cannot use 3rd-party libraries (unless they are small) due to legal issues.

    Read the article

  • Regex whitespace and special characters

    - by Sam R.
    I have this regular expression: [^\\s\"']+|\"([^\"]*)\"|'([^']*)' which works for splitting a string by whitespace, while anything within quotation marks is not split. However, I notice that if I put in a string that starts with "" no matches are found. How would I correct this? For example, if I enter " test 2" I want it to match as [, test, 2]. Note: I'm using Java to compile the regex; here is some code: Pattern pattern = Pattern.compile("[^\\s\"']+|\"([^\"]*)\"|'([^']*)'"); Matcher matcher = pattern.matcher(SomeString); while (matcher.find()) { String temp = matcher.group(); //... Do something ... } Thanks.

    Read the article

  • How to synchronize a merge subscription in a SQL Compact DB (on a mobile device emulator)

    - by Bero
    Using SQL Server Management Studio 2008 I created a SQL Server Compact 3.5 database (TestCompact.sdf) and created a subscription to an existing publication. Through Management Studio the synchronization works. I have transferred TestCompact.sdf to a Windows Mobile 5 emulator device, and with Query Analyzer for Mobile I can query the existing tables in TestCompact.sdf. However, I don't know how to start the replication synchronization on that mobile device. Do I need to write some C# code, or is there a simpler way to do it?
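
    For reference, starting a merge synchronization from device code usually goes through the SqlCeReplication class in System.Data.SqlServerCe; a hedged C# sketch with placeholder server, publication, and path values:

        using System.Data.SqlServerCe;

        var repl = new SqlCeReplication
        {
            InternetUrl = "https://myserver/sqlce/sqlcesa35.dll",   // placeholder agent URL
            Publisher = "MYSERVER",
            PublisherDatabase = "MyPublicationDb",
            PublisherSecurityMode = SecurityType.NTAuthentication,
            Publication = "MyPublication",
            Subscriber = "MobileDevice01",
            SubscriberConnectionString = @"Data Source=\Program Files\MyApp\TestCompact.sdf"
        };

        try
        {
            // AddSubscription is only needed the first time; afterwards Synchronize() is enough
            // repl.AddSubscription(AddOption.ExistingDatabase);
            repl.Synchronize();
        }
        finally
        {
            repl.Dispose();
        }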

    Read the article

  • Localized text in Java

    - by Eager Learner
    My requirement is to display localized text messages in a J2EE web application. I know J2EE provides very good support for this. My question is: what is the usual practice for storing the localized messages used by the application? If I want to display Japanese or Chinese messages, which do not use an English-like character set, how do we get that text into the properties files or database tables?
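
    A minimal sketch of the standard properties-bundle route (the bundle and key names are made up): non-Latin text has traditionally been stored in messages_xx.properties as \uXXXX escapes (for example via the native2ascii tool, since properties files are read as ISO-8859-1) and looked up per locale:

        import java.util.Locale;
        import java.util.ResourceBundle;

        public class Greeter {
            public static void main(String[] args) {
                // Expects messages.properties, messages_ja.properties, etc. on the classpath,
                // e.g. messages_ja.properties containing: greeting=\u3053\u3093\u306b\u3061\u306f
                ResourceBundle bundle = ResourceBundle.getBundle("messages", Locale.JAPANESE);
                System.out.println(bundle.getString("greeting"));
            }
        }

    Storing the same key/locale/text triples in a database table and loading them through a custom ResourceBundle works on the same principle.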

    Read the article

  • Distinct select on Oracle

    - by funktku
    What I am trying to do is a simple recommender: it must take the NODE2 element of each of the 40 elements with the biggest weight. The weight is calculated as (E.WEIGHT * K.GRADE). The query below successfully returns the top 40 elements; however, I don't want E.NODE2 to contain duplicates. PostgreSQL allowed me to write SELECT DISTINCT ON (NODE2) E.NODE2, (E.WEIGHT * K.GRADE). How can I do the same in Oracle? The complete SQL query: SELECT * FROM (SELECT DISTINCT E.NODE2 , (E.WEIGHT * K.GRADE) FROM KUAISFAST K, EDGES E WHERE K.ID = 1 AND K.COURSE_ID = E.NODE1 AND E.NODE2 NOT IN( SELECT K2.COURSE_ID FROM KUAISFAST K2 WHERE K2.ID = 1 ) ORDER BY( E.WEIGHT * K.GRADE ) DESC) TEMP WHERE rownum <= 40
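
    One common Oracle substitute for DISTINCT ON is ROW_NUMBER() OVER (PARTITION BY ...), keeping only the best-weighted row per NODE2 before taking the top 40; a sketch against the tables from the question:

        SELECT *
        FROM (
            SELECT node2, weighted
            FROM (
                SELECT e.node2,
                       (e.weight * k.grade) AS weighted,
                       ROW_NUMBER() OVER (PARTITION BY e.node2
                                          ORDER BY (e.weight * k.grade) DESC) AS rn
                FROM kuaisfast k
                JOIN edges e ON k.course_id = e.node1
                WHERE k.id = 1
                  AND e.node2 NOT IN (SELECT k2.course_id FROM kuaisfast k2 WHERE k2.id = 1)
            )
            WHERE rn = 1              -- keep only the highest-weighted row per node2
            ORDER BY weighted DESC
        )
        WHERE ROWNUM <= 40;           -- then take the top 40 distinct node2 values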

    Read the article

  • domain name vs ip address, same server, but different speed

    - by bn
    I have two similar sites: the two have almost exactly the same code and run on the same server; both sites are the same, they just use a different language; the database of the slower site is populated (maybe only the user table), while the other tables for site content are the same; the faster one uses root to access the database. One of the sites is not released yet, so it is accessed by IP address instead of by domain name. The site accessed by IP address is faster (a lot faster), and the site accessed by domain name is slower. Do you know why this is happening, or what the reason could be?

    Read the article

  • 2k rows update is very slow in MySQL

    - by sergeik
    Hi all, I have two tables: news (450k rows) and news_tags (3m rows). There are some triggers on news that fire on update and refresh listings. This SQL takes too long to execute: UPDATE news SET news_category = some_number WHERE news_id IN (SELECT news_id FROM news_tags WHERE tag_id = some_number); # about 3k rows. How can I make it faster? Thanks in advance, S.
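
    Older MySQL versions tend to execute IN (SELECT ...) as a dependent subquery, so one common rewrite is the multi-table UPDATE with a join (some_number stays a placeholder from the question; an index on news_tags(tag_id, news_id) is assumed to help):

        UPDATE news n
        JOIN news_tags nt ON nt.news_id = n.news_id
        SET n.news_category = some_number
        WHERE nt.tag_id = some_number;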

    Read the article

  • many-to-many mapping in NHibernate

    - by Chris Stewart
    I'm looking to create a many-to-many relationship using NHibernate, and I'm not sure how to map it in the XML files. I have not created the classes yet, but they will just be basic POCOs. The tables are Person(personId, name), Competency(competencyId, title), and Person_x_Competency(personId, competencyId). Would I essentially create a List in each POCO for the other class, and then map those somehow using the NHibernate configuration files?
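
    A sketch of the Person side of such a mapping (namespace/assembly attributes are omitted and the property names are assumptions; Competency would carry a mirrored bag, typically with inverse="true", if the relationship is bidirectional):

        <!-- Person.hbm.xml -->
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
          <class name="Person" table="Person">
            <id name="PersonId" column="personId">
              <generator class="native" />
            </id>
            <property name="Name" column="name" />

            <!-- maps an IList<Competency> Competencies property on the Person POCO -->
            <bag name="Competencies" table="Person_x_Competency">
              <key column="personId" />
              <many-to-many class="Competency" column="competencyId" />
            </bag>
          </class>
        </hibernate-mapping>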

    Read the article

  • 100k+ Records and sp_xml_preparedocument

    - by Jonn
    I've been encountering a seeming deadlock on one of my tables, and the only place I can trace it back to is a stored procedure that uses sp_xml_preparedocument on a list of data. The data inserted, by the way, consists of 100k+ records on average. Is it possible that this is causing the deadlock? What other pitfalls does using sp_xml_preparedocument have?

    Read the article

  • How to build a database from an XSD schema and import XML data

    - by FreshCode
    I have a complex XSD schema and hundreds of XML files conforming to the schema. How do I automate the creation of the related SQL Server tables to store the XML data? I've considered creating C# classes from the XSD schema using the xsd.exe tool and letting something like SubSonic figure out how to make a shiny database out of it, but I'm not sure that's the best way to approach it. Has anyone managed to elegantly import XSD files into SQL Server?

    Read the article

  • Oracle: what information can I derive from the SCN?

    - by Mark Harrison
    Given an SCN (system change number), and assuming an SCN for which the data is still in the undo logs, what information about the SCN can I derive? Of course, SCN_TO_TIMESTAMP() gives an approximate time the data was committed. Is there any other information I can derive? What transaction, what tables, and what data were affected, etc.?
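
    If the flashback features are available and the undo has not aged out, two queries that can pull more detail out of a given SCN are sketched below (the table name, SCN value, and XID are placeholders, and both require the appropriate flashback privileges):

        -- Which rows and transactions changed my_table at a given SCN
        SELECT versions_xid, versions_startscn, versions_operation, t.*
        FROM   my_table VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE t
        WHERE  versions_startscn = 1234567;

        -- Tables touched and undo SQL for a given transaction id
        SELECT operation, table_name, undo_sql
        FROM   flashback_transaction_query
        WHERE  xid = HEXTORAW('05001C00A6210000');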

    Read the article

  • How to implement table-per-concrete-type using entity framework

    - by SDReyes
    Hello guys! I'm mapping a set of tables that share a common set of fields, so as you can see I'm using a table-per-concrete-type strategy to map the inheritance. But I have not been able to relate them to an abstract type containing these common properties. Is it possible to do this using EF? BONUS: the only undocumented Entity Data Model mapping scenario is table-per-concrete-type inheritance http://msdn.microsoft.com/en-us/library/cc716779.aspx :P

    Read the article

< Previous Page | 244 245 246 247 248 249 250 251 252 253 254 255  | Next Page >