Search Results

Search found 14354 results on 575 pages for 'existing records'.

  • How to handle dates that repeat indefinitely

    - by Addsy
    I am implementing a fairly simple calendar on a website using PHP and MySQL. I want to be able to handle dates that repeat indefinitely and am not sure of the best way to do it. For a time-limited repeating event it seems to make sense to just add each event within the timeframe into my db table and group them with some form of recursion id. But when there is no limit to how often the event repeats, is it better to:

    a) put records in the db for a specific time frame (e.g. the next 2 years) and then periodically check and add new records as time goes by? The problem with this is that if someone is looking 3 years ahead, the event won't show up.

    b) not actually have records for each event, but instead, when I check in my PHP code for events within a specified time period, calculate whether a repeated event will occur within it? The problem with this is that there isn't a specific record for each event, which I can see being a pain when I then want to associate other info (attendance etc.) with that event. It also seems like it might be a bit slow.

    Has anyone tried either of these methods? If so, how did it work out? Or is there some other ingenious crafty method I'm missing?
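
    One common hybrid is to store the recurrence rule once and materialize occurrence rows only for windows someone has actually viewed, so every occurrence that matters ends up with its own id for attendance and the like. A sketch only - table and column names here are hypothetical, not from the question:

        -- The rule is the source of truth; occurrences are created on demand.
        CREATE TABLE event_rule (
            rule_id     INT AUTO_INCREMENT PRIMARY KEY,
            title       VARCHAR(100) NOT NULL,
            start_date  DATE NOT NULL,
            repeat_days INT NOT NULL               -- e.g. 7 = weekly
        );

        CREATE TABLE event_occurrence (
            occurrence_id INT AUTO_INCREMENT PRIMARY KEY,
            rule_id       INT NOT NULL,
            occurs_on     DATE NOT NULL,
            UNIQUE (rule_id, occurs_on),           -- safe to re-materialize
            FOREIGN KEY (rule_id) REFERENCES event_rule (rule_id)
        );

    When a visitor opens a window (even 3 years out), the PHP side computes that window's occurrences from event_rule, inserts any that are missing (the UNIQUE key makes this idempotent), and then queries event_occurrence as usual.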

    Read the article

  • How do I left join tables in unidirectional many-to-one in Hibernate?

    - by jbarz
    I'm piggy-backing off of http://stackoverflow.com/questions/2368195/how-to-join-tables-in-unidirectional-many-to-one-condition. If you have two classes:

        class A {
            @Id public Long id;
        }

        class B {
            @Id public Long id;

            @ManyToOne
            @JoinColumn(name = "parent_id", referencedColumnName = "id")
            public A parent;
        }

    B - A is a many-to-one relationship. I understand that I could add a collection of Bs to A; however, I do not want that association. So my actual question is: is there an HQL or Criteria way of creating the SQL query

        select * from A left join B on (b.parent_id = a.id)

    This will retrieve all A records with a Cartesian product of each B record that references A, and will include A records that have no B referencing them. If you use

        from A a, B b where b.a = a

    then it is an inner join, and you do not receive the A records that do not have a B referencing them. I have not found a good way of doing this without two queries, so anything less than that would be great. Thanks.

    Read the article

  • What's the best way to select max over multiple fields in SQL?

    - by allyourcode
    What I kind of want to do is select max(f1, f2, f3). I know this doesn't work, but I think what I want should be pretty clear (see update 1). I was thinking of doing select max(concat(f1, '--', f2 ...)), but this has various disadvantages. In particular, doing concat will probably slow things down. What's the best way to get what I want?

    Update 1: The answers I've gotten so far aren't what I'm after. max works over a set of records, but it compares them using only one value; I want max to consider several values, just like the way order by can consider several values.

    Update 2: Suppose I have the following table:

        id  class_name  order_by1  order_by2
        1   a           0          0
        2   a           0          1
        3   b           1          0
        4   b           0          9

    I want a query that will group the records by class_name. Then, within each "class", select the record that would come first if you ordered by order_by1 ascending then order_by2 ascending. The result set would consist of records 2 and 3. In my magical query language, it would look something like this:

        select max(* order by order_by1 ASC, order_by2 ASC)
        from table
        group by class_name
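
    For reference, a sketch of two standard ways to express "first row per group under a multi-column ordering". The table name my_table is made up, and DESC is used so the sample's expected records 2 and 3 come back (flip to ASC if ascending-first is really what's wanted):

        -- Window-function version (engines with ROW_NUMBER support):
        SELECT id, class_name, order_by1, order_by2
        FROM (
            SELECT t.*,
                   ROW_NUMBER() OVER (PARTITION BY class_name
                                      ORDER BY order_by1 DESC, order_by2 DESC) AS rn
            FROM my_table t
        ) ranked
        WHERE rn = 1;

        -- Portable fallback: keep a row only if no row in its class sorts after it.
        SELECT t.*
        FROM my_table t
        WHERE NOT EXISTS (
            SELECT 1 FROM my_table s
            WHERE s.class_name = t.class_name
              AND (s.order_by1 > t.order_by1
                   OR (s.order_by1 = t.order_by1 AND s.order_by2 > t.order_by2))
        );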

    Read the article

  • XSLT line counter - is it that hard?

    - by Mr AH
    I have cheated every time I've needed to do a line count in XSLT by using JScript, but in this case I can't do that. I simply want to write out a line counter throughout an output file. This basic example has a simple solution:

        <xsl:for-each select="Records/Record">
            <xsl:value-of select="position()"/>
        </xsl:for-each>

    Output would be: 1 2 3 4 etc... But what if the structure is more complex, with nested for-eaches:

        <xsl:for-each select="Records/Record">
            <xsl:value-of select="position()"/>
            <xsl:for-each select="Records/Record">
                <xsl:value-of select="position()"/>
            </xsl:for-each>
        </xsl:for-each>

    Here, the inner for-each would just reset the counter (so you get 1, 1, 2, 3, 2, 1, 2, 3, 1, 2 etc.). Does anyone know how I can output the position in the file (i.e. a line count)?

    Read the article

  • TSQL - make a literal float value

    - by David B
    I understand the host of issues in comparing floats, and lament their use in this case - but I'm not the table author and have only a small hurdle to climb... Someone has decided to use floats as you'd expect GUIDs to be used. I need to retrieve all the records with a specific float value.

        sp_help MyTable
        -- Column_name     Type   Computed  Length  Prec
        -- RandomGrouping  float  no        8       53

    Here's my naive attempt:

        --yields no results
        SELECT RandomGrouping FROM MyTable
        WHERE RandomGrouping = 0.867153569942739

    And here's an approximately working attempt:

        --yields 2 records
        SELECT RandomGrouping FROM MyTable
        WHERE RandomGrouping BETWEEN 0.867153569942739 - 0.00000001
                                 AND 0.867153569942739 + 0.00000001
        -- 0.867153569942739
        -- 0.867153569942739

    In my naive attempt, is that literal a floating point literal? Or is it really a decimal literal that gets converted later? If my literal is not a floating point literal, what is the syntax for making a floating point literal?

    EDIT: Another possibility has occurred to me... it may be that a more precise number than is displayed is stored in this column. It may be impossible to create a literal that represents this number. I will accept answers that demonstrate that this is the case.

    EDIT: In response to DVK: TSQL is MS SQL Server's dialect of SQL. This script works, so equality comparison between float types is deterministic:

        DECLARE @X float
        SELECT top 1 @X = RandomGrouping FROM MyTable
        WHERE RandomGrouping BETWEEN 0.839110948199148 - 0.000000000001
                                 AND 0.839110948199148 + 0.000000000001

        --yields two records
        SELECT * FROM MyTable WHERE RandomGrouping = @X

    I said "approximately" because that method tests for a range. With that method I could get values that are not equal to my intended value. The linked article doesn't apply because I'm not (intentionally) trying to straddle the boundaries between decimal and float. I'm trying to work with only floats. This isn't about the non-convertibility of decimals to floats.
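
    For what it's worth, a sketch of the float-literal syntax in T-SQL (the key fact: a plain decimal-point constant is typed numeric/decimal, while one written with an exponent is typed float). Whether it matches still depends on the stored value being exactly what the display shows, which is what the second EDIT anticipates:

        -- Literal typing can be inspected directly:
        SELECT SQL_VARIANT_PROPERTY(0.867153569942739,   'BaseType');  -- numeric
        SELECT SQL_VARIANT_PROPERTY(0.867153569942739E0, 'BaseType');  -- float

        -- So the float-literal form of the naive query is:
        SELECT RandomGrouping FROM MyTable
        WHERE RandomGrouping = 0.867153569942739E0

        -- Equivalent and more explicit:
        SELECT RandomGrouping FROM MyTable
        WHERE RandomGrouping = CAST(0.867153569942739 AS float)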

    Read the article

  • use awk to identify multi-line records and filter them

    - by nanshi
    I need to process a big data file that contains multi-line records. Example input:

        1 Name Dan
        1 Title Professor
        1 Address aaa street
        1 City xxx city
        1 State yyy
        1 Phone 123-456-7890
        2 Name Luke
        2 Title Professor
        2 Address bbb street
        2 City xxx city
        3 Name Tom
        3 Title Associate Professor
        3 Like Golf
        4 Name
        4 Title Trainer
        4 Likes Running

    Note that the first integer field is unique and really identifies a whole record. So in the above input I really have 4 records, although I don't know how many lines of attributes each record may have. I need to:

    - identify valid records (must have "Name" and "Title" fields)
    - output the available attributes for each valid record, say "Name", "Title" and "Address" are the needed fields.

    Example output:

        1 Name Dan
        1 Title Professor
        1 Address aaa street
        2 Name Luke
        2 Title Professor
        2 Address bbb street
        3 Name Tom
        3 Title Associate Professor

    So in the output file, record 4 is removed since it doesn't have the "Name" field. Record 3 doesn't have an Address field but is still printed to the output, since it is a valid record that has "Name" and "Title". Can I do this with awk? But how do I identify a whole record using the first "id" field on each line? Thanks a lot to the unix shell script experts for helping me out! :)

    Read the article

  • Copying a subset of data to an empty database with the same schema

    - by user193655
    I would like to export part of a database full of data to an empty database. Both databases have the same schema, and I want to maintain referential integrity. Simplified, my case is like this:

    MainTable has the following fields:

    1) MainID integer PK
    2) Description varchar(50)
    3) ForeignKey integer FK to MainID of SecondaryTable

    SecondaryTable has the following fields:

    4) MainID integer PK (referenced by (3))
    5) AnotherDescription varchar(50)

    The goal I'm trying to accomplish is "export all records from MainTable using a WHERE condition", for example all records where MainID < 100. To do it manually I should first export all data from SecondaryTable contained in this select:

        select * from SecondaryTable ST
        outer join PrimaryTable PT on ST.MainID = PT.MainID

    then export the needed records from MainTable:

        select * from MainTable where MainID < 100

    This is manual, OK. Of course my real case is much, much more complex: I have 200+ tables and many cascading FKs, so doing it manually is painful/impossible. Is there a way to force the copy of the main table only while "enforcing referential integrity", so that my query is something like:

        select * from MainTable where MainID < 100 WITH "COPYING ALL FK sources"

    In this case field (5) would also be copied. Is there a syntax or a tool to do this? Table per table, I'd like to insert conditions (MainID < 100 applies only to MainTable; I have other tables with their own conditions).
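
    For the simplified two-table case, a manual dependency-ordered copy would look like the sketch below (written in SQL Server's cross-database INSERT ... SELECT form; the target name EmptyDb is hypothetical). The point generalizes: referenced rows go first, so every FK target exists before the filtered main rows arrive:

        -- 1. Copy the referenced (parent) rows needed by the filtered subset.
        INSERT INTO EmptyDb.dbo.SecondaryTable (MainID, AnotherDescription)
        SELECT s.MainID, s.AnotherDescription
        FROM SecondaryTable s
        WHERE EXISTS (SELECT 1 FROM MainTable m
                      WHERE m.ForeignKey = s.MainID AND m.MainID < 100);

        -- 2. Copy the filtered main rows; every FK target now exists.
        INSERT INTO EmptyDb.dbo.MainTable (MainID, Description, ForeignKey)
        SELECT MainID, Description, ForeignKey
        FROM MainTable
        WHERE MainID < 100;

    With 200+ tables the same ordering has to come from the FK graph (a topological order over the foreign keys), which is what makes a generated script or a dedicated tool preferable to doing it by hand.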

    Read the article

  • I am not able to update form data to MySQL using PHP and jQuery

    - by Jimson Jose
    My problem is that I am unable to update the values entered in the form. I have attached all the files. I'm using a MySQL database to fetch data. What happens is that I'm able to add and delete records from the form using jQuery and PHP scripts against the MySQL database, but I am not able to update data which was retrieved from the database. The file structure is as follows: index.php is a file with jQuery functions; it displays the form for adding new data to MySQL using save.php, and the list of all records is viewed without refreshing the page (calling load-list.php to view all records from index.php works fine, as does save.php to save data from the form). Delete is a function called from index.php to delete a record from the MySQL database (the function calling delete.php works fine). Update is a function called from index.php to update data using update-form.php by retrieving a specific record from the MySQL table (works fine). The problem lies in updating data from update-form.php to update.php (in which the update query is written for MySQL). I have tried many things - at last I figured out that data is not being transferred from update-form.php to update.php; there is a small problem in the jQuery AJAX function where it is not transferring data to the update.php page. Something is missing in calling the update.php page; it is not entering into that page. I am a newbie in programming. I collected this script from many forums and made this one, so I was limited in solving this problem. I came to know that this is a good platform for me, and one of many where we get help to create new things. Please find the link below to download all the files, which come to 35kb (virus-free assurance): download mysmallform files in ZIPped format, including the mysql query

    Read the article

  • Getting weird issue with TO_NUMBER function in Oracle

    - by Fazal
    I have been getting an intermittent issue when executing the to_number function in the where clause on a varchar2 column, if the number of records exceeds a certain number n. I used n because there is no exact number of records at which it happens: on one DB it happened after n reached 1 million, on another at 0.1 million.

    E.g. I have a table with 10 million records, say Country, which has a field1 varchar2 column containing numeric data, and an Id column. If I run this query as an example:

        select * from country
        where to_number(field1) = 23 and id > 1 and id < 100000

    it works. But if I run the query

        select * from country
        where to_number(field1) = 23 and id > 1 and id < 100001

    it fails, saying "invalid number". Next I try the query

        select * from country
        where to_number(field1) = 23 and id > 2 and id < 100001

    and it works again. As I only got "invalid number" it was confusing, but the log file said:

        Memory Notification: Library Cache Object loaded into SGA
        Heap size 3823K exceeds notification threshold (2048K)
        KGL object name: with sqlplan as (
            select c006 object_owner, c007 object_type, c008 object_name
            from htmldb_collections
            where COLLECTION_NAME='HTMLDB_QUERY_PLAN'
              and c007 in ('TABLE','INDEX','MATERIALIZED VIEW','INDEX (UNIQUE)')),
        ws_schemas as (
            select schema from wwv_flow_company_schemas
            where security_group_id = :flow_security_group_id),
        t as (
            select s.object_owner table_owner, s.object_name table_name, d.OBJECT_ID
            from sqlplan s, sys.dba_objects d

    It seems related to SGA size, but Google did not give me much help on this. Does anyone have any idea about this issue with TO_NUMBER, or Oracle functions on large data sets?
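
    A plausible explanation (an assumption, but a classic one): Oracle is free to reorder or merge predicates, so once the plan changes with data volume, to_number(field1) can be evaluated on rows outside the id range - and one of those rows holds a non-numeric value, raising "invalid number" (ORA-01722). Guarding the conversion makes the query safe under any plan (regexp_like needs 10g or later):

        -- Convert only values that look numeric; anything else becomes NULL
        -- and simply fails the comparison instead of raising an error.
        select *
        from country
        where id > 1 and id < 100001
          and to_number(case
                          when regexp_like(trim(field1), '^-?[0-9]+(\.[0-9]+)?$')
                          then field1
                        end) = 23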

    Read the article

  • Commercial web application--scalable database design

    - by Rob Campbell
    I'm designing a set of web apps to track scientific laboratory data. Each laboratory has several members, each of whom will access both their own data and that of their laboratory as a whole. Many typical queries will thus be expected to return records of multiple members (e.g. my mouse, joe's mouse and sally's mouse). I think I have the database fairly well normalized. I'm now wondering how to ensure that users can efficiently access both their own data and their lab's data set when it is mixed among (hopefully) a whole ton of records from other labs. What I've come up with so far is that most tables will end with two fields: user_id and labgroup_id. The WHERE clause of any SELECT statement will include the appropriate reference to one of the id fields ("...WHERE labgroup_id = n..." or "...WHERE user_id = n..."). My questions are:

    1. Is this an approach that will scale to 10^6 or more records?
    2. If so, what's the best way to use these fields in a query so that it most efficiently searches the relevant subset of the database? E.g. should the first step in querying be to create a temporary table containing just the labgroup's data? Or will indexing using some combination of the id, user_id, and labgroup_id fields be sufficient at that scale?

    I thank any responders very much in advance.
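
    A sketch of the usual indexing answer, with a hypothetical samples table standing in for the real ones: at 10^6-10^7 rows, ordinary secondary indexes on the two filter columns are normally sufficient, and a temporary table per query would only add work:

        -- One index per access path; each SELECT then touches only its subset.
        CREATE INDEX idx_samples_labgroup ON samples (labgroup_id);
        CREATE INDEX idx_samples_user     ON samples (user_id);

        -- Typical queries that can use them:
        SELECT * FROM samples WHERE labgroup_id = 42;
        SELECT * FROM samples WHERE user_id = 7;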

    Read the article

  • SQL Server INSERT, Scope_Identity() and physical writing to disc

    - by TheBlueSky
    Hello everyone, I have a stored procedure that does, among other stuff, some inserts into different tables inside a loop. See the example below for clearer understanding:

        INSERT INTO T1 VALUES ('something')
        SET @MyID = Scope_Identity()
        -- ... some stuff go here
        INSERT INTO T2 VALUES (@MyID, 'something else')
        -- ... the rest of the procedure

    These two tables (T1 and T2) have an IDENTITY(1, 1) column in each one of them; let's call them ID1 and ID2. However, after running the procedure in our production database (a very busy database) and having more than 6250 records in each table, I noticed one incident where ID1 does not match ID2! Normally, for each record inserted in T1 there is a record inserted in T2, and the identity column in both is incremented consistently. The "wrong" records were something like this:

        ID1   Col1
        ----  ---------
        4709  data-4709
        4710  data-4710

        ID2   ID1   Col1
        ----  ----  ---------
        4709  4710  data-4709
        4710  4709  data-4710

    Note the "inverted" ID1 in the second table. Not knowing that much about what SQL Server does underneath, I have put together the following "theory"; maybe someone can correct me on this. What I think is that because the loop is faster than physically writing to the table, and/or maybe some other thing delayed the writing process, the records were buffered. When it came time to write them, they were written in no particular order. Is that even possible? If no, how do you explain the above-mentioned scenario? If yes, then I have another question: what if the first insert (from the code above) got delayed? Doesn't that mean I won't get the correct IDENTITY to insert into the second table? If the answer to this is also yes, what can I do to ensure the insertion into the two tables happens in sequence with the correct IDENTITY? I appreciate any comment and information that helps me understand this. Thanks in advance.
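
    If the cause is what it looks like - two concurrent executions interleaving between the two inserts (SCOPE_IDENTITY() itself is reliable within a scope, and committed rows are not reordered on disk in a way that changes their identity values) - then serializing the pair fixes it. A sketch using an application lock; the resource name is arbitrary:

        DECLARE @MyID int;

        BEGIN TRANSACTION;

        -- Only one caller at a time may run the paired inserts.
        EXEC sp_getapplock @Resource = 'T1_T2_pair',
                           @LockMode = 'Exclusive',
                           @LockOwner = 'Transaction';

        INSERT INTO T1 VALUES ('something');
        SET @MyID = SCOPE_IDENTITY();

        INSERT INTO T2 VALUES (@MyID, 'something else');

        COMMIT TRANSACTION;  -- the applock is released with the transaction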

    Read the article

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought of making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network, doing large inserts, searches, updates etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

    - Security
    - Concurrent access
    - Performance with large amounts of data
    - Amount of time to do such a massive rewrite/switch
    - Lack of transactions
    - PITA to map relational data to flat files
    - NTFS doesn't support tons of files in a directory well

    I fear that this will be a great post on the Daily WTF someday if I can't stop it now.
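
    For the "lack of transactions" bullet, the demo app could include a tiny T-SQL script like the sketch below (the Accounts table is made up). The point to dramatize: with a flat file, a crash or disconnect between the two writes leaves a half-done state that nothing cleans up:

        BEGIN TRY
            BEGIN TRANSACTION;

            UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
            UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 2;

            COMMIT TRANSACTION;   -- both changes become visible atomically
        END TRY
        BEGIN CATCH
            ROLLBACK TRANSACTION; -- any failure leaves no half-done state
        END CATCH;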

    Read the article

  • SQL Queries SELECT IN and SELECT NOT IN

    - by Sequenzia
    Does anyone know why the results of the following 2 queries do not add up to the results of the 3rd one?

        SELECT COUNT(leadID) FROM Leads
        WHERE makeID NOT IN (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                             WHERE uploadID = 3 AND uploadRowID = 1)
          AND modelID NOT IN (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                              WHERE uploadID = 3 AND uploadRowID = 2)

        SELECT COUNT(leadID) FROM Leads
        WHERE makeID IN (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                         WHERE uploadID = 3 AND uploadRowID = 1)
           OR modelID IN (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                          WHERE uploadID = 3 AND uploadRowID = 2)

        SELECT COUNT(leadID) FROM Leads

    The first query is the count I need. The second one is to tell the user how many records were suppressed based on the contents of the DG_App.dbo.uploadData table. The third query is just a straight count of all the records. When I run these, the results of query 1 plus the results of query 2 come up about 46K records short of the count of the entire table. I have played with grouping the WHERE statements with () but that did not change the counts at all. This is MS SQL Server 2012. Any input on this would be great. Thanks
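
    The usual culprit for counts like these (an assumption, but it fits the symptom): rows where makeID or modelID is NULL - or a NULL returned by a subquery - satisfy neither the NOT IN predicates nor the IN predicates, so those rows are invisible to both query 1 and query 2. A quick check and a NULL-safe rewrite, sketched:

        -- How many rows fall through both queries?
        SELECT COUNT(leadID)
        FROM Leads
        WHERE makeID IS NULL OR modelID IS NULL

        -- Query 1 rewritten to treat NULL keys as "not suppressed":
        SELECT COUNT(leadID) FROM Leads
        WHERE (makeID IS NULL OR makeID NOT IN
                  (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                   WHERE uploadID = 3 AND uploadRowID = 1
                     AND uploadDataMapID IS NOT NULL))
          AND (modelID IS NULL OR modelID NOT IN
                  (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                   WHERE uploadID = 3 AND uploadRowID = 2
                     AND uploadDataMapID IS NOT NULL))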

    Read the article

  • SQL Duplicates Issue SQL SERVER 2000

    - by jeff
    I have two tables: Product and ProductRateDetail. The parent table is Product. I have duplicate records in the Product table which need to be unique. There are entries in the ProductRateDetail table which correspond to duplicate records in the Product table. Somehow I need to update the ProductRateDetail table to match the original (older) ID from the Product table and then remove the duplicates from the Product table. I would do this manually but there are hundreds of records. I.e. something like

        UPDATE tbl_productRateDetail
        SET productID = (originalID from tbl_product)

    then something like

        DELETE FROM tbl_product
        WHERE duplicate ID and only delete the recently added ID data

    Example data (sorry, I can't work out this formatting thing):

        select * from dbo.Product where ProductCode = '10003'

        ProductID  ProductTypeID  ProductDescription   ProductCode  ProductSize
        365        1              BEND DOUBLE FLANGED  10003        80mmX90deg
        1354       1              BEND DOUBLE FLANGED  10003        80mmX90deg

        SELECT * FROM [MSTS2].[dbo].[ProductRateDetail] WHERE ProductID in (365, 1354)

        ProductRateDetailID  ProductRateID  ProductID  UnitRate
        365                  1              365        16.87
        1032                 5              365        16.87
        2187                 10             365        16.87
        2689                 11             365        16.87
        3191                 12             365        16.87
        7354                 21             1354       21.30
        7917                 22             1354       21.30
        8480                 23             1354       21.30
        9328                 25             1354       21.30
        9890                 26             1354       21.30
        10452                27             1354       21.30

    Please help!
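
    A sketch that runs on SQL Server 2000, assuming "duplicate" means same ProductCode and that the lowest ProductID is the original to keep (adjust both if the real rule differs). Children are repointed first, then the newer duplicates are deleted:

        -- 1. Repoint rate details from each duplicate to the surviving (lowest) ID.
        UPDATE rd
        SET rd.ProductID = keep.KeepID
        FROM dbo.ProductRateDetail rd
        JOIN dbo.Product p ON p.ProductID = rd.ProductID
        JOIN (SELECT ProductCode, MIN(ProductID) AS KeepID
              FROM dbo.Product
              GROUP BY ProductCode) keep ON keep.ProductCode = p.ProductCode
        WHERE rd.ProductID <> keep.KeepID

        -- 2. Remove the duplicate products, which now have no children.
        DELETE p
        FROM dbo.Product p
        JOIN (SELECT ProductCode, MIN(ProductID) AS KeepID
              FROM dbo.Product
              GROUP BY ProductCode) keep ON keep.ProductCode = p.ProductCode
        WHERE p.ProductID <> keep.KeepID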

    Read the article

  • MySQL Need some help with a query

    - by Jules
    I'm trying to fix some data by adding a new field. I have a backup from a few months ago and have restored this database to my server. I'm looking at a table called pads; its primary key is PadID and the field of importance is called RemoveMeDate. In my restored (older) database there are fewer records with an actual date set in RemoveMeDate. My control date is 2001-01-01 00:00:00, meaning that the record is not hidden, aka visible. What I need to do is select all the records from the older database/table with the control date and join with those from the newer db/table where the control date is not set. I hope I've explained that correctly. I'll try again, with numbers: I have 80,000 visible records in the older table (with the control date set) and 30,000 in the newer db/table. I need to select the 50,000 from the old database, to perform an update query. Here's my query, which I can't get to work as I'd like. jules-fix-reasons is the old database, jules is the newer one:

        select p.padid
        from `jules-fix-reasons`.`pads` p
        JOIN `jules`.`pads` ON p.padid = `jules`.`pads`.`PadID`
        where p.RemoveMeDate <> '2001-01-01 00:00:00'
          AND `jules`.`pads`.RemoveMeDate = '2001-01-01 00:00:00'
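
    If the goal really is "visible in the old copy but hidden in the new one" - which is what the 80,000 - 30,000 = 50,000 arithmetic suggests - the two date conditions look swapped. A sketch of the corrected query:

        SELECT p.padid
        FROM `jules-fix-reasons`.`pads` p
        JOIN `jules`.`pads` n ON p.padid = n.PadID
        WHERE p.RemoveMeDate = '2001-01-01 00:00:00'   -- visible in the old backup
          AND n.RemoveMeDate <> '2001-01-01 00:00:00'  -- hidden in the current db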

    Read the article

  • Should I advocate migrating from Access to (My)SQL?

    - by HotOil
    Hi: We have a Windows MFC app that is written against an Access database on a company server. The db is not that big: 19 MB. There are at most 2-3 users accessing it at any one time. It is used in a factory environment where access speed (or lack thereof) over the intranet becomes noticeable, as it is part of the manufacturing time for our widgets. The scenario is this: as each widget is completed, it gets a record in the db. By the end of the year the db is larger, and searching for a record takes longer and longer. The solution so far has been to manually move older records to an archival table about once a year. We are reworking other portions of this app right now, and it would be a good time to move to another db if we are going to do it. It is my understanding that if we were using SQL, the search time would not go up as the table gets bigger, because the entire .mdb does not have to be sent over the network each time. Is this correct? Does anyone have any insight about whether it could be worth the trouble (time and money) of migrating to a new db, or should I just add more functionality to the application we have now - maybe automatically purge the older records from time to time, and add additional facilities to the app to get at the older records when needed? Thanks for any wisdom you can share.
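
    On the "automatically purge" option: in Access's own SQL dialect this can be two scheduled statements (table and column names here are hypothetical), and the same pattern carries over almost unchanged if you later migrate to MySQL or SQL Server:

        -- Move widget records older than a year into the archive table...
        INSERT INTO widget_archive
        SELECT * FROM widget
        WHERE completed_on < DateAdd('yyyy', -1, Date());

        -- ...then remove them from the live table.
        DELETE FROM widget
        WHERE completed_on < DateAdd('yyyy', -1, Date());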

    Read the article

  • How to get around DnsRecordListFree error in .NET Framework 4.0?

    - by Greg Finzer
    I am doing an MX record lookup. I am getting an error when calling DnsRecordListFree in the .NET Framework 4.0. I am using Windows 7. How do I get around it? Here is the error:

        System.MethodAccessException: Attempt by security transparent method to call native code through method.

    Here is my code:

        [DllImport("dnsapi", EntryPoint = "DnsQuery_W", CharSet = CharSet.Unicode,
                   SetLastError = true, ExactSpelling = true)]
        private static extern int DnsQuery([MarshalAs(UnmanagedType.VBByRefStr)]ref string pszName,
            QueryTypes wType, QueryOptions options, int aipServers,
            ref IntPtr ppQueryResults, int pReserved);

        [DllImport("dnsapi", CharSet = CharSet.Auto, SetLastError = true)]
        private static extern void DnsRecordListFree(IntPtr pRecordList, int FreeType);

        public List<string> GetMXRecords(string domain)
        {
            List<string> records = new List<string>();
            IntPtr ptr1 = IntPtr.Zero;
            IntPtr ptr2 = IntPtr.Zero;
            MXRecord recMx;
            try
            {
                int result = DnsQuery(ref domain, QueryTypes.DNS_TYPE_MX,
                                      QueryOptions.DNS_QUERY_BYPASS_CACHE, 0, ref ptr1, 0);
                if (result != 0)
                {
                    if (result == 9003)
                    {
                        //No Record Exists
                    }
                    else
                    {
                        //Some other error
                    }
                }
                for (ptr2 = ptr1; !ptr2.Equals(IntPtr.Zero); ptr2 = recMx.pNext)
                {
                    recMx = (MXRecord)Marshal.PtrToStructure(ptr2, typeof(MXRecord));
                    if (recMx.wType == 15)
                    {
                        records.Add(Marshal.PtrToStringAuto(recMx.pNameExchange));
                    }
                }
            }
            finally
            {
                DnsRecordListFree(ptr1, 0);
            }
            return records;
        }

    Read the article

  • Accessing MS Access database from C#

    - by Abilash
    I want to use MS Access as the database for my C# Windows Forms application. I have used the OleDb driver for connecting to MS Access. I am able to select records from MS Access using OleDbConnection and ExecuteReader, but I am unable to insert, update and delete records. My code is as follows:

        OleDbConnection con = new OleDbConnection(strCon);
        try
        {
            con.Open();
            OleDbCommand com = new OleDbCommand(
                "INSERT INTO DPMaster(DPID,DPName,ClientID,ClientName) VALUES('53','we','41','aw')", con);
            int a = com.ExecuteNonQuery();
            //OleDbCommand com = new OleDbCommand("SELECT * FROM DPMaster", con);
            //OleDbDataReader dr = com.ExecuteReader();
            //while (dr.Read())
            //{
            //    MessageBox.Show(dr[2].ToString());
            //}
            MessageBox.Show(a.ToString());
        }
        catch
        {
            MessageBox.Show("cannot");
        }

    If I execute the commented block, the application works, but the insert block doesn't. Why am I unable to insert/update/delete records in the database? My connection string is as follows:

        string strCon = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=xyz.mdb;Persist Security Info=True";

    Read the article

  • Getting the first of a GROUP BY clause in SQL

    - by Michael Bleigh
    I'm trying to implement single-column regionalization for a Rails application and I'm running into some major headaches with a complex SQL need. For this system, a region can be represented by a country code (e.g. us), a continent code that is uppercase (e.g. NA), or NULL, indicating the "default" information. I need to group these items by some relevant information such as a foreign key (we'll call it external_id). Given a country and its continent, I need to be able to select only the most specific region available. So if records exist with the country code, I select them. If not, I want records with the continent code. If not that, I want records with a NULL code so I can receive the default values. So far I've figured that I may be able to use a generated CASE statement to get an arbitrary sort order. Something like this:

        SELECT *, CASE region
                    WHEN 'us' THEN 1
                    WHEN 'NA' THEN 2
                    ELSE 3
                  END AS region_sort
        FROM my_table
        WHERE region IN ('us','NA') OR region IS NULL
        GROUP BY external_id
        ORDER BY region_sort

    The problem is that without an aggregate function, the actual data returned by the GROUP BY for a given row seems to be untameable. How can I massage this query to make it return only the first record of the region_sort ordered groups?
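
    One portable way to get "only the best row per external_id" without leaning on MySQL's nondeterministic GROUP BY behavior: keep a row only when no row in the same group ranks strictly better. A sketch:

        SELECT t.*
        FROM my_table t
        WHERE (t.region IN ('us','NA') OR t.region IS NULL)
          AND NOT EXISTS (
              SELECT 1
              FROM my_table s
              WHERE s.external_id = t.external_id
                AND (s.region IN ('us','NA') OR s.region IS NULL)
                AND CASE s.region WHEN 'us' THEN 1 WHEN 'NA' THEN 2 ELSE 3 END
                  < CASE t.region WHEN 'us' THEN 1 WHEN 'NA' THEN 2 ELSE 3 END
          );
        -- Ties at the same rank all come back; add a tiebreaker (e.g. lowest id)
        -- if exactly one row per external_id is required.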

    Read the article

  • Is My DataTable Snippet Written Correctly?

    - by user311509
        public static DataTable GetDataTable(SqlCommand sqlCmd)
        {
            DataTable tblMyTable = new DataTable();
            DataSet myDataSet = new DataSet();
            try
            {
                //1. Create connection
                mSqlConnection = new SqlConnection(mStrConnection);
                //2. Open connection
                mSqlConnection.Open();
                mSqlCommand = new SqlCommand();
                mSqlCommand = sqlCmd;
                //3. Assign Connection
                mSqlCommand.Connection = mSqlConnection;
                //4. Create/Set DataAdapter
                mSqlDataAdapter = new SqlDataAdapter();
                mSqlDataAdapter.SelectCommand = mSqlCommand;
                //5. Populate DataSet
                mSqlDataAdapter.Fill(myDataSet, "DataSet");
                tblMyTable = myDataSet.Tables[0];
            }
            catch (Exception ex)
            {
            }
            finally
            {
                //6. Clear objects
                if ((mSqlDataAdapter != null))
                {
                    mSqlDataAdapter.Dispose();
                }
                if ((mSqlCommand != null))
                {
                    mSqlCommand.Dispose();
                }
                if ((mSqlConnection != null))
                {
                    mSqlConnection.Close();
                    mSqlConnection.Dispose();
                }
            }
            //7. Return DataSet
            return tblMyTable;
        }

    I use the above code to return records from the database. The snippet will run in a web application expected to have around 5000 visitors daily, and the records returned can reach 20,000 or more. The returned records are viewed (read-only) in a paged GridView. Would it be better to use a DataReader instead of a DataTable?

    NOTE: two columns in the GridView are hyperlinked.

    Read the article

  • Oracle Forms on-button-pressed trigger to solve three scenarios

    - by DBase486
    Hello, I'm writing a when-button-pressed trigger on a save button for an Oracle Forms 6i form, and it has to fulfill a couple of scenarios. Here's some background information: the fields we're primarily concerned with are n_number, alert_id and end_date. For all three scenarios we are comparing candidate records against the following records in the database (for the sake of argument, let's assume they're the only records in the database so far):

        alert_id  n_number  end_date
        --------  --------  ----------
        1         5
        2         6         10/25/2009

    Scenario 1: The user enters a new record: alert_id 1, n_number 5, end_date NULL.
    Objective: prevent the user from committing duplicate rows.

    Scenario 2: The user enters a new record: alert_id 1, n_number 10, end_date NULL.
    Objective: notify the user that this alert_id already exists, but allow the user to commit the row, if desired.

    Scenario 3: The user enters a new record: alert_id 2, n_number 6, end_date NULL.
    Objective: notify the user that this alert_id has occurred in the past (i.e. it has a not-null end_date), but allow the user to commit the row, if desired.

    I've written code which seems to comply with the first two scenarios, but it prevents me from fulfilling the third. Issues: when I enter the third scenario's case, I am prompted to commit the record, but when I attempt this, the "duplicate_stop" alert pops up, preventing me. I'm also getting the following error: ORA-01843: not a valid month. While testing the code for the third scenario in Toad (hard-coding the values, etc.) things seemed to be fine. Why would I encounter these problems at run-time? Help is very much appreciated. Thank you
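
    A sketch of how the three checks might be separated inside the trigger (block, table and alert names here are hypothetical). The ORA-01843 error usually means a date is being compared or converted as text, so any date logic belongs on DATE values, not strings:

        DECLARE
            v_dup   NUMBER;  -- scenario 1: exact active duplicate
            v_ended NUMBER;  -- scenario 3: alert_id that already ended
            v_same  NUMBER;  -- scenario 2: alert_id in use with another n_number
        BEGIN
            SELECT COUNT(*) INTO v_dup
            FROM alerts
            WHERE alert_id = :blk.alert_id
              AND n_number = :blk.n_number
              AND end_date IS NULL;

            SELECT COUNT(*) INTO v_ended
            FROM alerts
            WHERE alert_id = :blk.alert_id
              AND end_date IS NOT NULL;

            SELECT COUNT(*) INTO v_same
            FROM alerts
            WHERE alert_id = :blk.alert_id
              AND n_number <> :blk.n_number;

            IF v_dup > 0 THEN
                NULL;  -- show duplicate_stop and abort the commit (scenario 1)
            ELSIF v_ended > 0 THEN
                NULL;  -- warn "occurred in the past"; commit if confirmed (scenario 3)
            ELSIF v_same > 0 THEN
                NULL;  -- warn "alert_id exists"; commit if confirmed (scenario 2)
            END IF;
        END;

    Note the end_date IS NULL condition in the first check: without it, scenario 3's entry (alert_id 2, n_number 6) collides with the historical row and trips duplicate_stop, which matches the behavior described.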

    Read the article

  • MS Access Bulk Insert exits app

    - by Brij
    In the web service, I have to bulk insert data into an MS Access database. First there was a single complex Insert-Select query; there was no error, but the app exited after inserting some records. I have divided the query to make it simpler and am using LINQ to remove the complexity of the query. Now I am inserting records using a for loop. There are approx 10000 records. Again, same problem. I put a breakpoint after the loop, but no hit. I have used try-catch, but no error found.

        For Each item In lstSelDel
            Try
                qry = String.Format("insert into Table(Id,link,file1,file2,pID) values ({0},""{1}"",""{2}"",""{3}"",{4})", item.WebInfoID, item.Links, item.Name, item.pName, pDateID)
                DAL.ExecN(qry)
            Catch ex As Exception
                Throw ex
            End Try
        Next

        Public Shared Function ExecN(ByVal SQL As String) As Integer
            Dim ret As Integer = -1
            Dim nowConString As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + System.AppDomain.CurrentDomain.BaseDirectory + "DataBase\\mydatabase.mdb;"
            Dim nowCon As System.Data.OleDb.OleDbConnection
            nowCon = New System.Data.OleDb.OleDbConnection(nowConString)
            Dim cmd As New OleDb.OleDbCommand(SQL, nowCon)
            nowCon.Open()
            ret = cmd.ExecuteNonQuery()
            nowCon.Close()
            nowCon.Dispose()
            cmd.Dispose()
            Return ret
        End Function

    After the app exits, I see w3wp.exe using more than 50% of memory. Any idea what is going wrong? Is there any limitation of MS Access?

    Read the article

  • Is there any performance issue using Row_Number to implement table paging in Sql Server 2008?

    - by majkinetor
    I want to implement table paging using this method:

        SET @PageNum = 2;
        SET @PageSize = 10;

        WITH OrdersRN AS
        (
            SELECT ROW_NUMBER() OVER(ORDER BY OrderDate, OrderID) AS RowNum, *
            FROM dbo.Orders
        )
        SELECT *
        FROM OrdersRN
        WHERE RowNum BETWEEN (@PageNum - 1) * @PageSize + 1
                         AND @PageNum * @PageSize
        ORDER BY OrderDate, OrderID;

    Is there anything I should be aware of? The table has millions of records. Thx.

    EDIT: After using the suggested MAXROWS method for some time (which works really, really fast), I had to switch back to the ROW_NUMBER method because of its greater flexibility. I am also very happy with its speed so far (I am working with a view having more than 1M records and 10 columns). To run any kind of query I use the following modification:

        CREATE PROCEDURE [dbo].[PageSelect]
        (
            @Sql nvarchar(512),
            @OrderBy nvarchar(128) = 'Id',
            @PageNum int = 1,
            @PageSize int = 0
        )
        AS
        BEGIN
            SET NOCOUNT ON

            Declare @tsql as nvarchar(1024)
            Declare @i int, @j int

            if (@PageSize <= 0) OR (@PageSize > 10000)
                SET @PageSize = 10000 -- never return more than 10K records

            SET @i = (@PageNum - 1) * @PageSize + 1
            SET @j = @PageNum * @PageSize

            SET @tsql = 'WITH MyTableOrViewRN AS (
                SELECT ROW_NUMBER() OVER(ORDER BY ' + @OrderBy + ') AS RowNum, *
                FROM MyTableOrView
                WHERE ' + @Sql + ' )
                SELECT * FROM MyTableOrViewRN
                WHERE RowNum BETWEEN ' + CAST(@i as varchar) + ' AND ' + cast(@j as varchar)

            exec(@tsql)
        END

    If you use this procedure, make sure you prevent SQL injection.
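
    One thing worth knowing at that scale (a general observation, not from the thread): ROW_NUMBER still has to number every row before the requested page, so deep pages get progressively slower. If the client can pass the last key it saw instead of a page number, a keyset "seek" stays fast at any depth - a sketch against the same Orders table:

        -- Page forward from the last (OrderDate, OrderID) the client saw:
        SELECT TOP (@PageSize) *
        FROM dbo.Orders
        WHERE OrderDate > @LastOrderDate
           OR (OrderDate = @LastOrderDate AND OrderID > @LastOrderID)
        ORDER BY OrderDate, OrderID;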

    Read the article

  • Design pattern to integrate Rails with a Comet server

    - by empire29
    I have a Ruby on Rails (2.3.5) application and an APE (Ajax Push Engine) server. When records are created within the Rails application, I need to push the new record out on the applicable channels to the APE server. Records can be created in the Rails app by the traditional path through the controller's create action, or by several event machines that are constantly monitoring various input streams and creating records when they see data that meets a certain criteria. It seems to me that the best/right place to put the code that pushes the data out to the APE server (which in turn pushes it out to the clients) is in the model's after_create hook, since not all record creations will flow through the controller's create action. The final caveat is that I want to push a piece of formatted HTML out to the APE server (rather than a JSON representation of the data). The reasons I want to do this are 1) I already have logic to produce the desired layout in existing partials, and 2) I don't want to create a JavaScript implementation of the partials (JavaScript that takes a JSON object and creates all the HTML around it for presentation) - this would quickly become a maintenance nightmare. The problem is that this would require "rendering" partials from within the model, which I'm having trouble doing anyhow, because they don't seem to have access to helpers when they're rendered in this manner. Anyhow - just wondering what the right way to organize all of this is. Thanks

    Read the article
