Search Results

Search found 10732 results on 430 pages for 'pi db'.


  • Strange results from OdbcDataReader reading SQLite DB

    - by stout
    This method returns some strange results, and I was wondering if someone could explain why this is happening, and suggest a way to get my desired results. Results:

        FileName   = what I'd expect
        FileSize   = what I'd expect
        Buffer     = all bytes = 0
        BytesRead  = 0
        BlobString = string of binary data
        FieldType  = BLOB (what I'd expect)
        ColumnType = System.String

    Furthermore, if the file is larger than a few KB, the reader throws an exception stating that the StringBuilder capacity argument must be greater than zero (presumably because the size is greater than Int32.MaxValue). I guess my question is: how does one properly read large BLOBs from an OdbcDataReader?

        public static String SaveBinaryFile(String Key)
        {
            try
            {
                Connect();
                OdbcCommand Command = new OdbcCommand(
                    "SELECT [_filename_],[_filesize_],[_content_] FROM [_sys_content] WHERE [_key_] = '" + Key + "';",
                    Connection);
                OdbcDataReader Reader = Command.ExecuteReader(CommandBehavior.SequentialAccess);
                if (Reader.HasRows == false)
                    return null;
                String FileName = Reader.GetString(0);
                int FileSize = int.Parse(Reader.GetString(1));
                byte[] Buffer = new byte[FileSize];
                long BytesRead = Reader.GetBytes(2, 0, Buffer, 0, FileSize);
                String BlobString = (String)Reader["_content_"];
                String FieldType = Reader.GetDataTypeName(2);
                Type ColumnType = Reader.GetFieldType(2);
                return null;
            }
            catch (Exception ex)
            {
                Tools.ErrorHandler.Catch(ex);
                return null;
            }
        }
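
    One way to avoid both the zeroed buffer and the StringBuilder error is to stream the BLOB in fixed-size chunks, rather than asking for it in one call or through the string indexer. A minimal sketch, reusing the [_sys_content] schema above but with a parameterized query; it assumes Connect()/Connection are as in the original and that the ODBC driver supports sequential GetBytes:

        // Requires System, System.Data, System.Data.Odbc, System.IO.
        public static void SaveBlobToFile(String Key, String OutputPath)
        {
            OdbcCommand Command = new OdbcCommand(
                "SELECT [_content_] FROM [_sys_content] WHERE [_key_] = ?", Connection);
            Command.Parameters.AddWithValue("@key", Key);   // ODBC parameters are positional
            using (OdbcDataReader Reader = Command.ExecuteReader(CommandBehavior.SequentialAccess))
            using (FileStream Output = File.Create(OutputPath))
            {
                if (!Reader.Read())
                    return;
                byte[] Chunk = new byte[4096];
                long Offset = 0;
                long Read;
                // GetBytes returns how many bytes it copied; loop until the driver runs dry.
                while ((Read = Reader.GetBytes(0, Offset, Chunk, 0, Chunk.Length)) > 0)
                {
                    Output.Write(Chunk, 0, (int)Read);
                    Offset += Read;
                }
            }
        }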

  • Struts DB query execution problem

    - by henderunal
    Hello, I am trying to insert a row into my DB2 9.7 database from IBM RAD 7.5, using Struts 1.3, but when I execute the query I get the errors shown here: [Pastebin link][1]. Here is my code:

        KayitBean kayit = (KayitBean) form;
        // String name = kayit.getName();
        String name = "endee";
        DBConn hb = new DBConn();
        Connection conn = hb.getConnection();
        System.out.println("basarili");
        // String sql = "SELECT * FROM ENDER.\"MEKANDENEME\"";
        String sql = "INSERT INTO ENDER.\"MEKANDENEME\" VALUES ('endere', 'bos');";
        System.out.println(sql);
        System.out.println("basarili2");
        PreparedStatement ps = conn.prepareStatement(sql);
        System.out.println("basarili3");
        ResultSet rs = ps.executeQuery();
        // String ender = rs.getArray(1).toString();
        System.out.println("basarili4");
        // System.out.println(rs);
        conn.close();

    The error appears after the System.out.println("basarili3") line. Please help me.

  • Tracking DB changes with Zend Framework?

    - by Chad Johnson
    I am trying to decide between the Zend Framework and Ruby on Rails for my web application. If I go with ZF, I need the following:

    - A way to incrementally track changes to my database, as with RoR's migration feature (001_something.sql, 002_something_else.sql).
    - A place to put SQL for the next release of my software. At work, in our custom PHP solution, we just have release.sql, which gets run, archived, and blanked out upon release.

    ZF has Zend_Db_Schema_Manager, which does the same thing, but I'm not interested, as it's not official, complete, or maintained. Is there an official mechanism that ZF provides for doing something similar to what I described?

    EDIT: I ended up going with Rails. Nothing compares.

  • SQL Structure of DB table with different types of columns

    - by Dmitry Dvornikov
    I have a problem with the optimization of the structure of the database. I'll try to explain it exactly. I am creating a project where we can add different values, but these values must have different column types in the database (e.g. int, double, varchar). What is the best way to store values of different types in the database? In the project I'm using Propel 1.6. The point is to be able to add a value with an 'int', 'varchar', or other column type, while keeping searches on the table efficient.

    In total, I have two ideas. The first is to create a table "value" with the columns "id", "value_int", "value_double", "value_varchar", etc., with the corresponding column types. Depending on the type of the value, the record is saved with the value in the appropriate column (the rest are NULL). The second solution is to create separate tables such as "value_int", "value_varchar", etc. Each would have the columns "id" and "value", where "value" has the relevant type (int, varchar, and so on).

    I must admit that I don't believe in either of the above solutions. Originally I was thinking about a single "value" table where the column would be of type "text", but that solution would probably be even worse. I would like to know your opinion on this topic; maybe something else would be better. Thanks in advance.

    EDIT: For example, we have three tables:

    USER (table of users):
    - id
    - name

    FIELD (table of profile fields, where the column 'type' is the type of the field, e.g. int or varchar):
    - id
    - type
    - name

    VALUE:
    - id
    - user_id (FK user.id)
    - field_id (FK field.id)
    - value

    So each row in the USER table is a user, and the profile is stored in the VALUE table. But each profile field may have a different type (column 'type' in the FIELD table), and based on that I would want the value to go into a column of the appropriate type.

  • rake db:create gives problems when used from a limited account

    - by Xinxua
    I am using MySQL 5.1, and the mysql gem version is 2.7.3. Running rake db:create gives the following error when I try it from a limited account on my XP machine; if I try it using the admin account, it works fine. I find this weird, because then it cannot be a problem with the mysql gem itself:

        (in F:/Temp/wassup)
        !!! The bundled mysql.rb driver has been removed from Rails 2.2. Please install the mysql gem and try again: gem install mysql.
        rake aborted!
        5: Access is denied. - C:/Program Files/Ruby/lib/ruby/gems/1.8/gems/mysql-2.7.3-x86-mswin32/ext/mysql.so
        (See full trace by running task with --trace)

    I need to work from the limited account. Can anyone let me know why this is happening?

  • How to delete duplicate/aggregate rows faster in a file using Java (no DB)

    - by S. Singh
    I have a 2 GB text file with 5 columns delimited by tabs. A row is called a duplicate only if 4 out of its 5 columns match. Right now, I am de-duping by first loading each column into a separate List, then iterating through the lists, deleting duplicate rows as they are encountered, and aggregating.

    The problem: it is taking more than 20 hours to process one file, and I have 25 such files to process. Can anyone please share their experience of how they would go about such de-duping? This will be throw-away code, so I was looking for a quick/dirty solution that gets the job done as soon as possible. Here is my pseudocode (roughly):

        Iterate over the rows, i = current_row_no
            Iterate over row no. i+1 to last_row
                if (col1 matches    // find duplicate
                    && col2 matches
                    && col3 matches
                    && col4 matches)
                {
                    col5List.set(i, get col5);    // aggregate
                }

    Duplicate example: A and B are duplicates. Given A=(1,1,1,1,1), B=(1,1,1,1,2), C=(2,1,1,1,1), the output would be A=(1,1,1,1,1+2), C=(2,1,1,1,1) (notice that B has been kicked out).
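
    The pairwise scan above is O(n^2) over the rows, which is where the 20 hours go. A single pass with a hash map keyed on the four matching columns visits each row only once; a sketch in C# (the question is Java, but the pattern carries over directly). It assumes the merged rows fit in memory; if not, an external sort on the first four columns followed by one merge pass achieves the same effect:

        // Requires System, System.Collections.Generic, System.IO.
        static void Dedup(string inputPath, string outputPath)
        {
            var merged = new Dictionary<string, string[]>();
            foreach (string line in File.ReadLines(inputPath))    // streams the file; never holds 2 GB at once
            {
                string[] cols = line.Split('\t');
                string key = string.Join("\t", cols, 0, 4);       // the four columns that define "duplicate"
                if (merged.TryGetValue(key, out string[] first))
                    first[4] = first[4] + "+" + cols[4];          // aggregate column five, as in the example
                else
                    merged[key] = cols;
            }
            using (var writer = new StreamWriter(outputPath))
                foreach (string[] cols in merged.Values)
                    writer.WriteLine(string.Join("\t", cols));
        }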

  • Rewrite this function as DB query?

    - by aLk
    I'm cleaning up my code. Should I change the following function to a MySQL query? If so, what would be a nice MySQL query to achieve this functionality?

        public ArrayList getNewTitles(ArrayList candidateTitles, ArrayList existingTitles) {
            ArrayList newTitles = new ArrayList();
            Movie movie = new Movie();
            boolean isNew = true;
            for (int i = 0; i < candidateTitles.size(); i++) {
                for (int j = 0; j < existingTitles.size(); j++) {
                    movie = (Movie) existingTitles.get(j);
                    if (((String) candidateTitles.get(i)).equals(movie.getRawTitle())) {
                        isNew = false;
                    }
                }
                if (isNew == true) {
                    System.out.println("newTitle for crawling: " + (String) candidateTitles.get(i));
                    newTitles.add((String) candidateTitles.get(i));
                } else {
                    System.out.println("candidate binned: " + (String) candidateTitles.get(i));
                }
                isNew = true;
            }
            return newTitles;
        }
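
    In SQL this is the classic anti-join: select the candidate titles for which a LEFT JOIN to the existing titles finds no match (or equivalently a NOT IN / NOT EXISTS subquery). Even kept in application code, the nested loop can be replaced by one set lookup per candidate; a sketch in C# for illustration, assuming the raw titles have already been projected out of the Movie objects:

        // Requires System.Collections.Generic, System.Linq.
        static List<string> GetNewTitles(IEnumerable<string> candidateTitles, IEnumerable<string> existingRawTitles)
        {
            var existing = new HashSet<string>(existingRawTitles);             // built once, O(m)
            return candidateTitles.Where(t => !existing.Contains(t)).ToList(); // O(1) membership test per candidate
        }

    This turns the O(n*m) double loop into O(n+m).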

  • Import data from text file into SQL DB using CSLA

    - by New Developer
    I am trying to import data from a ~-delimited text file into SQL Server using CSLA. My text file has 92,000 records in it. Here are the issues I am having with the import:

    - When I create one BusinessListBase and add all my records to it, it gives me an "out of memory" exception. To fix this, I create and save a new BusinessBase object per record instead; this works fine and is much faster too, taking 15 minutes.
    - I then have to run my program again, check for any changes, and apply the updates, and this is where it takes too much time.

    Is there any alternative way to speed up my import?
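
    The change-detection pass is usually the slow part when each of the 92,000 records triggers its own database lookup. Loading the existing rows (or just their keys plus a comparable payload) once, then deciding insert/update/skip in memory, removes the per-record round trip. A rough sketch, not CSLA-specific; ImportRecord and the insert/update callbacks are hypothetical placeholders:

        // Requires System, System.Collections.Generic, System.Linq.
        class ImportRecord
        {
            public string Key;       // natural key parsed from the file
            public string Payload;   // the remaining fields, used to detect changes
        }

        static void SyncRecords(IEnumerable<ImportRecord> incoming, IEnumerable<ImportRecord> existingRows,
                                Action<ImportRecord> insert, Action<ImportRecord> update)
        {
            var existing = existingRows.ToDictionary(r => r.Key);   // one query up front
            foreach (var record in incoming)
            {
                if (!existing.TryGetValue(record.Key, out var current))
                    insert(record);                                  // new row
                else if (current.Payload != record.Payload)
                    update(record);                                  // changed row
                // unchanged rows never touch the database
            }
        }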

  • Update MySQL DB through form in PHP

    - by DAFFODIL
    I am getting this error:

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ' address='xxxxx', city='sssssssss', pincode='333333333', state='Assam', count' at line 1

    The code is here: http://dpaste.com/hold/181959/

    Thanks in advance.

  • Custom session state provider needed for DB storage?

    - by subt13
    I know this question is related to many others, but please bear with me. As an experiment, I am trying to store all information in database tables instead of in the ASP.NET session. In ASP.NET 4 one can create a custom provider for session state. So, again: should I implement a custom session-state provider, or should I just disable the session (in Web.config)? Thanks!

    From the comments, my question can be misunderstood; hopefully this tidbit will help clarify. I don't want to store the session in the database. I want to store information in the database that you would typically store in the session. One reason why: I don't want to carry a session around on every page, especially if that page doesn't care about 90 percent of the information in the session.
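
    If the session is simply disabled, the state it used to hold can live in a keyed table that each page reads only when it actually needs a value, which is exactly the "don't carry it everywhere" behaviour described above. A minimal sketch; the UserState(UserId, Name, Value) table and its column names are hypothetical:

        // Requires System.Data.SqlClient.
        static string GetUserState(string connectionString, int userId, string name)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT Value FROM UserState WHERE UserId = @UserId AND Name = @Name", connection))
            {
                command.Parameters.AddWithValue("@UserId", userId);
                command.Parameters.AddWithValue("@Name", name);
                connection.Open();
                return command.ExecuteScalar() as string;   // null if the key isn't set
            }
        }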

  • Delays in .NET app when connecting to Oracle DB using Oracle.DataAccess

    - by chris
    I have a .NET desktop app that connects to an Oracle database. At times there are very noticeable delays. I ran a trace on the code, and the time was always spent in DataReader.Read(). I turned on SQL tracing and found the following, which corresponds to the delays I'm seeing:

        (2128) [23-MAR-2010 13:00:07:310] nsprecv: reading from transport...
        (2128) [23-MAR-2010 13:00:07:310] nttrd: entry
        (2128) [23-MAR-2010 13:00:24:655] nttrd: socket 676 had bytes read=2047
        (2128) [23-MAR-2010 13:00:24:655] nttrd: exit
        (2128) [23-MAR-2010 13:00:24:655] nsprecv: 2047 bytes from transport

    There is about a 17-second pause in there (13:00:07 to 13:00:24). I'm pretty sure that there's not a problem in the code, but I'm not sure where to look next. Is there anyone out there with experience with Oracle trace who can explain what's going on?

  • Dynamically Specify Linked Server and DB Names in Stored Procedure

    - by hgulyan
    I have the same query in a stored procedure that needs to be executed on different servers and databases according to parameters. How can I arrange this without EXEC or sp_executesql? I'm using SQL Server 2008. Thank you.

    UPDATE: I've found some links:

    - http://www.eggheadcafe.com/software/aspnet/29397800/dynamically-specify-serve.aspx
    - http://www.sommarskog.se/dynamic_sql.html

    Is using a SYNONYM a possible solution? If yes, then how?

    UPDATE 2: I forgot to mention that all these servers are linked to the server where the stored procedure is stored.

  • Script to dynamically fix orphaned users after DB restore

    - by JJgates
    After performing a database restore, I want to run a dynamic script to fix orphaned users. My script below loops through all users that are listed by executing sp_change_users_login 'report' and applies "ALTER USER [username] WITH LOGIN = [username]" to each, to fix SID conflicts, instead of using static GO statements. However, I'm getting an "incorrect syntax error on line 15" and can't figure out why:

        DECLARE @Username varchar(100), @cmd varchar(100)

        DECLARE userLogin_cursor CURSOR FAST_FORWARD
        FOR
            SELECT UserName = name
            FROM sysusers
            WHERE issqluser = 1
              AND (sid IS NOT NULL AND sid <> 0x0)
              AND suser_sname(sid) IS NULL
            ORDER BY name
        FOR READ ONLY

        OPEN userLogin_cursor
        FETCH NEXT FROM userLogin_cursor INTO @Username
        WHILE @@fetch_status = 0
        BEGIN
            SET @cmd = 'ALTER USER ' + @Username + ' WITH LOGIN = ' + @Username
            EXECUTE(@cmd)
            FETCH NEXT FROM userLogin_cursor INTO @Username
        END
        CLOSE userLogin_cursor
        DEALLOCATE userLogin_cursor

  • MySQL : incrementing text id in DB

    - by BarsMonster
    I need to have text IDs in my application. For example, we have the acceptable charset azAZ09 and an allowed range of IDs [aaa]-[cZ9]. The first generated ID would be aaa, then aab, aac, aad, etc. How can one return an ID and increment the lower bound in a transactional fashion (given that there are hundreds of concurrent requests, and all should get a correct result)?

    To lower the load, I guess it's possible to define, say, 20 separate ranges and return an ID from a random range; this should reduce contention, but it's not clear how to do the single operation in the first place. Also, please note that the number of IDs in a range might exceed 2^32. Another idea is having ranges of 64-bit integers and converting the integer to a character ID in software code, where it could be done asynchronously. Any ideas?
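
    The last idea is usually the simplest: keep a plain 64-bit counter per range in the database, claim a value with an atomic UPDATE ... SET next = next + 1 inside the transaction, and do the base-62 conversion in code. A sketch of the conversion for the azAZ09 alphabet described above (a-z, then A-Z, then 0-9, so IDs come out in the stated aaa, aab, ... order):

        // Requires System, System.Text.
        static string Encode(ulong counter, int width)
        {
            const string alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
            var id = new StringBuilder();
            for (int i = 0; i < width; i++)
            {
                id.Insert(0, alphabet[(int)(counter % 62)]);   // least significant character cycles fastest
                counter /= 62;
            }
            if (counter != 0)
                throw new ArgumentOutOfRangeException(nameof(counter), "value does not fit in width");
            return id.ToString();
        }

        // Encode(0, 3) == "aaa", Encode(1, 3) == "aab", Encode(61, 3) == "aa9", Encode(62, 3) == "aba"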

  • SQL Express as a main DB

    - by JD
    Hi everyone. Does anyone out there use SQL Express 2008 R2 in a production environment? I am looking at hosting on a Windows VPS as the server for a client software product I have developed. Clients connect to the server, send their data to a service running on the server, and the service updates the database. I'm trying to keep running costs down, and whilst I have a license for SQL Developer, I obviously can't use that in a production environment. Would it be wise/possible to use SQL Express 2008, and if so, why/why not? Thanks, JD

  • Saving an IP address to DB

    - by Mark
    I want to save a user's IP address to my database, just in case any legal issues come up and we need to track down who performed what action. Since I highly doubt I will ever actually need to use this data (well, maybe for counting unique hits or something), do you think I can just dump REMOTE_ADDR into a field? If so, what should the length of that field be? 39 chars should fit an IPv6 address, no? I don't know if I'll ever get any of those, but just in case...
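
    For reference: 39 characters covers the full uncompressed IPv6 form (8 groups of 4 hex digits plus 7 colons), but the longest textual form is 45 characters, for an IPv4-mapped address written as 0000:0000:0000:0000:0000:ffff:255.255.255.255, so VARCHAR(45) is the usual safe choice. Parsing before storing both validates the value and normalizes its spelling; a small sketch:

        // Requires System.Net.
        static string NormalizeRemoteAddr(string remoteAddr)
        {
            return IPAddress.TryParse(remoteAddr, out IPAddress parsed)
                ? parsed.ToString()   // canonical textual form, fits in VARCHAR(45)
                : null;               // not a valid address; store nothing
        }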

  • Access DB Transaction on insert or updating

    - by Raju Gujarati
    I am implementing the database access layer of a Windows application using C#. The database (.accdb) is located with the project files. When two notebooks (clients) connect to the one Access database through switches, it throws a DBConcurrencyException. My goal is to check the timestamp of the SQL executed first and then run the SQL. Could you please provide some guidelines on how to achieve this? Below is my code:

        protected void btnTransaction_Click(object sender, EventArgs e)
        {
            string custID = txtID.Text;
            string CompName = txtCompany.Text;
            string contact = txtContact.Text;
            string city = txtCity.Text;
            string connString = ConfigurationManager.ConnectionStrings["CustomersDatabase"].ConnectionString;
            OleDbConnection connection = new OleDbConnection(connString);
            connection.Open();
            OleDbCommand command = new OleDbCommand();
            command.Connection = connection;
            OleDbTransaction transaction = connection.BeginTransaction();
            command.Transaction = transaction;
            try
            {
                command.CommandText = "INSERT INTO Customers(CustomerID, CompanyName, ContactName, City, Country) VALUES(@CustomerID, @CompanyName, @ContactName, @City, @Country)";
                command.CommandType = CommandType.Text;
                command.Parameters.AddWithValue("@CustomerID", custID);
                command.Parameters.AddWithValue("@CompanyName", CompName);
                command.Parameters.AddWithValue("@ContactName", contact);
                command.Parameters.AddWithValue("@City", city);
                command.ExecuteNonQuery();

                command.CommandText = "UPDATE Customers SET ContactName = @ContactName2 WHERE CustomerID = @CustomerID2";
                command.CommandType = CommandType.Text;
                command.Parameters.AddWithValue("@CustomerID2", custIDUpdate);
                command.Parameters.AddWithValue("@ContactName2", contactUpdate);
                command.ExecuteNonQuery();

                adapter.Fill(table);
                GridView1.DataSource = table;
                GridView1.DataBind();
                transaction.Commit();
                lblMessage.Text = "Transaction successfully completed";
            }
            catch (Exception ex)
            {
                transaction.Rollback();
                lblMessage.Text = "Transaction is not completed";
            }
            finally
            {
                connection.Close();
            }
        }
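
    The usual way to get "check before you write" behaviour against a shared Access file is optimistic concurrency: keep a version (or last-modified timestamp) column on the row, include the value you last read in the UPDATE's WHERE clause, and treat zero affected rows as a conflict to retry or report. A sketch against the Customers table above; the RowVersion column is a hypothetical addition:

        // Requires System.Data.OleDb. OleDb parameters are positional, so order must match the ? markers.
        static bool TryUpdateContact(OleDbConnection connection, string customerId, string contactName, int knownVersion)
        {
            using (var command = new OleDbCommand(
                "UPDATE Customers SET ContactName = ?, RowVersion = ? " +
                "WHERE CustomerID = ? AND RowVersion = ?", connection))
            {
                command.Parameters.AddWithValue("@p1", contactName);
                command.Parameters.AddWithValue("@p2", knownVersion + 1);
                command.Parameters.AddWithValue("@p3", customerId);
                command.Parameters.AddWithValue("@p4", knownVersion);
                return command.ExecuteNonQuery() == 1;   // 0 rows means another client changed the row first
            }
        }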

  • Write image file larger than 4096 bytes

    - by ntan
    Hi.

    EDIT: I am using ODBC and found that I cannot read more than 4096 bytes for a field. Any suggestions?

    I am reading an image from the DB:

        $image = $row["image-contents"];

    Now I try to write the file to disk:

        $image_name = "test.jpg";
        $file = fopen("images/" . $image_name, "w");
        fwrite($file, $image);
        fclose($file);

    The problem is that the created file is only 4096 bytes, and the image file is corrupt because $image is larger than 4096 bytes. I know that fwrite uses blocks for writing, but I don't know how to do it. Help please!

  • DB Design to store custom fields for a table

    - by Fazal
    Hi all, this question came up based on the responses I got to the question http://stackoverflow.com/questions/2785033/getting-wierd-issue-with-to-number-function-in-oracle. As everyone suggested, storing numeric values in VARCHAR2 columns is not a good practice (which I totally agree with), so I am wondering about a basic design choice our team has made and whether there are better ways to design it.

    Problem statement: we have many tables where we want to offer a certain number of custom fields. The number of required custom fields is known, but which kind of attribute is mapped to each column is up to the user. A hypothetical scenario: say you have a laptop which stores 50 attribute values for every laptop record, and each laptop's attributes are defined by the admin who creates the laptop. One user created a laptop product, say lap1, with attributes String, String, numeric, numeric, String; a second user created laptop lap2 with attributes String, numeric, String, String, numeric. Currently the data in our design gets persisted as follows:

        Laptop
        Id  Name  field1  field2  field3  field4  field5
        1   lap1  lappy   lappy   12      13      lappy
        2   lap2  lappy2  13      lappy2  lapp2   12

    Now if somebody looks up lap2 records, doing a comparison on field2, we need to apply TO_NUMBER:

        select * from laptop where name = 'lap2' and TO_NUMBER(field2) < 15

    TO_NUMBER fails in some cases when the query plan decides to apply TO_NUMBER before the other filter. My questions:

    - Is this a valid design?
    - What are the alternative ways to solve this problem? One of our teammates suggested creating tables on the fly for such cases. Is that a good idea?
    - How do popular ORM tools handle custom fields or flex fields?

    I hope I was able to make sense in the question. Sorry for such a long text.
