Search Results

Search found 28052 results on 1123 pages for 't sql tuesday'.


  • Access to Oracle Database with sqlapi C++

    - by Meloun
    Hi, I need to write some data to several databases, and I chose sqlapi.com. I have it working for MySQL and MSSQL, but now I have a problem with an Oracle database. I have installed the server and client on Ubuntu. It works in the browser, but SQLAPI says: libnnz10.so: cannot open shared object file: No such file or directory. DBMS API Library 'libclntsh.so' loading fails. This library is a part of the DBMS client installation, not SQLAPI++. Make sure the DBMS client is installed and this required library is available for dynamic loading on Linux/Unix: 1) the directories in the user's LD_LIBRARY_PATH environment variable, 2) the list of libraries cached in /etc/ld.so.cache, 3) /usr/lib, followed by /lib. Both of these files are deep inside /usr/lib. I have tried a lot of ways to tell Eclipse the path to this folder, but nothing works. Thanks for the help.

    Read the article

  • Update table with if statement in PL/SQL

    - by Matt
    I am trying to do something like this but am having trouble putting it into Oracle code: BEGIN IF ((SELECT complete_date FROM task_table WHERE task_id = 1) IS NULL) THEN UPDATE task_table SET complete_date = //somedate WHERE task_id = 1; ELSE UPDATE task_table SET complete_date = NULL; END IF; END; But this does not work. I also tried IF EXISTS(SELECT complete_date FROM task_table WHERE task_id = 1) with no luck.
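
    A minimal PL/SQL sketch of one way to write this, reading the date into a variable first (SYSDATE stands in for the unspecified "somedate", and task_id = 1 is assumed to exist, since SELECT INTO raises NO_DATA_FOUND otherwise):

        DECLARE
            v_complete_date task_table.complete_date%TYPE;
        BEGIN
            -- Read the current value once
            SELECT complete_date
            INTO   v_complete_date
            FROM   task_table
            WHERE  task_id = 1;

            IF v_complete_date IS NULL THEN
                UPDATE task_table SET complete_date = SYSDATE WHERE task_id = 1;
            ELSE
                UPDATE task_table SET complete_date = NULL WHERE task_id = 1;
            END IF;
        END;

    The same effect can also be had with a single UPDATE using a CASE expression on complete_date, which avoids the IF entirely.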

    Read the article

  • Read a text file line by line and insert/update values in a table

    - by I__
    I am exploring whether DoCmd.TransferText will do what I need, and it seems like it won't. I need to insert data if it does not exist and update it if it does exist. I am planning to read a text file line by line like this: Dim intFile As Integer Dim strLine As String intFile = FreeFile() Open myFile For Input As #intFile Line Input #intFile, strLine Close #intFile I guess each individual line will be a record. It will probably be comma separated, and some fields will have a " text qualifier because the field itself will contain commas. My question is: how would I read a comma-delimited text file that sometimes uses double quotes as text qualifiers into a table in Access?

    Read the article

  • What's wrong with this MSSQL query?

    - by ClixNCash
    What's wrong with this MSSQL query: Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click Dim SQLData As New System.Data.SqlClient.SqlConnection("Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Database.mdf;Integrated Security=True;User Instance=True") Dim cmdSelect As New System.Data.SqlClient.SqlCommand("SELECT COUNT(*) FROM Table1 WHERE Name ='" + TextBox1.Text + "'", SQLData) SQLData.Open() If cmdSelect.ExecuteScalar > 0 Then Label1.Text = "You have already voted this service" Return End If Dim con As New SqlConnection Dim cmd As New SqlCommand con.Open() cmd.Connection = con cmd.CommandText = "INSERT INTO Tabel1 (Name) VALUES('" & Trim(Label1.Text) & "')" cmd.ExecuteNonQuery() Label1.Text = "Thank You !" SQLData.Close() End Sub

    Read the article

  • Is it possible with a dynamic T-SQL query?

    - by eugeneK
    I have a very long select query which I need to filter based on some params. I'm trying to avoid having different stored procedures, or if statements inside a single stored procedure, by using partly dynamic T-SQL. I will skip the long select for example's sake: select a from b where c=@c or d=@d. @c and @d are filter params; only one can filter at a time, but both filters could also be disabled. 0 for either of these means the param is disabled, so I can build an nvarchar with the where clause in it. How do I integrate a dynamic query here so the 'where' can be added to the normal query? I cannot put the whole query into a big nvarchar because there are too many things in it which will require changes (i.e. when's, subqueries, joins).
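
    A minimal T-SQL sketch of one common pattern, assuming @c and @d are ints: the long select stays as static text and only the optional filters are appended, while the values are still passed as real parameters through sp_executesql rather than concatenated in:

        DECLARE @sql nvarchar(max) = N'select a from b where 1 = 1';

        IF @c <> 0
            SET @sql = @sql + N' and c = @c';
        IF @d <> 0
            SET @sql = @sql + N' and d = @d';

        -- Parameters that were not appended are simply ignored by the generated statement
        EXEC sp_executesql @sql, N'@c int, @d int', @c = @c, @d = @d;

    With only two optional filters, a fully static predicate such as (@c = 0 OR c = @c) AND (@d = 0 OR d = @d), possibly with OPTION (RECOMPILE), is another way to avoid dynamic SQL altogether.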

    Read the article

  • Manual Linq to SQL entity framework mapping

    - by kprobst
    I've been playing with the O/R designer in VS and I was wondering if someone could shed some light on this. I'm used to OR mappers that are largely manual (homegrown and, e.g., NHibernate). I don't mind coding the entity classes myself, since they don't change all that often to begin with, and I have this irrational fear of designers and auto-generated code as it is. I have noticed that the generated entity classes contain a lot of boilerplate extensibility methods, e.g. On[Property]Changed() and so on, where [Property] is a mapped member of the class. These are placed in the setters of the property accessors. I assume it's OK if I don't include these when I do my hand coding, correct? They would be nice if I needed some sort of interception pattern, but that's certainly not the case. I guess I just need to know if any of those methods are required by the entity framework to keep track of changes to the mapped types in order for things to work when updating the database. Thanks!

    Read the article

  • Multiple user database design

    - by dieguitoweb
    I have to develop a basic social network for an academic purpose, but I need some tips on user management. The users are subdivided into 3 groups with different privileges: admins, analysts and standard users. For every user the following information should be stored in the database: name, lastname, e-mail, age, password. I'm not quite sure how I should design the database between these two solutions: 1) one table called 'users' with a 'role' attribute that explains what a user can and can't do, with the permissions managed via PHP; 2) every application user is a database user created with 'CREATE ROLE' (it's a Postgres database) and has permissions on some tables granted with the 'GRANT' statement. You should take into account that the project is for a database exam. Thanks.
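
    A minimal Postgres sketch of solution 1, with a single users table and a constrained role column; the column sizes and role names are illustrative, and the password column is meant to hold a hash rather than the plain text:

        CREATE TABLE users (
            id        SERIAL PRIMARY KEY,
            name      VARCHAR(100) NOT NULL,
            lastname  VARCHAR(100) NOT NULL,
            email     VARCHAR(255) NOT NULL UNIQUE,
            age       INTEGER,
            password  VARCHAR(255) NOT NULL,  -- store a password hash, not the raw password
            role      VARCHAR(20)  NOT NULL
                      CHECK (role IN ('admin', 'analyst', 'standard'))
        );

    The PHP layer then checks the role column before allowing an action; solution 2 instead pushes that check into the database itself with CREATE ROLE and GRANT.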

    Read the article

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query: UPDATE Indexer.Pages SET LastError=NULL where LastError is not null; Right now, this query takes about 93 minutes to complete. I'd like to find ways to make this a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 gigs of data in it; however, the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work since I actually don't have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back, which would probably take all evening. Ideas? I'm using Postgres 9.0.0. UPDATE: Here's the schema: CREATE TABLE indexer.pages ( id uuid NOT NULL, url character varying(1024) NOT NULL, firstcrawled timestamp with time zone NOT NULL, lastcrawled timestamp with time zone NOT NULL, recipeid uuid, html text NOT NULL, lasterror character varying(1024), missingings smallint, CONSTRAINT pages_pkey PRIMARY KEY (id ), CONSTRAINT indexer_pages_uniqueurl UNIQUE (url ) ); I also have two indexes: CREATE INDEX idx_indexer_pages_missingings ON indexer.pages USING btree (missingings ) WHERE missingings > 0; and CREATE INDEX idx_indexer_pages_null ON indexer.pages USING btree (recipeid ) WHERE NULL::boolean; There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
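
    One hedged idea to try before restructuring the table: because Postgres writes a whole new row version for every updated row, even a one-column change on a wide row is expensive, so running the update in batches keeps each transaction smaller and lets vacuum reclaim space between rounds (the 10,000 batch size is arbitrary):

        -- Repeat until it reports 0 rows updated
        UPDATE indexer.pages
        SET    lasterror = NULL
        WHERE  id IN (
            SELECT id
            FROM   indexer.pages
            WHERE  lasterror IS NOT NULL
            LIMIT  10000
        );

        VACUUM indexer.pages;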

    Read the article

  • Help me choose between XML and SQLite on Android

    - by Ngetha
    I have an Android app that periodically, say once a week, downloads content from a server in XML. The content is used by the app; different Activities use different parts of it. My question is a design one: should I save the data in SQLite or just keep it as an XML file, and which one would be faster to read? The app can only use one content piece at a time, which means subsequent XML content downloads replace the old one.

    Read the article

  • How do I get every nth row in a table, or how do I break up a subset of a table into sets or rows of

    - by Jherico
    I have a table of heterogeneous pieces of data identified by a primary key (ID) and a type identifier (TYPE_ID). I would like to be able to perform a query that returns me a set of ranges for a given type broken into even page sizes. For instance, if there are 10,000 records of type '1' and I specify a page size of 1000, I want 10 pairs of numbers back representing values I can use in a BETWEEN clause in subsequent queries to query the DB 1000 records at a time. My initial attempt was something like this: select id, rownum from CONTENT_TABLE where type_id = ? and mod(rownum, ?) = 0. But this doesn't work.
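
    In Oracle, rownum is incremented only as rows are actually accepted by the WHERE clause, so a predicate that rejects rownum = 1 ends up rejecting every row, which is why the mod test above returns nothing. A common workaround is to number the rows in a subquery first; a minimal sketch (the bind names and the ORDER BY column are illustrative):

        SELECT id, rn
        FROM (
            SELECT id,
                   ROW_NUMBER() OVER (ORDER BY id) AS rn
            FROM   content_table
            WHERE  type_id = :type_id
        )
        WHERE MOD(rn, :page_size) = 0;

    Each id returned is the upper bound of one page; pairing it with the previous row's id gives the BETWEEN ranges for the follow-up queries.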

    Read the article

  • How to automatically check out a database file in a source-controlled web application?

    - by TheRHCP
    Hello, I am working on an ASP.NET web application. We are a small team (4 students) and we do not have access to a dedicated server to host the database instance, so for this web application we decided just to put the database file in the App_Data folder. The problem is that our project is source controlled on TFS, so every time we open the solution and try to launch the web application, we get an exception saying that the database is read-only. That is logical because the database file is not automatically checked out. Is there a workaround to avoid a manual check-out of the database file every time we open the solution? Thanks.

    Read the article

  • Which MySQL query is faster?

    - by Camran
    I have a classified_id variable which matches one document in a MySQL table. I am currently fetching the information about that one record like this: SELECT * FROM table WHERE table.classified_id = $classified_id. I wonder if there is a faster approach, for example like this: SELECT 1 FROM table WHERE table.classified_id = $classified_id. Won't the last one only select 1 record, which is exactly what I need, so that it doesn't have to scan the entire table but instead stops searching for records after 1 is found? Or am I dreaming this? Thanks.
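
    SELECT 1 only changes which columns come back, not how many rows are examined; what actually stops the search early is an index on the lookup column (plus LIMIT 1 if the column is not unique). A minimal sketch, with the table name and fetched columns invented for illustration:

        -- One-time: let MySQL seek straight to the row instead of scanning the table
        ALTER TABLE classifieds ADD INDEX idx_classified_id (classified_id);

        -- Fetch only the columns actually needed, and stop after the first match
        SELECT title, price
        FROM   classifieds
        WHERE  classified_id = 123
        LIMIT  1;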

    Read the article

  • Getting the last element of a Postgres array, declaratively

    - by Wojciech Kaczmarek
    How do I obtain the last element of an array in Postgres? I need to do it declaratively, as I want to use it as an ORDER BY criterion. I wouldn't want to create a special PL/pgSQL function for it; the fewer changes to the database the better in this case. In fact, what I want to do is sort by the last word of a specific column containing multiple words. Changing the model is not an option here. In other words, I want to push Ruby's sort_by {|x| x.split[-1]} down to the database level. I can split a value into an array of words with the Postgres string_to_array or regexp_split_to_array functions, but then how do I get its last element?
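
    A minimal sketch of doing this directly in ORDER BY by indexing the array at its upper bound; the table and column names are made up for the example:

        SELECT *
        FROM   items
        ORDER  BY (string_to_array(title, ' '))[array_upper(string_to_array(title, ' '), 1)];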

    Read the article

  • Query crashes MS Access

    - by user284651
    THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS Access and merge them into one. The output must be in the form of a TAB or CSV file, which will then be imported into Maximizer. THE PROBLEM: It seems Access is unable to perform a query this complex, as it crashes any time I run the query. ALTERNATIVES: I have thought about a few alternatives and would like to do the least time-consuming one, while also taking advantage of any opportunities to learn something new: export each table to CSV, import into SQLite, and then write a query there to do what Access fails to do (merge 64 tables); export each table to CSV and write a script that merges the CSVs into a single CSV; or somehow connect to the MS Access DB (via an API) and write a script to pull data from each table and merge it into a CSV file. QUESTION: What do you recommend?
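
    If the SQLite route is chosen, the merge itself is plain SQL, assuming the 64 tables share the same column layout (the table names below are placeholders):

        CREATE TABLE merged AS
            SELECT * FROM table_01
            UNION ALL
            SELECT * FROM table_02
            UNION ALL
            SELECT * FROM table_03;
            -- ...and so on for the remaining tables

    The sqlite3 shell's .mode csv together with .import and .output covers the CSV import/export on either side of the merge.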

    Read the article

  • My output parameters are always null when I use BeginExecuteNonQuery

    - by CharlesO
    I have a stored procedure that returns a varchar(160) as an output parameter. Everything works fine when I use ExecuteNonQuery; I always get back the expected value. However, once I switch to BeginExecuteNonQuery, I get a null value for the output. I am using connString + "Asynchronous Processing=true;" in both cases. Sadly, BeginExecuteNonQuery is about 1.5 times faster in my case... but I really need the output parameter. Thanks!

    Read the article

  • Complicated Order By Clause?

    - by Todd
    Hi. I need to do what to me is an advanced sort. I have these two tables:

        Table: Fruit
        fruitid | received | basketid
        1       | 20100310 | 2
        2       | 20091205 | 3
        3       | 20100220 | 1
        4       | 20091129 | 2

        Table: Basket
        id | name
        1  | Big Discounts
        2  | Premium Fruit
        3  | Standard Produce

    I'm not even sure I can plainly state how I want to sort (which is probably a big part of the reason I can't seem to write code to do it, lol). I do a join query and need to sort so everything is organized by basketid. The basketid that has the oldest fruit.received date comes first, then the other rows with the same basketid by date asc, then the basketid with the next earliest fruit.received date followed by the other rows with the same basketid, and so on. So the output would look like this:

        Fruitid | Received | Basket
        4       | 20091129 | Premium Fruit
        1       | 20100310 | Premium Fruit
        2       | 20091205 | Standard Produce
        3       | 20100220 | Big Discounts

    Any ideas how to accomplish this in a single execution?
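
    A minimal sketch of one way to get this ordering in a single query: sort first by each basket's oldest received date via a correlated subquery, then by date within the basket:

        SELECT f.fruitid, f.received, b.name AS basket
        FROM   Fruit  f
        JOIN   Basket b ON b.id = f.basketid
        ORDER  BY (SELECT MIN(f2.received)
                   FROM   Fruit f2
                   WHERE  f2.basketid = f.basketid),
                  f.basketid,
                  f.received;

    The extra f.basketid term only matters if two baskets happen to share the same oldest date.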

    Read the article

  • Best .NET Solution for Frequently Changed Database

    - by sestocker
    I am currently architecting a small CRUD application. Their database is a huge mess and will be changing frequently over the course of the next 6 months to a year. What would you recommend for my data layer: 1) an ORM (if so, which one?), 2) Linq2Sql, 3) stored procedures, or 4) parameterized queries? I really need a solution that will be dynamic enough (both fast and easy) so that I can replace tables and add/delete columns frequently. Note: I do not have much experience with ORMs (only a little SubSonic) and generally tend to use stored procedures, so maybe that would be the way to go. I would love to learn Linq2Sql or NHibernate if either would allow for the situation I've described above.

    Read the article

  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and am after some database design advice from anyone with a bit of time to answer. I am inserting ~2 million rows each day into one table; these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert above, I combine the data from rows which belong together and create another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to it (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing. All of the above is working perfectly. However, my company now wishes to run some stats against this data, and these tables are getting too large to provide the results in what would be deemed a reasonable time, even with the appropriate indexes set. So after all the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcache? I'm already using MySQL's cache, but because I have limited control over how long the data is stored for, it's not working ideally.

    The main advantages I can see of using something like memcache: no blocking on my correlated table after the query has been cached; greater flexibility in sharing the collected data between the backend collector and frontend processor (i.e. custom reports could be written in the backend and their results stored in the cache under a key which then gets shared with anyone who wants to see that report's data); redundancy and scalability if we start sharing this data with a large number of customers. The main disadvantage of something like memcache: data is not persistent if the machine is rebooted or the cache is flushed.

    The main advantages of using MySQL: persistent data, and fewer code changes (although adding something like memcache is trivial anyway). The main disadvantages of using MySQL: I have to define table templates every time I want to store a new set of grouped data, I have to write a program which loops through the correlated data and fills these new tables, and it will potentially still grow slower as the data continues to be filled.

    Apologies for quite a long question. It's helped me to write down these thoughts here anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks. Alan
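
    For the "smaller tables" option, a minimal MySQL sketch of a summary table refreshed by a periodic job; every column name here is invented for illustration, since the correlated table's schema isn't described in the question:

        CREATE TABLE daily_stats (
            stat_date   DATE    NOT NULL,
            customer_id INT     NOT NULL,
            event_count INT     NOT NULL,
            total_secs  BIGINT  NOT NULL,
            PRIMARY KEY (stat_date, customer_id)
        );

        -- Re-run for the last few days that may still change; REPLACE keeps it idempotent
        REPLACE INTO daily_stats (stat_date, customer_id, event_count, total_secs)
        SELECT DATE(started_at), customer_id, COUNT(*), SUM(duration_secs)
        FROM   correlated
        WHERE  started_at >= CURDATE() - INTERVAL 3 DAY
        GROUP  BY DATE(started_at), customer_id;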

    Read the article

  • Simple aggregating query very slow in PostgreSQL, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as CREATE TABLE files ( id SERIAL PRIMARY KEY, name VARCHAR(255), filetype VARCHAR(255), ... ); and another table for holding file properties, such as CREATE TABLE properties ( id SERIAL PRIMARY KEY, file_id INTEGER CONSTRAINT fk_files REFERENCES files(id), size INTEGER, ... -- other property fields ); The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but it still seems too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files as f, properties as pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow / how to make it faster?
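
    One thing that stands out in the plan is that the sequential scan on files takes roughly 62 of the 74 seconds for only ~800k rows, and the planner also expects 1.14M rows where 805k come back, which can point to table bloat or stale statistics rather than the join itself. A hedged first step, assuming an exclusive lock on files is acceptable for the duration:

        -- Rewrite the table to reclaim dead space, then refresh planner statistics
        VACUUM FULL files;
        ANALYZE files;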

    Read the article

  • Select count() max() Date HELP!!! mysql oracle

    - by DAVID
    Hi guys, I have a table with shift history along with employee ids. I'm using this code to retrieve a list of employees and their total shifts by specifying the range to count from: SELECT ope_id, count(ope_id) FROM operator_shift WHERE ope_shift_date >= to_date('01-MAR-10','dd-mon-yy') and ope_shift_date <= to_date('31-MAR-10','dd-mon-yy') GROUP BY OPE_ID, which gives:

        OPE_ID  COUNT(OPE_ID)
        1       14
        2       7
        3       6
        4       6
        5       2
        6       5
        7       2
        8       1
        9       2
        10      4

        10 rows selected.

    Now, how do I choose the employee with the highest number of shifts within the specified date range? This is really important.
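
    A minimal Oracle sketch: order the grouped counts descending inside a subquery and keep only the first row with ROWNUM (if ties for the top count matter, RANK() over the count would be needed instead):

        SELECT ope_id, shift_count
        FROM (
            SELECT ope_id, COUNT(ope_id) AS shift_count
            FROM   operator_shift
            WHERE  ope_shift_date >= TO_DATE('01-MAR-10', 'dd-mon-yy')
            AND    ope_shift_date <= TO_DATE('31-MAR-10', 'dd-mon-yy')
            GROUP  BY ope_id
            ORDER  BY COUNT(ope_id) DESC
        )
        WHERE ROWNUM = 1;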

    Read the article
