Search Results

Search found 31293 results on 1252 pages for 'database agnostic'.

  • What tool for managing Oracle DB do you suggest?

    - by Artic
    What tool for managing an Oracle DB do you suggest? I need to execute scripts, manage data in tables, and develop scripts and packages. I've tried SQL Developer and don't really like it; I want more features for development (debugging, code assist, integrated help, and so on).

  • Connect to Oracle 11g using VS2010.net Drivers/Clients

    - by StealthRT
    Hey all, I am trying to redistribute my app that uses Oracle 11g (Imports Oracle.DataAccess.Client). The problem I am having is that it will not run on a machine that doesn't have the drivers it's looking for. When I install ODAC 11.2 Release 3 (11.2.0.2.1) with Oracle Developer Tools for Visual Studio on the test VM it works just fine, but that's a 230+ MB file to download and install! Not to mention that if the user already has Oracle 10g/11g on their machine, installing that setup may mess up their current connections, etc. Is there another setup package I can install that only has the Oracle Data Provider for .NET 2.0 (11.2.0.2.0), or whatever it needs from that ODAC 11.2 Release 3 file? Any help on how to fix this problem would be great! :) Thanks, David

  • what does "out of range" mean?

    - by user329820
    Hi, I have checked these statements with MySQL: no error occurs, and the output is 0 rows. But my friend checked them and got an error on the SELECT because it is "out of range"! Is he correct? Thanks. CREATE TABLE T1(A INTEGER NULL); SELECT * FROM T1;
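
    For reference, a sketch of the two statements on their own. On an empty table the SELECT simply returns an empty result set, which is not an error; MySQL's "out of range" errors (e.g. error 1264) concern values being written to a column, not reads:

        -- Create the table and select from it while it is still empty:
        CREATE TABLE T1 (A INTEGER NULL);
        SELECT * FROM T1;  -- 0 rows returned; no "out of range" error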

  • PHP: Which DB/DB Engine supports search well?

    - by KeyStroke
    Hi, I'm starting a site which relies heavily on search. While it's probably going to search basic metadata in the beginning, it might grow into something bigger in the future. So which DB/DB engine is best, in your opinion, when it comes to search performance and future scalability? I appreciate your help.

  • Good basic tutorial for installing and using SqlServer

    - by ripper234
    I know MySQL, and I'd like to learn SQL Server. I'm currently stuck on the most basic of basics: how to install and configure SQL Server, and how to connect to it. I installed SQL Server through the Web Platform Installer and have Visual Studio 2008 installed. Still, I can't work out how to connect to my server: I see that the SQL Server service itself (SQLEXPRESS) is running in both services.msc and SQL Server Configuration Manager, and I try to connect via Management Studio, but I don't understand what to do. Where do I begin?
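
    As a quick sanity check outside Management Studio, here is a sketch of connecting to a local SQL Server Express instance from a command prompt; the instance name .\SQLEXPRESS is the usual default for Express installs, but it is an assumption here:

        REM Connect with Windows authentication (-E) and run a single query:
        sqlcmd -S .\SQLEXPRESS -E -Q "SELECT @@VERSION"

    If this works, the same server name with Windows Authentication should also work in Management Studio's connect dialog.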

  • retrieving the record which has only one value

    - by ashwani66476
    Hello all, please suggest a query that retrieves only those records whose value appears in a single row of the table. For example, table1:

        name  age
        aaa   20
        bbb   10
        ccc   20
        ddd   30

    If I run SELECT DISTINCT age FROM table1, the result will be 20, 10, 30. But I need a query that gives a result like:

        name  age
        bbb   10
        ddd   30

    Thanks.
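
    One common way to express this (a sketch, not from the original post) is to group on the duplicated column and keep only the groups of size one:

        -- Keep only the rows whose age appears exactly once:
        SELECT name, age
        FROM table1
        WHERE age IN (SELECT age
                      FROM table1
                      GROUP BY age
                      HAVING COUNT(*) = 1);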

  • What type of data store should I use for my iOS app?

    - by mwiederrecht
    I am pretty new to iOS and to using servers, so forgive me. I am building an iOS app for research. I need to monitor things that the user does and then push the data up to a server for analysis (yes, with user and IRB permission). On the client side I need to keep quite a bit of data that won't really change except when pulling an updated version from the server, plus a minimal amount of user-specific data. Most of the data I collect needs to be pushed to a server for analysis and can then be deleted from the client side. I am struggling to figure out what kind of data store I should use, especially since I am not quite sure yet how the pushing and pulling from the server works. Does it make sense to use Core Data? XML? SQLite? I like the Core Data idea, but I am not sure what kind of problems I will run into when I need to move large amounts of data between it and the server. I imagine I might need to send data in a different form than it is stored in on either end, so what kind of conversion overhead am I likely to run into? Is there a good format to save stuff in that would work well on both ends AND for sending the data? As you can probably tell, I could use some advice. Thanks!

  • Can I set ignore_dup_key on for a primary key?

    - by Mr. Flibble
    I have a two-column primary key on a table. I have attempted to alter it to set IGNORE_DUP_KEY to ON with this command: ALTER INDEX PK_mypk ON MyTable SET (IGNORE_DUP_KEY = ON); But I get this error: Cannot use index option ignore_dup_key to alter index 'PK_mypk' as it enforces a primary or unique constraint. How else can I set IGNORE_DUP_KEY to ON?
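
    One workaround is to drop the constraint and re-create it with the option, which SQL Server does accept at creation time. A sketch (the key column names are assumptions, since the question doesn't list them):

        -- IGNORE_DUP_KEY can't be ALTERed onto a constraint-backed index,
        -- but it can be specified when the constraint is (re)created:
        ALTER TABLE MyTable DROP CONSTRAINT PK_mypk;
        ALTER TABLE MyTable ADD CONSTRAINT PK_mypk
            PRIMARY KEY (Col1, Col2) WITH (IGNORE_DUP_KEY = ON);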

  • When/Why to use Cascading in SQL Server?

    - by Joel Coehoorn
    When setting up foreign keys in SQL Server, under what circumstances should you have them cascade on delete or update, and what is the reasoning behind it? This probably applies to other databases as well. I'm looking most of all for concrete examples of each scenario, preferably from someone who has used them successfully.
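
    For concreteness, a minimal sketch (the table names are illustrative) of the classic parent/child case where ON DELETE CASCADE tends to be considered: child rows that are meaningless without their parent:

        CREATE TABLE Orders (
            OrderId INT PRIMARY KEY
        );
        CREATE TABLE OrderLines (
            OrderLineId INT PRIMARY KEY,
            OrderId INT NOT NULL
                REFERENCES Orders (OrderId) ON DELETE CASCADE
        );
        -- Deleting an order now removes its lines in the same statement:
        DELETE FROM Orders WHERE OrderId = 42;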

  • How to change data structure in mysql using mysqldump without deleting files

    - by Don Quixote
    Essentially what I'm trying to do is sync a production server with a sandbox server, but only the table structures and stored procedures. The procedures aren't a problem since they can be overridden, but the tables are. I want to sync and alter their structures on the production server using mysqldump (or any other way you can propose) without altering any existing data. If it helps, I only want to add more columns, not remove existing ones. Also, I am using mysqlyog. Is there any way to do this?
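
    For the structure-only dump itself, a sketch (the database name is assumed). Note that by default the output still contains DROP TABLE / CREATE TABLE statements, so it cannot be applied to the production server as-is without losing data; a schema comparison that generates ALTER TABLE ... ADD COLUMN statements is still needed on top of it:

        # Dump table structures and stored routines, but no row data:
        mysqldump --no-data --routines sandbox_db > schema.sql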

  • Storing user settings in table - how?

    - by Mdillion
    I have about 200 settings per user; these include notification settings and tracking settings for user activities on objects. The problem is how to store them in the DB. Should each setting be a row or a column? If a column, the table will have 200 columns. If a row, about 3 columns, but 200 rows per user x even 10 million users = not good. So how else can I store all these settings? NOTE: these settings are a mix of text entries and FK lookups to other tables. Thanks.
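
    A variant of the row-per-setting layout worth sketching (the names are illustrative, not from the question): store only the settings that differ from their defaults, and keep the defaults in a settings catalog, so the row count stays far below 200 x users:

        CREATE TABLE user_setting (
            user_id    INT NOT NULL,
            setting_id INT NOT NULL,     -- FK to a catalog of the ~200 settings
            value      VARCHAR(255),     -- text entry, or an FK id stored as text
            PRIMARY KEY (user_id, setting_id)
        );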

  • using a dummy row with NOT NULL to solve DEFAULT NULL

    - by Tony38
    I know having DEFAULT NULLs is not good practice, but I have many optional lookup values that are FKs in the system, so here is how I am working around it: I use NOT NULL for every FK / lookup value field, and I make the first row in every lookup table (PK id = 1) a dummy row with just "none" in all the columns. This way I can use NOT NULL in my schema and, where needed, reference the "none" row for values that would otherwise be NULL. Is this a good design, or are there other workarounds?
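
    A sketch of the sentinel-row pattern being described (all names are illustrative):

        CREATE TABLE color (
            id   INT PRIMARY KEY,
            name VARCHAR(50) NOT NULL
        );
        INSERT INTO color (id, name) VALUES (1, 'none');  -- the dummy row

        CREATE TABLE product (
            id       INT PRIMARY KEY,
            color_id INT NOT NULL DEFAULT 1   -- points at 'none' instead of NULL
                REFERENCES color (id)
        );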

  • get n records at a time from a temporary table

    - by Claudiu
    I have a temporary table with about 1 million entries. The temporary table stores the result of a larger query. I want to process these records 1000 at a time, for example. What's the best way to set up queries such that I get the first 1000 rows, then the next 1000, and so on? They are not inherently ordered, but the temporary table has just one column with an ID, so I can order it if necessary. I was thinking of adding an extra column to the temporary table to number all the rows, something like: CREATE TEMP TABLE tmptmp AS SELECT ##autonumber somehow##, id FROM .... --complicated query Then I can do: SELECT * FROM tmptmp WHERE autonumber >= 0 AND autonumber < 1000 and so on. How would I actually accomplish this? Or is there a better way? I'm using Python and PostgreSQL.
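
    Assuming PostgreSQL 8.4 or later, the row_number() window function can fill in the autonumber (a sketch; the larger query stays elided, as in the question):

        CREATE TEMP TABLE tmptmp AS
            SELECT row_number() OVER (ORDER BY id) - 1 AS autonumber, id
            FROM ...;  -- the complicated query goes here

        -- First batch of 1000; the next is 1000..1999, and so on:
        SELECT * FROM tmptmp
        WHERE autonumber >= 0 AND autonumber < 1000;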

  • What's the best way to test MSSQL connection programmatically?

    - by backslash17
    I need to develop a single routine that will be fired every 5 minutes to check whether a list of MSSQL servers (10 to 12) is up and running. I could try a simple query against each server, but that means I'd have to create a table, view, or stored procedure on every server, and even if I use an already-made SP I'd need a registered user on each server too. The servers are not in the same physical location, so having those requirements would make this a complex task. Is there a way to simply "ping" an MSSQL server from C#? Thanks in advance!
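
    One note: a liveness probe doesn't actually require any objects on the server. If a connection (ideally with a short "Connect Timeout" in the connection string) opens successfully, a constant query is enough, though it does still need a login, as mentioned above. A sketch of such a probe query:

        -- Touches no tables, views, or stored procedures:
        SELECT 1;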

  • Logic: Best way to sample & count bytes of a 100MB+ file

    - by Jami
    Let's say I have this 170 MB file (roughly 180 million bytes). What I need to do is create a table that lists: all 4096-byte combinations found [column 'bytes'], and the number of times each combination appeared [column 'occurrences']. Assume two things: I can save data very fast, but I can update my saved data very slowly. How should I sample the file and save the needed information? Here are some suggestions that are (extremely) slow: (1) Go through each 4096-byte combination in the file and save each one, but search the table first for existing combinations and update their values; this is unbelievably slow. (2) Go through each 4096-byte combination in the file and save until 1 million rows of data are in a temporary table, then go through that table and fix the entries (combining repeating byte combinations) and copy them to the big table; repeat for the next 1 million rows. This is faster by a bit, but still unbelievably slow. This is kind of like taking statistics of the file. NOTE: I know that sampling the file can generate tons of data (around 22 GB from experience), and I know that any solution posted would take a while to finish. I need the most efficient saving process.
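
    A sketch of suggestion (2) reworked to lean on the database's aggregation instead of row-by-row fixes: bulk-insert the raw combinations into a staging table with no uniqueness checks (the fast save), then let a single GROUP BY produce the counts (table and column names are assumptions):

        -- Aggregate once instead of updating counts one row at a time:
        INSERT INTO combo_counts (bytes, occurrences)
        SELECT bytes, COUNT(*)
        FROM staging
        GROUP BY bytes;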

  • DB optimization to use it as a queue

    - by anony
    We have a table called worktable which has some columns (key (primary key), ptime, aname, status, content). We have something called a producer, which inserts rows into this table, and a consumer, which does an ORDER BY on the key column and fetches the first row whose status is 'pending'. The consumer does some processing on this row: (1) it updates status to 'processing', (2) it does some processing using content, (3) it deletes the row. We are facing contention issues when we try to run multiple consumers (probably due to the ORDER BY, which does a full table scan). Using Advanced Queues would be our next step, but before we go there we want to check the maximum throughput we can achieve with multiple consumers and a producer on the table. What optimizations can we do to get the best numbers possible? Can we do in-memory processing, where a consumer fetches 1000 rows at a time, processes them, and deletes them? Would that improve things? What are the other possibilities? Partitioning the table? Parallelization? Index-organized tables?
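
    Two of the usual first steps, sketched in Oracle syntax using the question's column names (whether SKIP LOCKED is available in your version is an assumption to verify): an index that lets consumers find 'pending' rows without a full scan, and SKIP LOCKED so concurrent consumers don't block on the same row:

        CREATE INDEX ix_worktable_pending ON worktable (status, key);

        -- Each consumer locks a different pending row instead of queueing up:
        SELECT key, content
        FROM worktable
        WHERE status = 'pending'
        ORDER BY key
        FOR UPDATE SKIP LOCKED;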

  • Many tables for many users?

    - by Seagull
    I am new to web programming, so excuse the ignorance... ;-) I have a web application that in many ways can be considered a multi-tenant environment: each user of the application gets their own 'custom' environment, with absolutely no interaction between users. So far I have built the web application as a 'single user' environment; in other words, I haven't actually done anything to support multiple users, only worked on the functionality I want from the app. Here is my problem: what's the best way to build a multi-user environment? (1) All users point to the same 'core' backend; in other words, I build the logic to separate users via appropriate SQL queries (e.g. select * from table where user='123' and attribute='456'). (2) Each user points to a unique tablespace, which is built separately as they join the system; in this case I would simply generate ALL the relevant SQL tables per user, with some sort of suffix for the user (e.g. now a query would look like select * from table_<user> where attribute='456'). In short, it's the difference between "select * from table where USER=" and "select * from table_USER".
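
    For scale, a sketch of what option (1) looks like in practice (the names are illustrative): one shared schema where every table carries the tenant key, with a composite index so per-user queries stay fast:

        CREATE TABLE item (
            user_id   INT NOT NULL,
            attribute INT NOT NULL,
            value     VARCHAR(255),
            PRIMARY KEY (user_id, attribute)
        );

        -- The per-tenant filter is served by the primary key's index:
        SELECT * FROM item WHERE user_id = 123 AND attribute = 456;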
