Search Results

Search found 4815 results on 193 pages for 'parameterized queries'.

  • ODBC linked server in SQL 2005 doesn't work from a remote box

    - by mhj96813
    I have a dev workstation with SQL 2005 installed, and on it I created a linked server to an ODBC connection to a Clarion database. I can run SELECT statements against it inside SQL Management Studio. When I take a second workstation, connect to the SQL instance on the first box using SQL Management Studio, and then try the exact same query, I get:

        OLE DB provider "MSDASQL" for linked server "liveclarion" returned message
        "[SoftVelocity Inc.][TopSpeed ODBC Driver][ISAM] ISAM Table Not Found".

    Any thoughts? A second SQL Server shows the same behavior: no remote Management Studio connection succeeds in querying my linked ODBC Clarion DB. All of this is done with Windows authentication and the same AD user.

  • How does MySQL define DISTINCT()?

    - by goran
    AFAIK, SQL defines two uses of the DISTINCT keyword: SELECT DISTINCT field ... and SELECT COUNT(DISTINCT field) .... However, in one of the web applications that I administer, I've noticed performance issues on queries like:

        SELECT DISTINCT(field1), field2, field3 ...

    DISTINCT() on a single column makes no sense, and I am almost sure it is interpreted as:

        SELECT DISTINCT field1, field2, field3 ...

    but how can I prove this? I've searched the MySQL site for a reference on this particular syntax but could not find any. Does anyone have a link to the definition of DISTINCT() in MySQL, or know of another authoritative source on this? Best
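
    A quick way to test this, as a minimal sketch (t and its columns are placeholders): DISTINCT is a set quantifier that applies to the whole select list, not a function, so the parentheses merely group the first expression and the two forms should behave identically.

        -- DISTINCT is not a function; the parentheses only wrap field1.
        -- Both statements should return the same rows:
        SELECT DISTINCT(field1), field2, field3 FROM t;
        SELECT DISTINCT field1, field2, field3 FROM t;

        -- Comparing the plans should confirm they are treated the same:
        EXPLAIN SELECT DISTINCT(field1), field2, field3 FROM t;
        EXPLAIN SELECT DISTINCT field1, field2, field3 FROM t;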

  • I am not able to drop a foreign key in MySQL (error 150). Please help

    - by Shantanu Gupta
    I am trying to create a foreign key in my table, but when I execute my query it shows me error 150:

        Error Code : 1005
        Can't create table '.\vts#sql-6ec_1.frm' (errno: 150) (0 ms taken)

    My query to create the foreign key:

        alter table `vts`.`tblguardian`
          add constraint `FK_tblguardian`
          FOREIGN KEY (`GuardianPickPointId`)
          REFERENCES `tblpickpoint` (`PickPointId`)

    EDIT: Now I am trying to drop this constraint, but it fails again and shows me the same error as when I was trying to create the foreign key:

        alter table `vts`.`tblguardian` drop index `FK_tblguardian`

    Primary key table:

        CREATE TABLE `tblpickpoint` (
          `PickPointId` int(4) NOT NULL auto_increment,
          `PickPointName` varchar(500) default NULL,
          `PickPointLabel` varchar(500) default NULL,
          `PickPointLatLong` varchar(100) NOT NULL,
          PRIMARY KEY (`PickPointId`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 CHECKSUM=1 DELAY_KEY_WRITE=1 ROW_FORMAT=DYNAMIC

    Foreign key table:

        CREATE TABLE `tblguardian` (
          `GuardianId` int(4) NOT NULL auto_increment,
          `GuardianName` varchar(500) default NULL,
          `GuardianAddress` varchar(500) default NULL,
          `GuardianMobilePrimary` varchar(15) NOT NULL,
          `GuardianMobileSecondary` varchar(15) default NULL,
          `GuardianPickPointId` int(4) default NULL,
          PRIMARY KEY (`GuardianId`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
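
    A hedged sketch of the drop, assuming the constraint really exists under the name FK_tblguardian: MySQL treats the foreign key constraint and its backing index as separate objects, so the constraint must be dropped with DROP FOREIGN KEY before its index can be removed.

        -- Drop the constraint itself first...
        ALTER TABLE `vts`.`tblguardian` DROP FOREIGN KEY `FK_tblguardian`;

        -- ...then, if desired, remove the index InnoDB created for it:
        ALTER TABLE `vts`.`tblguardian` DROP INDEX `FK_tblguardian`;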

  • Ruby ways to authenticate using headers?

    - by webdestroya
    I am designing an API system in Ruby on Rails, and I want to be able to log queries and authenticate users. However, I do not have a traditional login system; I want to use an API key and a signature that users can submit in the HTTP headers of the request (similar to how Amazon's services work). Instead of requesting /users/12345/photos/create, I want to be able to request /photos/create, submit a header that says X-APIKey: 12345, and then validate the request with a signature. Are there any gems that can be adapted to do that? Or, better yet, any gems that do this without adaptation? Or do you feel that it would be wiser to just have them send the API key with each request using POST/GET vars?

  • Freebase Query with "JOINS"

    - by codemonkey
    ... Yeah, yeah, I know traditional joins don't exist. I actually like the Freebase query methodology in theory; I'm just having a little trouble getting it to actually work for me :) Does anyone have a dumb-simple example of getting Freebase data via MQL that pulls from two different "tables"? In particular, I'm trying to get automotive data, so, for example, pulling fields from both /automotive/model_year and /automotive/trim_year. I've read the documentation (for hours, actually). There's a distinct possibility that I'm looking right at such an example somewhere and just not seeing it, because my OLTP brain just doesn't comprehend what it's seeing.

    Note: the two "types" I'm working with above are siblings, not parent/child. Does Freebase even allow joining data between sibling nodes? I see examples of queries pulling from parent/child, but I don't think I've seen any for siblings (or I've overlooked them).

  • Looking for a generic handler/service for MongoDB and ASP.NET/C#

    - by JohnAgan
    I am new to MongoDB and have a perfect place in mind to use it. However, it's only worth it if I can make the queries from JavaScript and get JSON back. I read another post on here of someone asking a similar question, but it wasn't specific to C#. What's the easiest way to implement a generic service/handler in ASP.NET/C# that would allow me to interact with MongoDB via JavaScript? I understand JavaScript can't call MongoDB directly, so the next best thing is what I'm looking for.

  • PDO update query with conditional?

    - by dmontain
    I have a PDO MySQL query that updates 3 fields:

        $update = $mypdo->prepare("UPDATE tablename
                                   SET field1=:field1, field2=:field2, field3=:field3
                                   WHERE key=:key");

    But I want field3 to be updated only when $update3 == true (meaning that the update of field3 is controlled by a conditional statement). Is this possible to accomplish with a single query?

    I could do it with 2 queries, where I update field1 and field2, then check the boolean and update field3 if needed in a separate query:

        // run this query to update only fields 1 and 2
        $update_part1 = $mypdo->prepare("UPDATE tablename
                                         SET field1=:field1, field2=:field2
                                         WHERE key=:key");

        // if field3 should be updated, run a separate query to update it separately
        if ($update3){
            $update_part2 = $mypdo->prepare("UPDATE tablename
                                             SET field3=:field3
                                             WHERE key=:key");
        }

    But hopefully there is a way to accomplish this in 1 query?
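
    One way to keep this to a single parameterized statement, as a hedged sketch (MySQL's IF() function; bind :update3 as 0 or 1): when the flag is off, field3 is reassigned its current value and is effectively untouched.

        -- :field3 must still be bound even when :update3 = 0;
        -- its value is simply ignored in that case.
        UPDATE tablename
        SET field1 = :field1,
            field2 = :field2,
            field3 = IF(:update3 = 1, :field3, field3)
        WHERE `key` = :key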

  • LINQ to SQL gives NotSupportedException when using local variables

    - by zwanz0r
    It appears to me that it matters whether you use a variable to temporarily store an IQueryable or not. See the simplified example below.

    This works:

        List<string> jobNames = new List<string> { "ICT" };

        var ictPeops = from p in dataContext.Persons
                       where (from j in dataContext.Jobs
                              where jobNames.Contains(j.Name)
                              select j.ID).Contains(p.JobID)
                       select p;

    But when I use a variable to temporarily store the subquery, I get an exception:

        List<string> jobNames = new List<string> { "ICT" };

        var jobs = from j in dataContext.Jobs
                   where jobNames.Contains(j.Name)
                   select j.ID;

        var ictPeops = from p in dataContext.Persons
                       where jobs.Contains(p.JobID)
                       select p;

        // System.NotSupportedException:
        // Queries with local collections are not supported

    I don't see what the problem is. Isn't this logic supposed to work in LINQ?

  • What are the pros and cons of keeping SQL in stored procs versus code?

    - by Guy
    What are the advantages/disadvantages of keeping SQL in your C# source code versus in stored procs? I've been discussing this with a friend on an open source project that we're working on (a C# ASP.NET forum). At the moment, most of the database access is done by building the SQL inline in C# and calling the SQL Server DB. So I'm trying to establish which, for this particular project, would be best. So far I have:

    Advantages for in code:

    - Easier to maintain: no need to run a SQL script to update queries
    - Easier to port to another DB: no procs to port

    Advantages for stored procs:

    - Performance
    - Security
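
    For reference, a minimal sketch of the stored-proc side of that trade-off (T-SQL; the procedure and table names are illustrative, not from the project):

        CREATE PROCEDURE dbo.GetPostsByForum
            @ForumId INT
        AS
        BEGIN
            -- The query text lives in the database: updating it means
            -- running a deployment script, but the parameter is typed
            -- and permissions can be granted on the proc alone.
            SELECT PostId, Title, CreatedOn
            FROM dbo.Posts
            WHERE ForumId = @ForumId;
        END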

  • How to delete data in DB efficiently using LinQ to NHibernate (one-shot-delete)

    - by kastanf
    Hello. Producing software for customers, mostly using MS SQL but some Oracle, a decision was made to plunge into NHibernate (and C#). The task is to efficiently delete e.g. 10,000 rows out of 100,000 and still stick to the ORM.

    I've already tried named queries:

        IQuery sql = s.GetNamedQuery("native-delete-car").SetString(0, "Kirsten");
        sql.ExecuteUpdate();

    but the best I have found so far seems to be:

        using (ITransaction tx = _session.BeginTransaction())
        {
            try
            {
                string cmd = "delete from Customer where Id < GetSomeId()";
                var count = _session.CreateSQLQuery(cmd).ExecuteUpdate();
                ...

    since it doesn't have to go into the DB and fetch the complete rows before deleting them.

    My questions are:

    - Is there a better way to do this kind of delete?
    - Is there a possibility to get the WHERE condition for the DELETE like this: having a SELECT statement (using LINQ to NHibernate) which generates the appropriate SQL for the DB, we take that WHERE condition and use it for the DELETE.

    Thanks :-)

  • Safe HttpContext.Current.Cache Usage

    - by Burak SARICA
    Hello there. I use Cache in a web service method like this:

        var pblDataList = (List<blabla>)HttpContext.Current.Cache.Get("pblDataList");

        if (pblDataList == null)
        {
            var PBLData = dc.ExecuteQuery<blabla>(@"SELECT blabla");
            pblDataList = PBLData.ToList();

            HttpContext.Current.Cache.Add("pblDataList", pblDataList, null,
                DateTime.Now.Add(new TimeSpan(0, 0, 15)),
                Cache.NoSlidingExpiration, CacheItemPriority.Normal, null);
        }

    I wonder, is it thread safe? I mean, the method is called by multiple requesters, and more than one requester may hit the second line at the same time while the cache is empty, so all of these requesters will retrieve the data and add it to the cache. The query takes 5-8 seconds. Would a lock statement around this code prevent that? (I know multiple queries will not cause an error, but I want to be sure only one query runs.)

  • Comparing two cursors in Oracle instead of using MINUS

    - by Omnipresent
    The following query takes more than 3 minutes to run because the tables contain massive amounts of data:

        SELECT RTRIM(LTRIM(A.HEAD)), A.EFFECTIVE_DATE
        FROM TABLE_1 A
        WHERE A.TYPE_OF_ACTION = '6'
          AND A.EFFECTIVE_DATE >= ADD_MONTHS(SYSDATE, -15)
        MINUS
        SELECT RTRIM(LTRIM(B.HEAD)), B.EFFECTIVE_DATE
        FROM TABLE_2 B

    In our system a query gets killed if it runs for more than 8 seconds. Is there a way to run the queries individually, put them in cursors, compare them, and then get the results? That way each query would be run individually rather than as one massive query that takes 3 minutes. How would two cursors be compared to mimic the MINUS?
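
    One alternative worth trying before hand-rolled cursor comparison, as a hedged sketch: rewrite the MINUS as an anti-join with NOT EXISTS, which lets Oracle apply the date filter once and probe TABLE_2 row by row. (Caveat: MINUS also removes duplicates from the first result set, while this form does not.)

        SELECT RTRIM(LTRIM(A.HEAD)), A.EFFECTIVE_DATE
        FROM TABLE_1 A
        WHERE A.TYPE_OF_ACTION = '6'
          AND A.EFFECTIVE_DATE >= ADD_MONTHS(SYSDATE, -15)
          AND NOT EXISTS (
                SELECT 1
                FROM TABLE_2 B
                WHERE RTRIM(LTRIM(B.HEAD)) = RTRIM(LTRIM(A.HEAD))
                  AND B.EFFECTIVE_DATE = A.EFFECTIVE_DATE)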

  • Can Entity Framework be used for the purpose of entity/schema definition at application runtime?

    - by Kabeer
    Hello. Can Entity Framework be used for the purpose of entity definition at application runtime? OK, to make it simple, here is what I want to achieve:

    My application is a product. I should be able to define entities at runtime on the basis of inputs gathered from an 'authoring' user (in effect this means a 'model first' approach). These entities are, of course, persistable. Further, after having defined the entities and their relationships, I should be able to make complex queries across them for many reasons, including reports.

    Is the above possible, and how? So far what I have realized is that there is a dependency on Visual Studio.

  • Catching constraint violations in JPA 2.0.

    - by Dennetik
    Consider the following entity class, used with, for example, EclipseLink 2.0.2, where the link attribute is not the primary key but unique nonetheless:

        @Entity
        public class Profile {
            @Id
            private Long id;

            @Column(unique = true)
            private String link;

            // Some more attributes and getter and setter methods
        }

    When I insert records with a duplicate value for the link attribute, EclipseLink does not throw an EntityExistsException, but a DatabaseException with a message explaining that the unique constraint was violated. This doesn't seem very useful, as there is no simple, database-independent way to catch this exception. What would be the advised way to deal with this? A few things that I have considered are:

    - Checking the error code on the DatabaseException. I fear, though, that this error code is the native error code of the database.
    - Checking beforehand for the existence of a Profile with the specific value for link. This would obviously result in an enormous amount of superfluous queries.

  • Resources to learn about engineering aspects of data analytics (OLAP, warehousing, ETL, etc.)

    - by JT
    I'm a math/stats guy interested in learning more about the engineering aspects of "data analytics" (this may be an overly broad term; this is a case of "I don't know what I don't know", so I'm not sure how to be more specific). I'm fine with manipulating and analyzing data once it's already stored somewhere and I can access it, and I'm fine with writing scripts and SQL queries (and I have a general knowledge of things like normalization). What I don't know is the whole engineering process of capturing and storing the data. For example, terms I've heard thrown about that I only vaguely understand the meaning of include:

    - OLAP, OLTP
    - Data warehousing
    - ETL
    - ???

    What's a good book (or any other resource) to learn about these kinds of things? What should I know about database design (normalization seems kind of "obvious" to me, something I would have done even before I knew the term; is there anything else)? In other words: for jobs falling under the umbrella term of "analytics engineer", what kinds of things should I know?

  • How can I enable MySQL's slow query log without restarting MySQL?

    - by mmattax
    I followed the instructions here: http://crazytoon.com/2007/07/23/mysql-changing-runtime-variables-with-out-restarting-mysql-server/ but that seems to only set the threshold. Do I need to do anything else, like set the file path? According to MySQL's docs:

        If no file_name value is given for --log-slow-queries, the default name is
        host_name-slow.log. The server creates the file in the data directory unless
        an absolute path name is given to specify a different directory.

    Running SHOW VARIABLES doesn't indicate any log file path, and I don't see any slow query log file on my server...
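
    On MySQL 5.1 and later, the slow log can be switched on and pointed at an explicit file entirely at runtime; a hedged sketch (the variable names assume 5.1+, where --log-slow-queries was superseded by slow_query_log; the file path is illustrative):

        -- Enable the slow log and set an absolute file path without a restart:
        SET GLOBAL slow_query_log = 'ON';
        SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
        SET GLOBAL long_query_time = 2;  -- threshold in seconds

        -- Confirm the settings took effect:
        SHOW GLOBAL VARIABLES LIKE 'slow_query%';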

  • Retrieve names by ratio of their occurrence

    - by jjiffer
    Hello, I'm somewhat new to SQL queries, and I'm struggling with this particular problem. Let's say I have a query that returns the following 3 records (kept to one column for simplicity):

        Tom
        Jack
        Tom

    And I want to have those results grouped by name, and also include the fraction (ratio) of that name's occurrences out of the total records returned. So the desired result would be (as two columns):

        Tom  | 2/3
        Jack | 1/3

    How would I go about it? Determining the numerator is pretty easy (I can just use COUNT() and GROUP BY name), but I'm having trouble translating that into a ratio out of the total rows returned. Any help is much appreciated!
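
    A minimal sketch of one way to do it (the table and column names people and name are placeholders): a scalar subquery supplies the total row count, so each group's count can be divided by it.

        -- Some engines truncate integer division, hence the * 1.0:
        SELECT name,
               COUNT(*) AS occurrences,
               COUNT(*) * 1.0 / (SELECT COUNT(*) FROM people) AS ratio
        FROM people
        GROUP BY name;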

  • Oracle - Are there any effects of not having a primary key on a table?

    - by Sathya
    We use sequence numbers for primary keys on our tables. There are some tables where we don't really use the primary key for any querying purpose, but we do have indexes on other columns. These are non-unique indexes, and the queries use these non-primary-key columns in their WHERE conditions. So I don't really see any benefit of having a primary key on such tables. My experience with SQL 2000 was that it would only replicate tables that had a primary key; otherwise it would not. I am using Oracle 10gR2. I would like to know if there are any such side effects of having tables that don't have a primary key.

  • Does the optimizer filter subqueries with outer WHERE clauses?

    - by Mongus Pong
    Take the following query:

        select *
        from (
          select a, b from c
          UNION
          select a, b from d
        )
        where a = 'mung'

    Will the optimizer generally work out that I am filtering a on the value 'mung', and consequently push that filter into each of the queries in the subquery union? Or will it run each query within the subquery union and return the results to the outer query for filtering (as the query would perhaps suggest)? In the latter case, the following query would perform better:

        select *
        from (
          select a, b from c where a = 'mung'
          UNION
          select a, b from d where a = 'mung'
        )

    Obviously query 1 is best for maintenance, but does it sacrifice much performance for that? Which is best?
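
    Rather than guessing, the two plans can be compared directly; a hedged sketch (Oracle syntax assumed here; other engines have analogous EXPLAIN facilities): if the predicate is pushed down, the plan shows the filter applied inside each branch of the UNION.

        EXPLAIN PLAN FOR
        SELECT *
        FROM (
          SELECT a, b FROM c
          UNION
          SELECT a, b FROM d
        )
        WHERE a = 'mung';

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);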

  • "Access is denied" error on accessing iframe document object

    - by Ovesh
    For posting AJAX forms in a form with many parameters, I am using the solution of creating an iframe, posting the form to it by POST, and then accessing the iframe's content. Specifically, I am accessing the content like this:

        $("some_iframe_id").get(0).contentWindow.document

    I tested it and it worked. Then, on some of the pages, I started getting an "Access is denied" error. As far as I know, this shouldn't happen if the iframe is served from the same domain. I'm pretty sure it was working before. Does anybody have a clue?

    If I'm not being clear enough: I'm posting to the same domain, so this is not a cross-domain request. I am testing on IE only.

    P.S. I can't use simple AJAX POST queries (don't ask...)

  • HSM - cryptoki - opening sessions overhead

    - by Raj
    I have a query regarding sessions with an HSM. I am aware that there is an overhead if you initialise and finalise the cryptoki API for every file you want to encrypt/decrypt. My questions are:

    - Is there an overhead in opening and closing individual sessions (C_Initialize/C_Finalize) for every file you want to encrypt/decrypt?
    - What is the maximum number of sessions I can have open simultaneously against an HSM without affecting performance?
    - Is opening and closing a session for each individual file the best approach, or is opening one session, processing multiple files, and then closing the session better?

    Thanks

  • Do you use enums in your data applications and how?

    - by Ivan
    My experience shows that a typical enterprise application has a lot of entities whose nature corresponds to an elementary enumeration. For example, we may have an Order entity with fields such as "OrderType", "OrderStatus", "Currency", etc., referencing corresponding entities which are nothing more than a textual name bound to a key. Using enums would look very natural here. But enums have to be defined in application code at design time, am I right? Whereas we need to be able to CRUD enum value variants at runtime and use the enums in server-side SQL queries (like stored procedures and views). What are your practices and thoughts on this subject? I am particularly interested in C# 4, LINQ, and T-SQL.
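
    A hedged sketch of the lookup-table pattern the question hints at (table and column names are illustrative): the "enum" lives in data, so its variants can be CRUDed at runtime and joined in views and stored procedures.

        CREATE TABLE OrderStatus (
            OrderStatusId INT         NOT NULL PRIMARY KEY,
            Name          VARCHAR(50) NOT NULL UNIQUE
        );

        CREATE TABLE Orders (
            OrderId       INT NOT NULL PRIMARY KEY,
            OrderStatusId INT NOT NULL REFERENCES OrderStatus (OrderStatusId)
        );

        -- Server-side SQL can use the "enum" directly:
        SELECT o.OrderId, s.Name AS Status
        FROM Orders o
        JOIN OrderStatus s ON s.OrderStatusId = o.OrderStatusId;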

  • Recursive query question - break rows into columns?

    - by Stew
    I have a table "Families", like so:

        FamilyID   PersonID   Relationship
        -----------------------------------
        F001       P001       Son
        F001       P002       Daughter
        F001       P003       Father
        F001       P004       Mother
        F002       P005       Daughter
        F002       P006       Mother
        F003       P007       Son
        F003       P008       Mother

    and I need output like:

        FamilyID   PersonID   Father   Mother
        --------------------------------------
        F001       P001       P003     P004
        F001       P002       P003     P004
        F001       P003
        F001       P004
        F002       P005                P006
        F002       P006
        F003       P007                P008
        F003       P008

    in which the PersonID of the Father and Mother for a given PersonID are listed (if applicable) in separate columns. I know this must be a relatively trivial query to write (and therefore to find instructions for), but I can't seem to come up with the right search terms. Searching "SQL recursive queries" has gotten me closest, but I can't quite translate those methods to what I'm trying to do here. I'm trying to learn, so multiple methods are welcome, as is vocabulary I should read up on. Thanks!
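
    For what it's worth, this is usually done with a self-join (a pivot) rather than a recursive query; a minimal sketch, assuming the table really is named Families: the table is joined back to itself once per parent role, and the extra condition on f.Relationship leaves the parents' own Father/Mother columns empty, matching the desired output.

        SELECT f.FamilyID,
               f.PersonID,
               fa.PersonID AS Father,
               mo.PersonID AS Mother
        FROM Families f
        LEFT JOIN Families fa
               ON fa.FamilyID = f.FamilyID
              AND fa.Relationship = 'Father'
              AND f.Relationship IN ('Son', 'Daughter')
        LEFT JOIN Families mo
               ON mo.FamilyID = f.FamilyID
              AND mo.Relationship = 'Mother'
              AND f.Relationship IN ('Son', 'Daughter')
        ORDER BY f.FamilyID, f.PersonID;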

  • Connection to DB2 in Python

    - by Mestika
    Hi, I'm trying to create a database connection in a Python script to my DB2 database. When the connection is made, I have to run some different SQL statements. I googled the problem and have read the ibm_db API (http://code.google.com/p/ibm-db/wiki/APIs), but I just can't seem to get it right. Here is what I have so far:

        import ibm_db

        conn = ibm_db.pconnect("dsn=write", "usrname", "secret")

        # prepare() takes the SQL text; execute() then runs the
        # prepared statement (it does not take the SQL itself)
        query_str = "SELECT COUNT(*) FROM accounts"
        query_stmt = ibm_db.prepare(conn, query_str)
        ibm_db.execute(query_stmt)

        # fetch_assoc() needs the statement resource
        result = ibm_db.fetch_assoc(query_stmt)
        print result

        status = ibm_db.close(conn)

    but I get an error. I really tried everything (or, not everything, but pretty damn close) and I can't get it to work. I just need to make an automatic test Python script that can test different queries with different indexes and so on, and for that I need to create and remove indexes along the way. I hope someone has a solution, or maybe knows about some example code out there I can download and study. Thanks, Mestika

  • PHP PDO close()?

    - by PHPLOVER
    Can someone tell me: when you, for example, update, insert, or delete, should you then close the statement like $stmt->close()? I checked the PHP manual and don't understand what close() actually does. Example:

        $stmt = $dbh->prepare("SELECT `user_email` FROM `users` WHERE `user_email` = ? LIMIT 1");
        $stmt->execute(array($email));
        $stmt->close();

    The next part of my question: if, as an example, I had multiple update queries in a transaction, should I close each statement individually after every execute(), or just use one $stmt->close() after all of them? Because it's a transaction, I'm not sure whether I need $stmt->close() after each execute() or just once at the end. Thanks once again, phplover
