Search Results

Search found 27368 results on 1095 pages for 'msaccess to sql'.


  • MySQL Count If using 4 tables or Perl

    - by user1726133
    Hi, I have a relatively convoluted query that relies on 4 different tables. Unfortunately I do not have control of this data, but I do have to query it. I ran this simpler query, using just table 1 and table 2, and it works:

      SELECT actor, receiver, count(IF(t2.group1 = "anxiety behavior", 1,0)) AS 'anxiety'
      FROM ethogram_edited_obs_behaviors t1
      JOIN ethogram_behaviors t2 ON t1.behavior = t2.behavior_code
      GROUP BY actor;

    Below are the 4 tables I need (columns, with one sample row each):

      Table 1: Actor, Behavior              (er, frown)
      Table 2: Behavior, type of Behavior   (frown, anxiety behavior)
      Table 3: subject, sex                 (Eric, M)
      Table 4: subject, subject_code        (Eric, er)

    Here is the query that is failing:

      SELECT actor, count(IF(t2.group1 = "anxiety behavior", 1,0) AND(t3.sex = "M", 1,0)) AS 'anxiety',
      FROM ethogram_edited_obs_behaviors t1
      JOIN ethogram_behaviors t2 ON t1.behavior = t2.behavior_code
      JOIN subject_code t3 ON t1.actor = t3.behavior_code1
      JOIN subjects t4 ON t3.subject = t4.yerkes_code
      GROUP BY actor;

    Any help would be much appreciated!! Thanks :) P.S. If this is easier to do in Perl, tips are also much appreciated.
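
    A minimal sketch of how the conditional count might be written, assuming the table and column names given in the question (behavior_code1 and yerkes_code are taken as given). COUNT() counts every non-NULL value, including 0, so the usual MySQL pattern is SUM() over a boolean expression:

      -- Count anxiety behaviors performed by male actors, one row per actor,
      -- mirroring the joins in the failing query above
      SELECT t1.actor,
             SUM(t2.group1 = 'anxiety behavior' AND t3.sex = 'M') AS anxiety
      FROM ethogram_edited_obs_behaviors t1
      JOIN ethogram_behaviors t2 ON t1.behavior = t2.behavior_code
      JOIN subject_code       t3 ON t1.actor    = t3.behavior_code1
      JOIN subjects           t4 ON t3.subject  = t4.yerkes_code
      GROUP BY t1.actor;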

    Read the article

  • MSSQL 2005 FOR XML

    - by Lima
    Hi, I want to export data from a table to a specifically formatted XML file. I am fairly new to XML files, so what I am after may be quite obvious, but I just can't find what I am looking for on the net. The format of the XML results I need is:

      <data>
        <event start="May 28 2006 09:00:00 GMT"
               end="Jun 15 2006 09:00:00 GMT"
               isDuration="true"
               title="Writing Timeline documentation"
               image="http://simile.mit.edu/images/csail-logo.gif">
          A few days to write some documentation
        </event>
      </data>

    My table structure is: name VARCHAR(50), description VARCHAR(255), startDate DATETIME, endDate DATETIME. (I am not too interested in the XML fields image or isDuration at this point in time.) I have tried:

      SELECT [name]
            ,[description]
            ,[startDate]
            ,[endTime]
      FROM [testing].[dbo].[time_timeline]
      FOR XML RAW('event'), ROOT('data'), TYPE

    Which gives me:

      <data>
        <event name="Test1" description="Test 1 Description...." startDate="1900-01-01T00:00:00" endTime="1900-01-01T00:00:00" />
        <event name="Test2" description="Test 2 Description...." startDate="1900-01-01T00:00:00" endTime="1900-01-01T00:00:00" />
      </data>

    What I am missing is that the description needs to be outside of the event attributes, and there needs to be a tag. Is anyone able to point me in the correct direction, or point me to a tutorial or similar on how to accomplish this? Thanks, Matt
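
    A hedged sketch of how this is commonly done with FOR XML PATH, assuming the table and columns from the question and assuming the name/startDate/endTime columns are what should feed title/start/end (date formatting to the "GMT" string is left aside here). Columns aliased with @ become attributes; the [text()] column becomes the element's text content:

      SELECT [startDate]   AS [@start],
             [endTime]     AS [@end],
             [name]        AS [@title],
             [description] AS [text()]   -- element content rather than an attribute
      FROM [testing].[dbo].[time_timeline]
      FOR XML PATH('event'), ROOT('data'), TYPE;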

    Read the article

  • Is there a problem when I call SqlDataAdapter.Update and at the same time call SqlDataReader.Read?

    - by Ahmed Said
    I have two applications: one updates a single table that has a constant number of rows (128 rows) using the SqlDataAdapter.Update method, and another that periodically selects from this table using SqlDataReader. Sometimes the DataReader returns only 127 rows instead of 128, even though the update application never removes or inserts rows; it only updates. What is the cause of this behaviour?
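
    One possible explanation (an assumption, not confirmed by the question) is that under SQL Server's default READ COMMITTED isolation a scan can skip a row that is being updated and moves within the index while the reader is running. A hedged sketch of enabling row versioning so readers see a consistent committed snapshot; the database name MyDb is hypothetical:

      -- Hypothetical database name; readers then see the last committed version
      -- of each row instead of skipping rows that are mid-update.
      ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;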

    Read the article

  • Question about joins and table with Millions of rows

    - by xRobot
    I have to create 2 tables:

      Magazine (10 million rows, columns: id, title, genres, printing, price)
      Author (180 million rows, columns: id, name, magazine_id)

    Every author writes for ONLY ONE magazine and every magazine has many authors. So if I want to know all authors of the Motors magazine, I have to use this query:

      SELECT *
      FROM Author, Magazine
      WHERE Author.magazine_id = Magazine.id
        AND genres = 'Motors'

    The same applies to the printing and price columns. To avoid these joins against tables with millions of rows, I thought of using these tables instead:

      Magazine (10 million rows, columns: id, title, genres, printing, price)
      Author (180 million rows, columns: id, name, magazine_id, genres, printing, price)

    and this query:

      SELECT * FROM Author WHERE genres = 'Motors'

    Is this a good approach? I can use PostgreSQL or MySQL.
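
    For comparison, a hedged sketch of keeping the normalized design and relying on indexes instead of copying the magazine columns into Author; the index names are illustrative and the schema is the one described above:

      -- Let the planner filter Magazine by genre, then fetch matching authors by FK
      CREATE INDEX idx_magazine_genres ON Magazine (genres);
      CREATE INDEX idx_author_magazine ON Author (magazine_id);

      SELECT a.*
      FROM Author a
      JOIN Magazine m ON a.magazine_id = m.id
      WHERE m.genres = 'Motors';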

    Read the article

  • Efficiently retrieve objects with one to many references in Grails using GORM

    - by bebeastie
    I'm trying to determine how to find/retrieve/load objects efficiently in terms of (a) minimizing calls to the database and (b) keeping the code as elegant/simple as possible (i.e. not writing HQL etc.). Assume you have two objects:

      public class Foo {
        Bar bar
        String badge
      }

      public class Bar {
        String name
      }

    Each Foo has a bar and a badge. Also assume that all badges are unique within a bar. So if a Foo has a badge "4565", there are no other Foos that have the same badge AND the same bar. If I have a bar ID, how can I efficiently retrieve the Foo without first selecting the Bar? I know I can do this:

      Foo.findByBadgeAndBar("4565", Bar.findById("1"))

    But that seems to cause a select on the Bar table followed by a select on the Foo table. In other words, I need to produce the Grails/Hibernate/GORM equivalent of the following:

      select * from foo where badge="4565" and bar_id="1"

    Read the article

  • CTE to build a list of departments and managers (hierarchical)

    - by Milky Joe
    I need to generate a list of users that are managers, or managers of managers, for company departments. I have two tables; one details the departments and one contains the manager hierarchy (simplified):

      CREATE TABLE [dbo].[Manager](
        [ManagerId] [int],
        [ParentManagerId] [int])

      CREATE TABLE [dbo].[Department](
        [DepartmentId] [int],
        [ManagerId] [int])

    Basically, I'm trying to build a CTE that will give me a list of DepartmentIds, together with all ManagerIds that are in the manager hierarchy for that department. So, say Manager 1 is the manager for Department 1, Manager 2 is Manager 1's manager, and Manager 3 is Manager 2's manager, I'd like to see:

      DepartmentId, ManagerId
      1, 1
      1, 2
      1, 3

    Basically, managers are able to deal with all of their sub-managers' departments. Building the CTE to return the manager hierarchy was fairly simple, but I'm struggling to inject the departments in there:

      WITH DepartmentManagers AS
      (
        SELECT ManagerId, ParentManagerId, 0 AS Depth
        FROM Manager
        UNION ALL
        SELECT Manager.ManagerId, Manager.ParentManagerId, DepartmentManagers.Depth + 1 AS Depth
        FROM Manager
        INNER JOIN DepartmentManagers ON DepartmentManagers.ManagerId = Manager.ParentManagerId
      )

    Can anyone help?
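
    One hedged way to fold the departments in is to seed the recursion from Department itself and climb the ParentManagerId chain, carrying the DepartmentId along; a sketch assuming only the two tables defined above:

      WITH DepartmentManagers AS
      (
        -- Anchor: each department with its direct manager
        SELECT d.DepartmentId, m.ManagerId, m.ParentManagerId, 0 AS Depth
        FROM [dbo].[Department] d
        INNER JOIN [dbo].[Manager] m ON m.ManagerId = d.ManagerId
        UNION ALL
        -- Recurse: step up to each manager's manager, keeping the department
        SELECT dm.DepartmentId, m.ManagerId, m.ParentManagerId, dm.Depth + 1
        FROM [dbo].[Manager] m
        INNER JOIN DepartmentManagers dm ON m.ManagerId = dm.ParentManagerId
      )
      SELECT DepartmentId, ManagerId
      FROM DepartmentManagers
      ORDER BY DepartmentId, Depth;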

    Read the article

  • How to combine several nearly identical queries into one?

    - by Devyn
    Hi, assume I have an order_dummy table where order_dummy_id, order_id, user_id, book_id, and author_id are stored. You may complain about the logic of my table, but I somehow need to do it this way. I want to execute the following queries:

      SELECT * FROM order_dummy WHERE order_id = 1 AND user_id = 1 AND book_id = 1 ORDER BY `order_dummy_id` DESC LIMIT 1
      SELECT * FROM order_dummy WHERE order_id = 1 AND user_id = 1 AND book_id = 2 ORDER BY `order_dummy_id` DESC LIMIT 1
      SELECT * FROM order_dummy WHERE order_id = 1 AND user_id = 1 AND book_id = 3 ORDER BY `order_dummy_id` DESC LIMIT 1

    Please keep in mind that several copies of the same book can be included in one order. Therefore, I order by order_dummy_id descending and limit to 1 so only the LATEST ORDER of A BOOK is shown. My goal is to show the other books in that way too, in one result set. I tried GROUP BY like this...

      SELECT * FROM order_dummy WHERE order_id = 1 AND user_id = 1 GROUP BY book_id

    but it only shows the ascending (earliest) order_dummy_id per book, not the latest. I have no idea anymore. Looking forward to your kind help!
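
    A hedged sketch of the usual "latest row per group" pattern, assuming only the columns named above: find the maximum order_dummy_id per book in a derived table, then join back to pick up the full rows:

      SELECT od.*
      FROM order_dummy od
      JOIN (
        -- Latest order_dummy_id for each book within this order/user
        SELECT book_id, MAX(order_dummy_id) AS max_id
        FROM order_dummy
        WHERE order_id = 1 AND user_id = 1
        GROUP BY book_id
      ) latest ON latest.book_id = od.book_id
              AND latest.max_id  = od.order_dummy_id
      WHERE od.order_id = 1 AND od.user_id = 1;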

    Read the article

  • How to display MySQL Select statement results in PHP

    - by Vafello
    I have the following code and it should return just one value (id) from a MySQL table. The following code doesn't work. How can I output it without creating arrays and all that stuff - just a simple output of one value?

      $query = "SELECT id FROM users_entity WHERE username = 'Admin' ";
      $result = map_query($query);
      echo $result;

    Read the article

  • How can I format custom data and display it in autocomplete when the source is a DB

    - by Andres Scarpone
    So I'm trying to get some info into the autocomplete widget like it's shown in the jQuery UI demo (Demo). The only problem is that the demo uses a variable filled with the data to show; I instead want to fetch the data, descriptions and so on from a MySQL database, so I have changed the source to another PHP page that looks up the info. Here is the code for the autocomplete; I really don't understand the methods, so I haven't changed it from the basic search. This is the JS:

      $(document).ready(function(){
        $( "#completa" ).autocomplete({
          source: "buscar.php",
          minLength: 1,
          focus: function (event, ui){
            $("#completa").val(ui.item.value);
            return false;
          }
        });
      });

    This is what I have in buscar.php:

      <?php
      $conec = mysql_connect('localhost', 'root', 'admin');
      if(!$conec) {
        die(mysql_error());
      } else {
        $bd = mysql_select_db("ve_test", $conec);
        if(!$bd) {
          die(mysql_error());
        }
      }
      $termino = trim(strip_tags($_GET['term'])); // Get the term sent by the autocomplete
      $qstring = "SELECT name, descripcion FROM VE_table WHERE name LIKE '%".$termino."%'";
      $result = mysql_query($qstring); // Query the database
      while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) // Loop over the returned rows
      {
        $row['value'] = htmlentities(stripslashes($row['name']));
        $row_set[] = $row; // build an array
      }
      echo json_encode($row_set); // Send the data to the autocomplete as JSON - strictly required
      ?>

    Read the article

  • How to update a table with a list of values at a time?

    - by VJ
    I have:

      update NewLeaderBoards
      set MonthlyRank = (Select RowNumber() from LeaderBoards)

    I tried it this way:

      (Select RowNumber() from LeaderBoards) as NewRanks
      update NewLeaderBoards
      set MonthlyRank = NewRanks

    But it doesn't work for me. Can anyone suggest how I can perform an update in such a way?
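
    A hedged T-SQL sketch of one common pattern: compute ROW_NUMBER() in a derived table and join it back to the target with UPDATE ... FROM. The PlayerId and Score columns are assumptions for illustration only, since the question does not show the table definitions:

      UPDATE n
      SET n.MonthlyRank = r.NewRank
      FROM NewLeaderBoards n
      JOIN (
        -- Hypothetical columns: PlayerId identifies a row, Score drives the ranking
        SELECT PlayerId, ROW_NUMBER() OVER (ORDER BY Score DESC) AS NewRank
        FROM LeaderBoards
      ) r ON r.PlayerId = n.PlayerId;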

    Read the article

  • How can I make a multi-search SPROC/UDF by passing a table value to it?

    - by Shimmy
    I actually want to achieve the following. This is the table argument I want to pass to the server:

      <items>
        <item category="cats">1</item>
        <item category="dogs">2</item>
      </items>

      SELECT * FROM Item
      WHERE Item.Category = <one of the items in the XML list>
        AND Item.ReferenceId = <the corresponding value of that item xml element>

      -- Or in other words:
      SELECT FROM Items WHERE Item IN XML according to the specified columns.

    Am I clear enough? I don't mind doing it in a different way other than XML. What I need is to select values that match an array of two of its columns' values.
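
    A hedged sketch of matching both columns against the XML using SQL Server's nodes()/value() methods, assuming the Item table and columns named in the question; the XML is shredded into (category, reference id) pairs and joined:

      DECLARE @list xml = N'<items>
        <item category="cats">1</item>
        <item category="dogs">2</item>
      </items>';

      SELECT i.*
      FROM @list.nodes('/items/item') AS x(n)
      JOIN Item i
        ON  i.Category    = x.n.value('@category', 'varchar(50)')
        AND i.ReferenceId = x.n.value('.', 'int');   -- element text is the reference id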

    Read the article

  • PHP 'smart' search engine to search Mysql tables advice

    - by Anonymous12345
    I am creating a search engine for my PHP-based website. I need to search a MySQL table. The thing is, the search engine must be pretty 'smart', so that users can easily find their items (it's a classifieds website). I have currently set up a FULLTEXT search with this piece of code:

      MATCH (headline) AGAINST ($querystring)

    But this isn't enough. For instance, let's say the headline field contains something like "Bmw 330ci". If I search for "330", I won't get any results. The ending ('ci') is just one of many endings in car models which must be taken into account when searching the table. Or what if the headline field is "bmw330"? Also no results, because it only matches full words. Also, what if the headline is "bmw 330" and I search for "bmw 520"? With FULLTEXT I will still get the "bmw 330" as a result, even though I searched for "bmw 520"... Not good! How should I solve this problem?
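
    One hedged option is MySQL's FULLTEXT search in BOOLEAN MODE, which supports required terms and trailing wildcards; a sketch where the ads table name is a placeholder and the search string is assumed to be built from the user's terms. Note this still will not match a concatenated "bmw330", and the minimum indexed token length (ft_min_word_len / innodb_ft_min_token_size) may need lowering for short tokens like "330":

      -- '+' makes each term mandatory, '*' matches prefixes, so '330*' also finds '330ci'
      SELECT *
      FROM ads
      WHERE MATCH (headline) AGAINST ('+bmw +330*' IN BOOLEAN MODE);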

    Read the article

  • connecting to secure database from website host

    - by jim
    Hello all, I've got a requirement to both read and write data via a .NET web service to a SQL Server database that's on a private network. This database is currently accessed via a VPN connection by remote client software (on standard desktop machines) to get the latest product prices and to upload product stock sales. I've been tasked with finding a way to centralise this access through a web service that the clients then call, rather than them using the VPN route to connect directly to the database. My question relates to my .NET service's relationship to the SQL Server database. What are the options for connecting to a private-network VPN from a domain host, in order to allow the web service to both read and write data to the database? For now, I'm not too concerned about the client connectivity and security (though I appreciate that this will have to be worked out too); I'm really just interested in discovering the options available to allow my .NET web service to connect to the private network in as painless and transparent a way as possible. The option of switching the database onto public hosting is not an option, so I have to work with the scenario as described above for now, unless there's a compelling rationale presented to do otherwise. Thanks all... jim

    Read the article

  • Auto increment with a Unit Of Work

    - by Derick
    Context: I'm building a persistence layer to abstract the different types of databases that I'll be needing. On the relational side I have MySQL, Oracle and PostgreSQL. Let's take the following simplified MySQL tables:

      CREATE TABLE Contact (
        ID varchar(15),
        NAME varchar(30)
      );

      CREATE TABLE Address (
        ID varchar(15),
        CONTACT_ID varchar(15),
        NAME varchar(50)
      );

    I use code to generate system-specific alphanumeric unique IDs, fitting 15 chars in this case. Thus, if I insert a Contact record with its Addresses, I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support. I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above; all pretty standard stuff so far. The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In this case I do not have the db-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session, since everything is kept in the Identity Map in memory. Question: what is a good approach to address this (avoiding unnecessary db round trips)? Some ideas: retrieve the last ID. I can do a call to the database to retrieve the last ID like:

      SELECT Auto_increment
      FROM information_schema.tables
      WHERE table_name = 'Contact';

    But this is MySQL-specific, and probably something similar can be done for the other databases. If I do this, I would then need to do the first insert, get the ID and then update the children (Address.CONTACT_IDs) - all in the current transaction context.
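
    For reference, a hedged sketch of the per-engine ways to obtain the generated key inside the same transaction, rather than reading information_schema (which is racy under concurrent inserts). It assumes an auto-increment ID column on Contact, unlike the varchar example above:

      -- MySQL: LAST_INSERT_ID() is scoped to the current connection
      INSERT INTO Contact (NAME) VALUES ('Jane');
      SELECT LAST_INSERT_ID();

      -- PostgreSQL: RETURNING hands back the generated key directly
      INSERT INTO Contact (NAME) VALUES ('Jane') RETURNING ID;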

    Read the article

  • Oracle (Old?) Joins

    - by Grasper
    I have been porting oracle selects, and I have been running across a lot of queries like so:

      SELECT e.last_name, d.department_name
      FROM employees e, departments d
      WHERE e.department_id(+) = d.department_id;

    ...and:

      SELECT last_name, d.department_id
      FROM employees e, departments d
      WHERE e.department_id = d.department_id(+);

    Are there any guides/tutorials for converting all of the variants of the (+) syntax? What is that syntax even called (so I can scour google)? When was this standard phased out? Any info is appreciated.
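
    For reference, (+) is Oracle's legacy outer-join operator; the side carrying (+) is the one that gets NULL-extended. A hedged sketch of the ANSI-join equivalents of the two queries above, keeping the same tables and columns:

      -- e.department_id(+) = d.department_id : employees is the optional side
      SELECT e.last_name, d.department_name
      FROM departments d
      LEFT OUTER JOIN employees e ON e.department_id = d.department_id;

      -- e.department_id = d.department_id(+) : departments is the optional side
      SELECT e.last_name, d.department_id
      FROM employees e
      LEFT OUTER JOIN departments d ON e.department_id = d.department_id;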

    Read the article

  • Multi-tier applications using L2S, WCF and Base Class

    - by Gena Verdel
    Hi all. One day I decided to build this nice multi-tier application using L2S and WCF. The simplified model is: Database - L2S - Wrapper (DTO) - Client Application. The communication between client and database is achieved by using Data Transfer Objects which contain entity objects as their properties.

      abstract public class BaseObject
      {
          public virtual IccSystem.iccObjectTypes ObjectICC_Type
          {
              get { return IccSystem.iccObjectTypes.unknownType; }
          }

          [global::System.Data.Linq.Mapping.ColumnAttribute(Storage = "_ID", AutoSync = AutoSync.OnInsert, DbType = "BigInt NOT NULL IDENTITY", IsPrimaryKey = true, IsDbGenerated = true)]
          [global::System.Runtime.Serialization.DataMemberAttribute(Order = 1)]
          public virtual long ID
          {
              //get; //set;
              get { return _ID; }
              set { _ID = value; }
          }
      }

      [DataContract]
      public class BaseObjectWrapper<T> where T : BaseObject
      {
          #region Fields
          private T _DBObject;
          #endregion

          #region Properties
          [DataMember]
          public T Entity
          {
              get { return _DBObject; }
              set { _DBObject = value; }
          }
          #endregion
      }

    Pretty simple, isn't it? Here's the catch: each one of the mapped classes contains an ID property itself, so I decided to override it like this:

      [global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.Divisions")]
      [global::System.Runtime.Serialization.DataContractAttribute()]
      public partial class Division : INotifyPropertyChanging, INotifyPropertyChanged
      {
          [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ID", AutoSync=AutoSync.OnInsert, DbType="BigInt NOT NULL IDENTITY", IsPrimaryKey=true, IsDbGenerated=true)]
          [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)]
          public override long ID
          {
              get { return this._ID; }
              set
              {
                  if ((this._ID != value))
                  {
                      this.OnIDChanging(value);
                      this.SendPropertyChanging();
                      this._ID = value;
                      this.SendPropertyChanged("ID");
                      this.OnIDChanged();
                  }
              }
          }
      }

    The wrapper for Division is pretty straightforward as well:

      public class DivisionWrapper : BaseObjectWrapper<Division> { }

    It worked pretty well as long as I kept the ID values of the mapped class and its BaseObject base the same (not a very good approach, I know, but still), but then this happened:

      private CentralDC _dc;

      public bool UpdateDivision(ref DivisionWrapper division)
      {
          DivisionWrapper tempWrapper = division;
          if (division.Entity == null)
          {
              return false;
          }
          try
          {
              Table<Division> table = _dc.Divisions;
              var q = table.Where(o => o.ID == tempWrapper.Entity.ID);
              if (q.Count() == 0)
              {
                  division.Entity._errorMessage = "Unable to locate entity with id " + division.Entity.ID.ToString();
                  return false;
              }
              var realEntity = q.First();
              realEntity = division.Entity;
              _dc.SubmitChanges();
              return true;
          }
          catch (Exception ex)
          {
              division.Entity._errorMessage = ex.Message;
              return false;
          }
      }

    When trying to enumerate over the in-memory query, the following exception occurred: "Class member BaseObject.ID is unmapped." Although I'm stating the type and overriding the ID property, L2S fails to work. Any suggestions?

    Read the article

  • What is the correct way to increment a field making up part of a composite key

    - by Tr1stan
    I have a bunch of tables whose primary key is made up of the foreign keys of other tables (a composite key). For example, the attributes (as a very cut-down version) might look like this:

      A[aPK, SomeFields]   1:M   B[bPK, aFK, SomeFields]   1:M   C[cPK, bFK, aFK, SomeFields]

    As data this could look like:

      A[aPK, SomeFields]:
        1, Foo
        2, Bar

      B[bPK, aFK, SomeFields]:
        1, 1, FooData1
        2, 1, FooData2
        1, 2, BarData1
        2, 2, BarData2

      C[cPK, bFK, aFK, SomeFields]:
        1, 1, 1, FooData1More
        2, 1, 1, FooData1More
        1, 2, 1, FooData2More
        2, 2, 1, FooData2More
        1, 1, 2, BarData1More
        2, 1, 2, BarData1More
        1, 2, 2, BarData2More
        2, 2, 2, BarData2More

    I've got this running in an MS SQL DBMS and I'm looking for the best way to increment the left-most column in each table when a new tuple is added to it. I can't use the auto-increment Identity Specification option, as that has no idea that it is part of a composite key. I also don't want to use an aggregate function such as MAX(field)+1, as this will have adverse effects with multiple users inputting data, rolling back, etc. There might however be a nice trigger-based option here, but I'm not sure. This must be a common issue, so I'm hoping that someone has a lovely solution. As an aside, which may or may not affect the answer, I'm using Entity Framework 1.0 as my ORM, within a C# MVC application.
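
    A hedged T-SQL sketch of one way to make the MAX()+1 approach safe under concurrency: compute the next value inside the insert with UPDLOCK/HOLDLOCK hints, so two sessions inserting under the same parent serialize on that key range. The @aFK and @someFields parameters are illustrative, and the columns follow table B above:

      BEGIN TRANSACTION;

      INSERT INTO B (bPK, aFK, SomeFields)
      SELECT COALESCE(MAX(bPK), 0) + 1, @aFK, @someFields
      FROM B WITH (UPDLOCK, HOLDLOCK)    -- hold the key range until commit
      WHERE aFK = @aFK;

      COMMIT TRANSACTION;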

    Read the article

  • How can I do more than one level of cascading deletes in Linq?

    - by Gary McGill
    If I have a Customers table linked to an Orders table, and I want to delete a customer and its corresponding orders, then I can do:

      dataContext.Orders.DeleteAllOnSubmit(customer.Orders);
      dataContext.Customers.DeleteOnSubmit(customer);

    ...which is great. However, what if I also have an OrderItems table, and I want to delete the order items for each of the orders deleted? I can see how I could use DeleteAllOnSubmit to cause the deletion of all the order items for a single order, but how can I do it for all the orders?
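
    An alternative sometimes used instead of client-side LINQ deletes is a database-level cascading foreign key, so deleting an order automatically removes its items. A hedged SQL sketch with hypothetical constraint and column names:

      -- Hypothetical names; deleting an Orders row then removes its OrderItems rows automatically
      ALTER TABLE OrderItems
        ADD CONSTRAINT FK_OrderItems_Orders
        FOREIGN KEY (OrderId) REFERENCES Orders (OrderId)
        ON DELETE CASCADE;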

    Read the article

  • PostgreSQL - pg_class question

    - by Sachin Chourasiya
    PostgreSQL stores statistics about tables in the system table called pg_class. The query planner accesses this table for every query. These statistics may only be updated using the ANALYZE command. If the ANALYZE command is not run often, the statistics in this table may not be accurate and the query planner may make poor decisions, which can degrade system performance. Another strategy would be for the query planner to generate these statistics for each query (including selects, inserts, updates, and deletes). This approach would allow the query planner to have the most up-to-date statistics possible. Why does Postgres always rely on pg_class instead?
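
    For reference, a short sketch of refreshing and inspecting those statistics by hand (autovacuum normally runs ANALYZE automatically); the table name is a placeholder:

      -- Refresh planner statistics for one table
      ANALYZE my_table;

      -- Row-count / page estimates the planner reads from pg_class
      SELECT relname, reltuples, relpages
      FROM pg_class
      WHERE relname = 'my_table';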

    Read the article

  • Compare values in serialized column in Doctrine with Query Builder

    - by ReynierPM
    I'm building a FormType for a Symfony2 project, but I need a query builder on the field since I need to compare some values with the ones stored in the DB and show the results. This is what I have:

      ....
      ->add('servicio', 'entity', array(
          'mapped' => false,
          'class' => 'ComunBundle:TipoServicio',
          'property' => 'nombre',
          'required' => true,
          'label' => false,
          'expanded' => true,
          'multiple' => true,
          'query_builder' => function (EntityRepository $er) {
              return $er->createQueryBuilder('ts')
                  ->where('ts.tipo_usuario = (:tipo)')
                  ->setParameter('tipo', 1);
          }
      ))
      ....

    But tipo_usuario in the DB table is stored as serialized text, for example:

      record1: value1 | a:1:{i:0;s:1:"1";}
      record2: value2 | a:4:{i:0;s:1:"1";i:1;s:1:"2";i:2;s:1:"3";i:3;s:1:"4";}

    I'll have two different forms (I don't know how to pass the Request to a form). The first one should show only the first record, and the second one should show the first and second records. For example:

      First form will show:
        checkbox: value1
      Second form will show:
        checkbox: value1
        checkbox: value2

    How can I achieve this? Any help?

    Read the article

  • C# - Rollback SqlTransaction in catch block - Problem with object accessibility

    - by Marks
    Hi there. I've got a problem, and all the articles and examples I found seem not to care about it. I want to do some database actions in a transaction. What I want to do is very similar to most examples:

      using (SqlConnection Conn = new SqlConnection(_ConnectionString))
      {
          try
          {
              Conn.Open();
              SqlTransaction Trans = Conn.BeginTransaction();
              using (SqlCommand Com = new SqlCommand(ComText, Conn))
              {
                  /* DB work */
              }
          }
          catch (Exception Ex)
          {
              Trans.Rollback();
              return -1;
          }
      }

    But the problem is that the SqlTransaction Trans is declared inside the try block, so it is not accessible in the catch block. Most examples just do Conn.Open() and Conn.BeginTransaction() before the try block. But I think that's a bit risky, since both can throw multiple exceptions. Am I wrong, or do most people just ignore this risk? What's the best solution to be able to roll back if an exception happens? Thanks in advance, Marks
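
    An alternative sometimes used is to handle the rollback on the server side with T-SQL TRY/CATCH, so the client only sends one batch; a hedged sketch where the statement inside is a placeholder for the actual DB work:

      BEGIN TRY
          BEGIN TRANSACTION;
          -- placeholder for the actual DB work
          UPDATE SomeTable SET SomeColumn = 1 WHERE Id = 42;
          COMMIT TRANSACTION;
      END TRY
      BEGIN CATCH
          IF XACT_STATE() <> 0
              ROLLBACK TRANSACTION;
          THROW;   -- re-raise the error to the client (SQL Server 2012+)
      END CATCH;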

    Read the article

  • SQLServer using too much memory

    - by Israel Pereira Valverde
    I have installed SQL Server 2008 R2 Express on my desktop machine (with Windows 7). I have only one local server running (./SQLEXPRESS), but the sqlserver process is taking all the RAM it possibly can. On a machine with 3GB of RAM things start to get slow, so I limited the maximum amount of RAM in the server, and now SQL Server constantly gives error messages saying that there is not enough memory. It's using 1GB of RAM with only one LOCAL server holding 2 completely empty databases - how is 1GB of RAM not enough? When the process starts it uses a really acceptable amount of memory (around 80MB), but it keeps increasing until it reaches the maximum defined and starts to complain about not having enough memory available. At that point I have to restart the server to use it again. I have read about a hotfix for one of the errors I got from SQL Server:

      There is insufficient system memory in resource pool 'internal' to run this query

    But it's already installed on my SQL Server. Why is it using so much memory?
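
    For reference, a hedged sketch of how the memory cap is usually set from T-SQL (the 512 MB figure is only an example value, not a recommendation for this machine):

      -- Cap SQL Server's buffer pool; pick a value that leaves room for the OS
      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;
      EXEC sp_configure 'max server memory (MB)', 512;
      RECONFIGURE;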

    Read the article
