Search Results

Search found 27342 results on 1094 pages for 'sql denali'.

Page 588/1094

  • How to Set Customer Table with Multiple Phone Numbers? - Relational Database Design

    - by user311509
    CREATE TABLE Phone ( phoneID - PK ... );
    CREATE TABLE PhoneDetail ( phoneDetailID - PK, phoneID - FK points to Phone, phoneTypeID ..., phoneNumber ..., ... );
    CREATE TABLE Customer ( customerID - PK, firstName, phoneID - Unique FK points to Phone, ... );
    A customer can have multiple phone numbers, e.g. Cell, Work, etc. phoneID in the Customer table is unique and points to phoneID in the Phone table. If a customer record is deleted, the related Phone row should also be deleted. Do you have any concerns about my design? Is it designed properly? My problem is that phoneID in the Customer table is a child, and when the child record is deleted I cannot delete the parent (Phone) record automatically.
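
    A minimal sketch of one common alternative, assuming MySQL/InnoDB and illustrative table and column names: make the phone rows children of Customer, so the cascade runs from parent to child rather than the other way around.

        CREATE TABLE Customer (
            customerID INT PRIMARY KEY AUTO_INCREMENT,
            firstName  VARCHAR(45)
        );

        CREATE TABLE CustomerPhone (
            customerPhoneID INT PRIMARY KEY AUTO_INCREMENT,
            customerID      INT NOT NULL,
            phoneType       VARCHAR(20),   -- e.g. 'Cell', 'Work'
            phoneNumber     VARCHAR(20),
            FOREIGN KEY (customerID) REFERENCES Customer (customerID)
                ON DELETE CASCADE          -- deleting a customer removes their phone rows
        );

    With ownership in this direction, one customer can hold any number of phone numbers, and deleting the customer cleans up the phone rows automatically.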

    Read the article

  • "SELECT TOP", "LEFT OUTER JOIN", "ORDER BY" gives extra rows

    - by Codesleuth
    I have the following Access query I'm running through OLE DB in .NET:
    SELECT TOP 25 tblClient.ClientCode, tblRegion.Region FROM (tblClient LEFT OUTER JOIN tblRegion ON tblClient.RegionCode = tblRegion.RegionCode) ORDER BY tblRegion.Region
    There are 431 records within tblClient that have RegionCode set to NULL. For some reason, the query above returns all 431 of these records instead of the first 25. If I change the query to ORDER BY tblClient.Client (the name of the client), like so:
    SELECT TOP 25 tblClient.ClientCode, tblRegion.Region FROM (tblClient LEFT OUTER JOIN tblRegion ON tblClient.RegionCode = tblRegion.RegionCode) ORDER BY tblClient.Client
    I get the expected result set of 25 records, showing a mixture of region names and NULL values. Why does the TOP clause stop working when I order by a field retrieved through the LEFT OUTER JOIN?
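
    Access/Jet's TOP is documented to return ties: when several rows share the ORDER BY value at the cut-off position, all of them come back, and here 431 rows tie on a NULL Region. A minimal sketch of the usual workaround, adding a tie-breaker to the ORDER BY (column names taken from the question, and assuming ClientCode is unique):

        SELECT TOP 25 tblClient.ClientCode, tblRegion.Region
        FROM (tblClient LEFT OUTER JOIN tblRegion
              ON tblClient.RegionCode = tblRegion.RegionCode)
        ORDER BY tblRegion.Region, tblClient.ClientCode;  -- unique tie-breaker makes the top 25 unambiguous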

    Read the article

  • Why do Banks or Financial Companies prefer Oracle over other RDBMSes for their "Core" systems?

    - by edwin.nathaniel
    I'd like to know why most banks or financial companies prefer Oracle over other RDBMSes for their core systems (the absolute minimum features that a bank must support). I found a few answers that didn't satisfy me. For example: "Oracle has more features." But features for what? Couldn't you implement those at the application level if you were not using Oracle? Could someone please give a slightly more technical, but still high-level, overview of what a bank needs and how Oracle solves it where the others can't, or don't have the features yet? I come from the web-app (Web 2.0) crowd, who normally hear news about MySQL, PostgreSQL or even key-value/column-oriented storage solutions. I have almost zero knowledge of how banks or financial companies operate from a technical perspective. Thank you, Ed

    Read the article

  • How can I choose different hints for different joins for a single table in a query hint?

    - by RenderIn
    Suppose I have the following query:
    select * from A, B, C, D where A.x = B.x and B.y = C.y and A.z = D.z
    I have indexes on A.x, B.x, B.y, C.y and D.z. There is no index on A.z. How can I give this query an INDEX hint on A.x but a USE_HASH hint on A.z? It seems like hints only take the table name, not the specific join, so when a single table has multiple joins I can only specify one strategy for all of them. Alternatively, suppose I'm using a LEADING or ORDERED hint on the above query. Both of these hints only take a table name as well, so how can I ensure that the A.x = B.x join takes place before the A.z = D.z one? I realize in this case I could list D first, but imagine D subsequently joins to E and that the D-E join is the last one I want in the entire query. A third configuration -- suppose I want the A.x join to be the first of the entire query, and I want the A.z join to be the last one. How can I use a hint to have a single join from A take place first, followed by the B-C join, with the A-D join last?
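
    A minimal sketch of the Oracle hint syntax involved, assuming an index named A_X_IDX (the index name is illustrative); LEADING pins the join order across all four tables, and USE_HASH names the table whose join should be hashed:

        SELECT /*+ LEADING(a b c d) USE_HASH(d) INDEX(a A_X_IDX) */
               *
        FROM   A a, B b, C c, D d
        WHERE  a.x = b.x
        AND    b.y = c.y
        AND    a.z = d.z;

    Because hints attach to tables rather than to individual joins, per-join control comes from combining LEADING (which fixes the order) with a join-method hint on the table that enters the join you care about.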

    Read the article

  • Integer Surrogate Key?

    - by CitadelCSAlum
    I need something really simple that for some reason I have been unable to accomplish to this point, and surprisingly have been unable to Google the answer for. I need to number the entries in different tables uniquely. I am aware of AUTO_INCREMENT in MySQL and that it will be involved. My situation would be as follows, if I had a table like:
    Table EXAMPLE
    ID - INTEGER
    FIRST_NAME - VARCHAR(45)
    LAST_NAME - VARCHAR(45)
    PHONE - VARCHAR(45)
    CITY - VARCHAR(45)
    STATE - VARCHAR(45)
    ZIP - VARCHAR(45)
    This would be the setup for the table where ID is an integer that is auto-incremented every time an entry is inserted into the table. The thing I need is that I do not want to have to account for this field when inserting data into the database. From my understanding this would be a surrogate key that I can tell the database to increment automatically, so that I do not have to include it in the INSERT statement. Instead of
    INSERT INTO EXAMPLE VALUES (2,'JOHN','SMITH',333-333-3333,'NORTH POLE'....
    I could leave out the ID column and just write something like
    INSERT INTO EXAMPLE VALUES ('JOHN','SMITH'.....etc)
    Notice I wouldn't have to define the ID column. I know this is a very common task, but for some reason I can't get to the bottom of it. I am using MySQL, just to clarify. Thanks a lot.
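
    A minimal sketch of the usual MySQL pattern: declare the column AUTO_INCREMENT, then name the remaining columns explicitly in the INSERT (column names follow the question; the inserted values are only illustrative):

        CREATE TABLE EXAMPLE (
            ID         INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            FIRST_NAME VARCHAR(45),
            LAST_NAME  VARCHAR(45),
            PHONE      VARCHAR(45),
            CITY       VARCHAR(45),
            STATE      VARCHAR(45),
            ZIP        VARCHAR(45)
        );

        -- ID is omitted and filled in automatically
        INSERT INTO EXAMPLE (FIRST_NAME, LAST_NAME, PHONE, CITY, STATE, ZIP)
        VALUES ('JOHN', 'SMITH', '333-333-3333', 'NORTH POLE', 'AK', '99705');

    Listing the columns is what lets you skip ID; a bare INSERT INTO EXAMPLE VALUES (...) still expects a value for every column.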

    Read the article

  • I need some help optimizing my database schema

    - by Steffan
    Here's a layout of my data:
    Heading 1: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
    Heading 2: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
    Heading 3: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
    Heading 4: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
    Heading 5: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
    These headings need to have a 'Completion Status' boolean value which gets linked to a user ID. Currently, this is how my table looks:
    id | userID | field_1 | field_2 | field_3 | field_4 | etc...
    ------------------------------------------------------------
    1  | 1      | 0       | 0       | 1       | 0       |
    2  | 2      | 1       | 0       | 1       | 1       |
    Each field represents one sub heading. Having this many columns in my table looks awfully inefficient... How can I go about optimizing this? I can't think of any way to neaten it up.
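
    A minimal sketch of one normalized alternative (table and column names are illustrative): store one row per user per sub heading instead of one column per sub heading.

        CREATE TABLE sub_heading (
            sub_heading_id INT PRIMARY KEY AUTO_INCREMENT,
            heading_id     INT NOT NULL,
            name           VARCHAR(100)
        );

        CREATE TABLE user_completion (
            user_id        INT NOT NULL,
            sub_heading_id INT NOT NULL,
            completed      TINYINT(1) NOT NULL DEFAULT 0,
            PRIMARY KEY (user_id, sub_heading_id),
            FOREIGN KEY (sub_heading_id) REFERENCES sub_heading (sub_heading_id)
        );

        -- completion status for one user
        SELECT sh.name, uc.completed
        FROM   sub_heading sh
        LEFT JOIN user_completion uc
               ON uc.sub_heading_id = sh.sub_heading_id AND uc.user_id = 1;

    Adding or removing a sub heading then becomes an INSERT or DELETE rather than an ALTER TABLE.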

    Read the article

  • MySQL MyISAM table performance... painfully, painfully slow

    - by Salman A
    I've got a table structure that can be summarized as follows:
    pagegroup
    * pagegroupid
    * name
    Has 3600 rows.
    page
    * pageid
    * pagegroupid
    * data
    References pagegroup; has 10000 rows; can have anything between 1-700 rows per pagegroup; the data column is of type mediumtext and contains 100k-200k bytes of data per row.
    userdata
    * userdataid
    * pageid
    * column1
    * column2
    * column9
    References page; has about 300,000 rows; can have about 1-50 rows per page.
    The above structure is pretty straightforward; the problem is that a join from userdata to pagegroup is terribly, terribly slow even though I have indexed all the columns that should be indexed. The time needed to run a query for such a join (userdata inner join page inner join pagegroup) exceeds 3 minutes. This is terribly slow considering the fact that I am not selecting the data column at all. Example of the query that takes too long:
    SELECT userdata.column1, pagegroup.name FROM userdata INNER JOIN page USING( pageid ) INNER JOIN pagegroup USING( pagegroupid )
    Please help by explaining why it takes so long and what I can do to make it faster.
    Edit #1 -- EXPLAIN returns the following gibberish:
    id select_type table type possible_keys key key_len ref rows Extra
    1 SIMPLE userdata ALL pageid 372420
    1 SIMPLE page eq_ref PRIMARY,pagegroupid PRIMARY 4 topsecret.userdata.pageid 1
    1 SIMPLE pagegroup eq_ref PRIMARY PRIMARY 4 topsecret.page.pagegroupid 1
    Edit #2
    SELECT u.field2, p.pageid FROM userdata u INNER JOIN page p ON u.pageid = p.pageid; /* 0.07 sec execution, 6.05 sec fetch */
    id select_type table type possible_keys key key_len ref rows Extra
    1 SIMPLE u ALL pageid 372420
    1 SIMPLE p eq_ref PRIMARY PRIMARY 4 topsecret.u.pageid 1 Using index
    SELECT p.pageid, g.pagegroupid FROM page p INNER JOIN pagegroup g ON p.pagegroupid = g.pagegroupid; /* 9.37 sec execution, 60.0 sec fetch */
    id select_type table type possible_keys key key_len ref rows Extra
    1 SIMPLE g index PRIMARY PRIMARY 4 3646 Using index
    1 SIMPLE p ref pagegroupid pagegroupid 5 topsecret.g.pagegroupid 3 Using where
    Moral of the story: keep medium/long text columns in a separate table if you run into performance problems such as this one.
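
    The moral at the end describes the usual fix; a minimal sketch of that split, assuming MySQL/MyISAM and illustrative names, so the tables touched by the join stay narrow and the wide text lives elsewhere:

        -- narrow table that the joins touch
        CREATE TABLE page (
            pageid      INT NOT NULL PRIMARY KEY,
            pagegroupid INT NOT NULL,
            KEY idx_pagegroupid (pagegroupid)
        ) ENGINE=MyISAM;

        -- the mediumtext moved into its own table, read only when the content is actually needed
        CREATE TABLE page_data (
            pageid INT NOT NULL PRIMARY KEY,
            data   MEDIUMTEXT
        ) ENGINE=MyISAM;

        SELECT userdata.column1, pagegroup.name
        FROM userdata
        INNER JOIN page      USING (pageid)
        INNER JOIN pagegroup USING (pagegroupid);

    With MyISAM, rows carrying 100-200 KB of mediumtext make scans drag the text through disk and cache even when the column isn't selected, so keeping it out of the joined table is what speeds the join up.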

    Read the article

  • complicated inserts

    - by liysd
    I have to do something like this:
    insert into object (name, value, first_node) values ('some_name', 123, 0)
    @id = mysql_last_insert_id()
    insert nodes (name, object_id) values ('node_name', @id)
    @id2 = mysql_last_insert_id()
    update object set first_node=@id2 where id=@id
    Is it possible to make this simpler? What if I want to insert more (object, node) pairs with reasonable efficiency?
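
    A minimal sketch of the same sequence done server-side in MySQL with LAST_INSERT_ID() and a transaction (table names follow the question; whether the transaction actually helps depends on the storage engine):

        START TRANSACTION;

        INSERT INTO object (name, value, first_node) VALUES ('some_name', 123, 0);
        SET @obj_id = LAST_INSERT_ID();

        INSERT INTO nodes (name, object_id) VALUES ('node_name', @obj_id);
        SET @node_id = LAST_INSERT_ID();

        UPDATE object SET first_node = @node_id WHERE id = @obj_id;

        COMMIT;

    The circular object-to-node reference is what forces the three statements; for many pairs, the same block can be wrapped in a stored procedure and called once per pair to cut the round trips.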

    Read the article

  • Select number of rows for each group where two column values makes one group

    - by Fábio Antunes
    I have two select statements joined by UNION ALL. In the first statement a where clause gathers only rows that have been shown previously to the user. The second statement gathers all rows that haven't been shown to the user, so I end up with the viewed results first and the non-viewed results after. Of course this could simply be achieved with a single select statement using a simple ORDER BY; however, the reason for two separate selects is simple once you realize what I hope to accomplish. Consider the following structure and data.
    +----+------+-----+--------+------+
    | id | from | to  | viewed | data |
    +----+------+-----+--------+------+
    |  1 |    1 |  10 | true   | .... |
    |  2 |   10 |   1 | true   | .... |
    |  3 |    1 |  10 | true   | .... |
    |  4 |    6 |   8 | true   | .... |
    |  5 |    1 |  10 | true   | .... |
    |  6 |   10 |   1 | true   | .... |
    |  7 |    8 |   6 | true   | .... |
    |  8 |   10 |   1 | true   | .... |
    |  9 |    6 |   8 | true   | .... |
    | 10 |    2 |   3 | true   | .... |
    | 11 |    1 |  10 | true   | .... |
    | 12 |    8 |   6 | true   | .... |
    | 13 |   10 |   1 | false  | .... |
    | 14 |    1 |  10 | false  | .... |
    | 15 |    6 |   8 | false  | .... |
    | 16 |   10 |   1 | false  | .... |
    | 17 |    8 |   6 | false  | .... |
    | 18 |    3 |   2 | false  | .... |
    +----+------+-----+--------+------+
    Basically I want all non-viewed rows to be selected by the statement; that is accomplished by checking whether the viewed column is true or false, pretty simple and straightforward, nothing to worry about here. However, when it comes to the rows already viewed, meaning the column viewed is TRUE, for those records I only want 3 rows to be returned for each group. The appropriate result in this instance should be the 3 most recent rows of each group.
    +----+------+-----+--------+------+
    | id | from | to  | viewed | data |
    +----+------+-----+--------+------+
    |  6 |   10 |   1 | true   | .... |
    |  7 |    8 |   6 | true   | .... |
    |  8 |   10 |   1 | true   | .... |
    |  9 |    6 |   8 | true   | .... |
    | 10 |    2 |   3 | true   | .... |
    | 11 |    1 |  10 | true   | .... |
    | 12 |    8 |   6 | true   | .... |
    +----+------+-----+--------+------+
    As you see from the ideal result set, we have three groups. The desired query for the viewed results should therefore show a maximum of 3 rows for each grouping it finds. In this case the groupings were 10 with 1 and 8 with 6, both of which had three rows to be shown, while the other group, 2 with 3, only had one row to be shown. Please note that from = x and to = y makes the same grouping as from = y and to = x. So considering the first grouping (10 with 1), from = 10 and to = 1 is the same group as from = 1 and to = 10. There are plenty of groups in the whole table, and I only wish the 3 most recent of each to be returned by the select statement. That's my problem: I'm not sure how that can be accomplished in the most efficient way possible, considering the table will have hundreds if not thousands of records at some point. Thanks for your help.
    Note: The columns id, from, to and viewed are indexed; that should help with performance.
    PS: I'm unsure how to name this question exactly; if you have a better idea, be my guest and edit the title.
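
    A minimal sketch of the 'viewed' half using window functions, assuming MySQL 8.0+, that viewed is a boolean, and a placeholder table name your_table (`from` and `to` need backticks because they are reserved words): rows are numbered inside each unordered pair and only the three most recent are kept.

        SELECT id, `from`, `to`, viewed, data
        FROM (
            SELECT t.*,
                   ROW_NUMBER() OVER (
                       PARTITION BY LEAST(`from`, `to`), GREATEST(`from`, `to`)  -- same group regardless of direction
                       ORDER BY id DESC                                           -- most recent first
                   ) AS rn
            FROM your_table t
            WHERE viewed = TRUE
        ) ranked
        WHERE rn <= 3
        ORDER BY id;

    This block can then stand in for the first SELECT of the existing UNION ALL; on MySQL 5.x the same ranking is usually emulated with user variables or a correlated COUNT subquery.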

    Read the article

  • Java & Tomcat: SQL JDBC/JNDI Exceptions

    - by user267581
    I am deploying a webapp from eclipse to tomcat. I am having an issue with my application and JNDI lookups. When the app tries to load the JNDI resource I am this stacktrace: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused: connect] at com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:101) at javax.naming.InitialContext.lookup(InitialContext.java:396) at org.objectweb.carol.jndi.spi.AbsContext.lookup(AbsContext.java:134) at org.objectweb.carol.jndi.spi.AbsContext.lookup(AbsContext.java:144) at javax.naming.InitialContext.lookup(InitialContext.java:392) at org.objectweb.carol.jndi.spi.MultiContext.lookup(MultiContext.java:118) at javax.naming.InitialContext.lookup(InitialContext.java:392) at com.theriabook.daoflex.JDBCConnection.getDataSource(JDBCConnection.java:61) at com.theriabook.daoflex.JDBCConnection.getConnection(JDBCConnection.java:73) at com.aramark.data.UsersDAO.doLogin(UsersDAO.java:751) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at flex.messaging.services.remoting.adapters.JavaAdapter.invoke(JavaAdapter.java:406) at flex.messaging.services.RemotingService.serviceMessage(RemotingService.java:183) at flex.messaging.MessageBroker.routeMessageToService(MessageBroker.java:1417) at flex.messaging.endpoints.AbstractEndpoint.serviceMessage(AbstractEndpoint.java:878) at com.farata.remoting.CustomAMFEndpoint.serviceMessage(CustomAMFEndpoint.java:23) at flex.messaging.endpoints.amf.MessageBrokerFilter.invoke(MessageBrokerFilter.java:121) at flex.messaging.endpoints.amf.LegacyFilter.invoke(LegacyFilter.java:158) at flex.messaging.endpoints.amf.SessionFilter.invoke(SessionFilter.java:49) at flex.messaging.endpoints.amf.BatchProcessFilter.invoke(BatchProcessFilter.java:67) at flex.messaging.endpoints.amf.SerializationFilter.invoke(SerializationFilter.java:146) at flex.messaging.endpoints.BaseHTTPEndpoint.service(BaseHTTPEndpoint.java:274) at flex.messaging.MessageBrokerServlet.service(MessageBrokerServlet.java:377) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.cti.compiler.env.web.CompilerInvocationInterceptor.doFilter(CompilerInvocationInterceptor.java:25) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845) at 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447) at java.lang.Thread.run(Thread.java:619) Caused by: java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused: connect at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:601) at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:198) at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184) at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:322) at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source) at com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:97) ... 41 more Caused by: java.net.ConnectException: Connection refused: connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:525) at java.net.Socket.connect(Socket.java:475) at java.net.Socket.<init>(Socket.java:372) at java.net.Socket.<init>(Socket.java:186) at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:22) at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:128) at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595) I am really stumped at this error. I am using a WinXP running on a Virtual Machine (VMWare) from a MAC. Any ideas? I have uninstalled/reinstalled tomcat multiple times.

    Read the article

  • Filling data in FormView from different tables

    - by wacky_coder
    Hi, I am using a FormView in an online quiz page for displaying the questions and RadioButtons (for the answers): http://stackoverflow.com/questions/2438219/online-quiz-using-asp-dot-net
    Now I need to pick the questions according to a TestID and a particular Set of that Test. The test ID and the set number would be passed using Session variables. I have 3 test IDs and 3 sets per test ID and am therefore using 9 tables to store the questions, the options and the correct answer. I need help on how to set up the FormView so that it extracts the questions from a particular table only. Do I need to use a stored procedure? If yes, how?
    PS: If I use only one table to store all the questions for each Set and each TestID, I can do that, but I'd prefer using separate tables.
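
    For reference, a minimal sketch of the single-table alternative mentioned in the PS, assuming SQL Server and illustrative names: TestID and SetNumber become columns, and the FormView's data source simply filters on the two Session values.

        CREATE TABLE QuizQuestion (
            QuestionID    INT IDENTITY(1,1) PRIMARY KEY,
            TestID        INT NOT NULL,
            SetNumber     INT NOT NULL,
            QuestionText  NVARCHAR(500),
            OptionA       NVARCHAR(200),
            OptionB       NVARCHAR(200),
            OptionC       NVARCHAR(200),
            OptionD       NVARCHAR(200),
            CorrectOption CHAR(1)
        );

        -- the FormView's SELECT, parameterized from the Session values
        SELECT QuestionID, QuestionText, OptionA, OptionB, OptionC, OptionD
        FROM QuizQuestion
        WHERE TestID = @TestID AND SetNumber = @SetNumber;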

    Read the article

  • MySQL ORDER BY date and team

    - by Michael
    I would like to order by date and then team in a MySQL query. It should be something similar to this:
    SELECT * FROM games ORDER BY gamedate ASC, team_id
    And it should output something like this:
    2010-04-12 10:20 Game 1 Team 1
    2010-04-12 11:00 Game 3 Team 1
    2010-04-12 10:30 Game 2 Team 2
    2010-04-14 10:00 Game 4 Team 1
    So that Team 1's games sit under each other on the same date, but start separately on a new date.
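
    A minimal sketch of one way to get that grouping, assuming gamedate is a DATETIME: order by the calendar date first, then the team, then the time, so each team's games on the same day stay together.

        SELECT *
        FROM games
        ORDER BY DATE(gamedate) ASC,  -- group by calendar day first
                 team_id,             -- keep each team's games together within the day
                 gamedate ASC;        -- then order that team's games by kick-off time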

    Read the article

  • Oracle .dmp file

    - by Grasper
    I have an Oracle .dmp file and no access to an Oracle install. Is there any way I can read the data, or open the file in another program, to see what data is in it?

    Read the article

  • asp.net mvc insert doesn't seem to work for me

    - by Pandiya Chendur
    When my controller calls the repository insert method, all the values are passed, but the row doesn't get inserted into my table. My controller method:
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Create([Bind(Exclude = "Id")]FormCollection collection)
    {
        try
        {
            MaterialsObj materialsObj = new MaterialsObj();
            materialsObj.Mat_Name = collection["Mat_Name"];
            materialsObj.Mes_Id = Convert.ToInt64(collection["MeasurementType"]);
            materialsObj.Mes_Name = collection["Mat_Type"];
            materialsObj.CreatedDate = System.DateTime.Now;
            materialsObj.CreatedBy = Convert.ToInt64(1);
            materialsObj.IsDeleted = Convert.ToInt64(1);
            consRepository.createMaterials(materialsObj);
            return RedirectToAction("Index");
        }
        catch
        {
            return View();
        }
    }
    and my repository:
    public MaterialsObj createMaterials(MaterialsObj materialsObj)
    {
        Material mat = new Material();
        mat.Mat_Name = materialsObj.Mat_Name;
        mat.Mat_Type = materialsObj.Mes_Name;
        mat.MeasurementTypeId = materialsObj.Mes_Id;
        mat.Created_Date = materialsObj.CreatedDate;
        mat.Created_By = materialsObj.CreatedBy;
        mat.Is_Deleted = materialsObj.IsDeleted;
        db.Materials.InsertOnSubmit(mat);
        return materialsObj;
    }
    What am I missing here? Any suggestions?

    Read the article

  • Android: SQLite insertion or create table issue

    - by Ram
    Team, can anyone please help me to understand what could be the problem in the below snippet of code? It fails after insertion 2 and insertion 3 debug statements I have the contentValues in the Array list, I am iterating the arraylist and inserting the content values in to the database. Log.d("database","before insertion 1 "); liteDatabase = this.openOrCreateDatabase("Sales", MODE_PRIVATE, null); Log.d("database","before insertion 2 "); liteDatabase .execSQL("Create table activity ( ActivityId VARCHAR,Created VARCHAR,AMTaps_X VARCHAR,AMTemperature_X VARCHAR,AccountId VARCHAR,AccountLocation VARCHAR,AccountName VARCHAR,Classification VARCHAR,ActivityType VARCHAR,COTaps_X VARCHAR,COTemperature_X VARCHAR,Comment VARCHAR,ContactWorkPhone VARCHAR,CreatedByName VARCHAR,DSCycleNo_X VARCHAR,DSRouteNo_X VARCHAR,DSSequenceNo_X VARCHAR,Description VARCHAR,HETaps_X VARCHAR,HETemperature_X VARCHAR,Pro_Setup VARCHAR,LastUpdated VARCHAR,LastUpdatedBy VARCHAR,Licensee VARCHAR,MUTaps_X VARCHAR,MUTemperature_X VARCHAR,Objective VARCHAR,OwnerFirstName VARCHAR,OwnerLastName VARCHAR,PhoneNumber VARCHAR,Planned VARCHAR,PlannedCleanActualDt_X VARCHAR,PlannedCleanReason_X VARCHAR,PrimaryOwnedBy VARCHAR,Pro_Name VARCHAR,ServiceRepTerritory_X VARCHAR,ServiceRep_X VARCHAR,Status VARCHAR,Type VARCHAR,HEINDSTapAuditDate VARCHAR,HEINEmployeeType VARCHAR)"); Log.d("database","before insertion 3 "); int counter = 0; int size = arrayList.size(); for (counter = 0; counter < size; counter++) { ContentValues contentValues = (ContentValues) arrayList .get(counter); liteDatabase.insert("activity", "activityInfo", contentValues); Log.d("database", "Database insertion is done"); } }
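
    One common cause of an execSQL failure at exactly this point is that the table already exists from a previous run of the activity; a minimal SQL-level sketch of the guard, with the column list shortened for illustration:

        CREATE TABLE IF NOT EXISTS activity (
            ActivityId VARCHAR,
            Created    VARCHAR,
            AccountId  VARCHAR
            -- ... remaining VARCHAR columns from the original statement ...
        );

    If the failure is something else, the logcat stack trace will name the exact SQL error that execSQL hit.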

    Read the article

  • How do I perform a batch insert in Django?

    - by Thierry Lam
    In MySQL, you can insert multiple rows into a table in one query, for n > 0:
    INSERT INTO tbl_name (a,b,c) VALUES (1,2,3), (4,5,6), (7,8,9), ..., (n-2, n-1, n);
    Is there a way to achieve the above with Django queryset methods? Here's an example:
    values = [(1, 2, 3), (4, 5, 6), ...]
    for value in values:
        SomeModel.objects.create(first=value[0], second=value[1], third=value[2])
    I believe the above issues an insert query for each iteration of the for loop. I'm looking for a single query; is that possible in Django?

    Read the article

  • Match entities fulfilling filter (strict superset of search)

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes with a known subset of values. I need to search for all of these entities that match all my filter conditions. That is, given a set of Attributes A, I need to find all people whose set of Attributes is a superset of A. For example, my table structures look like this:
    Person:
    id | name
    1  | John Doe
    2  | Jane Roe
    3  | John Smith
    Attribute:
    id | attr_name
    1  | Sex
    2  | Eye Color
    ValidValue:
    id | attr_id | value_name
    1  | 1       | Male
    2  | 1       | Female
    3  | 2       | Blue
    4  | 2       | Green
    5  | 2       | Brown
    PersonAttributes:
    id | person_id | attr_id | value_id
    1  | 1         | 1       | 1
    2  | 1         | 2       | 3
    3  | 2         | 1       | 2
    4  | 2         | 2       | 4
    5  | 3         | 1       | 1
    6  | 3         | 2       | 4
    In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since that's already unique and tied to the attr_id. But where can I go from there? I've been trying to do something like the following, given that the ValidValue is unique in all cases:
    select distinct p from Person p join p.personAttributes a where a.value IN (:values)
    Then I've tried putting my set of required values in as "values", but that gives me errors no matter how I try to structure it. I also have to get a little more complicated, as follows, but at this point I'd be happy with solving the first problem cleanly. However, if it's possible: the Attribute table actually has a field for a default value:
    id | attr_name | default_value
    1  | Sex       | 1
    2  | Eye Color | 5
    If the value you're searching on happens to be the default value, I want the query to also return any people that have no explicit value set for that attribute, because in the application logic that means they inherit the default value. Again, I'm more concerned about the primary question, but if someone who can help with that also has some idea of how to do this one, I'd be extremely grateful.
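
    For the primary question, a minimal sketch in plain SQL (the JPQL version follows the same shape): relational division, keeping only people who match every required value_id. The ids 1 and 4 stand in for 'Male' and 'Green' from the sample data.

        SELECT pa.person_id
        FROM   PersonAttributes pa
        WHERE  pa.value_id IN (1, 4)              -- the required attribute values
        GROUP  BY pa.person_id
        HAVING COUNT(DISTINCT pa.value_id) = 2;   -- must match all of them (size of the filter set)

    The default-value wrinkle would need an extra OR/NOT EXISTS branch per attribute and is left out of the sketch.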

    Read the article

  • Table Partitioning

    - by Ankur Gahlot
    How advantageous is it to use table partitioning compared with the normal, unpartitioned approach? Is there a sample case or detailed comparative analysis that could statistically (I know this is too strong a word, but it would really help if it were illustrated by some numbers) emphasize the utility of the process? Thanks, Ankur
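
    Not numbers, but for concreteness, a minimal sketch of what is being compared, assuming MySQL range partitioning and illustrative names; the benefit shows up when queries filter on the partitioning column and only a few partitions are touched.

        CREATE TABLE orders (
            order_id   INT NOT NULL,
            order_date DATE NOT NULL,
            amount     DECIMAL(10,2)
        )
        PARTITION BY RANGE (YEAR(order_date)) (
            PARTITION p2008 VALUES LESS THAN (2009),
            PARTITION p2009 VALUES LESS THAN (2010),
            PARTITION pmax  VALUES LESS THAN MAXVALUE
        );

        -- only the p2009 partition is scanned (partition pruning)
        SELECT SUM(amount)
        FROM orders
        WHERE order_date >= '2009-01-01' AND order_date < '2010-01-01';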

    Read the article

  • Cursor while loop returning every value but the last

    - by LordSnoutimus
    Hello, I am using a while loop to iterate through a cursor and then outputting the longitude and latitude values of every point within the database. For some reason it is not returning the last (or first, depending on whether I use Cursor.MoveToLast) set of longitude and latitude values in the cursor. Here is my code:
    public void loadTrack() {
        SQLiteDatabase db1 = waypoints.getWritableDatabase();
        Cursor trackCursor = db1.query(TABLE_NAME, FROM, "trackidfk=1", null, null, null, ORDER_BY);
        trackCursor.moveToFirst();
        while (trackCursor.moveToNext()) {
            Double lat = trackCursor.getDouble(2);
            Double lon = trackCursor.getDouble(1);
            //overlay.addGeoPoint( new GeoPoint( (int)(lat*1E6), (int)(lon*1E6)));
            System.out.println(lon);
            System.out.println(lat);
        }
    }
    From this I am getting:
    04-02 15:39:07.416: INFO/System.out(10551): 3.0
    04-02 15:39:07.416: INFO/System.out(10551): 5.0
    04-02 15:39:07.416: INFO/System.out(10551): 4.0
    04-02 15:39:07.416: INFO/System.out(10551): 5.0
    04-02 15:39:07.416: INFO/System.out(10551): 5.0
    04-02 15:39:07.416: INFO/System.out(10551): 5.0
    04-02 15:39:07.416: INFO/System.out(10551): 4.0
    04-02 15:39:07.416: INFO/System.out(10551): 4.0
    04-02 15:39:07.416: INFO/System.out(10551): 3.0
    04-02 15:39:07.416: INFO/System.out(10551): 3.0
    04-02 15:39:07.416: INFO/System.out(10551): 2.0
    04-02 15:39:07.416: INFO/System.out(10551): 2.0
    04-02 15:39:07.493: INFO/System.out(10551): 1.0
    04-02 15:39:07.493: INFO/System.out(10551): 1.0
    That is 7 sets of values, where I should be getting 8 sets. Thanks.

    Read the article

  • Connect sortable lists together and update SQL using jQuery UI

    - by rebellion
    I'm using jQuery UI's sortable lists to sort items in a todo list, and the reordering of the lists works like a charm. I have several UL lists, one for each todo category, i.e. Design, Development, etc. I want to be able to move an item from one category to another, and jQuery UI's sortable plugin allows me to do this with the connectWith option. But I can't seem to find out how to update the database as well. The todo database table has a field referencing the category, which lives in its own table. This is the current code:
    $('.sortable').sortable({
        axis: 'y',
        opacity: .5,
        cancel: '.no-tasks',
        connectWith: '.sortable',
        update: function(){
            $.ajax({
                type: 'POST',
                url: basepath + 'task/sort_list',
                data: $(this).sortable('serialize')
            });
        }
    });
    Do I need to add another value to the AJAX data parameter, or does the sortable plugin do this for me in the serialize function?
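
    On the server side, a minimal sketch of what task/sort_list might run per item once it knows the receiving category and the serialized order (table and column names are illustrative; the category id has to be posted alongside the serialized data, since serialize only encodes the item ids of the list it was called on):

        UPDATE task
        SET    sort_order  = ?,   -- position of the item within the list it was dropped into
               category_id = ?    -- id of the category (UL) the item now belongs to
        WHERE  id = ?;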

    Read the article

  • Drop a DB2 view if it exists...

    - by grenade
    Why doesn't this work in IBM Data Studio (Eclipse)?
    IF EXISTS (SELECT 1 FROM SYSIBM.SYSVIEWS WHERE NAME = 'MYVIEW' AND CREATOR = 'MYSCHEMA') THEN
        DROP VIEW MYSCHEMA.MYVIEW;
    END IF;
    I have a feeling it has to do with statement terminators (;), but I can't find a syntax that works. A similar question at http://stackoverflow.com/questions/355687/how-to-check-a-procedure-view-table-exists-or-not-before-dropping-it-in-db2-9-1 suggests creating a proc, but this isn't a solution for us.
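
    A minimal sketch of one pattern often suggested for DB2 for LUW 9.7+, assuming the statement terminator in Data Studio is changed from ';' to something else (for example '@') so the compound statement survives: swallow the 'undefined name' error instead of testing for existence first.

        BEGIN
            DECLARE CONTINUE HANDLER FOR SQLSTATE '42704'  -- object does not exist
                BEGIN END;
            EXECUTE IMMEDIATE 'DROP VIEW MYSCHEMA.MYVIEW';
        END@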

    Read the article

  • Joining two queries into one query or making a sub-query

    - by gary A.K.A. G4
    I am having some trouble with the following queries, originally done for some Access forms:
    SELECT qry1.TCKYEAR AS Yr, COUNT(qry1.SID) AS STUDID, qry1.SID AS MID, table_tckt.tckt_tick_no
    FROM table_tckt INNER JOIN qry1 ON table_tckt.tckt_SID = qry1.SID
    GROUP BY qry1.TCKYEAR, qry1.SID, table_tckt.tckt_tick_no
    HAVING (((table_tckt.tick_no)=[forms]![frmNAME]![cboNAME]));

    SELECT table_tckt.sid, FORMAT([tckt_iss_date], 'yyyy') AS TCKYEAR, table_tckt.tckt_tick_no, table_tckt.licstate
    FROM table_tckt
    WHERE (((table_tckt.licstate)<>"NA"));

    I am no longer working with Access, but JSP for the forms. I need to somehow either combine these two queries into one query or find another way to have a query 'query' another one.
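
    A minimal sketch of folding the second query into the first as a derived table, so nothing depends on a saved qry1 anymore. Assumptions: the [forms]! reference becomes a bind parameter supplied from the JSP, the HAVING filter (which references tick_no) is taken to mean tckt_tick_no and moves to a WHERE since it involves no aggregate, and the exact year-extraction function depends on which database now sits behind the JSP.

        SELECT qry1.TCKYEAR AS Yr,
               COUNT(qry1.sid) AS STUDID,
               qry1.sid AS MID,
               t.tckt_tick_no
        FROM   table_tckt t
        INNER JOIN (
                SELECT sid,
                       FORMAT(tckt_iss_date, 'yyyy') AS TCKYEAR,
                       tckt_tick_no,
                       licstate
                FROM   table_tckt
                WHERE  licstate <> 'NA'
               ) AS qry1
               ON t.tckt_SID = qry1.sid
        WHERE  t.tckt_tick_no = ?        -- value that used to come from [forms]![frmNAME]![cboNAME]
        GROUP BY qry1.TCKYEAR, qry1.sid, t.tckt_tick_no;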

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:
    +---------+---------+------------+---------------------+---------------------+--------+------+
    | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
    +---------+---------+------------+---------------------+---------------------+--------+------+
    | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
    +---------+---------+------------+---------------------+---------------------+--------+------+
    The reprocessing dis-aggregation we would like to do is to transform the table content to a granularity of minutes, rather than the current production-event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of existing table rows would look like:
    +---------+---------+------------+-------------------+--------+
    | User ID | Work ID | Machine ID | Production Minute | Output |
    +---------+---------+------------+-------------------+--------+
    | 080025  | ABC123  | M01        | 2010-01-24 16:19  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:20  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:21  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:22  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:23  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:24  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:25  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:26  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:27  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:28  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:29  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:30  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:31  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:32  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:33  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:34  | 133    |
    +---------+---------+------------+-------------------+--------+
    So the reprocessing would take an existing row of data created at the granularity of a production event and change the granularity to minutes, eliminating the now-redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides Output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
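
    A minimal sketch of doing it entirely in MySQL, assuming a small helper table of integers (here called minute_seq, with a single column n = 0, 1, 2, ...) and illustrative source/target table names; each event row is joined to as many integers as minutes it spans.

        INSERT INTO production_by_minute (user_id, work_id, machine_id, production_minute, output)
        SELECT e.user_id,
               e.work_id,
               e.machine_id,
               DATE_FORMAT(e.event_start_time + INTERVAL m.n MINUTE, '%Y-%m-%d %H:%i') AS production_minute,
               ROUND(e.output / (TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time) + 1)) AS output
        FROM   production_event e
        JOIN   minute_seq m
               ON m.n <= TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time);

    The helper table only needs as many rows as the longest event spans in minutes, and the join handles hundreds of machines naturally because each source row fans out independently.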

    Read the article

  • How to get a query result even if the JOIN hasn't found any results?

    - by user1734651
    I want to select data for a user, and join extra info from another table that is related to the user. The problem is that this extra data doesn't always exist for every user, only for a few. How can I write a query that returns NULL for the missing data, instead of returning NULL for the whole query?
    SELECT a.*, b.*
    FROM user AS a
    LEFT JOIN extra AS b ON (a.userid = b.userid)
    WHERE a.userid = {$userid}
    LIMIT 1
    When extra data is found for the user, I get the resource as expected. If not, I get NULL for the whole query. Bottom line, I don't care whether "extra" exists for the user or not: if yes, select it as well; if not, ignore it.

    Read the article
