Search Results

Search found 41025 results on 1641 pages for 'in memory database'.


  • JVM process resident set size "equals" max heap size, not current heap size

    - by Volune
    After some reading about JVM memory (here, here, here, and others I forget), I was expecting the resident set size of my Java process to be roughly equal to the current heap capacity. That's not what the numbers say: it seems to be roughly equal to the max heap capacity instead.

    Resident set size:

      # echo 0 $(cat /proc/1/smaps | grep Rss | awk '{print $2}' | sed 's#^#+#') | bc
      11507912
      # ps -C java -O rss | gawk '{ count++; sum += $2 }; END { count--; print "Number of processes =", count; print "Memory usage per process =", sum/1024/count, "MB"; print "Total memory usage =", sum/1024, "MB" };'
      Number of processes = 1
      Memory usage per process = 11237.8 MB
      Total memory usage = 11237.8 MB

    Java heap:

      # jmap -heap 1
      Attaching to process ID 1, please wait...
      Debugger attached successfully.
      Server compiler detected.
      JVM version is 24.55-b03
      using thread-local object allocation.
      Garbage-First (G1) GC with 18 thread(s)

      Heap Configuration:
         MinHeapFreeRatio = 10
         MaxHeapFreeRatio = 20
         MaxHeapSize      = 10737418240 (10240.0MB)
         NewSize          = 1363144 (1.2999954223632812MB)
         MaxNewSize       = 17592186044415 MB
         OldSize          = 5452592 (5.1999969482421875MB)
         NewRatio         = 2
         SurvivorRatio    = 8
         PermSize         = 20971520 (20.0MB)
         MaxPermSize      = 85983232 (82.0MB)
         G1HeapRegionSize = 2097152 (2.0MB)

      Heap Usage:
      G1 Heap:
         regions  = 2560
         capacity = 5368709120 (5120.0MB)
         used     = 1672045416 (1594.586769104004MB)
         free     = 3696663704 (3525.413230895996MB)
         31.144272834062576% used
      G1 Young Generation:
      Eden Space:
         regions  = 627
         capacity = 3279945728 (3128.0MB)
         used     = 1314914304 (1254.0MB)
         free     = 1965031424 (1874.0MB)
         40.089514066496164% used
      Survivor Space:
         regions  = 49
         capacity = 102760448 (98.0MB)
         used     = 102760448 (98.0MB)
         free     = 0 (0.0MB)
         100.0% used
      G1 Old Generation:
         regions  = 147
         capacity = 1986002944 (1894.0MB)
         used     = 252273512 (240.5867691040039MB)
         free     = 1733729432 (1653.413230895996MB)
         12.702574926293766% used
      Perm Generation:
         capacity = 39845888 (38.0MB)
         used     = 38884120 (37.082786560058594MB)
         free     = 961768 (0.9172134399414062MB)
         97.58628042120682% used

      14654 interned Strings occupying 2188928 bytes.

    Are my expectations wrong? What should I expect? I need the heap to be able to grow during spikes (to avoid very slow full GCs), but I would like the resident set size to stay as low as possible the rest of the time, to benefit the other processes running on the server. Is there a better way to achieve that?

    Environment: Linux 3.13.0-32-generic x86_64, java version "1.7.0_55", running in Docker version 1.1.2. Java is running Elasticsearch 1.2.0:

      /usr/bin/java -Xms5g -Xmx10g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -Xss256k -Djava.awt.headless=true -XX:+UseG1GC -XX:MaxGCPauseMillis=350 -XX:InitiatingHeapOccupancyPercent=45 -XX:+AggressiveOpts -XX:+UseCompressedOops -XX:-OmitStackTraceInFastThrow -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintClassHistogram -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -Xloggc:/opt/elasticsearch/logs/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/elasticsearch/logs/heapdump.hprof -XX:ErrorFile=/opt/elasticsearch/logs/hs_err.log -Des.logger.port=99999 -Des.logger.host=999.999.999.999 -Delasticsearch -Des.foreground=yes -Des.path.home=/opt/elasticsearch -cp :/opt/elasticsearch/lib/elasticsearch-1.2.0.jar:/opt/elasticsearch/lib/*:/opt/elasticsearch/lib/sigar/* org.elasticsearch.bootstrap.Elasticsearch

    There are actually 5 Elasticsearch nodes, each in a different Docker container; all have about the same memory usage. Some stats about the index: size: 9.71Gi (19.4Gi), docs: 3,925,398 (4,052,694).
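
    A small Python sketch of the same measurement the shell pipeline above performs, reading VmRSS from /proc/<pid>/status (PID 1 is the containerized Java process in the question; adjust for other setups):

      # Report a process's resident set size in MB, as the kernel accounts it.
      def rss_mb(pid=1):
          with open('/proc/%d/status' % pid) as f:
              for line in f:
                  if line.startswith('VmRSS:'):
                      return int(line.split()[1]) / 1024.0  # value is in kB
          return None

      print('RSS = %.1f MB' % rss_mb(1))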

    Read the article

  • MKMapView memory usage grows out of control with setRegion: calls

    - by Kurt
    Hi, I have a single MKMapView instance that I have programmatically added to a UIView. As part of the UI, the user can cycle through a list of addresses, and the map view is updated to show the correct map for each address as the user goes through them. I create the map view once, and simply change what it displays with setRegion:animated:.

    The problem is that each time the map is changed to show a new address, the memory usage of my program increases by 200K-500K (as reported by Memory Monitor in Instruments). According to Object Allocations, a lot of 1.0K mallocs happen each time, and the Extended Detail pane for these 1.0K allocations shows that the responsible caller is convert_image_data and that this is the result of [MKMapTileView drawLayer:inContext:]. So it seems likely to me that the memory usage is due to MKMapView not freeing the memory it uses to redraw the map each time.

    In fact, when I don't display the map at all (by not even adding it as a subview of my main UIView) but still cycle through the addresses (which changes various UILabels and other displayed info), the memory usage for the app does NOT increase. If I add the map view but never update it with setRegion:, the memory also does NOT increase when changing to a new address. One more bit of info: if I go to a new address (and therefore ask the map to display the new address), the memory jumps as described above. However, if I go back to an address that was already displayed, the memory does not jump when the map redraws with the old address. Also, this happens on iPad (real device) with 3.2 and on iPhone (again, real device) with 3.1.2.

    Here's how I initialize the MKMapView (I only do this once):

      CGRect mapFrame;
      mapFrame.origin.y = 460; // yes, magic numbers. just for testing.
      mapFrame.origin.x = 0;
      mapFrame.size.height = 500;
      mapFrame.size.width = 768;
      mapView = [[MKMapView alloc] initWithFrame:mapFrame];
      mapView.delegate = self;
      [self.view insertSubview:mapView atIndex:0];

    And in response to the user selecting an address, I set the map like so:

      MKCoordinateRegion region;
      MKCoordinateSpan span;
      span.latitudeDelta = kStreetMapSpan;  // 0.003
      span.longitudeDelta = kStreetMapSpan; // 0.003
      region.center = address.coords; // coords is CLLocationCoordinate2D
      region.span = span;
      [mapView setRegion:region animated:NO];

    Any thoughts? I've scoured the net but haven't seen mention of this problem, and I've reached the limits of my Instruments knowledge. Thanks for any ideas.

    Read the article

  • UnboundLocalError: local variable 'rows' referenced before assignment

    - by patrick
    I'm trying to make a database connection through another script, but the script doesn't work properly: if I print the rows, I get the value 'null'. But if I use a 'select * from incidents' query, I do get the results from the incidents table.

      import database

      rows = database.database("INSERT INTO incidents VALUES(3, 'test_title1', 'test', TO_DATE('25-07-2012', 'DD-MM-YYYY'), CURRENT_TIMESTAMP, 'sector', 50, 60)")
      #print database.database()
      print rows

    The database.py script:

      import psycopg2
      import sys
      import logfile

      def database(query):
          logfile.log(20, 'database.py', 'Executing...')
          con = None
          try:
              con = psycopg2.connect(database='incidents', user='ipfit5', password='tester')
              cur = con.cursor()
              #print query
              cur.execute(query)
              rows = cur.fetchall()
              con.commit()
              #test row does work
              #cur.execute("INSERT INTO incidents VALUES(3, 'test_titel1', 'test', TO_DATE('25-07-2012', 'DD-MM-YYYY'), CURRENT_TIMESTAMP, 'sector', 50, 60)")
          except:
              logfile.log(40, 'database.py', 'Er is iets mis gegaan')  # Dutch: "Something went wrong"
              logfile.log(40, 'database.py', str(sys.exc_info()))
          finally:
              if con:
                  con.close()
              return rows
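
    For reference, a minimal sketch (not from the original post) of one way the UnboundLocalError goes away: bind rows before the try block, and only fetch when the statement actually returns a result set, since fetchall() after an INSERT raises and leaves rows unassigned:

      import psycopg2

      def database(query):
          rows = None  # bound up front, so the return path never sees an unbound name
          con = None
          try:
              con = psycopg2.connect(database='incidents', user='ipfit5',
                                     password='tester')
              cur = con.cursor()
              cur.execute(query)
              if cur.description is not None:  # only SELECT-like statements return rows
                  rows = cur.fetchall()
              con.commit()
          except Exception as e:
              print('query failed:', e)
          finally:
              if con:
                  con.close()
          return rows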

    Read the article

  • Memory problem with MySQL "SELECT *"

    - by Austin Huang
    Dear all: I'm new to MySQL, and I have a question about memory. I have a 200MB table (MyISAM, 2,000,000 rows), and I try to load all of it into memory. I use Python (actually MySQLdb) with the SQL: SELECT * FROM table. However, from the Linux "top" command I see this Python process using 50% of my memory (which is 6GB in total). I'm curious why it uses about 3GB of memory for only a 200MB table. Thanks in advance!
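
    If the goal is to scan the whole table without holding every row in Python at once, here is a minimal sketch using MySQLdb's server-side cursor (connection details are placeholders). Rows are streamed from the server instead of being materialized as ~2,000,000 Python tuples up front, which is where the 3GB tends to go:

      import MySQLdb
      import MySQLdb.cursors

      # SSCursor leaves the result set on the server and streams rows one at
      # a time, instead of buffering all of them in client memory first.
      con = MySQLdb.connect(host='localhost', user='user', passwd='secret',
                            db='mydb',
                            cursorclass=MySQLdb.cursors.SSCursor)
      cur = con.cursor()
      cur.execute('SELECT * FROM mytable')
      for row in cur:
          pass  # handle one row at a time here
      con.close()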

    Read the article

  • Why might PC3200 ECC ram be incompatible when upgrading?

    - by Zak
    I had some 512MB PC3200 Kingston memory modules in a server, and I ordered replacement Kingston memory in 1GB sticks. The vendor said they were out of stock, and I agreed to let them ship me "compatible" Samsung memory. However, after installing it, the motherboard posted BIOS errors and wouldn't even boot to a BIOS screen. Both sets of memory were ECC PC3200. Any ideas why that would happen?

    Read the article

  • Is it possible for double-escaping to cause harm to the DB?

    - by waiwai933
    If I accidentally double-escape a string, can the DB be harmed? For the purposes of this question, let's say I'm not using parameterized queries. For example, let's say I get the following input:

      bob's bike

    And I escape that:

      bob\'s bike

    But my code is horrible, and escapes it again:

      bob\\\'s bike

    Now, if I insert that into a DB, the value in the DB will be:

      bob\'s bike

    Which, while not what I want, won't harm the DB. Is it possible for any double-escaped input to do something malicious to the DB, assuming that I take all other necessary security precautions?
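
    As a side-by-side, here is a minimal sketch of the parameterized-query route the question deliberately sets aside: the driver does the quoting, so neither single nor double escaping ever enters the picture (sqlite3 is used only to keep the example self-contained):

      import sqlite3

      con = sqlite3.connect(':memory:')
      con.execute('CREATE TABLE items (name TEXT)')

      # The value never gets spliced into the SQL text, so there is
      # nothing to escape (and nothing to accidentally escape twice).
      name = "bob's bike"
      con.execute('INSERT INTO items (name) VALUES (?)', (name,))

      print(con.execute('SELECT name FROM items').fetchone())  # ("bob's bike",)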

    Read the article

  • ERD Design help needed

    - by Mobi
    Hello guys, I am new to ERDs and such. Earlier I was drawing an ERD that gave me some problems. The two entities in focus are "Bus" and "Passenger"; what should the relationship between them be? I think it should be many-to-many, since one passenger can travel on many buses and a bus can give rides to many passengers. But one of my friends insists it's a one-to-many relationship (a bus can have many passengers, but a passenger can travel on only one bus). Please let me know what's right. Also, what's the relationship between a class and students? Any help is appreciated.
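
    To make the many-to-many reading concrete, here is a minimal sketch of the usual resolution, an associative (junction) entity between Bus and Passenger; table and column names are illustrative, and sqlite3 just keeps the example self-contained:

      import sqlite3

      con = sqlite3.connect(':memory:')
      con.executescript('''
      CREATE TABLE bus       (bus_id INTEGER PRIMARY KEY, route TEXT);
      CREATE TABLE passenger (passenger_id INTEGER PRIMARY KEY, name TEXT);

      -- The junction table makes Bus <-> Passenger many-to-many: each row
      -- records one passenger riding one bus, so a passenger can appear
      -- with many buses and a bus with many passengers.
      CREATE TABLE ride (
          bus_id       INTEGER REFERENCES bus(bus_id),
          passenger_id INTEGER REFERENCES passenger(passenger_id),
          PRIMARY KEY (bus_id, passenger_id)
      );
      ''')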

    Read the article

  • Will a database server perform better running on 2 CPUs with 16 cores or 4 CPUs with 8 cores?

    - by AlexOdin
    What I have:

      - an online financial application (ASP.NET, C#); at peak we have 5K+ simultaneous users
      - a backend running on Oracle 11g (active server plus stand-by using Active Data Guard); at peak, 4K-5K database sessions
      - Oracle installed on Linux 5.8 (Oracle's Unbreakable version)
      - database size: 7TB
      - disk storage: NetApp (connected over a 10Gb network)

    I would like to replace the old servers (IT will purchase HP BL685c blades). The servers will have 256GB of RAM. I need your help figuring out what to do with CPUs and cores. The options:

      - 2 CPUs (2.3 GHz) with 16 cores each
      - 4 CPUs (3.0 GHz) with 8 cores each

    Which one should I pick? P.S. Next year we will migrate from Oracle to SQL Server; I hope whatever option you recommend will work for both platforms.

    Read the article

  • What database is easy to maintain and manage in a cluster?

    - by Sanoj
    I'm looking for a database (DBMS) that is easy to scale out. I would like high availability, so I need a multi-master cluster where the data is replicated to two or more physical computers. I would also like to be able to start with one node (no replication) and then scale out to more nodes as needed, without a reinstallation or downtime. I would like a DBMS that is easy to maintain and manage: it should be easy to add nodes, remove nodes, take live backups, and monitor the use of resources. It doesn't have to be a relational database system, so NoSQL is okay. And I would like a free version, so I can test it at small scale and compare it with alternatives. What alternatives do I have?

    Read the article

  • Best practice for stock management when a customer's payment fails, using SQL Server and ASP.NET

    - by Martijn B
    Hi there, I am currently building a webshop of my own, where I want to increment the product stock when the customer fails to complete payment within 10 minutes of placing the order. I want to gather information from this thread to make a design decision. I am using SQL Server 2008 and ASP.NET 3.5. Should I use a SQL Server job that checks the unpaid orders at intervals, or are there better solutions for this? Thanks in advance! Martijn
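
    One shape the interval-checking job could take, sketched against an invented schema (orders, order_lines, and products are placeholders, not from the question); the same statements could equally live inside a SQL Server Agent job:

      import pyodbc

      # Hypothetical schema for the sketch: orders(id, status, created_at),
      # order_lines(order_id, product_id, quantity), products(id, stock).
      con = pyodbc.connect('DSN=webshop')  # placeholder connection string
      cur = con.cursor()

      # Expire unpaid orders older than 10 minutes, collecting their ids so
      # the stock for each order is returned exactly once.
      cur.execute("""
          UPDATE orders SET status = 'expired'
          OUTPUT inserted.id
          WHERE status = 'pending'
            AND created_at < DATEADD(MINUTE, -10, GETDATE())
      """)
      expired_ids = [row[0] for row in cur.fetchall()]

      for order_id in expired_ids:
          cur.execute("""
              UPDATE p SET p.stock = p.stock + q.qty
              FROM products p
              JOIN (SELECT product_id, SUM(quantity) AS qty
                    FROM order_lines WHERE order_id = ?
                    GROUP BY product_id) q ON q.product_id = p.id
          """, order_id)

      con.commit()
      con.close()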

    Read the article

  • recursive delete trigger and ON DELETE CASCADE constraints are not deleting everything

    - by bitbonk
    I have a very simple data model that represents a tree structure: the RootEntity is the root of such a tree; it can contain children of type ContainerEntity and of type AtomEntity. The type ContainerEntity can again contain children of type ContainerEntity and of type AtomEntity, but cannot contain children of type RootEntity. Children are referenced in a well-known order. The DB model for this is below.

    My problem now is that when I delete a RootEntity, I want all children to be deleted recursively. I have created foreign keys with CASCADE DELETE and two delete triggers for this. But it is not deleting everything; it always leaves some items in the ContainerEntity, AtomEntity, ContainerEntity_Children and AtomEntity_Children tables, seemingly beginning at recursion level 3.

      CREATE TABLE RootEntity (
          Id UNIQUEIDENTIFIER NOT NULL,
          Name VARCHAR(500) NOT NULL,
          CONSTRAINT PK_RootEntity PRIMARY KEY NONCLUSTERED (Id)
      );

      CREATE TABLE ContainerEntity (
          Id UNIQUEIDENTIFIER NOT NULL,
          Name VARCHAR(500) NOT NULL,
          CONSTRAINT PK_ContainerEntity PRIMARY KEY NONCLUSTERED (Id)
      );

      CREATE TABLE AtomEntity (
          Id UNIQUEIDENTIFIER NOT NULL,
          Name VARCHAR(500) NOT NULL,
          CONSTRAINT PK_AtomEntity PRIMARY KEY NONCLUSTERED (Id)
      );

      CREATE TABLE RootEntity_Children (
          ParentId UNIQUEIDENTIFIER NOT NULL,
          OrderIndex INT NOT NULL,
          ChildContainerEntityId UNIQUEIDENTIFIER NULL,
          ChildAtomEntityId UNIQUEIDENTIFIER NULL,
          ChildIsContainerEntity BIT NOT NULL,
          CONSTRAINT PK_RootEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex),
          -- foreign key to parent RootEntity
          CONSTRAINT FK_RootEntiry_Children__RootEntity FOREIGN KEY (ParentId)
              REFERENCES RootEntity (Id) ON DELETE CASCADE,
          -- foreign key to referenced (child) ContainerEntity
          CONSTRAINT FK_RootEntiry_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId)
              REFERENCES ContainerEntity (Id) ON DELETE CASCADE,
          -- foreign key to referenced (child) AtomEntity
          CONSTRAINT FK_RootEntiry_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId)
              REFERENCES AtomEntity (Id) ON DELETE CASCADE
      );

      CREATE TABLE ContainerEntity_Children (
          ParentId UNIQUEIDENTIFIER NOT NULL,
          OrderIndex INT NOT NULL,
          ChildContainerEntityId UNIQUEIDENTIFIER NULL,
          ChildAtomEntityId UNIQUEIDENTIFIER NULL,
          ChildIsContainerEntity BIT NOT NULL,
          CONSTRAINT PK_ContainerEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex),
          -- foreign key to parent ContainerEntity
          CONSTRAINT FK_ContainerEntity_Children__RootEntity FOREIGN KEY (ParentId)
              REFERENCES ContainerEntity (Id) ON DELETE CASCADE,
          -- foreign key to referenced (child) ContainerEntity
          CONSTRAINT FK_ContainerEntity_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId)
              REFERENCES ContainerEntity (Id) ON DELETE CASCADE,
          -- foreign key to referenced (child) AtomEntity
          CONSTRAINT FK_ContainerEntity_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId)
              REFERENCES AtomEntity (Id) ON DELETE CASCADE
      );

      CREATE TRIGGER Delete_RootEntity_Children ON RootEntity_Children FOR DELETE AS
          DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted)
          DELETE FROM AtomEntity WHERE Id IN (SELECT ChildAtomEntityId FROM deleted)
      GO

      CREATE TRIGGER Delete_ContainerEntiy_Children ON ContainerEntity_Children FOR DELETE AS
          DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted)
          DELETE FROM AtomEntity WHERE Id IN (SELECT ChildAtomEntityId FROM deleted)
      GO

    Read the article

  • Database server: Small quick RAM or large slow RAM?

    - by Josh Smeaton
    We are currently designing our new database servers and have come up with a trade-off I'm not entirely sure how to answer. These are our options: 48GB at 1333MHz, or 96GB at 1066MHz. My thinking is that RAM for a database server should be plentiful (we have plenty and plenty of data, and some very large queries) rather than as quick as possible. Apparently we can't get 16GB chips at 1333MHz, hence the choices above. So, should we get lots of slower RAM, or less of faster RAM? Extra info: number of DIMM slots available: 6; servers: Dell blades; CPU: 6-core (only a single socket due to Oracle licensing).

    Read the article

  • How long should it take for someone to be able to type code from memory?

    - by LordSnoutimus
    Hi, I understand that this question could be answered with a simple sentence and that it may be viewed as subjective. However, I am a young student who is interested in pursuing a career in programming, and I wondered how long it took some of you to get to the level of experience you have now. I ask this because I am currently building an application in Java on the Android platform, and it bothers me that I am constantly having to look up how to write a certain section of code in my application, such as writing to a database or how an if statement should be structured. My question really is: how long did it take for you to become experienced enough to know exactly how your next line of code was going to look, before you even wrote it?

    Read the article

  • visual studio 2010 database project, is there a visual way?

    - by b0x0rz
    I started a Visual Studio 2010 database project; however, I am only able to write SQL in text mode. There is no functionality for working on a table in a visual view, for example, as there is when you add a new database to the App_Data folder and work on it there. Is this the only way, with no visual way of doing it in a Visual Studio 2010 database project? Or am I missing some obvious way of getting to it? Thank you. Also, if there is a tutorial anywhere (video, maybe?), please link it; I only found a video on importing a database from an existing script using a wizard. I would like a new database built from scratch, without the wizard.

    Read the article

  • structured vs. unstructured data in db

    - by Igor
    The question is one of design. I'm gathering a big chunk of performance data with lots of key-value pairs: pretty much everything in /proc/cpuinfo, /proc/meminfo, /proc/loadavg, plus a bunch of other stuff, from several hundred hosts. Right now, I just need to display the latest chunk of data in my UI. I will probably end up doing some analysis of the gathered data to figure out performance problems down the road, but this is a new application, so I'm not sure exactly what I'm looking for performance-wise just yet.

    I could structure the data in the DB: have a column for each key I'm gathering. The table would end up being O(100) columns wide, it would be a pain to put into the DB, and I would have to add new columns if I start gathering a new stat. But it would be easy to sort and analyze the data using plain SQL.

    Or I could just dump my unstructured data blob into the table: maybe three columns -- host id, timestamp, and a serialized version of my array, probably using JSON in a TEXT field.

    Which should I do? Am I going to be sorry if I go with the unstructured approach? When doing analysis, should I just convert the fields I'm interested in and create a new, more structured table? What are the trade-offs I'm missing here?
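
    A minimal sketch of the unstructured variant described above (three columns with JSON in a TEXT field); sqlite3 stands in for the real database to keep the example self-contained:

      import json
      import sqlite3
      import time

      con = sqlite3.connect(':memory:')
      con.execute('''CREATE TABLE samples (
          host_id INTEGER,
          ts      INTEGER,
          data    TEXT    -- one serialized key-value snapshot per row
      )''')

      # One snapshot of stats for one host, as in the question.
      stats = {'loadavg_1m': 0.42, 'mem_free_kb': 1048576}
      con.execute('INSERT INTO samples VALUES (?, ?, ?)',
                  (17, int(time.time()), json.dumps(stats)))

      # Displaying the latest chunk means deserializing in the application;
      # ad hoc SQL over individual keys is what this layout gives up.
      row = con.execute('SELECT data FROM samples ORDER BY ts DESC').fetchone()
      print(json.loads(row[0])['loadavg_1m'])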

    Read the article

  • Mobile Intel® GMA 4500MHD boost

    - by Andy Smith
    My machine has a Mobile Intel® GMA 4500MHD integrated graphics chipset and is currently running 64-bit Windows 7 Premium with 3GB of RAM (1x1GB and 1x2GB). I note that the GMA 4500MHD shares physical memory to process graphics: the total available graphics memory can be up to 1,340 MB with a 32-bit operating system and 3 GB of system memory, or 1,759 MB with a 64-bit operating system and 4 GB of system memory. I am considering investing in a 4GB stick to replace the 1GB stick, bringing the total up to 6GB, mainly for an increase in graphics processing ability. Can anyone tell me what sort of gain (if any, over 4GB) I could expect by upgrading to 6GB?

    Read the article

  • Synonym for "Many-to-Many" relationship (relational databases)

    - by Byron
    What's a synonym for a "many-to-many" relationship? I've finished writing an object-relational mapper, but I'm still stumped as to what to name the function that adds that relationship. addParent() and addChild() seemed quite logical for many-to-one/one-to-many relationships, and addSuperclass() for one-to-one inheritance, but addManyToMany() would sound quite unintuitive to an object-oriented programmer. addSibling() or addCousin() doesn't really make sense either. Any suggestions? And before you dismiss this as a non-programming question, please remember that consistent naming schemes and encapsulation are pretty integral to programming :)

    Read the article

  • Should this even be a has_many :through association?

    - by GoodGets
    A Post belongs_to a User, and a User has_many Posts. A Post also belongs_to a Topic, and a Topic has_many Posts.

      class User < ActiveRecord::Base
        has_many :posts
      end

      class Topic < ActiveRecord::Base
        has_many :posts
      end

      class Post < ActiveRecord::Base
        belongs_to :user
        belongs_to :topic
      end

    Well, that's pretty simple and very easy to set up, but when I display a Topic, I not only want all of the Posts for that Topic, but also the user_name and the user_photo of the User that made each Post. However, those attributes are stored in the User model and not tied to the Topic. So how would I go about setting that up?

    Maybe it can already be called, since the Post model has two foreign keys, one for the User and one for the Topic? Or maybe this is some sort of "one-way" has_many :through association, where the Post would be the join model, and a Topic would has_many :users, :through => :posts. But the reverse of this is not true: a User does NOT has_many :topics. So would this even need to be a has_many :through association? I guess I'm just a little confused about what the controller would look like to call both the Post and the User of that Post for a given Topic.

    Edit: Seriously, thank you to all that weighed in. I chose tal's answer because I used his code for my controller; however, I could have just as easily chosen either j.'s or tim's instead. Thank you both as well. This was so damn simple to implement, and I think today marks the day that I'm beginning to fall in love with Rails.

    Read the article

  • GORM ID generation and belongsTo association?

    - by fabien-barbier
    I have two domains:

      class CodeSetDetail {
          String id
          String codeSummaryId
          static hasMany = [codes: CodeSummary]
          static constraints = {
              id(unique: true, blank: false)
          }
          static mapping = {
              version false
              id column: 'code_set_detail_id', generator: 'assigned'
          }
      }

    and:

      class CodeSummary {
          String id
          String codeClass
          String name
          String accession
          static belongsTo = [codeSetDetail: CodeSetDetail]
          static constraints = {
              id(unique: true, blank: false)
          }
          static mapping = {
              version false
              id column: 'code_summary_id', generator: 'assigned'
          }
      }

    I get two tables with these columns:

      code_set_detail:
        code_set_detail_id
        code_summary_id

      code_summary:
        code_summary_id
        code_set_detail_id (should not exist)
        code_class
        name
        accession

    I would like to link the code_set_detail and code_summary tables by 'code_summary_id' (and not by 'code_set_detail_id'). Note: 'code_summary_id' is defined as a column in the code_set_detail table, and as the primary key in the code_summary table. To sum up, I would like to define 'code_summary_id' as the primary key in the code_summary table, and map 'code_summary_id' in the code_set_detail table. How do I define a primary key in one table and also map that key to another table?

    Read the article

  • Undo table updates in SQL Server 2008

    - by sikas
    I updated a table in my MS SQL Server 2008 by accident: I was updating one table from another by copying cell by cell, and I have overwritten the original table. Is there a way to restore my table's contents to what they were?

    Read the article

  • Best way to fetch data from a single database table with multiple threads?

    - by Ravi Bhatt
    Hi, we have a system where we collect data every second on user activity on multiple web sites. We dump that data into a database X (say, MS SQL Server). We now need to fetch data from this single table in database X and insert it into database Y (say, MySQL). We want to fetch time-based data from database X through multiple threads so that we fetch as fast as we can; once it is fetched and stored in database Y, we will delete the data from database X.

    Are there any best practices for this sort of design? Anything specific to take care of in table design, like sharding or something? Are there any other things we need to take care of to make sure we fetch as fast as we can from threads running on multiple machines? Thanks in advance! Ravi
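
    To make the multi-threaded, time-based fetching concrete, here is a minimal sketch (the table layout and the connect_source/connect_destination helpers are invented placeholders, not from the question). Each worker owns a disjoint time window, so no two threads ever touch the same rows:

      import concurrent.futures
      import datetime as dt

      def move_window(start, end):
          # connect_source/connect_destination are hypothetical helpers for
          # the two connections (e.g. pyodbc to SQL Server, MySQLdb to MySQL).
          src = connect_source()
          dst = connect_destination()
          cur = src.cursor()
          cur.execute('SELECT * FROM activity WHERE ts >= ? AND ts < ?',
                      (start, end))
          rows = cur.fetchall()
          dst.cursor().executemany(
              'INSERT INTO activity VALUES (%s, %s, %s)', rows)
          dst.commit()
          # Delete exactly the window this worker copied, nothing else.
          cur.execute('DELETE FROM activity WHERE ts >= ? AND ts < ?',
                      (start, end))
          src.commit()

      # Disjoint one-minute windows guarantee the workers never overlap.
      base = dt.datetime(2010, 5, 1)
      windows = [(base + dt.timedelta(minutes=i),
                  base + dt.timedelta(minutes=i + 1)) for i in range(60)]
      with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
          for start, end in windows:
              pool.submit(move_window, start, end)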

    Read the article

  • Program to find canonical cover or minimum number of functional dependencies

    - by Sev
    I would like to know: is there a program or algorithm to find the canonical cover (minimal set) of a set of functional dependencies? For example, if you have:

      R = (A, B, C)   <-- the attributes of relation R

    and the dependencies:

      A → BC
      B → C
      A → B
      AB → C

    then the canonical cover (the minimal set of dependencies) is:

      A → B
      B → C

    Is there a program that can accomplish this? If not, any code/pseudocode to help me write one would be appreciated. Preferably in Python or Java.
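
    A minimal sketch of the textbook algorithm (split right-hand sides, drop extraneous left-hand attributes, then drop redundant dependencies), written for this example rather than taken from any library:

      # FDs are (lhs, rhs) pairs of attribute sets; closure() is the usual
      # attribute-closure loop that drives all three steps.
      def closure(attrs, fds):
          result = set(attrs)
          changed = True
          while changed:
              changed = False
              for lhs, rhs in fds:
                  if lhs <= result and not rhs <= result:
                      result |= rhs
                      changed = True
          return result

      def canonical_cover(fds):
          # 1. Single-attribute right-hand sides.
          fds = [(frozenset(l), frozenset(a)) for l, r in fds for a in r]
          # 2. Remove extraneous attributes from left-hand sides.
          done = False
          while not done:
              done = True
              for i, (lhs, rhs) in enumerate(fds):
                  for a in lhs:
                      smaller = lhs - {a}
                      if smaller and rhs <= closure(smaller, fds):
                          fds[i] = (smaller, rhs)
                          done = False
                          break
          # 3. Remove dependencies implied by the rest.
          i = 0
          while i < len(fds):
              rest = fds[:i] + fds[i + 1:]
              if fds[i][1] <= closure(fds[i][0], rest):
                  fds = rest
              else:
                  i += 1
          return fds

      fds = [('A', 'BC'), ('B', 'C'), ('A', 'B'), ('AB', 'C')]
      for lhs, rhs in canonical_cover(fds):
          print(''.join(sorted(lhs)), '->', ''.join(sorted(rhs)))
      # prints A -> B and B -> C for this input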

    Read the article

  • Building a many-to-many db schema using only an unpredictable number of foreign keys

    - by user1449855
    Good afternoon (at least around here),

    I have a many-to-many relationship schema that I'm having trouble building. The main problem is that I'm working only with primary and foreign keys (no varchars or enums, to simplify things), and the number of many-to-many relationships is not predictable and can increase at any time. I looked around at various questions and couldn't find something that directly addressed this issue. I split the problem in half, so I now have two one-to-many schemas. One is solved, but the other is giving me fits.

    Let's assume table FOO is a standard, boring table that has a simple primary key. It's the "one" in the one-to-many relationship. Table BAR can relate to multiple keys of FOO; the number of related keys is not known beforehand. An example: a query on FOO returns ids 3, 4, 5. BAR needs a unique key that relates to 3, 4, 5 (though there could be any number of ids returned).

    The usual join table does not work:

      Table FOO_BAR
      primary_key | foo_id | bar_id

    since FOO returns 3 unique keys, and here bar_id has a one-to-one relationship with foo_id. Having two join tables does not seem to work either, as it still can't map foo_ids 3, 4, 5 to a single bar_id:

      Table FOO_TO_BAR
      primary_key | foo_id | bar_to_foo_id

      Table BAR_TO_FOO
      primary_key | foo_to_bar_id | bar_id

    What am I doing wrong? Am I making things more complicated than they are? How should I approach the problem? Thanks a lot for the help.
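
    For contrast, a sketch of how the plain join table from the question can map several foo ids to one bar id: nothing in it forces bar_id to be unique per row, so one bar key may appear in as many rows as it has related foos (sqlite3 is used only to keep the example self-contained):

      import sqlite3

      con = sqlite3.connect(':memory:')
      con.executescript('''
      CREATE TABLE foo (foo_id INTEGER PRIMARY KEY);
      CREATE TABLE bar (bar_id INTEGER PRIMARY KEY);

      -- One row per (foo, bar) pair: a single bar can reference any
      -- number of foos, and vice versa.
      CREATE TABLE foo_bar (
          foo_id INTEGER REFERENCES foo(foo_id),
          bar_id INTEGER REFERENCES bar(bar_id),
          PRIMARY KEY (foo_id, bar_id)
      );
      ''')

      con.executemany('INSERT INTO foo VALUES (?)', [(3,), (4,), (5,)])
      con.execute('INSERT INTO bar VALUES (1)')
      con.executemany('INSERT INTO foo_bar VALUES (?, 1)', [(3,), (4,), (5,)])

      # bar 1 now relates to foo ids 3, 4 and 5.
      print(con.execute('SELECT foo_id FROM foo_bar WHERE bar_id = 1').fetchall())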

    Read the article
