Search Results

Search found 10719 results on 429 pages for 'temp tables'.


  • How do I execute queries upon DB connection in Rails?

    - by sycobuny
    I have certain initializing functions that I use to set up audit logging on the DB server side (i.e., not Rails) in PostgreSQL. At least one of them has to be issued (setting the current user) before inserting data into or updating any of the audited tables, or else the whole query will fail spectacularly. I could easily call these before every save operation in the code, but DRY makes me think the code should be repeated in as few places as possible, particularly since this diverges greatly from the ideal of database agnosticism.

    Currently I'm attempting to override ActiveRecord::Base.establish_connection in an initializer so that the queries are run automatically as soon as I connect, but it doesn't behave as I expect it to. Here is the code in the initializer:

        class ActiveRecord::Base
          # extend the class methods, not the instance methods
          class << self
            alias :old_establish_connection :establish_connection # hide the default

            def establish_connection(*args)
              ret = old_establish_connection(*args) # call the default

              # set up necessary session variables for audit logging
              # call these after calling default, to make sure conn is established 1st
              db = self.class.connection
              db.execute("SELECT SV.set('current_user', 'test@localhost')")
              db.execute("SELECT SV.set('audit_notes', NULL)") # end "empty variable" err

              ret # return the default's original value
            end
          end
        end

        puts "Loaded custom establish_connection into ActiveRecord::Base"

    Starting the server shows the initializer being loaded:

        sycobuny:~/rails$ ruby script/server
        => Booting WEBrick
        => Rails 2.3.5 application starting on http://0.0.0.0:3000
        Loaded custom establish_connection into ActiveRecord::Base

    This doesn't give me any errors, and unfortunately I can't check what the method looks like internally (I was using ActiveRecord::Base.method(:establish_connection), but apparently that creates a new Method object each time it's called, which is seemingly worthless because I can't get any useful information from object_id and I also can't reverse the compilation). However, the code never seems to get called, because any attempt to run a save or an update on a database object fails as I predicted earlier. If this isn't a proper way to execute code immediately on connection to the database, then what is?

  • Mailing system DB structure, need help

    - by Anna
    I have a system where a user (the sender) can write a note to friends (the receivers); the number of receivers >= 0. The text of the message is saved in the DB and is visible to the sender and to all receivers when they log in to the system. The sender can add more receivers at any time. Moreover, any of the receivers can edit the message and even remove it from the DB.

    For this system I created 3 tables; in short:

        users(userID, username, password)
        messages(messageID, text)
        list(id, senderID, receiverID, messageID)

    In the table "list" each row corresponds to a sender-receiver pair, like:

        sender_x_ID -- receiver_1_ID -- message_1_ID
        sender_x_ID -- receiver_2_ID -- message_1_ID
        sender_x_ID -- receiver_3_ID -- message_1_ID

    Now the problems are:

    1. If a user deletes the message from the table "messages", how do I automatically delete all rows from the table "list" which correspond to the deleted message? Do I have to include some foreign keys?

    2. More important: suppose the sender has, say, 3 receivers for his message1 (username1, username2 and username3) and at a certain moment decides to add username4 and username5 and at the same time exclude username1 from the list of receivers. The PHP code will get the new list of receivers (username2, username3, username4, username5). That means inserting into the table "list":

        sender_x_ID -- receiver_4_ID -- message_1_ID
        sender_x_ID -- receiver_5_ID -- message_1_ID

    and also deleting from the table "list" the row corresponding to user1 (who is not in the list of receivers any more):

        sender_x_ID -- receiver_1_ID -- message_1_ID

    Which SQL queries should be sent from PHP to do this in an easy and intelligent way? Please help! Examples of SQL queries would be perfect!
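    One possible shape for both pieces, sketched in SQL and assuming MySQL with InnoDB tables (the @-prefixed values stand in for parameters bound from PHP; they are illustrative, not from the question):

        -- 1. Let the database clean up "list" rows when a message is deleted.
        ALTER TABLE list
          ADD CONSTRAINT fk_list_message
          FOREIGN KEY (messageID) REFERENCES messages (messageID)
          ON DELETE CASCADE;

        -- 2. Add the newly selected receivers for message 1 from sender x...
        INSERT INTO list (senderID, receiverID, messageID)
        VALUES (@sender_x, @receiver_4, 1),
               (@sender_x, @receiver_5, 1);

        -- ...and drop every receiver that is no longer in the submitted list.
        DELETE FROM list
        WHERE messageID = 1
          AND senderID = @sender_x
          AND receiverID NOT IN (@receiver_2, @receiver_3, @receiver_4, @receiver_5);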

  • Unix: cannot add "\\ \n" to the end of a line, even with escaping

    - by HH
    I'm trying to convert clean column-wise data into tables in TeX. I am unable to get "\\ \n" at the end of each line. Please see the command at the end.

    Data:

        $ echo `. ./bin/addTableTexTags.sh < .data_3`
        10.31 & 8.50 & 7.40 10.34 & 8.53 & 7.81 8.22 & 8.62 & 7.78 10.16 & 8.53 & 7.44 10.41 & 8.38 & 7.63 10.38 & 8.57 & 8.03 10.13 & 8.66 & 7.41 8.50 & 8.60 & 7.15 10.41 & 8.63 & 7.21 8.53 & 8.53 & 7.12

        $ cat .data_3
        10.31 8.50 7.40
        10.34 8.53 7.81
        8.22 8.62 7.78
        10.16 8.53 7.44
        10.41 8.38 7.63
        10.38 8.57 8.03
        10.13 8.66 7.41
        8.50 8.60 7.15
        10.41 8.63 7.21
        8.53 8.53 7.12

    addTableTexTags.sh:

        #!/bin/bash
        sed -e "s@[[:space:]]@\t\&\t@g" -e "s@[[:space:]]*&*[[:space:]]*\$@\t \\ \\\\n@g"

    I tried escaping "\\" with "/" here and there, but I cannot get a line ending with "\\ \n".

  • Best web database solution for Scala for a high-traffic site?

    - by egervari
    I am in charge of rebuilding a website that gets about 250,000 visitors a day. We'd like to use Scala, but it does not work very well with Spring (in some minor cases) and Hibernate (there is a major and very annoying mismatch here if you want to use Scala collections, which we do). The application itself is going to have about 40-50 tables.

    Other than Hibernate, is there an ORM that works really well with Scala and is as performant and reliable as Hibernate? Does it also have the same capabilities, or are we going to run into leaky abstractions if we don't use Hibernate? It would be a big risk for us to go with a framework that is newer and doesn't seem to have a lot of industry backing... and at the same time, Hibernate is a real pain to program against when using Scala:

    1. The Java collection <-> Scala collection conversion is absolutely painful. There is a lot more boilerplate and crap to write.
    2. The IDE doesn't import JavaConversions and Java interfaces automatically, so this needs to be done manually. Optimizing imports in IDEA is going to destroy all that manual work.
    3. There is also a performance cost to converting back and forth all the time in your domain objects and your DAO classes.
    4. Not to mention there needs to be a lot of casting, which produces code ugly as sin.

    I would actually love to write my own ORM that is 100% tailored to Scala, but obviously that is outside the scope of our project for now. So what is the best approach?

  • iPhone - Using sql database - insert statement failing

    - by Satyam svv
    Hi, I'm using an SQLite database in my iPhone app. I have a table which has 3 integer columns, and I'm using the following code to write to that table:

        -(BOOL)insertTestResult {
            NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString* documentsDirectory = [paths objectAtIndex:0];
            NSString* dataBasePath = [documentsDirectory stringByAppendingPathComponent:@"test21.sqlite3"];
            BOOL success = NO;
            sqlite3* database = 0;
            if(sqlite3_open([dataBasePath UTF8String], &database) == SQLITE_OK)
            {
                BOOL res = (insertResultStatement == nil) ? createStatement(insertResult, &insertResultStatement, database) : YES;
                if(res)
                {
                    int i = 1;
                    sqlite3_bind_int(insertResultStatement, 0, i);
                    sqlite3_bind_int(insertResultStatement, 1, i);
                    sqlite3_bind_int(insertResultStatement, 2, i);
                    int err = sqlite3_step(insertResultStatement);
                    if(SQLITE_ERROR == err)
                    {
                        NSAssert1(0, @"Error while inserting Result. '%s'", sqlite3_errmsg(database));
                        success = NO;
                    }
                    else
                    {
                        success = YES;
                    }
                    sqlite3_finalize(insertResultStatement);
                    insertResultStatement = nil;
                }
            }
            sqlite3_close(database);
            return success;
        }

    The call to sqlite3_step always returns error 19. I'm not able to understand where the issue is. The tables were created using the following queries:

        CREATE TABLE [Patient] (PID integer NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
            PFirstName text NOT NULL, PLastName text, PSex text NOT NULL, PDOB text NOT NULL,
            PEducation text NOT NULL, PHandedness text, PType text)

        CREATE TABLE PatientResult(PID INTEGER, PFreeScore INTEGER NOT NULL, PForcedScore INTEGER NOT NULL,
            FOREIGN KEY (PID) REFERENCES Patient(PID))

    I have only one entry in the Patient table, with PID = 1.

        BOOL createStatement(const char* query, sqlite3_stmt** stmt, sqlite3* database)
        {
            BOOL res = (sqlite3_prepare_v2(database, query, -1, stmt, NULL) == SQLITE_OK);
            if(!res)
                NSLog( @"Error while creating %s => '%s'", query, sqlite3_errmsg(database));
            return res;
        }

  • Is there unresizable space in LaTeX? Pictures in a good-looking grid.

    - by drasto
    I've created a LaTeX macro to typeset guitar chord diagrams (using the picture environment). Now I want diagrams of different chords to appear in a good-looking grid when typeset next to each other, as the linked picture shows (on the picture: the layout labeled "First" is the bad layout of diagrams; the one labeled "Second" is the correct layout, which appears when there is an equal number of diagrams in each line).

    I'm using \hspace to make some skips between the diagrams; otherwise they would be too near to each other. As you can see, in the second case, when LaTeX arranges the pictures so that there is the same number of them in each line, it works. However, if there are fewer pictures in the last line, they become "shifted" to the right. I don't want this. I guess it is because LaTeX makes the space between diagrams in the first line a little longer so that the line exactly fits the page width.

    How do I tell LaTeX not to resize the spaces created by \hspace? Or is there any other way? I guess I cannot use tables, because I don't know how many diagrams will fit in one line...

    This is the current state of the code:

        \newcommand{\spaceForChord}{1.7cm}
        \newcommand{\chordChart}[1]{%
          % calculate dimensions xdim and ydim according to settings
          \begin{picture}(xdim, ydim){%
            % draw the diagram inside the defined area
          }%
          \hspace*{\spaceForChord}%
          \hspace*{-\xdim}%
        }%
        % end preamble and begin document
        \begin{document}
        First:\\*
        \\*
        \chordChart{...some arguments to change the diagram look...}
        \chordChart{...some arguments to change the diagram look...}
        \chordChart{...some arguments to change the diagram look...}
        \chordChart{...some arguments to change the diagram look...}
        \chordChart{...some arguments to change the diagram look...}
        %...the above line is repeated 12 more times to produce the result shown in the picture
        \end{document}

    Thanks for any help.

  • C programming - How to print numbers with a decimal component using only loops?

    - by californiagrown
    I'm currently taking a basic intro to C programming class, and for our current assignment I am to write a program that converts kilometers to miles using loops; no if-else, switch statements, or any other construct we haven't learned yet is allowed. So basically we can only use loops and some operators. The program generates three identical tables (starting from 1 kilometer up through the input value) for one input number, using the while loop for the first set of calculations, the for loop for the second, and the do loop for the third.

    I've written the entire program, but I'm having a bit of a problem getting it to recognize an input with a decimal component. Here is what I have for the while-loop conversion:

        #include <stdio.h>
        #define KM_TO_MILE .62

        main (void)
        {
            double km, mi, count;

            printf ("This program converts kilometers to miles.\n");
            do
            {
                printf ("\nEnter a positive non-zero number");
                printf (" of kilometers of the race: ");
                scanf ("%lf", &km);
                getchar();
            } while (km <= 1);

            printf ("\n KILOMETERS       MILES (while loop)\n");
            printf (" ==========       =====\n");

            count = 1;
            while (count <= km)
            {
                mi = KM_TO_MILE * count;
                printf ("%8.3lf %14.3lf\n", count, mi);
                ++count;
            }
            getchar();
        }

    The code reads in and converts integers fine, but because the increment only increases by 1, it won't print a number with a decimal component (e.g. 3.2, 22.6, etc.). Can someone point me in the right direction on this? I'd really appreciate any help! :)

  • Is it possible to load an entire SQL Server CE database into RAM?

    - by DanM
    I'm using LinqToSql to query a small SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table via a foreign key, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete.

    I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, and then join them manually with my own query, but this throws out a lot of the appeal of LinqToSql.

    So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance?

    Note: My database in its initial state is about 250K, and I don't expect it to grow to more than 1-2 MB. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view.

  • How to use transactions in Entity Framework?

    - by programmerist
    How do I use transactions in Entity Framework? I read some links on Stack Overflow, such as: http://stackoverflow.com/questions/815586/entity-framework-using-transactions-or-savechangesfalse-and-acceptallchanges

    But I have 3 tables, so I have 3 entities:

        CREATE TABLE Personel (
            PersonelID integer PRIMARY KEY identity not null,
            Ad varchar(30),
            Soyad varchar(30),
            Meslek varchar(100),
            DogumTarihi datetime,
            DogumYeri nvarchar(100),
            PirimToplami float);
        Go
        CREATE TABLE Prim (
            PrimID integer PRIMARY KEY identity not null,
            PersonelID integer Foreign KEY references Personel(PersonelID),
            SatisTutari int,
            Prim float,
            SatisTarihi Datetime);
        Go
        CREATE TABLE Finans (
            ID integer PRIMARY KEY identity not null,
            Tutar float);

    Personel, Prim and Finans are my tables. If you look at the Prim table, you can see that Prim is a float value; if I enter something that is not a float value in the textbox, my transaction must run (i.e., nothing should be committed).

        using (TestEntities testCtx = new TestEntities())
        {
            using (TransactionScope scope = new TransactionScope())
            {
                // do something...
                testCtx.Personel.SaveChanges();
                // do something...
                testCtx.Prim.SaveChanges();
                // do something...
                testCtx.Finans.SaveChanges();
                scope.Complete();
                success = true;
            }
        }

    How can I do that?

  • Update table using SSIS

    - by thursdaysgeek
    I am trying to update a field in a table with data from another table, based on a common key. If it were straight SQL, it would be something like:

        UPDATE EHSIT
        SET e.IDMSObjID = s.IDMSObjID
        FROM EHSIT e, EHSIDMS s
        WHERE e.SITENUM = s.SITE_CODE

    However, the two tables are not in the same database, so I'm trying to use SSIS to do the update. Oh, and the sitenum/site_code columns are varchar in one and nvarchar in the other, so I'll have to do a data conversion for them to match.

    How do I do it? I have a data flow object with EHSIDMS as the source and EHSIT as the destination, and a data conversion to convert the Unicode column to non-Unicode. But how do I update based on the match? I've tried using a SQL command as the data access mode on the destination, but it doesn't appear to have the source table. If I just map the field to be updated, how does it limit the update to rows where the fields match?

    I'm about to export my source table to Excel or something and then try inputting from there, although it seems all that would buy me is removing the data conversion step. Shouldn't there be an "update data" task or something? Is it one of those Data Flow transformation tasks, and I'm just not figuring out which one it is?
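    For comparison, a minimal T-SQL sketch of the set-based update the package is trying to reproduce, assuming the remote rows are first landed in a staging table on the destination server via a plain data flow (StgEHSIDMS and the VARCHAR(50) length are illustrative names/sizes, not from the question):

        -- Staging table filled by a simple source -> destination data flow,
        -- with the NVARCHAR site code converted on the way in.
        UPDATE e
        SET    e.IDMSObjID = s.IDMSObjID
        FROM   dbo.EHSIT AS e
        JOIN   dbo.StgEHSIDMS AS s
               ON e.SITENUM = CONVERT(VARCHAR(50), s.SITE_CODE);

    Running a statement like this from an Execute SQL Task after the data flow is one common way to get a keyed update out of SSIS without a row-by-row OLE DB Command.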

  • How do I map this in NHibernate?

    - by Andrew Smith
    I have two classes, Survey and Poll, and I also have Question and QuestionChoice classes. How do I map these so I end up with a particular set of tables? Here are the classes involved:

        public class Survey
        {
            public IList<Question> Questions { get; private set; }
        }

        public class Poll
        {
            public Question Question { get; set; }
        }

        public class Question
        {
            public string Text { get; set; }
            public IList<QuestionChoice> Choices { get; private set; }
        }

        public class QuestionChoice
        {
            public string Text { get; set; }
        }

    The resulting tables that I'm shooting for are the following:

        Surveys         - a table of survey information.
        Polls           - a table of poll information.
        SurveyQuestions - a table of survey questions.
        PollQuestions   - a table of poll questions.
        SurveyChoices   - a table of the question choices for the surveys.
        PollChoices     - a table of the question choices for the polls.

    Preferably I'd like to know how to do this with Fluent NHibernate, but a plain XML mapping is fine too.

  • Does this query fetch unnecessary information? Should I change the query?

    - by Camran
    I have this classifieds website, and I have about 7 tables in MySQL where all the data is stored. One main table is called "classifieds". In the classifieds table there is a column called classified_id. This is not the PK, or a key at all; it is just a number which I use to JOIN table records together. For example:

        classifieds table:                  fordon table:
          id            => 33                 id            => 12
          classified_id => 10                 classified_id => 10
          ad_id         => 'bmw_m3_92923'

    The two rows above are linked together by the classified_id column.

    Now to the question: I use this method to fetch all records WHERE the column ad_id matches any of the values inside an array, called in this case $ad_arr:

        SELECT mt.*, fordon.*, boende.*, elektronik.*, business.*, hem_inredning.*, hobby.*
        FROM classified mt
          LEFT JOIN fordon        ON fordon.classified_id        = mt.classified_id
          LEFT JOIN boende        ON boende.classified_id        = mt.classified_id
          LEFT JOIN elektronik    ON elektronik.classified_id    = mt.classified_id
          LEFT JOIN business      ON business.classified_id      = mt.classified_id
          LEFT JOIN hem_inredning ON hem_inredning.classified_id = mt.classified_id
          LEFT JOIN hobby         ON hobby.classified_id         = mt.classified_id
        WHERE mt.ad_id IN ('$ad_arr');

    Is this good, or would this actually fetch unnecessary information? Check out this question I posted a couple of days ago; in the comments, HLGEM is commenting that it is wrong, etc. What do you think? http://stackoverflow.com/questions/2782275/another-rookie-question-how-to-implement-count-here

    Thanks

  • Find whether a particular cell of a table has an img tag

    - by SilentPro
    I am generating a table dynamically using Django. The same table template is used to generate a variety of tables depending on the data supplied. In one scenario a particular column contains image tags. Since my table is editable (using jQuery), the image cell also becomes editable, and editing removes my content. I want some special behavior on double-click of such cells, like, say, uploading an image. How do I accomplish this with jQuery? My script for making the table editable is given below:

        $(function() {
            $("td").dblclick(function() {
                var OriginalContent = $(this).text();

                $(this).addClass("cellEditing");
                $(this).html("<input type='text' value='" + OriginalContent + "' />");
                $(this).children().first().focus();

                $(this).children().first().keypress(function(e) {
                    if (e.which == 13) {
                        var newContent = $(this).val();
                        $(this).parent().text(newContent);
                        $(this).parent().removeClass("cellEditing");
                    }
                });

                $(this).children().first().blur(function() {
                    $(this).parent().text(OriginalContent);
                    $(this).parent().removeClass("cellEditing");
                });
            });
        });

  • Subquery with multiple results combined into a single field?

    - by Todd
    Assume I have these tables, from which I need to display search results in a browser:

        Table: Containers
        id | name
        1  | Big Box
        2  | Grocery Bag
        3  | Envelope
        4  | Zip Lock

        Table: Sale
        id | date     | containerid
        1  | 20100101 | 1
        2  | 20100102 | 2
        3  | 20091201 | 3
        4  | 20091115 | 4

        Table: Items
        id | name        | saleid
        1  | Barbie Doll | 1
        2  | Coin        | 3
        3  | Pop-Top     | 4
        4  | Barbie Doll | 2
        5  | Coin        | 4

    I need output that looks like this:

        itemid | itemname    | saleids | saledates         | containerids | containertypes
        1      | Barbie Doll | 1,2     | 20100101,20100102 | 1,2          | Big Box, Grocery Bag
        2      | Coin        | 3,4     | 20091201,20091115 | 3,4          | Envelope, Zip Lock
        3      | Pop-Top     | 4       | 20091115          | 4            | Zip Lock

    The important part is that each item type only gets one record/row in the output on the screen. I accomplished this in the past by returning multiple rows of the same item and using a scripting language to limit the output. However, this makes the UI overly complicated and loopy. So, I'm hoping I can get the database to spit out only as many records as there are rows to display.

    This example may be a bit extreme because of the 2 joins needed to get from the item to the container (through the Sale table). I'd be happy with just an example query that outputs this:

        itemid | itemname    | saleids | saledates
        1      | Barbie Doll | 1,2     | 20100101,20100102
        2      | Coin        | 3,4     | 20091201,20091115
        3      | Pop-Top     | 4       | 20091115

    I can only return a single result in a subquery, so I'm not sure how to do this.
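    The question does not name the database engine, but if it happens to be MySQL, string aggregation can collapse the sale rows per item in one pass (a sketch; other engines have equivalents such as LISTAGG or STRING_AGG):

        SELECT MIN(i.id)                           AS itemid,
               i.name                              AS itemname,
               GROUP_CONCAT(s.id)                  AS saleids,
               GROUP_CONCAT(s.`date`)              AS saledates,
               GROUP_CONCAT(s.containerid)         AS containerids,
               GROUP_CONCAT(c.name SEPARATOR ', ') AS containertypes
        FROM   Items i
        JOIN   Sale s        ON s.id = i.saleid
        JOIN   Containers c  ON c.id = s.containerid
        GROUP BY i.name;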

  • Entity Framework self-referencing loop detected

    - by Lyd0n
    I have a strange error. I'm experimenting with a .NET 4.5 Web API, Entity Framework, and MS SQL Server. I've already created the database and set up the correct primary and foreign keys and relationships. I've created an .edmx model and imported two tables, Employee and Department. A department can have many employees, and this relationship exists.

    I created a new controller called EmployeeController using the scaffolding option to create an API controller with read/write actions using Entity Framework. In the wizard I selected Employee as the model and the correct entity for the data context. The method that is created looks like this:

        // GET api/Employee
        public IEnumerable<Employee> GetEmployees()
        {
            var employees = db.Employees.Include(e => e.Department);
            return employees.AsEnumerable();
        }

    When I call my API via /api/Employee, I get this error:

        ...The 'ObjectContent`1' type failed to serialize the response body for content type 'application/json;
        ...System.InvalidOperationException","StackTrace":null,"InnerException":{"Message":"An error has occurred.",
        "ExceptionMessage":"Self referencing loop detected with type
        'System.Data.Entity.DynamicProxies.Employee_5D80AD978BC68A1D8BD675852F94E8B550F4CB150ADB8649E8998B7F95422552'.
        Path '[0].Department.Employees'.","ExceptionType":"Newtonsoft.Json.JsonSerializationException","StackTrace":" ...

    Why is it self-referencing [0].Department.Employees? That doesn't make a whole lot of sense. I would expect this to happen if I had circular referencing in my database, but this is a very simple example. What could be going wrong?

  • Best way to limit results in MySQL with user subcategories

    - by JM4
    I am trying to solve the following:

    1. Find all users in the system who ONLY have programID 1.
    2. Find all users in the system who have programID 1 AND any other active program.

    My table structures (in very simple terms) are as follows:

        users
        userID | Name
        ======================
        1      | John Smith
        2      | Lewis Black
        3      | Mickey Mantle
        4      | Babe Ruth
        5      | Tommy Bahama

        plans
        ID | userID | plan | status
        ---------------------------
        1  | 1      | 1    | 1
        2  | 1      | 2    | 1
        3  | 1      | 3    | 1
        4  | 2      | 1    | 1
        5  | 2      | 3    | 1
        6  | 3      | 1    | 0
        7  | 3      | 2    | 1
        8  | 3      | 3    | 1
        9  | 3      | 4    | 1
        10 | 4      | 2    | 1
        11 | 4      | 4    | 1
        12 | 5      | 1    | 1

    I know I can easily find all members with a specific plan with something like the following:

        SELECT *
        FROM users a
        JOIN plans b ON (a.userID = b.userID)
        WHERE b.plan = 1
          AND b.status = 1

    but this only tells me which users have an 'active' plan 1. How can I tell who ONLY has plan 1 (in this case only userID 5), and how can I tell who has plan 1 AND any other active plan?

    Update: This is not to get a count. I will actually need the original member information, including all the plans they have, so a COUNT(*) result may not be what I'm trying to achieve.
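    One way to express both cases, sketched for MySQL (it returns only the matching userIDs; joining that result back to users and plans would pull in the full member and plan rows the update asks for):

        -- Users whose only active plan is plan 1 (userID 5 in the sample data)
        SELECT b.userID
        FROM   plans b
        WHERE  b.status = 1
        GROUP BY b.userID
        HAVING SUM(b.plan = 1) >= 1
           AND SUM(b.plan <> 1) = 0;

        -- Users with an active plan 1 AND at least one other active plan
        SELECT b.userID
        FROM   plans b
        WHERE  b.status = 1
        GROUP BY b.userID
        HAVING SUM(b.plan = 1) >= 1
           AND SUM(b.plan <> 1) >= 1;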

  • Best way to build an application based on R?

    - by Prasad Chalasani
    I'm looking for suggestions on how to go about building an application that uses R for analytics, table generation, and plotting. What I have in mind is an application that:

    - displays various data tables in different tabs, somewhat like in Excel, with columns that can be sorted by clicking;
    - takes user input parameters in some dialog windows;
    - displays plots dynamically (i.e. dependent on user input), either in a tab or in a new pop-up window/frame.

    Note that I am not talking about a general-purpose front-end/GUI for exploring data with R (like, say, Rattle), but a specific application. Some questions I'd like to see addressed are:

    1. Is an entirely R-based approach even possible (on Windows)? The following passage from the Rattle article in The R Journal intrigues me: "It is interesting to note that the first implementation of Rattle actually used Python for implementing the callbacks and R for the statistics, using rpy. The release of RGtk2 allowed the interface elements of Rattle to be written directly in R so that Rattle is a fully R-based application."

    2. If it's better to use another language for the GUI part, which language is best suited for this? I'm looking for a language where it's relatively "painless" to build the GUI and that also integrates very well with R. From the Stack Overflow question "How should I do rapid GUI development for R and Octave methods (possibly with Python)?" I see that Python + PyQt4 + QtDesigner + RPy2 seems to be the best combo. Is that the consensus?

    3. Does anyone have pointers to specific (open source) applications of the type I describe, as examples that I can learn from?

  • How to use multiple identity numbers in one table?

    - by vincer
    I have a web application that creates printable forms, and these forms have a unique number on them. The problem is that I have 2 forms for which separate number ranges need to be maintained:

        Form1 - numbered 2000000-2999999
        Form2 - numbered 3000000-3999999

        dbo.test2 - my form information table
        Tsel      - my auto-increment table for the 3000000-series numbers
        Tadv      - my auto-increment table for the 2000000-series numbers

    What I have done is create 2 tables with just an auto-increment column (one for the 2000000-series numbers and one for the 3000000-series numbers). I then created a trigger that adds a record to the corresponding table, reads back the auto-increment number, and writes it into the table that stores the form information, including the just-created auto-increment number for the right series of forms.

    Although it does work, I'm concerned that the numbers will get mixed up under load. I'm not sure @@IDENTITY will always return the right value when many people are using the system. I cannot have duplicates, and I need to use the numbering scheme shown above. See the code below.

        CREATE TRIGGER MAKEANID2 ON dbo.test2
        AFTER INSERT
        AS
        SET NOCOUNT ON

        declare @someid int
        declare @someid2 int
        declare @startfrom int
        declare @test1 varchar(10)

        select @someid = @@IDENTITY
        select @test1 = (Select name1 from test2 where sysid = @someid)

        if @test1 = 'select'
        begin
            insert into Tsel Default values
            select @someid2 = @@IDENTITY
        end
        if @test1 = 'adv'
        begin
            insert into Tadv Default values
            select @someid2 = @@IDENTITY
        end

        update test2 set name2 = (@someid2) where sysid = @someid

        SET NOCOUNT OFF
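    One commonly suggested adjustment, sketched below under the assumption of SQL Server 2005 or later and one inserted row per statement: read the firing row from the inserted pseudo-table instead of @@IDENTITY, and capture the new form number with SCOPE_IDENTITY(), which other sessions and nested triggers cannot affect. A multi-row INSERT into dbo.test2 would still need a set-based rewrite.

        CREATE TRIGGER MAKEANID2 ON dbo.test2
        AFTER INSERT
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @sysid  int,
                    @name1  varchar(10),
                    @formno int;

            -- Identify the row that fired the trigger without touching @@IDENTITY.
            SELECT @sysid = sysid, @name1 = name1 FROM inserted;

            IF @name1 = 'select'
            BEGIN
                INSERT INTO Tsel DEFAULT VALUES;
                SET @formno = SCOPE_IDENTITY();   -- scoped to this trigger's insert
            END
            ELSE IF @name1 = 'adv'
            BEGIN
                INSERT INTO Tadv DEFAULT VALUES;
                SET @formno = SCOPE_IDENTITY();
            END

            UPDATE dbo.test2 SET name2 = @formno WHERE sysid = @sysid;
        END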

  • How to translate the fields of a database model?

    - by Tõnis M
    I have some tables/models in a web app that will have multilingual content. Take, for example, a university with its description in a default language (English); if the user wants, he can see the same information in another language (provided the object has its fields translated). If there were only a few languages, I would just add fields like name_en and name_de and so on, but the number of languages isn't fixed, so that would create a mess. I could also just create a new object with the translated data, but then foreign keys wouldn't work, and only some of the fields can be translated, so that would create duplicate data. Storing the translations in a file and using gettext or something similar is also not an option, since the object fields can be translated by the website's users, not only by developers/admins.

    What would be the best way to design/architect such a database? Searching the translated data should also not be too complex; it should not require creating complex joins which would make the queries slower. I'm using PostgreSQL and Ruby on Rails, but I'm not looking for a technical solution so much as a general idea of how to design it.
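    A common shape for this kind of requirement is a single generic translations table keyed by entity, field, and locale. A sketch in PostgreSQL (all table and column names here are illustrative, not from the question):

        CREATE TABLE translations (
            id                serial      PRIMARY KEY,
            translatable_type varchar(64) NOT NULL,   -- e.g. 'University'
            translatable_id   integer     NOT NULL,   -- id of the translated row
            field_name        varchar(64) NOT NULL,   -- e.g. 'description'
            locale            varchar(8)  NOT NULL,   -- e.g. 'de'
            content           text        NOT NULL,
            UNIQUE (translatable_type, translatable_id, field_name, locale)
        );

        -- Fetch the German description of university 42; the application falls
        -- back to the default-language column on the base row when no row is found.
        SELECT content
        FROM   translations
        WHERE  translatable_type = 'University'
          AND  translatable_id   = 42
          AND  field_name        = 'description'
          AND  locale            = 'de';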

  • Merging ILists to bind on datagridview to avoid using a database view

    - by P.Bjorklund
    In the form we have this, where IntaktsBudgetsType is a poorly named enum that only specifies whether to populate the DataGridView by customer or by product (you do the budgeting either by product or by customer):

        private void UpdateGridView()
        {
            bs = new BindingSource();
            bs.DataSource = intaktsbudget.GetDataSource(this.comboBoxKundID.Text, IntaktsBudgetsType.PerKund);
            dataGridViewIntaktPerKund.DataSource = bs;
        }

    This populates the DataGridView with a database view that merges the product, budget and customer tables. The logic has the following method to get the correct set of IList from the repository, which only does GetTable<T>.ToList<T>:

        public IEnumerable<IntaktsBudgetView> GetDataSource(string id, IntaktsBudgetsType type)
        {
            IList<IntaktsBudgetView> list = repository.SelectTable<IntaktsBudgetView>();
            switch (type)
            {
                case IntaktsBudgetsType.PerKund:
                    return from i in list where i.kundId == id select i;
                case IntaktsBudgetsType.PerProdukt:
                    return from i in list where i.produktId == id select i;
            }
            return null;
        }

    Now, I don't want to use a database view, since that is read-only and I want to be able to perform CRUD actions through the DataGridView. I could build a class that acts as a wrapper for the whole thing and bind the different table values to class properties, but that doesn't seem quite right, since I would have to do this for every single thing that requires "the merge".

    Something pretty important (and probably basic) is missing in the thought process, but after spending a weekend on Google and in books I give up and turn to the SO community.

  • How to configure a NSPopupButton for displaying multiple values in a TableView?

    - by jekmac
    Hi there! I'm using two entities, A and B, with a many-to-many relationship. Let's say I have an entity A with attribute aAttrib and a to-many relationship aRelat to another entity B, which has attribute bAttrib and a to-many relationship bRelat back to entity A.

    Now I am building an interface with two tables, one for entity A and another for entity B. The table for entity B has two columns, one for bAttrib and one for the relationship aRelat. The aRelat column should be an NSPopUpButtonCell so it can display multiple aAttrib values. I'd like to set all the bindings in Interface Builder. The table column bindings are currently:

        Array controllers (one per entity):
          Object Controller Mode: Entity
          Bindings: Managed Object Context bound to File's Owner

        Table column with the pop-up button cell:
          Content         bound to Entity A, controller key arrangedObjects
          Content Values  bound to Entity A, model key path aAttrib
          Selected Object bound to Entity B, model key path bRelat

    I know that this configuration doesn't allow setting multiple values, but I don't know how to do it correctly. I'm getting the following message:

        HIToolbox: ignoring exception 'Unacceptable type of value for to-many relationship:
        property = "bRelat"; desired type = NSSet; given type = NSCFString; value = testValue.'
        that raised inside Carbon event dispatch...

    Does anyone have any idea?

  • Java: over-typed structures?

    - by HH
    By "over-typed structure" I mean a data structure that accepts different types, whether primitive or user-defined. I think Ruby supports mixing types in structures such as tables. I tried a table with the types 'String', 'char' and 'File' in Java, but it errors. How can I have an over-typed structure in Java? How do I declare the types, and what about initialization? Suppose a structure:

        INDEX  VAR      FILETYPE
        0  ->  file     FILE
        1  ->  lineMap  SizeSequence
        2  ->  type     char
        3  ->  binary   boolean
        4  ->  name     String
        5  ->  path     String

    Code:

        import java.io.*;
        import java.util.*;

        public class Object {

            public static void print(char a) {
                System.out.println(a);
            }

            public static void print(String s) {
                System.out.println(s);
            }

            public static void main(String[] args) {
                Object[] d = new Object[6];
                d[0] = new File(".");
                d[2] = 'T';
                d[4] = ".";

                print(d[2]);
                print(d[4]);
            }
        }

    Errors:

        Object.java:18: incompatible types
        found   : java.io.File
        required: Object
                d[0] = new File(".");
                       ^
        Object.java:19: incompatible types
        found   : char
        required: Object
                d[2] = 'T';
                       ^

  • Best way to update/insert into a table based on a remote table.

    - by martilyo
    I have two very large enterprise tables in an Oracle 10g database. One table keeps the historical information of the other table. The problem is, I'm getting to the point where there are just too many records, my insert/update is taking too long, and my session is getting killed by the governor. Here's pseudocode of my update process:

        sqlsel := 'SELECT col1, col2, col3, sysdate
                   FROM table2@remote_location dpi
                   WHERE (col1, col2, col3) IN
                       ( SELECT col1, col2, col3
                         FROM table2@remote_location
                         MINUS
                         SELECT DISTINCT col1, col2, col3
                         FROM table1 mpc
                         WHERE facility = '''||load_facility||'''
                       )';

        EXECUTE IMMEDIATE sqlsel BULK COLLECT INTO table1;

    I've tried the MERGE statement:

        MERGE INTO table1 t1
        USING (
            SELECT col1, col2, col3
            FROM table2@remote_location
        ) t2
        ON (    t1.col1 = t2.col1
            AND t1.col2 = t2.col2
            AND t1.col3 = t2.col3 )
        WHEN NOT MATCHED THEN
            INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm)
            VALUES (t2.col1, t2.col2, t2.col3, sysdate);

    But there seems to be a confirmed bug in versions prior to Oracle 10.2.0.4 when a MERGE uses a remote database. The chance of getting an enterprise upgrade is slim, so is there a way to further optimize my first query, or to write it another way so that it performs best? Thanks.
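    A workaround that is sometimes suggested for the remote-MERGE limitation: copy the remote rows into a local staging table first and merge from that, so the MERGE itself never touches the database link. A sketch (the global temporary table, its name, and the NUMBER column types are assumptions, not from the question):

        -- One-time setup: a local staging area that clears itself per transaction.
        CREATE GLOBAL TEMPORARY TABLE stg_table2 (
            col1  NUMBER,
            col2  NUMBER,
            col3  NUMBER
        ) ON COMMIT DELETE ROWS;

        -- Per load: pull the remote rows once, then merge locally.
        INSERT INTO stg_table2 (col1, col2, col3)
        SELECT col1, col2, col3 FROM table2@remote_location;

        MERGE INTO table1 t1
        USING stg_table2 t2
        ON (    t1.col1 = t2.col1
            AND t1.col2 = t2.col2
            AND t1.col3 = t2.col3 )
        WHEN NOT MATCHED THEN
            INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm)
            VALUES (t2.col1, t2.col2, t2.col3, SYSDATE);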

  • SELECT product from subclass: How many queries do I need?

    - by Stefano
    I am building a database similar to the one described here, where I have products of different types, each type with its own attributes. I report a short version for convenience:

        product_type
        ============
        product_type_id    INT
        product_type_name  VARCHAR

        product
        =======
        product_id       INT
        product_name     VARCHAR
        product_type_id  INT  -> foreign key to product_type.product_type_id
        ... (attributes common to all products)

        magazine
        ========
        magazine_id  INT
        title        VARCHAR
        product_id   INT  -> foreign key to product.product_id
        ... (magazine-specific attributes)

        web_site
        ========
        web_site_id  INT
        name         VARCHAR
        product_id   INT  -> foreign key to product.product_id
        ... (web-site-specific attributes)

    This way I do not need one huge table with a column for each attribute of every product type (most of which would then be NULL).

    How do I SELECT a product by product.product_id and see all its attributes? Do I have to run one query first to learn what type of product I am dealing with and then, with some logic, run another query to JOIN the right tables? Or is there a way to join everything together? (If, when I retrieve the information about a product_id, there are a lot of NULLs, that would be fine at this point.) Thank you
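    For the "lots of NULLs are fine" case, a single statement with one LEFT JOIN per subtype table can return everything in one round trip. A sketch based on the tables above (the literal 42 is just a placeholder product_id):

        SELECT p.product_id,
               p.product_name,
               pt.product_type_name,
               m.title AS magazine_title,   -- NULL unless the product is a magazine
               w.name  AS web_site_name     -- NULL unless the product is a web site
        FROM   product p
        JOIN   product_type pt ON pt.product_type_id = p.product_type_id
        LEFT JOIN magazine m   ON m.product_id = p.product_id
        LEFT JOIN web_site w   ON w.product_id = p.product_id
        WHERE  p.product_id = 42;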

  • Mixing together Connect by, inner join and sum with Oracle

    - by François
    Hey there, I need help with an Oracle query. Excuse my English in advance.

    Here is my setup: I have 2 tables called, respectively, "tasks" and "timesheets". The "tasks" table is recursive, so that each task can have multiple subtasks. Each timesheet is associated with a task (not necessarily the "root" task) and contains the number of hours worked on it.

    Example:

        Tasks
        id: 1 | name: Task A    | parent_id: NULL
        id: 2 | name: Task A1   | parent_id: 1
        id: 3 | name: Task A1.1 | parent_id: 2
        id: 4 | name: Task B    | parent_id: NULL
        id: 5 | name: Task B1   | parent_id: 4

        Timesheets
        id: 1 | task_id: 1 | hours: 1
        id: 2 | task_id: 2 | hours: 3
        id: 3 | task_id: 3 | hours: 1
        id: 5 | task_id: 5 | hours: 1
        ...

    What I want to do: I want a query that returns the sum of all the hours worked on a "task hierarchy". For the previous example, that means I would like to get the following result: Task A - 5 hour(s) | Task B - 1 hour(s).

    At first I tried this:

        SELECT TaskName, Sum(Hours) "TotalHours"
        FROM (
            SELECT replace(sys_connect_by_path(decode(level, 1, t.name), '~'), '~') As TaskName,
                   ts.hours as hours
            FROM tasks t
            INNER JOIN timesheets ts ON t.id = ts.task_id
            START WITH PARENTOID = -1
            CONNECT BY PRIOR t.id = t.parent_id
        )
        GROUP BY TaskName
        HAVING Sum(Hours) > 0
        ORDER BY TaskName

    And it almost works. The only problem is that if there is no timesheet for a root task, it skips the whole hierarchy... but there might be timesheets for the child rows, and that is exactly what happens with Task B1. I know it is the INNER JOIN that is causing my problem, but I'm not sure how I can get rid of it. Any idea how to solve this problem? Thank you
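    A sketch of one way around the join problem: build the hierarchy first, tag every row with its root task name via CONNECT_BY_ROOT, and only then outer-join the timesheets, so hierarchies without a root timesheet are not dropped (this assumes root tasks are the rows with a NULL parent_id; the query in the question starts with PARENTOID = -1 instead):

        SELECT h.root_name,
               NVL(SUM(ts.hours), 0) AS total_hours
        FROM (
            SELECT t.id,
                   CONNECT_BY_ROOT t.name AS root_name
            FROM   tasks t
            START WITH t.parent_id IS NULL
            CONNECT BY PRIOR t.id = t.parent_id
        ) h
        LEFT JOIN timesheets ts ON ts.task_id = h.id
        GROUP BY h.root_name
        ORDER BY h.root_name;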
