Search Results

Search found 6392 results on 256 pages for 'reduce duplicate'.


  • PostgreSQL insert on primary key failing with contention, even at serializable level

    - by Steven Schlansker
    I'm trying to insert or update data in a PostgreSQL db. The simplest case is a key-value pairing (the actual data is more complicated, but this is the smallest clear example). When you set a value, I'd like it to insert if the key is not there, otherwise update. Sadly Postgres does not have an insert-or-update statement, so I have to emulate it myself. I've been working with the idea of SELECTing whether the key exists and then running the appropriate INSERT or UPDATE. Clearly this needs to be in a transaction or all manner of bad things could happen. However, this is not working exactly how I'd like it to. I understand that there are limitations to serializable transactions, but I'm not sure how to work around this one. Here's the situation:

    ```
    ab: => set transaction isolation level serializable;
    a:  => select count(1) from table where id=1;  --> 0
    b:  => select count(1) from table where id=1;  --> 0
    a:  => insert into table values(1);            --> 1
    b:  => insert into table values(1);
           --> ERROR: duplicate key value violates unique constraint "serial_test_pkey"
    ```

    I would expect it to throw the usual "couldn't commit due to concurrent update" error, but I'm guessing that since the inserts are different "rows" this does not happen. Is there an easy way to work around this?
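    A common workaround (not from the original post) is the retry-loop upsert from the PostgreSQL documentation: attempt the UPDATE first, fall back to INSERT, and treat a unique_violation as "someone beat us to it" rather than a fatal error. A minimal sketch, assuming a hypothetical kv(id, value) table:

    ```sql
    CREATE FUNCTION upsert_kv(k integer, v text) RETURNS void AS
    $$
    BEGIN
        LOOP
            -- try to update an existing row first
            UPDATE kv SET value = v WHERE id = k;
            IF FOUND THEN
                RETURN;
            END IF;
            -- no row yet: try to insert, and retry the loop if we lose the race
            BEGIN
                INSERT INTO kv (id, value) VALUES (k, v);
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                -- Do nothing, and loop to try the UPDATE again.
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;
    ```

    (On PostgreSQL 9.5 or later, a single INSERT ... ON CONFLICT (id) DO UPDATE replaces all of this.)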


  • Select highest rated, oldest track

    - by Blair McMillan
    I have several tables:

    ```sql
    CREATE TABLE [dbo].[Tracks](
        [Id] [uniqueidentifier] NOT NULL,
        [Artist_Id] [uniqueidentifier] NOT NULL,
        [Album_Id] [uniqueidentifier] NOT NULL,
        [Title] [nvarchar](255) NOT NULL,
        [Length] [int] NOT NULL,
        CONSTRAINT [PK_Tracks_1] PRIMARY KEY CLUSTERED ([Id] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]

    CREATE TABLE [dbo].[TrackHistory](
        [Id] [int] IDENTITY(1,1) NOT NULL,
        [Track_Id] [uniqueidentifier] NOT NULL,
        [Datetime] [datetime] NOT NULL,
        CONSTRAINT [PK_TrackHistory] PRIMARY KEY CLUSTERED ([Id] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]

    INSERT INTO [cooltunes].[dbo].[TrackHistory] ([Track_Id], [Datetime])
    VALUES ('335294B0-735E-4E2C-8389-8326B17CE813', GETDATE())

    CREATE TABLE [dbo].[Ratings](
        [Id] [int] IDENTITY(1,1) NOT NULL,
        [Track_Id] [uniqueidentifier] NOT NULL,
        [User_Id] [uniqueidentifier] NOT NULL,
        [Rating] [tinyint] NOT NULL,
        CONSTRAINT [PK_Ratings] PRIMARY KEY CLUSTERED ([Id] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]

    INSERT INTO [cooltunes].[dbo].[Ratings] ([Track_Id], [User_Id], [Rating])
    VALUES ('335294B0-735E-4E2C-8389-8326B17CE813', 'C7D62450-8BE6-40F6-80F1-A539DA301772', 1)
    ```

    There is also a Users table with User_Id (a Guid) and other fields. The links between the tables are pretty obvious. A row is added to TrackHistory whenever a track is played, i.e. a track will appear in there many times. The Rating value will be either 1 or -1.

    What I'm trying to do is select the track with the highest total rating that was last played more than 2 hours ago. If there is a tie on total rating (e.g. one track receives six +1 ratings and one -1 rating, giving it a total rating of 5, and another track also has a total rating of 5), the track that was last played longest ago should be returned. (If all tracks have been played within the last 2 hours, no rows should be returned.) I'm getting somewhere doing each part individually using the link above, SUM(Value) and GROUP BY Track_Id, but I'm having trouble putting it all together. Hopefully someone with a bit more (MS)SQL knowledge will be able to help me. Many thanks!
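    Not from the original post, but a sketch of how the pieces might combine, assuming the schema above. Pre-aggregating ratings and play history in derived tables avoids the fan-out you get when joining both directly. (One caveat: tinyint is unsigned in SQL Server, so storing -1 would actually need a signed type such as smallint.)

    ```sql
    -- Total rating per track, excluding tracks played within the last 2 hours;
    -- ties broken by the least recently played track.
    SELECT TOP 1
           t.Id, t.Title, rt.TotalRating, ph.LastPlayed
    FROM dbo.Tracks t
    JOIN (SELECT Track_Id, SUM(CAST(Rating AS int)) AS TotalRating
          FROM dbo.Ratings
          GROUP BY Track_Id) rt ON rt.Track_Id = t.Id
    JOIN (SELECT Track_Id, MAX([Datetime]) AS LastPlayed
          FROM dbo.TrackHistory
          GROUP BY Track_Id) ph ON ph.Track_Id = t.Id
    WHERE ph.LastPlayed < DATEADD(hour, -2, GETDATE())
    ORDER BY rt.TotalRating DESC, ph.LastPlayed ASC;
    ```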


  • Field specific errors for ETL

    - by AaronLS
    I am creating an ETL process in MS SQL Server and I would like to have errors specific to a particular column of a particular row. For example, the data is initially loaded from Excel files into a table (we'll call it the Initial table) where all columns are varchar(2000), and then I stage the data to another table (the DataTyped table) that contains more specific data types (datetime, int, etc.) or more tightly constrained varchar lengths. I need to be able to create error messages for a specific field, such as:

    "Jan. 13th" is not a valid date format for the submission date. Please use a format of MM/DD/YYYY.

    These error messages need to be stored in such a way that later in the process an automated job can create reports in which each message references a specific row and field (someone will need to go back, correct the data in the source system, and resubmit the Excel file). So ideally each message would be inserted into a Failures table of some sort and contain the primary key of the failed row, the column name, and the error message.

    Question: can this be accomplished with SSIS, or some open source tool like Talend, and if so, what would be your general approach? Or what hand-coded approach would you take? (Up until now I have done ETL by hand in SQL procs, but I want to consider other approaches - possibly even C#.) A couple of approaches I've thought of using SQL:

    1. Use a cursor to read through the Initial table, and for each row insert a blank record with only the primary key into the DataTyped table; then use a single update statement for each column, such that if that update fails I can insert a very specific error message for that column into the error messages table.
    2. Insert all the data as-is into the DataTyped table, but have duplicate columns like SubmissionDate and SubmissionDateOld. After the initial insert the *Old columns have data and the rest are blank, and I have a single update for each column that sets SubmissionDate based on SubmissionDateOld.

    In addition to suggesting an approach, I'd like to know if you are using that approach or something similar already in the work you do.
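    As a sketch of a set-based variant (my own illustration, not from the post): on SQL Server 2012 or later, TRY_CONVERT returns NULL instead of raising an error, so bad values can be logged per column and only clean rows promoted. The failures table and message wording here are hypothetical:

    ```sql
    CREATE TABLE dbo.LoadFailures (
        Id          int IDENTITY(1,1) PRIMARY KEY,
        SourceRowId int           NOT NULL,  -- PK of the row in Initial
        ColumnName  sysname       NOT NULL,
        Message     nvarchar(400) NOT NULL
    );

    -- Log every row whose SubmissionDate will not convert...
    INSERT INTO dbo.LoadFailures (SourceRowId, ColumnName, Message)
    SELECT i.Id, N'SubmissionDate',
           N'"' + i.SubmissionDate + N'" is not a valid date format for the submission date. Please use MM/DD/YYYY.'
    FROM dbo.Initial i
    WHERE i.SubmissionDate IS NOT NULL
      AND TRY_CONVERT(datetime, i.SubmissionDate, 101) IS NULL;

    -- ...then promote only the convertible values.
    UPDATE d
    SET    d.SubmissionDate = TRY_CONVERT(datetime, i.SubmissionDate, 101)
    FROM   dbo.DataTyped d
    JOIN   dbo.Initial  i ON i.Id = d.Id
    WHERE  TRY_CONVERT(datetime, i.SubmissionDate, 101) IS NOT NULL;
    ```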


  • Log4Net GetLogger creates rolling files even for the unreferenced files

    - by ybastiand
    Hi, I have a C# solution that contains three executables, all sharing the same log4net configuration file. At startup each executable retrieves a logger (one logger per executable, as per the configuration file). When one of the executables performs Log.GetLogger(), it creates all the rolling files instead of only the one rolling file that is referred to by the appender-ref in that executable's logger configuration. For instance, when I start my sending daemon, it performs Log.GetLogger("SendingDaemonLogger"), which creates the three files Log/RuleScheduler.txt, Log/NotificationGenerator.txt and Log/NotificationSender.txt instead of only the desired Log/NotificationSender.txt. Then when I start another of the executables, for instance the rule scheduler daemon, that process cannot write to Log/RuleScheduler.txt because the file has been created and locked by the sending daemon process.

    I am guessing that there may be three different solutions to my problem:

    1. GetLogger should only create the rolling file appenders that are referenced in the executable's logger configuration.
    2. I could have one config file per executable; that way each config file would list only one rolling file appender, and starting each executable would not create the rolling files of the other daemons. I am reluctant to do this, however, because some of the configuration (SMTP appender, console appender) is shared between the daemons and I don't want duplicate copies to maintain. Unless there is a way for one config file to include another?
    3. Maybe there is a way to configure the rolling file appender so that concurrent access across processes is allowed? This solution still isn't perfect in my opinion, because each daemon would still be creating the rolling files of the other daemons.

    Thanks in advance for your help! I had difficulties posting the config file properly here (this website interprets it as HTML); please go to the following link to see my log4net configuration file: log4Net configuration file
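    For option 3, log4net does ship a locking model that acquires the file lock only for the duration of each write, which lets several processes append to the same file at the cost of some throughput. A hedged sketch of what such an appender definition might look like (the file name is from the post, the rest is illustrative):

    ```xml
    <appender name="SenderRollingFile" type="log4net.Appender.RollingFileAppender">
      <file value="Log/NotificationSender.txt" />
      <appendToFile value="true" />
      <!-- lock the file per write instead of for the process lifetime -->
      <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
      <rollingStyle value="Size" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
      </layout>
    </appender>
    ```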


  • How do I use a modalViewController Identically in Two Controllers?

    - by Theory
    I'm using the Three20 TTMessageController in my app. I've figured out how to use it, adding a bunch of other stuff (including TTMessageControllerDelegate methods and ABPeoplePickerNavigationControllerDelegate methods). It works great for me, after a bit of a struggle to figure it out. The trouble I'm having now is a design issue: I want to use it identically in two different places, including with the same delegate methods. My current approach is to put all the code into a single class inheriting from NSObject, called ComposerProxy, and have the two controllers that use it go through the proxy, like so:

    ```objc
    ComposerProxy *proxy = [[ComposerProxy alloc] initWithController:self];
    [proxy go];
    ```

    The go method constructs the TTMessageController, configures it, adds it to a UINavigationController, and presents it:

    ```objc
    [self.controller presentModalViewController:navController animated:YES];
    ```

    This works great, as I have all my code nicely encapsulated in ComposerProxy and I need only the above two lines anywhere I want to use it. The downside, though, is that I can't release the proxy variable without getting crashes. I can't autorelease it either: same problem. So I'm wondering if my proxy approach is a poor one. How does one normally encapsulate a bunch of behaviors like this without requiring a lot of duplicate code in the classes that use it? Do I need to add a delegate class to my ComposerProxy and make the controller responsible for dismissing the modal view controller in a hypothetical composerDidFinish method or some such? Many TIA!
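    One conventional shape for this (a sketch of the idea, not code from the post; pre-ARC memory management and hypothetical names assumed): the owner keeps the proxy in a retained property and releases it only when the proxy reports completion through a small delegate protocol, so the proxy stays alive exactly as long as the modal does:

    ```objc
    // Hypothetical protocol: lets the owner know when the modal is done.
    @protocol ComposerProxyDelegate <NSObject>
    - (void)composerProxyDidFinish:(ComposerProxy *)proxy;
    @end

    // In the presenting controller:
    @property (nonatomic, retain) ComposerProxy *composerProxy; // keeps the proxy alive

    - (void)compose {
        self.composerProxy = [[[ComposerProxy alloc] initWithController:self] autorelease];
        self.composerProxy.delegate = self;
        [self.composerProxy go];
    }

    - (void)composerProxyDidFinish:(ComposerProxy *)proxy {
        [self dismissModalViewControllerAnimated:YES];
        self.composerProxy = nil;   // releases the proxy at a safe moment
    }
    ```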


  • Squid handling of concurrent cache misses

    - by Oliver H-H
    We're using a Squid cache to off-load traffic from our web servers, i.e. it's set up as a reverse proxy responding to inbound requests before they hit our web servers. When we get blitzed with concurrent requests for the same resource that's not in the cache, Squid proxies all the requests through to our web ("origin") servers. For us, this behavior isn't ideal: our origin servers get bogged down trying to fulfill N identical requests concurrently. Instead, we'd like the first request to proxy through to the origin server, the rest of the requests to queue at the Squid layer, and then all be fulfilled by Squid once the origin server has responded to that first request. Does anyone know how to configure Squid to do this? We've read through the documentation multiple times and thoroughly web-searched the topic, but can't figure out how to do it.

    We use Akamai too and, interestingly, this is its default behavior. (However, Akamai has so many nodes that we still see lots of concurrent requests in certain traffic-spike scenarios, even with Akamai's super-node feature enabled.) This behavior is clearly configurable for some other caches; e.g., the Ehcache documentation offers the option "Concurrent Cache Misses: A cache miss will cause the filter chain, upstream of the caching filter to be processed. To avoid threads requesting the same key to do useless duplicate work, these threads block behind the first thread." Some folks call this a "blocking cache," since the subsequent concurrent requests block behind the first request until it's fulfilled or timed out. Thx for looking over my noob question! Oliver
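    Squid does have a directive aimed at exactly this (present in Squid 2.6/2.7 and again in 3.5+, but absent from early 3.x releases), so the following one-line squid.conf sketch may be all that's needed:

    ```
    # Queue concurrent cache misses for the same URL behind the first
    # request instead of forwarding each of them to the origin.
    collapsed_forwarding on
    ```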


  • Placement/Positioning/Alignment of UIScrollView w.r.t. length of Title NSString

    - by Shoaibi
    I have a scenario where I show stuff like:

    ```
    -----------------------------------
    titleview (UITextView)
    ______________
    tArea (UIScrollView)
        ttextview (UITextView)
    -----------------------------------
    ```

    Now here is the condition: the length of titleview's text is dynamic; it varies based on user input. Because of this I have trouble placing tArea (the UIScrollView) on the screen: it either appears way below the titleview, or overlaps it.

    Previously what I did: count the characters in titleview.text.length, divide by 27 (the number of characters in one line when using boldSystemFontOfSize:20) and multiply by 10 to get the starting Y of the UIScrollView. But that sucked, because I had to duplicate the code for rotation to landscape.

    What do I have now?

    ```objc
    CGSize titlesize = [title sizeWithFont:[UIFont systemFontOfSize:20]
                         constrainedToSize:CGSizeMake(5, 90)
                             lineBreakMode:UILineBreakModeWordWrap];
    ttitleview = [[UITextView alloc] initWithFrame:CGRectMake(5, 5, 310, titlesize.height)];
    ttitleview.text = title;
    ttitleview.font = [UIFont boldSystemFontOfSize:20];
    ttitleview.backgroundColor = [UIColor clearColor];
    ttitleview.editable = NO;
    [self.view addSubview:ttitleview];

    CGSize textsize = [ttext sizeWithFont:[UIFont systemFontOfSize:20]
                        constrainedToSize:CGSizeMake(5, 350)
                            lineBreakMode:UILineBreakModeWordWrap];
    tArea = [[UIScrollView alloc] initWithFrame:CGRectMake(5, titlesize.height, 310, 230)];
    tArea.contentSize = CGSizeMake(310, textsize.height + 20);
    tArea.pagingEnabled = FALSE;
    tArea.scrollEnabled = TRUE;
    tArea.backgroundColor = [UIColor clearColor];
    [self.view addSubview:tArea];

    ttextview = [[UITextView alloc] initWithFrame:CGRectMake(0, 0, 310, textsize.height + 20)];
    ttextview.text = ttext;
    ttextview.font = [UIFont systemFontOfSize:20];
    ttextview.backgroundColor = [UIColor clearColor];
    ttextview.editable = NO;
    [tArea addSubview:ttextview];
    ```

    But it's no use. I'm looking for a more elegant solution than this.
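    Two details stand out (my reading, not from the post): the first argument of CGSizeMake in the constraint passed to sizeWithFont: is the width the text wraps at, so 5 should be the usable width instead; and the scroll view's Y origin ignores the title's own 5pt offset. A hedged sketch of the fix:

    ```objc
    // Measure against the real wrap width, with the font the title actually
    // uses, and let the height grow as needed.
    CGSize titleSize = [title sizeWithFont:[UIFont boldSystemFontOfSize:20]
                         constrainedToSize:CGSizeMake(310, CGFLOAT_MAX)
                             lineBreakMode:UILineBreakModeWordWrap];
    ttitleview = [[UITextView alloc] initWithFrame:CGRectMake(5, 5, 310, titleSize.height)];

    // Anchor the scroll view to wherever the title actually ends.
    tArea = [[UIScrollView alloc] initWithFrame:
                CGRectMake(5, CGRectGetMaxY(ttitleview.frame) + 5, 310, 230)];
    ```

    Since both frames derive from the measured height, the same code works after rotation with only the widths swapped.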


  • Linking error while using Qt static built libraries

    - by Kamran Amini
    I hope this is not a duplicate. Recently I've been developing a native C++ application using Qt 4.8.3 and VS2008. Since clients run the application on naked machines, they need to install the VC++ 2008 Redistributable package, so I decided to link statically. I changed my project settings (C/C++ > Code Generation > Runtime Library) to /MTd. I also compiled Qt again, this time as a static build, using the following steps (originally found on a blog post, "Static Qt with static CRT (VS 2008)"):

    1. Replace -MD with -MT in the QMAKE_CFLAGS_RELEASE and QMAKE_CFLAGS_DEBUG lines of %QDIR%\mkspecs\win32-msvc2008\qmake.conf
    2. nmake confclean
    3. configure -static -platform win32-msvc2008 -no-webkit
    4. nmake sub-src

    I compiled Qt successfully. But when I then tried to compile my application, it gave me some strange errors:

    ```
    1>Linking...
    1>qtmaind.lib(qtmain_win.obj) : error LNK2005: "public: bool __thiscall QBasicAtomicInt::deref(void)" (?deref@QBasicAtomicInt@@QAE_NXZ) already defined in QtCored4.lib(QtCored4.dll)
    1>qtmaind.lib(qtmain_win.obj) : error LNK2005: "public: bool __thiscall QBasicAtomicInt::operator!=(int)const " (??9QBasicAtomicInt@@QBE_NH@Z) already defined in QtCored4.lib(QtCored4.dll)
    1>qtmaind.lib(qtmain_win.obj) : error LNK2005: "public: __thiscall QString::~QString(void)" (??1QString@@QAE@XZ) already defined in QtCored4.lib(QtCored4.dll)
    ```

    I changed some lib files, but with each change the situation got worse; for example, I tried to use QtCored.lib instead of QtCored4.lib because it is newly created after the static compilation. I think I've missed something in building the static Qt libs. Thanks.
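    Reading the errors (my interpretation, not from the post): qtmaind.lib from the new static build and QtCored4.lib - the import library of the old DLL build - are both on the linker command line, so the same symbols arrive twice. The usual cleanup is to make sure only the static-build libraries are linked, e.g. in Linker > Input > Additional Dependencies:

    ```
    # hypothetical debug-configuration dependencies after a static Qt build
    qtmaind.lib QtCored.lib QtGuid.lib ...    <- static libs (no "4" suffix)
    # remove: QtCored4.lib QtGuid4.lib ...    <- import libs of the DLL build
    ```

    All projects in the solution would also need to agree on /MTd, since mixing /MD and /MT triggers the same class of LNK2005 errors.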


  • % style macros not supported in some C++/CLI project property pages under VS2010?

    - by Dave Foster
    We're currently evaluating VS2010 and have upgraded our VS2008 C++/CLI project to the new .vcxproj format. I've noticed that a certain property we had set in the project settings did not get translated properly. Under Configuration Properties > Managed Resources > Resource Logical Name, we used to have (in VS2008) the setting:

    ```
    $(IntDir)\$(RootNamespace).$(InputName).resources
    ```

    which indicated that all .resx files were to compile into OurLib.SomeForm.resources inside the assembly (the Debug portion is dropped when assembled). According to MSDN, the $(InputName) macro no longer exists and should be replaced with %(Filename). However, after translating the above line to swap those macros, %(Filename) does not seem to ever expand. On the second .resx file it tries to compile, I get:

    ```
    LINK : fatal error LNK1316: duplicate managed resource name 'Debug\OurLib.%(Filename).resources'
    ```

    This indicates to me that the % style macros are not being expanded here, at least in this specific property. If we don't set anything in that property, the default behavior seems to be to add the subdirectory as a prefix, such as OurLib.Forms.SomeForm.resources, where Forms is the subdirectory of our project that the .resx file lives in. This only occurs when the .resx file is in an immediate subdirectory of the project being built; if a .resx file exists somewhere else on disk (e.g. ..\OtherLib\Forms\SomeForm2.resx) this prefix is NOT added. This is causing an issue with loading form resources, as nothing accounts for this possible prefix, even though we are using the standard Forms Designer method of getting at resources:

    ```cpp
    System::ComponentModel::ComponentResourceManager^ resources =
        (gcnew System::ComponentModel::ComponentResourceManager(SomeForm::typeid));
    ```

    and do not specify the .resources file by name. The issue I've just described may not be the same as the original question, but if I were to fix the Resource Logical Name issue I think this would all go away. Does anyone have any information about these % macros and where they are allowed to be used?
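    Background that may explain the symptom (my understanding of MSBuild, not from the post): %(...) refers to item metadata, and it only expands where MSBuild evaluates an item list; a value stored as a plain property never gets per-item expansion. Setting the logical name as metadata on the items themselves, by hand-editing the .vcxproj, is the kind of thing that can work instead. The element names below are illustrative, not a confirmed C++/CLI schema:

    ```xml
    <ItemGroup>
      <EmbeddedResource Include="Forms\SomeForm.resx">
        <LogicalName>$(RootNamespace).%(Filename).resources</LogicalName>
      </EmbeddedResource>
    </ItemGroup>
    ```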


  • Algorithm to detect how many words typed, also multi sentence support (Java)

    - by Alex Cheng
    Hello all. Problem: I have to design an algorithm which does the following for me. Say that I have a line, e.g.

    ```
    alert tcp 192.168.1.1 (caret is currently here)
    ```

    The algorithm should process this line and return a value of 4. I coded something for it; I know it's sloppy, but it works, partly:

    ```java
    private int counter = 0;

    public void determineRuleActionRegion(String str, int index) {
        if (str.length() == 0 || str.indexOf(" ") == -1) {
            triggerSuggestionList(1);
            return;
        }
        // remove duplicate spaces, and spaces in front and back, before searching
        int num = str.trim().replaceAll(" +", " ").indexOf(" ", index);
        // check for occurrences of spaces, recursively
        if (num == -1) { // there is no space
            // no need to check if it's 0 times, it will assign to 1
            triggerSuggestionList(counter + 1);
            counter = 0;
            return; // set to rule action
        } else { // there is a space
            counter++;
            determineRuleActionRegion(str, num + 1);
        }
    }
    ```

    So basically I look for the spaces and determine the region (the number of words typed). However, I want it to update when the user presses the space bar. How can I go about this with the current code? Or better yet, how would one suggest I do it the correct way? I'm considering BreakIterator for this case. To add to that, I believe my algorithm won't work for multiple sentences; how should I address that problem as well?

    The String str comes from textPane.getText(0, pos + 1); (the JTextPane). Thanks in advance. Do let me know if my question is still not specific enough.
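    A non-recursive sketch of the same idea (my own, with a hypothetical method name): normalize the whitespace up to the caret, count the words, and treat a trailing space as the start of a new region. java.text.BreakIterator's getWordInstance()/getSentenceInstance() would be the heavier-duty route if full multi-sentence support is needed.

    ```java
    // Returns which word region the caret is in, given the text before it.
    public static int regionAtCaret(String textUpToCaret) {
        String t = textUpToCaret.replaceAll("\\s+", " ");
        if (t.trim().isEmpty()) {
            return 1;                                // nothing typed yet
        }
        int words = t.trim().split(" ").length;
        return t.endsWith(" ") ? words + 1 : words;  // trailing space opens a new region
    }
    ```

    For example, "alert tcp 192.168.1.1 " (with the trailing space just typed) gives 4, matching the expected value above; calling this from the key listener on each space press keeps the region current.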


  • Passing an instance variable through RJS?

    - by Elliot
    Hey guys, here is my code (roughly):

    books.html.erb

    ```erb
    <% @books.each do |book| %>
      <% @bookid = book.id %>
      <div id="enter_stuff">
        <%= render "input", :bookid => @bookid %>
      </div>
    <% end %>
    ```

    _input.html.erb

    ```erb
    <% @book = Book.find_by_id(@bookid) %>
    <strong>your book is: <%=h @book.name %></strong>
    ```

    create.rjs

    ```ruby
    page.replace_html :enter_stuff, :partial => 'input', :object => @bookid
    ```

    The problem here is that only create.rjs doesn't seem to work. (If I pass "..." instead of the partial, it does work, so I know the issue is that the instance variables used in the partial aren't being set. Any ideas?) So the final question is: how do I pass an instance variable to a partial through the create.rjs file?

    P.S. I know I will have duplicate div IDs; I'm not worrying about that for now. Best, Elliot
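    One way this is commonly untangled (a sketch, not from the post): have the partial read a local instead of an instance variable, and pass the value explicitly from both call sites:

    ```ruby
    # create.rjs - pass the id as a local
    page.replace_html :enter_stuff, :partial => 'input', :locals => { :bookid => @bookid }
    ```

    ```erb
    <%# _input.html.erb - use the local, not @bookid %>
    <% book = Book.find_by_id(bookid) %>
    <strong>your book is: <%=h book.name %></strong>
    ```

    (With :object => @bookid, the partial instead receives a local named input, after the partial itself, which is why @bookid stays nil there.)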


  • How to return proper 404 for google while providing user friendly content to the user?

    - by Marek
    I am bouncing between posting this here and on Superuser; please excuse me if you feel this does not belong here. I am observing the behavior described here: Googlebot is requesting random URLs on my site, like aecgeqfx.html or sutwjemebk.html. I am sure that I am not linking these URLs from anywhere on my site. I suspect this may be Google probing how we handle nonexistent content; to cite an answer to the linked question: "[Google is requesting random URLs to] see if your site correctly handles non-existent files (by returning a 404 response header)".

    We have a custom page for nonexistent content - a styled page saying "Content not found, if you believe you got here by error, please contact us", with a few internal links - served (naturally) with a 200 OK. The URL is served directly (no redirection to a single URL). I am afraid this may hurt the site's standing with Google: they may not interpret the user-friendly page as a 404 Not Found, and may think we are trying to fake something and provide duplicate content. How should I proceed to ensure that Google will not think the site is bogus, while still providing a user-friendly message to users who land on dead links by accident?
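    The usual resolution (general practice, not from the post) is that the friendly page and the 404 status are not mutually exclusive: serve the styled "content not found" HTML body, but with a 404 response code. Under Apache, for example, a local error document keeps the 404 status:

    ```
    # httpd.conf / .htaccess - /not_found.html is a hypothetical local path
    ErrorDocument 404 /not_found.html
    ```

    (The same applies in any framework: render the friendly template while explicitly setting the response status to 404. Crawlers read the status line; users read the page.)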


  • For the professional programmers - do you still write code for fun at home? [closed]

    - by Led
    Possible Duplicate: Do you ever code just for fun?

    I've been working as a "professional" coder for about 11 years (I've just turned 33). When I talk to my colleagues, I find that most of them don't actually program any more in their spare time; 8 (or 10 :)) hours a day at their job is enough for them. A difference between me and them might be that I was always programming for fun (demoscene stuff, etc.), which is how I got into the field, while most of them picked up programming later on (at university or whatever). When I get home my head is always full of ideas, so I usually have a hobby project going on. Is it weird to spend 8 hours a day programming, and then get home, have dinner, and do some more? For me the reasons are simply:

    - ideas: trying stuff out
    - wanting to develop something all by myself, so when it's finished I can claim it as my own victory

    How about you? And if you do, do you have other reasons to do so? Edit: If you've got spare-time projects, it might be fun to tell us a bit about them. :) Spamming a link to your site/hobby project won't be frowned upon here! Edit 2: Vote for this if you want to encourage companies to make monitors that'll give you a nice tan! ;-)


  • php - sort and delete duplicates?

    - by c41122ino
    I've got an array that looks like this:

    ```
    Array (
        [0] => Array ( num => 09989, dis => 20 )
        [1] => Array ( num => 09989, dis => 10 )
        [2] => Array ( num => 56676, dis => 15 )
        [3] => Array ( num => 44533, dis => 20 )
        [4] => Array ( num => 44533, dis => 50 )
    )
    ```

    First, I'm trying to sort the entries by num, and I can't seem to get the usort example from php.net working here; it simply doesn't appear to be sorting. I'm also trying to delete an array element if it is a duplicate (same num) whose dis value is higher than the other one's. So, based on the example above, I'm trying to produce:

    ```
    Array (
        [0] => Array ( num => 09989, dis => 10 )
        [1] => Array ( num => 44533, dis => 20 )
        [2] => Array ( num => 56676, dis => 15 )
    )
    ```

    This is the code from php.net:

    ```php
    function cmp($a, $b) {
        if ($a == $b) {
            return 0;
        }
        return ($a < $b) ? -1 : 1;
    }
    ```
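    The php.net cmp compares whole sub-arrays, which is why nothing appears to sort; the comparison has to look inside each element. A sketch of both steps together (my own, assuming the num values with leading zeros should still compare numerically):

    ```php
    // Keep, for each num, only the entry with the lowest dis...
    $best = array();
    foreach ($arr as $item) {
        $n = $item['num'];
        if (!isset($best[$n]) || $item['dis'] < $best[$n]['dis']) {
            $best[$n] = $item;
        }
    }
    // ...then sort by num (the array keys) and renumber the keys 0,1,2,...
    ksort($best);
    $result = array_values($best);
    ```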


  • How to organize integrity tests and code unit tests?

    - by karlthorwald
    I have several files with code that tests code (using a "unittest" class). Later I found it would be nice to also test database integrity - things like keys having the correct format, and parent and child nodes pointing to each other correctly. I put this into a separate directory tree, and I use the same unittest class for the integrity tests.

    Now I wonder if it really makes sense to keep these separate. To test the integrity of data I often duplicate parts of the code that I use to test the code that handles the data - but it is not the same. The code tests use test databases (which get deleted after each test), while the integrity tests connect to the live data and analyze it. I want to call the integrity tests from cron and send an alarm if something goes wrong in the live database.

    How would you handle that? Are there standards for such a setup? What is your experience? My tendency is to put everything in the same file, but that would result in the code tests also being executed by cron against the production environment.
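    A common split (general practice, not from the post; paths and schedule are hypothetical) is to keep the two suites in separate trees precisely so the scheduler can address them independently:

    ```
    # unit tests: run by developers/CI against throwaway test databases
    #   tests/unit/...
    # integrity checks: read-only against live data, run hourly from cron
    #   tests/integrity/...
    0 * * * * cd /srv/app && python -m unittest discover -s tests/integrity >> /var/log/integrity.log 2>&1
    ```

    (unittest's built-in test discovery ships with Python 2.7+; on older versions a small runner script does the same job.)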


  • How do I define a Calculated Measure in MDX based on a Dimension Attribute?

    - by ShaneD
    I would like to create a calculated measure that sums up only a specific subset of records in my fact table, based on a dimension attribute.

    Given:

    - Dimensions: Date; LedgerLineItem {Charge, Payment, Write-Off, Copay, Credit}
    - Measures: LedgerAmount
    - Relationships: LedgerLineItem is a degenerate dimension of FactLedger

    If I break down LedgerAmount by LedgerLineItem.Type I can easily see how much is charged, paid, credited, etc., but when I do not break it down by LedgerLineItem.Type I cannot easily add the charged, paid, credited, etc. amounts to a pivot table. I would like to create separate calculated measures that sum only specific types (or multiple types) of ledger facts. An example of the desired output would be:

    ```
    | Year  | Charged | Total Paid | Amount - Ledger |
    | 2008  | $1000   | $600       | -$400           |
    | 2009  | $2000   | $1500      | -$500           |
    | Total | $3000   | $2100      | -$900           |
    ```

    I have tried to create the calculated measure a couple of ways, and each one works in some circumstances but not in others. (Before anyone says "do this in ETL": I have already done it in ETL and it works just fine. What I am trying to do, as part of learning to understand MDX better, is to duplicate in MDX what I have done in the ETL, and so far I am unable to.)

    This first attempt works only when ledger type is in the pivot table. It returns the correct amount for the ledger entries (although in this case it is identical to [Amount - ledger]), but when I try to remove type and just get the sum of all ledger entries, it returns unknown:

    ```mdx
    CASE
        WHEN ([Ledger].[Type].CurrentMember = [Ledger].[Type].&[Credit])
          OR ([Ledger].[Type].CurrentMember = [Ledger].[Type].&[Paid])
          OR ([Ledger].[Type].CurrentMember = [Ledger].[Type].&[Held Money: Copay])
        THEN [Measures].[Amount - ledger]
        ELSE 0
    END
    ```

    This second attempt works only when ledger type is not in the pivot table. It always returns the total payment amount, which is incorrect when I am slicing by type: I would expect to see only the credit portion under credit, the paid portion under paid, $0 under charge, etc.:

    ```mdx
    SUM({ ([Ledger].[Type].&[Credit]),
          ([Ledger].[Type].&[Paid]),
          ([Ledger].[Type].&[Held Money: Copay]) },
        [Measures].[Amount - ledger])
    ```

    Is there any way to make this return the correct numbers regardless of whether Ledger.Type is included in my pivot table or not?
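    One construct worth trying (my suggestion, not from the post): the EXISTING keyword restricts a set to the current query context, so the summed set shrinks to whatever Type members are being sliced on, while still covering all three members when Type is absent from the pivot table:

    ```mdx
    SUM(
        EXISTING { [Ledger].[Type].&[Credit],
                   [Ledger].[Type].&[Paid],
                   [Ledger].[Type].&[Held Money: Copay] },
        [Measures].[Amount - ledger]
    )
    ```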


  • Rails: Oracle constraint violation

    - by justinbach
    I'm doing maintenance work on a Rails site that I inherited; it's driven by an Oracle database, and I've got access to both development and production installations of the site (each with its own Oracle DB). I'm running into an Oracle error when trying to insert data on the production site, but not on the dev site:

    ```
    ActiveRecord::StatementInvalid (OCIError: ORA-00001: unique constraint
    (DATABASE_NAME.PK_REGISTRATION_OWNERSHIP) violated: INSERT INTO registration_ownerships
    (updated_at, company_ownership_id, created_by, updated_by, registration_id, created_at)
    VALUES ('2006-05-04 16:30:47', 3, NULL, NULL, 2920, '2006-05-04 16:30:47')):
    /usr/local/lib/ruby/gems/1.8/gems/activerecord-oracle-adapter-1.0.0.9250/lib/active_record/connection_adapters/oracle_adapter.rb:221:in `execute'
    app/controllers/vendors_controller.rb:94:in `create'
    ```

    As far as I can tell (I'm using Navicat as an Oracle client), the DB schema of the dev site is identical to that of the live site. I'm not an Oracle expert; can anyone shed light on why I'd be getting the error in one installation and not the other? Incidentally, both the dev and production registration_ownerships tables are populated with lots of data, including duplicate entries for company_ownership_id (driven by the index PK_REGISTRATION_OWNERSHIP). Please let me know if you need more information to troubleshoot; I'm sorry I haven't given more already, but I just wasn't sure which details would be helpful.
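    A first diagnostic step (my suggestion, not from the post) is to ask Oracle exactly which columns the violated constraint covers, then check whether production already holds a row with the values being inserted:

    ```sql
    -- Which columns make up PK_REGISTRATION_OWNERSHIP?
    SELECT column_name, position
    FROM   all_cons_columns
    WHERE  constraint_name = 'PK_REGISTRATION_OWNERSHIP';

    -- Does the conflicting combination already exist in production?
    SELECT COUNT(*)
    FROM   registration_ownerships
    WHERE  registration_id = 2920
    AND    company_ownership_id = 3;
    ```

    If the key is instead fed by a sequence, the other classic cause is a production sequence whose NEXTVAL has fallen behind the column's MAX after a data import.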


  • Linq-to-SQL: How to shape the data with group by?

    - by Cheeso
    I have an example database; it contains tables for Movies, People and Credits. The Movie table contains a Title and an Id. The People table contains a Name and an Id. The Credits table relates Movies to the People that worked on those Movies, in a particular role. The table looks like this:

    ```sql
    CREATE TABLE [dbo].[Credits] (
        [Id] [int] IDENTITY (1, 1) NOT NULL PRIMARY KEY,
        [PersonId] [int] NOT NULL FOREIGN KEY REFERENCES People(Id),
        [MovieId] [int] NOT NULL FOREIGN KEY REFERENCES Movies(Id),
        [Role] [char] (1) NULL
    )
    ```

    In this simple example, the [Role] column is a single character: by my convention, either 'A' to indicate the person was an actor on that particular movie, or 'D' for director. I'd like to perform a query on a particular person that returns the person's name plus a list of all the movies the person has worked on, and the roles in those movies. If I were to serialize it to JSON, it might look like this:

    ```json
    {
        "name" : "Clint Eastwood",
        "movies" : [
            { "title": "Unforgiven",        "roles": ["actor", "director"] },
            { "title": "Sands of Iwo Jima", "roles": ["director"] },
            { "title": "Dirty Harry",       "roles": ["actor"] },
            ...
        ]
    }
    ```

    How can I write a LINQ-to-SQL query that shapes the output like that? I'm having trouble doing it efficiently. If I use this query:

    ```csharp
    int personId = 10007;
    var persons = from p in db.People
                  where p.Id == personId
                  select new
                  {
                      name = p.Name,
                      movies = (from m in db.Movies
                                join c in db.Credits on m.Id equals c.MovieId
                                where (c.PersonId == personId)
                                select new
                                {
                                    title = m.Title,
                                    role = (c.Role == "D" ? "director" : "actor")
                                })
                  };
    ```

    I get something like this:

    ```json
    {
        "name" : "Clint Eastwood",
        "movies" : [
            { "title": "Unforgiven",        "role": "actor" },
            { "title": "Unforgiven",        "role": "director" },
            { "title": "Sands of Iwo Jima", "role": "director" },
            { "title": "Dirty Harry",       "role": "actor" },
            ...
        ]
    }
    ```

    ...but as you can see, there's a duplicate of each movie for which Eastwood played multiple roles. How can I shape the output the way I want?
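    The shaping usually comes down to a group-by on the inner query (a sketch in the spirit of the original, not tested against the post's exact schema): grouping the credits by movie title collapses the duplicates, and projecting the group yields the roles list.

    ```csharp
    var persons = from p in db.People
                  where p.Id == personId
                  select new
                  {
                      name = p.Name,
                      movies = from c in db.Credits
                               where c.PersonId == personId
                               join m in db.Movies on c.MovieId equals m.Id
                               group c by m.Title into g
                               select new
                               {
                                   title = g.Key,
                                   // one entry per movie, carrying all of its roles
                                   roles = g.Select(c => c.Role == "D" ? "director" : "actor")
                               }
                  };
    ```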


  • Code bacteria: evolving mathematical behavior

    - by Stefano Borini
    It would not be my intention to put a link to my blog here, but I don't have any other way to clarify what I really mean. The article is quite long, and it's in three parts (1, 2, 3), but if you are curious, it's worth the read. A long time ago (5 years, at least) I wrote a Python program which generated "mathematical bacteria". These bacteria are Python objects with a simple opcode-based genetic code. You can feed them a number and they return a number, according to the execution of their code. I generate their genetic codes at random and apply environmental selection to those objects producing results similar to a predefined expected value. Then I let them duplicate, introduce mutations, and evolve them. The result is quite interesting, as their genetic code basically learns how to solve simple equations, even for values outside the training dataset.

    Now, this thing is just a toy. I had time to waste and I wanted to satisfy my curiosity. However, I assume that something along these lines has already been done in terms of research - I am reinventing the wheel here, I hope. Are you aware of more serious attempts at creating in-silico bacteria like the one I programmed?

    Please note that this is not really "genetic algorithms". Genetic algorithms is when you use evolution/selection to improve a vector of parameters against a given scoring function. This is kind of different: I optimize the code, not the parameters, against a given scoring function.


  • XSL-FO: Static content AND Flow content in Region-Body: Possible?

    - by Peterdk
    I have the following problem: I need to use XSL-FO to generate a two-column multi-page document, and I need a vertical line between the two columns. Since XSL-FO does not seem to specify an option for creating such a divider, I have to put it there manually. I was thinking of using a static rotated block-container with a leader in it. However, it looks like it's not possible to use static-content on the same region where the flow content goes:

    ```xml
    <fo:layout-master-set>
      <fo:simple-page-master page-width="170mm" page-height="222mm" master-name="page">
        <fo:region-body region-name="xsl-region-body"
                        margin-top="2mm" margin-bottom="2mm"
                        margin-left="10mm" margin-right="10mm"
                        column-count="2" column-gap="5mm"/>
      </fo:simple-page-master>
    </fo:layout-master-set>

    <fo:page-sequence master-reference="page">
      <fo:static-content flow-name="xsl-region-body"> <!-- This gives an error -->
        <fo:block>test</fo:block>
      </fo:static-content>
      <fo:flow flow-name="xsl-region-body">
        <xsl:apply-templates/>
      </fo:flow>
    </fo:page-sequence>
    ```

    This results in (XEP):

    ```
    [error] Duplicate identifier: flow-name="xsl-region-body". Property 'flow-name' should be unique within 'fo:page-sequence'.
    ```

    Are there any methods to place static content on the main region when flow content is also placed there? Or: is there another way to define the divider between the columns of a 2-column layout?
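    Since a page sequence can assign each region only once, one workaround (a sketch of a common trick, not from the post; the image name is hypothetical) is to paint the divider as a background of the body region, repeated vertically and centered in the column gap:

    ```xml
    <fo:region-body region-name="xsl-region-body"
                    margin-top="2mm" margin-bottom="2mm"
                    margin-left="10mm" margin-right="10mm"
                    column-count="2" column-gap="5mm"
                    background-image="url('column-divider.png')"
                    background-repeat="repeat-y"
                    background-position-horizontal="center"/>
    ```

    The image is a thin vertical strip the color of the rule; because it belongs to the region rather than the flow, it appears on every page without any static-content at all.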


  • Modeling multiple polymorphic relationships using Hibernate

    - by f-potter
    Ruby on Rails has polymorphic relations, which are really useful for implementing functionality such as commenting, tagging and rating, to name a few. We can have a Comment, Tag or Rating class which has a many-to-one polymorphic relationship with a commentable, taggable or rateable object. Also, a given domain object can choose to implement any combination of such relations - it can, for example, be commentable, taggable and rateable at the same time.

    I couldn't think of a straightforward way to duplicate this functionality in Hibernate. Ideally, there would be a Comment class with a many-to-one relationship to a Commentable class, and the Commentable class would conversely have a one-to-many relationship with Comments. It would be ideal if concrete domain classes could inherit from several such classes, say Commentable and Taggable. Things seem a little complicated, since a Java class can only extend one other class, and some code might end up being duplicated across a number of classes. I wanted to know: what are the best practices for modeling such relationships neatly and concisely using Hibernate?
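    For the association itself, Hibernate's @Any mapping is the closest analogue to Rails' polymorphic belongs_to: the child row stores a discriminator plus the target's id, and the "commentable" side can stay a plain interface, so classes remain free to implement several such interfaces at once. A hedged sketch using the org.hibernate.annotations API; the entity names are hypothetical:

    ```java
    import javax.persistence.*;
    import org.hibernate.annotations.Any;
    import org.hibernate.annotations.AnyMetaDef;
    import org.hibernate.annotations.MetaValue;

    @Entity
    public class Comment {
        @Id @GeneratedValue
        private Long id;

        private String body;

        // Polymorphic target: commentable_type says which table commentable_id points into.
        @Any(metaColumn = @Column(name = "commentable_type"))
        @AnyMetaDef(idType = "long", metaType = "string",
                    metaValues = {
                        @MetaValue(targetEntity = Article.class, value = "ARTICLE"),
                        @MetaValue(targetEntity = Photo.class,   value = "PHOTO")
                    })
        @JoinColumn(name = "commentable_id")
        private Commentable commentable;   // Commentable is just an interface
    }
    ```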


  • Generics vs inheritance (when no collection classes are involved)

    - by Ram
    This is an extension of this question, and might even be a duplicate of some other question (if so, please forgive me). I see from MSDN that generics are usually used with collections: "The most common use for generic classes is with collections like linked lists, hash tables, stacks, queues, trees and so on where operations such as adding and removing items from the collection are performed in much the same way regardless of the type of data being stored." The examples I have seen also validate the above statement. Can someone give a valid use of generics in a real-life scenario which does not involve any collections?

    Pedantically, I was thinking about making an example which does not involve collections:

    ```csharp
    public class Animal<T>
    {
        public void Speak()
        {
            Console.WriteLine("I am an Animal and my type is " + typeof(T).ToString());
        }

        public void Eat()
        {
            // eat food
        }
    }

    public class Dog
    {
        public void WhoAmI()
        {
            Console.WriteLine(this.GetType().ToString());
        }
    }
    ```

    and "an Animal of type Dog" would be:

    ```csharp
    Animal<Dog> magic = new Animal<Dog>();
    ```

    It is entirely possible to have Dog inherit from Animal (assuming a non-generic version of Animal), Dog : Animal, and therefore "Dog is an Animal". Another example I was thinking of was a BankAccount: it can be BankAccount<Checking> or BankAccount<Savings>. This could just as well be Checking : BankAccount and Savings : BankAccount. Are there any best practices to determine if we should go with generics or with inheritance?
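    For what it's worth, a standard non-collection example (my illustration, not from the post) is a generic repository: the type parameter expresses "works uniformly for any entity type" rather than an is-a relationship, which no inheritance hierarchy could capture as cleanly:

    ```csharp
    // One persistence contract reused across unrelated entity types.
    public interface IRepository<T> where T : class
    {
        T FindById(int id);
        void Save(T entity);
    }

    // Usage: IRepository<Customer>, IRepository<Invoice>, ...
    ```

    Nullable<T>, Func<T> and Lazy<T> in the BCL are further non-collection uses: generics fit when behavior is identical across types, inheritance when subtypes genuinely specialize behavior.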


  • Non distinct Unique ID in MySQL database table.

    - by Geoff
    First off, a simplified version: I am wondering if I can create a trigger that activates during INSERT (it's actually LOAD DATA INFILE) and does NOT enter records for an RMA already in my table.

    I have a table in which no records are unique. Some may be duplicates, but there is one field I can use to know whether the data has been entered or not. For instance:

    ```
    RMA   Op      Days
    ---------------------
    213   Repair  0.10
    213   Test    0.20
    213   Repair  0.10
    ```

    So I could create an index on the three columns together, but as you can see, it's possible for an RMA to be in a step for the same amount of time twice, so duplicate records are legitimate. This data comes from a report that I cannot edit, and this is all it provides. The key is that an RMA's data appears in the report only once, so if my database already has that RMA in its records, I want to skip loading that RMA's records from the report. By all means please let me know if that didn't make sense and I'll explain as needed. I'm sure it's not uncommon, but I couldn't find anything on the net.
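    A trigger-free sketch of the same goal (my suggestion, with hypothetical table names): load the report into a staging table first, then copy over only the RMAs the main table has never seen.

    ```sql
    -- 1. Load the raw report into a staging table.
    LOAD DATA INFILE '/tmp/report.csv' INTO TABLE staging_steps;

    -- 2. Copy rows only for RMAs not already present in the main table.
    INSERT INTO rma_steps (rma, op, days)
    SELECT s.rma, s.op, s.days
    FROM   staging_steps s
    WHERE  NOT EXISTS (SELECT 1 FROM rma_steps t WHERE t.rma = s.rma);

    -- 3. Clear the staging table for the next report.
    TRUNCATE TABLE staging_steps;
    ```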


  • Refining data stored in SQLite - how to join several contacts?

    - by Krab
    Problem background: imagine a water molecule which is in contact with other molecules (if the contact is a hydrogen bond, there can be 4 other molecules around my water), like in the following picture (A, B, C, D are some other atoms and the dots mean a contact):

    ```
      A   B
       . .
        O
       / \
      H   H
      .   .
      C   D
    ```

    I have the information about all the dots, and I need to eliminate the water in the center and create records describing the contacts A-C, A-D, A-B, B-C, B-D, and C-D.

    Database structure: currently I have the following structure in the database.

    Table atoms:

    ```sql
    "id" integer PRIMARY KEY,
    "amino" char(3) NOT NULL,  -- HOH for water, or some other value
    -- other columns identifying the atom
    ```

    Table contacts:

    ```sql
    "acceptor_id" integer NOT NULL,  -- the atom near my hydrogen, here C or D
    "donor_id" integer NOT NULL,     -- here A or B
    "directness" char(1) NOT NULL,   -- D for direct, W for water-mediated
    -- other columns about the contact, such as the distance
    ```

    Current solution (insufficient): I go through all the contacts whose donor.amino = 'HOH'. In this sample case, that selects the contacts from C and D. For each of these selected contacts, I look up the contacts having the same acceptor_id as the donor_id of the currently selected contact, and from this information I create the new contact. At the end, I delete all contacts to or from HOH. This way I am obviously unable to create the C-D and A-B contacts (the other 4 are OK). If I try a similar approach - looking for two contacts with the same donor_id - I end up with duplicate contacts (C-D and D-C).

    Is there a simple way to retrieve all six contacts without duplicates? I'm dreaming of a one-page-long SQL query that retrieves just these six wanted rows. :-) It is preferable to conserve the information about who is the donor where possible, but that is not strictly necessary. Big thanks to all of you who read this question to this point.
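    A sketch of such a query (mine, not from the post): first collect, per water atom, every neighbour on either side of a contact, then join that set to itself with an ordering condition so each pair comes out exactly once. This form needs a SQLite new enough for WITH (3.8.3+); on older versions the subquery can simply be inlined twice.

    ```sql
    WITH neighbours AS (
        SELECT c.donor_id AS water_id, c.acceptor_id AS atom_id
        FROM contacts c JOIN atoms w ON w.id = c.donor_id AND w.amino = 'HOH'
        UNION
        SELECT c.acceptor_id, c.donor_id
        FROM contacts c JOIN atoms w ON w.id = c.acceptor_id AND w.amino = 'HOH'
    )
    SELECT n1.water_id, n1.atom_id AS atom_a, n2.atom_id AS atom_b
    FROM neighbours n1
    JOIN neighbours n2
      ON n1.water_id = n2.water_id
     AND n1.atom_id  < n2.atom_id;   -- "<" keeps C-D but drops the mirror D-C
    ```

    With 4 neighbours per water this yields the expected 6 pairs. (Donor/acceptor information is lost at the pairing step, which matches the "preferable but not strictly necessary" constraint.)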


  • How do I exclude data from local table schema_migrations from being pushed to Heroku DB?

    - by Thierry Lam
    I was able to push my Ruby on Rails app with MySQL (local dev) to the Heroku server, along with migrating my model with the command heroku rake db:migrate. I have also read the documentation on Database Import/Export. Is that doc referring to pushing actual data from my local dev DB to Heroku's DB? Do I need to modify anything in database.yml to make that happen? I ran the command heroku db:push and I am getting the error:

    ```
    Sending data
    2 tables, 3 records
    !!! Caught Server Exception | ETA: --:--:--
    Taps Server Error: PGError ERROR: duplicate key value violates unique constraint "unique_schema_migrations"
    ```

    I have 2 tables: one I created for my app, and schema_migrations. The total number of entries in the 2 tables is 3. I'm also printing the number of entries in the table I created, and it's showing 0. Any ideas what I might be missing or what I am doing wrong?

    Edit: I figured out the above - Heroku's DB already had schema_migrations the moment I ran migrate. New question: does anyone know how I can exclude the data of a specific table from being pushed to the Heroku DB? The table to exclude in this case would be schema_migrations.

    Not-so-good solution: I googled around and someone else was having the same issue. He suggested renaming the schema_migrations table to zschema_migrations. That way, data from the other tables will be pushed properly until the process fails on the last table. It's a pretty bad solution, but it will do for the time being. A better solution would be an existing Rails command that can reset a specific table in a database; I don't think rake can do that.

