Search Results

Search found 1334 results on 54 pages for 'constraint satisfaction'.

Page 48 of 54

  • Oracle user-defined aggregate function for varray of varchar

    - by baju
    I am trying to write an aggregate function for a varray, and I get this error when I try to use it with data from the DB:

        ORA-00600 internal error code, arguments: [kodpunp1], [], [], [], [], [], [], [], [], [], [], []
        [koxsihread1], [0], [3989], [45778], [], [], [], [], [], [], [], []

    The code of the function is really simple (in fact it does nothing):

        create or replace TYPE "TEST_VECTOR" as varray(10) of varchar(20)

        ALTER TYPE "TEST_VECTOR" MODIFY LIMIT 4000 CASCADE

        create or replace type Test as object(
            lastVector TEST_VECTOR,
            STATIC FUNCTION ODCIAggregateInitialize(sctx in out Test) return number,
            MEMBER FUNCTION ODCIAggregateIterate(self in out Test, value in TEST_VECTOR) return number,
            MEMBER FUNCTION ODCIAggregateMerge(self IN OUT Test, ctx2 IN Test) return number,
            MEMBER FUNCTION ODCIAggregateTerminate(self IN Test, returnValue OUT TEST_VECTOR, flags IN number) return number
        )

        create or replace type body Test is
            STATIC FUNCTION ODCIAggregateInitialize(sctx in out Test) return number is
            begin
                sctx := Test(TEST_VECTOR());
                return ODCIConst.Success;
            end;
            MEMBER FUNCTION ODCIAggregateIterate(self in out Test, value in TEST_VECTOR) return number is
            begin
                self.lastVector := value;
                return ODCIConst.Success;
            end;
            MEMBER FUNCTION ODCIAggregateMerge(self IN OUT Test, ctx2 IN Test) return number is
            begin
                return ODCIConst.Success;
            end;
            MEMBER FUNCTION ODCIAggregateTerminate(self IN Test, returnValue OUT TEST_VECTOR, flags IN number) return number is
            begin
                returnValue := self.lastVector;
                return ODCIConst.Success;
            end;
        end;

        create or replace FUNCTION test_fn (input TEST_VECTOR) RETURN TEST_VECTOR
            PARALLEL_ENABLE AGGREGATE USING Test;

    Next I create some test data:

        create table t1_test_table(
            t1_id number not null,
            t1_value TEST_VECTOR not null,
            Constraint PRIMARY_KEY_1 PRIMARY KEY (t1_id)
        )

    The next step is to put some data into the table:

        insert into t1_test_table (t1_id, t1_value) values (1, TEST_VECTOR('x','y','z'))

    Now everything is prepared to perform queries:

        Select test_fn(TEST_VECTOR('y','x')) from dual

    The query above works well.

        Select test_fn(t1_value) from t1_test_table where t1_id = 1

    This one raises the ORA-00600 above. The Oracle DBMS version I use is 11.2.0.3.0. Has anyone tried to do such a thing? What could be the reason it does not work, and how can it be solved? Thanks in advance for the help.

  • What are the constraints on Cocoa Framework version numbers?

    - by Joe
    We're distributing a Cocoa framework with regular updates, and we'll be updating the version numbers with each release. The Apple documentation appears to suggest that version numbers should be consecutive incrementing integers.

    We distribute output in several formats, and the framework is only one of them, so we would rather not maintain a separate numbering system just for our frameworks. We don't really care about the precise format of the framework's version numbers, so long as they change whenever the product changes and behave in a correct and expected manner. I'm looking for a sensible and correct way of avoiding a separate version-number counter. One suggestion is that for product version 12.34.56 we could simply remove the dots and say the framework version is 123456.

    Is there a constraint on the type of number that can be represented (uint? long?) Does it have to be a number? Could it be a string? Do the numbers have to be consecutive? Is there a standard way of doing things in this situation?

  • Dynamic Variable Names in Included Module in Ruby?

    - by viatropos
    I'm hoping to implement something like all of the great plugins out there for Ruby, so that you can do this:

        acts_as_commentable
        has_attached_file :avatar

    But I have one constraint: that helper method can only include a module; it can't define any variables or methods. Here's what the structure looks like, and I'm wondering if you know the missing piece in the puzzle:

        # 1 - The workhorse, encapsulating all dynamic variables
        module My::Module
          def self.included(base)
            base.extend ClassMethods
            base.class_eval do
              include InstanceMethods
            end
          end

          module InstanceMethods
            self.instance_eval %Q?
              def #{options[:my_method]}
                "world!"
              end
            ?
          end

          module ClassMethods
          end
        end

        # 2 - all this does is define that helper method
        module HelperModule
          def self.included(base)
            base.extend(ClassMethods)
          end

          module ClassMethods
            def dynamic_method(options = {})
              include My::Module(options)
            end
          end
        end

        # 3 - send it to active_record
        ActiveRecord::Base.send(:include, HelperModule)

        # 4 - what it looks like
        class TestClass < ActiveRecord::Base
          dynamic_method :my_method => "hello"
        end

        puts TestClass.new.hello #=> "world!"

    That %Q? I'm not totally sure how to use, but I'm basically just wanting to somehow be able to pass the options hash from that helper method into the workhorse module. Is that possible? That way, the workhorse module could define all sorts of functionality, but I could name the variables whatever I wanted at runtime.

  • Check if DateTime in specific range

    - by katit
    I need to check if a DateTime is in a specific range. I think I need to calculate, knowing the YEAR, the first and last date of DST in that year. How would I figure out the "Sunday of week 2 of March" date?

        From 1/1/2007 12:00:00 AM to 12/31/9999 12:00:00 AM
        Begins at 2:00 AM on Sunday of week 2 of March
        Ends at 2:00 AM on Sunday of week 1 of November

    For example, I need to check if 11/21/2011 is between Sunday of week 2 of March and Sunday of week 1 of November - the answer should be NO. If I pass 8/8/2011, the answer should be yes.

    Basically, I need to write a function to check if my date belongs to Daylight Saving Time. My only idea so far is to write loops to find, for example, the 2nd week: I would loop from day 1 of March until I hit Sunday the second time. The same way, I would loop (incrementing days by 1) from day 1 of November until I hit Sunday the first time. In other words, I need a function to check if the input date is in the Daylight Saving Time period, defined by the constraint above.

    P.S. I can't use TimeZoneInfo since it's in Silverlight.
    P.P.S. I can't use DateTime.IsDaylightSavingTime as I don't have times with kind "local".
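
    A minimal sketch of the loop-free route (C#, not production code): the day-of-week of the 1st of the month gives the offset to the first Sunday directly, and "week N" is just 7 * (N - 1) days later.

        using System;

        static class DstRange
        {
            // Date of the nth Sunday of a month (n is 1-based).
            static DateTime NthSunday(int year, int month, int n)
            {
                var first = new DateTime(year, month, 1);
                int offset = ((int)DayOfWeek.Sunday - (int)first.DayOfWeek + 7) % 7;
                return first.AddDays(offset + 7 * (n - 1));
            }

            // True if d falls between 2 AM on the 2nd Sunday of March and
            // 2 AM on the 1st Sunday of November of d's year.
            public static bool IsInDstRange(DateTime d)
            {
                DateTime start = NthSunday(d.Year, 3, 2).AddHours(2);
                DateTime end = NthSunday(d.Year, 11, 1).AddHours(2);
                return d >= start && d < end;
            }
        }

    With this, IsInDstRange(new DateTime(2011, 11, 21)) is false and IsInDstRange(new DateTime(2011, 8, 8)) is true, matching the examples above.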

  • symfony + doctrine + inheritance, how to make them work?

    - by imac
    I am beginning to work with Symfony, and I've found some documentation about inheritance, but I also found this discouraging article, which makes me doubt whether Doctrine handles inheritance well at all... Has anyone found a smart solution for inheritance in Symfony + Doctrine?

    As an example, I have already structured the database something like this:

        CREATE TABLE `poster` (
          `poster_id` int(11) NOT NULL AUTO_INCREMENT,
          `user_name` varchar(50) NOT NULL,
          PRIMARY KEY (`poster_id`),
          UNIQUE KEY `id` (`poster_id`),
        ) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1;

        CREATE TABLE `user` (
          `user_id` int(11) NOT NULL,
          `real_name` varchar(50) DEFAULT NULL,
          PRIMARY KEY (`user_id`),
          UNIQUE KEY `user_id` (`user_id`),
          CONSTRAINT `user_fk` FOREIGN KEY (`user_id`) REFERENCES `poster` (`poster_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    From that, Doctrine generated this "schema.yml":

        Poster:
          connection: doctrine
          tableName: poster
          columns:
            poster_id:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: true
              autoincrement: true
            user_name:
              type: string(50)
              fixed: false
              unsigned: false
              primary: false
              notnull: true
              autoincrement: false
          relations:
            Post:
              local: poster_id
              foreign: poster_id
              type: many
            User:
              local: poster_id
              foreign: user_id
              type: many
            Version:
              local: poster_id
              foreign: poster_id
              type: many

        User:
          connection: doctrine
          tableName: user
          columns:
            user_id:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: true
              autoincrement: false
            real_name:
              type: string(50)
              fixed: false
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
          relations:
            Poster:
              local: user_id
              foreign: poster_id
              type: one

    User creation for this structure with Doctrine's auto-generated forms does not work. Any clue will be appreciated.
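
    For reference, a sketch of how Doctrine 1.x schema files declare inheritance directly, rather than via hand-written foreign keys (an untested guess at how it might apply here, with column details trimmed):

        Poster:
          columns:
            poster_id:
              type: integer(4)
              primary: true
              autoincrement: true
            user_name: string(50)

        User:
          inheritance:
            extends: Poster
            type: concrete
          columns:
            real_name: string(50)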

  • Batch processing JDBC

    - by Wai Hein
    I am practicing JDBC batch processing and am getting errors:

        error 1: Unsupported feature
        error 2: Execute cannot be empty or null

    The property file includes:

        itemsdao.updateBookName = Update Books set bookname = ? where books.id = ?
        itemsdao.updateAuthorName = Update books set authorname = ? where books.id = ?

    I know I could execute both DML statements in one update, but I am practicing batch processing in JDBC. Below is my method:

        public void update(Item item) {
            String query = null;
            try {
                connection = DbConnector.getConnection();
                property = SqlPropertiesLoader.getProperties("dml.properties");
                connection.setAutoCommit(false);
                if (property == null) {
                    Logging.log.debug("dml.properties does not exist. Check property loader or file name is spelled right");
                    return;
                }
                query = property.getProperty("itemsdao.updateBookName");
                statement = connection.prepareStatement(query);
                statement.setString(1, item.getBookName());
                statement.setInt(2, item.getId());
                statement.addBatch(query);

                query = property.getProperty("itemsdao.updateAuthorName");
                statement = connection.prepareStatement(query);
                statement.setString(1, item.getAuthorName());
                statement.setInt(2, item.getId());
                statement.addBatch(query);

                statement.executeBatch();
                connection.commit();
            } catch (ClassNotFoundException e) {
                Logging.log.error("Connection class does not exist", e);
            } catch (SQLException e) {
                Logging.log.error("Violating PK constraint", e);
            } finally { // helper class
                DbUtil.close(connection);
                DbUtil.closePreparedStatement(statement);
            }
        }
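
    For what it's worth, a sketch of the usual pattern: addBatch(String) is a Statement method for batching raw SQL and typically raises "unsupported feature" on a PreparedStatement; the parameterized form is the no-argument addBatch(), and each distinct SQL string needs its own PreparedStatement.

        // Sketch, reusing the update from dml.properties: bind parameters,
        // then call the no-arg addBatch() once per parameter set.
        PreparedStatement ps = connection.prepareStatement(
                "Update Books set bookname = ? where books.id = ?");
        ps.setString(1, item.getBookName());
        ps.setInt(2, item.getId());
        ps.addBatch();                    // note: no SQL argument here

        ps.setString(1, "another title"); // same statement, next parameter set
        ps.setInt(2, 42);
        ps.addBatch();

        int[] counts = ps.executeBatch();
        connection.commit();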

  • Hibernate many-to-one - bad usage?

    - by DaveA
    Just trying out Hibernate (with Annotations) and I'm having problems with my mappings. I have two entity classes, AudioCD and Artist:

        @Entity
        public class AudioCD implements CatalogItem {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private int id;
            private String title;

            @ManyToOne(cascade = { CascadeType.ALL }, optional = false)
            private Artist artist;
            ....
        }

        @Entity
        @Table(uniqueConstraints = { @UniqueConstraint(columnNames = { "name" }) })
        public class Artist {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private int id;

            @Column(nullable = false)
            private String name;
            .....
        }

    I get AudioCD objects from an external source. When I try to persist the AudioCD, the Artist gets persisted as well, just like I want to happen. But if I try persisting another, different CD whose Artist already exists, I get errors due to constraint violations. I want Hibernate to recognise that the Artist already exists and shouldn't be inserted again.

    Can this be done via annotations? Or do I have to manage the persistence of the AudioCD and Artist separately?
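
    Annotations alone can't express "insert if missing, otherwise reuse": a cascade only saves whatever object graph it is handed. A common workaround is to resolve the artist by its unique name before persisting (a sketch, assuming a Hibernate Session is at hand):

        // Reuse an existing Artist row (matched by the unique "name" column)
        // instead of letting the cascade insert a duplicate.
        Artist existing = (Artist) session
                .createQuery("from Artist a where a.name = :name")
                .setParameter("name", cd.getArtist().getName())
                .uniqueResult();
        if (existing != null) {
            cd.setArtist(existing);  // managed row: no second insert
        }
        session.save(cd);            // brand-new artists still cascade normally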

  • How can data templates in generic.xaml get applied automatically?

    - by Thiado de Arruda
    I have a custom control with a ContentPresenter that will have an arbitrary object set as its content. This object does not have any constraint on its type, so I want this control to display its content based on any data templates defined by the application, or on data templates defined in Generic.xaml.

    If in an application I define some data template (without a key, because I want it to be applied automatically to objects of that type) and I use the custom control bound to an object of that type, the data template gets applied automatically. But I have some data templates defined for some types in the Generic.xaml where I define the custom control style, and these templates are not getting applied automatically. Here is the generic.xaml:

    If I set an object of type 'PredefinedType' as the content in the ContentPresenter, the data template does not get applied. However, it works if I define the data template in the App.xaml of the application that's using the custom control.

    Does someone have a clue? I really can't assume that the user of the control will define this data template, so I need some way to tie it up with the custom control.

  • 2-column table with two foreign keys. Performance/design question.

    - by Emanuel
    Hello everyone! I recently ran into a quite complex problem, and after looking around a lot I couldn't find a solution to it. I've found answers to my questions many times before on stackoverflow.com, so I decided to post here.

    I'm making a user/group management system for a web-based project, and I'm storing all related data in a PostgreSQL database. This system relies on three tables: USERS, GROUPS, and GROUP_USERS. The first two simply define all the users and all the groups on the site, and the last table, GROUP_USERS, stores the groups every user is part of. It only has two columns:

        USER_ID
        GROUP_ID

    Since every user can be a member of several groups, I decided to make a separate table for this purpose rather than storing a comma-separated column in the USERS table.

    Now, both columns are foreign keys, and I want to make them both primary keys as well, since each combination of USER_ID and GROUP_ID has to be unique; if I just give them the constraint UNIQUE, pgAdmin tells me that each table should have at least one primary key. But now I am stuck with what seems to be a lot of indexes and relations for a very small table containing only numbers. In the end, I want this table to be as fast as possible, even if it contains tens of thousands of rows. Size on disk shouldn't be a problem since it's just all numbers anyway, but it feels quite stupid to have a full-sized index referring to a smaller table.

    Should I stick with my current solution, store comma-separated values in a column in the USERS table, or is there any other solution I should be aware of?

    PS. I don't want to use an array column, even if they are supported by PostgreSQL. I want to be as generic as possible so I can switch databases later on, if necessary.

    EDIT: In other words, will using a compound primary key and two foreign keys in one table with only two columns have a negative impact on performance, rather than the opposite, due to the size of the generated index? Thank you!
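
    For reference, a sketch of the conventional shape for such a join table (names assumed from the description); this is the standard answer rather than anything exotic:

        -- The composite primary key doubles as the uniqueness constraint and
        -- as the index for user -> groups lookups; one extra index covers the
        -- reverse (group -> users) direction.
        CREATE TABLE group_users (
            user_id  integer NOT NULL REFERENCES users (user_id),
            group_id integer NOT NULL REFERENCES groups (group_id),
            PRIMARY KEY (user_id, group_id)
        );

        CREATE INDEX idx_group_users_group ON group_users (group_id);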

  • Why does this javascript code have an infinite loop?

    - by asdas
    optionElements is a 2D array. Each element is an array of length 2: an integer number and an element. I have a select list called linkbox, and I want to add all of the elements to the select list. The order they go in is important and is determined by the number each element has, smallest to highest. So think of it like this:

        optionElements is:
        [ [5, <option>], [3, <option>], [4, <option>], [1, <option>], [2, <option>] ]

    and it would add them to linkbox in order of those numbers. BUT that is not what happens: it is an infinite loop after the first time. I added the x constraint just to stop it from freezing my browser, but you can ignore it.

        var b;
        var smallest;
        var samllestIndex;
        var x = 0;
        while (optionElements.length > 0 && ++x < 100) {
            smallestIndex = 0;
            smallest = optionElements[0][0];
            b = 0;
            while (++b < optionElements.length) {
                if (optionElements[b][0] > smallest) {
                    smallestIndex = b;
                    smallest = optionElements[b][0];
                }
            }
            linkbox.appendChild(optionElements[smallestIndex][1]);
            optionElements.unshift(optionElements[smallestIndex]);
        }

    Can someone point out to me where my problem is?
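
    For contrast, a sketch of one fix: the loop never shrinks the array (unshift adds the found element again instead of removing it), and the > comparison tracks the largest value, not the smallest. Sorting once sidesteps both problems:

        // Sort the pairs by their numeric key, then append in that order.
        optionElements
            .sort(function (a, b) { return a[0] - b[0]; })
            .forEach(function (pair) { linkbox.appendChild(pair[1]); });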

  • Codeigniter: how should I restructure db schema?

    - by Kevin Brown
    I don't even know if that's the right term. May it be known that I'm a major novice!

    I have three tables: users, profiles, and survey. Each one has user_id as its first field (auto-increment for users), and they're all tied by a foreign key constraint, CASCADE on DELETE. Currently each user, let's say user_id 1, has a corresponding entry in the other tables: for profiles it lists all their information, and the survey table holds all their survey information.

    Now I must change things... darn scope creep. Users need the ability to have multiple survey results. I imagine that this would be similar to a comment table for a blog... My entire app runs around the idea that a single user is linked to one constraining profile and survey.

    How should I structure my db? How should I design my app's db so that a user can have multiple tests/profiles for the test? Please assist! Any advice, information and personal knowledge is appreciated!

    Right now the only way I know how to accommodate my client is to create a pseudo-user for each test (so unnecessary) and list them in a view table (called "your tests") -- these are obtained from the db by saying: where user_id=manager_id
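
    A sketch of the usual restructuring (MySQL-style; the table and column names are made up for illustration): give each survey its own primary key and keep user_id as a plain foreign key, so one user can own many survey rows and no pseudo-users are needed:

        CREATE TABLE surveys (
            survey_id INT AUTO_INCREMENT PRIMARY KEY,
            user_id   INT NOT NULL,
            taken_at  DATETIME NOT NULL,
            -- per-survey answer columns go here
            FOREIGN KEY (user_id) REFERENCES users (user_id) ON DELETE CASCADE
        );

    The "your tests" view then becomes a simple select on this table filtered by the logged-in user's id.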

  • Automatically converting an A* into a B*

    - by Xavier Nodet
    Hi, suppose I'm given a class A. I would like to wrap pointers to it in a small class B, some kind of smart pointer, with the constraint that a B* is automatically converted to an A*, so that I don't need to rewrite the code that already uses A*. I would therefore want to modify B so that the following compiles...

        struct A {
            void foo() {}
        };

        template <class K>
        struct B {
            B(K* k) : _k(k) {}
            //operator K*() {return _k;}
            //K* operator->() {return _k;}
        private:
            K* _k;
        };

        void doSomething(A*) {}

        void test() {
            A a;
            A* pointer_to_a (&a);
            B<A> b (pointer_to_a);
            //b->foo();        // I don't need those two...
            //doSomething(b);
            B<A>* pointer_to_b (&b);
            pointer_to_b->foo();       // 'foo' : is not a member of 'B<K>'
            doSomething(pointer_to_b); // 'doSomething' : cannot convert parameter 1 from 'B<K> *' to 'A *'
        }

    Note that B inheriting from A is not an option (instances of A are created in factories out of my control)... Is it possible? Thanks.
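
    A sketch of why this can't work as posed, plus the usual fallback: user-defined conversions apply to class objects, never to raw pointer types like B<A>*, so the conversions have to fire on a B value or reference; that is exactly what the two commented-out operators provide:

        #include <iostream>

        struct A { void foo() { std::cout << "foo\n"; } };

        template <class K>
        struct B {
            explicit B(K* k) : _k(k) {}
            operator K*() const { return _k; }    // enables doSomething(b)
            K* operator->() const { return _k; }  // enables b->foo()
        private:
            K* _k;
        };

        void doSomething(A*) {}

        int main() {
            A a;
            B<A> b(&a);
            b->foo();        // via operator->
            doSomething(b);  // via operator K*
        }

    Code that insists on holding an A* can still be handed one, but only through a B object (or a function returning one), never through a B*.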

  • Simple aggregating query very slow in PostgreSql, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ... // other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties).

    I want to do aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but it still seems too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files as f, properties as pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow, or how to make it faster?
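
    One observation a reader can make from the plan (a guess, but a common culprit): the seq scan on files returns 805k rows yet takes 62 of the 74 seconds and is costed for 1.14M rows, which usually points at dead-row bloat and stale statistics. The standard first move before any schema change:

        -- Reclaim dead rows and refresh the planner's statistics,
        -- then re-run EXPLAIN ANALYZE and compare.
        VACUUM ANALYZE files;
        VACUUM ANALYZE properties;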

  • Is this method of static file serving safe in node.js? (potential security hole?)

    - by MikeC8
    I want to create the simplest node.js server to serve static files. Here's what I came up with:

        fs = require('fs');
        server = require('http').createServer(function(req, res) {
            res.end(fs.readFileSync(__dirname + '/public/' + req.url));
        });
        server.listen(8080);

    Clearly this would map http://localhost:8080/index.html to project_dir/public/index.html, and similarly so for all other files.

    My one concern is that someone could abuse this to access files outside of project_dir/public. Something like this, for example:

        http://localhost:8080/../../sensitive_file.txt

    I tried this a little bit, and it wasn't working. But it seems like my browser was removing the ".." itself, which leads me to believe that someone could abuse my poor little node.js server.

    I know there are npm packages that do static file serving, but I'm actually curious to write my own here. So my questions are: Is this safe? If so, why? If not, why not? And further, if not, what is the "right" way to do this? My one constraint is that I don't want an if clause for each possible file; I want the server to serve whatever files I throw in a directory.
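
    It isn't safe: the browser normalizes "../" away before sending the request, but a raw client (curl, a hand-written socket) will not. A sketch of the usual guard, resolving the path and refusing anything that escapes the public root:

        var fs = require('fs'),
            http = require('http'),
            path = require('path');

        var root = path.join(__dirname, 'public');

        http.createServer(function (req, res) {
            // Resolve "../" segments, then verify we are still under root.
            var file = path.normalize(path.join(root, req.url));
            if (file.indexOf(root) !== 0) {
                res.statusCode = 403;
                return res.end('Forbidden');
            }
            fs.readFile(file, function (err, data) {
                if (err) { res.statusCode = 404; return res.end('Not found'); }
                res.end(data);
            });
        }).listen(8080);

    The async readFile also avoids blocking the event loop the way readFileSync does on every request.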

  • hibernate: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException

    - by user121196
    When saving an object to the database using Hibernate, it sometimes fails because certain fields of the object exceed the maximum varchar length defined in the database. Therefore I am using the following approach:

    1. attempt to save;
    2. on a DataException, truncate the fields in the object to the max length specified in the db definition, then try to save again.

    However, in the second save after truncation, I'm getting the following exception:

        hibernate: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException:
        Cannot add or update a child row: a foreign key constraint fails

    Here's the relevant code. What's wrong?

        public static void saveLenientObject(Object obj) {
            try {
                save2(rec);
            } catch (org.hibernate.exception.DataException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
                saveLenientObject(rec, e);
            } catch (Exception e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }

        private static void saveLenientObject(Object rec, DataException e) {
            Util.truncateObject(rec);
            System.out.println("after truncation ");
            save2(rec);
        }

        public static void save2(Object obj) throws Exception {
            try {
                beginTransaction();
                getSession().save(obj);
                commitTransaction();
            } catch (Exception e) {
                e.printStackTrace();
                rollbackTransaction();
                //closeSession();
                throw e;
            } finally {
                closeSession();
            }
        }

  • Why in the world is this Moq + NUnit test failing?

    - by Dave Falkner
    I have this dataAccess mock object and I'm trying to verify that one of its methods is being invoked, and that the argument passed into this method fulfills certain constraints. As best I can tell, this method is indeed being invoked, and with the constraints fulfilled. This line of the test throws a MockException:

        data.Verify(d => d.InsertInvoice(It.Is<Invoice>(i => i.TermPaymentAmount == 0m)), Times.Once());

    However, removing the constraint and accepting any invoice passes the test:

        data.Verify(d => d.InsertInvoice(It.IsAny<Invoice>()), Times.Once());

    I've created a test windows form that instantiates this test class, runs its .Setup() method, and then calls the method I wish to test. I put a breakpoint on the line of code where the mock object is failing the test,

        data.InsertInvoice(invoice);

    to actually hover over the invoice, and I can confirm that its .TermPaymentAmount decimal property is indeed zero at the time the method is invoked. Out of desperation, I even added a callback to my dataAccess mock:

        data.Setup(d => d.InsertInvoice(It.IsAny<Invoice>()))
            .Callback((Invoice inv) => System.Windows.Forms.MessageBox.Show(inv.TermPaymentAmount.ToString("G17")));

    And this gives me a message box showing "0". This is really baffling me, and no one else in my shop has been able to figure this out. Any help would be appreciated.

    A barely related question, which I should probably ask independently: is it preferable to use Mock.Verify(...) as I have here, or to use Mock.Expect(...).Verifiable followed by Mock.VerifyAll() as I have seen other people do? If the answer is situational, which situations warrant the use of one over the other?
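
    One explanation consistent with these symptoms (a guess, not a diagnosis): Moq records a reference to the Invoice, and the It.Is<> predicate runs when Verify is called, so if the code under test mutates TermPaymentAmount after calling InsertInvoice, the later check sees the new value even though it was 0 at call time. Capturing the value inside the callback pins down the moment of the call:

        decimal captured = -1m;  // sentinel
        data.Setup(d => d.InsertInvoice(It.IsAny<Invoice>()))
            .Callback((Invoice inv) => captured = inv.TermPaymentAmount);

        // ...run the code under test...

        data.Verify(d => d.InsertInvoice(It.IsAny<Invoice>()), Times.Once());
        Assert.AreEqual(0m, captured);  // the value as it was when called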

  • Pass Arguments to Included Module in Ruby?

    - by viatropos
    I'm hoping to implement something like all of the great plugins out there for Ruby, so that you can do this:

        acts_as_commentable
        has_attached_file :avatar

    But I have one constraint: that helper method can only include a module; it can't define any variables or methods. The reason for this is that I want the options hash to define something like type, and that could be converted into one of, say, 20 different 'workhorse' modules, all of which I could sum up in a line like this:

        def dynamic_method(options = {})
          include ("My::Helpers::#{options[:type].to_s.camelize}").constantize(options)
        end

    Then those 'workhorses' would handle the options, doing things like:

        has_many "#{options[:something]}"

    Here's what the structure looks like, and I'm wondering if you know the missing piece in the puzzle:

        # 1 - The workhorse, encapsulating all dynamic variables
        module My::Module
          def self.included(base)
            base.extend ClassMethods
            base.class_eval do
              include InstanceMethods
            end
          end

          module InstanceMethods
            self.instance_eval %Q?
              def #{options[:my_method]}
                "world!"
              end
            ?
          end

          module ClassMethods
          end
        end

        # 2 - all this does is define that helper method
        module HelperModule
          def self.included(base)
            base.extend(ClassMethods)
          end

          module ClassMethods
            def dynamic_method(options = {})
              # don't know how to get options through!
              include My::Module(options)
            end
          end
        end

        # 3 - send it to active_record
        ActiveRecord::Base.send(:include, HelperModule)

        # 4 - what it looks like
        class TestClass < ActiveRecord::Base
          dynamic_method :my_method => "hello"
        end

        puts TestClass.new.hello #=> "world!"

    That %Q? I'm not totally sure how to use, but I'm basically just wanting to somehow be able to pass the options hash from that helper method into the workhorse module. Is that possible? That way, the workhorse module could define all sorts of functionality, but I could name the variables whatever I wanted at runtime.
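
    A sketch of one way to close the gap (the module-builder pattern; names follow the question): make My::Module a method that returns a freshly built anonymous module, so the options hash is still in scope when the methods get defined:

        module My
          # A method named like a constant, so that My::Module(options) parses.
          def self.Module(options = {})
            Module.new do
              define_method(options[:my_method]) { "world!" }
            end
          end
        end

        module HelperModule
          def dynamic_method(options = {})
            include My::Module(options)
          end
        end

        class TestClass
          extend HelperModule
          dynamic_method :my_method => "hello"
        end

        puts TestClass.new.hello #=> "world!"

    The same trick composes with the camelize/constantize idea: resolve the workhorse by name, then call a builder method on it with the options instead of including the bare module.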

  • Deserialization of a DataSet... deal with column name changes? how to migrate data from one column to another?

    - by Brian Kennedy
    So, we wanted to slightly generalize a couple of columns in our typed DataSet... basically dropped a foreign key constraint and then wanted to change a couple of column names to better reflect their new state. All that is easy.

    The problem is that our users may have serialized out the old version of the DataSet as XML. We want to be able to read those old XML files and deserialize them into the revised DataSet. That would seem to be a fairly common desire... but I haven't yet figured out the right thing to search the internet for.

    One possible solution would be some way to give a DataColumn an alias or alternate name, such that when it reads the old column name, it knows the data can be read into the column with the new name. I can find no support for any such thing.

    Another approach would be an "after deserialization" method of some sort: I would let it read the old column values into a normal DataColumn with that name, and then in the "after deserialization" method I would move the data from the obsolete column into the new column and delete the old columns. That would generalize to many other situations... and having such events or hooks is pretty common in ADO.NET. But I have looked for such a hook and haven't yet found it.

    If there is no "after deserialization" hook, it would seem I ought to be able to override the ReadXml or ReadXmlSerializable methods to call the base and then do my "after" stuff to fix up old data into new. But it does not appear that is possible.

    Soooo, I have to think backward compatibility with old serialized DataSets and simple data migration would be a well-solved problem... so trying to reinvent that wheel seems silly. But so far I haven't found any documentation on doing these things. Suggestions? What is best practice for this?
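
    In the absence of a built-in hook, a sketch of the manual migration route (every table and column name here is invented for illustration): read the legacy XML into an untyped DataSet, then copy rows across, mapping old column names to new:

        // Migrate a renamed column from a legacy XML file into the current
        // typed DataSet. "Orders", "OldName" and "NewName" are placeholders.
        var legacy = new DataSet();
        legacy.ReadXml("legacy.xml");

        foreach (DataRow src in legacy.Tables["Orders"].Rows)
        {
            DataRow dst = typedDataSet.Tables["Orders"].NewRow();
            dst["NewName"] = src["OldName"];  // the renamed column
            typedDataSet.Tables["Orders"].Rows.Add(dst);
        }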

  • MySQL query problem

    - by Luke
    OK, I have the following problem. I have two InnoDB tables: 'places' and 'events'. One place can have many events, but an event can be created without entering a place; in that case the event's foreign key is NULL. The simplified structure of the tables looks as follows:

        CREATE TABLE IF NOT EXISTS `events` (
          `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(255) COLLATE utf8_bin NOT NULL,
          `places_id` int(9) unsigned DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY `fk_events_places` (`places_id`),
        ) ENGINE=InnoDB;

        ALTER TABLE `events`
          ADD CONSTRAINT `events_ibfk_1` FOREIGN KEY (`places_id`)
          REFERENCES `places` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        CREATE TABLE IF NOT EXISTS `places` (
          `id` int(9) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(255) COLLATE utf8_bin NOT NULL,
          PRIMARY KEY (`id`),
        ) ENGINE=InnoDB;

    The question is: how do I construct a query which contains the name of the event and the name of the corresponding place (or no value, in case there is no place assigned)? I am able to do it with two queries, but then I am visibly separating events which have a place assigned from the ones without a place. Help really appreciated.
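
    A sketch of the standard single-query answer: a LEFT JOIN keeps the events whose places_id is NULL and simply returns NULL for the place name:

        SELECT e.name AS event_name,
               p.name AS place_name
        FROM events e
        LEFT JOIN places p ON p.id = e.places_id;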

  • Logic in the db for maintaining a points system relationship?

    - by MarcusBooster
    I'm making a little web-based game and need to determine where to put the logic that checks the integrity of some underlying data in the SQL database. Each user keeps track of points assigned to him, and points are awarded for various tasks. I keep a record of each task transaction to make sure they're not repeated, and to keep track of the value of the task at the time of completion, since an individual award level may fluctuate over time. My schema looks like this so far:

        create table player (
            player_ID serial primary key,
            player_Points int not null default 0
        );

        create table task (
            task_ID serial primary key,
            task_PointsAwarded int not null
        );

        create table task_list (
            player_ID int references player(player_ID),
            task_ID int references task(task_ID),
            when_completed timestamp default current_timestamp,
            point_value int not null, --not fk because task value may change later
            constraint pk_player_task_id primary key (player_ID, task_ID)
        );

    So player.player_Points should be the total of all his cumulative task points in task_list. Now where do I put the logic to enforce this?

    Should I do away with player.player_Points altogether and do queries every time I want to know the total score? That seems wasteful, since I'll be doing that query a lot over the course of a game.

    Or should I put a trigger in task_list that automatically updates player.player_Points? Is that too much logic to have in the database, and should I just maintain this relationship in the application? Thanks.
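
    For the trigger option, a sketch in PostgreSQL (the serial columns suggest that's the dialect), assuming the schema above; SUM(point_value) over task_list stays available as the authoritative cross-check:

        CREATE OR REPLACE FUNCTION add_task_points() RETURNS trigger AS $$
        BEGIN
            UPDATE player
               SET player_Points = player_Points + NEW.point_value
             WHERE player_ID = NEW.player_ID;
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER trg_task_list_points
        AFTER INSERT ON task_list
        FOR EACH ROW EXECUTE PROCEDURE add_task_points();

    An AFTER DELETE twin that subtracts the points completes the picture if completed tasks can ever be removed.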

  • On counting pairs of words that differ by one letter

    - by Quintofron
    Let us consider n words, each of length k. The words consist of letters over an alphabet (whose cardinality is n) with a defined order. The task is to derive an O(nk) algorithm to count the number of pairs of words that differ by one position (no matter which one exactly, as long as it's only a single position).

    For instance, in the following set of words (n = 5, k = 4):

        abcd, abdd, adcb, adcd, aecd

    there are 5 such pairs: (abcd, abdd), (abcd, adcd), (abcd, aecd), (adcb, adcd), (adcd, aecd).

    So far I've managed to find an algorithm that solves a slightly easier problem: counting the number of pairs of words that differ by one GIVEN position (the i-th). To do this, I swap the letter at the i-th position with the last letter within each word, perform a radix sort (ignoring the last position in each word - formerly the i-th position), linearly detect words whose letters at positions 1 to k-1 are the same, and finally count the number of occurrences of each letter at the last (originally i-th) position within each set of duplicates and calculate the desired pairs (the last part is simple).

    However, the algorithm above doesn't seem to be applicable to the main problem (under the O(nk) constraint), at least not without some modifications. Any idea how to solve this?
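
    To make the per-position idea concrete, a sketch of the bucketing logic in Python (hash-based for brevity; the strict O(nk) bound calls for radix sort or rolling hashes over the same buckets, and distinct input words are assumed):

        from collections import Counter

        def count_one_letter_pairs(words):
            # Words identical everywhere except position i share a bucket;
            # each pair inside a bucket differs in exactly that position.
            total = 0
            k = len(words[0])
            for i in range(k):
                buckets = Counter(w[:i] + w[i+1:] for w in words)
                total += sum(m * (m - 1) // 2 for m in buckets.values())
            return total

        print(count_one_letter_pairs(["abcd", "abdd", "adcb", "adcd", "aecd"]))  # 5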

  • Merging rows with uniqueness constraints

    - by Flambino
    I've got a little time-tracking web app (implemented in Rails 3.2.8 & MySQL). The app has several users who add their time to specific tasks on a given date. The system is set up so a user can only have one time entry (i.e. row) per task per date: if you add time twice on the same task and date, it adds time to the existing row rather than creating a new one.

    Now I'm looking to merge two tasks. In the simplest terms, merging task ID 2 into task ID 1 would take this:

        time | user_id | task_id | date
        -----+---------+---------+-----------
          10 |       1 |       1 | 2012-10-29
          15 |       2 |       1 | 2012-10-29
          10 |       1 |       2 | 2012-10-29
           5 |       3 |       2 | 2012-10-29

    and change it into this:

        time | user_id | task_id | date
        -----+---------+---------+-----------
          20 |       1 |       1 | 2012-10-29   <-- time values merged (summed)
          15 |       2 |       1 | 2012-10-29   <-- no change
           5 |       3 |       1 | 2012-10-29   <-- task_id changed (no merging necessary)

    I.e. merge by summing the time values where the given user/date/task combo would conflict.

    I figure I can use a unique constraint to do an ON DUPLICATE KEY UPDATE ... if I do an insert for every task_id = 2 entry. But that seems pretty inelegant. I've also tried to figure out a way to first update all the rows in task 1 with the summed-up times, but I can't quite work that one out. Any ideas?
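
    A sketch of one in-place way to do it (MySQL syntax; "entries" stands in for the real table name): sum into the conflicting task-1 rows, delete the task-2 rows that were just absorbed, then repoint whatever is left:

        -- 1. Fold conflicting rows' time into the task-1 row.
        UPDATE entries t1
        JOIN entries t2
          ON t2.user_id = t1.user_id AND t2.date = t1.date
         AND t1.task_id = 1 AND t2.task_id = 2
        SET t1.time = t1.time + t2.time;

        -- 2. Drop the task-2 rows that were merged in step 1.
        DELETE t2 FROM entries t2
        JOIN entries t1
          ON t1.user_id = t2.user_id AND t1.date = t2.date
         AND t1.task_id = 1 AND t2.task_id = 2;

        -- 3. Everything still on task 2 has no conflict: just move it.
        UPDATE entries SET task_id = 1 WHERE task_id = 2;

    Run inside a transaction, no intermediate state violates the unique (user, task, date) constraint.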

  • Optimal (Time paradigm) solution to check variable within boundary

    - by kumar_m_kiran
    Hi all, sorry if the question is very naive. I have to check the below condition in my code:

        0 < x < y

    i.e. code similar to

        if (x > 0 && x < y)

    The basic problem at the system level is that, currently, for every call (telecom-domain terminology), my existing code is hit many times, so performance is very critical. Now I need to add a boundary check at many locations, with a different boundary comparison at each location. At a normal level of coding, the above comparison would look harmless, but when added to my statistics module (which is dipped into many times), performance will go down.

    So I would like to know the best possible way to handle the above scenario (some kind of optimal limit-checking technique). For example, does bit comparison work better than normal comparison, or can both comparisons be evaluated in a shorter time span?

    Other info: x is an unsigned integer (which must be checked to be greater than 0 and less than y). y is an unsigned integer, non-const, and varies for every comparison. Here time is the constraint, compared to space. The language is C++. If I later need to change the type of y to float/double, would there be another way to optimize the check (i.e. will the suggested optimal technique for integers become non-optimal when y is changed to float/double)?

    Thanks in advance for any input. OS used: SUSE 10 64-bit x86_64, AIX 5.3 64-bit, HP-UX 11.1 A 64.
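
    The classic branch-free form for unsigned operands (a sketch; it assumes y > 0, which holds whenever the range can be non-empty): subtracting 1 wraps x == 0 around to UINT_MAX, so a single comparison covers both bounds:

        // Equivalent to (0 < x && x < y) for unsigned x, y with y > 0:
        // x == 0 wraps to UINT_MAX and fails; otherwise x-1 < y-1 iff x < y.
        static inline bool in_open_range(unsigned x, unsigned y) {
            return (x - 1u) < (y - 1u);
        }

    Once y becomes float or double the wrap trick no longer applies; the plain two-comparison form, which modern compilers already optimize well, is then the sensible default.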

  • Stored Procedure, 'incorrect syntax error'

    - by jacksonSD
    Attempting to figure out stored procedures, and I'm getting this error:

        Msg 156, Level 15, State 1, Line 5
        Incorrect syntax near the keyword 'Procedure'.

    The error seems to be on the if, but I can drop other existing tables with stored procedures the exact same way, so I'm not clear on why this isn't working. Can anyone shed some light?

        Begin
        Set nocount on
        Begin Try
            Create Procedure uspRecycle as
            if OBJECT_ID('Recycle') is not null
                Drop Table Recycle
            create table Recycle (
                RecycleID integer constraint PK_integer primary key,
                RecycleType nchar(10) not null,
                RecycleDescription nvarchar(100) null)
            insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                values ('1','Compost','Product is compostable, instructions included in packaging')
            insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                values ('2','Return','Product is returnable to company for 100% reuse')
            insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                values ('3','Scrap','Product is returnable and will be reclaimed and reprocessed')
            insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                values ('4','None','Product is not recycleable')
        End Try
        Begin Catch
            DECLARE @ErrMsg nvarchar(4000);
            SELECT @ErrMsg = ERROR_MESSAGE();
            Throw 50001, @ErrMsg, 1;
        End Catch

        -- checking to see if table exists and is loaded:
        If (Select count(*) from Recycle) > 1
        begin
            Print 'Recycle table created and loaded ';
            Print getdate()
        End
        set nocount off
        End
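
    A sketch of the likely fix: CREATE PROCEDURE must be the first statement in its batch, so it cannot sit inside the outer Begin / Begin Try; the TRY/CATCH belongs inside the procedure body, with the batches separated by GO:

        IF OBJECT_ID('uspRecycle') IS NOT NULL
            DROP PROCEDURE uspRecycle;
        GO
        CREATE PROCEDURE uspRecycle
        AS
        BEGIN
            SET NOCOUNT ON;
            BEGIN TRY
                IF OBJECT_ID('Recycle') IS NOT NULL
                    DROP TABLE Recycle;
                -- CREATE TABLE and the four INSERTs from the body above
            END TRY
            BEGIN CATCH
                DECLARE @ErrMsg nvarchar(4000) = ERROR_MESSAGE();
                THROW 50001, @ErrMsg, 1;
            END CATCH
        END
        GO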

  • How can I bind the same dependency to many dependents in Ninject?

    - by Mike Bantegui
    Let's say I have three interfaces: IFoo, IBar, IBaz. I also have the classes Foo, Bar, and Baz that are the respective implementations. Each implementation depends on the interface IDependency. So for Foo (and similarly for Bar and Baz) the implementation might read:

        class Foo : IFoo
        {
            private readonly IDependency Dependency;

            public Foo(IDependency dependency)
            {
                Dependency = dependency;
            }

            public void Execute()
            {
                Console.WriteLine("I'm using {0}", Dependency.Name);
            }
        }

    Let's furthermore say I have a class Container which happens to contain instances of IFoo, IBar and IBaz:

        class Container : IContainer
        {
            private readonly IFoo _Foo;
            private readonly IBar _Bar;
            private readonly IBaz _Baz;

            public Container(IFoo foo, IBar bar, IBaz baz)
            {
                _Foo = foo;
                _Bar = bar;
                _Baz = baz;
            }
        }

    In this scenario, I would like the implementation class Container to bind against IContainer, with the constraint that the IDependency that gets injected into IFoo, IBar, and IBaz be the same for all three. Done manually, I might implement it as:

        IDependency dependency = new Dependency();
        IFoo foo = new Foo(dependency);
        IBar bar = new Bar(dependency);
        IBaz baz = new Baz(dependency);
        IContainer container = new Container(foo, bar, baz);

    How can I achieve this within Ninject?

    Note: I am not asking how to do nested dependencies. My question is how I can guarantee that a given dependency is the same among a collection of objects within a materialized service. To be extremely explicit, I understand that Ninject in its standard form will generate code that is equivalent to the following:

        IContainer container = new Container(
            new Foo(new Dependency()),
            new Bar(new Dependency()),
            new Baz(new Dependency()));

    I would not like that behavior.
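
    One way that matches this requirement (a sketch, assuming the Ninject.Extensions.NamedScope package, which provides InCallScope()): everything resolved during a single kernel.Get() call then shares one Dependency instance:

        var kernel = new StandardKernel();
        kernel.Bind<IFoo>().To<Foo>();
        kernel.Bind<IBar>().To<Bar>();
        kernel.Bind<IBaz>().To<Baz>();
        kernel.Bind<IContainer>().To<Container>();

        // One Dependency per top-level Get() call, shared by the whole
        // object graph constructed for that call.
        kernel.Bind<IDependency>().To<Dependency>().InCallScope();

        IContainer container = kernel.Get<IContainer>();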
