Search Results

Search found 5380 results on 216 pages for 'primary'.


  • Use SQL to clone a tree structure represented in a database

    - by AmoebaMan17
    Given a table that represents a hierarchical tree structure, with three columns:

        ID (primary key, not auto-incrementing)
        ParentGroupID
        SomeValue

    I know the lowest node of a branch, and I want to copy that branch to a new branch with the same chain of parents, all of which also need to be cloned. I am trying to write a single SQL INSERT INTO statement that makes a copy of every row that is part of one GroupID into a new GroupID. Example beginning table:

        ID | ParentGroupID | SomeValue
        ------------------------------
        1  | -1            | a
        2  | 1             | b
        3  | 2             | c

    Goal after I run a simple INSERT INTO statement:

        ID | ParentGroupID | SomeValue
        ------------------------------
        1  | -1            | a
        2  | 1             | b
        3  | 2             | c
        4  | -1            | a-cloned
        5  | 4             | b-cloned
        6  | 5             | c-cloned

    Final tree structure:

        +--a (1)
        |  +--b (2)
        |     +--c (3)
        +--a-cloned (4)
           +--b-cloned (5)
              +--c-cloned (6)

    The IDs aren't always as neatly spaced as this demo data shows, so I can't assume that a parent's ID is one less than the current row's ID. Also, I am trying to do this in T-SQL (for Microsoft SQL Server 2005 and greater). This feels like a classic exercise that should have a pure-SQL answer, but I'm so used to procedural programming that my mind doesn't think in relational SQL.
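    One possible shape for the answer, sketched below: walk up from the known leaf with a recursive CTE (available from SQL Server 2005) and re-insert the whole branch in a single INSERT, offsetting every ID by the current maximum so the parent links stay consistent. The table name GroupTree and the @leafId value are assumptions, and taking MAX(ID) as a key source is not safe under concurrent writers.

        DECLARE @leafId int, @offset int;
        SET @leafId = 3;
        SELECT @offset = MAX(ID) FROM GroupTree;

        WITH Branch AS (
            -- start at the known lowest node...
            SELECT ID, ParentGroupID, SomeValue
            FROM GroupTree
            WHERE ID = @leafId
            UNION ALL
            -- ...and walk up through its ancestors
            SELECT g.ID, g.ParentGroupID, g.SomeValue
            FROM GroupTree g
            JOIN Branch b ON g.ID = b.ParentGroupID
        )
        INSERT INTO GroupTree (ID, ParentGroupID, SomeValue)
        SELECT ID + @offset,
               CASE WHEN ParentGroupID = -1 THEN -1
                    ELSE ParentGroupID + @offset END,
               SomeValue + '-cloned'
        FROM Branch;

    Adding a constant offset larger than any existing ID keeps the cloned keys unique and preserves the parent/child relationships without assuming contiguous IDs.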

    Read the article

  • GWT + JDO + ArrayList

    - by dvieira
    Hi, I'm getting a null ArrayList in a program I'm developing. For testing purposes I created this really small example that still has the same problem. I already tried different primary keys, but the problem persists. Any ideas or suggestions?

    1. Employee class:

        @PersistenceCapable(identityType = IdentityType.APPLICATION)
        public class Employee {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            @Extension(vendorName = "datanucleus", key = "gae.encoded-pk", value = "true")
            private String key;

            @Persistent
            private ArrayList<String> nicks;

            public Employee(ArrayList<String> nicks) {
                this.setNicks(nicks);
            }

            public String getKey() { return key; }

            public void setNicks(ArrayList<String> nicks) { this.nicks = nicks; }

            public ArrayList<String> getNicks() { return nicks; }
        }

    2. EmployeeService:

        public class BookServiceImpl extends RemoteServiceServlet implements EmployeeService {

            public void addEmployee() {
                ArrayList<String> nicks = new ArrayList<String>();
                nicks.add("name1");
                nicks.add("name2");
                Employee employee = new Employee(nicks);
                PersistenceManager pm = PMF.get().getPersistenceManager();
                try {
                    pm.makePersistent(employee);
                } finally {
                    pm.close();
                }
            }

            /**
             * @return
             * @throws NotLoggedInException
             * @gwt.typeArgs <Employee>
             */
            public Collection<Employee> getEmployees() {
                PersistenceManager pm = getPersistenceManager();
                try {
                    Query q = pm.newQuery("SELECT FROM " + Employee.class.getName());
                    Collection<Employee> list = pm.detachCopyAll((Collection<Employee>) q.execute());
                    return list;
                } finally {
                    pm.close();
                }
            }
        }
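    A likely cause, offered as a hedged guess: detachCopyAll() only copies fields that are in the current fetch plan, so a lazily loaded collection comes back null after detaching. A sketch of getEmployees() that touches the collection before the detach:

        public Collection<Employee> getEmployees() {
            PersistenceManager pm = getPersistenceManager();
            try {
                Query q = pm.newQuery("SELECT FROM " + Employee.class.getName());
                Collection<Employee> results = (Collection<Employee>) q.execute();
                for (Employee e : results) {
                    e.getNicks().size(); // force the lazy collection to load
                }
                return pm.detachCopyAll(results);
            } finally {
                pm.close();
            }
        }

    Declaring the field as List<String> rather than the concrete ArrayList is also the more conventional JDO mapping.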

    Read the article

  • How much does an InnoDB table benefit from having fixed-length rows?

    - by Philip Eve
    I know that, depending on the database storage engine in use, a performance benefit can be found if all of the rows in the table can be guaranteed to be the same length (by avoiding nullable columns and not using any VARCHAR, TEXT or BLOB columns). I'm not clear on how far this applies to InnoDB, with its funny table arrangements. Let's give an example: I have the following table

        CREATE TABLE `PlayerGameRcd` (
          `User` SMALLINT UNSIGNED NOT NULL,
          `Game` MEDIUMINT UNSIGNED NOT NULL,
          `GameResult` ENUM('Quit', 'Kicked by Vote', 'Kicked by Admin',
                            'Kicked by System', 'Finished 5th', 'Finished 4th',
                            'Finished 3rd', 'Finished 2nd', 'Finished 1st',
                            'Game Aborted', 'Playing', 'Hide')
                       NOT NULL DEFAULT 'Playing',
          `Inherited` TINYINT NOT NULL,
          `GameCounts` TINYINT NOT NULL,
          `Colour` TINYINT UNSIGNED NOT NULL,
          `Score` SMALLINT UNSIGNED NOT NULL DEFAULT 0,
          `NumLongTurns` TINYINT UNSIGNED NOT NULL DEFAULT 0,
          `Notes` MEDIUMTEXT,
          `CurrentOccupant` TINYINT UNSIGNED NOT NULL DEFAULT 0,
          PRIMARY KEY (`Game`, `User`),
          UNIQUE KEY `PGR_multi_uk` (`Game`, `CurrentOccupant`, `Colour`),
          INDEX `Stats_ind_PGR` (`GameCounts`, `GameResult`, `Score`, `User`),
          INDEX `GameList_ind_PGR` (`User`, `CurrentOccupant`, `Game`, `Colour`),
          CONSTRAINT `Constr_PlayerGameRcd_User_fk`
            FOREIGN KEY `User_fk` (`User`) REFERENCES `User` (`UserID`)
            ON DELETE CASCADE ON UPDATE CASCADE,
          CONSTRAINT `Constr_PlayerGameRcd_Game_fk`
            FOREIGN KEY `Game_fk` (`Game`) REFERENCES `Game` (`GameID`)
            ON DELETE CASCADE ON UPDATE CASCADE
        ) ENGINE=INNODB CHARACTER SET utf8 COLLATE utf8_general_ci

    The only column that is nullable is Notes, which is MEDIUMTEXT. This table presently has 33097 rows (which I appreciate is small as yet). Of these rows, only 61 have values in Notes. How much of an improvement might I see from, say, adding a new table to store the Notes column in and performing LEFT JOINs when necessary?
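    For reference, a hedged sketch of the vertical split the question proposes, with names assumed to mirror the original table; only the 61 rows that actually have notes would occupy the side table:

        CREATE TABLE `PlayerGameNotes` (
          `User` SMALLINT UNSIGNED NOT NULL,
          `Game` MEDIUMINT UNSIGNED NOT NULL,
          `Notes` MEDIUMTEXT NOT NULL,
          PRIMARY KEY (`Game`, `User`)
        ) ENGINE=INNODB CHARACTER SET utf8 COLLATE utf8_general_ci;

        -- read path when notes are needed
        SELECT r.*, n.Notes
        FROM PlayerGameRcd r
        LEFT JOIN PlayerGameNotes n
          ON n.Game = r.Game AND n.User = r.User;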

    Read the article

  • iPhone or Android apps that use SMS based authentication?

    - by JSW
    What are some iPhone or Android applications that use SMS as their primary means of user authentication? I'm interested to see such apps in action. SMS auth seems like a natural approach that is well-suited to mobile contexts. The basic workflow is: to sign up, a user provides a phone number; the app calls a backend web service, which generates a signed URL and sends it to the phone number via an SMS gateway; the user receives the SMS, clicks the link, and is thus verified and logged in. This results in a very strong user identity that is difficult to spoof yet fairly easy to use. It can be paired with a username or additional account attributes as needed for the product requirements. Despite the advantages, this does not seem to be in much use - hence my question. My initial assumption is that this is because products and users are wary of asking for / providing phone numbers, which users consider sensitive information. That said, I hope this becomes an increasingly commonplace approach.
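    To make the "signed URL" step of that workflow concrete, here is a minimal backend sketch; the secret, base URL, and parameter names are all assumptions, and the SMS gateway call itself is omitted:

        # A minimal sketch of issuing and checking a signed, expiring login link.
        import hashlib
        import hmac
        import time

        SECRET_KEY = b"server-side-secret"          # assumption: kept off the client
        BASE_URL = "https://example.com/verify"     # assumption

        def make_login_url(phone_number: str, ttl_seconds: int = 600) -> str:
            expires = int(time.time()) + ttl_seconds
            payload = f"{phone_number}:{expires}".encode()
            sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
            return f"{BASE_URL}?phone={phone_number}&exp={expires}&sig={sig}"

        def verify(phone_number: str, expires: int, sig: str) -> bool:
            if time.time() > expires:
                return False
            payload = f"{phone_number}:{expires}".encode()
            expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
            return hmac.compare_digest(expected, sig)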

    Read the article

  • Efficiently select top row for each category in the set

    - by VladV
    I need to select a top row for each category from a known set (somewhat similar to this question). The problem is how to make this query efficient on a large number of rows. For example, let's create a table that stores temperature recordings in several places.

        CREATE TABLE #t (
            placeId int,
            ts datetime,
            temp int,
            PRIMARY KEY (ts, placeId)
        )

        -- insert some sample data
        SET NOCOUNT ON
        DECLARE @n int, @ts datetime
        SELECT @n = 1000, @ts = '2000-01-01'
        WHILE (@n > 0)
        BEGIN
            INSERT INTO #t VALUES (@n % 10, @ts, @n % 37)
            IF (@n % 10 = 0) SET @ts = DATEADD(hour, 1, @ts)
            SET @n = @n - 1
        END

    Now I need to get the latest recording for each of the places 1, 2, 3. This way is efficient, but doesn't scale well (and looks dirty):

        SELECT * FROM (
            SELECT TOP 1 placeId, temp
            FROM #t
            WHERE placeId = 1
            ORDER BY ts DESC
        ) t1
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp
            FROM #t
            WHERE placeId = 2
            ORDER BY ts DESC
        ) t2
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp
            FROM #t
            WHERE placeId = 3
            ORDER BY ts DESC
        ) t3

    The following looks better but works much less efficiently (30% vs 70% according to the optimizer):

        SELECT placeId, ts, temp
        FROM (
            SELECT placeId, ts, temp,
                   ROW_NUMBER() OVER (PARTITION BY placeId ORDER BY ts DESC) rownum
            FROM #t
            WHERE placeId IN (1, 2, 3)
        ) t
        WHERE rownum = 1

    The problem is that in the latter query's execution plan, a clustered index scan is performed on #t and 300 rows are retrieved, sorted, numbered, and then filtered, leaving only 3 rows. For the former query, a single row is fetched three times. Is there a way to perform the query efficiently without lots of unions?
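    A hedged sketch using CROSS APPLY (available from SQL Server 2005), which keeps the one-probe-per-place shape of the UNION version without repeating the subquery. The small UNION here only enumerates the place ids, not the expensive subquery; with an index leading on (placeId, ts), each probe becomes a single seek:

        SELECT t.placeId, t.ts, t.temp
        FROM (SELECT 1 AS placeId
              UNION ALL SELECT 2
              UNION ALL SELECT 3) p
        CROSS APPLY (
            SELECT TOP 1 placeId, ts, temp
            FROM #t
            WHERE placeId = p.placeId
            ORDER BY ts DESC
        ) t;

    In a real schema, the derived table of ids would usually be replaced by the places table itself.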

    Read the article

  • How to insert several thousand columns into sqlite3?

    - by user291071
    Similar to my last question, but I ran into a problem. Let's say I have a simple dictionary like the one below, but big. When I try inserting a big dictionary using the method below, I get an OperationalError on c.execute(schema) for having too many columns. What should my alternative method be to populate the database's columns? Use the ALTER TABLE command and add each one individually?

        import sqlite3

        con = sqlite3.connect('simple.db')
        c = con.cursor()

        dic = {
            'x1': {'y1': 1.0, 'y2': 0.0},
            'x2': {'y1': 0.0, 'y2': 2.0, 'joe bla': 1.5},
            'x3': {'y2': 2.0, 'y3 45 etc': 1.5},
        }

        # 1. Find the unique column names.
        columns = set()
        for _, cols in dic.items():
            for key, _ in cols.items():
                columns.add(key)

        # 2. Create the schema.
        col_defs = [
            # Start with the column for our key name
            '"row_name" VARCHAR(2) NOT NULL PRIMARY KEY'
        ]
        for column in columns:
            col_defs.append('"%s" REAL NULL' % column)
        schema = "CREATE TABLE simple (%s);" % ",".join(col_defs)
        c.execute(schema)

        # 3. Loop through each row
        for row_name, cols in dic.items():
            # Compile the data we have for this row.
            col_names = cols.keys()
            col_values = [str(val) for val in cols.values()]

            # Insert it.
            sql = 'INSERT INTO simple ("row_name", "%s") VALUES ("%s", "%s");' % (
                '","'.join(col_names),
                row_name,
                '","'.join(col_values)
            )
            c.execute(sql)
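    One hedged alternative: rather than one column per key (which runs into SQLite's column limit), store the data row-wise as entity/attribute/value triples, which handles any number of keys:

        # A sketch of the same data stored as (row, column, value) triples.
        import sqlite3

        dic = {
            'x1': {'y1': 1.0, 'y2': 0.0},
            'x2': {'y1': 0.0, 'y2': 2.0, 'joe bla': 1.5},
            'x3': {'y2': 2.0, 'y3 45 etc': 1.5},
        }

        con = sqlite3.connect('simple.db')
        c = con.cursor()
        c.execute("CREATE TABLE simple ("
                  "row_name TEXT, col_name TEXT, value REAL, "
                  "PRIMARY KEY (row_name, col_name))")
        c.executemany(
            "INSERT INTO simple VALUES (?, ?, ?)",
            [(row, col, val)
             for row, cols in dic.items()
             for col, val in cols.items()],
        )
        con.commit()

    Fetching one logical "row" then becomes a SELECT by row_name, pivoted in Python if needed.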

    Read the article

  • @ManyToMany Duplicate Entry Exception

    - by zp26
    I have mapped a bidirectional many-to-many association between the entities Course and Trainee in the following manner:

        Course {
            ...
            private Collection<Trainee> students;
            ...
            @ManyToMany(targetEntity = lesson.domain.Trainee.class,
                        cascade = CascadeType.ALL, fetch = FetchType.EAGER)
            @JoinTable(name = "COURSE_TRAINEE",
                       joinColumns = @JoinColumn(name = "COURSE_ID"),
                       inverseJoinColumns = @JoinColumn(name = "TRAINEE_ID"))
            @CollectionOfElements
            public Collection<Trainee> getStudents() {
                return students;
            }
            ...
        }

        Trainee {
            ...
            private Collection<Course> authCourses;
            ...
            @ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER,
                        mappedBy = "students",
                        targetEntity = lesson.domain.Course.class)
            @CollectionOfElements
            public Collection<Course> getAuthCourses() {
                return authCourses;
            }
            ...
        }

    Instead of creating a table whose primary key is made of the two foreign keys (imported from the tables of the two related entities), the system generates the table "COURSE_TRAINEE" with a different schema. I am working on MySQL 5.1 and my app server is JBoss 5.1. Does anyone see why?
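    A hedged guess at the culprit: @CollectionOfElements is Hibernate's annotation for collections of non-entity values, and combining it with @ManyToMany can make Hibernate treat the join table as an element collection with its own surrogate schema. A sketch of the owning side with it removed:

        @ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
        @JoinTable(name = "COURSE_TRAINEE",
                   joinColumns = @JoinColumn(name = "COURSE_ID"),
                   inverseJoinColumns = @JoinColumn(name = "TRAINEE_ID"))
        public Collection<Trainee> getStudents() {
            return students;
        }

    The inverse side would likewise keep only @ManyToMany(mappedBy = "students", ...), so the join table's composite key comes from the two foreign keys as intended.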

    Read the article

  • How to increment a string value at run time without using dr.Read() and store it in OleDb

    - by sameer
    Here is my code. Everything is working fine; the only problem is with the ReadData() method, in which I want the string value to increment, i.e. AM0001, AM0002, AM0003, etc. It increments only once, to AM0001; the second time, the same value, AM0001, is returned again. Because of this I get an error from OleDb, since AM0001 is a primary key field.

        class Jewellery : Connectionstr
        {
            string lmcode = null;
            public string LM_code
            {
                get { return lmcode; }
                set { lmcode = ReadData(); }
            }

            string mname;
            public string M_Name
            {
                get { return mname; }
                set { mname = value; }
            }

            string desc;
            public string Desc
            {
                get { return desc; }
                set { desc = value; }
            }

            public string ReadData()
            {
                string jid = string.Empty;
                string displayString = string.Empty;
                String query = "select max(LM_code) from Master_Accounts";
                Datamanager.RunExecuteReader(Constr, query);
                if (string.IsNullOrEmpty(jid))
                {
                    jid = "AM0000"; // This string value has to increment every time,
                                    // but it increments only once.
                }
                int len = jid.Length;
                string split = jid.Substring(2, len - 2);
                int num = Convert.ToInt32(split);
                num++;
                displayString = jid.Substring(0, 2) + num.ToString("0000");
                return displayString;
            }

            public void add()
            {
                String query = "insert into Master_Accounts values ('" + LM_code + "','"
                    + M_Name + "','" + Desc + "')";
                Datamanager.RunExecuteNonQuery(Constr, query);
            }
        }

    Any help will be appreciated.
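    The likely bug, hedged as a guess: the result of the MAX(LM_code) query is never assigned to jid, so jid is always empty and every call starts from "AM0000". A sketch of ReadData() that captures the value, using a plain ExecuteScalar since the return type of the Datamanager helper isn't shown:

        public string ReadData()
        {
            string jid = "AM0000";
            using (var con = new System.Data.OleDb.OleDbConnection(Constr))
            using (var cmd = new System.Data.OleDb.OleDbCommand(
                       "select max(LM_code) from Master_Accounts", con))
            {
                con.Open();
                object result = cmd.ExecuteScalar();
                if (result != null && result != DBNull.Value)
                    jid = result.ToString();   // e.g. "AM0007"
            }

            int num = Convert.ToInt32(jid.Substring(2)) + 1;
            return jid.Substring(0, 2) + num.ToString("0000");
        }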

    Read the article

  • Beginner PHP: I can't insert data into MYSQL database

    - by Victor
    I'm learning PHP right now and I'm trying to insert data into a MySQL database called "pumpl2". The table is set up like this:

        create table product
        (
            productid int unsigned not null auto_increment primary key,
            price int(9) not null,
            value int(9) not null,
            description text
        );

    I have a form and want to insert the fields from the form into the database. Here is what the PHP file looks like:

        <?php
        // create short variable names
        $price = $_POST['price'];
        $value = $_POST['value'];
        $description = $_POST['description'];

        if (!$price || !$value || !$description) {
            echo "You have not entered all the required details.<br />"
                ."Please go back and try again.";
            exit;
        }

        @ $db = new mysqli('localhost', 'pumpl', '********', 'pumpl2');

        if (mysqli_connect_errno()) {
            echo "Error: Could not connect to database. Please try again later.";
            exit;
        }

        $query = "insert into pumpl2 values ('".$price."', '".$value."', '".$description."')";
        $result = $db->query($query);

        if ($result) {
            echo $db->affected_rows." product inserted into database.";
        } else {
            echo "An error has occurred. The item was not added.";
        }

        $db->close();
        ?>

    When I submit the form, I get the error message "An error has occurred. The item was not added." Does anyone know what the problem is? Thank you!
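    A hedged diagnosis: the INSERT targets pumpl2, which is the database name, not the table (product), and the VALUES list doesn't match the four-column table, so the columns should be named to skip the auto_increment productid:

        $query = "insert into product (price, value, description)
                  values ('".$price."', '".$value."', '".$description."')";

    Echoing $db->error instead of a generic message would have pointed straight at this. A prepared statement would also avoid injecting raw $_POST values:

        $stmt = $db->prepare("insert into product (price, value, description) values (?, ?, ?)");
        $stmt->bind_param('iis', $price, $value, $description);
        $stmt->execute();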

    Read the article

  • Is there a way to specify a per-host deploy_to path with Capistrano?

    - by Chad Johnson
    I have searched and searched and asked a question already, and have not received a clear answer. I have the following deploy script (snippet):

        set :application, "testapplication"
        set :repository, "ssh://domain.com//srv/hg/#{application}"
        set :scm, :mercurial
        set :deploy_to, "/srv/www/#{application}"

        role :web, "domain1.com", "domain2.com"
        role :app, "domain1.com", "domain2.com"
        role :db, "domain1.com", :primary => true, :norelease => true
        role :db, "domain2.com", :norelease => true

    As you can see, I have set deploy_to to a specific path, and I have also specified multiple web servers. However, each web server should have a different deployment path. I want to be able to run "cap deploy" and deploy to all hosts in one shot. I am NOT trying to deploy to staging and then to production; this is all production. My question is: how exactly do I specify a path per server? I have read the "Roles" documentation for Capistrano, and this is unclear. Can someone please post an example deploy file? Am I thinking about this wrong? Please, someone help.
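    One hedged possibility, for Capistrano 2: paths may contain the $CAPISTRANO:HOST$ placeholder, which Capistrano substitutes with each host's name as commands run, giving every server its own directory under a common base:

        set :deploy_to, "/srv/www/$CAPISTRANO:HOST$/#{application}"

    This only helps when the per-host path can be derived from the hostname; fully arbitrary per-host paths usually mean separate invocations or a custom task.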

    Read the article

  • Embedded scripting engine in .Net app

    - by Nate
    I am looking to replace an old control being used for scripting an application. The control used to be called SAX Basic, but is now called WinWrap. It provides us with two primary functions:

    1) It's a scripting engine (VB)
    2) It has a GUI for developing and debugging scripts that get run in the hosting application.

    The first feature it provides is actually pretty easy to replace. There are so many great methods of running just about any kind of code at runtime that it's almost a non-issue. Just about any language targeting the .NET runtime will work for us. We've looked at running C#, PowerShell, VB.Net, IronPython, etc. I've also taken a brief look at Lua and F#, but honestly the language isn't the biggest barrier here.

    Now, for the hard part that seems to keep getting me stuck: we want a code editor and debugger. Something simple, not unlike PowerShell's ISE, would be fine, just as long as a file can be created, saved, debugged and executed. I'm currently looking into Visual Studio 2010 Shell (Isolated) and I'm also looking at the feasibility of embedding PowerShell ISE in my application. Are there any other editors I could embed/use in my application? Purchasing a product is not out of the question. It comes down to a combination of ease of use, how well it meets our needs, and how simple deployment and licensing is for developers. Thanks for the pointers.
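    For the first function (the engine), a minimal sketch of hosting IronPython, one of the options the post mentions; the editor/debugger GUI is the genuinely hard part and is not shown:

        using IronPython.Hosting;
        using Microsoft.Scripting.Hosting;

        class ScriptHost
        {
            static void Main()
            {
                ScriptEngine engine = Python.CreateEngine();
                ScriptScope scope = engine.CreateScope();

                // Expose host-application objects to user scripts.
                scope.SetVariable("app_version", "1.0");

                engine.Execute("print('running inside host, v' + app_version)", scope);
            }
        }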

    Read the article

  • Is it possible to return a list of numbers from a Sybase function?

    - by ps_rs4
    I'm trying to overcome a very serious performance issue in which Sybase refuses to use the primary key index on a large table because one of the required fields is specified indirectly through another table - or, in other words:

        SELECT ... FROM BIGTABLE WHERE KFIELD = 123

    runs in milliseconds, but

        SELECT ... FROM BIGTABLE, LTLTBL
        WHERE KFIELD = LTLTBL.LOOKUP
        AND LTLTBL.UNIQUEID = 'STRINGREPOF123'

    takes 30-40 seconds. I've managed to work around this first problem by using a function that basically lets me do this:

        SELECT ... FROM BIGTABLE WHERE KFIELD = MYFUNC('STRINGREPOF123')

    which also runs in milliseconds. The problem, however, is that this approach only works when there is a single value returned by MYFUNC, but I have some cases where it may return 2 or 3 values. I know that the SQL

        SELECT ... FROM BIGTABLE WHERE KFIELD IN (123, 456, 789)

    also returns in milliseconds, so I'd like to have a function that returns a list of possible values rather than just a single one - is this possible? Sadly the application is running on Sybase ASA 9. Yes, I know it is old and is scheduled to be refreshed, but there's nothing I can do about that now, so I need logic that will work with this version of the DB. Thanks in advance for any assistance on this matter.
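    A hedged sketch, worth checking against the ASA 9 documentation: SQL Anywhere procedures can return result sets (declared with a RESULT clause), and a procedure's result set can be consumed where a row source is expected, unlike a scalar function. Names below mirror the question:

        CREATE PROCEDURE LookupKeys(IN uid VARCHAR(50))
        RESULT (kfield INT)
        BEGIN
            SELECT LOOKUP FROM LTLTBL WHERE UNIQUEID = uid;
        END;

        SELECT ...
        FROM BIGTABLE
        WHERE KFIELD IN (SELECT kfield FROM LookupKeys('STRINGREPOF123'));

    Whether the optimizer keeps the fast index seek with this shape is version-dependent, so the plan should be verified the same way the MYFUNC workaround was.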

    Read the article

  • Java Oracle syntax error?

    - by murali
    Hi, I am using the following code for uploading keywords and counts to the Excel file. I have keyword_id as the primary key, and for that I wrote the sequence expression. There are two columns in the Excel file: 1. keyword, 2. count. My code is:

        while (rs.next()) {
            System.out.println("inside ");
            String keyword = rs.getString(1);
            int count = rs.getInt(2);
            System.out.println("insert into SEARCHABLE_KEYWORDS values ('" + keyword + "','" + count + "')");
            stmtdb.execute("insert into SEARCHABLE_KEYWORDS (keyword_id,keyword,count) values ('"
                + "select Searchable_Keywords_sequence.nextval from dual"
                + "','" + keyword + "','" + count + "')");
            System.out.println(keyword + " " + keyword + " count " + count);
        }

    But I am getting the following error:

        java.sql.SQLException: [Microsoft][ODBC Excel Driver] Too few parameters. Expected 1.
            at sun.jdbc.odbc.JdbcOdbc.createSQLException(JdbcOdbc.java:6998)
            at sun.jdbc.odbc.JdbcOdbc.standardError(JdbcOdbc.java:7155)
            at sun.jdbc.odbc.JdbcOdbc.SQLExecDirect(JdbcOdbc.java:3151)
            at sun.jdbc.odbc.JdbcOdbcStatement.execute(JdbcOdbcStatement.java:378)
            at sun.jdbc.odbc.JdbcOdbcStatement.executeQuery(JdbcOdbcStatement.java:284)
            at keywordsreader.main(keywordsreader.java:42)

    Please help me solve this problem.
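    Two things stand out, hedged as guesses: the sequence subselect is wrapped in single quotes, so it would be sent as a literal string rather than evaluated; and the stack trace names the ODBC Excel driver, which suggests stmtdb is executing against the Excel connection rather than the Oracle one. A sketch of the corrected insert, run on the Oracle connection:

        stmtdb.execute(
            "insert into SEARCHABLE_KEYWORDS (keyword_id, keyword, count) "
            + "values (Searchable_Keywords_sequence.nextval, '"
            + keyword + "', " + count + ")");

    A PreparedStatement with ? placeholders would also sidestep quoting problems in the keyword text.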

    Read the article

  • JPA 2.0 How to persist in order

    - by parhs
    Hello, I have two entities, Exam and Exam_Normal.

    Exam:

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        private String name;
        private String codeName;

        @Enumerated(EnumType.STRING)
        private ExamType examType;

        @ManyToOne
        private Category category;

        @OneToMany(mappedBy = "id", cascade = CascadeType.PERSIST)
        @OrderBy("id")
        private List<Exam_Normal> exam_Normal;

    Exam_Normal:

        @Id
        private Long item;

        @Id
        @ManyToOne
        private Exam id;

        @Enumerated(EnumType.STRING)
        private Gender gender;

        private Integer age_month_from;
        private Integer age_month_to;

    The problem is that if I put a list of Exam_Normal into an Exam and try to persist the Exam, I get an error: it tries to persist Exam_Normal first, and it can't, because the primary key of Exam is missing (Exam isn't persisted yet). Is there any way to define the order? Or should I set the list to null, persist, and then set the list again? Thanks :)
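    A hedged workaround along the lines the question already suggests: persist the parent first and flush, so its IDENTITY key exists before the children go in. The setId() and setExam_Normal() calls are assumed setters:

        em.getTransaction().begin();

        Exam exam = new Exam();
        em.persist(exam);
        em.flush();                    // forces the IDENTITY id to be assigned

        for (Exam_Normal n : normals) {
            n.setId(exam);             // link child to the now-persistent parent
            em.persist(n);
        }
        exam.setExam_Normal(normals);

        em.getTransaction().commit();

    Note also that mappedBy must name the child-side field (here literally "id", the Exam reference), which is easy to misread; giving that field a clearer name would make the mapping less fragile.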

    Read the article

  • Inline function v. Macro in C -- What's the Overhead (Memory/Speed)?

    - by Jason R. Mick
    I searched Stack Overflow for the pros/cons of function-like macros vs. inline functions. I found the following discussion: Pros and Cons of Different macro function / inline methods in C ...but it didn't answer my primary burning question. Namely, what is the overhead in C of using a macro function (with variables, possibly other function calls) vs. an inline function, in terms of memory usage and execution speed? Are there any compiler-dependent differences in overhead? I have both icc and gcc at my disposal. The code snippet I'm modularizing is:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
        double RepulsiveTerm = AttractiveTerm * AttractiveTerm;
        EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    My reason for turning it into an inline function/macro is so I can drop it into a C file and then conditionally compile other similar, but slightly different, functions/macros, e.g.:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
        double RepulsiveTerm = pow(SigmaSquared/RadialDistanceSquared,9);
        EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    (note the difference in the second line...) This function is central to my code and gets called thousands of times per step in my program, and my program performs millions of steps. Thus I want the LEAST overhead possible, hence why I'm spending time worrying about the overhead of inlining vs. transforming the code into a macro. Based on the prior discussion I already realize other pros/cons (type independence and the errors resulting from it) of macros... but what I want to know most, and don't currently know, is the PERFORMANCE. I know some of you C veterans will have great insight for me!!
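    For comparison, a sketch of the snippet as a static inline function; with optimization on (-O2 and above), gcc and icc will normally inline this and emit the same code as the macro, without the macro's multiple-evaluation hazards:

        #include <math.h>

        static inline double lj_contribution(double sigma_sq, double r_sq,
                                             double epsilon)
        {
            double attractive = pow(sigma_sq / r_sq, 3);  /* (sigma^2/r^2)^3 */
            double repulsive = attractive * attractive;   /* squared: the ^6 term */
            return 4.0 * epsilon * (repulsive - attractive);
        }

    Measuring both versions with the real compilers and flags is the only definitive answer, but with modern gcc/icc the generated code is typically identical.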

    Read the article

  • Entering Content Into A MySQL Database Via A Form

    - by ThatMacLad
    I've been working on creating a form that submits content into my database, but I decided that rather than using a drop-down menu to select the date, I'd rather use a textfield. I was wondering what changes I will need to make to my table creation file.

        <?php
        mysql_connect('localhost', 'root', 'root');
        mysql_select_db('tmlblog');

        $sql = "CREATE TABLE php_blog (
            id int(20) NOT NULL auto_increment,
            timestamp int(20) NOT NULL,
            title varchar(255) NOT NULL,
            entry longtext NOT NULL,
            PRIMARY KEY (id)
        )";

        $result = mysql_query($sql) or print ("Can't create the table 'php_blog' in the database.<br />"
            . $sql . "<br />" . mysql_error());

        mysql_close();

        if ($result != false) {
            echo "Table 'php_blog' was successfully created.";
        }
        ?>

    It's the timestamp that I need to enter via a textfield; the title and entry are already entered that way.
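    A hedged sketch of one way to do it: store the submitted date in a DATETIME column rather than an int(20) Unix timestamp, converting the textfield's string with strtotime() when inserting (the rename to post_date is just an assumption for clarity):

        $sql = "CREATE TABLE php_blog (
            id int(20) NOT NULL auto_increment,
            post_date datetime NOT NULL,
            title varchar(255) NOT NULL,
            entry longtext NOT NULL,
            PRIMARY KEY (id)
        )";

        // later, when handling the form submission:
        $post_date = date('Y-m-d H:i:s', strtotime($_POST['date']));

    Keeping the int column and inserting strtotime($_POST['date']) directly would also work.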

    Read the article

  • FluentNHibernate error -- "Invalid object name"

    - by goober
    I'm attempting to do the simplest of mappings with FluentNHibernate and SQL Server 2005. Basically, I have a database table called "sv_Categories". I'd like to add a category, setting the ID automatically and adding the user ID and title supplied.

    Database table layout:

        CategoryID -- int -- not null, primary key, auto-incrementing
        UserID -- uniqueidentifier -- not null
        Title -- varchar(50) -- not null

    Simple. My SessionFactory code (which works, as far as I can tell):

        _SessionFactory = Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2005
                .ConnectionString(c => c.FromConnectionStringWithKey("SVTest")))
            .Mappings(x => x.FluentMappings.AddFromAssemblyOf<CategoryMap>())
            .BuildSessionFactory();

    My ClassMap code:

        public class CategoryMap : ClassMap<Category>
        {
            public CategoryMap()
            {
                Id(x => x.ID).Column("CategoryID").Unique();
                Map(x => x.Title).Column("Title").Not.Nullable();
                Map(x => x.UserID).Column("UserID").Not.Nullable();
            }
        }

    My Class code:

        public class Category
        {
            public virtual int ID { get; private set; }
            public virtual string Title { get; set; }
            public virtual Guid UserID { get; set; }

            public Category()
            {
                // do nothing
            }
        }

    And the page where I save the object:

        public void Add(Category catToAdd)
        {
            using (ISession session = SessionProvider.GetSession())
            {
                using (ITransaction Transaction = session.BeginTransaction())
                {
                    session.Save(catToAdd);
                    Transaction.Commit();
                }
            }
        }

    I receive the error:

        Invalid object name 'Category'.
        Description: An unhandled exception occurred during the execution of the
        current web request. Please review the stack trace for more information
        about the error and where it originated in the code.
        Exception Details: System.Data.SqlClient.SqlException: Invalid object name 'Category'.

    I think it might be that I haven't told the CategoryMap class to use the "sv_Categories" table, but I'm not sure how to do that. Any help would be appreciated. Thanks!
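    The guess is likely right: without an explicit table name, Fluent NHibernate maps the Category class to a table named "Category". A hedged sketch of the fix, adding Table() and letting the database generate the key:

        public class CategoryMap : ClassMap<Category>
        {
            public CategoryMap()
            {
                Table("sv_Categories");
                Id(x => x.ID).Column("CategoryID").GeneratedBy.Identity();
                Map(x => x.Title).Column("Title").Not.Nullable();
                Map(x => x.UserID).Column("UserID").Not.Nullable();
            }
        }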

    Read the article

  • How can I make sure only a single record is inserted when multiple Apache threads are trying to access the database?

    - by Ed Gl
    I have a web service (an XML-RPC service, to be exact) that handles, among other things, writing data into the database. Here's the scenario: I often receive requests to either update or insert a record. What I do is this:

    1. If the record already exists, append to the record,
    2. If not, create a new record.

    The issue is that at certain times I get a 'burst' of requests, which spawns several Apache threads to handle them. These bursts come within less than a millisecond of each other. I then have several threads performing #1 and #2. Often two threads will 'pass' #1 and actually create two duplicate records (except for the primary key). I'd like to use some locking mechanism to prevent other threads from accessing the table while the first thread finishes its work. I'm just afraid of using it, because if something goes wrong I don't want to leave the table locked. Is there a solid way of handling this? I'm open to using locks if I can do it properly. Thanks.
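    One lock-free pattern, hedged since the post doesn't name the database: if it is MySQL, declare a unique key on whatever naturally identifies the record and let the database arbitrate the race, so both threads issue the same statement and exactly one row results. Table and column names below are hypothetical:

        ALTER TABLE records ADD UNIQUE KEY uq_external (external_id);

        INSERT INTO records (external_id, payload)
        VALUES ('abc-123', 'first chunk')
        ON DUPLICATE KEY UPDATE
            payload = CONCAT(payload, ', appended chunk');

    Because no table-level lock is ever taken, a crashed thread cannot leave the table locked.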

    Read the article

  • Mysql - Help me alter this query to apply AND logic instead of OR in searching?

    - by sandeepan-nath
    First, execute these table and data dumps:

        CREATE TABLE IF NOT EXISTS `Tags` (
          `id_tag` int(10) unsigned NOT NULL auto_increment,
          `tag` varchar(255) default NULL,
          PRIMARY KEY (`id_tag`),
          UNIQUE KEY `tag` (`tag`),
          KEY `id_tag` (`id_tag`),
          KEY `tag_2` (`tag`),
          KEY `tag_3` (`tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=18;

        INSERT INTO `Tags` (`id_tag`, `tag`) VALUES
        (1, 'key1'),
        (2, 'key2');

        CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` (
          `id_tag` int(10) unsigned NOT NULL default '0',
          `id_tutor` int(10) default NULL,
          KEY `Tutors_Tag_Relations` (`id_tag`),
          KEY `id_tutor` (`id_tutor`),
          KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`) VALUES
        (1, 1),
        (2, 1);

    The following query finds all the tutors from the Tutors_Tag_Relations table which have a reference to at least one of the terms "key1" or "key2":

        SELECT td.*
        FROM Tutors_Tag_Relations AS td
        INNER JOIN Tags AS t ON t.id_tag = td.id_tag
        WHERE t.tag LIKE "%key1%" OR t.tag LIKE "%key2%"
        GROUP BY td.id_tutor
        LIMIT 10

    Please help me modify this query so that it returns all the tutors from the Tutors_Tag_Relations table which have references to both the terms "key1" and "key2" (AND logic instead of OR logic). Please suggest an optimized query considering a huge number of data records (the query should NOT individually fetch two sets of tutors matching each keyword and then find the intersection).
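    One standard shape for this, offered as a hedged sketch: keep the OR in the WHERE clause to narrow the rows, then demand in HAVING that each tutor's group matched both patterns. MySQL evaluates LIKE as 0/1, so SUM counts the matches, and the whole thing stays a single pass over the joined rows:

        SELECT td.*
        FROM Tutors_Tag_Relations AS td
        INNER JOIN Tags AS t ON t.id_tag = td.id_tag
        WHERE t.tag LIKE "%key1%" OR t.tag LIKE "%key2%"
        GROUP BY td.id_tutor
        HAVING SUM(t.tag LIKE "%key1%") > 0
           AND SUM(t.tag LIKE "%key2%") > 0
        LIMIT 10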

    Read the article

  • Questions about openid and dotnetauthentication

    - by chobo2
    Hi, I am looking into OpenID and the dotnetauthentication library. However, I still have some outstanding questions:

    1. Is the ID that comes back unique for each user? Can I store this ID in a database as the user ID (currently this field is a primary key and unique identifier)?

    2. I read that you can request information such as an email address, but the provider may not give it to you. What happens if you need this information? I think it kinda sucks if I have to pop up another form right away and ask for their email address and whatever other fields I need. It sort of defeats the purpose a bit, as I always considered a benefit of OpenID to be that you don't have to fill out registration forms.

    3. Is it better to offer only some predefined choices (Google, Yahoo, OpenID, Facebook) than to let users type in their own provider (i.e. a field to let them type in a URL)? I am thinking of this because it goes back to point number 2: if they type in a provider that does not give me the information that I need, I am then stuck.

    4. How do you log a person out? Do you just kill the forms authentication ticket?
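    On question 4, for ASP.NET forms authentication the logout really is just clearing the ticket; a minimal sketch, assuming it runs inside a WebForms page or handler:

        public void LogOut()
        {
            FormsAuthentication.SignOut();   // removes the forms-auth cookie/ticket
            Session.Abandon();               // optionally drop session state too
            Response.Redirect(FormsAuthentication.LoginUrl);
        }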

    Read the article

  • NHibernate class referencing discriminator based subclass

    - by Rich
    I have a generic class Lookup which contains code/value properties. The table's primary key is (category, code). There are subclasses for each category of lookup, and I've set the discriminator column in the base class and its value in the subclasses. See the example below (only key pieces shown):

        public class Lookup
        {
            public string Category;
            public string Code;
            public string Description;
        }

        public class LookupClassMap
        {
            public LookupClassMap()
            {
                CompositeId()
                    .KeyProperty(x => x.Category, "CATEGORY_ID")
                    .KeyProperty(x => x.Code, "CODE_ID");
                DiscriminateSubclassesBasedOnColumn("CATEGORY_ID");
            }
        }

        public class MaritalStatus : Lookup {}

        public class MaritalStatusClassMap : SubclassMap<MaritalStatus>
        {
            public MaritalStatusClassMap()
            {
                DiscriminatorValue(13);
            }
        }

    This all works. Here's the problem: when a class has a property of type MaritalStatus, I create a reference based on the contained code ID column ("MARITAL_STATUS_CODE_ID"). NHibernate doesn't like it, because I didn't map both primary key columns (category ID and code ID). But with the reference being of type MaritalStatus, NHibernate should already know what the value of the category ID is going to be, because of the discriminator value. What am I missing?

    Read the article

  • What is the correct approach to using GWT with persistent objects?

    - by dankilman
    Hi, I am currently working on a simple web application on Google App Engine using GWT. It should be noted that this is my first attempt at such a task. I have run into the following problem/dilemma:

    I have a simple class (getters/setters and nothing more; for the sake of clarity I will refer to this class as DataHolder) and I want to make it persistent. To do so I have used JDO, which required me to add some annotations and, more specifically, add a Key field to be used as the primary key. The problem is that using the Key class requires me to import com.google.appengine.api.datastore.Key, which is OK on the server side, but then I can't use DataHolder on the client side, because GWT doesn't allow it (as far as I know).

    So I have created a sister class, ClientDataHolder, which is almost identical, though it doesn't have all the JDO annotations nor the Key field. Now, this actually works, but it feels like I'm doing something wrong. This approach requires maintaining two separate classes for each entity I wish to have. So my question is: is there a better way of doing this? Thank you.
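    A hedged alternative described in Google's App Engine documentation: keep a single class but store the key as an encoded String, so no datastore types leak into the client. Note the caveat that the GWT compiler still needs the JDO annotation sources on its path for the shared class to compile:

        @PersistenceCapable(identityType = IdentityType.APPLICATION, detachable = "true")
        public class DataHolder implements java.io.Serializable {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            @Extension(vendorName = "datanucleus", key = "gae.encoded-pk", value = "true")
            private String key;   // encoded datastore Key; a plain String on the client

            // getters/setters as before
        }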

    Read the article

  • Create new table with Wordpress API

    - by Fire G
    I'm trying to create a new plugin to track popular posts based on views, and I have everything done and ready to go, but I can't seem to create a new table using the WordPress API (I can do it with standard PHP or with phpMyAdmin, but I want this plugin to be self-sufficient). I've tried several ways ($wpdb->query, $wpdb->get_results, dbDelta), but none of them will create the new table.

        function create_table() {
            global $wpdb;
            $tablename = $wpdb->prefix . 'popular_by_views';
            $ppbv_table = $wpdb->get_results("SHOW TABLES LIKE '" . $tablename . "'", ARRAY_N);

            if (is_null($ppbv_table)) {
                $create_table_sql = "CREATE TABLE '" . $tablename . "' (
                    'id' BIGINT(50) NOT NULL AUTO_INCREMENT,
                    'url' VARCHAR(255) NOT NULL,
                    'views' BIGINT(50) NOT NULL,
                    PRIMARY KEY ('id'),
                    UNIQUE ('id')
                );";

                $wpdb->show_errors();
                $wpdb->flush();

                if (is_null($wpdb->get_results("SHOW TABLES LIKE '" . $tablename . "'", ARRAY_N)))
                    echo 'crap, the SQL failed.';
            } else
                echo 'table already exists, nothing left to do.';
        }
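    A few likely culprits, hedged: MySQL identifiers take backticks, not single quotes, so the CREATE TABLE statement as written is a syntax error; $wpdb->get_results() returns an empty array (not null) when nothing matches, so the is_null() checks never behave as intended; and nothing in the function ever executes $create_table_sql. A sketch of the corrected core:

        $tablename = $wpdb->prefix . 'popular_by_views';
        $exists = $wpdb->get_results("SHOW TABLES LIKE '{$tablename}'", ARRAY_N);

        if (empty($exists)) {
            $wpdb->query("CREATE TABLE `{$tablename}` (
                `id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
                `url` VARCHAR(255) NOT NULL,
                `views` BIGINT(20) UNSIGNED NOT NULL DEFAULT 0,
                PRIMARY KEY (`id`)
            )");
        }

    The redundant UNIQUE('id') is dropped, since the primary key already enforces uniqueness.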

    Read the article

  • Extracting DCT coefficients from encoded images and video

    - by misha
    Is there a way to easily extract the DCT coefficients (and quantization parameters) from encoded images and video? Any decoder software must be using them to decode block-DCT-encoded images and video, so I'm pretty sure the decoder knows what they are. Is there a way to expose them to whoever is using the decoder?

    I'm implementing some video quality assessment algorithms that work directly in the DCT domain. Currently, the majority of my code uses OpenCV, so it would be great if anyone knows of a solution using that framework. I don't mind using other libraries (perhaps libjpeg, but that seems to be for still images only), but my primary concern is to do as little format-specific work as possible (I don't want to reinvent the wheel and write my own decoders). I want to be able to open any video/image (H.264, MPEG, JPEG, etc.) that OpenCV can open, and if it's block-DCT-encoded, to get the DCT coefficients.

    In the worst case, I know that I can write my own block DCT code, run the decompressed frames/images through it, and then I'd be back in the DCT domain. That's hardly an elegant solution, and I hope I can do better. Presently, I use the fairly common OpenCV boilerplate to open images:

        IplImage *image = cvLoadImage(filename);
        // Run quality assessment metric

    The code I'm using for video is equally trivial:

        CvCapture *capture = cvCaptureFromAVI(filename);
        while (cvGrabFrame(capture)) {
            IplImage *frame = cvRetrieveFrame(capture);
            // Run quality assessment metric on frame
        }
        cvReleaseCapture(&capture);

    In both cases, I get a 3-channel IplImage in BGR format. Is there any way I can get the DCT coefficients as well?
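    For the still-image half, a hedged sketch with libjpeg, which does expose the quantized coefficients through jpeg_read_coefficients(); video codecs generally offer no comparable public API, so a modified decoder is the usual route there:

        #include <stdio.h>
        #include <jpeglib.h>

        void read_dct(const char *filename)
        {
            struct jpeg_decompress_struct cinfo;
            struct jpeg_error_mgr jerr;
            FILE *f = fopen(filename, "rb");

            cinfo.err = jpeg_std_error(&jerr);
            jpeg_create_decompress(&cinfo);
            jpeg_stdio_src(&cinfo, f);
            jpeg_read_header(&cinfo, TRUE);

            /* coef_arrays[c] holds component c's quantized DCT blocks;
               the quantization tables live in cinfo.quant_tbl_ptrs */
            jvirt_barray_ptr *coef_arrays = jpeg_read_coefficients(&cinfo);
            (void)coef_arrays;  /* real code would walk the blocks here */

            jpeg_finish_decompress(&cinfo);
            jpeg_destroy_decompress(&cinfo);
            fclose(f);
        }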

    Read the article

  • VB.NET Update Access Database with DataTable

    - by sinDizzy
    I've been perusing some help forums and some help books but can't seem to get my head wrapped around this. My task is to read data from two text files and then load that data into an existing MS Access 2007 database. So here is what I'm trying to do:

    1. Read data from the first text file and, for every line of data, add a row to a DataTable, using CarID as my unique field.
    2. Read data from the second text file and look for an existing CarID in the DataTable; if it exists, update that row. If it doesn't exist, add a new row.
    3. Once I'm done, push the contents of the DataTable to the database.

    What I have so far:

        Dim sSQL As String = "SELECT * FROM tblCars"
        Dim da As New OleDb.OleDbDataAdapter(sSQL, conn)
        Dim ds As New DataSet
        da.Fill(ds, "CarData")
        Dim cb As New OleDb.OleDbCommandBuilder(da)

        'loop: read a line of text and parse it out; gets dd, dc, and carID

        'create a new empty row
        Dim dsNewRow As DataRow = ds.Tables("CarData").NewRow()

        'update the new row with fresh data
        dsNewRow.Item("DriveDate") = dd
        dsNewRow.Item("DCode") = dc
        dsNewRow.Item("CarNum") = carID
        'about 15 more fields

        'add the filled row to the DataSet table
        ds.Tables("CarData").Rows.Add(dsNewRow)

        'end loop

        'update the database with the new rows
        da.Update(ds, "CarData")

    Questions:

    1. In constructing my table I use "SELECT * FROM tblCars", but what if that table already has millions of records? Is that not a waste of resources? Should I be trying something different if I only want to add new records?
    2. Once I'm done with the first text file, I then go to the next text file. What's the best approach here: first look for an existing record based on CarNum, or create a second table and then merge the two at the end?
    3. Finally, when the DataTable is done being populated and I'm pushing it to the database, I want to make sure that if records already exist with the three primary fields (DriveDate, DCode, and CarNum), they get updated with the new fields, and if they don't exist, the records get appended. Is that possible with my process?

    tia, AGP
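    On questions 1 and 3, a hedged sketch: narrow the SELECT with a WHERE clause so only candidate rows are loaded, and probe the DataTable before adding, so an existing (DriveDate, DCode, CarNum) row is updated instead of duplicated. SomeField and newValue are placeholders, and the parameter usage is an assumption:

        Dim sSQL As String = "SELECT * FROM tblCars WHERE DriveDate = ?"
        Dim da As New OleDb.OleDbDataAdapter(sSQL, conn)
        da.SelectCommand.Parameters.AddWithValue("DriveDate", dd)

        Dim ds As New DataSet
        da.Fill(ds, "CarData")
        Dim cb As New OleDb.OleDbCommandBuilder(da)

        'inside the read loop: update-or-insert against the in-memory table
        Dim filter As String = String.Format( _
            "DriveDate = '{0}' AND DCode = '{1}' AND CarNum = '{2}'", dd, dc, carID)
        Dim hits() As DataRow = ds.Tables("CarData").Select(filter)

        If hits.Length > 0 Then
            hits(0).Item("SomeField") = newValue   'update the existing row
        Else
            Dim dsNewRow As DataRow = ds.Tables("CarData").NewRow()
            dsNewRow.Item("DriveDate") = dd
            dsNewRow.Item("DCode") = dc
            dsNewRow.Item("CarNum") = carID
            ds.Tables("CarData").Rows.Add(dsNewRow)
        End If

        da.Update(ds, "CarData")    'CommandBuilder issues INSERTs and UPDATEs

    DataAdapter.Update() sends an UPDATE for modified rows and an INSERT for added ones, which is exactly the append-or-update behavior question 3 asks about.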

    Read the article
