Search Results

Search found 6988 results on 280 pages for 'if statement'.


  • org.hibernate.hql.ast.QuerySyntaxException: TABLE NAME is not mapped

    - by Coronatus
    I have two models, Item and ShopSection, with a many-to-many relationship:

        @Entity(name = "item")
        public class Item extends Model {
            @ManyToMany(cascade = CascadeType.PERSIST)
            public Set<ShopSection> sections;
        }

        @Entity(name = "shop_section")
        public class ShopSection extends Model {
            public List<Item> findActiveItems(int page, int length) {
                return Item.find(
                    "select distinct i from Item i join i.sections as s where s.id = ?",
                    id).fetch(page, length);
            }
        }

    findActiveItems is meant to find the items in a section, but I get this error:

        org.hibernate.hql.ast.QuerySyntaxException: Item is not mapped
        [select distinct i from Item i join i.sections as s where s.id = ?]
            at org.hibernate.hql.ast.util.SessionFactoryHelper.requireClassPersister(SessionFactoryHelper.java:180)
            at org.hibernate.hql.ast.tree.FromElementFactory.addFromElement(FromElementFactory.java:111)
            at org.hibernate.hql.ast.tree.FromClause.addFromElement(FromClause.java:93)
            at org.hibernate.hql.ast.HqlSqlWalker.createFromElement(HqlSqlWalker.java:322)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.fromElement(HqlSqlBaseWalker.java:3441)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.fromElementList(HqlSqlBaseWalker.java:3325)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.fromClause(HqlSqlBaseWalker.java:733)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:584)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectStatement(HqlSqlBaseWalker.java:301)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.statement(HqlSqlBaseWalker.java:244)
            at org.hibernate.hql.ast.QueryTranslatorImpl.analyze(QueryTranslatorImpl.java:254)
            at org.hibernate.hql.ast.QueryTranslatorImpl.doCompile(QueryTranslatorImpl.java:185)
            at org.hibernate.hql.ast.QueryTranslatorImpl.compile(QueryTranslatorImpl.java:136)
            at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:101)
            at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:80)
            at org.hibernate.engine.query.QueryPlanCache.getHQLQueryPlan(QueryPlanCache.java:124)
            at org.hibernate.impl.AbstractSessionImpl.getHQLQueryPlan(AbstractSessionImpl.java:156)
            at org.hibernate.impl.AbstractSessionImpl.createQuery(AbstractSessionImpl.java:135)
            at org.hibernate.impl.SessionImpl.createQuery(SessionImpl.java:1770)
            at org.hibernate.ejb.AbstractEntityManagerImpl.createQuery(AbstractEntityManagerImpl.java:272)
            ... 8 more

    What am I doing wrong?
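    A likely cause, for what it's worth: @Entity(name = "item") overrides the entity name Hibernate uses in HQL, so the query has to refer to the entity as item, not the class name Item. A minimal sketch of the fix, keeping the Play-style Model API from the question:

        public List<Item> findActiveItems(int page, int length) {
            // The @Entity annotation renamed this entity to "item",
            // so HQL must use that name rather than the Java class name.
            return Item.find(
                "select distinct i from item i join i.sections as s where s.id = ?",
                id).fetch(page, length);
        }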


  • WPF, C# - Making Intellisense/Autocomplete list, fastest way to filter list of strings

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete control like the one you find in Visual Studio. It all works fine until the list contains roughly 2000+ items. I'm using a simple LINQ statement for the filtering:

        var filterCollection = from s in listCollection
                               where s.FilterValue.IndexOf(currentWord,
                                   StringComparison.OrdinalIgnoreCase) >= 0
                               orderby s.FilterValue
                               select s;

    I then assign this collection to a WPF ListBox's ItemsSource, and that's the end of it; it works fine. Note that the ListBox is virtualised, so there will only be at most 7-8 visual elements in memory and in the visual tree.

    The caveat right now is that when the user types extremely fast in the RichTextBox, and I execute the filtering plus binding on every key-up, there's a semi-race condition, or out-of-sync filtering: the first keystroke's filtering could still be doing its filtering or binding work while the fourth keystroke's is doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve seamless filtering much like the one in Visual Studio.

    I'm not sure exactly where my problem lies, so I'm also attributing it to IndexOf's string operation; or perhaps my list of strings could be organised into some kind of index that would speed up searching. Any suggestions or code samples are much welcomed. Thanks.
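    One way to get rid of the out-of-sync updates (a sketch, not from the original post; listCollection, listBox, and the method name are assumptions carried over from the question) is to run the filter on a background task and cancel the previous one on each keystroke, so only the latest keystroke's results ever reach the ListBox:

        private CancellationTokenSource filterCts;

        private void OnCurrentWordChanged(string currentWord)
        {
            // Abandon whatever filter is still running for an earlier keystroke.
            if (filterCts != null) filterCts.Cancel();
            filterCts = new CancellationTokenSource();
            var token = filterCts.Token;

            Task.Factory.StartNew(() =>
            {
                var filtered = listCollection
                    .Where(s => s.FilterValue.IndexOf(currentWord,
                        StringComparison.OrdinalIgnoreCase) >= 0)
                    .OrderBy(s => s.FilterValue)
                    .ToList();

                // Stale keystroke? Then never touch the UI.
                token.ThrowIfCancellationRequested();

                // Marshal back to the UI thread before assigning ItemsSource.
                Dispatcher.BeginInvoke(new Action(() =>
                    listBox.ItemsSource = filtered));
            }, token);
        }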


  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (the User ID, Work ID, Machine ID, and Event Start/End Time columns in the first table below) associated with time and production quantity data (the Output and Time columns) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation we would like to do is to transform the table content to a granularity of minutes, rather than the current production-event ("Event Start Time" to "Event End Time") granularity. The reprocessed rows would look like:

        +---------+---------+------------+---------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute   | Output |
        +---------+---------+------------+---------------------+--------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19    | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:20    | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:21    | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:22    | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:23    | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:24    | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:25    | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:26    | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:27    | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:28    | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:29    | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:30    | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:31    | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:32    | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:33    | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:34    | 133    |
        +---------+---------+------------+---------------------+--------+

    So the reprocessing would take an existing row created at production-event granularity and restate it at minute granularity, eliminating the now-redundant Event End Time and Time columns as it goes. It assumes a constant rate of production and divides Output by the difference in minutes plus one to populate the new table's Output column.

    I know this can be done in code... but can it be done entirely in a MySQL INSERT statement (or otherwise entirely in MySQL)? I am thinking of an INSERT INTO ... SELECT construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
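    A common way to do this entirely in MySQL (a sketch under assumptions, not a tested solution) is to join against a numbers/tally table so that each event row multiplies into one row per elapsed minute. The events and production_minutes table names are placeholders, and the ints table holding consecutive integers 0..N is an assumption:

        INSERT INTO production_minutes
            (user_id, work_id, machine_id, production_minute, output)
        SELECT
            e.user_id,
            e.work_id,
            e.machine_id,
            -- Truncate the start time to the minute, then step forward n.i minutes.
            DATE_ADD(DATE_FORMAT(e.event_start, '%Y-%m-%d %H:%i:00'),
                     INTERVAL n.i MINUTE),
            -- Constant-rate assumption: output divided by (minutes + 1).
            e.output / (TIMESTAMPDIFF(MINUTE, e.event_start, e.event_end) + 1)
        FROM events e
        JOIN ints n
          ON n.i <= TIMESTAMPDIFF(MINUTE, e.event_start, e.event_end);

    Because the join runs per source row, the hundreds-of-machines case comes along for free: every machine's event rows fan out into their own minute rows in the same statement.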


  • JavaScript: input validation in the keydown event

    - by c411
    Hi, I'm attempting to validate user text input in the keydown event. The reason I'm validating in keydown is that I do not want characters considered illegal to be displayed in the input box at all. The validation I am writing looks like this:

        function validateUserInput() {
            var code = this.event.keyCode;
            if ((code < 48 || code > 57) // numerical
                && code !== 46   // delete
                && code !== 8    // backspace
                && code !== 37   // <- arrow
                && code !== 39)  // -> arrow
            {
                this.event.preventDefault();
            }
        }

    I could keep going like this, but I'm seeing drawbacks in this implementation:

    - The conditional statement grows longer with every condition I add.
    - keyCodes can differ between browsers.
    - I have to check not only what is illegal but also handle the exceptions; in the example above, delete, backspace, and the arrow keys are exceptions.

    The feature I don't want to lose is that nothing appears in the textarea unless it passes the validation (if the user tries to type illegal characters, nothing should appear at all). That is why I am not validating on the keyup event.

    So my questions are:

    - Are there better ways to validate input in the keydown event than checking keyCode by keyCode?
    - Are there other ways to capture user input before the browser displays it, and a way to attach the validation there?

    Thanks for the help in advance.
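    One alternative worth sketching (not from the original post) is to validate in the keypress event instead: there the browser reports the actual typed character rather than a physical key code, and control keys such as backspace and the arrows either don't fire keypress at all or arrive with charCode 0, so they need no special-casing. Older IE uses keyCode in keypress and event.returnValue rather than preventDefault, so a production version needs a small shim:

        function validateUserInput(event) {
            // Control keys (arrows, backspace, delete...) report charCode 0
            // in the browsers that fire keypress for them at all.
            if (!event.charCode) {
                return;
            }
            var ch = String.fromCharCode(event.charCode);
            // Allow digits only; an illegal character never reaches the textarea.
            if (!/[0-9]/.test(ch)) {
                event.preventDefault();
            }
        }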


  • using a JOIN in an UPDATE in SQL

    - by SDLFunTimes
    Hi, I'm having trouble formulating a legal statement to double the statuses of the suppliers (s) who have shipped (sp) more than 500 units. I've been trying:

        update s
        set s.status = s.status * 2
        from s join sp on (sp.sno = s.sno)
        group by sno
        having sum(qty) > 500;

    However, I'm getting this error from MySQL:

        ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
        that corresponds to your MySQL server version for the right syntax to use
        near 'from s join sp on (sp.sno = s.sno) group by sno having sum(qty) > 500'
        at line 1

    Does anyone have any ideas about what is wrong with this query? Here's my schema:

        create table s (
            sno char(5) not null,
            sname char(20) not null,
            status smallint,
            city char(15),
            primary key (sno)
        );

        create table p (
            pno char(6) not null,
            pname char(20) not null,
            color char(6),
            weight smallint,
            city char(15),
            primary key (pno)
        );

        create table sp (
            sno char(5) not null,
            pno char(6) not null,
            qty integer not null,
            primary key (sno, pno)
        );
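    For context: UPDATE ... FROM is SQL Server syntax, and MySQL's multi-table UPDATE allows neither GROUP BY nor HAVING. One way around it (a sketch against the schema above) is to do the aggregation in a derived table and join the result:

        UPDATE s
        JOIN (
            -- Suppliers whose total shipped quantity exceeds 500.
            SELECT sno, SUM(qty) AS total_qty
            FROM sp
            GROUP BY sno
            HAVING SUM(qty) > 500
        ) big ON big.sno = s.sno
        SET s.status = s.status * 2;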


  • Why should I bother with unit testing if I can just use integration tests?

    - by CodeGrue
    Ok, I know I am going out on a limb making a statement like that, so my question is for everyone to convince me I am wrong. Take this scenario: I have method A, which calls method B, and they are in different layers. So I unit test B, which delivers null as a result. I test that null is returned, and the unit test passes. Nice. Then I unit test A, which expects an empty string to be returned from B. So I mock the layer B is in, an empty string is returned, and the test passes. Nice again. (Assume I don't realize the relationship between A and B, or that maybe two different people are building these methods.)

    My concern is that we don't find the real problem until we test A and B together, i.e. integration testing. Since an integration test provides coverage over the unit-test area, it seems like a waste of effort to build all these unit tests that really don't tell us anything (or very much) meaningful. Why am I wrong?


  • Diffie-Hellman -- Primitive root mod n -- cryptography question.

    - by somewhat confused
    In the snippet below, please explain, starting with the first "for" loop, what is happening and why. Why is 0 added in the first loop, and why is 1 added in the second loop? What is going on in the "if" statement under bigi? Finally, please explain the modPow method. Thank you in advance for meaningful replies.

        public static boolean isPrimitive(BigInteger m, BigInteger n) {
            BigInteger bigi, vectorint;
            Vector<BigInteger> v = new Vector<BigInteger>(m.intValue());
            int i;

            for (i = 0; i < m.intValue(); i++)
                v.add(new BigInteger("0"));

            for (i = 1; i < m.intValue(); i++) {
                bigi = new BigInteger("" + i);
                if (m.gcd(bigi).intValue() == 1)
                    v.setElementAt(new BigInteger("1"), n.modPow(bigi, m).intValue());
            }

            for (i = 0; i < m.intValue(); i++) {
                bigi = new BigInteger("" + i);
                if (m.gcd(bigi).intValue() == 1) {
                    vectorint = v.elementAt(bigi.intValue());
                    if (vectorint.intValue() == 0)
                        i = m.intValue() + 1;
                }
            }

            if (i == m.intValue() + 2)
                return false;
            else
                return true;
        }
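    On the last part of the question: BigInteger.modPow(e, m) computes n^e mod m by fast modular exponentiation. A tiny worked check of the primitive-root idea, with small numbers chosen purely for illustration:

        import java.math.BigInteger;

        public class ModPowDemo {
            public static void main(String[] args) {
                BigInteger n = BigInteger.valueOf(3);  // candidate generator
                BigInteger m = BigInteger.valueOf(7);  // modulus
                // 3 is a primitive root mod 7: its powers 3^1..3^6 mod 7
                // hit every nonzero residue exactly once.
                for (int e = 1; e < 7; e++) {
                    System.out.println("3^" + e + " mod 7 = "
                        + n.modPow(BigInteger.valueOf(e), m));
                }
                // Prints 3, 2, 6, 4, 5, 1 -- all of 1..6, so 3 is primitive.
            }
        }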


  • Catching 'Last Record' in Coldfusion for IE javascript bug

    - by Simon Hume
    I'm using ColdFusion to pull UK postcodes into an array for display on a Google Map. This happens dynamically from a SQL database, so the numbers can range from 1 to 100+. The script works great; however, IE (groan) decides to display one point way off line, over in California somewhere. I fixed this issue in a previous web app: it was due to the comma between array items still being present after the last item. That's fine in Firefox, Safari, etc., but not in IE. But that app used a fixed 10 records, so it was easy to fix. I just need a little if statement around my comma to suppress it on the last record, and I can't seem to get it right. Any tips or suggestions? Here is the line of code in question:

        var address = [<cfloop query="getApplicant"><cfif getApplicant.dbHomePostCode GT ""><cfoutput>'#getApplicant.dbHomePostCode#',</cfoutput></cfif></cfloop>];

    Hopefully someone can help with this rather simple request. I'm just having a bad day at the office!
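    One way to avoid emitting the trailing comma at all (a sketch, assuming the getApplicant query shown) is to build a ColdFusion list with ListAppend, which only inserts delimiters between items, and output it once. Because the cfif filter can skip rows, this is safer than testing currentRow against recordCount:

        <cfset postcodeList = "">
        <cfloop query="getApplicant">
            <cfif getApplicant.dbHomePostCode GT "">
                <!--- ListAppend adds the comma only between elements, never at the end --->
                <cfset postcodeList = ListAppend(postcodeList, "'#getApplicant.dbHomePostCode#'")>
            </cfif>
        </cfloop>
        var address = [<cfoutput>#postcodeList#</cfoutput>];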


  • notify listener inside or outside inner synchronization

    - by Jary Zeels
    Hello all, I am struggling with a decision. I am writing a thread-safe library/API. Listeners can be registered, so the client is notified when something interesting happens. Which of the two implementations is more common?

        class MyModule {
            protected Listener listener;

            protected void somethingHappens() {
                synchronized (this) {
                    // ... do useful stuff ...
                    listener.notify();
                }
            }
        }

    or

        class MyModule {
            protected Listener listener;

            protected void somethingHappens() {
                Listener l = null;
                synchronized (this) {
                    // ... do useful stuff ...
                    l = listener;
                }
                l.notify();
            }
        }

    In the first implementation, the listener is notified inside the synchronized block; in the second, outside it. I feel the second one is advisable, as it leaves less room for potential deadlocks, but I am having trouble convincing myself. A downside of the second implementation is that the client might receive 'incorrect' notifications, which happens if it accessed the module prior to the l.notify() statement. Thanks a lot.
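    For the multiple-listener case, a common idiom (a sketch, not from the original post) is java.util.concurrent.CopyOnWriteArrayList: iteration happens over a snapshot, so the notification can safely run outside the lock with no manual copying. The callback is named onEvent() here, both as a stand-in for the question's notify() and because an interface cannot redeclare Object's final notify():

        import java.util.concurrent.CopyOnWriteArrayList;

        class MyModule {
            // Iteration sees a consistent snapshot; registration never blocks it.
            private final CopyOnWriteArrayList<Listener> listeners =
                new CopyOnWriteArrayList<Listener>();

            protected void somethingHappens() {
                synchronized (this) {
                    // ... do useful stuff under the lock ...
                }
                // Notify outside the lock: a slow or re-entrant listener
                // cannot deadlock against the module's monitor.
                for (Listener l : listeners) {
                    l.onEvent();
                }
            }
        }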


  • sybase - fails to use index unless string is hard-coded

    - by Garrett
    I'm using Sybase 12.5.3 (ASE); I'm new to Sybase, though I've worked with MSSQL pretty extensively. I'm running into a scenario where a stored procedure is really very slow. I've traced the issue to a single SELECT statement for a relatively large table. Modifying that statement dramatically improves the performance of the procedure (and reverting it drastically slows it down; i.e., the SELECT statement is definitely the culprit).

        -- Sybase optimizes and uses the multi-column index... fast!
        SELECT ID, status, dateTime
        FROM myTable
        WHERE status in ('NEW','SENT')
        ORDER BY ID

        -- Sybase does not use the index and does a very slow table scan
        SELECT ID, status, dateTime
        FROM myTable
        WHERE status in (select status from allowableStatusValues)
        ORDER BY ID

    The code above is an adapted/simplified version of the actual code. Note that I've already tried recompiling the procedure, updating statistics, etc. I have no idea why Sybase ASE would choose an index only when the strings are hard-coded and a table scan when they come from another table. Someone please give me a clue, and thank you in advance.
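    One workaround worth trying (a sketch, not from the original post) is to phrase the lookup as a join, so the optimizer sees a plain equality predicate instead of a subquery it has to materialize per row. This assumes allowableStatusValues holds each status at most once; otherwise the join can duplicate rows and an EXISTS form is safer:

        -- Join form: often lets ASE pick the multi-column index again.
        SELECT t.ID, t.status, t.dateTime
        FROM myTable t
        JOIN allowableStatusValues a ON a.status = t.status
        ORDER BY t.ID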


  • DQL delete from multiple tables (doctrine)

    - by singer
    I need to perform a DQL delete across multiple related tables. In SQL it is something like this:

        DELETE r1, r2
        FROM ComRealty_objects r1, com_realty_objects_phones r2
        WHERE r1.id IN (10,20) AND r2.id_object IN (10,20)

    I need to perform this statement using DQL, but I'm stuck on this:

        <?php
        $dql = Doctrine_Query::create()
            ->delete('phones, comrealtyobjects')
            ->from('ComRealtyObjects comrealtyobjects')
            ->from('ComRealtyObjectsPhones phones')
            ->whereIn("comrealtyobjects.id", $ids)
            ->whereIn("phones.id_object", $ids);
        echo($dql->getSqlQuery());
        ?>

    The DQL parser gives me this result:

        DELETE FROM `com_realty_objects_phones`, `ComRealty_objects`
        WHERE (`id` IN (?) AND `id_object` IN (?))

    Searching Google and Stack Overflow I found this useful topic: http://stackoverflow.com/questions/2247905/what-is-the-syntax-for-a-multi-table-delete-on-a-mysql-database-using-doctrine But it's not exactly my case: that was a delete from a single table. Is there a way to override the DQL parser's behaviour? Or maybe some other way to delete records from multiple tables using Doctrine?

    Note: if you are using Doctrine behaviours (Doctrine_Record_Generator) you need to initialize those tables first using Doctrine_Core::initializeModels() to perform DQL operations on them.
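    DQL deletes operate on one entity at a time rather than on joined table sets, so the usual workaround (a sketch in the spirit of the linked thread, using the question's model names) is to issue one delete per entity, wrapped in a transaction so the pair stays atomic:

        <?php
        // DQL has no multi-table DELETE: run one per entity inside a transaction.
        $conn = Doctrine_Manager::connection();
        $conn->beginTransaction();

        // Delete the dependent phone rows first...
        Doctrine_Query::create()
            ->delete('ComRealtyObjectsPhones p')
            ->whereIn('p.id_object', $ids)
            ->execute();

        // ...then the objects themselves.
        Doctrine_Query::create()
            ->delete('ComRealtyObjects o')
            ->whereIn('o.id', $ids)
            ->execute();

        $conn->commit();
        ?>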


  • Which non-clustered index should I use?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. The table has a primary key:

        CREATE TABLE [dbo].[Customers](
            [CustomerId] [int] IDENTITY(1,1) NOT NULL,
            [CustomerName] [varchar](100) NOT NULL,
            [Deleted] [bit] NOT NULL,
            [Active] [bit] NOT NULL,
            CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED
            (
                [CustomerId] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    This is the query I'll be using to see what the execution plan shows:

        SELECT CustomerName FROM Customers

    Executing this command with no additional non-clustered index, the execution plan shows me:

        I/O cost = 3.45646
        Operator cost = 4.57715

    Now I'm trying to see if it's possible to improve performance, so I've created a non-clustered index for this table.

    1) First non-clustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    Executing the select against the Customers table again, the execution plan shows me:

        I/O cost = 2.79942
        Operator cost = 3.92001

    That seems better. Now I've deleted the index I just created, in order to create a new one.

    2) Second non-clustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerIDIncludeCustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC
        )
        INCLUDE ([CustomerName])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
              IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this new non-clustered index, I've executed the select statement again and the execution plan shows me the same result:

        I/O cost = 2.79942
        Operator cost = 3.92001

    So, which non-clustered index should I use? Why are the I/O and Operator costs the same in both execution plans? Am I doing something wrong, or is this expected? Thank you.
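    A note on why the numbers match: both indexes cover SELECT CustomerName FROM Customers, and a covering scan reads roughly the same number of leaf pages either way, so the plans cost out identically. The definitions differ mainly in that INCLUDE keeps CustomerName out of the non-leaf B-tree levels, which tends to make that index a bit smaller and cheaper to maintain. One way to see the physical difference for yourself (a sketch, not from the original post):

        -- Compare level-by-level page counts of the two indexes; the INCLUDE
        -- variant usually has narrower (fewer-page) intermediate levels.
        SELECT i.name, ps.index_level, ps.page_count
        FROM sys.dm_db_index_physical_stats(
                 DB_ID(), OBJECT_ID('dbo.Customers'), NULL, NULL, 'DETAILED') ps
        JOIN sys.indexes i
          ON i.object_id = ps.object_id AND i.index_id = ps.index_id;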


  • PostgreSQL insert on primary key failing with contention, even at serializable level

    - by Steven Schlansker
    I'm trying to insert or update data in a PostgreSQL db. The simplest case is a key-value pairing (the actual data is more complicated, but this is the smallest clear example). When you set a value, I'd like it to insert if the key is not there, and otherwise update. Sadly Postgres does not have an insert-or-update statement, so I have to emulate it myself. I've been working with the idea of basically SELECTing whether the key exists and then running the appropriate INSERT or UPDATE. Clearly this needs to be in a transaction or all manner of bad things could happen.

    However, this is not working exactly how I'd like it to. I understand that there are limitations to serializable transactions, but I'm not sure how to work around this one. Here's the situation:

        ab: => set transaction isolation level serializable;
        a:  => select count(1) from table where id=1;  --> 0
        b:  => select count(1) from table where id=1;  --> 0
        a:  => insert into table values(1);  --> 1
        b:  => insert into table values(1);
               --> ERROR: duplicate key value violates unique constraint "serial_test_pkey"

    I would expect it to throw the usual "couldn't commit due to concurrent update", but I'm guessing that since the inserts are different "rows" this does not happen. Is there an easy way to work around this?
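    As the transcript shows, the two transactions only collide on the physical unique index, so the error surfaces as a unique_violation rather than a serialization failure; the classic pattern is to catch that error and retry the whole select-then-insert-or-update loop. On PostgreSQL 9.5 and later (newer than this question), the whole dance collapses into one atomic statement. A sketch, with mytable(id, value) standing in for the example table:

        -- PostgreSQL 9.5+: atomic insert-or-update, no race left to handle.
        INSERT INTO mytable (id, value)
        VALUES (1, 'something')
        ON CONFLICT (id) DO UPDATE SET value = EXCLUDED.value;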


  • Field specific errors for ETL

    - by AaronLS
    I am creating an ETL process in MS SQL Server, and I would like to have errors specific to a particular column of a particular row. For example, the data is initially loaded from Excel files into a table (we'll call it the Initial table) where all columns are varchar(2000); I then stage the data to another table (the DataTyped table) that has more specific data types (datetime, int, etc.) or more tightly constrained varchar lengths. I need to be able to create error messages for a specific field, such as:

        "Jan. 13th" is not a valid date format for the submission date.
        Please use a format of MM/DD/YYYY

    These error messages need to be stored in such a way that, later in the process, an automated job can build reports in which each message references a specific row and field (someone will have to go back, correct the data in the source system, and resubmit the Excel file). So ideally the messages would be inserted into a Failures table of some sort, containing the primary key of the failed row, the column name, and the error message.

    Question: can this be accomplished with SSIS, or some open source tool like Talend, and if so, what would be your general approach? Or what hand-coded approach would you take? A couple of approaches I've thought of use SQL (up until now I have done ETL by hand in SQL procs, but I want to consider other approaches, possibly even C#):

    - Use a cursor to read through the Initial table; for each row, insert a blank record with only the primary key into the DataTyped table, then run a single update statement per column, such that if an update fails I can insert a very specific, column-level error message into the error messages table.
    - Insert all the data as-is into the DataTyped table, but with duplicate columns like SubmissionDate and SubmissionDateOld. After the initial insert, the *Old columns hold the data and the rest are blank, and a single update per column sets SubmissionDate based on SubmissionDateOld.

    In addition to suggesting an approach, I'd like to know whether you are already using that approach, or something similar, in the work you do.
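    Whichever tool drives the process, the error store described above is simple to pin down; a minimal sketch of such a Failures table (all names are placeholders):

        CREATE TABLE dbo.LoadFailures (
            FailureId    INT IDENTITY(1,1) PRIMARY KEY,
            SourceRowId  INT           NOT NULL,  -- PK of the failed Initial-table row
            ColumnName   SYSNAME       NOT NULL,  -- the field that failed conversion
            ErrorMessage NVARCHAR(400) NOT NULL,  -- the user-facing message
            LoadedAt     DATETIME      NOT NULL DEFAULT GETDATE()
        );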


  • Use of unassigned local variable 'dictionary'

    - by codemonkie
    I get the error "Use of unassigned local variable 'dictionary'" despite having assigned the value in the following code:

        private static void UpdateJadProperties(Uri jadUri, Uri jarUri, Uri notifierUri)
        {
            Dictionary<String, String> dictionary;

            try
            {
                String[] jadFileContent;

                // Create an instance of StreamReader to read from a file.
                // The using statement also closes the StreamReader.
                using (StreamReader sr = new StreamReader(jadUri.AbsolutePath.ToString()))
                {
                    Char[] delimiters = { '\r', '\n' };
                    jadFileContent = sr.ReadToEnd().Split(delimiters,
                        System.StringSplitOptions.RemoveEmptyEntries);
                }

                // @@NOTE: Keys contain ": " suffix, values don't!
                dictionary = jadFileContent.ToDictionary(
                    x => x.Substring(0, x.IndexOf(':') + 2),
                    x => x.Substring(x.IndexOf(':') + 2));
            }
            catch (Exception e)
            {
                // Let the user know what went wrong.
                Console.WriteLine("The file could not be read:");
                Console.WriteLine(e.Message);
            }

            try
            {
                if (dictionary.ContainsKey("MIDlet-Jar-URL: "))
                {
                    // Change the value by Remove followed by Add
                }
            }
            catch (ArgumentNullException ane)
            {
                throw;
            }
        }

    The error is on the line:

        if (dictionary.ContainsKey("MIDlet-Jar-URL: "))

    Can anyone help me out here, please? TIA
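    The compiler is right to complain: if the first try block throws before the assignment, dictionary is read while still unassigned in the second block. A sketch of the usual fix: initialize the variable up front and bail out when parsing failed, after which no ArgumentNullException handling is needed:

        Dictionary<string, string> dictionary = null;
        try
        {
            // ... read the .jad file and build the dictionary as before ...
        }
        catch (Exception e)
        {
            Console.WriteLine("The file could not be read:");
            Console.WriteLine(e.Message);
            return; // nothing sensible to do without the dictionary
        }

        // Definitely assigned (and non-null) from here on.
        if (dictionary.ContainsKey("MIDlet-Jar-URL: "))
        {
            // Change the value by Remove followed by Add
        }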


  • How to build a ~500 page Flash site

    - by philwilks
    I am about to embark on building a Flash site of approximately 500 pages. The site is an interactive learning system, with about 10 "chapters" each containing around 50 "pages". Each page has some sort of animation and interactivity; for example, the user might have to decide whether a statement is true or false by clicking one of two buttons, after which an appropriate response is displayed. The user can jump backwards and forwards between pages as they wish. As far as I know, these are some of my options:

    A) Build the entire site as a single Flash file with no external content.
    B) Build each of the 10 chapters as a separate Flash file, and have a master Flash file that loads in the chapters. Each page would then be a separate movie clip within its chapter file.
    C) Build each individual page as a separate Flash file, and have a master Flash file that loads them in.

    At the moment I'm thinking option B would be best, and I'd be very grateful for your thoughts on this! Of course, there are probably other options I haven't thought of.


  • Can't select data from MySQL database: java.lang.NullPointerException

    - by Devel
    Hi, I'm trying to select data from a database using this code:

        // DATABASE
        ResultSet rs;
        String polecenie;
        Statement st;
        String[] subj;

        public void polacz() {
            try {
                Class.forName("com.mysql.jdbc.Driver");
                Connection pol = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/testgenerator", "root", "pospaz");
                st = pol.createStatement();
                lblPolaczonoZBaza.setText("Polaczono z baza danych testgenerator");
            } catch (Exception ek) {
                statusMessageLabel.setText("Can't connect to db: " + ek);
            }

            polecenie = "select * from subjects";
            try {
                rs = st.executeQuery(polecenie);
                int i = 0;
                while (rs.next()) {
                    subj[i] = rs.getString("name");
                    i++;
                }
                st.close();
            } catch (Exception ek) {
                statusMessageLabel.setText("Can't select data: " + ek);
            }
        }

    The second catch reports java.lang.NullPointerException. I've looked everywhere and I can't find the solution. I'd be grateful for any help.
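    Two things in this snippet can produce that NullPointerException: if the connection fails, st is still null when executeQuery runs, and subj is declared but never created, so subj[i] dereferences a null array even when the query succeeds. A sketch of the fix, keeping the question's names and using a List so the row count need not be known up front:

        // Bail out early if the connection (and thus st) was never created.
        if (st == null) {
            return;
        }

        List<String> subj = new ArrayList<String>();
        try {
            rs = st.executeQuery("select * from subjects");
            while (rs.next()) {
                subj.add(rs.getString("name"));  // grows as needed; no null array
            }
            st.close();
        } catch (Exception ek) {
            statusMessageLabel.setText("Can't select data: " + ek);
        }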


  • Create dynamic factory method in PHP (< 5.3)

    - by fireeyedboy
    How would one typically create a dynamic factory method in PHP? By dynamic factory method, I mean a factory method that autodiscovers which objects it can create, based on some aspect of the given argument, and preferably without registering them with the factory first. I'm OK with requiring that the possible classes live in one common place (a directory), though. I want to avoid the typical switch statement in the factory method, such as this:

        public static function factory( $someObject )
        {
            $className = get_class( $someObject );
            switch( $className ) {
                case 'Foo':
                    return new FooRelatedObject();
                    break;
                case 'Bar':
                    return new BarRelatedObject();
                    break;
                // etc...
            }
        }

    My specific case deals with the factory creating a voting repository based on the item being voted for. The items all implement a Voteable interface. Something like this:

        Default_User implements Voteable
        ...
        Default_Comment implements Voteable
        ...
        Default_Event implements Voteable
        ...

        Default_VoteRepositoryFactory
        {
            public static function factory( Voteable $item )
            {
                // autodiscover what type of repository this item needs
                // for instance, Default_User needs a Default_VoteRepository_User
                // etc...
                return new Default_VoteRepository_OfSomeType();
            }
        }

    I want to be able to drop in new Voteable items and vote repositories for them without touching the implementation of the factory.
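    One convention-based sketch (the Default_ naming scheme is taken from the question; everything else is an assumption): derive the repository class name from the item's class name and let class_exists, which triggers autoloading, do the discovery. No switch, and new pairs drop in with no factory changes:

        class Default_VoteRepositoryFactory
        {
            public static function factory( Voteable $item )
            {
                // Default_User -> Default_VoteRepository_User, etc.
                $suffix = substr(get_class($item), strlen('Default_'));
                $repositoryClass = 'Default_VoteRepository_' . $suffix;

                // class_exists() triggers the autoloader, which is what
                // makes the discovery "dynamic" without any registration.
                if (!class_exists($repositoryClass)) {
                    throw new InvalidArgumentException(
                        'No vote repository for ' . get_class($item));
                }
                return new $repositoryClass();
            }
        }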


  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries, to benefit the performance of select queries. I can conceptually understand what happens on an insert, because SQL Server has to write entries into each index matching the new rows, but update and delete are a little murkier to me, because I can't quite wrap my head around what the database engine has to do.

    Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int,
            col2 int,
            col3 int,
            col4 int
            PRIMARY KEY (col1, col2)

        INDEX IX_1
            col3
            INCLUDE col4

    Now, if I issue the statement

        DELETE FROM Foo WHERE col1 = 12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index, if you prefer): the index is set up to make it easy to find the range of rows to be removed, and it removes them. However, at this point it also needs to update IX_1, and the query I gave it offers no obviously efficient way to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index?

    It might help me wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this: I have a database that is spending a significant amount of time in deletes, and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo, which lists in the details section the other indexes that need to be updated, but I get no indication of the relative cost of these other indexes. Are they all equal in this case? Is there some way to estimate the impact of removing one or more of these indexes without actually trying it?
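    One practical way to decide which of those indexes might be expendable (a sketch, not from the original post) is to compare how often each one is read versus how often it is maintained:

        -- Indexes with many user_updates but few seeks/scans/lookups are
        -- paying maintenance cost on every DELETE without earning reads.
        SELECT i.name,
               s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
        FROM sys.dm_db_index_usage_stats s
        JOIN sys.indexes i
          ON i.object_id = s.object_id AND i.index_id = s.index_id
        WHERE s.database_id = DB_ID()
          AND i.object_id = OBJECT_ID('dbo.Foo');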


  • UPDATE query that fixes orphaned records

    - by Jed
    I have an Access database with two tables related by PK/FK. Unfortunately, the tables have allowed duplicate/redundant records, which has made the database a bit screwy, and I'm trying to figure out a SQL statement that will fix the problem. To better explain the problem and the goal, I created example tables to use as reference (shown as screenshots in the original post).

    There are two tables, a Student table and a TestScore table, where StudentID is the PK/FK. The Student table contains duplicate records for the students John, Sally, Tommy, and Suzy. In other words, the Johns with StudentIDs 1 and 5 are the same person, the Sallys with StudentIDs 2 and 6 are the same person, and so on. The TestScore table relates test scores to students.

    Ignoring how and why the Student table allowed duplicates, the goal I'm trying to accomplish is to update the TestScore table so that it replaces the StudentIDs that have been disabled with the corresponding enabled StudentID. So all StudentIDs equal to 1 (John) would be updated to 5, all StudentIDs equal to 2 (Sally) would be updated to 6, and so on, leaving no remaining references to the disabled StudentIDs 1-4.

    Can you think of a query (compatible with MS Access's Jet engine) that can accomplish this goal? Or maybe you can offer some tips or perspectives that will point me in the right direction. Thanks.
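    Assuming the duplicates can be matched on the student's name and that Student carries an Enabled yes/no flag (both assumptions; the screenshots aren't visible here), a Jet-compatible sketch joins TestScore to its disabled student and then to the enabled twin by name:

        UPDATE (TestScore AS ts
        INNER JOIN Student AS badS ON badS.StudentID = ts.StudentID)
        INNER JOIN Student AS goodS ON goodS.StudentName = badS.StudentName
        SET ts.StudentID = goodS.StudentID
        WHERE badS.Enabled = False AND goodS.Enabled = True;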


  • Why not use tables for layout in HTML?

    - by Bno
    It seems to be the general opinion that tables should not be used for layout in HTML. Why? I have never (or rarely, to be honest) seen good arguments for this. The usual answers are:

    It's good to separate content from layout.
    But this is a fallacious argument; Cliche Thinking. I guess it's true that using the table element for layout has little to do with tabular data. So what? Does my boss care? Do my users care? Perhaps me or my fellow developers, who have to maintain a web page, care... Is a table less maintainable? I think using a table is easier than using divs and CSS. By the way... why is using a div or a span good separation of content from layout, and a table not? Getting a good layout with only divs often requires a lot of nested divs.

    Readability of the code.
    I think it's the other way around. Most people understand HTML; few understand CSS.

    It's better for SEO not to use tables.
    Why? Can anybody show some evidence that it is? Or a statement from Google that tables are discouraged from an SEO perspective?

    Tables are slower.
    An extra tbody element has to be inserted. This is peanuts for modern web browsers. Show me some benchmarks where the use of a table significantly slows down a page.

    A layout overhaul is easier without tables, see CSS Zen Garden.
    Most web sites that need an upgrade need new content (HTML) as well. Scenarios where a new version of a web site only needs a new CSS file are not very likely. Zen Garden is a nice web site, but a bit theoretical. Not to mention its misuse of CSS.

    I am really interested in good arguments to use divs + CSS instead of tables.


  • Inserting null fields with dbi:Pg

    - by User1
    I have a Perl script inserting data into Postgres according to a pipe-delimited text file. Sometimes a field is null (as expected). However, Perl turns this field into an empty string, and the Postgres insert statement fails. Here's a snippet of code:

        use DBI;

        # Connect to the database.
        $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'mydb', 'mydb',
                            {AutoCommit => 1, RaiseError => 1, PrintError => 1});

        # Prepare an insert.
        $sth = $dbh->prepare("INSERT INTO mytable (field0,field1) SELECT ?,?");

        while (<>) {
            # Remove the trailing newline.
            chomp;

            # Parse the fields.
            @field = split(/\|/, $_);
            print "$_\n";

            # Do the insert.
            $sth->execute($field[0], $field[1]);
        }

    And with this input:

        a|1|x
        b||x
        c|3|x

    it fails at b||x:

        DBD::Pg::st execute failed: ERROR: invalid input syntax for integer: ""

    I just want it to insert a null for field1 instead. Any ideas?

    EDIT: I simplified the input at the last minute; the input I originally posted actually made the program work for some reason, so I've replaced it above with input that makes it fail. Also note that field1 is a nullable integer column.
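    The root cause is that split yields an empty string for the empty field, and DBD::Pg passes '' rather than NULL. DBI binds undef as SQL NULL, so a sketch of the usual fix is to map empty strings to undef before executing:

        while (<>) {
            chomp;
            # The -1 limit keeps trailing empty fields instead of dropping them.
            my @field = split /\|/, $_, -1;
            # Empty string -> undef, which DBI sends to Postgres as NULL.
            my @params = map { $_ eq '' ? undef : $_ } @field[0, 1];
            $sth->execute(@params);
        }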


  • Writing language converter in ANTLR

    - by Stefan
    I'm writing a converter between dialects of the same programming language. I've found a grammar on the net; it's complex and handles all the cases. Now I'm trying to write the appropriate actions. Most of the input is just going to be rewritten to the output unchanged. What I need to do is parse function calls, do my magic (rename the function, reorder arguments, etc.), and write the result. I'm using AST output.

    When I come across a function call, I build a custom object structure (from classes defined in my target language), call the appropriate conversion function, and end up with a string that represents the transformed function call. The problem is: what am I supposed to do with that string? I'd like to replace the .text attribute of the enclosing rule, but setText() is only available on lexer rules, and the rule's .text attribute is read-only. How do I solve this problem?

        program
            : statement_list { output = $statement_list.text; }
            ;

        //...

        statement
            : expression_statement
            // ...
            ;

        expression_statement
            : function_call
            // ...
            ;

        function_call
            : ID '('
              {
                  /* build the object, assign name */
                  Function function = new Function();
                  //...
              }
              ( arg1 = expression { /* add first parameter */ }
                ( ',' arg2 = expression { /* add the rest of the parameters */ } )*
              )? ')'
              {
                  /* convert the function call */
                  string converted = Tools.Convert(function);
                  // $setText(converted);             // doesn't work
                  // $functionCall.text = converted;  // doesn't work
              }
            ;
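    One approach that fits "copy the input verbatim, rewrite a few spots" (a sketch assuming ANTLR 3 with a Java target; the C# runtime has an equivalent class) is TokenRewriteStream: untouched tokens pass through unchanged, and the action replaces just the function call's token range instead of fighting the read-only .text attribute:

        // Build the parser on a TokenRewriteStream instead of a CommonTokenStream.
        TokenRewriteStream tokens = new TokenRewriteStream(lexer);
        MyParser parser = new MyParser(tokens);   // MyParser is your generated parser
        parser.program();
        String output = tokens.toString();        // original text plus the rewrites

    and in the grammar, label the boundary tokens so the action knows the range to replace:

        function_call
            : id=ID '(' ( expression ( ',' expression )* )? rp=')'
              {
                  // Replace the tokens from the ID through the ')' with the
                  // converted call; everything else is emitted verbatim.
                  ((TokenRewriteStream) input).replace($id, $rp, Tools.Convert(function));
              }
            ;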


  • Navigating to nodes using xpath in flat structure

    - by James Berry
    I have an xml file in a flat structure. We do not control the format of this file; we just have to deal with it. I've renamed the fields because they are highly domain-specific and don't really make any difference to the problem.

        <attribute name="Title">Book A</attribute>
        <attribute name="Code">1</attribute>
        <attribute name="Author">
            <value>James Berry</value>
            <value>John Smith</value>
        </attribute>
        <attribute name="Title">Book B</attribute>
        <attribute name="Code">2</attribute>
        <attribute name="Title">Book C</attribute>
        <attribute name="Code">3</attribute>
        <attribute name="Author">
            <value>James Berry</value>
        </attribute>

    Key things to note: the file is not particularly hierarchical. Books are delimited by the occurrence of an attribute element with name='Title', but the name='Author' attribute node is optional. Is there a simple XPath statement I can use to find the authors of book n? It is easy to identify the title of book n, but the author values are optional, and you can't just take the next author in document order: in the case of Book B, that would give the author of Book C.

    I have written a state machine to parse this as a series of elements, but I can't help thinking there would have been a way to get the results I want directly.
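    XPath can express "the Author whose nearest preceding Title is book n" directly, because on the reverse preceding-sibling axis, [1] means the closest sibling. A sketch, assuming all the attribute elements share one parent; for Book B it correctly selects nothing:

        /*/attribute[@name='Author']
            [preceding-sibling::attribute[@name='Title'][1] = 'Book C']
            /value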


  • Why does a conditional not affect query speed?

    - by Telos
    I have a stored procedure that was taking a "long" time to execute. The query only needs to return data in one case, so I figured I could check for that case and just return before hitting the actual query. The only problem is that it still takes the same amount of time to execute with the if statement. I have verified that the code inside the if is not executing, and that if I replace the complex query with a simple select the speed is fine... so now I'm confused. Why is the query being slowed down by code that doesn't get executed when the conditional is false? Here's the query itself:

        ALTER PROCEDURE [dbo].[pr_cbc_GetCokeInfo]
            @pa_record int,
            @pb_record int
        AS
        BEGIN
            SET NOCOUNT ON;

            declare @ticketRec int

            SELECT @ticketRec = TicketRecord
            FROM eservice_live..v_sdticket
            WHERE TicketRecord = @pa_record
              AND serviceCompanyID = 1139
              AND @pb_record IS NULL

            if @ticketRec IS NULL return

            select record = null,
                   doc_ref = @pa_record,
                   memo_type = 'I',
                   memo = 'Bottler: ' + isnull(Bottler, '') + ' ' +
                          'Sales Loc: ' + isnull(SalesLocation, '') + ' ' +
                          'Outlet Desc: ' + isnull(OutletDesc, '') + ' ' +
                          'City: ' + isnull(OutletCity, '') + ' ' +
                          'EquipNo: ' + isnull(EquipNo, '') + ' ' +
                          'SerialNo: ' + isnull(SerialNo, '') + ' ' +
                          'PhaseNo: ' + isnull(cast(PhaseNo as varchar(255)), '') + ' ' +
                          'StaticIP: ' + isnull(StaticIP, '') + ' ' +
                          'Air Card: ' + isnull(AirCard, '')
            FROM eservice_live..v_SDExtendedInfoField ef
            JOIN eservice_live..CokeSNList csl ON ef.valueText = csl.SerialNo
            WHERE ef.docType = 'CLH'
              AND ef.docref = @ticketRec
              AND ef.ExtendedDocNumber = 5

            SET NOCOUNT OFF;
        END
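    A common explanation for this behaviour, worth testing: SQL Server compiles plans for the statements in a procedure at first execution, using the initial parameter values (parameter sniffing), whether or not a given branch is ever taken. One usual workaround (a sketch; pr_cbc_GetCokeInfo_Inner is a hypothetical helper holding the expensive query) is to move the guarded query into its own procedure so it is only compiled when the branch is actually reached; adding OPTION (RECOMPILE) to the expensive statement is another route:

        ALTER PROCEDURE [dbo].[pr_cbc_GetCokeInfo]
            @pa_record int,
            @pb_record int
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @ticketRec int;

            SELECT @ticketRec = TicketRecord
            FROM eservice_live..v_sdticket
            WHERE TicketRecord = @pa_record
              AND serviceCompanyID = 1139
              AND @pb_record IS NULL;

            IF @ticketRec IS NULL RETURN;

            -- Compiled only when this branch is actually reached.
            EXEC dbo.pr_cbc_GetCokeInfo_Inner @pa_record, @ticketRec;
        END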

