Search Results

Search found 3111 results on 125 pages for 'labeled statements'.


  • Upgrading to 9.2 - Info You Can Use (part 1)

    - by John Webb
    Rebekah Jackson joins our blog with a series of helpful hints on planning your upgrade to PeopleSoft 9.2. Find Features & Capabilities There are many ways that you might learn about new features and capabilities within our releases, but if you aren’t sure where to start or how best to go about it, we recommend: Go to www.peoplesoftinfo.com Select the product line you are interested in, and go to the ‘Release Content’ tab Use the Video Feature Overviews (VFOs) on YouTube and the Cumulative Feature Overview (CFO) tool to find features and functions. The VFOs are brief recordings that summarize some of our most popular capabilities. These recordings are great tools for learning about new features, or helping others to visualize the value they can bring to your organization. The VFOs focus on some of our highest value and most compelling new capabilities. We also provide summarized ‘Why Upgrade to 9.2’ VFOs for HCM, Financials, and Supply Chain. The CFO is a spreadsheet-based tool that allows you to select the release you are currently on, and compare it to the new release. It will return the list of all new features and capabilities, by product. You can browse the full list and / or highlight areas that look particularly interesting. Once you have a list of features by product, use the Release Value Proposition, Pre-Release Notes, and the Release Notes documents to get more details on and supporting value statements about why those features will be helpful. Gather additional data and supporting information, including: Go to the Product Data Sheets tab, and review the respective data sheets. These summarize the capabilities in the product, and provide succinct value statements for the product and capabilities. The PeopleSoft 9.2 Upgrade page, which has many helpful resources. Important Notes: - We recommend that you go through the above steps for the application areas of interest, as well as for PeopleTools. There are many areas in PeopleTools 8.53 and the 9.2 application releases that combine technical and functional capabilities to deliver transformative value. - We also recommend that you review the Portal Solutions content. With your license to PeopleSoft applications, you have access to many of the most powerful capabilities within the Interaction Hub. - If you have recently upgraded to PeopleSoft 9.1, and an immediate upgrade to 9.2 is simply not realistic, you can apply the same approaches described here to find untapped capabilities in your current products. Many of the features in 9.2 were delivered first in our 9.1 Feature Packs. To find the Release Value Proposition, Pre-Release Notes, and Release Notes for these releases, search on ‘PeopleSoft 9.1 Documentation Home Page’ on My Oracle Support, and select your desired product area.

    Read the article

  • How do I copy a python function to a remote machine and then execute it?

    - by Hugh
    I'm trying to create a construct in Python 3 that will allow me to easily execute a function on a remote machine. Assuming I've already got a python tcp server that will run the functions it receives, running on the remote server, I'm currently looking at using a decorator like @execute_on(address, port) This would create the necessary context required to execute the function it is decorating and then send the function and context to the tcp server on the remote machine, which then executes it. Firstly, is this somewhat sane? And if not, could you recommend a better approach? I've done some googling but haven't found anything that meets these needs. I've got a quick and dirty implementation for the tcp server and client so I'm fairly sure that'll work. I can get a string representation of the function (e.g. func) being passed to the decorator by import inspect string = inspect.getsource(func) which can then be sent to the server where it can be executed. The problem is, how do I get all of the context information that the function requires to execute? For example, if func is defined as follows, import MyModule def func(): result = MyModule.my_func() MyModule will need to be available to func either in the global context or func's local context on the remote server. In this case that's relatively trivial but it can get so much more complicated depending on when and how import statements are used. Is there an easy and elegant way to do this in Python? The best I've come up with at the moment is using the ast library to pull out all import statements, using the inspect module to get string representations of those modules and then reconstructing the entire context on the remote server. Not particularly elegant and I can see lots of room for error. Thanks for your time
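
    A minimal sketch (not the poster's code, and untested against any real server) of the "pull out the import statements with ast" idea described above: package a function's source together with the module-level imports of its defining module, so a remote executor could exec() the imports first and the function second. Closures, decorators and other non-importable state are deliberately out of scope.

```python
# Hypothetical helper names; only module-level imports are handled.
import ast
import inspect
import textwrap

def package_function(func):
    """Return (imports_src, func_src) suitable for sending to a remote executor."""
    module = inspect.getmodule(func)
    module_src = inspect.getsource(module)
    tree = ast.parse(module_src)
    import_nodes = [node for node in tree.body
                    if isinstance(node, (ast.Import, ast.ImportFrom))]
    imports_src = "\n".join(ast.unparse(node) for node in import_nodes)  # Python 3.9+
    func_src = textwrap.dedent(inspect.getsource(func))
    return imports_src, func_src

def run_packaged(imports_src, func_src, func_name):
    """Remote side of the sketch: rebuild a namespace, then call the function."""
    namespace = {}
    exec(imports_src, namespace)   # recreate the import context
    exec(func_src, namespace)      # define the function
    return namespace[func_name]()
```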

    Read the article

  • Why use short-circuit code?

    - by Tim Lytle
    Related Questions: Benefits of using short-circuit evaluation, Why would a language NOT use Short-circuit evaluation?, Can someone explain this line of code please? (Logic & Assignment operators) There are questions about the benefits of a language using short-circuit code, but I'm wondering what the benefits are for the programmer. Is it just that it can make code a little more concise? Or are there performance reasons? I'm not asking about situations where two entities need to be evaluated anyway, for example: if($user->auth() AND $model->valid()){ $model->save(); } To me the reasoning there is clear - since both need to be true, you can skip the more costly model validation if the user can't save the data. This also has a (to me) obvious purpose: if(is_string($userid) AND strlen($userid) > 10){ //do something }; Because it wouldn't be wise to call strlen() with a non-string value. What I'm wondering about is the use of short-circuit code when it doesn't affect any other statements. For example, from the Zend Application default index page: defined('APPLICATION_PATH') || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application')); This could have been: if(!defined('APPLICATION_PATH')){ define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application')); } Or even as a single statement: if(!defined('APPLICATION_PATH')) define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application')); So why use the short-circuit code? Just for the 'coolness' factor of using logic operators in place of control structures? To consolidate nested if statements? Because it's faster?
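
    The question is about PHP, but the idiom carries over to most languages with short-circuit operators. A small Python sketch of the two uses discussed above, purely for illustration:

```python
import os

def read_if_nonempty(path):
    # Guard: os.path.getsize() only runs when the file exists,
    # mirroring the is_string()/strlen() example.
    if os.path.exists(path) and os.path.getsize(path) > 0:
        with open(path) as fh:
            return fh.read()
    return ""

# Short-circuit used as a statement, like defined(...) || define(...):
# the right-hand side runs only when the left-hand side is falsy.
settings = {}
settings.get("app_path") or settings.update(app_path="/opt/app")
print(settings)  # {'app_path': '/opt/app'}
```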

    Read the article

  • How to validate selects / inserts are hitting the right server with MySQL Master/Slave

    - by bwizzy
    I've got a rails app using the master_slave_adapter plugin (http://github.com/mauricio/master_slave_adapter/tree/master) to send all selects to a slave, and all other statements to the master. Replication is setup using Mysql master / slave. I'm trying to validate that all the SQL statements are indeed going to the right place. Selects to the slave (db2), inserts to the master (db1) but I'm not sure how to do it. I've tried using tcpdump on the webservers: sudo /usr/sbin/tcpdump -q -i eth0 dst port 3306 and this is the output for a page request with a ton of selects: 10:32:36.570930 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 0 10:32:36.576805 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 0 10:32:36.577201 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 0 10:32:36.577980 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 86 10:32:36.578186 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 21 10:32:36.578359 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 27 10:32:36.578522 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 5 10:32:36.578741 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 13 10:32:36.579611 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 29 10:32:36.588201 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 0 10:32:36.588323 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 0 10:32:36.588677 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 0 10:32:36.588784 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 86 It doesn't look like all the selects are going to the slave. Maybe this isn't the right way to test, anyone know a better way?

    Read the article

  • Pylons error "No object (name: request) has been registered for this thread" with debug = false

    - by Evgeny
    I'm unable to access the request object in my Pylons 0.9.7 controller when I set debug = false in the .ini file. I have the following code: def run_something(self): print('!!! request = %r' % request) print('!!! request.params = %r' % request.params) yield 'Stuff' With debugging enabled this works fine and prints out: !!! request = <Request at 0x9571190 POST http://my_url> !!! request.params = UnicodeMultiDict([... lots of stuff ...]) If I set debug = false I get the following: !!! request = <paste.registry.StackedObjectProxy object at 0x4093790> Error - <type 'exceptions.TypeError'>: No object (name: request) has been registered for this thread The stack trace confirms that the error is on the print('!!! request.params = %r' % request.params) line. I'm running it using the Paste server and these two lines are the very first lines in my controller method. This only occurs if I have yield statements in the method (even though the statements aren't reached). I'm guessing Pylons sees that it's a generator method and runs it on some other thread. My questions are: How do I make it work with debug = false ? Why does it work with debug = true ? Obviously this is quite a dangerous bug, since I normally develop with debug = true, so it can go unnoticed during development.
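
    Without claiming to diagnose the Pylons/Paste internals, the underlying Python behaviour is easy to show in isolation: a generator function's body does not run until the response is iterated, which can be after per-request objects have been torn down. A framework-free sketch:

```python
import threading

_registry = threading.local()   # stand-in for paste.registry's per-thread storage

def controller_action():
    # This print runs when the first chunk is pulled from the generator,
    # not when controller_action() is called.
    print("request is", getattr(_registry, "request", "NOT REGISTERED"))
    yield "Stuff"

_registry.request = "<Request POST http://my_url>"
body = controller_action()   # nothing printed yet
del _registry.request        # simulate end-of-request teardown
print(next(body))            # prints "request is NOT REGISTERED", then "Stuff"
```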

    Read the article

  • What is your best-practice advice on implementing SQL stored procedures (in a C# winforms application)?

    - by JYelton
    I have read these very good questions on SO about SQL stored procedures: When should you use stored procedures? and Are Stored Procedures more efficient, in general, than inline statements on modern RDBMS’s? I am a beginner on integrating .NET/SQL though I have used basic SQL functionality for more than a decade in other environments. It's time to advance with regards to organization and deployment. I am using .NET C# 3.5, Visual Studio 2008 and SQL Server 2008; though this question can be regarded as language- and database- agnostic, meaning that it could easily apply to other environments that use stored procedures and a relational database. Given that I have an application with inline SQL queries, and I am interested in converting to stored procedures for organizational and performance purposes, what are your recommendations for doing so? Here are some additional questions in my mind related to this subject that may help shape the answers: Should I create the stored procedures in SQL using SQL Management Studio and simply re-create the database when it is installed for a client? Am I better off creating all of the stored procedures in my application, inside of a database initialization method? It seems logical to assume that creating stored procedures must follow the creation of tables in a new installation. My database initialization method creates new tables and inserts some default data. My plan is to create stored procedures following that step, but I am beginning to think there might be a better way to set up a database from scratch (such as in the installer of the program). Thoughts on this are appreciated. I have a variety of queries throughout the application. Some queries are incredibly simple (SELECT id FROM table) and others are extremely long and complex, performing several joins and accepting approximately 80 parameters. Should I replace all queries with stored procedures, or only those that might benefit from doing so? Finally, as this topic obviously requires some research and education, can you recommend an article, book, or tutorial that covers the nuances of using stored procedures instead of direct statements?

    Read the article

  • Odd behaviour with PHP's in_array function.

    - by animuson
    I have a function that checks multiple form items and returns either boolean(true) if the check passed or the name of the check that was run if it didn't pass. I built the function to run multiple checks at once, so it will return an array of these results (one result for each check that was run). When I run the function, I get this array result: Array ( [0] => 1 [1] => password [2] => birthday ) // print_r array(3) { [0]=> bool(true) [1]=> string(8) "password" [2]=> string(8) "birthday" } // var_dump The 'username' check passed and the 'password' and 'birthday' checks both failed. Then I am using simple in_array statements to determine which ones failed, like so: $results = $ani->e->vld->simulate("register.php", $checks); die(var_dump($results)); // Added after to see what array was being returned if (in_array("username", $results)) // do something if (in_array("password", $results)) // do something if (in_array("birthday", $results)) // do something The problem I'm having is that the 'username' line is still executing, even though 'username' is not in the array. It executes all three statements as if they were all true for some reason. Why is this? I thought maybe that the bool(true) was automatically causing the function to return true for every result without checking the rest of the array, but I couldn't find any documentation that would suggest that very odd functionality.

    Read the article

  • Django Form Preview

    - by Mark Kecko
    I'm trying to use django's FormPreview and I can't get it to work properly. Here's my code: forms.py class MyForm(forms.ModelForm): status = forms.TypedChoiceField( coerce=int, choices=LIST_STATUS, label="type", widget=forms.RadioSelect ) description = forms.CharField(widget = forms.Textarea) stage = forms.CharField() def __init__(self, useradd=None, *args, **kwargs): super(MyForm, self).__init__(*args, **kwargs) self.fields['firm'].label = "Firm" class Meta: model = MyModel fields = ['status', 'description', 'stage'] class MyFormPreview(FormPreview): form_template = 'templates/post.html' preview_template = 'templates/review.html' def process_preview(self, request, cleaned_data): print "processed" def done(self, request, cleaned_data): print "done" # Do something with the cleaned_data, then redirect # to a "success" page. return HttpResponseRedirect('/') urls.py (r'^post/$', MyFormPreview(MyForm)), post.html <form id = "post_ad" action = "" method = "POST" enctype="multipart/form-data"> <table> {{form.as_table}} </table> <input type="submit" name="save" value="Post" /> </form> When I go to /post/ I get the correct form and I fill it out. When I submit the form it goes right back to /post/ but there are no errors (I've tried displaying {{errors}}) and the form is empty. None of my print statements execute. I'm not sure what I'm missing. Can anyone help me out? I can't find any documentation besides what's on the django site. Also, what's the "preview" variable called that I should use in my preview.html template? {{preview}} or do I just do {{form}} again? -- Answered below. I tried adding 'django.contrib.formtools' to my installed_apps in settings and I tried using the code from the default form templates from django.contrib as suggested below. Still, when I submit the form I go right back to the post template, none of my print statements execute :(

    Read the article

  • Preparing a MySQL INSERT/UPDATE statement with DEFAULT values

    - by Raveren
    Quoting MySQL INSERT manual - same goes for UPDATE: Use the keyword DEFAULT to set a column explicitly to its default value. This makes it easier to write INSERT statements that assign values to all but a few columns, because it enables you to avoid writing an incomplete VALUES list that does not include a value for each column in the table. Otherwise, you would have to write out the list of column names corresponding to each value in the VALUES list. So in short if I write INSERT INTO table1 (column1,column2) values ('value1',DEFAULT); A new row with column2 set as its default value - whatever it may be - is inserted. However if I prepare and execute a statement in PHP: $statement = $pdoObject-> prepare("INSERT INTO table1 (column1,column2) values (?,?)"); $statement->execute(array('value1','DEFAULT')); The new row will contain 'DEFAULT' as its text value - if the column is able to store text values. Now I have written an abstraction layer to PDO (I needed it) and to get around this issue am considering to introduce a const DEFAULT_VALUE = "randomstring"; So I could execute statements like this: $statement->execute(array('value1',mysql::DEFAULT_VALUE)); And then in method that does the binding I'd go through values that are sent to be bound and if some are equal to self::DEFAULT_VALUE, act accordingly. I'm pretty sure there's a better way to do this. Has someone else encountered similar situations?
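
    A sketch of the workaround being described, in Python rather than PHP since the idea is language-neutral: a sentinel object marks "use the column default", and the statement builder emits the literal DEFAULT keyword for those positions instead of a bound placeholder. The names here are invented for illustration.

```python
DEFAULT = object()   # sentinel, playing the role of mysql::DEFAULT_VALUE

def build_insert(table, columns, values):
    """Build SQL with %s placeholders, substituting DEFAULT where requested.
    (Table/column names are interpolated directly, so they must be trusted.)"""
    placeholders, params = [], []
    for value in values:
        if value is DEFAULT:
            placeholders.append("DEFAULT")   # literal keyword, never quoted
        else:
            placeholders.append("%s")        # bound as a real parameter
            params.append(value)
    sql = "INSERT INTO %s (%s) VALUES (%s)" % (
        table, ", ".join(columns), ", ".join(placeholders))
    return sql, params

sql, params = build_insert("table1", ["column1", "column2"], ["value1", DEFAULT])
print(sql)     # INSERT INTO table1 (column1, column2) VALUES (%s, DEFAULT)
print(params)  # ['value1']
```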

    Read the article

  • DirectShow Filter I wrote dies after 10-24 seconds in Skype video call

    - by Robert Oschler
    I've written a DirectShow push filter for use with Skype using Delphi Pro 6 and the DSPACK DirectShow library. In preview mode, when you test a video input device in the Skype client Video Settings window, my filter works flawlessly. I can leave it up and running for many minutes without an error. However when I start a video call after 10 to 24 seconds, never longer, the video feed freezes. The call continues fine with the call duration counter clicking away the seconds, but the video feed is dead, stuck on whatever frame the freeze happened (although after a long while it turns black which I believe means Skype has given up on the filter). I tried attaching to the process from my debugger with a breakpoint literally set on every method call and none of them are hit once the freeze takes place. It's as if the thread that makes the DirectShow FillBuffer() call to my filter on behalf of Skype is dead or has been shutdown. I can't trace my filter in the debugger because during a Skype call I get weird int 1 and int 3 debugger hard interrupt calls when a Skype video call is in progress. This behavior happens even with my standard web cam input device selected and my DirectShow filter completely unregistered as a ActiveX server. I suspect it might be some "anti-debugging" code since it doesn't happen in video input preview mode. Either way, that is why I had to attach to the process after the fact to see if my FillBuffer() called was still being called and instead discovered that appears to be dead. Note, my plain vanilla USB web cam's DirectShow filter does not exhibit the freezing behavior and works fine for many minutes. There's something about my filter that Skype doesn't like. I've tried Sleep() statements of varying intervals, no Sleep statements, doing virtually nothing in the FillBuffer() call. Nothing helps. If anyone has any ideas on what might be the culprit here, I'd like to know. Thanks, Robert

    Read the article

  • Using multiple aggregate functions in an (ANSI) SQL statement

    - by morpheous
    I have aggregate functions foo(), foobar(), fredstats(), barneystats() I want to create a domain specific query language (DSQL) above my DB, to facilitate using a domain language to query the DB. The 'language' comprises of algebraic expressions (or more specifically SQL like criteria) which I use to generate (ANSI) SQL statements which are sent to the db engine. The following lines are examples of what the language statements will look like, and hopefully, it will help further clarify the concept: **Example 1** DQL statement: foobar('yellow') between 1 and 3 and fredstats('weight') > 42 Translation: fetch all rows in an underlying table where computed values for aggregate function foobar() is between 1 and 3 AND computed value for AGG FUNC fredstats() is greater than 42 **Example 2** DQL statement: fredstats('weight') < barneystats('weight') AND foo('fighter') in (9,10,11) AND foobar('green') <> 42 Translation: Fetch all rows where the specified criteria matches **Example 3** DQL statement: foobar('green') / foobar('red') <> 42 Translation: Fetch all rows where the specified criteria matches **Example 4** DQL statement: foobar('green') - foobar('red') >= 42 Translation: Fetch all rows where the specified criteria matches Given the following information: The table upon which the queries above are being executed is called 'tbl' table 'tbl' has the following structure (id int, name varchar(32), weight float) The result set returns only the tbl.id, tbl.name and the names of the aggregate functions as columns in the result set - so for example the foobar() AGG FUNC column will be called foobar in the result set. So for example, the first DQL query will return a result set with the following columns: id, name, foobar, fredstats Given the above, my questions then are: What would be the underlying SQL required for Example1 ? What would be the underlying SQL required for Example3 ? Given an algebraic equation comprising of AGGREGATE functions, Is there a way of generalizing the algorithm needed to generate the required ANSI SQL statement(s)? I am using PostgreSQL as the db, but I would prefer to use ANSI SQL wherever possible.

    Read the article

  • Alternative Control Structures

    - by Brock Woolf
    I've been wondering about alternative ways to write control structures. One that you learn early on for if statements is a replacement for this: if ( x ) { // true } else { // false } with this (sometimes this is more readable compared to lots of brackets): x ? true : false It got me thinking. Can we replace anything else in case it's more readable? We of course replace if statements like this: if (a < b) { return true; } else { return false; } with things like this: return a < b; We can save a long if with something like this (pretty much same as the one above): bool xCollisionTrue = (object.xPos < aabb.maxX && object.xPos > aabb.minX); So those are the ones I can think of off the top of my head for the if statement and doing comparisons. So I'm wondering what about looping constructs, for, while, etc. Maybe the code obfuscators might have some ideas.
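
    The question is language-agnostic, so here is the same pair of replacements sketched in Python (a conditional expression, returning the comparison directly, and collapsing a range check):

```python
def describe(x):
    # conditional expression instead of a four-line if/else
    return "truthy" if x else "falsy"

def is_less(a, b):
    # return the comparison itself instead of if (a < b) return True ...
    return a < b

def x_collision(obj_x, aabb_min_x, aabb_max_x):
    # chained comparison collapses the two-clause boolean from the post
    return aabb_min_x < obj_x < aabb_max_x

print(describe([1]), is_less(2, 3), x_collision(5, 1, 10))  # truthy True True
```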

    Read the article

  • Dynamic SQL Server stored procedure

    - by Pinu
    ALTER PROCEDURE [dbo].[GetDocumentsAdvancedSearch] @SDI CHAR(10) = NULL ,@Client CHAR(4) = NULL ,@AccountNumber VARCHAR(20) = NULL ,@Address VARCHAR(300) = NULL ,@StartDate DATETIME = NULL ,@EndDate DATETIME = NULL ,@ReferenceID CHAR(14) = NULL AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- DECLARE DECLARE @Sql NVARCHAR(4000) DECLARE @ParamList NVARCHAR(4000) SELECT @Sql = 'SELECT DISTINCT ISNULL(Documents.DocumentID, '') ,Person.Name1 ,Person.Name2 ,Person.Street1 ,Person.Street2 ,Person.CityStateZip ,ISNULL(Person.ReferenceID,'') ,ISNULL(Person.AccountNumber,'') ,ISNULL(Person.HasSetPreferences,0) ,Documents.Job ,Documents.SDI ,Documents.Invoice ,ISNULL(Documents.ShippedDate,'') ,ISNULL(Documents.DocumentPages,'') ,Documents.DocumentType ,Documents.Description FROM Person LEFT OUTER JOIN Documents ON Person.PersonID = Documents.PersonID LEFT OUTER JOIN DocumentType ON Documents.DocumentType = DocumentType.DocumentType LEFT OUTER JOIN Addressess ON Person.PersonID = Addressess.PersonID' SELECT @Sql = @Sql + ' WHERE Documents.SDI IN ( '+ QUOTENAME(@sdi) + ') OR (Person.AssociationID = ' + ''' 000000 + ''' + 'AND Person.Client = ' + QUOTENAME(@Client) IF NOT (@AccountNumber IS NULL) SELECT @Sql = @Sql + 'AND Person.AccountNumber LIKE' + QUOTENAME(@AccountNumber) IF NOT (@Address IS NULL) SELECT @Sql = @Sql + 'AND Person.Name1 LIKE' +QUOTENAME(@Address)+ 'AND Person.Name2 LIKE' +QUOTENAME(@Address)+ 'AND Person.Street1 LIKE' +QUOTENAME(@Address)+ 'AND Person.Street2 LIKE' +QUOTENAME(@Address)+ 'AND Person.CityStateZip LIKE' +QUOTENAME(@Address) IF NOT (@StartDate IS NULL) SELECT @Sql = @Sql + 'AND Documents.ShippedDate >=' +@StartDate IF NOT (@EndDate IS NULL) SELECT @Sql = @Sql + 'AND Documents.ShippedDate <=' +@EndDate IF NOT (@ReferenceID IS NULL) SELECT @Sql = @Sql + 'AND Documents.ReferenceID =' +QUOTENAME(@ReferenceID) -- Insert statements for procedure here -- PRINT @Sql SELECT @ParamList = '@Psdi CHAR(10),@PClient CHAR(4),@PAccountNumber VARCHAR(20),@PAddress VARCHAR(300),@PStartDate DATETIME ,@PEndDate DATETIME,@PReferenceID CHAR(14)' EXEC SP_EXECUTESQL @Sql,@ParamList,@Sdi,@Client,@AccountNumber,@Address,@StartDate,@EndDate,@ReferenceID --PRINT @Sql END ERROR Msg 102, Level 15, State 1, Line 23 Incorrect syntax near '000000'. Msg 105, Level 15, State 1, Line 23 Unclosed quotation mark after the character string 'AND Person.Client = [1 ]AND Person.AccountNumber LIKE[1]'.

    Read the article

  • Test-Driven Development Problem

    - by Zeck
    Hi guys, I'm newbie to Java EE 6 and i'm trying to develop very simple JAX-RS application. RESTfull web service working fine. However when I ran my test application, I got the following. What have I done wrong? Or am i forget any configuration? Of course i'm create a JNDI and i'm using Netbeans 6.8 IDE. In finally, thank you for any advise. My Entity: @Entity @Table(name = "BOOK") @NamedQueries({ @NamedQuery(name = "Book.findAll", query = "SELECT b FROM Book b"), @NamedQuery(name = "Book.findById", query = "SELECT b FROM Book b WHERE b.id = :id"), @NamedQuery(name = "Book.findByTitle", query = "SELECT b FROM Book b WHERE b.title = :title"), @NamedQuery(name = "Book.findByDescription", query = "SELECT b FROM Book b WHERE b.description = :description"), @NamedQuery(name = "Book.findByPrice", query = "SELECT b FROM Book b WHERE b.price = :price"), @NamedQuery(name = "Book.findByNumberofpage", query = "SELECT b FROM Book b WHERE b.numberofpage = :numberofpage")}) public class Book implements Serializable { private static final long serialVersionUID = 1L; @Id @Basic(optional = false) @Column(name = "ID") private Integer id; @Basic(optional = false) @Column(name = "TITLE") private String title; @Basic(optional = false) @Column(name = "DESCRIPTION") private String description; @Basic(optional = false) @Column(name = "PRICE") private double price; @Basic(optional = false) @Column(name = "NUMBEROFPAGE") private int numberofpage; public Book() { } public Book(Integer id) { this.id = id; } public Book(Integer id, String title, String description, double price, int numberofpage) { this.id = id; this.title = title; this.description = description; this.price = price; this.numberofpage = numberofpage; } public Integer getId() { return id; } public void setId(Integer id) { this.id = id; } public String getTitle() { return title; } public void setTitle(String title) { this.title = title; } public String getDescription() { return description; } public void setDescription(String description) { this.description = description; } public double getPrice() { return price; } public void setPrice(double price) { this.price = price; } public int getNumberofpage() { return numberofpage; } public void setNumberofpage(int numberofpage) { this.numberofpage = numberofpage; } @Override public int hashCode() { int hash = 0; hash += (id != null ? 
id.hashCode() : 0); return hash; } @Override public boolean equals(Object object) { // TODO: Warning - this method won't work in the case the id fields are not set if (!(object instanceof Book)) { return false; } Book other = (Book) object; if ((this.id == null && other.id != null) || (this.id != null && !this.id.equals(other.id))) { return false; } return true; } @Override public String toString() { return "com.entity.Book[id=" + id + "]"; } } My Junit test class: public class BookTest { private static EntityManager em; private static EntityManagerFactory emf; public BookTest() { } @BeforeClass public static void setUpClass() throws Exception { emf = Persistence.createEntityManagerFactory("E01R01PU"); em = emf.createEntityManager(); } @AfterClass public static void tearDownClass() throws Exception { em.close(); emf.close(); } @Test public void createBook() { Book book = new Book(); book.setId(1); book.setDescription("Mastering the Behavior Driven Development with Ruby on Rails"); book.setTitle("Mastering the BDD"); book.setPrice(25.9f); book.setNumberofpage(1029); em.persist(book); assertNotNull("ID should not be null", book.getId()); } } My persistence.xml jta-data-sourceBookstoreJNDI And exception is: May 7, 2009 11:10:37 AM org.hibernate.validator.util.Version INFO: Hibernate Validator bean-validator-3.0-JBoss-4.0.2 May 7, 2009 11:10:37 AM org.hibernate.validator.engine.resolver.DefaultTraversableResolver detectJPA INFO: Instantiated an instance of org.hibernate.validator.engine.resolver.JPATraversableResolver. [EL Info]: 2009-05-07 11:10:37.531--ServerSession(13671123)--EclipseLink, version: Eclipse Persistence Services - 2.0.0.v20091127-r5931 May 7, 2009 11:10:40 AM com.sun.enterprise.transaction.JavaEETransactionManagerSimplified initDelegates INFO: Using com.sun.enterprise.transaction.jts.JavaEETransactionManagerJTSDelegate as the delegate May 7, 2009 11:10:43 AM com.sun.enterprise.connectors.ActiveRAFactory createActiveResourceAdapter SEVERE: rardeployment.class_not_found May 7, 2009 11:10:43 AM com.sun.enterprise.connectors.ActiveRAFactory createActiveResourceAdapter SEVERE: com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: Error in creating active RAR at com.sun.enterprise.connectors.ActiveRAFactory.createActiveResourceAdapter(ActiveRAFactory.java:104) at com.sun.enterprise.connectors.service.ResourceAdapterAdminServiceImpl.createActiveResourceAdapter(ResourceAdapterAdminServiceImpl.java:216) at com.sun.enterprise.connectors.ConnectorRuntime.createActiveResourceAdapter(ConnectorRuntime.java:352) at com.sun.enterprise.resource.naming.ConnectorObjectFactory.getObjectInstance(ConnectorObjectFactory.java:106) at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304) at com.sun.enterprise.naming.impl.SerialContext.getObjectInstance(SerialContext.java:472) at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:437) at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:569) at javax.naming.InitialContext.lookup(InitialContext.java:396) at org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:110) at org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:94) at org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:162) at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:584) at 
org.eclipse.persistence.internal.jpa.EntityManagerFactoryProvider.login(EntityManagerFactoryProvider.java:228) at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:368) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.getServerSession(EntityManagerFactoryImpl.java:151) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:207) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:195) at com.entity.BookTest.setUpClass(BookTest.java:31) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.ParentRunner.run(ParentRunner.java:220) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:515) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1031) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:888) Caused by: java.lang.ClassNotFoundException: com.sun.gjc.spi.ResourceAdapter at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:276) at java.lang.ClassLoader.loadClass(ClassLoader.java:251) at com.sun.enterprise.connectors.ActiveRAFactory.createActiveResourceAdapter(ActiveRAFactory.java:96) ... 32 more [EL Severe]: 2009-05-07 11:10:43.937--ServerSession(13671123)--Local Exception Stack: Exception [EclipseLink-7060] (Eclipse Persistence Services - 2.0.0.v20091127-r5931): org.eclipse.persistence.exceptions.ValidationException Exception Description: Cannot acquire data source [BookstoreJNDI]. 
Internal Exception: javax.naming.NamingException: Lookup failed for 'BookstoreJNDI' in SerialContext ,orb'sInitialHost=localhost,orb'sInitialPort=3700 [Root exception is javax.naming.NamingException: Failed to look up ConnectorDescriptor from JNDI [Root exception is com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: Error in creating active RAR]] at org.eclipse.persistence.exceptions.ValidationException.cannotAcquireDataSource(ValidationException.java:451) at org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:116) at org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:94) at org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:162) at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:584) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryProvider.login(EntityManagerFactoryProvider.java:228) at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:368) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.getServerSession(EntityManagerFactoryImpl.java:151) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:207) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:195) at com.entity.BookTest.setUpClass(BookTest.java:31) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.ParentRunner.run(ParentRunner.java:220) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:515) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1031) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:888) Caused by: javax.naming.NamingException: Lookup failed for 'BookstoreJNDI' in SerialContext ,orb'sInitialHost=localhost,orb'sInitialPort=3700 [Root exception is javax.naming.NamingException: Failed to look up ConnectorDescriptor from JNDI [Root exception is com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: Error in creating active RAR]] at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:442) at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:569) at javax.naming.InitialContext.lookup(InitialContext.java:396) at org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:110) ... 
23 more Caused by: javax.naming.NamingException: Failed to look up ConnectorDescriptor from JNDI [Root exception is com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: Error in creating active RAR] at com.sun.enterprise.resource.naming.ConnectorObjectFactory.getObjectInstance(ConnectorObjectFactory.java:109) at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304) at com.sun.enterprise.naming.impl.SerialContext.getObjectInstance(SerialContext.java:472) at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:437) ... 26 more Caused by: com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: Error in creating active RAR at com.sun.enterprise.connectors.ActiveRAFactory.createActiveResourceAdapter(ActiveRAFactory.java:104) at com.sun.enterprise.connectors.service.ResourceAdapterAdminServiceImpl.createActiveResourceAdapter(ResourceAdapterAdminServiceImpl.java:216) at com.sun.enterprise.connectors.ConnectorRuntime.createActiveResourceAdapter(ConnectorRuntime.java:352) at com.sun.enterprise.resource.naming.ConnectorObjectFactory.getObjectInstance(ConnectorObjectFactory.java:106) ... 29 more Caused by: java.lang.ClassNotFoundException: com.sun.gjc.spi.ResourceAdapter at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:276) at java.lang.ClassLoader.loadClass(ClassLoader.java:251) at com.sun.enterprise.connectors.ActiveRAFactory.createActiveResourceAdapter(ActiveRAFactory.java:96) ... 32 more Exception Description: Cannot acquire data source [BookstoreJNDI]. Internal Exception: javax.naming.NamingException: Lookup failed for 'BookstoreJNDI' in SerialContext ,orb'sInitialHost=localhost,orb'sInitialPort=3700 [Root exception is javax.naming.NamingException: Failed to look up ConnectorDescriptor from JNDI [Root exception is com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: Error in creating active RAR]])

    Read the article

  • iPhone SDK removal of NSLog causes NSArray to be undeclared!

    - by Maxwell Segal
    I have a working application I'm about to distribute and am tidying up NSLog statements in it. When I remove NSLog from from a "case" statement, the NSArray declared within the "case" statement errors as Expected expression before AND undeclared. Anybody any idea why this may be? This is happening on all case statements in my app where I'm now removing NSLog. An example code sections appears below: switch (chosenScene) { case 0: //NSLog(@"group1"); // the following NSArray errors with "expected expression.." AND "..group1Secondsarray undeclared" NSArray *group1SecondsArray = [NSArray arrayWithObjects: @"Dummy",@"1/15",@"1/30",@"1/30",@"1/60",@"1/125",@"1/250",nil]; NSArray *group1FStopArray = [NSArray arrayWithObjects: @"Dummy",@"2.8",@"2.8",@"4",@"5.6",@"5.6",@"5.6",nil]; NSString *group1SecondsText = [group1SecondsArray objectAtIndex:slider.value]; calculatedSeconds.text = group1SecondsText; NSString *group1FStopText = [group1FStopArray objectAtIndex:slider.value]; calculatedFStop.text = group1FStopText; [group1SecondsText release]; [group1FStopText release]; break;

    Read the article

  • Code Golf: Shortest Turing-complete interpreter.

    - by ilya n.
    I've just tried to create the smallest possible language interpreter. Would you like to join and try? Rules of the game: You should specify a programming language you're interpreting. If it's a language you invented, it should come with a list of commands in the comments. Your code should start with an example program and data assigned to your code and data variables. Your code should end with output of your result. It's preferable that there are debug statements at every intermediate step. Your code should be runnable as written. You can assume that data are 0 and 1s (int, string or boolean, your choice) and output is a single bit. The language should be Turing-complete in the sense that for any algorithm written on a standard model, such as Turing machine, Markov chains, or similar of your choice, it's reasonably obvious (or explained) how to write a program that after being executed by your interpreter performs the algorithm. The length of the code is defined as the length of the code after removal of input part, output part, debug statements and non-necessary whitespace. Please add the resulting code and its length to the post. You can't use functions that make the compiler execute code for you, such as eval(), exec() or similar. This is a Community Wiki, meaning neither the question nor answers get the reputation points from votes. But vote anyway!

    Read the article

  • Find actual value of PHP variable

    - by Simon S
    Hi all. I am having a real headache with reading in a tab delimited text file and inserting it into a MySQL Database. The tab delimited text file was generated (I think) from a MS SQL Database, and I have written a simple script to read in the file and insert it into an existing table in my MySQL database. However, there seems to be some problem with the data in the txt file. When my PHP script parses the file and I output the INSERT statements, the values in each of the fields are longer than they should be. For example, the first field should be a simple two character alphanumeric value. If I echo out the INSERT statements, using Firebug (in Firefox), between each of the characters is a question mark in a black diamond. If I var_dump the values, I get the following: string(5) "A1" Now, this clearly shows a two character string, but var_dump tells me it is five characters long!! If I trim() the value, all I get is the first character (in this case "A"). How can I get at the other characters, even if it is only to remove them? Additionally, this appears to be forcing MySQL to insert the value as a BLOB, not as a varchar as it should. Simon
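
    The post is PHP, but the diagnostic step is the same in any language: inspect the raw bytes rather than the rendered string. A two-character field reporting a length of five usually means invisible bytes such as a BOM or UTF-16 NULs; the exact bytes in the poster's file are unknown, so the example below shows only one common case.

```python
raw = b"\xef\xbb\xbfA1"           # example only: "A1" preceded by a UTF-8 BOM
print(len(raw), raw)              # 5 b'\xef\xbb\xbfA1'
print(raw.decode("utf-8-sig"))    # 'A1' -- decoding with the right codec drops the BOM
```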

    Read the article

  • Rolling Back a Transaction with MySQL Connector in VB.net

    - by Jonathan
    Hey all- I have one multi-row INSERT statement (300 or so sets of values) that I would like to commit to the MySQL database in an all-or-nothing fashion. insert into table VALUES (1, 2, 3), (4, 5, 6), (7, 8, 9); In some cases, a set of values in the command will not meet the criteria of the table (duplicate key, for example). When that happens I do not want any of the previous sets added to the database. I've implemented this with the following code, however, my rollback command doesn't appear to be making a difference. I've used this documentation: http://dev.mysql.com/doc/refman/5.0/es/connector-net-examples-mysqltransaction.html Dim transaction As MySqlTransaction = sqlConnection.BeginTransaction() sqlCommand = New MySqlCommand(insertStr, sqlConnection, transaction) Try sqlCommand.ExecuteNonQuery() Catch ex As Exception writeToLog("EXCEPTION: " & ex.Message & vbNewLine) writeToLog("Could not execute " & sqlCmd & vbNewLine) Try transaction.Rollback() writeToLog("All statements were rolled back." & vbNewLine) Return False Catch rollbackEx As Exception writeToLog("EXCEPTION: " & rollbackEx.Message & vbNewLine) writeToLog("All statements were not rolled back." & vbNewLine) Return False End Try End Try transaction.commit() I get the DUPLICATE KEY exception thrown, no Rollback Exception thrown, and every set of values up to duplicate key committed to the database. What am I doing wrong? Thanks- Jonathan
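
    This does not diagnose the VB.NET/MySQL Connector issue itself, but the all-or-nothing shape being aimed for can be sketched with Python's sqlite3 module: every row in one transaction, and a rollback on any failure leaves the table untouched.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v INTEGER)")
rows = [(1, 2), (4, 5), (1, 8)]   # the third row violates the primary key

try:
    with conn:                    # commits on success, rolls back on exception
        conn.executemany("INSERT INTO t (id, v) VALUES (?, ?)", rows)
except sqlite3.IntegrityError as exc:
    print("rolled back:", exc)

print(conn.execute("SELECT COUNT(*) FROM t").fetchone())  # (0,) -- nothing was kept
```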

    Read the article

  • SQL UNION ALL problem after using UNION ALL more than 10 times

    - by VBGKM
    I'm getting a formatting problem if I use more than 10 UNION ALL statements in my VBA Code. If I use 10 or less everything works great. What I'm trying to do is combine 12 worksheets (Excel 2007). I have a numerical column called SC that turns into string and date if I have more than 10 UNION ALL. If I try to use ROUND with more than 10 UNION ALL my last selection will change all the records by one unit. I'm using Microsoft.ACE.OLEDB.12.0 as my provider and my connection string has worked for several things in my code so far. Is there any limit for UNION ALL statements when using OLEDB? Here is my code. Dim StrOr As String Dim i As Variant Dim Cnt As ADODB.Connection Dim Rs As ADODB.Recordset For i = 1 To 12 StrOr = StrOr & " " & "SELECT SC FROM [" & MonthName(i, True) & "$" & "] UNION ALL" Next StrOr = Left(StrOr, Len(StrOr) - 9) & ";" Call GetADOCnt Call ADORs

    Read the article

  • Why can't I access elements inside an XML file with XPath in XML::LibXML?

    - by John
    I have an XML file, part of which looks like this: <wave waveID="1"> <well wellID="1" wellName="A1"> <oneDataSet> <rawData>0.1123975676</rawData> </oneDataSet> </well> ... more wellID's and rawData continues here... I am trying to parse the file with Perl's libXML and output the wellName and the rawData using the following: use XML::LibXML; my $parser = XML::LibXML->new(); my $doc = $parser->parse_file('/Users/johncumbers/Temp/1_12-18-09-111823.orig.xml'); my $xc = XML::LibXML::XPathContext->new( $doc->documentElement() ); $xc->registerNs('ns', 'http://moleculardevices.com/microplateML'); my @n = $xc->findnodes('//ns:wave[@waveID="1"]'); #xc is xpathContent # should find a tree from the node representing everything beneath the waveID 1 foreach $nod (@n) { my @c = $nod->findnodes('//rawData'); #element inside the tree. print @c; } It is not printing out anything right now and I think I have a problem with my Xpath statements. Please can you help me fix it, or can you show me how to trouble shoot the xpath statements? Thanks.
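
    The post uses Perl's XML::LibXML, but the same libxml2 namespace rules can be sketched with Python's lxml, which makes the two usual pitfalls visible: every step of the path needs the registered prefix (the document's default namespace applies to rawData too), and a leading // searches from the document root rather than relative to the current node.

```python
from lxml import etree

xml = b"""<doc xmlns="http://moleculardevices.com/microplateML">
  <wave waveID="1"><well wellID="1" wellName="A1">
    <oneDataSet><rawData>0.1123975676</rawData></oneDataSet>
  </well></wave>
</doc>"""

ns = {"ns": "http://moleculardevices.com/microplateML"}
doc = etree.fromstring(xml)

for wave in doc.xpath('//ns:wave[@waveID="1"]', namespaces=ns):
    print(wave.xpath('//rawData'))                             # [] -- no prefix, no match
    print(wave.xpath('.//ns:rawData/text()', namespaces=ns))   # ['0.1123975676']
```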

    Read the article

  • What do these .NET auto-generated table adapter commands do? e.g. UPDATE/INSERT followed by a SELECT

    - by RickL
    I'm working with a legacy application which I'm trying to change so that it can work with SQL CE, whilst it was originally written against SQL Server. The problem I am getting now is that when I try to do dataAdapter.Update, SQL CE complains that it is not expecting the SELECT keyword in the command text. I believe this is because SQL CE does not support batch SELECT statements. The auto-generated table adapter command looks like this... this._adapter.InsertCommand.CommandText = @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2); SELECT Field1, Field2 FROM Table WHERE (Field1 = @Value1)"; What is it doing? It looks like it is inserting new records from the datatable into the database, and then reading that record back from the database into the datatable? What's the point of that? Can I just go through the code and remove all these SELECT statements? Or is there an easier way to solve my problem of wanting to use these data adapters with SQL CE? I cannot regenerate these table adapters, as the people who knew how to have long since left.

    Read the article

  • Index Tuning for SSIS tasks

    - by Raj More
    I am loading tables in my warehouse using SSIS. Since my SSIS is slow, it seemed like a great idea to build indexes on the tables. There are no primary keys (and therefore, foreign keys), indexes (clustered or otherwise), constraints, on this warehouse. In other words, it is 100% efficiency free. We are going to put indexes based on usage - by analyzing new queries and current query performance. So, instead of doing it our old fashioned sweat and grunt way of actually reading the SQL statements and execution plans, I thought I'd put the shiny new Database Engine Tuning Advisor to use. I turned SQL logging off in my SSIS package and ran a "Tuning" trace, saved it to a table and analyzed the output in the Tuning Advisor. Most of the lookups are done as: exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',1 exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',2 exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',3 exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',4 and when analyzed, these statements have the reason "Event does not reference any tables". Huh? Does it not see the FROM dbo.Company??!! What is going on here? So, I have multiple questions: How do I get it to capture the actual statement executing in my trace, not what was submitted in a batch? Are there any best practices to follow for tuning performance related to SSIS packages running against SQL Server 2008?

    Read the article

  • Automated Oracle Schema Migration Tool

    - by Dave Jarvis
    What are some tools (commercial or OSS) that provide a GUI-based mechanism for creating schema upgrade scripts? To be clear, here are the tool responsibilities: Obtain connection to recent schema version (called "source"). Obtain connection to previous schema version (called "target"). Compare all schema objects between source and target. Create a script to make the target schema equivalent to the source schema ("upgrade script"). Create a rollback script to revert the source schema, used if the upgrade script fails (at any point). Create individual files for schema objects. The software must: Use ALTER TABLE instead of DROP and CREATE for renamed columns. Work with Oracle 10g or greater. Create scripts that can be batch executed (via command-line). Trivial installation process. (Bonus) Create scripts that can be executed with SQL*Plus. Here are some examples (from StackOverflow, ServerFault, and Google searches): Change Manager Oracle SQL Developer Software that does not meet the criteria, or cannot be evaluated, includes: TOAD PL/SQL Developer - Invalid SQL*Plus statements. Does not produce ALTER statements. SQL Fairy - No installer. Complex installation process. Poorly documented. DBDiff - Crippled data set evaluation, poor customer support. OrbitDB - Crippled data set evaluation. SchemaCrawler - No easily identifiable download version for Oracle databases. SQL Compare - SQL Server, not Oracle. LiquiBase - Requires changing the development process. No installer. Manually edit config files. Does not recognize its own baseUrl parameter. The only acceptable crippling of the evaluation version is by time. Crippling by restricting the number of tables and views hides possible bugs that are only visible in the software during the attempt to migrate hundreds of tables and views.

    Read the article

  • Use an Array to Cycle Through a MySQL Where Statement

    - by Ryan
    I'm trying to create a loop that will output multiple where statements for a MySQL query. Ultimately, my goal is to end up with these four separate Where statements: `fruit` = '1' AND `vegetables` = '1' `fruit` = '1' AND `vegetables` = '2' `fruit` = '2' AND `vegetables` = '1' `fruit` = '2' AND `vegetables` = '2' My theoretical code is pasted below: <?php $columnnames = array('fruit','vegetables'); $column1 = array('1','2'); $column2 = array('1','2'); $where = ''; $column1inc =0; $column2inc =0; while( $column1inc <= count($column1) ) { if( !empty( $where ) ) $where .= ' AND '; $where = "`".$columnnames[0]."` = "; $where .= "'".$column1[$column1inc]."'"; while( $column2inc <= count($column2) ) { if( !empty( $where ) ) $where .= ' AND '; $where .= "`".$columnnames[1]."` = "; $where .= "'".$column2[$column2inc]."'"; echo $where."\n"; $column2inc++; } $column1inc++; } ?> When I run this code, I get the following output: `fruit` = '1' AND `vegetables` = '1' `fruit` = '1' AND `vegetables` = '1' AND `vegetables` = '2' `fruit` = '1' AND `vegetables` = '1' AND `vegetables` = '2' AND `vegetables` = '' Does anyone see what I am doing incorrectly? Thanks.

    Read the article

  • Please help optimizing a long running query (left outer join, with 2 subqueries)

    - by 46and2
    Hi all. The query I need help with is: SELECT d.bn, d.4700, d.4500, ... , p.`Activity Description` FROM ( SELECT temp.bn, temp.4700, temp.4500, .... FROM `tdata` temp GROUP BY temp.bn HAVING (COUNT(temp.bn) = 1) ) d LEFT OUTER JOIN ( SELECT temp2.bn, max(temp2.FPE) AS max_fpe, temp2.`Activity Description` FROM `pdata` temp2 GROUP BY temp2.bn ) p ON p.bn = d.bn; The ... represents other fields that aren't really important to solving this problem. The issue is on the the second subquery - it is not using the index I have created and I am not sure why, it seems to be because of the way TEXT fields are handled. The first subquery uses the index I have created and runs quite snappy, however an explain on the second shows a 'Using temporary; Using filesort'. Please see the indexes I have created in the below table create statements. Can anyone help me optimize this? By way of quick explanation the first subquery is meant to only select records that have unique bn's, the second, while it looks a bit wacky (with the max function there which is not being used in the result set) is making sure that only one record from the right part of the join is included in the result set. My table create statements are CREATE TABLE `tdata` ( `BN` varchar(15) DEFAULT NULL, `4000` varchar(3) DEFAULT NULL, `5800` varchar(3) DEFAULT NULL, .... KEY `BN` (`BN`), KEY `idx_t3010`(`BN`,`4700`,`4500`,`4510`,`4520`,`4530`,`4570`,`4950`,`5000`,`5010`,`5020`,`5050`,`5060`,`5070`,`5100`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 CREATE TABLE `pdata` ( `BN` varchar(15) DEFAULT NULL, `FPE` datetime DEFAULT NULL, `Activity Description` text, .... KEY `BN` (`BN`), KEY `idx_programs_2009` (`BN`,`FPE`,`Activity Description`(100)) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 Thanks!

    Read the article
