Search Results

Search found 14016 results on 561 pages for 'mysql 5 5'.

Page 418/561 | < Previous Page | 414 415 416 417 418 419 420 421 422 423 424 425  | Next Page >

  • October 2012 Critical Patch Update and Critical Patch Update for Java SE Released

    - by Eric P. Maurice
    Hi, this is Eric Maurice. Oracle has just released the October 2012 Critical Patch Update and the October 2012 Critical Patch Update for Java SE. As a reminder, the release of security patches for Java SE continues to be on a different schedule than for other Oracle products due to commitments made to customers prior to the Oracle acquisition of Sun Microsystems. We do, however, expect to ultimately bring Java SE in line with the regular Critical Patch Update schedule, thus increasing the frequency of scheduled security releases for Java SE to 4 times a year (as opposed to the current 3 yearly releases). The schedules for the "normal" Critical Patch Update and the Critical Patch Update for Java SE are posted online on the Critical Patch Updates and Security Alerts page.

    The October 2012 Critical Patch Update provides a total of 109 new security fixes across a number of product families including: Oracle Database Server, Oracle Fusion Middleware, Oracle E-Business Suite, Supply Chain Products Suite, Oracle PeopleSoft Enterprise, Oracle Customer Relationship Management (CRM), Oracle Industry Applications, Oracle FLEXCUBE, Oracle Sun products suite, Oracle Linux and Virtualization, and Oracle MySQL.

    Out of these 109 new vulnerabilities, 5 affect Oracle Database Server. The most severe of these Database vulnerabilities has received a CVSS Base Score of 10.0 on Windows platforms and 7.5 on Linux and Unix platforms. This vulnerability (CVE-2012-3137) is related to the "Cryptographic flaws in Oracle Database authentication protocol" disclosed at the Ekoparty Conference. Because of timing considerations (proximity to the release date of the October 2012 Critical Patch Update) and the need to extensively test the fixes for this vulnerability to ensure compatibility across the product stack, the fixes for this vulnerability were not released through a Security Alert; instead, mitigation instructions were provided prior to the release of the fixes in this Critical Patch Update in My Oracle Support Note 1492721.1. Because of the severity of these vulnerabilities, Oracle recommends that this Critical Patch Update be installed as soon as possible.

    Another 26 vulnerabilities fixed in this Critical Patch Update affect Oracle Fusion Middleware. The most severe of these Fusion Middleware vulnerabilities has received a CVSS Base Score of 10.0; it affects Oracle JRockit and is related to Java vulnerabilities fixed in the Critical Patch Update for Java SE. The Oracle Sun products suite gets 18 new security fixes with this Critical Patch Update. Note also that Oracle MySQL has received 14 new security fixes; the most severe of these MySQL vulnerabilities has received a CVSS Base Score of 9.0.

    Today's Critical Patch Update for Java SE provides 30 new security fixes. The most severe CVSS Base Score for these Java SE vulnerabilities is 10.0, and this score affects 10 vulnerabilities. As usual, Oracle reports the most severe CVSS Base Score, and these CVSS 10.0s assume that the user running a Java Applet or Java Web Start application has administrator privileges (as is typical on Windows XP). However, when the user does not run with administrator privileges (as is typical on Solaris and Linux), the corresponding CVSS impact scores for Confidentiality, Integrity, and Availability are "Partial" instead of "Complete", typically lowering the CVSS Base Score to 7.5, denoting that the compromise does not extend to the underlying operating system.

    Also, as is typical in the Critical Patch Update for Java SE, most of the vulnerabilities affect Java and JavaFX client deployments only. Only 2 of the Java SE vulnerabilities fixed in this Critical Patch Update affect client and server deployments of Java SE, and only one affects server deployments of JSSE. This reflects the fact that Java running on servers operates in a more secure and controlled environment. As discussed during a number of sessions at JavaOne, Oracle is considering security enhancements for Java in desktop and browser environments. Finally, note that the Critical Patch Update for Java SE is cumulative; in other words, it includes all previously released security fixes, including the fix provided through Security Alert CVE-2012-4681, which was released on August 30, 2012.

    For More Information:
    The October 2012 Critical Patch Update advisory is located at http://www.oracle.com/technetwork/topics/security/cpuoct2012-1515893.html
    The October 2012 Critical Patch Update for Java SE advisory is located at http://www.oracle.com/technetwork/topics/security/javacpuoct2012-1515924.html
    An online video about the importance of keeping up with Java releases and the use of the Java auto update is located at http://medianetwork.oracle.com/video/player/1218969104001
    More information about Oracle Software Security Assurance is located at http://www.oracle.com/us/support/assurance/index.html

    Read the article

  • Certify October Updates

    - by Sadia2
    We have added some release and platform certifications to MOS Certify.

    Applications: Oracle Demantra 12.2.2
    Collaboration Technologies: Oracle On Track Communication 1.0.0.0.0
    Database: Oracle Database 11.2.0.4.0, Oracle Database Client 11.2.0.4.0, 11.2.0.3.0, Oracle Clusterware 12.1.0.1.0, 11.2.0.4.0, Oracle Real Application Clusters 12.1.0.1.0, 11.2.0.4.0, Oracle TimesTen In-Memory Database 11.2.2.5.0, Oracle Audit Vault and Database Firewall 12.1.1.0.0, Oracle Database Client 10.2.0.5, Oracle Secure Enterprise Search 11.2.2.2.0
    E-Business Suite: Oracle E-Business Suite 12.2.2, 12.1.3, 12.1.2, 12.1.1, 12.0.4, 11.5.10.2, 11.5.9.2
    Edge Applications: Oracle Transportation Management 6.3.2
    Enterprise Manager: Enterprise Manager Base Platform – OMS 12.1.0.3.0
    FSGBU Insurance Group: Oracle Health Insurance Back Office 10.13.2.0.0
    Fusion Middleware: Oracle Application Development Framework 11.1.1.6.0, Oracle Business Intelligence Enterprise Edition 11.1.1.7.0, Oracle BI Answers 11.1.1.7.0, Oracle BI Composer 11.1.1.7.0, Oracle BI Presentation Services 11.1.1.7.0, Oracle BI Delivers 11.1.1.7.0, Oracle BI Interactive Dashboards 11.1.1.7.0, Oracle BI Scorecard and Strategy Management 11.1.1.7.0, Oracle BI Catalog Manager 11.1.1.7.0, Oracle BI Search 11.1.1.7.0, Oracle BIP Enterprise 11.1.1.7.0, Oracle BIP Scheduler 11.1.1.7.0, Oracle Real-Time Decision Center 11.1.1.7.0, Oracle Segmentation Server 11.1.1.7.0, Oracle JRE 1.7.0_45, 1.7.0_40, 1.7.0_25, 1.7.0_21, 1.7.0_17, 1.7.0_15, 1.7.0_13, 1.7.0_11, 1.7.0_10, 1.6.0_65, 1.6.0_26, Oracle JDK 1.7.0_45, 1.7.0_25, 1.7.0_17, 1.7.0_15, 1.7.0_13, 1.7.0_11, 1.6.0_65, 1.6.0_41, 1.6.0_26, Oracle Discoverer 11.1.1.7.0, 11.1.1.6.0, Discoverer Administrator 11.1.1.7.0, 11.1.1.6.0, Discoverer Desktop 11.1.1.7.0, 11.1.1.6.0, Oracle GoldenGate 12.1.2.0.0, Oracle GoldenGate Director 12.1.2.0.0, Java 1.7.0_10, Oracle Fusion Middleware 12.1.2.0.0, Oracle Data Integrator Agent 12.1.2.0.0, Oracle Data Integrator Studio 12.1.2.0.0, Oracle Data Integrator Console 12.1.2.0.0
    JD Edwards EnterpriseOne: JD Edwards EnterpriseOne Enterprise Server 9.1.3.0, JD Edwards EnterpriseOne One View Reporting 9.1.3.0, JD Edwards EnterpriseOne Mobile Applications 9.0.2.0, 9.0.0.0, 9.1.2.0, JD Edwards EnterpriseOne for iPad 1.0.0.0
    Linux & Server Virtualization (x86): Oracle VM Server for x86 3.2.6.0.0, 3.2.4.0.0, 3.2.3.0.0, 3.2.2.0.0, 3.2.1.0.0
    MySQL: MySQL Database Server 5.6, 5.5, MySQL Cluster 7.3, 7.2, 7.1
    Oracle Fusion Applications: Oracle Fusion Applications 11.1.7.0.0, 11.1.6.0.0, 11.1.5.0.0, 11.1.4.0.0
    PeopleSoft: PeopleSoft PeopleTools 8.53, 8.52, 8.51, 8.5
    Primavera GBU: Primavera Project Portfolio Mgmt 6.2.1, Primavera P6 Enterprise Project Portfolio Management 8.3.0.0.0
    Siebel Enterprise: Siebel Application Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Database Server 8.2.2.4.0, 8.1.1.11.0, Siebel Web Server Extension 8.1.1.10.0

    Read the article

  • Oracle EMEA News Digest - May 2014

    - by Steve Walker
    Systems

    Oracle introduced a technology preview of an OpenStack® distribution that allows Oracle Linux and Oracle VM users to work with the open source cloud software. This provides customers with additional choices and interoperability while taking advantage of the efficiency, performance, scalability, and security of Oracle Linux and Oracle VM. The distribution is delivered as part of the Oracle Linux and Oracle VM Premier Support offerings, at no additional cost. Oracle plans to work further with the OpenStack community to develop and enhance its enterprise-class capabilities to meet customer demands.

    Also in the Open Source arena, Oracle announced the general availability of MySQL Fabric. MySQL Fabric provides an integrated system that makes it simpler to manage groups of MySQL databases. It delivers both high availability - via failure detection and failover - and scalability through automated data sharding.

    Oracle Database, Middleware and Technology

    The company made two announcements for Oracle Tuxedo, the #1 application server for C, C++, COBOL and Java deployments in private cloud or traditional data center environments. With enhanced management and monitoring features and tighter integration with Oracle technologies, the latest release of Oracle Tuxedo 12c enables organizations to dramatically increase application throughput, while reducing total cost of ownership and time to market for new application development and deployment. Oracle also introduced the latest release of its mainframe application rehosting platform, Oracle Tuxedo ART 12c, to help organizations speed up migration projects and accelerate the adoption of the new environment by current IT staff. It enables organizations to accelerate the rehosting of IBM mainframe applications and greatly enhance management and supportability of the rehosted applications while reducing costs and risk.

    Applications

    According to new Oracle studies, B2B and B2C commerce professionals find integrated, omni-channel customer experiences increasingly valuable to their organizations, and are continuing to invest in technologies and digital content strategies to facilitate them. The studies—one for B2B and one for B2C—surveyed e-commerce professionals in business and technology departments from around the world. Although the priorities, success metrics, and technology investments differed between the two groups, customer acquisition and retention emerged as common themes across B2B and B2C. Growing market share and enhancing customer experience are cited as top investment areas for all e-commerce professionals.

    In product news, Oracle announced the latest release of Oracle Business Intelligence (BI) Applications (version 11.1.1.8.1, in case anyone asks). It includes prebuilt connectors between Oracle Procurement and Spend Analytics and Oracle’s JD Edwards. Additionally, a new Oracle Human Resources Analytics module for developing and maintaining a skilled workforce has been introduced. In use at more than 4,000 companies worldwide, Oracle BI Applications support leading enterprise applications, including Oracle E-Business Suite, Oracle’s PeopleSoft, Oracle's Siebel CRM, and Oracle’s JD Edwards EnterpriseOne, offering high-performing analytics at a lower cost.

    Industries

    For the Communications Industry, Oracle has launched a new release of the Oracle Communications Core Session Manager. This gives CSPs a new way to design, deploy and manage complex networking services and embrace next-generation technology. It provides them with an immediate entry point for network function virtualization (NFV) efforts, allowing them to realize immediate benefits associated with network virtualization – including increased service agility and improved network resource sharing.

    And for the Utilities Industry, Oracle is releasing solutions with new business features and enhanced technical architecture that help position utilities for success now and into the future. Oracle has provided new releases for its customer information system, meter data management system, customer self-service solution and mobile workforce management solution.

    Read the article

  • Thrift, .NET, Cassandra - Is this the right combination?

    - by Vadi
    I've been evaluating a technology stack for developing a social network based application. Below is the stack I think could suit this type of application well:

    GUI -- ASP.NET MVC, Flash (Flex).

    Business Services -- Thrift based services. One of the advantages of using Thrift is to solve the scaling problems that will come in future when the user base increases rapidly. All the business logic can be exposed as services using REST, JSON etc. This also allows us to go with C++ or Erlang based services when the situation demands.

    Database -- MySQL, Cassandra. MySQL can be used for storing the data which needs to be persisted. Cassandra will be used for storing global identifiers to the persisted data. Since Cassandra is also very good at scaling by introducing more nodes, this will leverage the Thrift based services as well. And there is also native support between Cassandra and Thrift.

    Cache Server -- Memcached. Any requests from the Business Services will only talk to Memcached if non-dirty data is required; otherwise there will be some background jobs that will invalidate the cache from the database.

    The questions are: Is Thrift, which is open-sourced, production-ready? Is it the right stack for the services layer to choose when the application (GUI) primarily gets developed in ASP.NET and the DB is MySQL? Are there any other caveats that anyone here has experienced? One of the main objectives behind this stack is to easily scale up with more nodes; it also lets us use Linux boxes, which will reduce our cost significantly. Thoughts please.
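
    A minimal sketch of the read-through pattern described for the cache layer, using PHP's Memcached extension (the function, table, key names and TTL here are made-up placeholders, not part of the stack above):

    <?php
    // Read-through cache sketch: check Memcached first, fall back to MySQL,
    // then repopulate the cache. Background jobs can delete() keys to invalidate.
    $cache = new Memcached();
    $cache->addServer('127.0.0.1', 11211);
    $db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    function getUserProfile(Memcached $cache, PDO $db, int $userId): ?array {
        $key = "user:$userId";
        $profile = $cache->get($key);
        if ($profile !== false) {
            return $profile;                      // cache hit: no DB round trip
        }
        $stmt = $db->prepare('SELECT * FROM users WHERE id = ?');
        $stmt->execute([$userId]);
        $profile = $stmt->fetch(PDO::FETCH_ASSOC) ?: null;
        $cache->set($key, $profile, 300);         // TTL as a safety net
        return $profile;
    }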

    Read the article

  • JPA merge fails due to duplicate key

    - by wobblycogs
    I have a simple entity, Code, that I need to persist to a MySQL database.

    public class Code implements Serializable {
        @Id
        private String key;
        private String description;
        ...getters and setters...
    }

    The user supplies a file full of key, description pairs which I read, convert to Code objects and then insert in a single transaction using em.merge(code). The file will generally have duplicate entries, which I deal with by first adding them to a map keyed on the key field as I read them in. A problem arises, though, when keys differ only by case (for example: XYZ and XyZ). My map will, of course, contain both entries, but during the merge process MySQL sees the two keys as being the same and the call to merge fails with a MySQLIntegrityConstraintViolationException. I could easily fix this by uppercasing the keys as I read them in, but I'd like to understand exactly what is going wrong. The conclusion I have come to is that JPA considers XYZ and XyZ to be different keys but MySQL considers them to be the same. As such, when JPA checks its list of known keys (or does whatever it does to determine whether it needs to perform an insert or update) it fails to find the previous insert and issues another, which then fails. Is this correct? Is there any way round this other than better filtering the client data? I haven't defined .equals or .hashCode on the Code class, so perhaps this is the problem.
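
    The clash is a property of the column's collation rather than of JPA: MySQL's default case-insensitive (_ci) collations compare XYZ and XyZ as equal, so the second merge becomes an insert that violates the primary key. A quick way to see this outside JPA (a sketch; the DSN and credentials are placeholders):

    <?php
    // Under a case-insensitive (_ci) collation the first query prints 1:
    // the two keys collide. Forcing a binary comparison prints 0.
    $db = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
    var_dump($db->query("SELECT 'XYZ' = 'XyZ'")->fetchColumn());
    var_dump($db->query("SELECT BINARY 'XYZ' = BINARY 'XyZ'")->fetchColumn());

    If the keys really should be case-sensitive, declaring the key column with a binary collation (for example a _bin collation on the VARCHAR) makes MySQL agree with JPA's view of the keys; otherwise normalizing the case on read, as suggested above, is the simpler fix.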

    Read the article

  • Newly installed Ruby gems not showing up in $LOAD_PATH

    - by randombits
    I'm using MacPorts in order to manage my Ruby/Rails/Gems installations. Recently, after doing a gem install wirble, wirble fails to load when I start an instance of irb. Here's the output:

    $ irb --simple-prompt
    Couldn't load Wirble: no such file to load -- wirble

    The Wirble gem doesn't show up in my $LOAD_PATH:

    >> puts $:
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionmailer-2.3.5/lib
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/activerecord-2.3.5/lib
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/activeresource-2.3.5/lib
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/activesupport-2.3.5/lib
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/mysql-2.8.1/lib
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/mysql-2.8.1/ext
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/mysql-2.8.1/bin
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/rack-1.0.1/bin
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/rack-1.0.1/lib
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/rails-2.3.5/bin
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/rails-2.3.5/lib
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/rake-0.8.7/bin
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/rake-0.8.7/lib
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/rubygems-update-1.3.7/hide_lib_for_update
    /opt/local/lib/ruby1.9/gems/1.9.1/gems/rubygems-update-1.3.7/bin
    /opt/local/lib/ruby1.9/site_ruby/1.9.1
    /opt/local/lib/ruby1.9/site_ruby/1.9.1/i386-darwin10
    /opt/local/lib/ruby1.9/site_ruby
    /opt/local/lib/ruby1.9/vendor_ruby/1.9.1
    /opt/local/lib/ruby1.9/vendor_ruby/1.9.1/i386-darwin10
    /opt/local/lib/ruby1.9/vendor_ruby
    /opt/local/lib/ruby1.9/1.9.1
    /opt/local/lib/ruby1.9/1.9.1/i386-darwin10
    .
    => nil
    >>

    The gem is definitely installed:

    $ gem list | grep -i wirble
    wirble (0.1.3)

    It is located in /opt/local/lib/ruby/gems/1.9.1/gems/wirble-0.1.3/. How do I get this and future gems I install appended to my $LOAD_PATH?

    Read the article

  • Question about MySQLdb, OS X 10.5, and authentication

    - by timpone
    I'm a noob at Python and have been having problems with MySQLdb on OS X Leopard 10.5. I have a PHP app that is doing db access just fine with PDO, but I also want to access the database with Python. When I use the same credentials with MySQLdb as PHP, I get the following error:

    File "build/bdist.macosx-10.5-i386/egg/MySQLdb/connections.py", line 188, in __init__
    _mysql_exceptions.OperationalError: (1045, "Access denied for user 'arc_db'@'localhost' (using password: YES)")

    The authentication piece works fine on my Ubuntu server (installed via apt-get), implying that it is something specific to my OS X MySQLdb install. Looking at some postings, I thought it would be my local build of MySQLdb, which seems to be problematic on OS X. But I am able to import it fine:

    Python 2.5.1 (r251:54863, Feb 6 2009, 19:02:12)
    [GCC 4.0.1 (Apple Inc. build 5465)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import MySQLdb
    >>>

    Also, wanting to establish a positive, I am able to access and return results from a database titled test_something (which presumably bypasses MySQL's authentication - not sure exactly how though). Trying to figure out a little more about what is going on, I turned on logging for MySQL and got the following (with my own comments added):

    100609 19:09:45 3 Connect Access denied for user 'arc_db'@'localhost' (using password: YES) //did not work
    100609 19:10:02 4 Connect arc_db@localhost on arc_development //did work

    I'm not really sure what the 3 or 4 means, but presumably a success or failure. So, I guess, what would be the next step? Am I making some obvious stupid Python mistake (very likely)? Is there a better way for me to prove that this should/can be working? Is there any way to determine what MySQLdb is sending exactly in its authentication message to MySQL? Thanks

    Read the article

  • Where is my Drupal View pager?

    - by anotherthink
    Hi, I have a Drupal 6 site where I've created a view that shows a list of nodes. Nothing complicated -- except that when I choose "use pager" -- "yes" (and choose the "full pager" option), the pager doesn't show up on the page. The first page of nodes shows up, but there's no way to get to other pages. Through googling, I saw that some people had an issue with the "Pager Element" item, so I changed that from 0 to 1 -- no luck. This shouldn't be very complicated, but I've been at it for a while! Help!?

    ETA: I've tracked it down to the following lines in /modules/views/theme/theme.inc:

    $pager_theme = views_theme_functions($pager_type, $view, $view->display_handler->display);
    $vars['pager'] = theme($pager_theme, $exposed_input, $view->pager['items_per_page'], $view->pager['element']);

    The first line returns an array; the second line returns nothing. I suspect now that this is a theming problem with the custom theme I'm using, which may not have been fully and correctly updated for Drupal 6 -- like, maybe I'm missing a pager template somehow? -- however, I'm quite new to Drupal and don't really understand how to further track down and fix the issue. Any advice would be much appreciated!

    ETA yet again: The pager also doesn't show up when using Garland, so it's not a theme issue after all. ALSO: I have a copy of this site set up on a development server as well, and that copy has working pagination! I've checked what I thought might be different -- files in the theme, what modules are enabled -- and it seems like pretty much everything is the same. The one thing that I know is different, however, is that the production server has a lower version of MySQL (lower than recommended for Drupal 6 -- we're waiting on the hosting company being able to change this later). Would it make sense that the old version of MySQL is unable to do pagination correctly in Drupal 6? If so, does anyone know a workaround I can do until we are able to update MySQL?

    Read the article

  • Beginner SQL question: querying gold and silver tag badges in Stack Exchange Data Explorer

    - by polygenelubricants
    I'm using the Stack Exchange Data Explorer to learn SQL, but I think the fundamentals of the question are applicable to other databases. I'm trying to query the Badges table, which according to Stexdex (that's what I'm going to call it from now on) has the following schema:

    Badges
        Id
        UserId
        Name
        Date

    This works well for badges like [Epic] and [Legendary], which have unique names, but the silver and gold tag-specific badges seem to be mixed in together by having the same exact name. Here's an example query I wrote for the [mysql] tag:

    SELECT UserId as [User Link], Date
    FROM Badges
    Where Name = 'mysql'
    Order By Date ASC

    The (slightly annotated) output, as seen on stexdex:

    User Link         Date
    ----------------  -------------------
    Bill Karwin       2009-02-20 11:00:25    // all for silver except where noted
    Quassnoi          2009-06-01 10:00:16
    Greg              2009-10-22 10:00:25
    Quassnoi          2009-10-31 10:00:24    // for gold
    Bill Karwin       2009-11-23 11:00:30    // for gold
    cletus            2010-01-01 11:00:23
    OMG Ponies        2010-01-03 11:00:48
    Pascal MARTIN     2010-02-17 11:00:29
    Mark Byers        2010-04-07 10:00:35
    Daniel Vassallo   2010-05-14 10:00:38

    This is consistent with the current list of silver and gold earners at the moment of this writing, but to speak in more timeless terms, as of the end of May 2010 only 2 users have earned the gold [mysql] tag: Quassnoi and Bill Karwin, as evidenced in the above result by their names being the only ones that appear twice. So this is the way I understand it: the first time an Id appears (in chronological order) is for the silver badge; the second time is for the gold. Now, the above result mixes the silver and gold entries together. My questions are:

    1. Is this a typical design, or are there much friendlier schemas/normalizations/whatever you call it?
    2. In the current design, how would you query the silver and gold badges separately? GROUP BY Id and picking the min/max or first/second by the Date somehow?
    3. How can you write a query that lists all the silver badges first and then all the gold badges next? Imagine also that the "real" query may be more complicated, i.e. not just listing by date. How would you write it so that it doesn't have too much repetition between the silver and gold subqueries? Is it perhaps more typical to do two totally separate queries instead?
    4. What is this idiom called? A row "partitioning" query to put them into "buckets" or something?
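
    On question 2, the GROUP BY idea does work: with at most two rows per user and tag, MIN(Date) is the silver award and MAX(Date) is the gold one where a second row exists. A sketch - the SQL string is what matters; the PHP around it just makes it runnable outside the Data Explorer, with placeholder connection details:

    <?php
    $db = new PDO('mysql:host=localhost;dbname=so', 'user', 'pass');
    $sql = "
        SELECT UserId,
               MIN(Date) AS SilverDate,                -- first award: silver
               CASE WHEN COUNT(*) = 2
                    THEN MAX(Date) END AS GoldDate     -- second award, if any: gold
        FROM Badges
        WHERE Name = 'mysql'
        GROUP BY UserId
        ORDER BY SilverDate";
    foreach ($db->query($sql) as $row) {
        print_r($row);
    }

    Because each grouped row carries both dates, this form also avoids repeating the filter in separate silver and gold subqueries.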

    Read the article

  • Find actual value of PHP variable

    - by Simon S
    Hi all. I am having a real headache with reading in a tab delimited text file and inserting it into a MySQL database. The tab delimited text file was generated (I think) from a MS SQL database, and I have written a simple script to read in the file and insert it into an existing table in my MySQL database. However, there seems to be some problem with the data in the txt file. When my PHP script parses the file and I output the INSERT statements, the values in each of the fields are longer than they should be. For example, the first field should be a simple two character alphanumeric value. If I echo out the INSERT statements, using Firebug (in Firefox), between each of the characters is a question mark in a black diamond. If I var_dump the values, I get the following:

    string(5) "A1"

    Now, this clearly shows a two character string, but var_dump tells me it is five characters long!! If I trim() the value, all I get is the first character (in this case "A"). How can I get at the other characters, even if it is only to remove them? Additionally, this appears to be forcing MySQL to insert the value as a BLOB, not as a varchar as it should.

    Simon
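
    The symptoms (a reported length of 5 for a two-character field, and a replacement character between the letters) are typical of a UTF-16/UCS-2 export from MS SQL: each ASCII character is followed by a null byte, which MySQL then refuses to treat as varchar text. A hedged sketch of the usual fix - this assumes the source encoding really is UTF-16, so check the file in a hex viewer first (the file name is a placeholder):

    <?php
    // Convert the export to UTF-8 before splitting it into fields.
    // 'UTF-16LE' is an assumption about the source file's encoding.
    $raw  = file_get_contents('export.txt');
    $utf8 = mb_convert_encoding($raw, 'UTF-8', 'UTF-16LE');
    foreach (explode("\n", $utf8) as $line) {
        $fields = explode("\t", rtrim($line, "\r\n"));
        // ...build the INSERT from $fields...
    }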

    Read the article

  • Eclipselink update existing tables

    - by javydreamercsw
    Maybe I got it wrong, but I thought that JPA was able to update an existing table (model changed adding a column), but it is not working in my case. I can see in the logs EclipseLink attempting to create it, but failing because it already exists. Instead of trying an update to add the column, it keeps going.

    <property name="javax.persistence.jdbc.url" value="jdbc:mysql://localhost:3306/jwrestling"/>
    <property name="javax.persistence.jdbc.password" value="password"/>
    <property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver"/>
    <property name="javax.persistence.jdbc.user" value="user"/>
    <property name="eclipselink.ddl-generation" value="create-tables"/>
    <property name="eclipselink.logging.logger" value="org.eclipse.persistence.logging.DefaultSessionLog"/>
    <property name="eclipselink.logging.level" value="INFO"/>

    And here's the table with the change (ONLINE column added):

    [EL Warning]: 2010-05-31 14:39:06.044--ServerSession(16053322)--Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.1.0.v20100517-r7246): org.eclipse.persistence.exceptions.DatabaseException
    Internal Exception: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'account' already exists
    Error Code: 1050
    Call: CREATE TABLE account (ID INTEGER NOT NULL, USERNAME VARCHAR(32) NOT NULL, SECURITY_KEY VARCHAR(255) NOT NULL, EMAIL VARCHAR(64) NOT NULL, STATUS VARCHAR(8) NOT NULL, TIMEDATE DATETIME NOT NULL, PASSWORD VARCHAR(255) NOT NULL, ONLINE TINYINT(1) default 0 NOT NULL, PRIMARY KEY (ID))
    Query: DataModifyQuery(sql="CREATE TABLE account (ID INTEGER NOT NULL, USERNAME VARCHAR(32) NOT NULL, SECURITY_KEY VARCHAR(255) NOT NULL, EMAIL VARCHAR(64) NOT NULL, STATUS VARCHAR(8) NOT NULL, TIMEDATE DATETIME NOT NULL, PASSWORD VARCHAR(255) NOT NULL, ONLINE TINYINT(1) default 0 NOT NULL, PRIMARY KEY (ID))")
    [EL Warning]: 2010-05-31 14:39:06.074--ServerSession(16053322)--Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.1.0.v20100517-r7246): org.eclipse.persistence.exceptions.DatabaseException

    After this it continues with the following. Am I doing something wrong, or is it a bug?

    Read the article

  • Django sphinx works only after app restart.

    - by Lhiash
    Hi, I've set up django-sphinx in my project, and it works perfectly, but only for some time. Later it always returns an empty result set. Surprisingly, restarting the django app fixes it, and search works again, but again only for a short time (or a very limited number of queries). Here's my sphinx.conf:

    source src_questions
    {
        # data source type
        type = mysql

        sql_host = xxxxxx
        sql_user = xxxxxx #replace with your db username
        sql_pass = xxxxxx #replace with your db password
        sql_db   = xxxxxx #replace with your db name

        # these two are optional
        sql_port = xxxxxx
        #sql_sock = /var/lib/mysql/mysql.sock

        # pre-query, executed before the main fetch query
        sql_query_pre = SET NAMES utf8

        # main document fetch query
        sql_query = SELECT q.id AS id, q.title AS title, q.tagnames AS tags, q.html AS text, q.level AS level \
            FROM question AS q \
            WHERE q.deleted=0

        # optional - used by command-line search utility to display document information
        sql_query_info = SELECT title, id, level FROM question WHERE id=$id

        sql_attr_uint = level
    }

    index questions
    {
        # which document source to index
        source = src_questions

        # this is path and index file name without extension
        # you may need to change this path or create this folder
        path = /home/rafal/core_index/index_questions

        # docinfo (ie. per-document attribute values) storage strategy
        docinfo = extern

        # morphology
        morphology = stem_en

        # stopwords file
        #stopwords = /var/data/sphinx/stopwords.txt

        # minimum word length
        min_word_len = 3

        # uncomment next 2 lines to allow wildcard (*) searches
        min_infix_len = 1
        enable_star = 1

        # charset encoding type
        charset_type = utf-8
    }

    # indexer settings
    indexer
    {
        # memory limit (default is 32M)
        mem_limit = 64M
    }

    # searchd settings
    searchd
    {
        # IP address on which search daemon will bind and accept
        # optional, default is to listen on all addresses,
        # ie. address = 0.0.0.0
        address = 127.0.0.1

        # port on which search daemon will listen
        port = 3312

        # searchd run info is logged here - create or change the folder
        log = ../log/sphinx.log

        # all the search queries are logged here
        query_log = ../log/query.log

        # client read timeout, seconds
        read_timeout = 5

        # maximum amount of children to fork
        max_children = 30

        # a file which will contain searchd process ID
        pid_file = searchd.pid

        # maximum amount of matches this daemon would ever retrieve
        # from each index and serve to client
        max_matches = 1000
    }

    And here's my search part from views.py:

    content = Question.search.query(keywords)
    if level:
        content = content.filter(level=level)  # level is an array of integers

    There are no errors in any logs; it just isn't returning any results. All help would be most appreciated.

    Read the article

  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database where I store the following: a BLOB (which contains a JSON object) and an ID (for this JSON object). The JSON object contains a lot of different information. Say, "city:Los Angeles" and "state:California". There are about 500k such records for now, but they are growing. And each JSON object is quite big. My goal is to do searches (real-time) in the MySQL database. Say, I want to search for all JSON objects which have "state" set to "California" and "city" set to "San Francisco". I want to utilize Hadoop for the task. My idea is that there will be a "job", which takes chunks of, say, 100 records (rows) from MySQL, verifies them according to the given search criteria, and returns those (IDs) which qualify. Pros/cons? I understand that one might think I should utilize plain SQL power for that, but the thing is that the JSON object structure is pretty "heavy". If I put it into SQL schemas, there will be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) And even then, every SQL query has to be analyzed to be sure it utilizes the indexes; otherwise, with a full scan, it literally is a pain. And with such a structure, the only way "up" is vertical scaling. But I am not sure it's the best option for me, as I see how the JSON objects will grow (the data structure), and I see that the number of them will grow too. :-) Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.
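
    For what it's worth, the per-chunk check described above is straightforward to sketch; this is the sequential version of what a Hadoop job would parallelize across mappers (the table, column names and connection details are assumptions):

    <?php
    $db = new PDO('mysql:host=localhost;dbname=store', 'user', 'pass');
    $criteria = ['state' => 'California', 'city' => 'San Francisco'];
    $matches = [];
    $lastId = 0;
    while (true) {
        // Keyset pagination: cheaper than LIMIT/OFFSET on large tables.
        $stmt = $db->prepare('SELECT id, doc FROM objects WHERE id > ? ORDER BY id LIMIT 100');
        $stmt->execute([$lastId]);
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
        if (!$rows) break;
        foreach ($rows as $row) {
            $obj = json_decode($row['doc'], true);   // the BLOB holds a JSON object
            $ok = is_array($obj);
            foreach ($criteria as $k => $v) {
                $ok = $ok && (($obj[$k] ?? null) === $v);
            }
            if ($ok) $matches[] = $row['id'];
        }
        $lastId = end($rows)['id'];
    }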

    Read the article

  • Web application architecture, and application servers?

    - by seanieb
    Hi, I'm building a web application, and I need to use an architecture that allows me to run it on two servers. The application scrapes information from other sites periodically, and on input from the end user. To do this I'm using PHP+curl to scrape the information, and PHP or Python to parse it and store the results in a MySQL DB. Then I will use Python to run some algorithms on the data; this will happen both periodically and on input from the end user. I'm going to cache some of the results in the MySQL DB, and sometimes, if it is specific to the user, skip storing the data and just serve it to the user. I'm thinking of using PHP for the website front end on a separate web server, and running the PHP spider, MySQL DB and Python on the other server.

    As you can see, I'm fairly clueless. I'm familiar with using PHP, MySQL and the basics of Python, but bringing this all together using something more complex than a cron job is new to me. How do I go about implementing this? What framework(s) should I use? Is MVC a good architecture for this? (I'm new to MVC, architectures etc.) Is CakePHP a good solution? If so, will I be able to control and monitor the Python code using it?
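
    The PHP+curl scraping step is the simplest piece of the architecture; a minimal fetch looks like the sketch below (the URL is a placeholder), and either PHP or Python can take over from $html:

    <?php
    $ch = curl_init('http://example.com/page-to-scrape');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return the body instead of printing it
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    $html = curl_exec($ch);
    if ($html === false) {
        error_log('fetch failed: ' . curl_error($ch));
    }
    curl_close($ch);
    // ...parse $html and store the results in MySQL...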

    Read the article

  • Running RSpec Files From ruby code

    - by Brian D.
    I'm trying to run RSpec tests straight from Ruby code. More specifically, I'm running some MySQL scripts, loading the Rails test environment and then I want to run my RSpec tests (which is what I'm having trouble with)... I'm trying to do this with a rake task. Here is my code so far:

    require "spec"
    require "spec/rake/spectask"

    RAILS_ENV = 'test'

    namespace :run_all_tests do
      desc "Run all of your tests"

      puts "Reseting test database..."
      system "mysql --user=root --password=dev < C:\\Brian\\Work\\Personal\\BrianSite\\database\\BrianSite_test_CreateScript.sql"

      puts "Filling database tables with test data..."
      system "mysql --user=root --password=dev < C:\\Brian\\Work\\Personal\\BrianSite\\database\\Fill_Test_Tables.sql"

      puts "Starting rails test environment..."
      task :run => :environment do
        puts "RAILS_ENV is #{RAILS_ENV}"
        # Run rspec test files here...
        require "spec/models/blog_spec.rb"
      end
    end

    I thought the require "spec/models/blog_spec.rb" would do it, but the tests aren't running. Anyone know where I'm going wrong? Thanks for any help.

    Read the article

  • Use ini/appconfig file or sql server file to store user config?

    - by h2g2java
    I know that the preference for INI or app.config XML is their human readability. Let's say the user preferences stored for my app are hierarchical and number about a thousand items, and it would be really confusing for a user to edit an INI to change things anyway. I have always been using a combination of INI with app.config. I am leaning towards using a SQL Server db file now. Every time the user changes a preference while using the app, it would be stored into the db file - that's my line of thought. I am also thinking that such a config db file could move around with the app too, just like an INI. Before I do that, any advice on:

    1. Whether there are any disadvantages to using a db file over INI or app.config.
    2. If a shop uses MySQL or Oracle, do you think your colleagues would lift up their pro-MySQL or pro-Oracle eyebrow, questioning why you would use SQL Server technology in a MySQL or Oracle shop? I mean, I am just using it like an INI file or app.config anyway, right?

    Read the article

  • Hibernate 3.5.0 causes extreme performance problems

    - by user303396
    I've recently updated from Hibernate 3.3.1.GA to Hibernate 3.5.0 and I'm having a lot of performance issues. As a test, I added around 8000 entities to my DB (which in turn causes other entities to be saved). These entities are saved in batches of 20 so that the transactions aren't too large, for performance reasons.

    When using Hibernate 3.3.1.GA, all 8000 entities get saved in about 3 minutes. When using Hibernate 3.5.0, it starts out slower than with 3.3.1 - but it gets slower and slower. At around 4,000 entities, it sometimes takes 5 minutes just to save a batch of 20. If I then go to a mysql console and manually type in an insert statement from the MySQL general query log, half of them run perfectly in 0.00 seconds, and half of them take a long time (maybe 40 seconds) or time out with "ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction" from MySQL.

    Has something changed in Hibernate's transaction management in version 3.5.0 that I should be aware of? The ONLY thing I changed to experience these unusable performance issues is replace the following Hibernate 3.3.1.GA jar files: com.springsource.org.hibernate-3.3.1.GA.jar, com.springsource.org.hibernate.annotations-3.4.0.GA.jar, com.springsource.org.hibernate.annotations.common-3.3.0.ga.jar, com.springsource.javassist-3.3.0.ga.jar with the new Hibernate 3.5.0 release hibernate3.jar and javassist-3.9.0.GA.jar. Thanks.

    Read the article

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), ajax and websockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in javascript (node.js).

    I don't want to start a PHP process per user, as there is no good way to send the chat messages between these PHP children. So I thought about writing my own socket server in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a purely web developer (PHP), I'm not much familiar with sockets, as I usually let the web server care about connections. The chat messages won't be saved on disk nor in MySQL, but in RAM as an array or object, for best speed.

    As far as I know, there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a great number of socket connections and process them successively in a loop (read and write: on each incoming message, write to all socket connections). The problem is that there will most likely be a lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users.

    My question is: can node.js handle a socket server with better performance? Node.js is event-based, but I'm not sure if it can process multiple events at the same time (wouldn't that need multi-threading?) or if there is just an event queue. With an event queue it would be just like PHP: process user after user. I could also spawn a PHP process per chat room (much fewer users), but afaik there are single-threaded IRC servers which are also capable of handling thousands of users (written in C++ or whatever), so maybe it's also possible in PHP. I would prefer PHP over node.js, because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.
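
    On the single-process point: PHP can in fact multiplex many connections in one process with stream_select(), rather than handling them strictly one after another. A minimal broadcast-chat sketch (raw TCP only, no HTTP/Comet framing, and untested at the ~1000-connection scale in question):

    <?php
    // One PHP process, many clients: stream_select() reports which sockets
    // are readable, so no process-per-user is needed.
    $server  = stream_socket_server('tcp://0.0.0.0:8000', $errno, $errstr);
    $clients = [];
    while (true) {
        $read   = $clients;
        $read[] = $server;
        $write  = $except = null;
        if (stream_select($read, $write, $except, null) < 1) {
            continue;
        }
        foreach ($read as $sock) {
            if ($sock === $server) {                       // new connection
                $clients[] = stream_socket_accept($server);
            } elseif (($msg = fgets($sock)) !== false) {   // incoming chat line
                foreach ($clients as $c) {
                    fwrite($c, $msg);                      // broadcast to everyone
                }
            } else {                                       // client went away
                unset($clients[array_search($sock, $clients, true)]);
                fclose($sock);
            }
        }
    }

    node.js does essentially the same thing with its event loop; neither processes two events truly simultaneously on one thread, so the comparison mostly comes down to how cheaply each runtime handles the loop and the I/O.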

    Read the article

  • Auto increment with a Unit Of Work

    - by Derick
    Context: I'm building a persistence layer to abstract the different types of databases that I'll be needing. On the relational side I have MySQL, Oracle and PostgreSQL. Let's take the following simplified MySQL tables:

    CREATE TABLE Contact (
        ID varchar(15),
        NAME varchar(30)
    );
    CREATE TABLE Address (
        ID varchar(15),
        CONTACT_ID varchar(15),
        NAME varchar(50)
    );

    I use code to generate system-specific alphanumeric unique IDs, fitting 15 chars in this case. Thus, if I insert a Contact record with its Addresses, I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support. I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above; all pretty standard stuff so far.

    The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In this case I do not have the DB-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session, since all is kept in the Identity Map in memory.

    Question: What is a good approach to address this (avoiding unnecessary DB round trips)?

    Some ideas: Retrieve the last ID. I can do a call to the database to retrieve the last ID, like:

    SELECT Auto_increment FROM information_schema.tables WHERE table_name='Contact';

    But this is MySQL specific, and probably something similar can be done for the other databases. If I do this, then I would need to do the first insert, get the ID and then update the children (Address.CONTACT_IDs) - all in the current transaction context.
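
    On the retrieve-the-last-ID idea: reading information_schema is racy under concurrent inserts, whereas MySQL's LAST_INSERT_ID() is scoped to the current connection, and PDO exposes it as lastInsertId(). A sketch of the insert-then-fix-up-children flow, assuming the IDs become AUTO_INCREMENT integers in this scenario (connection details are placeholders):

    <?php
    $db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $db->beginTransaction();
    $db->prepare('INSERT INTO Contact (NAME) VALUES (?)')->execute(['Alice']);
    $contactId = $db->lastInsertId();   // per-connection, so other sessions can't race it
    $db->prepare('INSERT INTO Address (CONTACT_ID, NAME) VALUES (?, ?)')
       ->execute([$contactId, 'Home']);
    $db->commit();

    The Unit of Work can do the same thing generically: flush the parents first, capture the generated keys, patch the children's foreign keys in memory, then flush the children - all inside the one transaction, with no round trips beyond the inserts themselves. Oracle sequences even allow fetching the key before the insert.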

    Read the article

  • How to make a connection pool in a Spring application using BasicDataSource

    - by vipin
    Hi friends, I have created an application in which I need to configure a connection pool. I am configuring the connection pooling in the Spring config file using BasicDataSource, but there is some problem creating the connection pool. Please tell me how to create connection pooling in a Spring application using BasicDataSource. I tried this in the Spring config (angle brackets restored; the "--" markers in the post appear to be commented-out alternatives):

    <bean id="datasource" class="org.apache.commons.dbcp.BasicDataSource">
        <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
        <property name="url" value="jdbc:mysql://192.168.1.12:3306/revup?noAccessToProcedureBodies=true"/>
        <!-- jdbc:mysql://localhost:3306/revup?noAccessToProcedureBodies=true -->
        <property name="username" value="revuser"/> <!-- root -->
        <property name="password" value="kjacob"/> <!-- gme997FK -->
        <property name="poolPreparedStatements">
            <value>true</value>
        </property>
        <property name="initialSize">
            <value>2</value>
        </property>
        <property name="maxActive">
            <value>15</value>
        </property>
    </bean>

    Is there any modification needed to this code? Please tell me. Thanks in advance.

    Read the article

  • Artisan unable to access environment variables from $_ENV

    - by hansn
    Any artisan command I enter into the command line throws this error:

    $ php artisan
    <? return array( 'DB_HOSTNAME' => 'localhost', 'DB_USERNAME' => 'root', 'DB_NAME' => 'pc_booking', 'DB_PASSWORD' => 'secret', );
    PHP Warning:  Invalid argument supplied for foreach() in /home/martin/code/www/pc_backend/vendor/laravel/framework/src/Illuminate/Config/EnvironmentVariables.php on line 35
    {"error":{"type":"ErrorException","message":"Undefined index: DB_HOSTNAME","file":"\/home\/martin\/code\/www\/pc_backend\/app\/config\/database.php","line":57}}

    This is only on my local development system, where I recently installed Apache and PHP. On my production system, on a shared host, artisan commands work just fine. The prod system has its own .env.php, but other than that the code should be identical. Relevant files:

    .env.local.php:

    <? return array(
        'DB_HOSTNAME' => 'localhost',
        'DB_USERNAME' => 'root',
        'DB_NAME' => 'pc_booking',
        'DB_PASSWORD' => 'secret',
    );

    app/config/database.php:

    <?php
    return array(
        'fetch' => PDO::FETCH_CLASS,
        'default' => 'mysql',
        'connections' => array(
            'mysql' => array(
                'driver' => 'mysql',
                'host' => $_ENV['DB_HOSTNAME'],
                'database' => $_ENV['DB_NAME'],
                'username' => $_ENV['DB_USERNAME'],
                'password' => $_ENV['DB_PASSWORD'],
                'charset' => 'utf8',
                'collation' => 'utf8_unicode_ci',
                'prefix' => '',
            ),
        ),
        'migrations' => 'migrations',
    );

    The $_ENV array is populated as expected on the website - the problem appears to be with artisan only.
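
    One detail stands out in the output above: artisan echoes the contents of .env.local.php verbatim before the warning, which suggests the file is never parsed as PHP at all. It opens with the short tag <?, which only works when short_open_tag is enabled; with it off (a common default on fresh installs), the include yields no array, and Laravel's foreach over the environment file triggers exactly the "Invalid argument supplied for foreach()" warning shown. A sketch of the file with the full tag (values unchanged):

    <?php
    return array(
        'DB_HOSTNAME' => 'localhost',
        'DB_USERNAME' => 'root',
        'DB_NAME'     => 'pc_booking',
        'DB_PASSWORD' => 'secret',
    );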

    Read the article

  • Is this a good starting point for iptables in Linux?

    - by sbrattla
    Hi, I'm new to iptables, and I've been trying to put together a firewall whose purpose is to protect a web server. The rules below are the ones I've put together so far, and I would like to hear whether they make sense - and whether I've left out anything essential. In addition to port 80, I also need to have port 3306 (MySQL) and 22 (SSH) open for external connections. Any feedback is highly appreciated!

    #!/bin/sh

    # Clear all existing rules.
    iptables -F

    # ACCEPT connections for loopback network connection, 127.0.0.1.
    iptables -A INPUT -i lo -j ACCEPT

    # ALLOW established traffic
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    # DROP packets that are NEW but do not have the SYN bit set.
    iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

    # DROP fragmented packets, as there is no way to tell the source and destination ports of such a packet.
    iptables -A INPUT -f -j DROP

    # DROP packets with all tcp flags set (XMAS packets).
    iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

    # DROP packets with no tcp flags set (NULL packets).
    iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP

    # ALLOW ssh traffic (and prevent against DoS attacks)
    iptables -A INPUT -p tcp --dport ssh -m limit --limit 1/s -j ACCEPT

    # ALLOW http traffic (and prevent against DoS attacks)
    iptables -A INPUT -p tcp --dport http -m limit --limit 5/s -j ACCEPT

    # ALLOW mysql traffic (and prevent against DoS attacks)
    iptables -A INPUT -p tcp --dport mysql -m limit --limit 25/s -j ACCEPT

    # DROP any other traffic.
    iptables -A INPUT -j DROP

    Read the article

  • [PHP, CSS, & ?] fixed width div, resizing text on the fly based on length

    - by Andrew Heath
    Let's say you've got a simple fixed-width layout that pulls a title from a MySQL database.

    CSS:

    #wrapper { width: 800px; }
    h1 { width: 100%; }

    HTML:

    <html>
    <body>
      <div id="wrapper">
        <h1> $titleString </h1>
      </div>
    </body>
    </html>

    But the catch is, the length of the title string pulled from your MySQL database varies wildly. Sometimes it might be 10 characters, sometimes it might be 80. It's possible to establish a min & max character count. How, if at all possible, do I get the text size of my <h1>$titleString</h1> to enlarge/decrease on the fly such that the string is only ever on one line and best fit to that line length? I've seen a lot of questions about resizing the div - but in my case the div must always be 100% (800px) and I want to best-fit the title. Obviously a maximum text size would have to be set so 5-character strings don't become gargantuan. Does anyone have a suggestion? I'm only using PHP/MySQL/CSS on this page at the moment, but incorporation of another language is fine if it means I can solve the problem. The only thing I can think of is a brute-force approach whereby, through trial and error, I establish acceptable string character count ranges matched with CSS em sizes, but that'd be a pretty ugly implementation from the code side.
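
    That brute-force mapping is less ugly than it sounds if a small helper keeps it in one place. The breakpoints below are placeholder values that would need tuning against the real font (a client-side measurement of the rendered width would be more precise, but needs JavaScript):

    <?php
    // Map the title length to a capped font size, so short titles don't balloon
    // and long ones still fit the 800px line. Breakpoints are assumptions.
    function titleFontSize(string $title): string {
        $len = mb_strlen($title);
        if ($len <= 20) return '2.0em';
        if ($len <= 40) return '1.6em';
        if ($len <= 60) return '1.2em';
        return '1.0em';
    }

    $size = titleFontSize($titleString);
    echo '<h1 style="font-size: ' . $size . '; white-space: nowrap;">'
       . htmlspecialchars($titleString) . '</h1>';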

    Read the article

  • strtotime not working with TIME?

    - by Prashant
    My MySQL column has this datetime value: 2011-04-11 11:00:00. When I apply strtotime to it, it returns a date less than today, whereas it should be greater than today. Also, when I try strtotime(date('d/m/Y h:i A')), it returns wrong values. Is there any problem with giving a TIME in strtotime? Basically, what I want to do is compare my MySQL column date with today's date; if it's in the future, show "Upcoming", else show nothing. Please help and advise what I should do.

    Edited code:

    $_startdatetime = $rs['startdatetime'];
    $_isUpcoming = false;
    if(!empty($_startdatetime)){
        $TEMP_strtime = strtotime($_startdatetime);
        $TEMP_strtime_today = strtotime(date('d/m/Y h:i A'));
        if($TEMP_strtime_today < $TEMP_strtime){
            $_isUpcoming = true;
            $_startdatetime = date('l, d F, Y h:i A', $TEMP_strtime);
        }
    }

    And the value in $rs['startdatetime'] is 2011-04-11 11:00:00. With this value I am getting the following output:

    $TEMP_strtime - 1302519600
    $TEMP_strtime_today - 1314908160
    $_startdatetime - 2011-04-11 11:00:00

    $_startdatetime keeps its as-is MySQL value because the upcoming condition is false.
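
    strtotime() handles MySQL's Y-m-d H:i:s format (time included) fine; the suspect value is the other one. strtotime(date('d/m/Y h:i A')) feeds it a slash-separated date, which PHP interprets in US m/d/Y order - wrong for most days, and false once the day exceeds 12. There is no need to format and re-parse "now" at all; a sketch:

    <?php
    // Compare the MySQL datetime directly against the current timestamp.
    $startTimestamp = strtotime($rs['startdatetime']);   // 'Y-m-d H:i:s' parses fine
    $isUpcoming = $startTimestamp !== false && $startTimestamp > time();
    if ($isUpcoming) {
        echo date('l, d F, Y h:i A', $startTimestamp) . ' (Upcoming)';
    }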

    Read the article

  • How to store and collect data for mining such information as most viewed for last 24 hours, last 7 d

    - by Kirzilla
    Hello. Let's imagine that we have a high-traffic project (a tube site) which should provide sorting using these options (NOT IN REAL TIME). The number of videos is about 200K, and all information about the videos is stored in MySQL. The number of daily video views is about 1.5KK. As instruments we have a Hard Disk Drive (text files), MySQL, and Redis.

    Views:
    - top viewed
    - top viewed last 24 hours
    - top viewed last 7 days
    - top viewed last 30 days
    - top rated last 365 days

    How should I store such information? The first idea is to log all visits to text files (a single file per hour, for example visits_20080101_00.log). At the beginning of each hour, calculate views per video for the previous hour and insert this information into MySQL. Then recalculate the totals (for the last 24 hours) and update the statistics in the tables. At the beginning of every day we have to do the same, but recalculate for the last 7 days, last 30 days, and last 365 days. This method seems very poor to me, because we have to store information about the last 365 days for each video to make correct calculations.

    Is there any other good method? Probably we have to choose other instruments for this? Thank you.
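
    Since Redis is already on the instrument list, one alternative sketch: keep one sorted set of view counts per day, and build the rolling windows by merging the last N daily sets, instead of storing 365 days of per-video counters in MySQL. Method names follow the phpredis extension (older versions called the merge zUnion); the key naming is made up:

    <?php
    $redis = new Redis();
    $redis->connect('127.0.0.1', 6379);
    $videoId = 12345;   // id of the video being viewed (example value)

    // On every view: bump today's counter for this video.
    $redis->zIncrBy('views:' . date('Ymd'), 1, $videoId);

    // Periodically (e.g. hourly): merge the last 7 daily sets...
    $keys = [];
    for ($i = 0; $i < 7; $i++) {
        $keys[] = 'views:' . date('Ymd', strtotime("-$i day"));
    }
    $redis->zUnionStore('views:top7d', $keys);

    // ...then read the 100 most viewed videos of the last 7 days.
    $top = $redis->zRevRange('views:top7d', 0, 99, true);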

    Read the article
