Search Results



  • Mysql::Error: Duplicate entry

    - by Shaliko
    Hi, I have a model:

        class Gift < ActiveRecord::Base
          validates_uniqueness_of :giver_id, :scope => :account_id
        end

    with a matching unique index in the migration:

        add_index :gifts, [:account_id, :giver_id], :unique => true

    and this action:

        def create
          @gift = Gift.new(params[:gift])
          if @gift.save
            ...
          else
            ...
          end
        end

    In production mode, I sometimes get this error:

        ActiveRecord::StatementInvalid: Mysql::Error: Duplicate entry '122394471958-50301499' for key 'index_gifts_on_account_id_and_giver_id'

    What is the problem?
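
    The usual explanation for this pattern (hedged; the post doesn't confirm it) is a race condition: validates_uniqueness_of runs a SELECT before the INSERT, so two concurrent requests can both pass validation, and the unique index then rejects the second INSERT. A minimal sketch of handling that at the application level, assuming a Rails 2.x-era app where the raw MySQL error surfaces as ActiveRecord::StatementInvalid:

        def create
          @gift = Gift.new(params[:gift])
          if @gift.save
            # success path
          else
            # validation-failure path
          end
        rescue ActiveRecord::StatementInvalid => e
          # The race slipped past validates_uniqueness_of; re-raise anything
          # that isn't a duplicate-key error, otherwise treat it like a
          # normal validation failure instead of a 500.
          raise unless e.message =~ /Duplicate entry/
          @gift.errors.add(:giver_id, "has already been taken")
          # render the failure path
        end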

    Read the article

  • jQuery UI Autocomplete - disallow free text entry?

    - by JK
    Is it possible to disallow free text entry in the jQuery UI autocomplete widget? I only want the user to be allowed to select from the list of items presented in the autocomplete list, and don't want them to be able to write some random text. I didn't see anything in the demos/docs describing how to do this. http://jqueryui.com/demos/autocomplete/ I'm using autocomplete like this:

        $('#selector').autocomplete({
            source: url,
            minLength: 2,
            select: function (event, ui) {
                // etc
            }
        });
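
    A sketch of one common approach (not from the original post): use the widget's change callback, where ui.item is null when the final text doesn't correspond to a selected suggestion, and clear the field in that case.

        $('#selector').autocomplete({
            source: url,
            minLength: 2,
            select: function (event, ui) {
                // etc
            },
            change: function (event, ui) {
                // ui.item is null when the user typed free text rather
                // than picking an item from the suggestion list
                if (!ui.item) {
                    $(this).val('');
                }
            }
        });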

    Read the article

  • man kaio: No manual entry for kaio.

    - by Daniel
    I trussed a process and saw lines like the ones below. I want to know the definition of kaio, but there is no manual entry for it, so where can I find the definition?

        /1: kaio(AIOWRITE, 259, 0x3805B2A00, 8704, 0x099C9E000755D3C0) = 0
        /1: kaio(AIOWRITE, 259, 0x380CF9200, 14336, 0x099CC0000755D5B8) = 0
        /1: kaio(AIOWRITE, 259, 0x381573600, 8704, 0x099CF8000755D7B0) = 0
        /1: kaio(AIOWRITE, 259, 0x381ACA600, 8192, 0x099D1A000755D9A8) = 0
        /1: kaio(AIOWAIT, 0xFFFFFFFF7FFFD620) = 4418032576
        /1: timeout: 600.000000 sec
        /1: kaio(AIOWAIT, 0xFFFFFFFF7FFFD620) = 4418033080
        /1: timeout: 600.000000 sec
        /1: kaio(AIOWAIT, 0xFFFFFFFF7FFFD620) = 4418033584
        /1: timeout: 600.000000 sec

    Read the article

  • Duplicate entry issue in magento database

    - by user691146
    The problem is as below:

        ERROR 1062 (23000) at line 1893: Duplicate entry '179-81-0' for key 'UNQ_CAT_PRD_ENTT_DTIME_ENTT_ID_ATTR_ID_STORE_ID'

    You have probably faced a similar issue. Taking the value '179-81-0', I am sure 179 is the product ID, but I'm not sure about the other parts, -81-0. In other threads it was suggested to delete the row from the table, but without knowing 100% what I am doing, it would be foolish to just delete the row. I need to know what exactly it is. Thanks, Raj
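
    For orientation (a hedged reading, assuming a stock Magento 1.x schema): the key name suggests the catalog_product_entity_datetime table, whose unique key covers (entity_id, attribute_id, store_id), so '179-81-0' would be entity_id 179, attribute_id 81, store_id 0. A query to inspect the conflicting row before deciding anything:

        -- Table and column names assume a default Magento 1.x install.
        SELECT *
          FROM catalog_product_entity_datetime
         WHERE entity_id = 179
           AND attribute_id = 81
           AND store_id = 0;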

    Read the article

  • "RepeatForUnit" item missing in Calender entry?

    - by Pari
    Hi, I am reading the RepeatForUnit item to manage repeats in Lotus Notes:

        String RepeatForUnit = (string)((object[])docCalendarDoc.GetItemValue("RepeatForUnit"))[0];

    Initially I was getting "D" for a daily event, "W" for weekly and "Y" for yearly. But now the properties field does not show any of these values, even after adding a repeat to the calendar entry. It is not visible in the properties list of the Lotus Notes calendar, and the code above returns "" (a blank entry). I don't understand why this is happening. Can anybody help me out?
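
    A defensive sketch (an assumption, since the post doesn't show the surrounding code): GetItemValue returns an array, so guard against the item being missing or empty before indexing into it.

        // Hypothetical guard around the original call.
        object[] values = (object[])docCalendarDoc.GetItemValue("RepeatForUnit");
        string repeatForUnit =
            (values != null && values.Length > 0) ? values[0] as string : null;
        if (string.IsNullOrEmpty(repeatForUnit))
        {
            // The item is absent or blank on this document; the repeat data
            // may live on a linked repeat document rather than the main entry.
        }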

    Read the article

  • How to get the totalResults entry from the YouTube video API

    - by ohana
    I'm using PHP and the YouTube API to get videos from YouTube to feed to my web app, but I don't know how to get the totalResults entry. Here is my code:

        $yt = new Zend_Gdata_YouTube();
        $query = $yt->newVideoQuery();
        $query->setQuery($searchTerm);
        $query->setStartIndex($startIndex);
        $query->setMaxResults($maxResults);
        $feed = $yt->getVideoFeed($query);
        echo $feed->totalResults; // this doesn't work
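
    The likely snag (hedged, assuming Zend Framework 1.x): totalResults on the feed is an OpenSearch extension element object, not a plain integer, so echoing it directly fails. A sketch using the accessor:

        $feed = $yt->getVideoFeed($query);
        // getTotalResults() returns an extension element; getText()
        // extracts the numeric string it wraps.
        echo $feed->getTotalResults()->getText();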

    Read the article

  • How to add Nickname in Contact Entry?

    - by Jene
    Hi, I want to add a nickname to a Google contact using Google.GData.Extensions.Apps. I am able to create the nickname:

        NicknameElement obj_nickname = new NicknameElement();
        obj_nickname.Name = "Jenifer";

    But how do I add it to the ContactEntry?

    Read the article

  • During Spring unit test, data written to db but test not seeing the data

    - by richever
    I wrote a test case that extends AbstractTransactionalJUnit4SpringContextTests. The single test case I've written creates an instance of class User and attempts to write it to the database using Hibernate. The test code then uses SimpleJdbcTemplate to execute a simple select count(*) from the user table to determine whether the user was persisted to the database. The test always fails, though. I was suspicious, because the Spring controller I wrote can save an instance of User to the db successfully. So I added the Rollback annotation to the unit test and, sure enough, the data is written to the database; I can even see it in the appropriate table, and the transaction isn't rolled back when the test case is finished. Here's my test case:

        @ContextConfiguration(locations = {
                "classpath:context-daos.xml",
                "classpath:context-dataSource.xml",
                "classpath:context-hibernate.xml"})
        public class UserDaoTest extends AbstractTransactionalJUnit4SpringContextTests {

            @Autowired
            private UserDao userDao;

            @Test
            @Rollback(false)
            public void testCreateUser() {
                try {
                    UserModel user = randomUser();
                    String username = user.getUserName();
                    long id = userDao.create(user);
                    String query = "select count(*) from public.usr where usr_name = '%s'";
                    long count = simpleJdbcTemplate.queryForLong(String.format(query, username));
                    Assert.assertEquals("User with username should be in the db", 1, count);
                } catch (Exception e) {
                    e.printStackTrace();
                    Assert.fail("testCreateUser: " + e.getMessage());
                }
            }
        }

    I think I was remiss by not adding the configuration files.

    context-hibernate.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
            <bean id="namingStrategy" class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean">
                <property name="staticField">
                    <value>org.hibernate.cfg.ImprovedNamingStrategy.INSTANCE</value>
                </property>
            </bean>
            <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"
                  destroy-method="destroy" scope="singleton">
                <property name="namingStrategy">
                    <ref bean="namingStrategy"/>
                </property>
                <property name="dataSource" ref="dataSource"/>
                <property name="mappingResources">
                    <list>
                        <value>com/company/model/usr.hbm.xml</value>
                    </list>
                </property>
                <property name="hibernateProperties">
                    <props>
                        <prop key="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</prop>
                        <prop key="hibernate.show_sql">true</prop>
                        <prop key="hibernate.use_sql_comments">true</prop>
                        <prop key="hibernate.query.substitutions">yes 'Y', no 'N'</prop>
                        <prop key="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</prop>
                        <prop key="hibernate.cache.use_query_cache">true</prop>
                        <prop key="hibernate.cache.use_minimal_puts">false</prop>
                        <prop key="hibernate.cache.use_second_level_cache">true</prop>
                        <prop key="hibernate.current_session_context_class">thread</prop>
                    </props>
                </property>
            </bean>
            <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
                <property name="sessionFactory" ref="sessionFactory"/>
                <property name="nestedTransactionAllowed" value="false"/>
            </bean>
            <bean id="transactionInterceptor" class="org.springframework.transaction.interceptor.TransactionInterceptor">
                <property name="transactionManager">
                    <ref local="transactionManager"/>
                </property>
                <property name="transactionAttributes">
                    <props>
                        <prop key="create">PROPAGATION_REQUIRED</prop>
                        <prop key="delete">PROPAGATION_REQUIRED</prop>
                        <prop key="update">PROPAGATION_REQUIRED</prop>
                        <prop key="*">PROPAGATION_SUPPORTS,readOnly</prop>
                    </props>
                </property>
            </bean>
        </beans>

    context-dataSource.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
            <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
                <property name="driverClass" value="org.postgresql.Driver"/>
                <property name="jdbcUrl" value="jdbc:postgresql://localhost:5432/company_dev"/>
                <property name="user" value="postgres"/>
                <property name="password" value="postgres"/>
            </bean>
        </beans>

    context-daos.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
            <bean id="extendedFinderNamingStrategy" class="com.company.dao.finder.impl.ExtendedFinderNamingStrategy"/>
            <bean id="finderIntroductionAdvisor" class="com.company.dao.finder.impl.FinderIntroductionAdvisor"/>
            <bean id="abstractDaoTarget" class="com.company.dao.impl.GenericDaoHibernateImpl"
                  abstract="true" depends-on="sessionFactory">
                <property name="sessionFactory">
                    <ref bean="sessionFactory"/>
                </property>
                <property name="namingStrategy">
                    <ref bean="extendedFinderNamingStrategy"/>
                </property>
            </bean>
            <bean id="abstractDao" class="org.springframework.aop.framework.ProxyFactoryBean" abstract="true">
                <property name="interceptorNames">
                    <list>
                        <value>transactionInterceptor</value>
                        <value>finderIntroductionAdvisor</value>
                    </list>
                </property>
            </bean>
            <bean id="userDao" parent="abstractDao">
                <property name="proxyInterfaces">
                    <value>com.company.dao.UserDao</value>
                </property>
                <property name="target">
                    <bean parent="abstractDaoTarget">
                        <constructor-arg>
                            <value>com.company.model.UserModel</value>
                        </constructor-arg>
                    </bean>
                </property>
            </bean>
        </beans>

    Some of this I've inherited from someone else. I wouldn't have used the proxying that is going on here, because I'm not sure it's needed, but this is what I'm working with. Any help much appreciated.

    Read the article

  • How to test crontab entry?

    - by Mark
    I have an entry in my crontab that looks like this:

        0 3 * * * pg_dump mydb | gzip > ~/backup/db/$(date +%Y-%m-%d).psql.gz

    That command works perfectly when I execute it from the shell, but it doesn't seem to be running every night. I'm assuming there's something wrong with the permissions; maybe crontab is running under a different user or something. How can I debug this? I'm in a shared hosting environment (WebFaction).
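
    One detail worth checking (an assumption; the post doesn't show cron's mail or logs): cron treats % as a line separator in the command field, so $(date +%Y-%m-%d) can be cut short unless each % is escaped. A sketch with escaping, an absolute path to pg_dump, and stderr captured to a log (paths are placeholders for the host):

        0 3 * * * /usr/bin/pg_dump mydb 2>> "$HOME/cron-backup.log" | gzip > "$HOME/backup/db/$(date +\%Y-\%m-\%d).psql.gz"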

    Read the article

  • Removing entry from table

    - by Bnhjhvbq7
    I can't remove an entry from a table. Here's my code:

        dropItem = dropList[math.random(#dropList)]
        dropSomething[brick.index] = crackSheet:grabSprite(dropItem, true)
        dropSomething[brick.index].x = brick.x
        dropSomething[brick.index].y = brick.y
        dropSomething[brick.index].name = dropItem
        dropSomething[brick.index].type = "dropppedItems"

    and the collision handler:

        function bounce(event)
            local item = event.other
            if item.type == "dropppedItems" then
                if item.name == "bomb" then
                    Lives = Lives - 1
                    LivesNum.text = tostring(Lives)
                end
                item:removeSelf()
            end
        end

    What I've tried: item:removeSelf() removes the whole table; item = nil seems to do nothing, the object continues to move and I still see the image.
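
    A sketch of the usual Corona-style fix (hedged; it assumes each sprite was given its index when created, e.g. dropSomething[brick.index].index = brick.index): removeSelf() destroys the display object, but the dropSomething table still holds a reference, and setting the local variable item to nil only clears the local, never the table slot.

        function bounce(event)
            local item = event.other
            if item.type == "dropppedItems" then
                if item.name == "bomb" then
                    Lives = Lives - 1
                    LivesNum.text = tostring(Lives)
                end
                -- clear the owning table's slot first, then destroy the
                -- display object; item = nil alone never touches the table
                dropSomething[item.index] = nil
                item:removeSelf()
            end
        end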

    Read the article

  • Render {embed} for each entry in {exp:weblog:entries} loop

    - by eyelidlessness
    {exp:weblog:entries [args]}
        [content]
        {embed="path/to/sub-template" [args]}
    {/exp:weblog:entries}

    The sub-template only renders for the first entry, and the {embed} template tag is swallowed for all subsequent entries. Is there a way to make it render the sub-template for each iteration? Edit: stranger yet, if caching is enabled for the sub-template, it renders for each iteration; but, of course, the arguments on the embed tag aren't passed to subsequent iterations, as the sub-template is cached.

    Read the article

  • Java Double entry table

    - by Tom
    Hi, does anyone know of a double-entry table implementation in Java I can download? I need to do something like this:

             1  2  3
           _________
        a |  x  y  z
        b |  h  l  m
        c |  o  a  k

    where table.get("a", 1) would return x. Of course, it should use any Object as key, value, etc. Thanks in advance.
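
    Guava's Table (e.g. HashBasedTable) provides exactly this. For a dependency-free alternative, a minimal sketch backed by nested HashMaps (class and method names are illustrative):

        import java.util.HashMap;
        import java.util.Map;

        // Two-key lookup table: get(row, column) -> value.
        public class DoubleEntryTable<R, C, V> {
            private final Map<R, Map<C, V>> rows = new HashMap<R, Map<C, V>>();

            public void put(R row, C column, V value) {
                Map<C, V> cols = rows.get(row);
                if (cols == null) {
                    cols = new HashMap<C, V>();
                    rows.put(row, cols);
                }
                cols.put(column, value);
            }

            public V get(R row, C column) {
                Map<C, V> cols = rows.get(row);
                return cols == null ? null : cols.get(column);
            }
        }

    With that in place, table.put("a", 1, "x") followed by table.get("a", 1) returns "x".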

    Read the article

  • Skip first entry in Drupal Views Query?

    - by RD
    I've created a view that selects all nodes of type "shoot". But I want it to select all nodes of type "shoot" EXCEPT for the first entry. So, if the normal result is:

        Node 1
        Node 2
        Node 3

    I want:

        Node 2
        Node 3
        Node 4

    Is that possible?

    Read the article

  • jQuery - field entry - how to duplicate and convert to SEO-friendly URL

    - by Binyamin
    I have two HTML input fields, 'article' and 'url':

        <input type="text" name="article">
        <input type="text" name="url">

    The 'url' field is auto-generated and updated by jQuery from the 'article' field's value; the user only has to type in 'article'. How do I duplicate the 'article' entry into 'url' as an SEO-friendly link? Rules for 'url': allowed symbols are a-z (capital letters converted to lowercase), spaces are replaced with a dash '-', and all other symbols are ignored.
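
    A sketch matching those rules (the field names are taken from the post; everything else is illustrative):

        // Mirror the 'article' field into 'url' as a lowercase,
        // dash-separated slug containing only a-z and dashes.
        $('input[name="article"]').keyup(function () {
            var slug = $(this).val()
                .toLowerCase()
                .replace(/\s+/g, '-')      // spaces -> dashes
                .replace(/[^a-z-]/g, '')   // drop every other symbol
                .replace(/-+/g, '-');      // collapse repeated dashes
            $('input[name="url"]').val(slug);
        });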

    Read the article

  • Performance impact: What is the optimal payload for SqlBulkCopy.WriteToServer()?

    - by Linchi Shea
    For many years, I have been using a C# program to generate TPC-C compliant data for testing. The program relies on the SqlBulkCopy class to load the data it generates into the SQL Server tables. In general, the performance of this C# data loader is satisfactory. Lately, however, I found myself in a situation where I needed to generate a much larger amount of data than I typically do, and the data needed to be loaded within a confined time frame. So I was driven to look into the code...(read more)
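
    For context, the main knobs on SqlBulkCopy that shape each WriteToServer() payload are the size of the batch handed to it and the BatchSize property, which controls rows per server round trip. A hedged sketch, not the author's code; the connection string, table name and batch size are placeholders:

        using System.Data;
        using System.Data.SqlClient;

        static void BulkLoad(DataTable batch)
        {
            using (var bulkCopy = new SqlBulkCopy("Server=.;Database=tpcc;Integrated Security=true"))
            {
                bulkCopy.DestinationTableName = "dbo.ORDER_LINE";
                bulkCopy.BatchSize = 5000;    // rows committed per round trip
                bulkCopy.BulkCopyTimeout = 0; // disable the timeout for large loads
                bulkCopy.WriteToServer(batch);
            }
        }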

    Read the article

  • Oracle Announces Oracle Data Integrator 12c and Oracle GoldenGate 12c

    - by Roxana Babiciu
    In today’s data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. To help customers succeed, Oracle is enhancing its data integration offering with Oracle Data Integrator 12c and Oracle GoldenGate 12c. These flexible and comprehensive solutions help customers capitalize on their data to reduce costs and drive business growth. Read more here

    Read the article

  • Next-Generation Data Integration on Oracle Exadata

    - by Julien Testut
    Companies are currently faced with increasing data volumes and retention times, while batch windows are simultaneously shrinking. In the 'Next-Generation Data Integration on Oracle Exadata' session we will discuss how Oracle, with its innovative Data Integration solution along with Exadata, can help companies tackle that challenge. Oracle Data Integrator and Oracle GoldenGate provide industry-leading performance and scalability for data integration on Oracle Exadata. They are both uniquely designed to take full advantage of the power of the database and to eliminate unnecessary middle-tier components, which can often be bottlenecks for data movement and transformation. Combined with the extreme performance provided by Exadata, our Data Integration products help companies move towards a more efficient and flexible data integration infrastructure. If you're interested in hearing more about how our customers maximize the performance of their Exadata systems while minimizing batch windows, all without adding more hardware resources, join us for the following session:

        Next-Generation Data Integration on Oracle Exadata
        Thursday, October 4th - 11:15AM - 12:15PM
        Moscone West - Room 3005

    We also have many other exciting sessions, including 'Oracle Data Integrator Product Update and Future Strategy' on October 2nd at 1:15PM in Moscone West, Room 3005. In that session we will discuss the ODI roadmap and its integration with engineered systems such as the Oracle Big Data Appliance. It's a session not to be missed! You can find a list of all the Data Integration sessions happening at Oracle OpenWorld in this document: Focus On Data Integration. If you will not be able to come to OpenWorld, for more information please check out our data sheet Oracle Data Integration Solutions and the Oracle Exadata Database Machine.

    Read the article

  • Instruction vs data cache usage

    - by Nick Rosencrantz
    Say I've got a machine where instructions and data have separate cache memories (a "Harvard architecture"). Which cache, instruction or data, is used most often? I mean "most often" in terms of time, not amount of data: data memory might be used "more" in terms of the amount of data, while the instruction cache might be used "more often", especially depending on the program. Are there different answers a) in general and b) for a specific program?

    Read the article

  • Transparent Data Encryption

    Transparent Data Encryption is designed to protect data by encrypting the physical files of the database, rather than the data itself. Its main purpose is to prevent unauthorized access to the data when the files are restored to another server: with Transparent Data Encryption in place, restoring them requires the original encryption certificate and master key. It was introduced in the Enterprise edition of SQL Server 2008. John Magnabosco explains fully, and guides you through the process of setting it up.

    Read the article

  • Google I/O 2012 - Spatial Data Visualization

    Brendan Kenny, Enoch Lau. Maps were among the first data visualizations, but they can also provide the backdrop for visualizing your own spatial data. In this session, we'll take a voyage through the world of map-based data visualization, arming you with the tools you need to most effectively bring your data to life on a map using the Maps API v3. For all I/O 2012 sessions, go to developers.google.com. [Video, 01:00:17]

    Read the article

  • OTN Virtual Technology Summit - July 9 - Middleware Track

    - by OTN ArchBeat
    The Architecture of Analytics: Big Time Big Data and Business Intelligence. This four-session track, part of the free OTN Virtual Technology Summit on July 9, will present a solution architect's perspective on how business intelligence products in Oracle's Fusion Middleware family and beyond fit into an effective big data architecture, offering insight and expertise from Oracle ACE Directors and product team experts specializing in business intelligence to help you meet your big data business intelligence challenges. Register now!

    Sessions:

        Oracle Big Data Appliance Case Study: Using Big Data to Analyze Cancer-Genome Relationships
        Tom Plunkett, Lead Author of the Oracle Big Data Handbook
        What does it take to build an award-winning Big Data solution? This presentation takes a deep technical dive into the use of the Oracle Big Data Appliance in a project for the National Cancer Institute's Frederick National Laboratory for Cancer Research. The Frederick National Laboratory and the Oracle team won several awards for analyzing relationships between genomes and cancer subtypes with big data, including the 2012 Government Big Data Solutions Award, the 2013 Excellence.Gov Finalist for Innovation, and the 2013 ComputerWorld Honors Laureate for Innovation. [30 mins]

        Getting Value from Big Data Variety
        Richard Tomlinson, Director, Product Management, Oracle
        Big data variety implies big data complexity. Performing analytics on diverse data typically involves mashing up structured, semi-structured and unstructured content. So how can we do this effectively to get real value? How do we relate diverse content so we can start to analyze it? This session looks at how we approach this tricky problem using Endeca Information Discovery. [30 mins]

        How To Leverage Your Investment In Oracle Business Intelligence Enterprise Edition Within a Big Data Architecture
        Oracle ACE Director Kevin McGinley
        More and more organizations are realizing the value Big Data technologies contribute to the return on investment in Analytics. But as an increasing variety of data types reside in different data stores, organizations are finding that a unified Analytics layer can help bridge the divide in modern data architectures. This session will examine how you can enable Oracle Business Intelligence Enterprise Edition (OBIEE) to play a role in a unified Analytics layer, and the benefits and use cases for doing so. [30 mins]

        Oracle Data Integrator 12c As Your Big Data Data Integration Hub
        Oracle ACE Director Mark Rittman
        Oracle Data Integrator 12c (ODI12c), as well as being able to integrate and transform data from application and database data sources, also has the ability to load, transform and orchestrate data loads to and from Big Data sources. In this session, we'll look at ODI12c's ability to load data from Hadoop, Hive, NoSQL and file sources, transform that data using Hive and MapReduce processing across the Hadoop cluster, and then bulk-load that data into an Oracle Data Warehouse using Oracle Big Data Connectors. We will also look at how ODI12c enables ETL-offloading to a Hadoop cluster, with some tips and techniques on real-time capture into a Hadoop data reservoir and techniques and limitations when performing ETL on big data sources. [90 mins]

    Register now!

    Read the article
