Search Results

Search found 14956 results on 599 pages for 'mysql dba'.


  • How do you get the glyph for a character encoded as '&#333;' from a utf-8 encoded database field using PHP?

    - by AE
    I have a MySQL database table with a collation of 'utf8_general_ci' and the value in the field is: x & #299; bán yá wén (without the spaces). When this is converted (for example by StackOverflow's editor) it looks like this: xī bán yá wén where the second character looks like a lower case i with a bar over the top. In PHP, what function converts the & #299 ; entity into the ī character? I've tried using html_entity_decode($str,ENT_COMPAT,'UTF-8'), however I get characters like the following: yÄ«n wén or zhÅ•ng wén I'm pretty sure there's something I don't understand about the decoding, which is why I'm using the wrong function. Can anyone shed some light on how to get the single character glyph that's represented by the entity & #299 and similar high-number characters above 255? Many thanks, AE
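    A minimal PHP sketch of the decoding step, assuming the garbled output (yÄ«n, zhÅ•ng) comes from the correctly decoded UTF-8 string being rendered as ISO-8859-1 rather than from html_entity_decode() itself; the field value and headers below are illustrative:

        <?php
        // Sketch: decode a numeric HTML entity such as &#299; into its UTF-8 glyph.
        // Assumption: the mojibake appears because the decoded UTF-8 bytes are being
        // displayed as ISO-8859-1, not because the decode step fails.
        $raw = 'x&#299; b&#225;n y&#225; w&#233;n';            // hypothetical stored field value

        $decoded = html_entity_decode($raw, ENT_QUOTES, 'UTF-8');

        // Make sure whatever displays the string knows it is UTF-8:
        header('Content-Type: text/html; charset=UTF-8');
        echo $decoded;                                          // should print "xī bán yá wén"

        // If the surrounding page really must stay ISO-8859-1, convert explicitly
        // (characters above 255, like ī, cannot survive that conversion):
        // echo mb_convert_encoding($decoded, 'ISO-8859-1', 'UTF-8');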

    Read the article

  • How To Aggregate API Data?

    - by Mindblip
    Hi, I have a system that connects to 2 popular APIs. I need to aggregate the data from each into a unified result that can then be paginated. The scope of the project means that the system could end up supporting tens of APIs. Each API imposes a max limit of 50 results per request. What is the best way of aggregating this data so that it is reliable, i.e. ordered, no duplicates, etc.? I am using the CakePHP framework on a LAMP environment; however, I think this question relates to all programming languages. My approach so far is to query the search API of each provider and then populate a MySQL table. From this the results are ordered, paginated, etc. However, my concern is performance: API communication, parsing, inserting and then reading all in one execution. Am I missing something? Does anyone have any other ideas? I'm sure this is a common problem with many alternative solutions. Any help would be greatly appreciated. Thanks, Paul
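    A minimal sketch of the staging-table approach described above, assuming a hypothetical api_results table with a UNIQUE key on (provider, external_id) so re-fetches cannot create duplicates; the provider client objects and column names are illustrative:

        <?php
        // Sketch: merge results from several APIs into one MySQL staging table,
        // relying on UNIQUE KEY (provider, external_id) to avoid duplicates.
        // Table, column and client names are assumptions for illustration.
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
        $query = 'some search term';
        $providers = get_api_clients();   // hypothetical factory returning API client objects

        $insert = $pdo->prepare(
            'INSERT INTO api_results (provider, external_id, title, score, fetched_at)
             VALUES (:provider, :external_id, :title, :score, NOW())
             ON DUPLICATE KEY UPDATE title = VALUES(title), score = VALUES(score)'
        );

        foreach ($providers as $provider) {
            $rows = $provider->search($query, 50);      // hypothetical client call, 50 max per request
            foreach ($rows as $row) {
                $insert->execute(array(
                    ':provider'    => $provider->name,
                    ':external_id' => $row['id'],
                    ':title'       => $row['title'],
                    ':score'       => $row['relevance'],
                ));
            }
        }

        // Ordering and pagination then happen in SQL, not in PHP:
        $page = 2; $perPage = 20;
        $stmt = $pdo->prepare('SELECT * FROM api_results ORDER BY score DESC LIMIT :off, :lim');
        $stmt->bindValue(':off', ($page - 1) * $perPage, PDO::PARAM_INT);
        $stmt->bindValue(':lim', $perPage, PDO::PARAM_INT);
        $stmt->execute();
        $results = $stmt->fetchAll(PDO::FETCH_ASSOC);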

    Read the article

  • Oracle Schema Design: Separate Schema with I/O Overhead?

    - by Guru
    We are designing the database schema for a new system based on Oracle 11gR1. We have identified a main schema which would have close to 100 tables; these will be accessed from the front-end Java application. We have a requirement to audit the values which get changed in close to 50 tables, and this has to be done for every row. This means it is possible that, for a single row in MYSYS.T1, there might be 50 (or more) rows in the MYSYS_AUDIT.T1_AUD table. We would keep the old and new values of every column entry from T1. Our DBA advised against this method, saying that a separate schema means an extra I/O for every operation. Basically, the AUDIT schema would be used only to do some analysis and to enter values (thus SELECT and INSERT). Is it true that "a separate schema means an extra I/O"? I could not find a justification. It appears logical to me, as the AUDIT data should not be tampered with, so a separate schema. Also, we designed a separate schema for archiving some tables from MYSYS; from MYSYS_ARC the tables might be backed up to tape or deleted after sufficient time. A few stats: some tables (close to 20-30) in the MYSYS schema could grow to around 50M rows, and we have asked for a total disk space of 4 TB; the MYSYS_AUDIT schema might hold 10 times that of MYSYS, but we won't keep that data for more than 3 months. Questions: Given all this, can you suggest any improvements? Does a separate schema affect disk I/O (one extra I/O for every schema)? Any general suggestions? Figure: MYSYS (T1 ... T100) performs SELECTs and INSERTs against MYSYS_AUDIT (T1_AUD ... T50_AUD) and INSERTs into MYSYS_ARC (T1_ARC ... T100_ARC). Apart from this, we have two more schemas with read-only rights, but they are mainly for ad hoc purposes and we don't mind about their performance.

    Read the article

  • How to implement color search with Sphinx?

    - by harald
    Hello, searching a photo by dominant colors using MySQL is quite simple. Assuming that the r,g,b values of the most dominant colors of the photo are already stored in the database, this could be achieved, for example, by something like: SELECT * FROM colors WHERE ABS(dominant_r - :r) < :threshold AND ABS(dominant_g - :g) < :threshold AND ABS(dominant_b - :b) < :threshold I wonder if it's at all possible to store the colors in Sphinx and perform the querying using the Sphinx search engine? Thanks!
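    If the dominant_r/g/b values are exposed as integer attributes (sql_attr_uint) in the Sphinx index, the range filters of the classic PHP API can express the same query; a rough sketch assuming the standard sphinxapi.php client and a hypothetical index called photos:

        <?php
        // Sketch: dominant_r / dominant_g / dominant_b are assumed to be declared
        // as sql_attr_uint attributes in the Sphinx index config (index "photos").
        require 'sphinxapi.php';                 // the PHP client shipped with Sphinx

        $r = 120; $g = 80; $b = 200; $threshold = 20;

        $cl = new SphinxClient();
        $cl->SetServer('localhost', 9312);
        $cl->SetMatchMode(SPH_MATCH_FULLSCAN);   // attribute-only search, no text query
        $cl->SetFilterRange('dominant_r', max(0, $r - $threshold), $r + $threshold);
        $cl->SetFilterRange('dominant_g', max(0, $g - $threshold), $g + $threshold);
        $cl->SetFilterRange('dominant_b', max(0, $b - $threshold), $b + $threshold);
        $cl->SetLimits(0, 50);

        $result = $cl->Query('', 'photos');      // empty text query, the filters do the work
        if ($result !== false && !empty($result['matches'])) {
            $photoIds = array_keys($result['matches']);   // document IDs to fetch from MySQL
        }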

    Read the article

  • Array of previous weeks

    - by azz0r
    Hello, I am trying to create an array of weeks for users to select, to view stats week on week. What I'm looking for is an array holding the timestamp of each week's start and end (Monday 00:00 to Sunday 23:59) that I can then use in MySQL queries. Has anyone got any code that might assist? I was thinking of doing something like:
        $number_of_weeks = 4;
        $week_array = array();
        // Midnight today, stepped back to Monday of the current week (date('N'): Mon = 1)
        $monday_this_week = strtotime('today') - ((date('N') - 1) * 86400);
        for ($week = 0; $week < $number_of_weeks; $week++) {
            $start = $monday_this_week - ($week * 604800);                    // back one whole week per iteration
            $week_array[$week]['start'] = date('Y-m-d', $start);              // Monday
            $week_array[$week]['end']   = date('Y-m-d', $start + 6 * 86400);  // Sunday
        }

    Read the article

  • Installing PHP APC in Fedora - Unable to initialize module?

    - by sri
    I have been trying to install APC on my Fedora Apache server to show a progress bar while uploading files, but I am getting the following PHP warning while starting XAMPP:
        Starting XAMPP for Linux 1.7.1...
        PHP Warning: PHP Startup: apc: Unable to initialize module
        Module compiled with module API=20090626, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        XAMPP: Starting Apache with SSL (and PHP5)...
        XAMPP: Starting MySQL...
        XAMPP: Another FTP daemon is already running.
        XAMPP for Linux started.
    My server details: OS: Fedora 12; XAMPP version: 1.7.1; PHP version: 5.2.9; APC version: 3.1.9. I have tried the processes mentioned here: 1) http://2bits.com/articles/installing-php-apc-gnulinux-centos-5.html 2) http://stevejenkins.com/blog/2011/08/how-to-install-apc-alternative-php-cache-on-centos-5-6/

    Read the article

  • GoogleAppEngine : ClassNotFoundException : javax.jdo.metadata.ComponentMetadata

    - by James.Elsey
    I'm trying to deploy my application to a locally running GoogleAppEngine development server, but I'm getting the following stack trace when I start the server Apr 23, 2010 9:03:33 PM com.google.apphosting.utils.jetty.JettyLogger warn WARNING: Nested in org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'clientDao' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Cannot resolve reference to bean 'entityManagerFactory' while setting bean property 'entityManagerFactory'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: javax/jdo/metadata/ComponentMetadata: java.lang.ClassNotFoundException: javax.jdo.metadata.ComponentMetadata at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang.ClassLoader.loadClass(ClassLoader.java:319) at com.google.appengine.tools.development.IsolatedAppClassLoader.loadClass(IsolatedAppClassLoader.java:151) at java.lang.ClassLoader.loadClass(ClassLoader.java:264) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at javax.jdo.JDOHelper$18.run(JDOHelper.java:2009) at javax.jdo.JDOHelper$18.run(JDOHelper.java:2007) at java.security.AccessController.doPrivileged(Native Method) at javax.jdo.JDOHelper.forName(JDOHelper.java:2006) at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1155) at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803) at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698) at org.datanucleus.jpa.EntityManagerFactoryImpl.initialisePMF(EntityManagerFactoryImpl.java:482) at org.datanucleus.jpa.EntityManagerFactoryImpl.<init>(EntityManagerFactoryImpl.java:255) at org.datanucleus.store.appengine.jpa.DatastoreEntityManagerFactory.<init>(DatastoreEntityManagerFactory.java:68) at org.datanucleus.store.appengine.jpa.DatastorePersistenceProvider.createContainerEntityManagerFactory(DatastorePersistenceProvider.java:45) at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:224) at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:291) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1369) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1335) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:473) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(Native Method) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at 
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:269) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:104) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1245) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1010) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:472) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(Native Method) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:380) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:530) at org.mortbay.jetty.servlet.Context.startContext(Context.java:135) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1218) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:500) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:117) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:117) at org.mortbay.jetty.Server.doStart(Server.java:217) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) at 
com.google.appengine.tools.development.JettyContainerService.startContainer(JettyContainerService.java:181) at com.google.appengine.tools.development.AbstractContainerService.startup(AbstractContainerService.java:116) at com.google.appengine.tools.development.DevAppServerImpl.start(DevAppServerImpl.java:217) at com.google.appengine.tools.development.DevAppServerMain$StartAction.apply(DevAppServerMain.java:162) at com.google.appengine.tools.util.Parser$ParseResult.applyArgs(Parser.java:48) at com.google.appengine.tools.development.DevAppServerMain.<init>(DevAppServerMain.java:113) at com.google.appengine.tools.development.DevAppServerMain.main(DevAppServerMain.java:89) The server is running at http://localhost:1234/ I'm a little confused over this, since I have the same application running locally on GlassFish/MySQL. All I have done is to swap in the relevant jar files, and change the persistence.xml. My applicationContext.xml looks as follows : <context:annotation-config/> <bean id="clientDao" class="com.jameselsey.salestracker.dao.jpa.JpaDaoClient"> <property name="entityManagerFactory" ref="entityManagerFactory"/> </bean> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"/> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory" /> </bean> <bean id="org.springframework.context.annotation.internalPersistenceAnnotationProcessor" class="com.jameselsey.salestracker.util.GaeFixInternalPersistenceAnnotationProcessor" /> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/> <tx:annotation-driven/> <bean id="clientService" class="com.jameselsey.salestracker.service.ClientService"/> </beans> My JPA DAO looks like this public class JpaDao extends JpaDaoSupport { protected <T> List<T> findAll(Class<T> clazz) { return getJpaTemplate().find("select c from " + clazz.getName() + " c"); } protected <T> T findOne(String jpql, Map params) { List<T> results = getJpaTemplate().findByNamedParams(jpql, params); if(results.isEmpty()) { return null; } if(results.size() > 1) { throw new IncorrectResultSizeDataAccessException(1, results.size()); } return results.get(0); } } And an example implemented method looks like this : @Override public Client getClientById(Integer clientId) { String jpql = "SELECT c " + "FROM com.jameselsey.salestracker.domain.Client c " + "WHERE c.id = " + clientId; return (Client) getJpaTemplate().find(jpql).get(0); } Like I say, this works ok on Glassfish/MySQL, is it possible this error could be a red herring to something else?

    Read the article

  • When I run Django on Dreamhost using SQLite, why do I get an OperationalError telling me that a table doesn't exist?

    - by Paul D. Waite
    I had a Django site running on Dreamhost. Although I used SQLite when developing locally, I initially used MySQL on Dreamhost, because that’s what the wiki page said to do, and because if I’m using an ORM, I might as well take advantage of it by running against a different database. After a while, I switched the settings on the server to use SQLite, to make it easier to keep my development database in sync with the server one. python manage.py syncdb worked on the server, but when I tried to access the site, I got an OperationalError. The Django error page said that one of my tables didn’t exist. I checked the database using sqlite on the command line on the server, and using python manage.py shell, and both worked fine.

    Read the article

  • Sequence numbers best practice

    - by Abdullah Jibaly
    What are the best practices or well-known methods to implement sequence numbers for business entities such as invoices, purchase orders, job numbers, etc.? I want to be able to save the latest value in the database and be able to set it programmatically. Is it OK to use a table for this purpose that has a SEQUENCE_NAME, SEQUENCE_NUMBER tuple? I know some databases have a first-class sequence type, but others (e.g., MySQL) do not, so it's not something I want to rely on. If a table is used to hold these sequences, what is the right way to get and increment them in a synchronized fashion to ensure no data inconsistencies arise?
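    One common way to make the (SEQUENCE_NAME, SEQUENCE_NUMBER) table safe under concurrency is to read and bump the row inside a transaction with SELECT ... FOR UPDATE, so two requests can never be handed the same number; a minimal PHP/PDO sketch against MySQL/InnoDB, with illustrative table and column names:

        <?php
        // Sketch: hand out sequence numbers from a sequences(sequence_name PK, sequence_number)
        // table. InnoDB is assumed (FOR UPDATE needs row locks), and the row for
        // the requested sequence is assumed to exist already.
        function next_sequence(PDO $pdo, $name)
        {
            $pdo->beginTransaction();
            try {
                $stmt = $pdo->prepare(
                    'SELECT sequence_number FROM sequences
                      WHERE sequence_name = :name FOR UPDATE');
                $stmt->execute(array(':name' => $name));
                $current = $stmt->fetchColumn();

                $next = $current + 1;
                $upd = $pdo->prepare(
                    'UPDATE sequences SET sequence_number = :next
                      WHERE sequence_name = :name');
                $upd->execute(array(':next' => $next, ':name' => $name));

                $pdo->commit();
                return $next;
            } catch (Exception $e) {
                $pdo->rollBack();
                throw $e;
            }
        }

        // Usage: $invoiceNo = next_sequence($pdo, 'invoice');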

    Read the article

  • PHP array taking up too much memory

    - by Dylan Taylor
    I have a multidimensional array. The array itself is fine. My problem is that the script takes up monster amounts of memory, and since I'm running this on my MAMP install on my iBook G4, my computer freezes up. Below is the full script:
        $query = "SELECT * FROM posts ORDER BY id DESC LIMIT 10";
        $result = mysql_query($query);
        $posts = array();
        while ($row = mysql_fetch_array($result)) {
            $posts[$row["id"]]['post_id']       = $row["id"];
            $posts[$row["id"]]['post_title']    = $row["title"];
            $posts[$row["id"]]['post_text']     = $row["text"];
            $posts[$row["id"]]['post_tags']     = $row["tags"];
            $posts[$row["id"]]['post_category'] = $row["category"];
        }
        foreach ($posts as $post) {
            echo $post["post_id"];
        }
    Is there a workaround that still achieves my goal (to export the MySQL query rows to an array)? -Dylan
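    If the array is only built so it can be echoed straight back out, one way to keep memory flat is to skip the intermediate array and handle each row as it is fetched; a short sketch of that variant (selecting only the columns that are actually used):

        <?php
        // Sketch: same query, but rows are processed as they are fetched instead of
        // being accumulated, so memory stays roughly constant regardless of LIMIT.
        $result = mysql_query("SELECT id, title, text, tags, category
                               FROM posts ORDER BY id DESC LIMIT 10");
        while ($row = mysql_fetch_assoc($result)) {
            echo $row['id'], "\n";        // do whatever per-post work is needed here
        }
        mysql_free_result($result);        // release the result set explicitly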

    Read the article

  • When to use CouchDB vs RDBMS

    - by Andrew Whitehouse
    I am looking at CouchDB, which has a number of appealing features over relational databases including: intuitive REST/HTTP interface easy replication data stored as documents, rather than normalised tables I appreciate that this is not a mature product so should be adopted with caution, but am wondering whether it is actually a viable replacement for an RDBMS (in spite of the intro page saying otherwise - http://couchdb.apache.org/docs/intro.html). Under what circumstances would CouchDB be a better choice of database than an RDBMS (e.g. MySQL), e.g. in terms of scalability, design + development time, reliability and maintenance. Are there still cases where an RDBMS is still clearly the right choice? Is this an either-or choice, or is a hybrid solution more likely to emerge as best practice?

    Read the article

  • How to extract common / significant phrases from a series of text entries

    - by arronsky
    I have a series of text items - raw HTML from a MySQL database. I want to find the most common phrases in these entries (not the single most common phrase, and ideally, not enforcing word-for-word matching). My example is any review on Yelp.com, which shows 3 snippets from hundreds of reviews of a given restaurant, in the format: "Try the hamburger" (in 44 reviews) e.g., the "Review Highlights" section of this page: http://www.yelp.com/biz/sushi-gen-los-angeles/ I have NLTK installed and I've played around with it a bit, but am honestly overwhelmed by the options. This seems like a rather common problem and I haven't been able to find a straightforward solution by searching here. Thanks in advance for any help.
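    NLTK's collocation tools are the more powerful route, but the basic idea can be illustrated without them; a naive PHP sketch (not NLTK) that tallies the most frequent 2-4 word phrases across a set of already strip_tags()'d review texts - function name and thresholds are arbitrary:

        <?php
        // Sketch: count the most common 2-4 word phrases across a set of texts.
        // Input is assumed to be plain text already (strip_tags applied to the raw HTML).
        function common_phrases(array $texts, $minWords = 2, $maxWords = 4, $top = 20)
        {
            $counts = array();
            foreach ($texts as $text) {
                $words = preg_split('/[^a-z0-9\']+/i', strtolower($text), -1, PREG_SPLIT_NO_EMPTY);
                $n = count($words);
                for ($len = $minWords; $len <= $maxWords; $len++) {
                    for ($i = 0; $i + $len <= $n; $i++) {
                        $phrase = implode(' ', array_slice($words, $i, $len));
                        $counts[$phrase] = isset($counts[$phrase]) ? $counts[$phrase] + 1 : 1;
                    }
                }
            }
            arsort($counts);                             // most frequent first
            return array_slice($counts, 0, $top, true);  // phrase => count
        }

        // Usage (hypothetical): print_r(common_phrases($reviewTexts));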

    Read the article

  • XHR FF POST size limit

    - by usurper
    Hi, my XHR POST request is cut off. When I try to reload my page, information is missing. Firebug shows the following message: ... Firebug request size limit has been reached by Firebug. ... My question is: what are my options? Would it work if I declare the Content-Length in the header? I added a line to my Apache config file and restarted it: LimitRequestBody 0. I also increased the maximum transfer size in the MySQL config file. Or is it a browser issue? The only solution I could think of was to cut the data into pieces and transmit the array one piece at a time, but I don't like this idea. The content length is 91691 according to Firebug. Any suggestions?

    Read the article

  • Inverted index in a search engine

    - by Ayman
    Hello there, I'm trying to write some code to make a small application for searching text from files. Files should be crawled, and I need to build an inverted index to speed up searches. My problem is that I have some ideas about how the parser would work, and I'm willing to implement AND, NOT and OR in the query, but I couldn't figure out how my index should look... I have never created an inverted index, so if anybody could suggest a feasible way to do it I would be very grateful. I know in theory how it works, but my problem is that I have absolutely no idea how to make it happen in MySQL. I also need to give the indexed keywords a weight... Thank you so much.
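    One minimal way to lay this out in MySQL is a terms table plus a postings table keyed by (term_id, doc_id), with AND implemented as GROUP BY ... HAVING COUNT over the matched terms; a sketch with illustrative table and column names, storing the weight on the posting row (OR and NOT can be built on top of the same tables):

        <?php
        // Sketch of an inverted index in MySQL (illustrative schema):
        //   documents(doc_id PK, path)
        //   terms(term_id PK AUTO_INCREMENT, term UNIQUE)
        //   postings(term_id, doc_id, weight, PRIMARY KEY (term_id, doc_id))
        $pdo = new PDO('mysql:host=localhost;dbname=search', 'user', 'pass');

        // Indexing one document: one posting row per distinct term.
        function index_document(PDO $pdo, $docId, array $termWeights)
        {
            $getTerm = $pdo->prepare('INSERT INTO terms (term) VALUES (:t)
                                      ON DUPLICATE KEY UPDATE term_id = LAST_INSERT_ID(term_id)');
            $post    = $pdo->prepare('REPLACE INTO postings (term_id, doc_id, weight)
                                      VALUES (:term_id, :doc_id, :w)');
            foreach ($termWeights as $term => $weight) {
                $getTerm->execute(array(':t' => $term));
                $termId = $pdo->lastInsertId();   // existing id is returned on duplicate
                $post->execute(array(':term_id' => $termId, ':doc_id' => $docId, ':w' => $weight));
            }
        }

        // AND query: documents that contain every one of the given terms.
        function search_and(PDO $pdo, array $terms)
        {
            $in  = implode(',', array_fill(0, count($terms), '?'));
            $sql = "SELECT p.doc_id, SUM(p.weight) AS score
                      FROM postings p JOIN terms t ON t.term_id = p.term_id
                     WHERE t.term IN ($in)
                  GROUP BY p.doc_id
                    HAVING COUNT(DISTINCT t.term_id) = ?
                  ORDER BY score DESC";
            $stmt = $pdo->prepare($sql);
            $stmt->execute(array_merge($terms, array(count($terms))));
            return $stmt->fetchAll(PDO::FETCH_ASSOC);
        }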

    Read the article

  • haystack's RealTimeSearchIndex causes django to hang on data entry

    - by lsc
    I'm using django-haystack and a Xapian backend with real-time indexing (haystack.indexes.RealTimeSearchIndex) of model data, and it works fine on my Ubuntu server. However, it causes Django to hang on data entry when I deploy the app on a RHEL5 server. Everything is hunky dory if I switch to a standard SearchIndex. Running ./manage.py rebuild_index manually works fine too. The major differences between the two setups would be the versions of Python (2.4.3 vs 2.6.4) and Xapian (1.0.4-1 vs 1.0.15). Any suggestions on what may be the problem? Nothing interesting appears in the logs, and I've tried different databases (mysql, sqlite3) and deployment methods (mod_python, wsgi) with no luck yet. I have noted the warning in the haystack docs stating that RealTimeSearchIndex is only handled gracefully with a Solr backend, however I'm running a very low-traffic site with only occasional writes, so I'm fine with some CPU overhead on writes.

    Read the article

  • Ampersand in GET, PHP

    - by NightMICU
    I have a simple form that generates a new photo gallery, sending the title and a description to MySQL and redirecting the user to a page where they can upload photos. Everything worked fine until the ampersand entered the equation. The information is sent from a jQuery modal dialog to a PHP page which then submits the entry to the database. After Ajax completes successfully, the user is sent to the upload page with a GET URL to tell the page what album it is uploading to:
        $.ajax({
            type: "POST",
            url: "../../includes/forms/add_gallery.php",
            data: $("#addGallery form").serialize(),
            success: function() {
                $("#addGallery").dialog('close');
                window.location.href = 'display_album.php?album=' + title;
            }
        });
    If the title has an ampersand, the Title field on the upload page does not display properly. Is there a way to escape ampersand for GET? Thanks
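    On the JavaScript side the usual fix is to build the URL with encodeURIComponent(title) so the & is sent as %26; on the PHP side the value then arrives already decoded in $_GET. A small sketch of the receiving page (file and field names are illustrative), escaping the title on output and re-encoding it when PHP itself builds such a link:

        <?php
        // display_album.php (sketch): PHP URL-decodes $_GET values automatically,
        // so an album sent as "Cats%20%26%20Dogs" arrives here as "Cats & Dogs".
        $album = isset($_GET['album']) ? $_GET['album'] : '';

        // Escape when echoing into HTML, otherwise a raw & can break the markup/value:
        ?>
        <input type="text" name="title" value="<?php echo htmlspecialchars($album, ENT_QUOTES, 'UTF-8'); ?>" />
        <?php
        // And use urlencode() whenever PHP itself builds such a link:
        $link = 'display_album.php?album=' . urlencode($album);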

    Read the article

  • Creating database desktop application with data manipulation in Netbeans using Java Persistence

    - by Lulu
    It's my first time using Persistence to develop a Java program, because I usually connect via JDBC. I read that for large amounts of data it is best to use persistence. I tried playing with the CRUD example in Netbeans. It's not very helpful though, because it only connects to the DB and allows addition and deletion of records. I need something that will allow me to manipulate the data, e.g. if the value in column C1 of table T1 is such, it will retrieve data from table T2. In short, I need to apply conditions before knowing what to retrieve exactly. The CRUD example already has a specific table to retrieve and only acts like a database manager. How is it possible to retrieve a specific item first and then, from this, determine the next steps to be done? I'm also using embedded JavaDB/Derby as my database (also a first for me, because I usually use remote MySQL).

    Read the article

  • Approaches for memcached sessions

    - by Industrial
    Hi everybody, I was thinking about using memcached to store sessions instead of MySQL, which seemed like a good idea at first. When it comes to the failover part of utilizing memcached servers, it's a bit worrying that my sessions will stop working if memcached goes offline. It will certainly affect my users. There are a few techniques that we already utilize to reduce failover, including having a pool of servers available to compensate in the event of downtime, utilizing sharding/consistent hashing across the server pool, and so on. We would also do some sort of graceful degradation that tells the users that something has gone wrong and that they are welcome to log in again, in the event of them being kicked out due to a memcached server failover. So how do people generally deal with these issues when storing sessions on memcached servers?
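    One way to get the graceful degradation described above is a custom session save handler that talks to the Memcached pool and simply hands back an empty session when a read fails; a rough sketch using the memcached extension and session_set_save_handler (PHP 5.3+ closures, placeholder host names):

        <?php
        // Sketch: store sessions in a memcached pool via a custom save handler.
        // If a read fails (node down / key evicted), the user just gets a fresh
        // session instead of a fatal error. Hosts below are placeholders.
        $mc = new Memcached();
        $mc->addServers(array(
            array('cache1.example.com', 11211),
            array('cache2.example.com', 11211),
        ));
        $lifetime = (int) ini_get('session.gc_maxlifetime');

        session_set_save_handler(
            function () { return true; },                                   // open
            function () { return true; },                                   // close
            function ($id) use ($mc) {                                      // read
                $data = $mc->get('sess_' . $id);
                return $data === false ? '' : $data;                        // degrade to empty session
            },
            function ($id, $data) use ($mc, $lifetime) {                    // write
                return $mc->set('sess_' . $id, $data, $lifetime);
            },
            function ($id) use ($mc) { $mc->delete('sess_' . $id); return true; }, // destroy
            function ($max) { return true; }                                // gc: memcached expiry handles it
        );

        session_start();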

    Read the article

  • Very long strings as primary keys in a database for caching

    - by Bill Zimmerman
    Hi, I am working on a web app that allows users to create dynamic PDF files based on what they enter into a form (it is not very structured data). The idea is that User 1 enters several words (an arbitrary number of words, practically capped of course), for example: A B C D E. There is no such string in the database, so I was thinking: store this string as a primary key in a MySQL database (it could be around 50-100k of text, but usually less than 200 words); generate the PDF file and create a link to it in the database; when the next user requests A B C D E, just serve the file instead of recreating it each time (a simple cache). The PDF is CPU-intensive to generate, so I am trying to cache as much as I can... My questions are: Does anyone have any alternative ideas to my approach? What will the database performance be like? Is there a better way to design the schema than using the input string as the primary key?
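    One common refinement of the schema above is to keep the raw text as an ordinary column and use a fixed-length hash of the normalised input as the primary (or unique) key, so lookups stay cheap no matter how long the text gets; a sketch with illustrative table and column names and a hypothetical generate_pdf() step:

        <?php
        // Sketch: cache generated PDFs keyed by a SHA-1 of the normalised input
        // instead of the raw 50-100k string. Table (illustrative):
        //   pdf_cache(cache_key CHAR(40) PRIMARY KEY, input_text MEDIUMTEXT,
        //             pdf_path VARCHAR(255), created_at DATETIME)
        function cached_pdf(PDO $pdo, $inputText)
        {
            // Normalise so "A  B C" and "a b c" map to the same cache entry.
            $normalised = strtolower(trim(preg_replace('/\s+/', ' ', $inputText)));
            $key = sha1($normalised);

            $stmt = $pdo->prepare('SELECT pdf_path FROM pdf_cache WHERE cache_key = :k');
            $stmt->execute(array(':k' => $key));
            $path = $stmt->fetchColumn();
            if ($path !== false) {
                return $path;                          // cache hit
            }

            $path = generate_pdf($normalised);         // hypothetical, CPU-intensive step
            $ins = $pdo->prepare('INSERT INTO pdf_cache (cache_key, input_text, pdf_path, created_at)
                                  VALUES (:k, :txt, :path, NOW())');
            $ins->execute(array(':k' => $key, ':txt' => $normalised, ':path' => $path));
            return $path;
        }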

    Read the article

  • XAMPP or MAMP on Mac OS X 10.6.2 (Snow Leopard)

    - by Steve
    I just bought a new MacBook Pro which comes with Snow Leopard 10.6.2 (Mac OS X 10.6.2). I am used to using XAMPP as my local development server on XP. Since Mac OS X is based on Unix, I was thinking of activating/installing all the necessary stuff as I would normally do on Linux. However, I am not quite ready to be playing around with the system at this point, so having an external package would be a nice temporary solution, I think. The question is whether I should go with MAMP or XAMPP. Does anybody have any suggestions? The pros and cons, I suppose. As far as I know, Mac OS X comes with Apache2 and PHP5. Would MAMP or XAMPP modify the existing Apache and PHP installation? Any comments on how I should proceed? PS: Eventually I would use the default installation of Apache and PHP, and install a binary package of MySQL, but time for development is of the essence and I don't have time to familiarize myself with Mac OS X.

    Read the article

  • Django loaddata throws ValidationError: [u'Enter a valid date in YYYY-MM-DD format.'] on a null=True field

    - by datakid
    When I run: django-admin.py loaddata ../data/library_authors.json the error is: ... ValidationError: [u'Enter a valid date in YYYY-MM-DD format.'] The model:
        class Writer(models.Model):
            first = models.CharField(u'First Name', max_length=30)
            other = models.CharField(u'Other Names', max_length=30, blank=True)
            last = models.CharField(u'Last Name', max_length=30)
            dob = models.DateField(u'Date of Birth', blank=True, null=True)

            class Meta:
                abstract = True
                ordering = ['last']
                unique_together = ("first", "last")

        class Author(Writer):
            language = models.CharField(max_length=20, choices=LANGUAGES, blank=True)

            class Meta:
                verbose_name = 'Author'
                verbose_name_plural = 'Authors'
    Note that the dob DateField has blank=True, null=True. The json file has structure:
        [
          {
            "pk": 1,
            "model": "books.author",
            "fields": { "dob": "", "other": "", "last": "Carey", "language": "", "first": "Peter" }
          },
          {
            "pk": 3,
            "model": "books.author",
            "fields": { "dob": "", "other": "", "last": "Brown", "language": "", "first": "Carter" }
          }
        ]
    The backing MySQL database has the relevant date field in the relevant table set to NULL as default and Null? = YES. Any ideas on what I'm doing wrong, or how I can get loaddata to accept null date values?

    Read the article

  • How to import text file to table with primary key as auto-increment

    - by webworm
    I have some bulk data in a text file that I need to import into a MySQL table. The table consists of two fields .. ID (integer with auto-increment) Name (varchar) The text file is a large collection of names with one name per line ... (example) John Doe Alex Smith Bob Denver I know how to import a text file via phpMyAdmin however, as far as I understand, I need to import data that has the same number of fields as the target table. Is there a way to import the data from my text file into one field and have the ID field auto-increment automatically? Thank you in advance for any help.
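    Yes - if the import lists only the Name column, MySQL fills in the auto-increment ID by itself. phpMyAdmin's import screen lets you specify the column names; the same thing in plain SQL (wrapped in PHP here only for consistency with the rest of this page, with a placeholder path and table name) looks like this:

        <?php
        // Sketch: bulk-load a one-name-per-line text file into names(id AUTO_INCREMENT, name).
        // Because only (name) is listed, MySQL assigns id automatically for every row.
        // LOCAL requires local_infile to be enabled on the server and client.
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass',
                       array(PDO::MYSQL_ATTR_LOCAL_INFILE => true));

        $pdo->exec("LOAD DATA LOCAL INFILE '/path/to/names.txt'
                    INTO TABLE names
                    LINES TERMINATED BY '\\n'
                    (name)");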

    Read the article

  • interesting Delphi open-source applications/projects (not components/component packs!)

    - by vic
    Hi, I'd like to know which interesting open-source projects written in Delphi (or FreePascal) you know of. I'm not asking for components/component packs; I know there were questions for that. Please do not duplicate answers, vote them up instead ;) Please do not post components/packs/closed-source projects. Please provide at least a word of description ;) Two examples from me: PyScripter - Python IDE written in Delphi - hosted at Google Code (*) HeidiSQL - MySQL frontend - http://www.heidisql.com/ (*) sorry, as a new user I can't post more than one link :(

    Read the article

  • ASP.NET free webcontrol to display crosstab or pivot reports with column and row grouping, subtotals

    - by dev-cu
    Hello, I want to develop some crosstab (also known as pivot) reports in ASP.NET with the x-axis and y-axis being dynamic, allowing grouping by row and column. For example: products on the y-axis and dates on the x-axis, with the body holding the number of sales of a given product on a given date; if the dates on the x-axis are years, I want subtotals for each month for a product (row) and subtotals of sales of all products for a date (column). I know there are products available to build reports, but I am using MySQL, so Reporting Services is not an option. It's not necessary for the client to build additional reports; I think the simplest solution is having a control to display such information rather than using Crystal Reports (which is not free) or something more complex. I want to know if there is a free control available to reach my goal. Well, does anybody know a control or have a different idea? Thanks in advance.

    Read the article

  • Relationship Modelling in Core Data

    - by Stevie
    Hi there, I'm fairly new to Objective-C and Core Data and have a problem designing a case where players team up one-on-one and have multiple matches that end up with a specific result. With MySQL, I would have a Player table (player primary key, name) and a Match table (player A foreign key, player B foreign key, result). Now how do I do this with Core Data? I can easily tie a player entity to a match entity using a relationship. But how do I model the inverse direction for the second player reference in the match entity?
        Player
            Name: Attribute
            Match: Relationship Match
        Match
            Result: Attribute
            PlayerA: Relationship to Player (<- inverse to Player.Match)
            PlayerB: Relationship to Player (<- inverse to ????)
    Would be great if someone could give me an idea on this! Thanks, Stevie.

    Read the article
