Search Results

Search found 14956 results on 599 pages for 'mysql dba'.


  • How To Aggregate API Data?

    - by Mindblip
    Hi, I have a system that connects to two popular APIs. I need to aggregate the data from each into a unified result set that can then be paginated. The scope of the project means that the system could end up supporting tens of APIs. Each API imposes a maximum of 50 results per request. What is the best way of aggregating this data so that it is reliable, i.e. ordered, with no duplicates, etc.? I am using the CakePHP framework on a LAMP environment; however, I think this question applies to all programming languages. My approach so far is to query the search API of each provider and then populate a MySQL table. From this the results are ordered, paginated, etc. However, my concern is performance: API communication, parsing, inserting and then reading all in one execution. Am I missing something? Does anyone have any other ideas? I'm sure this is a common problem with many alternative solutions. Any help would be greatly appreciated. Thanks, Paul
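
    A minimal sketch of the merge step, written in Python for illustration since the question is language-agnostic (fetch_page() and the field names are hypothetical stand-ins for each provider's client): pull up to 50 results per provider, normalise into a common shape, de-duplicate on a canonical key, then sort once before inserting into the MySQL table that backs pagination.

        def aggregate(providers, page_size=50):
            """Merge one page from each provider into one ordered,
            de-duplicated list. A sketch, not a drop-in solution."""
            seen = set()
            merged = []
            for provider in providers:
                # hypothetical client call; each API caps results at 50
                for item in provider.fetch_page(limit=page_size):
                    # canonical de-dup key; adjust to the real fields
                    key = (item['title'].strip().lower(), item['date'])
                    if key not in seen:
                        seen.add(key)
                        merged.append(item)
            merged.sort(key=lambda item: item['date'], reverse=True)
            return merged

    Writing the merged rows into a MySQL table, as described above, then lets the database handle ordering and pagination; the sort here just guarantees a deterministic order before insertion.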


  • Creating a customized video using Flash and XML

    - by Aaron Ladage
    The problem: I have to create a Flash video (in CS3) that will query a MySQL database and display that data at certain points in the video. The bigger problem: I'm not a Flash/ActionScript developer, so this is all very foreign to me! I've divided this project into two parts: a.) dynamically generate an XML feed from the data using PHP (using an ID number passed in the URL's query string), and b.) be able to work with it in Flash. I've got the first part working, but am pretty lost in Flash. I can parse the XML, but I'm not sure how to set the data up as variables and attach it to a video's cue points. Can anyone point me in the direction of a good tutorial or offer some advice?


  • Django loaddata throws ValidationError: [u'Enter a valid date in YYYY-MM-DD format.'] on a null=True field

    - by datakid
    When I run:

        django-admin.py loaddata ../data/library_authors.json

    the error is:

        ...
        ValidationError: [u'Enter a valid date in YYYY-MM-DD format.']

    The model:

        class Writer(models.Model):
            first = models.CharField(u'First Name', max_length=30)
            other = models.CharField(u'Other Names', max_length=30, blank=True)
            last = models.CharField(u'Last Name', max_length=30)
            dob = models.DateField(u'Date of Birth', blank=True, null=True)

            class Meta:
                abstract = True
                ordering = ['last']
                unique_together = ("first", "last")

        class Author(Writer):
            language = models.CharField(max_length=20, choices=LANGUAGES, blank=True)

            class Meta:
                verbose_name = 'Author'
                verbose_name_plural = 'Authors'

    Note that the dob DateField has blank=True, null=True. The JSON file has this structure:

        [
            {
                "pk": 1,
                "model": "books.author",
                "fields": {
                    "dob": "",
                    "other": "",
                    "last": "Carey",
                    "language": "",
                    "first": "Peter"
                }
            },
            {
                "pk": 3,
                "model": "books.author",
                "fields": {
                    "dob": "",
                    "other": "",
                    "last": "Brown",
                    "language": "",
                    "first": "Carter"
                }
            }
        ]

    The backing MySQL database has the relevant date field in the relevant table set to NULL as default and Null? = YES. Any ideas on what I'm doing wrong, or how I can get loaddata to accept null date values?
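
    A likely cause, for what it's worth: loaddata feeds "dob": "" through DateField validation, and an empty string is not a valid date even when the column allows NULL; JSON null (Python None) is. A sketch that rewrites the fixture in place (path taken from the command above):

        import json

        with open('../data/library_authors.json') as f:
            data = json.load(f)

        # rewrite empty-string dates as JSON null
        for obj in data:
            if obj['fields'].get('dob') == '':
                obj['fields']['dob'] = None

        with open('../data/library_authors.json', 'w') as f:
            json.dump(data, f, indent=2)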


  • how to implement color search with sphinx?

    - by harald
    Hello. Searching for a photo by dominant colors using MySQL is quite simple. Assuming that the r, g, b values of the most dominant colors of the photo are already stored in the database, this could be achieved, for example, by something like:

        SELECT * FROM colors
        WHERE ABS(dominant_r - :r) < :threshold
          AND ABS(dominant_g - :g) < :threshold
          AND ABS(dominant_b - :b) < :threshold

    I wonder if it's at all possible to store the colors in Sphinx and perform the querying using the Sphinx search engine? Thanks!
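
    One possibility, assuming the three dominant-color components are declared as integer attributes (sql_attr_uint) in the Sphinx index: Sphinx cannot evaluate ABS() in a query, but range filters on attributes express the same box test. A sketch using the classic sphinxapi client (index and attribute names are assumptions):

        from sphinxapi import SphinxClient

        def photos_by_color(r, g, b, threshold=20):
            client = SphinxClient()
            client.SetServer('localhost', 9312)
            # one range filter per channel, mirroring the ABS() test above
            client.SetFilterRange('dominant_r', max(0, r - threshold), r + threshold)
            client.SetFilterRange('dominant_g', max(0, g - threshold), g + threshold)
            client.SetFilterRange('dominant_b', max(0, b - threshold), b + threshold)
            # empty full-text query: match everything, let the filters choose
            return client.Query('', 'photos')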


  • How do you get the glyph for a character encoded as '&#333;' from a utf-8 encoded database field using PHP?

    - by AE
    I have a MySQL database table with a collation of 'utf8_general_ci', and the value in the field is: x & #299; bán yá wén (without the spaces). When this is converted (for example by StackOverflow's editor) it looks like this: xī bán yá wén, where the second character looks like a lower-case i with a bar over the top. In PHP, what function converts the & #299 ; entity into the ī character? I've tried using html_entity_decode($str, ENT_COMPAT, 'UTF-8'), however I get characters like the following: yÄ«n wén or zhÅ•ng wén. I'm pretty sure there's something I don't understand about the decoding, which is why I'm using the wrong function. Can anyone shed some light on how to get the single character glyph that's represented by the entity & #299 ; and similar high-number characters above 255? Many thanks, AE
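
    For what it's worth, the mangled output above is the signature of correctly decoded UTF-8 being displayed as Latin-1, rather than a failed entity decode. A short Python sketch demonstrating both halves of the mechanism (illustration only, not the PHP fix):

        import html

        # decoding the numeric entity works as expected
        print(html.unescape('x&#299; bán yá wén'))    # -> xī bán yá wén

        # but the UTF-8 bytes for 'ī' (0xC4 0xAB), read back as Latin-1,
        # reproduce the exact garbage from the question
        print('ī'.encode('utf-8').decode('latin-1'))  # -> Ä«

    If that diagnosis is right, html_entity_decode() is doing its job, and the problem is that the page or HTTP response serving the result is not declared as UTF-8.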


  • django objects.all() method issue

    - by xlione
    After I save an item using the MyModelClass.save() method of Django in one view/page, at another view I use MyModelClass.objects.all() to list all items in MyModelClass, but the newly added one is always missing on the new page. I am using Django 1.1 with MySQL. Middleware setting:

        MIDDLEWARE_CLASSES = (
            'django.middleware.common.CommonMiddleware',
            'django.contrib.sessions.middleware.SessionMiddleware',
            'django.contrib.auth.middleware.AuthenticationMiddleware',
            'django.middleware.locale.LocaleMiddleware',
        )

    My model:

        class Company(models.Model):
            name = models.CharField(max_length=500)
            description = models.CharField(max_length=500, null=True)

    The addcompany view:

        def addcompany(request):
            if request.POST:
                form = AddCompanyForm(request.POST)
                if form.is_valid():
                    companyname = form.cleaned_data['companyname']
                    c = Company(name=companyname, description='description')
                    c.save()
                    return HttpResponseRedirect('/admins/')
            else:
                form = AddCompanyForm()
            return render_to_response('user/addcompany.html', {'form': form},
                                      context_instance=RequestContext(request))

    After this page, in another view, I use this form to list all companies:

        class CompanyForm(forms.Form):
            companies = ((0, ' '),)
            for o in CcicCompany.objects.all():
                x = o.id, o.name
                companies += (x,)
            company = forms.ChoiceField(choices=companies, label='Company Name')

    but the recently added one is missing. The transaction should be successful, since after I reboot the Apache server I can see the newly added company name. Thanks for any help...
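
    One likely explanation: the for loop in CompanyForm runs once, when the class body is executed at import time, so the choices tuple is frozen until the process restarts, which would explain why an Apache reboot makes the new company appear. A sketch of the usual remedy, a field whose queryset is lazily re-evaluated each time the form is rendered (model import path assumed):

        from django import forms

        from myapp.models import Company   # import path assumed

        class CompanyForm(forms.Form):
            # the queryset is re-evaluated on each render, so newly
            # saved companies appear without restarting the server
            company = forms.ModelChoiceField(
                queryset=Company.objects.all(),
                label='Company Name')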


  • Oracle Schema Design: Separate Schema with I/O Overhead?

    - by Guru
    We are designing a database schema for a new system based on Oracle 11gR1. We have identified a main schema which would have close to 100 tables; these will be accessed from the front-end Java application. We have a requirement to audit the values which get changed in close to 50 tables, and this has to be done for every row. Which means it is possible that, for a single row in MYSYS.T1, there might be 50 (or more) rows in the MYSYS_AUDIT.T1_AUD table. We might be keeping the old values of every column entry as well as the new values from T1. The DBA advised against this method, because he said a separate schema meant an extra I/O for every operation. Basically, the AUDIT schema would be used only to do some analysis and to enter values (thus SELECT and INSERT). Is it true that "a separate schema means an extra I/O"? I could not find a justification. It appears logical to me, as the AUDIT data should not be tampered with, so a separate schema. Also, we designed a separate schema for archiving some tables from MYSYS. From MYSYS_ARC the tables might be backed up onto tape or deleted after sufficient time. A few stats: some tables (close to 20-30) in the MYSYS schema could grow to around 50M rows. We have asked for a total disk space of 4 TB. The MYSYS_AUDIT schema might hold 10 times the data of MYSYS, but we won't keep it for more than 3 months.

    Questions:

    1. Given all this, can you suggest any improvements?
    2. Does a separate schema affect disk I/O (one extra I/O for every schema)?
    3. Any general suggestions?

    Figure:

        +-------------------+              +-------------------+
        | MYSYS             |              | MYSYS_AUDIT       |
        |                   |              |                   |
        |   1. T1           |              |   1. T1_AUD       |
        |   2. T2           |              |   2. T2_AUD       |
        |   3. T3           |------------->|   3. T3_AUD       |
        |   4. T4           | (SELECT,     |   4. T4_AUD       |
        |   .               |  INSERT)     |   .               |
        |   .               |              |   .               |
        |   .               |              |   .               |
        | 100. T100         |              |  50. T50_AUD      |
        +-------------------+              +-------------------+
                 |
                 | (INSERT)
                 |
                 v
        +-------------------+
        | MYSYS_ARC         |
        |                   |
        |   1. T1_ARC       |
        |   2. T2_ARC       |
        |   3. T3_ARC       |
        |   4. T4_ARC       |
        |   .               |
        |   .               |
        |   .               |
        | 100. T100_ARC     |
        +-------------------+

    Apart from this, we have two more schemas with read-only rights, but they are mainly for ad-hoc purposes and we don't mind the performance on them.


  • Real time web services

    - by daliz
    Hi everybody, I have a little question (maybe the answer could require a book) about web services and server-side programming. But first, a little preamble. Recently we have seen new kinds of applications and games using some kind of real-time interaction with a database or, more generally, with other users. I'm talking about shared drawing canvases, games, or simple chats, or the Android app "a World of Photo", where in real time you see who is online, share your photos, etc. Now my question: are all these apps based on classic TCP client/server architectures, or is there a way to make them in a simpler way, on a web platform like LAMP? What I'm asking, in other words, is: can PHP+MySQL (or JSP, or RoR, or any other server language) provide a way to make online users communicate in real time and share data? Is there a way to do that without the ugly and heavy mechanism of temporary tables? Thank you! I hope I've been clear.


  • Best practices for using memcached in Rails?

    - by Matt
    Hello everybody, as database transactions in our app are getting more and more time-consuming, we have started to use memcached to reduce the number of queries passed to MySQL. All in all, it works fine and really saves a lot of time. But as caching "silently appeared" as a workaround to give the app more juice, a lot of our models now contain code like this:

        def self.all_cached
          Rails.cache.fetch('object_name') do
            find(:all, :include => [associations])
          end
        end

    This is becoming more and more of a pain, as filling and flushing the cache happens in several classes across the application. Now, I was wondering if there is a better way to abstract the memcached logic, to make it more powerful and easy to use across all the models that need it? I was thinking about having some kind of memcached module which is included in all needed models. But before playing around, I thought: let's ask the experts first :-) Thanks, Matt


  • When I run Django on Dreamhost using SQLite, why do I get an OperationalError telling me that a table doesn't exist?

    - by Paul D. Waite
    I had a Django site running on Dreamhost. Although I used SQLite when developing locally, I initially used MySQL on Dreamhost, because that’s what the wiki page said to do, and because if I’m using an ORM, I might as well take advantage of it by running against a different database. After a while, I switched the settings on the server to use SQLite, to make it easier to keep my development database in sync with the server one. python manage.py syncdb worked on the server, but when I tried to access the site, I got an OperationalError. The Django error page said that one of my tables didn’t exist. I checked the database using sqlite on the command line on the server, and using python manage.py shell, and both worked fine.
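
    For reference, one frequent cause of exactly this symptom is a relative DATABASE_NAME: SQLite resolves it against the web server's working directory, so the site quietly opens (and auto-creates) a different, empty database file from the one syncdb and the shell used. A hedged sketch in Django 1.x-style settings (file name is a placeholder):

        # settings.py
        import os

        PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

        DATABASE_ENGINE = 'sqlite3'
        # an absolute path guarantees every process opens the same file
        DATABASE_NAME = os.path.join(PROJECT_ROOT, 'dev.db')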


  • haystack's RealTimeSearchIndex causes django to hang on data entry

    - by lsc
    I'm using django-haystack and a Xapian backend with real-time indexing (haystack.indexes.RealTimeSearchIndex) of model data, and it works fine on my Ubuntu server. However, it causes Django to hang upon data entry when I deploy the app on a RHEL5 server. Everything is hunky-dory if I switch to a standard SearchIndex. Running ./manage.py rebuild_index manually works fine too. The major differences between the two setups are the versions of Python (2.4.3 vs 2.6.4) and Xapian (1.0.4-1 vs 1.0.15). Any suggestions on what the problem may be? Nothing interesting appears in the logs, and I've tried different databases (MySQL, SQLite3) and deployment methods (mod_python, WSGI) with no luck yet. I have noted the warning in the haystack docs stating that RealTimeSearchIndex is only handled gracefully with a Solr backend; however, I'm running a very low-traffic site with only occasional writes, so I'm fine with some CPU overhead on writes.


  • How to extract common / significant phrases from a series of text entries

    - by arronsky
    I have a series of text items: raw HTML from a MySQL database. I want to find the most common phrases in these entries (not the single most common phrase, and ideally, not enforcing word-for-word matching). My example is any review on Yelp.com that shows 3 snippets from hundreds of reviews of a given restaurant, in the format "Try the hamburger" (in 44 reviews), e.g. the "Review Highlights" section of this page: http://www.yelp.com/biz/sushi-gen-los-angeles/ I have NLTK installed and I've played around with it a bit, but am honestly overwhelmed by the options. This seems like a rather common problem, and I haven't been able to find a straightforward solution by searching here. Thanks in advance for any help.
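
    As a baseline before reaching for NLTK's heavier machinery, plain n-gram counting over the stripped text already yields "most common phrases". A small sketch (the texts list is a placeholder for rows fetched from the database):

        import re
        from collections import Counter

        def ngrams(tokens, n):
            """All contiguous n-word sequences from a token list."""
            return zip(*(tokens[i:] for i in range(n)))

        texts = ["Try the hamburger, best hamburger in town",
                 "You have to try the hamburger"]   # placeholder rows

        counts = Counter()
        for text in texts:
            tokens = re.findall(r"[a-z']+", text.lower())
            for n in (2, 3):                        # bigrams and trigrams
                counts.update(ngrams(tokens, n))

        for phrase, freq in counts.most_common(5):
            print(' '.join(phrase), freq)

    NLTK's BigramCollocationFinder layers statistical scoring (PMI, chi-squared) on the same idea, and stemming the tokens first relaxes the exact word-for-word constraint.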


  • .NET Data Providers - How do I determine what they can do?

    - by rbellamy
    I have code which could be executed using a provider that doesn't support transactions, or doesn't support nested transactions. How would I programmatically determine such support? For example, the code below throws a System.InvalidOperationException on the final commit when using the MySQL .NET Connector, but works fine for MSSQL. I'd like to be able to alter the code to accommodate various providers, without having to hardcode tests based on the type of provider (e.g. I don't want to have to do if(typeof(connection) == "some provider name")).

        using (IDbConnection connection = Use.Connection(ConnectionStringName))
        using (IDbTransaction transaction = connection.BeginTransaction())
        {
            using (currentCommand = connection.CreateCommand())
            {
                using (IDbCommand cmd = connection.CreateCommand())
                {
                    currentCommand = cmd;
                    currentCommand.Transaction = transaction;
                    currentCommand.ExecuteNonQuery();
                }
                if (PipelineExecuter.HasErrors)
                {
                    transaction.Rollback();
                }
                else
                {
                    transaction.Commit();
                }
            }
            transaction.Commit();
        }


  • Sequence numbers best practice

    - by Abdullah Jibaly
    What are the best practices or well-known methods for implementing sequence numbers for business entities such as invoices, purchase orders, job numbers, etc.? I want to be able to save the latest value in the database and be able to set it programmatically. Is it OK to use a table for this purpose that holds a (SEQUENCE_NAME, SEQUENCE_NUMBER) tuple? I know some databases have a first-class sequence type, but others (e.g., MySQL) do not, so it's not something I want to rely on. If a table is used to hold these sequences, what is the right way to get and increment them in a synchronized fashion to ensure no data inconsistencies arise?
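
    For the table-based approach, the usual trick is to make the read-and-increment a single atomic unit, e.g. by locking the sequence row inside the transaction that consumes the number. A sketch against the Python DB-API, with table and column names taken from the tuple above:

        def next_value(conn, sequence_name):
            """Fetch-and-increment under a row lock; assumes a
            sequences(sequence_name, sequence_number) table and an
            engine with row locking (InnoDB, PostgreSQL, Oracle)."""
            cur = conn.cursor()
            cur.execute(
                "SELECT sequence_number FROM sequences"
                " WHERE sequence_name = %s FOR UPDATE", (sequence_name,))
            value = cur.fetchone()[0] + 1
            cur.execute(
                "UPDATE sequences SET sequence_number = %s"
                " WHERE sequence_name = %s", (value, sequence_name))
            conn.commit()
            return value

    Concurrent callers serialise on the row lock, so no two transactions can be handed the same number; the cost is that the sequence row becomes a hotspot under heavy write load.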


  • Array of previous weeks

    - by azz0r
    Hello, I am trying to create an array of weeks for users to select, to view stats week on week. What I'm looking for is an array holding the timestamps of each week's start and end (Monday 00:00 to Sunday 23:59) that I can then use in MySQL queries. Has anyone got any code that might assist? I was thinking of doing something like:

        $number_of_weeks = 4;
        $week_array = array();
        // midnight at the start of the current week's Monday
        $monday = strtotime('-' . (date('N') - 1) . ' days', strtotime('today'));
        for ($i = 0; $i < $number_of_weeks; $i++) {
            $start = $monday - ($i * 604800);              // step back one week
            $week_array[$i]['start'] = date('Y-m-d H:i', $start);
            $week_array[$i]['end']   = date('Y-m-d H:i', $start + 604800 - 60);
        }


  • Creating database desktop application with data manipulation in Netbeans using Java Persistence

    - by Lulu
    It's my first time using persistence in developing a Java program, because I usually connect via JDBC. I read that for large amounts of data, it is best to use persistence. I tried playing with the CRUD example in NetBeans. It's not very helpful, though, because it only connects to the DB and allows addition and deletion of records. I need something that will allow me to manipulate the data, e.g. if the value from column C1 of table T1 is such-and-such, it will retrieve data from table T2. In short, I need to apply conditions before knowing what exactly to retrieve. The CRUD example already has a specific table to retrieve from and only acts like a database manager. How is it possible to retrieve a specific item first and, based on that, determine the next steps to take? I'm also using embedded JavaDB/Derby as my database (also my first time using it, because I usually use remote MySQL).


  • Inverted index in a search engine

    - by Ayman
    Hello there, I'm trying to write some code for a small application that searches text from files. Files should be crawled, and I need to build an inverted index to speed up searches. My problem is that I have some ideas about how the parser should work, and I'm willing to implement AND, NOT and OR in the query. However, I couldn't figure out how my index should be structured. I have never created an inverted index, so if anybody could suggest a feasible way to do it I would be very grateful. I do know in theory how it works, but my problem is that I have absolutely no idea how to make it happen in MySQL. I also need to give the keywords being indexed a weight. Thank you so much.
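
    In memory, an inverted index is just a map from term to the set of documents containing it, and AND, OR and NOT fall out as set operations; in MySQL the equivalent is a (term, doc_id) table with an index on term, where AND becomes a self-join and OR a UNION. A small sketch of the in-memory version:

        import re
        from collections import defaultdict

        documents = {1: "the quick brown fox",      # placeholder corpus
                     2: "the lazy dog"}

        index = defaultdict(set)                    # term -> doc ids
        for doc_id, text in documents.items():
            for term in re.findall(r"\w+", text.lower()):
                index[term].add(doc_id)

        all_docs = set(documents)
        print(index['the'] & index['fox'])          # AND -> {1}
        print(index['quick'] | index['dog'])        # OR  -> {1, 2}
        print(all_docs - index['lazy'])             # NOT -> {1}

    Per-keyword weights can ride along by storing (doc_id, weight) pairs instead of bare ids, e.g. a term count, or a boost when the term appears in a title.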


  • Ampersand in GET, PHP

    - by NightMICU
    I have a simple form that generates a new photo gallery, sending the title and a description to MySQL and redirecting the user to a page where they can upload photos. Everything worked fine until the ampersand entered the equation. The information is sent from a jQuery modal dialog to a PHP page, which then submits the entry to the database. After the Ajax call completes successfully, the user is sent to the upload page with a GET parameter in the URL to tell the page what album it is uploading to:

        $.ajax({
            type: "POST",
            url: "../../includes/forms/add_gallery.php",
            data: $("#addGallery form").serialize(),
            success: function() {
                $("#addGallery").dialog('close');
                window.location.href = 'display_album.php?album=' + title;
            }
        });

    If the title has an ampersand, the Title field on the upload page does not display properly. Is there a way to escape the ampersand for GET? Thanks


  • setting up/installing/configuring nginx LEMP stack on fresh VPS server

    - by grant tailor
    I need some help setting up, installing and configuring an nginx LEMP stack (I will also need MySQL and PHP) on a fresh new VPS I have. The specs of the CentOS 5.7 VPS are 2GB DDR3 ECC RAM (4GB burst), 1 core at 1.5GHz (3GHz burst), 100GB RAID 10 storage and unmetered bandwidth at 100Mbps, all for a whopping $25/month (unbeatable, yeah I know :). Anyway, I have followed this LEMP stack guide on Linode: http://library.linode.com/lemp-guides/centos-5 but basically what I want is to be able to host multiple websites on this web server after everything is set up. I am used to the DirectAdmin control panel on another server and want to have things set up so I can host multiple websites, mostly WordPress and Drupal themes. Let's say 10 websites on this nginx web server. So can someone please help me with what I need to do to take "full" advantage of nginx's power and performance, while being able to easily manage these multiple websites (WordPress and Drupal themes)? Thanks.
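
    Not a complete answer, but for the host-multiple-sites part: nginx does this with one server block per site, selected by server_name. A minimal sketch (domain, document root and the PHP-FPM address are placeholders):

        # /etc/nginx/conf.d/example.com.conf -- one of these per site
        server {
            listen 80;
            server_name example.com www.example.com;
            root /var/www/example.com/public;
            index index.php index.html;

            location / {
                # front-controller pattern used by WordPress and Drupal
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9000;   # PHP-FPM, address assumed
            }
        }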


  • PHP array taking up too much memory

    - by Dylan Taylor
    I have a multidimensional array. The array itself is fine. My problem is that the script takes up a monstrous amount of memory, and since I'm running this on my MAMP install on my iBook G4, my computer freezes up. Below is the full script.

        $query = "SELECT * FROM posts ORDER BY id DESC LIMIT 10";
        $result = mysql_query($query);
        $posts = array();
        while ($row = mysql_fetch_array($result)) {
            $posts[$row["id"]]['post_id'] = $row["id"];
            $posts[$row["id"]]['post_title'] = $row["title"];
            $posts[$row["id"]]['post_text'] = $row["text"];
            $posts[$row["id"]]['post_tags'] = $row["tags"];
            $posts[$row["id"]]['post_category'] = $row["category"];
        }
        foreach ($posts as $post) {
            echo $post["post_id"];
        }

    Is there a workaround that still achieves my goal (to export the MySQL query rows to an array)? -Dylan


  • When to use CouchDB vs RDBMS

    - by Andrew Whitehouse
    I am looking at CouchDB, which has a number of appealing features over relational databases, including:

    - an intuitive REST/HTTP interface
    - easy replication
    - data stored as documents, rather than normalised tables

    I appreciate that this is not a mature product, so it should be adopted with caution, but I am wondering whether it is actually a viable replacement for an RDBMS (in spite of the intro page saying otherwise: http://couchdb.apache.org/docs/intro.html). Under what circumstances would CouchDB be a better choice of database than an RDBMS (e.g. MySQL), e.g. in terms of scalability, design and development time, reliability and maintenance? Are there still cases where an RDBMS is clearly the right choice? Is this an either-or choice, or is a hybrid solution more likely to emerge as best practice?


  • Approaches for memcached sessions

    - by Industrial
    Hi everybody, I was thinking about using memcached to store sessions instead of MySQL, which seemed like a good idea at first. When it comes to the failover part of utilizing memcached servers, it's a bit worrying that my sessions will stop working if a memcached server goes offline. That would certainly affect my users. There are a few techniques we already utilize to reduce the impact of failure, including having a pool of servers available to compensate in the event of downtime, utilizing sharding/consistent hashing across the server pool, and so on. We would also do some sort of graceful degradation that tells the users that something has gone wrong and that they are welcome to log in again, in the event of being kicked out due to a memcached server failure. So how do people generally deal with these issues when storing sessions on memcached servers?


  • Apache Custom Log Format to track complete file downloads

    - by Shishant
    Hello, I am trying to write a reward system wherein users will be given reward points if they download complete files, so what should my log format be? After searching a lot, this is what I understand; it's my first time and I haven't done custom logs before. First of all, which file should I edit for custom logs? I can't find this. I am using an Ubuntu server with the default Apache, PHP5 and MySQL installation.

        # I use these commands and they work fine
        nano /etc/apache2/apache2.conf
        /etc/init.d/apache2 restart

    I think this is what I need to do for my purpose:

        LogLevel notice
        LogFormat "%f %u %x %o" rewards
        CustomLog /var/www/logs/rewards_log rewards

    Is this complete as it is, or is something missing? Is there any particular location where I need to add it? One more thing: is %o the size of the file that was sent? And is it possible to log only files from a particular directory, or only files larger than 10 MB? Thank you.
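
    A hedged sketch of what that could look like (not drop-in configuration: %O requires mod_logio to be enabled, and the paths are examples). %X records the connection status when the response completes ('X' means the client aborted), and comparing the bytes actually sent (%O) against the file's size on disk is one way to detect complete downloads; SetEnvIf restricts the log to a particular directory:

        # in apache2.conf or a vhost -- a sketch, assuming mod_logio
        SetEnvIf Request_URI "^/downloads/" download
        LogFormat "%f %u %X %O" rewards
        CustomLog /var/log/apache2/rewards_log rewards env=download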


  • Unable to find sphinx.yml file

    - by Nikhil Garg
    I am running Rails 2.2.3 with MySQL as the database and Thinking Sphinx installed as a plugin. I am having two problems: 1) I am unable to find the file config/sphinx.yml; I only have a config/development.sphinx.conf. 2) I have specified the min_infix_len and enable_star properties in the define_index method of my model. I have also checked the development.sphinx.conf file, and these properties are correctly set there. But I am not getting any infix search results. Please help.
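
    On problem 1, as far as I can tell, Thinking Sphinx does not generate config/sphinx.yml for you; development.sphinx.conf is the generated file, while sphinx.yml is one you create by hand for per-environment settings. A sketch of what it might contain for infix searching (values are examples):

        # config/sphinx.yml -- created manually
        development:
          min_infix_len: 2
          enable_star: 1

    After changing index settings, the index needs to be rebuilt (rake thinking_sphinx:rebuild in most versions) before infix or star queries return results.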


  • XHR FF POST size limit

    - by usurper
    Hi, my XHR POST request is cut off. When I try to reload my page, information is missing. Firebug shows the following message: "... Firebug request size limit has been reached by Firebug. ..." My question is: what are my options? Would it work if I declared the Content-Length in the header? I added a line to my Apache config file and restarted it:

        LimitRequestBody 0

    I also increased the maximum transfer size in the MySQL config file. Or is it a browser issue? The only solution I could think of was to cut the data into pieces and transmit the array one piece at a time, but I don't like this idea. The content length is 91691 according to Firebug. Any suggestions?

