Search Results

Search found 10602 results on 425 pages for 'media queries'.

Page 174 of 425

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but that happens at most once a day. Stored entries can be retrieved by users. New entries are inserted quickly and regularly, so the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up: two tables; lots of INSERTs; no UPDATEs; some DELETEs, at most once a day; some user-generated SELECTs with JOINs; a huge data set. What would an optimal server configuration (software and hardware; I assume, for example, that RAID 10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that lets SELECT queries complete in a reasonably small amount of time. I can provide more information about the current setup (tables, indexes, ...) if needed.
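
    On the schema side, the biggest win for this kind of workload is usually an index that matches the tag lookup driving the JOINs. A minimal sketch, assuming hypothetical table and column names (entries, tags, entry_id, name, value) since the question does not give them:

        -- Tag searches filter on (name, value); indexing them together with the
        -- foreign key back to entries lets the planner drive the JOIN from the tag side.
        CREATE INDEX idx_tags_name_value ON tags (name, value, entry_id);

        -- Typical user search: entries carrying a given tag value, newest first.
        SELECT e.id, e.content, e.inserted_at
        FROM entries e
        JOIN tags t ON t.entry_id = e.id
        WHERE t.name = 'severity' AND t.value = 'error'
        ORDER BY e.inserted_at DESC
        LIMIT 100;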

    Read the article

  • PHP Check slave status without mysql_connect timeout issues

    - by Jonathon
    I have a web app with a master MySQL db and four slave dbs. I want to handle all (or almost all) read-only (SELECT) queries from the slaves. Our load balancer sends the user to one of the slave machines automatically, since they are also running Apache/PHP and serving web pages. I am using an include file to set up the connections to the databases, such as:

        //for master server (i.e. - UPDATE/INSERT/DELETE statements)
        $Host = "10.0.0.x";
        $User = "xx";
        $Password = "xx";
        $Link = mysql_connect( $Host, $User, $Password );
        if( !$Link ) {
            die( "Master database is currently unavailable. Please try again later." );
        }

        //this connection can be used for READ-ONLY (i.e. - SELECT statements) on the localhost
        $Host_Local = "localhost";
        $User_Local = "xx";
        $Password_Local = "xx";
        $Link_Local = mysql_connect( $Host_Local, $User_Local, $Password_Local );

        //fail back to master if the slave db is down
        if( !$Link_Local ) {
            $Link_Local = mysql_connect( $Host, $User, $Password );
        }

    I then use $Link for all update queries and $Link_Local as the connection for SELECT statements. Everything works fine until the slave server database goes down. If the local db is down, the $Link_Local = mysql_connect() call takes at least 30 seconds before it gives up on trying to connect to localhost and returns to the script. This causes a huge backlog of page serves and basically shuts down the system (due to the extremely slow response time). Does anyone know of a better way to handle connections to slave servers via PHP? Or is there some kind of timeout setting that could be used to stop the mysql_connect call after 2-3 seconds? Thanks for the help. I searched the other mysql_connect threads, but didn't see any that addressed this issue.

    Read the article

  • WP7 BarcodeManager - Invalid cross-thread access

    - by rpf
    I'm trying to use Windows Phone 7 Silverlight ZXing Barcode Scanning Library but I'm having some problems. I'm using a background worker to check the image, but when I do this:

        WP7BarcodeManager.ScanBarcode(this.Image, BarcodeResults_Finished);

    The code throws an exception: Invalid cross-thread access. Here is my code...

        void photoChooserTask_Completed(object sender, PhotoResult e)
        {
            if (e.TaskResult == TaskResult.OK)
            {
                ShowImage();
                System.Windows.Media.Imaging.BitmapImage bmp = new System.Windows.Media.Imaging.BitmapImage();
                bmp.SetSource(e.ChosenPhoto);
                imgCapture.Source = bmp;
                this.Image = new BitmapImage();
                this.Image.SetSource(e.ChosenPhoto);
                progressBar.Visibility = System.Windows.Visibility.Visible;
                txtStatus.Visibility = System.Windows.Visibility.Collapsed;
                worker.RunWorkerAsync();
            }
            else
                ShowMain();
        }

        void worker_DoWork(object sender, DoWorkEventArgs e)
        {
            try
            {
                Thread.Sleep(2000);
                WP7BarcodeManager.ScanMode = com.google.zxing.BarcodeFormat.UPC_EAN;
                WP7BarcodeManager.ScanBarcode(this.Image, BarcodeResults_Finished);
            }
            catch (Exception ex)
            {
                Debug.WriteLine("Error processing image.", ex);
            }
        }

    How can I solve this?

    Read the article

  • Tilting web browser on windows phone 7

    - by marcus
    Hi guys, I'm working with the Windows Phone 7 emulator. I have a web browser control which navigates to localhost. The problem I faced is that when I tilt the Windows Phone 7 emulator 90 degrees to the right, the screen doesn't rotate. Can anyone advise on how to make it do so?

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Net;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Documents;
        using System.Windows.Input;
        using System.Windows.Media;
        using System.Windows.Media.Animation;
        using System.Windows.Shapes;
        using Microsoft.Phone.Controls;

        namespace DSP
        {
            public partial class MainPage : PhoneApplicationPage
            {
                // Constructor
                public MainPage()
                {
                    InitializeComponent();
                }

                private void ContentPanel_Loaded(object sender, RoutedEventArgs e)
                {
                    MessageBox.Show("Loading website. This might take a few seconds...");
                    webBrowser1.Navigate(new Uri("http://localhost/Liweiyi_fyp_082648y/homepage.html", UriKind.Absolute));
                }

                private void webBrowser1_Loaded(object sender, RoutedEventArgs e)
                {
                }
            }
        }

    Read the article

  • how to write this query using joins?

    - by aquero
    Hi, I have a table campaign which holds details of campaign mails sent:

        campaign_table:
        campaign_id   campaign_name   flag
        1             test1           1
        2             test2           1
        3             test3           0

    and another table campaign_activity which holds details of campaign activities:

        campaign_activity:
        campaign_id   is_clicked   is_opened
        1             0            1
        1             1            0
        2             0            1
        2             1            0

    I want to get all campaigns with flag value 1, together with the number of rows where is_clicked is 1 and the number of rows where is_opened is 1, in a single query, i.e.:

        campaign_id   campaign_name   numberofclicks   numberofopens
        1             test1           1                1
        2             test2           1                1

    I did this using sub-queries:

        SELECT c.campaign_id, c.campaign_name,
               (SELECT count(campaign_id) FROM campaign_activity
                WHERE campaign_id = c.campaign_id AND is_clicked = 1) AS numberofclicks,
               (SELECT count(campaign_id) FROM campaign_activity
                WHERE campaign_id = c.campaign_id AND is_opened = 1) AS numberofopens
        FROM campaign c
        WHERE c.flag = 1

    But people say that using sub-queries is not a good coding convention and that you should use joins instead. I don't know how to get the same result using a join. I consulted some of my colleagues and they say it's not possible to use a join in this situation. Is it possible to get the same result using joins? If yes, please tell me how.
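
    For reference, one possible join-based rewrite is sketched below (MySQL syntax; it relies on SUM over a boolean expression counting 1 per matching row, with COALESCE so campaigns that have no activity show 0 rather than NULL):

        SELECT c.campaign_id,
               c.campaign_name,
               COALESCE(SUM(ca.is_clicked = 1), 0) AS numberofclicks,
               COALESCE(SUM(ca.is_opened  = 1), 0) AS numberofopens
        FROM campaign c
        LEFT JOIN campaign_activity ca ON ca.campaign_id = c.campaign_id
        WHERE c.flag = 1
        GROUP BY c.campaign_id, c.campaign_name;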

    Read the article

  • SELECT set of most recent id, amount FROM table, where id occurs many times

    - by Jon Cram
    I have a table recording the amount of data transferred by a given service on a given date. One record is entered daily for a given service. I'd like to be able to retrieve the most recent amount for a set of services. Example data set:

        serviceId | amount | date
        ----------+--------+-----------
        1         | 8      | 2010-04-12
        2         | 11     | 2010-04-12
        2         | 14     | 2010-04-11
        3         | 9      | 2010-04-11
        1         | 6      | 2010-04-10
        2         | 5      | 2010-04-10
        3         | 22     | 2010-04-10
        4         | 17     | 2010-04-19

    Desired response (service ids 1, 2, 3):

        serviceId | amount | date
        ----------+--------+-----------
        1         | 8      | 2010-04-12
        2         | 11     | 2010-04-12
        3         | 9      | 2010-04-11

    Desired response (service ids 2, 4):

        serviceId | amount | date
        ----------+--------+-----------
        2         | 11     | 2010-04-12
        4         | 17     | 2010-04-19

    This is equivalent to running the following once per serviceId:

        SELECT serviceId, amount, date
        FROM table
        WHERE serviceId = <given serviceId>
        ORDER BY date DESC
        LIMIT 0,1

    I understand how I can retrieve the data I want in X queries. I'm interested to see how I can retrieve the same data using either a single query or, at the very least, fewer than X queries. I'm very interested to see what might be the most efficient approach. The table currently contains 28809 records. I appreciate that there are other questions that cover selecting the most recent set of records. I have examined three such questions but have been unable to apply the solutions to my problem.
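
    One common single-query approach is to join the table to a derived table that picks the latest date per serviceId. A sketch, keeping the table and column names from the question (they may need quoting, since "table" is a reserved word in some dialects) and assuming at most one row per service per date, as stated:

        SELECT t.serviceId, t.amount, t.date
        FROM table t
        JOIN (
            SELECT serviceId, MAX(date) AS max_date
            FROM table
            WHERE serviceId IN (1, 2, 3)
            GROUP BY serviceId
        ) latest
          ON latest.serviceId = t.serviceId
         AND latest.max_date  = t.date;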

    Read the article

  • How to render all records from a nested set into a real html tree

    - by Christoph Schiessl
    I'm using the awesome_nested_set plugin in my Rails project. I have two models that look like this (simplified):

        class Customer < ActiveRecord::Base
          has_many :categories
        end

        class Category < ActiveRecord::Base
          belongs_to :customer
          # Columns in the categories table: lft, rgt and parent_id
          acts_as_nested_set :scope => :customer_id
          validates_presence_of :name
          # Further validations...
        end

    The tree in the database is constructed as expected. All the values of parent_id, lft and rgt are correct. The tree has multiple root nodes (which is of course allowed in awesome_nested_set). Now, I want to render all categories of a given customer in a correctly sorted tree-like structure: for example, nested <ul> tags. This wouldn't be too difficult, but I need it to be efficient (the fewer SQL queries the better). Update: Figured out that it is possible to calculate the number of children for any given node in the tree without further SQL queries: number_of_children = (node.rgt - node.lft - 1)/2. This doesn't solve the problem but it may prove to be helpful.
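
    With a nested set, the whole tree for one customer can be pulled in a single query ordered by lft, and the depth of each row recovered in the same query with the standard self-join trick, which is enough to emit nested <ul> tags while iterating. A sketch of that query in SQL (the categories table and customer_id scope are taken from the model above; the customer id 42 is just a placeholder):

        SELECT node.id, node.name, node.lft, node.rgt,
               COUNT(parent.id) - 1 AS depth
        FROM categories AS node
        JOIN categories AS parent
          ON node.lft BETWEEN parent.lft AND parent.rgt
         AND parent.customer_id = node.customer_id
        WHERE node.customer_id = 42
        GROUP BY node.id, node.name, node.lft, node.rgt
        ORDER BY node.lft;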

    Read the article

  • MySQL: Limit rows linked to each joined row

    - by SolidSnakeGTI
    Hello, Specifications: MySQL 4.1+. I have a situation that requires a certain result set from a MySQL query; let's look at the current query first and then ask my question:

        SELECT thread.dateline AS tdateline, post.dateline AS pdateline, MIN(post.dateline)
        FROM thread AS thread
        LEFT JOIN post AS post ON(thread.threadid = post.threadid)
        LEFT JOIN forum AS forum ON(thread.forumid = forum.forumid)
        WHERE post.postid != thread.firstpostid
          AND thread.open = 1
          AND thread.visible = 1
          AND thread.replycount >= 1
          AND post.visible = 1
          AND (forum.options & 1)
          AND (forum.options & 2)
          AND (forum.options & 4)
          AND forum.forumid IN(1,2,3)
        GROUP BY post.threadid
        ORDER BY tdateline DESC, pdateline ASC

    As you can see, I mainly need to select the dateline of threads from the 'thread' table, in addition to the dateline of the second post of each thread, all under the conditions you see in the WHERE clause. Since each thread has many posts, and I need only one result per thread, I've used the GROUP BY clause for that purpose. This query will return only one post's dateline with its related unique thread. My questions are: 1) How do I limit the returned threads per forum? Suppose I need only 5 threads, as a maximum, to be returned for each forum declared in the WHERE clause 'forum.forumid IN(1,2,3)'; how can this be achieved? 2) Are there any recommendations for optimizing this query (of course after solving the first point)? Notes: I prefer not to use sub-queries, but if it's the only solution available I'll accept it. Double queries are not recommended. I'm sure there's a smart solution for this situation. Appreciated advice in advance :)
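
    For the "at most 5 threads per forum" part, MySQL 4.1 has no window functions, so the usual pattern is a correlated count of newer threads in the same forum (which the question would rather avoid, but it is the standard workaround). A sketch of just that piece, which could then be joined back into the larger query:

        SELECT t.*
        FROM thread t
        WHERE t.forumid IN (1,2,3)
          AND ( SELECT COUNT(*)
                FROM thread newer
                WHERE newer.forumid = t.forumid
                  AND newer.dateline > t.dateline ) < 5
        ORDER BY t.forumid, t.dateline DESC;

    This keeps the 5 most recent threads per forum, assuming datelines are unique within a forum (ties would let extra rows through).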

    Read the article

  • PHP issue json_decode echo array

    - by dfdf
    <?php $string = file_get_contents("http://www.reddit.com/r/news.json"); $array = json_decode($string, true); echo $array['data'][0]['children'][0]['data'][0]['title'][0]; ?> I've got a problem- the code doesn't echo anything. I'm kinda new to json_decode, so any help is appreciated :-) Edit: As a response to a comment, here's what print_r results to: Array ( [kind] => Listing [data] => Array ( [modhash] => [children] => Array ( [0] => Array ( [kind] => t3 [data] => Array ( [domain] => syracuse.com [banned_by] => [media_embed] => Array ( ) [subreddit] => news [selftext_html] => [selftext] => [likes] => [secure_media] => [link_flair_text] => [id] => 1qdtqr [secure_media_embed] => Array ( ) [clicked] => [stickied] => [author] => cadencehz [media] => [score] => 1552 [approved_by] => [over_18] => [hidden] => [thumbnail] => [subreddit_id] => t5_2qh3l [edited] => [link_flair_css_class] => [author_flair_css_class] => [downs] => 978 [saved] => [is_self] => [permalink] => /r/news/comments/1qdtqr/thousands_defend_grocery_store_employee_with/ [name] => t3_1qdtqr [created] => 1384215998 [url] => http://www.syracuse.com/news/index.ssf/2013/11/thousands_come_to_defense_of_clay_wegmans_employee_with_aspergers_syndrome_after.html#incart_m-rpt-2 [author_flair_text] => [title] => Thousands defend grocery store employee with Asperger's syndrome after customer yells at him for being too slow [created_utc] => 1384187198 [ups] => 2530 [num_comments] => 401 [visited] => [num_reports] => [distinguished] => ) ) [1] => Array ( [kind] => t3 [data] => Array ( [domain] => bostonherald.com [banned_by] => [media_embed] => Array ( ) [subreddit] => news [selftext_html] => [selftext] => [likes] => [secure_media] => [link_flair_text] => [id] => 1qddl6 [secure_media_embed] => Array ( ) [clicked] => [stickied] => [author] => boxofrain [media] => [score] => 2086 [approved_by] => [over_18] => [hidden] => [thumbnail] => [subreddit_id] => t5_2qh3l [edited] => [link_flair_css_class] => [author_flair_css_class] => [downs] => 2556 [saved] => [is_self] => [permalink] => /r/news/comments/1qddl6/motorcycle_stolen_in_1961_found_is_recovered_and/ [name] => t3_1qddl6 [created] => 1384199801 [url] => http://bostonherald.com/news_opinion/offbeat_news/2013/11/man_glad_stolen_motorcycle_found_after_46_years [author_flair_text] => [title] => Motorcycle stolen in 1961 found is recovered and will be returned to it 73 year old owner. [created_utc] => 1384171001 [ups] => 4642 [num_comments] => 141 [visited] => [num_reports] => [distinguished] => ) ) [2] => Array ( [kind] => t3 [data] => Array ( [domain] => hosted.ap.org [banned_by] => [media_embed] => Array ( ) [subreddit] => news [selftext_html] => [selftext] => [likes] => [secure_media] => [link_flair_text] => [id] => 1qe6gp [secure_media_embed] => Array ( ) [clicked] => [stickied] => [author] => donkey-kick [media] => [score] => 415 [approved_by] => [over_18] => [hidden] => [thumbnail] => [subreddit_id] => t5_2qh3l [edited] => [link_flair_css_class] => [author_flair_css_class] => [downs] => 334 [saved] => [is_self] => [permalink] => /r/news/comments/1qe6gp/atheist_mega_churches_take_hold_in_the_us_and/ [name] => t3_1qe6gp [created] => 1384224670 [url] => http://hosted.ap.org/dynamic/stories/U/US_ATHEIST_MEGACHURCH?SITE=AP&amp;SECTION=HOME&amp;TEMPLATE=DEFAULT&amp;CTIME=2013-11-10-17-03-15 [author_flair_text] => [title] => Atheist Mega Churches take hold in the US and around the world. 
[created_utc] => 1384195870 [ups] => 749 [num_comments] => 368 [visited] => [num_reports] => [distinguished] => ) ) [3] => Array ( [kind] => t3 [data] => Array ( [domain] => abcnews.go.com [banned_by] => [media_embed] => Array ( ) [subreddit] => news [selftext_html] => [selftext] => [likes] => [secure_media] => [link_flair_text] => [id] => 1qdie4 [secure_media_embed] => Array ( ) [clicked] => [stickied] => [author] => Hoosier_made [media] => [score] => 984 [approved_by] => [over_18] => [hidden] => [thumbnail] => [subreddit_id] => t5_2qh3l [edited] => [link_flair_css_class] => [author_flair_css_class] => [downs] => 400 [saved] => [is_self] => [permalink] => /r/news/comments/1qdie4/abc_news_amy_robach_has_mammogram_live_on_gma/ [name] => t3_1qdie4 [created] => 1384206209 [url] => http://abcnews.go.com/blogs/health/2013/11/11/abc-news-amy-robach-reveals-breast-cancer-diagnosis/ [author_flair_text] => [title] => ABC News’ Amy Robach has mammogram live on GMA. Results Reveal Breast Cancer Diagnosis. [created_utc] => 1384177409 [ups] => 1384 [num_comments] => 130 [visited] => [num_reports] => [distinguished] => ) ) // ECT... I CUT HERE BECAUSE IT WENT ON FOR A WHILE [after] => t3_1qdinr [before] => ) ) Here's the url to the exact output, I had to trim the code to fit the character limit so maybe there's something I messed up ! :P Okay: Link goes here...

    Read the article

  • What is the best approach to embed mp4 for the iPhone without using JavaScript?

    - by usingtechnology
    I am trying to troubleshoot this code and am running into a dead-end, so I thought I would ask my first question here. I have three questions: 1) What is the best approach to embed an mp4 for an iPhone specific view, preferably without using Javascript? 2) Are there any preferred practices to debug code on an iPhone? 3) Can anyone tell me what is wrong with my specific code below? I should mention upfront that the variable $fileName does indeed contain the correct info, I've just omitted that portion of the code. Also, the poster image does flicker for a brief moment before I receive the greyed out, broken QuickTime image so that is an indication that this is partially working. Code:

        <object width="448" height="335" classid="clsid:02BF25D5-8C17-4B23-BC80-D3488ABDDC6B"
                codebase="http://www.apple.com/qtactivex/qtplugin.cab">
          <param name="src" value="/libraries/images/$fileName.jpg" />
          <param name="href" value="/libraries/media/$fileName.mp4" />
          <param name="target" value="myself" />
          <param name="controller" value="true" />
          <param name="autoplay" value="false" />
          <param name="scale" value="aspect" />
          <embed src="/libraries/images/$fileName.jpg" href="/libraries/media/$fileName.mp4"
                 type="video/mp4" target="myself" width="448" height="335" scale="aspect"
                 controller="false" autoplay="false">
          </embed>
        </object>

    Read the article

  • How do I efficiently locate key-value pairs in a multi-dimensional PHP array?

    - by Kyle Noland
    I have an array in PHP as a result of the following query to a WordPress database:

        SELECT * FROM wp_postmeta WHERE post_id = :id

    I am returned a multidimensional array that looks like this:

        Array
        (
            [0] => Array
                (
                    [meta_id] => 380
                    [post_id] => 72
                    [meta_key] => _edit_last
                    [meta_value] => 1
                )
        ... etc.

    What is the best way to find a particular key-value pair in this array? For instance, how would I locate the row where [meta_key] = event_name so that I can extract that same row's [meta_value] value into a PHP variable? I realize I could turn this into many individual MySQL queries. Does anyone have an opinion on the efficiency of doing 10 SQL queries in a row rather than searching the array 10 times? I would think, since the array is in memory, that will be the fastest method to find the values I need. Alternatively, is there a better way to query the database from the beginning so that my result set is formatted in a way that is easier to search?
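
    On the last point, if only a handful of known keys are needed, it can be simpler to ask the database for exactly those rows in one round trip rather than fetching everything and scanning the array in PHP. A sketch against the standard wp_postmeta schema (event_name comes from the question; the other key names here are placeholders):

        SELECT meta_key, meta_value
        FROM wp_postmeta
        WHERE post_id = :id
          AND meta_key IN ('event_name', 'event_date', 'event_location');

    The small result can then be reindexed by meta_key once, turning every later lookup into a plain array access.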

    Read the article

  • Index question: Select * with WHERE clause. Where and how to create index

    - by Mestika
    Hi, I'm working on optimizing some of my queries and I have a query that states:

        select * from SC where c_id = "+c_id"

    The schema of SC looks like this:

        SC (
            c_id int not null,
            date_start date not null,
            date_stop date not null,
            r_t_id int not null,
            nt int,
            t_p decimal,
            PRIMARY KEY (c_id, r_t_id, date_start, date_stop)
        );

    My first guess at how the index should be created is a covering index in this order:

        INDEX(c_id, date_start, date_stop, nt, r_t_id, t_p)

    The reasons for this order are: the WHERE clause filters on c_id, making it the first sort key; next, date_start and date_stop specify a sort of "range" defined by these parameters; next nt, because it will be selected; next r_t_id, because it is an ID for a specific type in my r_t table; and last t_p, because it is just information. I don't know if it is necessary at all to order it in a specific way when it is a SELECT * statement. I should say that SC is not the biggest table. I can't say exactly how many rows it contains, but an estimate would be somewhere between fewer than 10 and 1000. The next thing to add is that other queries insert data into SC, and I know that indexes on tables with frequent insertions can be cost-ineffective, but can I somehow find a golden middle way to keep this performance effective? I don't know if it makes a difference, but I'm using an IBM DB2 version 9.7 database. Sincerely, Mestika
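
    For a query that filters only on c_id, what matters most is the leading column of whichever index is used; the existing primary key already starts with c_id, so this predicate can typically use it as-is. A covering index mainly pays off if you want the whole row served from the index, and it adds cost to every INSERT. A minimal sketch of the two options:

        -- Already available: PRIMARY KEY (c_id, r_t_id, date_start, date_stop)
        -- starts with c_id, so "WHERE c_id = ?" can use it without any new index.

        -- Optional covering index, only if index-only access proves worth the INSERT overhead:
        CREATE INDEX sc_cid_covering ON SC (c_id, date_start, date_stop, r_t_id, nt, t_p);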

    Read the article

  • What are the Limitations for Connecting to an Access Query in Excel

    - by thornomad
    I have an Access 2007 database that has a number of tables, some of which are fairly large (100,000+ records); I have created a union query to pull some of the same types of data from multiple tables into one large query for pivot table manipulation and reporting. For example:

        SELECT Language FROM Table1
        UNION ALL
        SELECT Language FROM Table2
        UNION ALL
        SELECT Language FROM Table3;

    This works. I quickly found, however, that a union query will not show up when connecting to the data source from Excel 2007. So I created a second query to reference the union query, like so:

        SELECT * FROM [The Above Union Query];

    This query works and it was initially accessible from Excel. Time passed and I added more data. Suddenly, when I connect to my Access database from Excel, my query referencing the union has disappeared. MS Access shows no signs of an issue (data displays in Access) and my other non-union queries are showing up in Excel 2007, but not the one that references the union. What could be going on? Why did it disappear? I noticed that if I switch some of the referenced tables in the union query to a smaller table (with fewer rows), all of a sudden the query appears in Excel again. At least, I think that's what the difference is. I really can't put my finger on why some of the union queries won't show up and some will. I'm stumped and need some guidance. Thanks.

    Read the article

  • Rails 3 fields_for aggressive loading?

    - by Seth
    Hi all, I'm trying to optimize (limit) queries in a view. I am using the fields_for function. I need to reference various properties of the object, such as username for display purposes. However, this is a rel table, so I need to join with my users table. The result is N sub-queries, 1 for each field in fields_for. It's difficult to explain, but I think you'll understand what I'm asking if I paste my code: <%= form_for @election do |f| %> <%= f.fields_for :voters do |voter| %> <%= voter.hidden_field :id %> <%= voter.object.user.preferred_name %> <% end %> <% end %> I have like 10,000 users, and many times each election will include all 10,000 users. That's 10,000 subqueries every time this view is loaded. I want fields_for to JOIN on users. Is this possible? I'd like to do something like: ... <%= f.fields_for :voters, :joins => :users do |voter| %> ... <% end %> ... But that, of course, doesn't work :(

    Read the article

  • Tiny MCE (jquery plugin version) not showing toolbar in IE (works fine in other browsers)

    - by I am Wonder
    I had an implementation working well but for some reason IE decided it was tired of playing nice. I have an advanced implementation of TinyMCE (the jquery plugin version - see http://tinymce.moxiecode.com/examples/example_23.php for details). It still works great in all browsers but IE. In IE it shows the drop-down options for Format, Font family, and Font size, but only as text.. not as a drop down normally looks. All other buttons on the toolbar are missing. (I've tried IE8 and IE8 Compatibility Mode) I get a javascript error: Syntax error Line 36 Char 1. Unfortunately the javascript is being loaded dynamically so this doesn't help me. Here is my implementation code for the TinyMCE editor: $(function () { $('#InputStuffHere').tinymce({ // General options theme: "advanced", plugins: "safari,pagebreak,style,layer,table,save,advhr,advimage,advlink,inlinepopups,preview,media,searchreplace,contextmenu,paste,fullscreen,noneditable,visualchars,nonbreaking,template", // Theme options theme_advanced_buttons1: "bold,italic,underline,strikethrough,|,justifyleft,justifycenter,justifyright,justifyfull,formatselect,fontselect,fontsizeselect,|,hr,removeformat", theme_advanced_buttons2: "cut,copy,paste,pastetext,pasteword,|,search,replace,|,bullist,numlist,|,outdent,indent,blockquote,|,undo,redo,|,link,unlink,image,|,forecolor,backcolor,|,spellchecker", theme_advanced_buttons3 : "", theme_advanced_toolbar_location: "top", theme_advanced_toolbar_align: "left", theme_advanced_statusbar_location: "bottom", theme_advanced_resizing: true, // Drop lists for link/image/media/template dialogs template_external_list_url: "lists/template_list.js", external_link_list_url: "lists/link_list.js", external_image_list_url: "lists/image_list.js", media_external_list_url: "lists/media_list.js", //initialization callback init_instance_callback: "TinyMCEReady", add_form_submit_trigger : false }); }); So... anyone seen anything like this or have any ideas for me? Thank you so much everyone!

    Read the article

  • Use jquery ':contains' to find specific javascript within a span

    - by Rob
    This is my first time here; I hope that this is clear. I will have code similar to this:

        <span class="mediaSource ui-draggable" id="purchsePlay7915504">
          <a href="" onclick="return popup_window(this, 'MediaView', 850, 680)" class="control" id="GenericLink"></a>
          <img id="Any_71" alt="Media Source" src="images/9672web.gif" class="mediaStationIcons mediaWin"/>
          <img class="player" alt="Media Source" src="images/playmedia.gif" style="display: none;"/>
        </span>

    The href portion is generated on the backend, and I have no access to it. I need to modify some existing jQuery code to do something based on what the 'onclick' function is (there are different ones, e.g. popup_window1, popup_window2, etc.). I tried something like this:

        $('.segmentLeft span.mediaSource').click(function(){
            if ($('span:contains("popup_window")').length > 0) {
                do something
            }
        });

    but it does not seem to work.

    Read the article

  • How can I design a DB where the user can define the fields and types of a detail table in a M-D relationship?

    - by Simon
    My application has one table called 'events', and each event has approx 30 standard fields, but also user-defined fields that could have any name or type, in an 'eventdata' table. Users can define these event data tables by specifying x number of fields (either text/double/datetime/boolean) and the names of these fields. This 'eventdata' table can be different for each 'event'. My current approach is to create a lookup table for the definitions. So if I need to query all 'event' and 'eventdata' per record, I do so in a M-D relationship using two queries (i.e. select * from events, then for each record in 'events', select * from 'some table'). Is there a better approach to doing this? I have implemented this so far, but most of my queries require two distinct calls to the DB; I cannot simply join my master 'events' table with different 'eventdata' tables for each record in 'events'. I guess my main question is: can I join my master table with different detail tables for each record? E.g.

        SELECT E.*, E.Tablename
        FROM events E
        LEFT JOIN 'E.tablename' T ON E._ID = T.ID

    If not, is there a better way to design my database, considering I have no idea how many user-defined fields there may be and what type they will be?
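
    A join cannot pick a different detail table per row, so one common alternative is to fold all user-defined values into a single key/value (EAV-style) pair of tables, which keeps everything reachable with one join. A sketch with hypothetical table, column and type names (the four value columns mirror the text/double/datetime/boolean choices from the question):

        CREATE TABLE event_field_defs (
            id          INT PRIMARY KEY,
            name        VARCHAR(100) NOT NULL,
            field_type  VARCHAR(20)  NOT NULL    -- 'text', 'double', 'datetime', 'boolean'
        );

        CREATE TABLE event_field_values (
            event_id        INT NOT NULL,        -- FK to events
            field_id        INT NOT NULL,        -- FK to event_field_defs
            text_value      TEXT,
            double_value    DOUBLE,
            datetime_value  DATETIME,
            boolean_value   TINYINT,
            PRIMARY KEY (event_id, field_id)
        );

        -- All events with their user-defined values in one round trip:
        SELECT e.*, d.name, d.field_type,
               v.text_value, v.double_value, v.datetime_value, v.boolean_value
        FROM events e
        LEFT JOIN event_field_values v ON v.event_id = e.id
        LEFT JOIN event_field_defs  d ON d.id = v.field_id;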

    Read the article

  • Large product catalog with statistics - alternatives to SQL Server?

    - by Eric P
    I am building the UI for a large product catalog (millions of products). I am using SQL Server, FreeText search and ASP.NET MVC. Tables are normalized and indexed. Most queries take less than a second to return. The issue is this: let's say a user searches by keyword. On the search results page I need to display/query for: the first 20 matching products (paged, sorted); the total count of matching products for paging; the list of stores of the matching products; the list of brands of the matching products; and the list of colors of the matching products. Each query takes about 0.5 to 1 second, so altogether it is about 5 seconds. I would like to get the whole page to load in under 1 second. There are several approaches: 1) Optimize the queries even more. I already spent a lot of time on this one, so I'm not sure it can be pushed further. 2) Load products first, then load the rest of the information using AJAX. More like a workaround, and it will need a UI revision. 3) Re-organize data to be more report friendly. I have already aggregated a lot of fields. I checked out several similar sites, for example zappos.com. Not only do they display the same information as I would like in under 1 second, but they also include statistics (the number of results in each category). The following is the search for the keyword "white": http://www.zappos.com/white How do sites like Zappos and Amazon make their results, filters and stats appear almost instantly?
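
    Fast filter counts of this kind are usually answered by cheap GROUP BY aggregates over an already-narrowed id set (or by counts precomputed per facet), rather than by re-running the full search once per facet. A sketch of the per-facet aggregate, with hypothetical table and column names, assuming the keyword match has been materialized or cached as a set of product ids:

        -- Brand facet: one aggregate over the matched ids; stores and colors follow the same shape.
        SELECT b.brand_name, COUNT(*) AS product_count
        FROM matched_products m
        JOIN products p ON p.product_id = m.product_id
        JOIN brands   b ON b.brand_id   = p.brand_id
        GROUP BY b.brand_name
        ORDER BY product_count DESC;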

    Read the article

  • Can a second stored procedure doing the same thing finish before the first one?

    - by evanmortland
    Hello, I have an audit record table that I am writing to. I am connecting to MyDb, which has a stored procedure called 'CreateAudit', which is a passthrough stored procedure to another database on the same machine called 'CreatedAudit' as well. I call the CreateAudit stored procedure from my application, using SubSonic as the DAL. The first time I call it, I call it with the following (pseudocode):

        Result = CreateAudit(recordId, "Opened")

    Right after that, I call:

        Result2 = CreateAudit(recordId, "Closed")

    The second call is supposed to mark the record that was created by CreateAudit(recordId, "Opened") with a status of closed. It works great if I run them independently of one another, but when they run in sequence in the application, the record is not marked as "Closed". When I run SQL Profiler I see that both queries ran, and if I copy the queries out and run them from Query Analyzer the record gets marked as closed 100% of the time! When I run it from the application, about once every 20 times or so the record is successfully marked closed; the other 19 times nothing happens, but I do not get an error! Is it possible for the .NET app to skip over the output from the first stored procedure and start executing the second stored procedure before the record in the first is created? When I add a "WAITFOR DELAY '00:00:00:003'" to the top of my stored procedure, the record is also closed 100% of the time. My head is spinning; any ideas why this is happening? Thanks for any responses, very interested in hearing how this can happen.

    Read the article

  • calculating change (over a period) for a dated field

    - by morpheous
    I have two tables with the following schema:

        CREATE TABLE sales_data (
            sales_time date NOT NULL,
            product_id integer NOT NULL,
            sales_amt double NOT NULL
        );

        CREATE TABLE date_dimension (
            id integer NOT NULL,
            datestamp date NOT NULL,
            day_part integer NOT NULL,
            week_part integer NOT NULL,
            month_part integer NOT NULL,
            qtr_part integer NOT NULL,
            year_part integer NOT NULL
        );

    I want to write two types of queries that will allow me to calculate: 1) period on period change (e.g. week on week change), and 2) change in period on period change (e.g. change in week on week change). I would prefer to write this in ANSI SQL, since I don't want to be tied to any particular db. [Edit] In light of some of the comments, if I have to be tied to a single database (in terms of SQL dialect), it will have to be PostgreSQL. The queries I want to write are of the form (pseudo SQL of course):

        -- Query Type 1 (Period on Period Change)
        -- a)
        SELECT product_id,
               ((sd2.sales_amt - sd1.sales_amt)/sd1.sales_amt) AS week_on_week_change
        FROM sales_data sd1, sales_data sd2, date_dimension dd
        WHERE {SOME CRITERIA}

        -- b)
        SELECT product_id,
               ((sd2.sales_amt - sd1.sales_amt)/sd1.sales_amt) AS month_on_month_change
        FROM sales_data sd1, sales_data sd2, date_dimension dd
        WHERE {SOME CRITERIA}

        -- Query Type 2 (Change in Period on Period Change)
        -- a)
        SELECT product_id,
               ((a2.week_on_week_change - a1.week_on_week_change)/a1.week_on_week_change) AS change_on_week_on_week_change
        FROM
            (SELECT product_id,
                    ((sd2.sales_amt - sd1.sales_amt)/sd1.sales_amt) AS week_on_week_change
             FROM sales_data sd1, sales_data sd2, date_dimension dd
             WHERE {SOME CRITERIA}) AS a1,
            (SELECT product_id,
                    ((sd2.sales_amt - sd1.sales_amt)/sd1.sales_amt) AS week_on_week_change
             FROM sales_data sd1, sales_data sd2, date_dimension dd
             WHERE {SOME CRITERIA}) AS a2
        WHERE {SOME OTHER CRITERIA}
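
    For the week-on-week case, one portable way is to aggregate sales to (product, year, week) via the date dimension and join each week to the previous one. A sketch (runs on PostgreSQL, ignores the year boundary for brevity, and assumes sales_time joins to date_dimension.datestamp as the schema suggests):

        SELECT cur.product_id, cur.year_part, cur.week_part,
               (cur.total - prev.total) / prev.total AS week_on_week_change
        FROM (
            SELECT s.product_id, d.year_part, d.week_part, SUM(s.sales_amt) AS total
            FROM sales_data s
            JOIN date_dimension d ON d.datestamp = s.sales_time
            GROUP BY s.product_id, d.year_part, d.week_part
        ) cur
        JOIN (
            SELECT s.product_id, d.year_part, d.week_part, SUM(s.sales_amt) AS total
            FROM sales_data s
            JOIN date_dimension d ON d.datestamp = s.sales_time
            GROUP BY s.product_id, d.year_part, d.week_part
        ) prev
          ON prev.product_id = cur.product_id
         AND prev.year_part  = cur.year_part
         AND prev.week_part  = cur.week_part - 1;

    The change-in-change query can wrap this whole result one level deeper and join week N to week N-1 again in the same way.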

    Read the article

  • Getting the item count of a large SharePoint list in the fastest way

    - by sooraj
    I am trying to get the count of the items in a sharepoint document library programatically. The scale I am working with is 30-70000 items. We have usercontrol in a smartpart to display the count . Ours is a TEAM site. This is the code to get the total count SPList VoulnterrList = web.Lists[ListTitle]; SPQuery query = new SPQuery(); query.ViewAttributes = "Scope=\"Recursive\""; string queries = "<Where><Eq><FieldRef Name='ApprovalStatus' /><Value Type='Choice'>Pending</Value></Eq></Where>"; query.Query = queries; SPListItemCollection lstitemcollAssoID = VoulnterrList.GetItems(query); lblCount.Text = "Total Proofs: " + VoulnterrList.Items.Count.ToString() + " Pending Proofs: " + lstitemcollAssoID.Count.ToString(); The problem is this has serious performance issue it takes 75 to 80 sec to load the page. if we comment this page load will decrees to 4 sec. Any better approch for this problem Ours is sharepoint 2007

    Read the article

  • SQL: Limit rows linked to each joined row

    - by SolidSnakeGTI
    Hello, I have a situation that requires a certain result set from a MySQL query; let's look at the current query first and then ask my question:

        SELECT thread.dateline AS tdateline, post.dateline AS pdateline, MIN(post.dateline)
        FROM thread AS thread
        LEFT JOIN post AS post ON(thread.threadid = post.threadid)
        LEFT JOIN forum AS forum ON(thread.forumid = forum.forumid)
        WHERE post.postid != thread.firstpostid
          AND thread.open = 1
          AND thread.visible = 1
          AND thread.replycount >= 1
          AND post.visible = 1
          AND (forum.options & 1)
          AND (forum.options & 2)
          AND (forum.options & 4)
          AND forum.forumid IN(1,2,3)
        GROUP BY post.threadid
        ORDER BY tdateline DESC, pdateline ASC

    As you can see, I mainly need to select the dateline of threads from the 'thread' table, in addition to the dateline of the second post of each thread, all under the conditions you see in the WHERE clause. Since each thread has many posts, and I need only one result per thread, I've used the GROUP BY clause for that purpose. This query will return only one post's dateline with its related unique thread. My questions are: 1) How do I limit the returned threads per forum? Suppose I need only 5 threads, as a maximum, to be returned for each forum declared in the WHERE clause 'forum.forumid IN(1,2,3)'; how can this be achieved? 2) Are there any recommendations for optimizing this query (of course after solving the first point)? Notes: I prefer not to use sub-queries, but if it's the only solution available I'll accept it. Double queries are not recommended. I'm sure there's a smart solution for this situation. I'm using MySQL 4.1+, but if you know the answer for another engine, just share. Appreciated advice in advance :)

    Read the article

  • Clarification of some code

    - by Legend
    I have come across a website that appears to use Ajax but does not include any JS file except one file called ajax.js, which has the following:

        function run(c, f, b, a, d) {
            var e = null;
            if (b && f) {
                document.getElementById(b).innerHTML = f
            }
            if (window.XMLHttpRequest) {
                e = new XMLHttpRequest()
            } else {
                if (window.ActiveXObject) {
                    e = new ActiveXObject(Microsoft.XMLHTTP)
                }
            }
            e.onreadystatechange = function () {
                if (e.readyState == 4) {
                    if (e.status == 200 || e.statusText == "OK") {
                        if (b) {
                            document.getElementById(b).innerHTML = e.responseText
                        }
                        if (a) {
                            setTimeout(a, 0)
                        }
                    } else {
                        console.log("AJAX Error: " + e.status + " | " + e.statusText);
                        if (b && d != 1) {
                            document.getElementById(b).innerHTML = "AJAX Error. Please try refreshing."
                        }
                    }
                }
            };
            e.open("GET", c, true);
            e.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
            e.send(null)
        }

    As you might have guessed, the page issues queries like this:

        run('page.php', loadingText, 'ajax-test', 'LoadSamples()');

    I must admit that this is the first time I've seen a page where I could not figure out how things are being done. I have a few questions: 1) Is this server-side Ajax or something similar? If not, can someone clarify what exactly this is? 2) Why does one use this? Is it for hiding the design details (which are otherwise revealed in plain text by JavaScript)? 3) How difficult would it be to convert my existing application into this design pattern? (maybe a subjective question but any short suggestion will do) Any suggestions?

    Read the article

  • PHP memory: how much is too much?

    - by Rob
    I'm currently re-writing my site using my own framework (it's very simple and does exactly what I need, i've no need for something like Zend or Cake PHP). I've done alot of work in making sure everything is cached properly, caching pages in files so avoid sql queries and generally limiting the number of sql queries. Overall it looks like it's very speedy. The average time taken for the front page (taken over 100 times) is 0.046152 microseconds. But one thing i'm not sure about is whether i've done enough to reduce php memory usage. The only time i've ever encountered problems with it is when uploading large files. Using memory_get_peak_usage(TRUE), which I THINK returns the highest amount of memory used whilst the script has been running, the average (taken over 100 times) is 1572864 bytes. Is that good? I realise you don't know what it is i'm doing (it's rather simple, get the 10 latest articles, the comment count for each, get the user controls, popular tags in the sidebar etc). But would you be at all worried with a script using that sort of memory getting hit 50,000 times a day? Or once every second at peak times? I realise that this is a very open ended question. Hopefully you can understand that it's a bit of a stab in the dark and i'm really just looking for some re-assurance that it's not going to die horribly come re-launch day.

    Read the article

  • In sync query calls, one query causes the other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanation for it: I was involved in optimization of an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there should be some value mappings, which are performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run. I used a profiler (JProfiler; the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls, and almost 20% on SELECT calls (the rest was distributed among other parts). After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to be around 35 minutes, as I had removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else), and each SELECT took longer this time! Everything was running synchronously; there were no async calls. And there was only one single thread of execution. The SELECT and INSERT queries are very simple and don't have anything special, and they are on different tables, but in the same DB. I tested with the DB both on the application machine and on a remote network machine. I can't think of any explanation for this, as the profiler (application profiler, not SQL Profiler) reported the changes in the method call times, and by removing INSERT statements the SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (There can't be cache / query optimization effects, because the queries were run synchronously, in a single thread, and it was far from affecting the cache this much.) I should note that the bottleneck of the speed was in SQL Server, which was using most of the CPU time.

    Read the article
