Search Results

Search found 1999 results on 80 pages for 'temporary'.


  • How to Populate a 'Tree' structure 'Declaratively'

    - by mackenir
    I want to define a 'node' class/struct and then declare a tree of these nodes in code, in such a way that the code formatting reflects the tree structure and there isn't 'too much' boilerplate in the way. Note that this isn't a question about data structures, but rather about which features of C++ I could use to arrive at a declarative style similar to the example below. Possibly C++0x would make this easier, as it has more capabilities for constructing objects and collections, but I'm using Visual Studio 2008.

    Example tree node type:

        struct node
        {
            string name;
            node* children;
            node(const char* name, node* children);
            node(const char* name);
        };

    What I want to do: declare a tree so its structure is reflected in the source code.

        node root =
            node("foo", [
                node("child1"),
                node("child2", [
                    node("grand_child1"),
                    node("grand_child2"),
                    node("grand_child3")
                ]),
                node("child3")
            ]);

    NB: what I don't want to do is declare a whole bunch of temporary objects/collections and construct the tree 'backwards':

        node grandkids[] = {
            node("grand_child1"),
            node("grand_child2"),
            node("grand_child3")
        };
        node kids[] = {
            node("child1"),
            node("child2", grandkids),
            node("child3")
        };
        node root = node("foo", kids);
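    One C++03-friendly possibility (my own sketch, not from the post): overload operator() to append a child and return a reference to the node, so nested calls line up with the tree's shape. Children are held by value here, so the copies cost something, but it compiles on Visual Studio 2008:

        #include <iostream>
        #include <string>
        #include <vector>

        struct node {
            std::string name;
            std::vector<node> children;

            explicit node(const char* n) : name(n) {}

            // Append a child, return *this so calls can be chained/nested.
            node& operator()(const node& child) {
                children.push_back(child);
                return *this;
            }
        };

        void print(const node& n, int depth = 0) {
            std::cout << std::string(depth * 2, ' ') << n.name << '\n';
            for (size_t i = 0; i < n.children.size(); ++i)
                print(n.children[i], depth + 1);
        }

        int main() {
            node root =
                node("foo")
                    (node("child1"))
                    (node("child2")
                        (node("grand_child1"))
                        (node("grand_child2"))
                        (node("grand_child3")))
                    (node("child3"));
            print(root);
        }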


  • How can I efficiently manipulate 500k records in SQL Server 2005?

    - by cdeszaq
    I am getting a large text file of updated information from a customer that contains updates for 500,000 users. However, as I am processing this file, I often run into SQL Server timeout errors. Here's the process I follow in my VB application that processes the data (in general):

    1. Delete all records from the temporary table, to remove last month's data (e.g. DELETE FROM tempTable).
    2. Rip the text file into the temp table.
    3. Fill in extra information in the temp table, such as each user's organization_id, user_id, group_code, etc.
    4. Update the data in the real tables based on the data computed in the temp table.

    The problem is that I often run commands like:

        UPDATE tempTable
        SET user_id = (SELECT user_id FROM myUsers WHERE external_id = tempTable.external_id)

    and these commands frequently time out. I have tried bumping the timeouts up as far as 10 minutes, but they still fail. Now, I realize that 500k rows is no small number of rows to manipulate, but I would think that a database purported to handle millions and millions of rows should be able to cope with 500k pretty easily. Am I doing something wrong with how I am going about processing this data? Please help. Any and all suggestions are welcome.
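    One commonly suggested rewrite (a hedged sketch; it assumes external_id is indexed on both tables, which the poster doesn't confirm) is to replace the correlated subquery with a joined UPDATE and run it in batches, so no single statement runs long enough to hit the timeout:

        -- Sketch, not the poster's code: joined UPDATE in 50,000-row batches.
        WHILE 1 = 1
        BEGIN
            UPDATE TOP (50000) t
            SET    t.user_id = u.user_id
            FROM   tempTable t
            JOIN   myUsers   u ON u.external_id = t.external_id
            WHERE  t.user_id IS NULL;

            IF @@ROWCOUNT = 0 BREAK;
        END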


  • Invalid Argument javascript error only on certain computers

    - by Jen
    Getting an error whenever we click a particular button/link on our site. It is generating a JavaScript "Invalid Argument" error. I know from other posts that this is typically caused by a syntax error in the JavaScript; however, it only just seems to have started happening, and it doesn't happen on all PCs. I.e., in our client's environment, if I remote onto their web server and view the UAT website, I get the JavaScript error. If I remote onto their SQL server and view the UAT website, I don't get the error. If it were a syntax error, I would always get the error, wouldn't I? Both browsers are the same version of IE6 (yeah, I know...) :) I have tried deleting temporary internet files, including viewing the files and deleting them myself, but no joy. The client uses Citrix, and they're all getting the error :( Any ideas would be appreciated - thanks! :)

    Update: Sorry I haven't posted specific code, as there is too much to post (and I'm not sure where the error is occurring). The "button" launches a new window, which in turn opens up a couple of aspx pages and calls lots of JavaScript. So the window opens OK, and there's a function that gets called to resize the window, but before it calls the resizing of the window/content it throws the invalid argument error. I am busy trying to get alerts to trigger to see if I can spot where it's falling over, but so far no luck. Again, I'm not sure why this error doesn't occur when I use a particular PC (same browser version).
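    A generic debugging aid (my suggestion, not specific to this code): register window.onerror before any other script runs, and IE6 will report the failing file and line number. It may also be relevant that window.resizeTo/moveTo throw "Invalid argument" in IE when handed negative or non-numeric values, which can differ per machine because screen resolutions differ; that would fit a resize function failing only on some PCs.

        // Debugging sketch: report every script error with its source and line.
        window.onerror = function (message, url, line) {
            alert("Error: " + message + "\nFile: " + url + "\nLine: " + line);
            return true; // suppress IE's default error dialog
        };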


  • c++ floating point precision loss: 3015/0.00025298219406977296

    - by SigTerm
    The problem: Microsoft Visual C++ 2005 compiler, 32-bit Windows XP SP3, AMD 64 X2 CPU. Code:

        double a = 3015.0;
        double b = 0.00025298219406977296;
        //*((unsigned __int64*)(&a)) == 0x40a78e0000000000
        //*((unsigned __int64*)(&b)) == 0x3f30945640000000
        double f = a/b; //3015/0.00025298219406977296;

    The result of the calculation (i.e. "f") is 11917835.000000000 (*((unsigned __int64*)(&f)) == 0x4166bb4160000000), although it should be 11917834.814763514 (i.e. *((unsigned __int64*)(&f)) == 0x4166bb415a128aef). I.e. the fractional part is lost. Unfortunately, I need the fractional part to be correct. Questions: 1) Why does this happen? 2) How can I fix the problem?

    Additional info:
    0) The result is taken directly from the "watch" window (it wasn't printed, and I didn't forget to set the printing precision). I also provided a hex dump of the floating point variable, so I'm absolutely sure about the calculation result.
    1) The disassembly of f = a/b is:

        fld qword ptr [a]
        fdiv qword ptr [b]
        fstp qword ptr [f]

    2) f = 3015/0.00025298219406977296; yields the correct result (f == 11917834.814763514, *((unsigned __int64*)(&f)) == 0x4166bb415a128aef), but it looks like in this case the result is simply calculated at compile time:

        fld qword ptr [__real@4166bb415a128aef (828EA0h)]
        fstp qword ptr [f]

    So, how can I fix this problem?

    P.S. I've found a temporary workaround (I need only the fractional part of the division, so I simply use f = fmod(a,b)/b at the moment), but I would still like to know how to fix this problem properly: double precision is supposed to give about 16 decimal digits, so such a calculation isn't supposed to cause problems.
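    The symptom (a correct compile-time result but a truncated runtime one) is the classic signature of the x87 FPU having been switched to 24-bit precision by some other component loaded into the process (DirectX is a frequent culprit). A hedged sketch of the usual remedy; compiling with /arch:SSE2 is the other standard fix, since SSE math ignores the x87 precision-control field:

        #include <float.h>    // _controlfp_s, _PC_53, _MCW_PC
        #include <stdio.h>

        int main() {
            // Sketch: restore 53-bit (double) precision on the x87 FPU
            // before dividing.
            unsigned int prev;
            _controlfp_s(&prev, _PC_53, _MCW_PC);

            double a = 3015.0;
            double b = 0.00025298219406977296;
            printf("%.9f\n", a / b);   // expect 11917834.814763514
            return 0;
        }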


  • Why is T() = T() allowed?

    - by Rimo
    I believe the expression T() creates an rvalue (by the Standard). However, the following code compiles (at least on gcc 4.0):

        class T {};

        int main()
        {
            T() = T();
        }

    I know this is technically possible because member functions can be invoked on temporaries, and the above just invokes operator= on the rvalue temporary created from the first T(). But conceptually this is like assigning a new value to an rvalue. Is there a good reason why this is allowed?

    Edit: The reason I find this odd is that it's strictly forbidden on built-in types yet allowed on user-defined types. For example, int(2) = int(3) won't compile, because that is an "invalid lvalue in assignment". So I guess the real question is: was this somewhat inconsistent behavior built into the language for a reason, or is it there for some historical reason? (E.g. it would be conceptually more sound to allow only const member functions to be invoked on rvalue expressions, but that cannot be done because it might break existing code.)
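    For contrast, here is a small sketch of both cases, plus the C++11 escape hatch (which postdates this question): ref-qualifying operator= restores the built-in-type behaviour for a class:

        struct T {
            // The trailing '&' means assignment may only target lvalues (C++11).
            T& operator=(const T&) & { return *this; }
        };

        int main() {
            // int(2) = int(3);   // error: built-in types reject rvalue assignment
            T a;
            a = T();              // fine: 'a' is an lvalue
            // T() = T();         // now an error too, thanks to the ref-qualifier
        }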


  • File upload issue

    - by Varun
    I am working on a PHP-based ticket management system. While creating a ticket, one can upload an attachment. I want to put a limit (say 10 MB) per uploaded file. To implement this I planned the following:

    1. In php.ini, set post_max_size = 10M.
    2. In the PHP script which receives the POST: since the file is larger than post_max_size, $_FILES[] will be empty, but I can still check the Content-Length header and discard the upload if the size is more than 10M.

    While testing this I tried uploading a 1 GB file and analysed the HTTP traffic, and this is what I found:

    - The entire 1 GB of data is first uploaded to the server temporarily and discarded once the HTTP request completes. I couldn't find out exactly where the file was being saved (it was not in the server's temporary directory), but my HTTP traffic analyzer showed that the browser did send 1 GB of data to the server.
    - The PHP script execution started only after completion of the HTTP request (i.e. after uploading the entire 1 GB).

    Now I have two concerns:

    a) People may exploit my server bandwidth by uploading large files which I will have to discard anyway.
    b) Even worse, if someone starts uploading a huge file (say 100 GB), the entire 100 GB is first uploaded to the server temporarily, which means that for that period it will consume that much memory on my server.

    What's the common solution for this? Am I missing something here?
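    PHP only starts running after the request body has fully arrived, so the early cut-off has to happen in the web server (Apache's LimitRequestBody or nginx's client_max_body_size). Within PHP, about the best available is to reject oversized requests as soon as the script runs; a sketch:

        <?php
        // Sketch: refuse oversized uploads as early as PHP can see them.
        // This does not stop the client from transmitting the body; only a
        // web-server-level limit truly cuts the transfer off.
        $maxBytes = 10 * 1024 * 1024; // 10 MB

        if (isset($_SERVER['CONTENT_LENGTH']) &&
            (int)$_SERVER['CONTENT_LENGTH'] > $maxBytes) {
            header('HTTP/1.1 413 Request Entity Too Large');
            exit('Attachment exceeds the 10 MB limit.');
        }
        ?>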


  • Tree iterator, can you optimize this any further?

    - by Ron
    As a follow-up to my original question about a small piece of this code, I decided to ask a follow-up to see if you can do better than what we came up with so far. The code below iterates over a binary tree (left/right = child/next). I do believe there is room for one less conditional in here (the down boolean). The fastest answer wins!

    - The cnt statement can be multiple statements, so let's make sure it appears only once.
    - The child() and next() member functions are about 30x as slow as the hasChild() and hasNext() operations.
    - Keep it iterative <-- dropped this requirement, as the recursive solution presented was faster.
    - This is C++ code.
    - The visit order of the nodes must stay as in the example below (hit parents first, then the children, then the 'next' nodes).
    - BaseNodePtr is a boost::shared_ptr, and thus assignments are slow; avoid any temporary BaseNodePtr variables.

    Currently this code takes 5897 ms to visit the 62,200,000 nodes in a test tree, calling this function 200,000 times.

        void processTree (BaseNodePtr current, unsigned int & cnt )
        {
            bool down = true;
            while ( true )
            {
                if ( down )
                {
                    while (true) {
                        cnt++; // this can/will be multiple statements
                        if (!current->hasChild()) break;
                        current = current->child();
                    }
                }
                if ( current->hasNext() )
                {
                    down = true;
                    current = current->next();
                }
                else
                {
                    down = false;
                    current = current->parent();
                    if (!current) return; // done.
                }
            }
        }
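    Since the iterative-only requirement was dropped, here is my reconstruction of the kind of recursive solution alluded to above (a sketch; it preserves the parent-child-next visit order, and passing the shared_ptr by const reference avoids refcount churn on every assignment):

        // Sketch: visit a node, then its child chain, then its 'next' chain.
        void processTree(const BaseNodePtr& current, unsigned int& cnt)
        {
            cnt++; // this can/will be multiple statements
            if (current->hasChild())
                processTree(current->child(), cnt);
            if (current->hasNext())
                processTree(current->next(), cnt);
        }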


  • Read in double type from txt file - C++

    - by Greenhouse Gases
    Hi there. I'm in the midst of a university project and have decided to implement a method that can accept information from a text file (in this instance called "locations.txt"). Input from the text file will look like this:

        London 345 456
        Madrid 234 345
        Beijing 345 456
        Frankfurt 456 567

    The function currently looks like this (and you will notice I am missing the while condition to stop reading input when the end of locations.txt is reached; I tried using eof but that didn't work?!). Also, the get function expects a char and so can't accept input that's a double, which is what the latitude and longitude are defined as...

        void populateList() {
            ifstream inputFile;
            inputFile.open("locations.txt");
            temp = new locationNode; // declare the space for a pointer item and assign a temporary pointer to it
            while (HASNT REACHED END OF TEXT FILE!!)
            {
                inputFile.getline(temp->nodeCityName, MAX_LENGTH);
                // inputFile.get(temp->nodeLati, MAX_LENGTH);
                // inputFile.get(temp->nodeLongi, MAX_LENGTH);
                temp->Next = NULL; // set to NULL: when a node is added it is currently the last in the list and so cannot point to the next
                if (start_ptr == NULL) { // if the list is currently empty, start_ptr will point to this node
                    start_ptr = temp;
                }
                else {
                    temp2 = start_ptr; // we know this is not NULL - the list is not empty!
                    while (temp2->Next != NULL) {
                        temp2 = temp2->Next; // move along the chain until we reach the end of the list
                    }
                    temp2->Next = temp;
                }
            }
            inputFile.close();
        }

    Any help you can provide would be most useful. If I need to provide any more detail I will do; I'm in a bustling canteen at the moment and concentrating is hard!!
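    A sketch of one way to fix both problems at once (my suggestion; it assumes the node's fields can be a std::string and two doubles): let operator>> drive the loop, since extraction both parses the doubles and fails cleanly at end of file, which answers the eof question too:

        #include <fstream>
        #include <string>

        // Hypothetical node layout for this sketch.
        struct locationNode {
            std::string   nodeCityName;
            double        nodeLati;
            double        nodeLongi;
            locationNode* Next;
        };

        locationNode* start_ptr = 0;

        void populateList() {
            std::ifstream inputFile("locations.txt");
            std::string city;
            double lat, lon;

            // operator>> skips whitespace, converts the doubles, and the
            // loop ends when extraction fails at end of file.
            while (inputFile >> city >> lat >> lon) {
                locationNode* temp = new locationNode; // one node per record
                temp->nodeCityName = city;
                temp->nodeLati = lat;
                temp->nodeLongi = lon;
                temp->Next = start_ptr; // prepend; append as before if file order matters
                start_ptr = temp;
            }
        }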


  • Measure text size in JavaScript

    - by kayahr
    I want to measure the size of a text in JavaScript. So far this isn't so difficult, because I can simply put a temporary invisible div into the DOM tree and check the offsetWidth and offsetHeight. The problem is, I want to do this BEFORE the DOM is ready. Here is a stub:

        <html>
          <head>
            <script type="text/javascript">
            var text = "Hello world";
            var fontFamily = "Arial";
            var fontSize = 12;
            var size = measureText(text, fontSize, fontFamily);

            function measureText(text, fontSize, fontFamily) {
                // TODO Implement me!
            }
            </script>
          </head>
          <body>
          </body>
        </html>

    Again: I KNOW how to do it asynchronously when the DOM (or body) signals that it is ready. But I want to do it synchronously right in the header, as shown in the stub above. Any ideas how I can accomplish this? My current opinion is that it is impossible, but maybe someone has a crazy idea.
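    One possibility, offered as a browser-dependent sketch: document.body doesn't exist yet inside the head, but document.documentElement does, so a throwaway measuring element can be attached there synchronously:

        function measureText(text, fontSize, fontFamily) {
            // Sketch: attach a hidden span to <html> (available before <body>),
            // measure it, then remove it.
            var span = document.createElement("span");
            span.style.fontFamily = fontFamily;
            span.style.fontSize = fontSize + "px";
            span.style.position = "absolute";  // keep it out of the layout
            span.style.visibility = "hidden";
            span.style.whiteSpace = "nowrap";
            span.appendChild(document.createTextNode(text));
            document.documentElement.appendChild(span);
            var size = { width: span.offsetWidth, height: span.offsetHeight };
            document.documentElement.removeChild(span);
            return size;
        }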


  • Reference remotely located assembly (web uri) from locally installed application?

    - by moonground.de
    Hi Stackoverflowers! :) We have a .NET application for Windows which is installed locally by Microsoft Installer. Now we need to use additional assemblies which are located online on our web servers. We'd like to refer to a remote URI like https://www.ourserver.com/OurProductName/ExternalLib.dll and reveal additional functionality, which is described roughly by a known common ("AddIn"/"Plugin") interface. These are not 3rd-party plugins; we just want to be able to exchange parts of the application frequently, without the need for frequent software updates.

    Our first idea was to add some kind of "remote reference" in Visual Studio by setting the path to the remote assembly URI. But Visual Studio downloaded the assembly immediately to a temporary directory, adding a reference to it. Our second attempt, then, is simply using a WebRequest (or WebClient) to retrieve a binary stream of the assembly, loading it "from image" by using Assembly.Load(...). This actually works, but is not very elegant and requires more additional programming for verification etc. We hoped ClickOnce would provide useful techniques, but apparently it's suitable for standalone applications only. (Correct me?)

    Is there a way (.NET native or via a framework/API) to reference remotely located assemblies? Thanks in advance and have a happy Easter!
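    For reference, the WebRequest/Assembly.Load route described above can be quite compact; a sketch (the URL is illustrative, and signature verification and caching are deliberately left out):

        using System;
        using System.Net;
        using System.Reflection;

        static class RemoteAssemblyLoader
        {
            // Sketch of "download the DLL bytes, load from image".
            public static Assembly Load(string url)
            {
                using (var client = new WebClient())
                {
                    byte[] raw = client.DownloadData(url); // fetch the DLL
                    return Assembly.Load(raw);             // load from memory
                }
            }
        }

        // Usage (the interface lookup is hypothetical):
        // var asm = RemoteAssemblyLoader.Load(
        //     "https://www.ourserver.com/OurProductName/ExternalLib.dll");
        // ...scan asm.GetTypes() for types implementing the AddIn interface...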


  • Anything wrong with this MySQL query? Takes 10+ seconds to load

    - by user345426
    I have a search that is taking 10+ seconds to execute! Keep in mind it is also searching over 200,000 products in the database. The EXPLAIN output and the MySQL query are below.

        1  SIMPLE  p    ref     PRIMARY,products_status,prod_prodid_status,product...  products_status            1  const                                    9048  Using where; Using temporary; Using filesort
        1  SIMPLE  v    ref     PRIMARY,vendors_id,vendors_vendorid                    vendors_vendorid           4  rhinomar_rhinomartnew.p.vendors_id          1
        1  SIMPLE  s    ref     products_id                                            products_id                4  rhinomar_rhinomartnew.p.products_id         1
        1  SIMPLE  pd   ref     PRIMARY,products,prod_desc_prodid_prodname             prod_desc_prodid_prodname  4  rhinomar_rhinomartnew.p.products_id         1
        1  SIMPLE  p2c  ref     PRIMARY,ptc_catidx                                     PRIMARY                    4  rhinomar_rhinomartnew.p.products_id         1  Using where; Using index
        1  SIMPLE  c    eq_ref  PRIMARY                                                PRIMARY                    4  rhinomar_rhinomartnew.p2c.categories_id     1  Using where

    MySQL query:

        select p.products_id, p.products_image, p.products_price, p.products_weight,
               p.products_unit_quantity, s.specials_new_products_price, s.status,
               pd.products_name, pd.products_img_alt
        from products p
        left join vendors v on v.vendors_id = p.vendors_id
        left join specials s on s.products_id = p.products_id
        left join products_description pd on pd.products_id = p.products_id
        left join products_to_categories p2c on p2c.products_id = p.products_id
        left join categories c on c.categories_id = p2c.categories_id
        where ( ( pd.products_name like '%apparel%' )
             or p2c.categories_id IN (773, 132, 135, 136, 119, 122, 124, 125, 126,
                                      1749, 1753, 1747, 123, 127, 130, 131, 178, 137,
                                      140, 164, 165, 166, 167, 168, 169, 832, 2045)
             or p.products_id = 'apparel'
             or p.products_model = 'apparel'
             or CONCAT(v.vendors_prefix, '-') = 'apparel'
             or CONCAT(v.vendors_prefix, '-', p.products_id) = 'apparel' )
          and p.products_status = '1'
          and c.categories_status = '1'
        group by p.products_id ASC
        order by pd.products_name
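    One structural observation (generic advice, not from the thread): the leading-wildcard LIKE '%apparel%' can never use a B-tree index, so that predicate alone forces a scan of products_description. A hedged sketch of the usual MyISAM-era alternative; note that word-based FULLTEXT matching is not exactly equivalent to a substring LIKE:

        -- Sketch: index the name column for text search (MyISAM only in this era).
        ALTER TABLE products_description ADD FULLTEXT (products_name);

        -- ...then in the WHERE clause:
        -- ( MATCH(pd.products_name) AGAINST ('apparel') OR p2c.categories_id IN (...) OR ... )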


  • All connections in pool are in use

    - by veljkoz
    We currently have a little situation on our hands: it seems that someone, somewhere forgot to close a connection in code. The result is that the pool of connections gets exhausted relatively quickly. As a temporary patch we added Max Pool Size = 500; to our connection string on the web service, and we recycle the pool when all connections are spent, until we figure this out.

    So far we have done this:

        SELECT SPId
        FROM MASTER..SysProcesses
        WHERE DBId = DB_ID('MyDb')
          AND last_batch < DATEADD(MINUTE, -15, GETDATE())

    to get the SPIDs that haven't been used for 15 minutes. We're now trying to get the query that was last executed on such a SPID with:

        DBCC INPUTBUFFER(61)

    but the queries displayed are various, meaning either something at the base level of connection handling is broken, or our deduction is erroneous... Is there an error in our thinking here? Does the DBCC / sysprocesses approach give the results we're expecting, or is there some side-effect catch? (For example, do connections held in the pool influence it?) (Please stick to what we can find out using SQL, since the guys who wrote the code are many and not all present right now.)
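    (Setting the SQL-only diagnostics aside for a moment: once the offending code is found, the root fix is ensuring every connection is disposed even when exceptions are thrown. In C#, the standard shape is a using block; a sketch:)

        using System.Data.SqlClient;

        class Db
        {
            // Sketch: 'using' returns the connection to the pool even when
            // an exception is thrown, which is exactly the leak in question.
            public static object RunQuery(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("SELECT 1", conn))
                {
                    conn.Open();
                    return cmd.ExecuteScalar();
                } // Dispose() fires here no matter what happened
            }
        }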


  • Yeoman 'grunt test' fails on clean project with 'port already in use'

    - by XMLilley
    With:

        Mac OS 10.8.4
        Node 0.10.12
        npm 1.3.1
        grunt-cli 0.1.9
        yo 1.0.0-rc.1
        bower 0.9.2
        [email protected]

    I encounter the following error with a clean yo angular project, followed by grunt server and then grunt test:

        Running "connect:test" (connect) task
        Fatal error: Port 9000 is already in use by another process.

    I'm new to Yeoman and am stumped. I've deleted my original project and created a new one in a fresh folder just to make sure I wasn't overlooking any invisible configs. I restarted the machine to make sure I wasn't running any temporary server processes I had forgotten about. After all attempts, the basic server starts fine, attaches to Chrome, and the watcher updates the browser on any changes. (Notably, the server is running on 9000, which seems odd for the test runner to also be trying to use 9000.) But I get that same error on attempting to start the test runner. Is this something I can fix, or is it an issue I should report to the Yeoman team? Thanks.
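    A sketch of the usual workaround (assuming the stock generator-angular Gruntfile layout of that vintage, where connect:test inherits the shared port): give the test server its own port so it cannot collide with the dev server that grunt server leaves running on 9000:

        // Gruntfile.js (sketch): separate ports for the dev and test servers.
        connect: {
          options: {
            port: 9000,
            hostname: 'localhost'
          },
          test: {
            options: {
              port: 9001  // anything not already held by 'grunt server'
            }
          }
        }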


  • Cannot implicitly convert type ...

    - by Newbie
    I have the following function:

        public Dictionary<DateTime, object> GetAttributeList(
            EnumFactorType attributeType,
            Thomson.Financial.Vestek.Util.DateRange dateRange)
        {
            DateTime startDate = dateRange.StartDate;
            DateTime endDate = dateRange.EndDate;
            return ((
                //Step 1: Iterate over the attribute list and filter the records
                //        by the supplied attribute type
                from assetAttribute in AttributeCollection
                where assetAttribute.AttributeType.Equals(attributeType)
                //Step 2: Assign the TimeSeriesData collection to a temporary variable
                let timeSeriesList = assetAttribute.TimeSeriesData
                //Step 3: Iterate over the TimeSeriesData list and filter the records
                //        by the supplied date
                from timeSeries in timeSeriesList.ToList()
                where timeSeries.Key >= startDate && timeSeries.Key <= endDate
                //Finally build the needed collection
                select new AssetAttribute()
                {
                    TimeSeriesData = PopulateTimeSeriesData(timeSeries.Key, timeSeries.Value)
                }).ToList<AssetAttribute>().Select(i => i.TimeSeriesData));
        }

        private Dictionary<DateTime, object> PopulateTimeSeriesData(DateTime dateTime, object value)
        {
            Dictionary<DateTime, object> timeSeriesData = new Dictionary<DateTime, object>();
            timeSeriesData.Add(dateTime, value);
            return timeSeriesData;
        }

    Error: Cannot implicitly convert type 'System.Collections.Generic.IEnumerable<...>' to 'System.Collections.Generic.Dictionary<...>'. An explicit conversion exists (are you missing a cast?)

    Using C# 3.0. Please help.
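    The query ends with .Select(i => i.TimeSeriesData), which yields a sequence of dictionaries rather than the single dictionary the method signature promises. A sketch of one way to flatten it (assuming each DateTime occurs at most once in the filtered range, since ToDictionary throws on duplicate keys):

        // Sketch: select the matching key/value pairs directly and build
        // one dictionary, instead of one dictionary per pair.
        return (from assetAttribute in AttributeCollection
                where assetAttribute.AttributeType.Equals(attributeType)
                from timeSeries in assetAttribute.TimeSeriesData
                where timeSeries.Key >= startDate && timeSeries.Key <= endDate
                select timeSeries)
               .ToDictionary(ts => ts.Key, ts => ts.Value);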


  • Logic: Best way to sample & count bytes of a 100MB+ file

    - by Jami
    Let's say I have this 170 MB file (roughly 180 million bytes). What I need to do is to create a table that lists:

    - all 4096-byte combinations found [column 'bytes'], and
    - the number of times each byte combination appeared [column 'occurrences'].

    Assume two things: I can save data very fast, but I can update my saved data very slowly. How should I sample the file and save the needed information? Here are some suggestions that are (extremely) slow:

    - Go through each 4096-byte combination in the file and save each one, but search the table first for existing combinations and update their values. This is unbelievably slow.
    - Go through each 4096-byte combination in the file, saving until there are 1 million rows of data in a temporary table. Go through that table and fix the entries (combine repeating byte combinations), then copy to the big table. Repeat with the next 1 million rows. This is faster by a bit, but still unbelievably slow.

    This is kind of like taking the statistics of the file. NOTE: I know that sampling the file can generate tons of data (around 22 GB from experience), and I know that any solution posted would take a bit of time to finish. I need the most efficient saving process.
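    A sketch of the in-memory approach this seems to be reaching for (hedged: it assumes overlapping windows; step by 4096 instead for disjoint blocks): count a fixed-size digest of each window in RAM, then bulk-insert the finished counts once, so the slow "update saved data" path is never taken. The loop below is the logic, not the performance story; a rolling hash would be the next refinement:

        import hashlib
        from collections import Counter

        WINDOW = 4096

        def count_windows(path):
            # Count every 4096-byte window via a 16-byte MD5 digest; memory
            # holds one digest per distinct window, not the windows themselves.
            counts = Counter()
            with open(path, "rb") as f:
                data = f.read()              # ~170 MB fits in RAM
            for i in range(len(data) - WINDOW + 1):
                counts[hashlib.md5(data[i:i + WINDOW]).digest()] += 1
            return counts                    # bulk-insert these rows once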


  • shell script segment to avoid overwriting files

    - by johndashen
    I have a perl script (or any executable) E which will take a file foo.xml and write a file foo.txt. I use a Beowulf cluster to run E for a large number of XML files, but I'd like to write a simple job server script in shell (bash) which doesn't overwrite existing txt files. I'm currently doing something like:

        #!/bin/sh
        PATTERN="[A-Z]*0[1-2][a-j]"; # this matches foo in all cases
        todo=`ls *.xml | grep $PATTERN`;
        isdone=`ls *.foo | grep $PATTERN`;
        whatsleft=todo - isdone; # what's the unix magic?
        # and then call the job server
        jobserve E "$whatsleft";

    and then I don't know how to get the difference between $todo and $isdone. I'd prefer using sort/uniq to something like a for loop with grep inside, but I'm not sure how to do it (pipes? temporary files?). As a bonus question, is there a way to do lookahead search in bash grep?
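    A sketch of the set-difference step using comm, which wants sorted input; the extensions are stripped so the two lists compare by basename (note the process substitution requires bash, not plain sh). As for the bonus question: GNU grep -P enables Perl lookahead, e.g. grep -P 'foo(?=bar)':

        #!/bin/bash
        PATTERN="[A-Z]*0[1-2][a-j]"

        # comm -23 prints lines present only in the first (sorted) list:
        # XML basenames that have no corresponding .txt output yet.
        whatsleft=$(comm -23 \
            <(ls *.xml | grep "$PATTERN" | sed 's/\.xml$//' | sort) \
            <(ls *.txt | grep "$PATTERN" | sed 's/\.txt$//' | sort))

        for f in $whatsleft; do
            jobserve E "$f.xml"   # job-server call as in the question
        done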


  • PHP, MySQL, Memcache / Ajax Scaling Problem

    - by Jeff Andersen
    I'm building an ajax tic-tac-toe game in PHP/MySQL. The premise of the game is that you can share a URL like mygame.com/123 with your friends and play multiple simultaneous games.

    The way I have it set up is that a file (reload.php) is called every 3 seconds while the user is viewing their game board space. This reload.php builds their game boards, and the output (HTML) replaces their current game board (thus showing games in which it is their turn). Initially I built it entirely with PHP/MySQL and had zero caching. A friend suggested handling all of the temporary/quick-read information (storing moves and ID matchups) through memcache and then building the game boards from that information.

    My issue is that both solutions hit a wall when there are roughly 30-40 active users with roughly 40-50 games running. It is running on a VPS from VPS.net with 2 nodes (dedicated CPU: 1.2 GHz, RAM: 752 MB). Each call to reload.php performs 3 select and 2 insert queries. The size of the data being pulled is negligible. The same actions happen on index.php to build the boards for the initial visit.

    Now that the backstory is done, my question is: would there be a bottleneck in that each user is polling the same file every 3 seconds to rebuild their game boards, while all users are sitting on index.php, from which the AJAX calls are made within the HTML? If so, is it possible to spread the users' calls out over a set of files designated to building the game boards (e.g. reload1.php, 2, 3, etc.) and direct users to the appropriate file? Would this relieve the pressure?

    A long-winded explanation; however, I didn't have anywhere else to ask. Thanks very much for any insight.


  • slow mysql count because of subselect

    - by frgt10
    How can I make this select statement faster? The first left join, with the subselect, is what slows it down...

        mysql> SELECT COUNT(DISTINCT w1.id) AS AMOUNT
               FROM tblWerbemittel w1
               JOIN tblVorgang v1 ON w1.object_group = v1.werbemittel_id
               INNER JOIN ( SELECT wmax.object_group, MAX( wmax.object_revision ) wmaxobjrev
                            FROM tblWerbemittel wmax
                            GROUP BY wmax.object_group ) AS wmaxselect
                       ON w1.object_group = wmaxselect.object_group
                      AND w1.object_revision = wmaxselect.wmaxobjrev
               LEFT JOIN ( SELECT vmax.object_group, MAX( vmax.object_revision ) vmaxobjrev
                           FROM tblVorgang vmax
                           GROUP BY vmax.object_group ) AS vmaxselect
                       ON v1.object_group = vmaxselect.object_group
                      AND v1.object_revision = vmaxselect.vmaxobjrev
               LEFT JOIN tblWerbemittel_has_tblAngebot wha ON wha.werbemittel_id = w1.object_group
               LEFT JOIN tblAngebot ta ON ta.id = wha.angebot_id
               LEFT JOIN tblLieferanten tl ON tl.id = ta.lieferant_id
                      AND wha.zuschlag = ( SELECT MAX(zuschlag)
                                           FROM tblWerbemittel_has_tblAngebot
                                           WHERE werbemittel_id = w1.object_group )
               WHERE w1.flags = 0 AND v1.flags = 0;
        +--------+
        | AMOUNT |
        +--------+
        |   1982 |
        +--------+
        1 row in set (1.30 sec)

    Some indexes have already been set, and as EXPLAIN shows, they are used:

        | id | select_type        | table                         | type   | possible_keys                          | key                  | key_len | ref                                           | rows | Extra                                        |
        |  1 | PRIMARY            | <derived2>                    | ALL    | NULL                                   | NULL                 | NULL    | NULL                                          | 2072 |                                              |
        |  1 | PRIMARY            | v1                            | ref    | werbemittel_group,werbemittel_id_index | werbemittel_group    | 4       | wmaxselect.object_group                       |    2 | Using where                                  |
        |  1 | PRIMARY            | <derived3>                    | ALL    | NULL                                   | NULL                 | NULL    | NULL                                          | 3376 |                                              |
        |  1 | PRIMARY            | w1                            | eq_ref | object_revision,or_og_index            | object_revision      | 8       | wmaxselect.wmaxobjrev,wmaxselect.object_group |    1 | Using where                                  |
        |  1 | PRIMARY            | wha                           | ref    | PRIMARY,werbemittel_id_index           | werbemittel_id_index | 4       | dpd.w1.object_group                           |    1 |                                              |
        |  1 | PRIMARY            | ta                            | eq_ref | PRIMARY                                | PRIMARY              | 4       | dpd.wha.angebot_id                            |    1 |                                              |
        |  1 | PRIMARY            | tl                            | eq_ref | PRIMARY                                | PRIMARY              | 4       | dpd.ta.lieferant_id                           |    1 | Using index                                  |
        |  4 | DEPENDENT SUBQUERY | tblWerbemittel_has_tblAngebot | ref    | PRIMARY,werbemittel_id_index           | werbemittel_id_index | 4       | dpd.w1.object_group                           |    1 |                                              |
        |  3 | DERIVED            | vmax                          | index  | NULL                                   | object_revision_uq   | 8       | NULL                                          | 4668 | Using index; Using temporary; Using filesort |
        |  2 | DERIVED            | wmax                          | range  | NULL                                   | or_og_index          | 4       | NULL                                          | 2168 | Using index for group-by                     |
        10 rows in set (0.01 sec)

    The main problem, given that the statement above takes about 2 seconds, seems to be the subselects, where no index can be used. How can I write the statement to be even faster? Thanks for the help. MT


  • MySQL product/tag query optimisation - please help!

    - by Nige
    Hi there. I have an SQL query I am struggling to optimise. It is basically used to pull back products for a shopping cart. The products each have tags attached, via a many-to-many table product_tag, and I also pull back a store name from a separate store table. I'm using GROUP_CONCAT to get a list of tags for display (this is why I have the strange group-by/order-by clauses at the bottom), and I need to order by dateadded, showing the latest scheduled product first. Here is the query:

        SELECT products.*, stores.name,
               GROUP_CONCAT(tags.taglabel ORDER BY tags.id ASC SEPARATOR " ") taglist
        FROM (products)
        JOIN product_tag ON products.id = product_tag.productid
        JOIN tags ON tags.id = product_tag.tagid
        JOIN stores ON products.cid = stores.siteid
        WHERE dateadded < '2010-05-28 07:55:41'
        GROUP BY products.id ASC
        ORDER BY products.dateadded DESC
        LIMIT 2

    Unfortunately, even with a small set of data (3 tags and about 12 products), the query takes 0.0034 seconds to run. Eventually I want to have about 2000 products and 50 tags in this system (I'm guessing this will be very slooooow). Here is the EXPLAIN output:

        id | select_type | table       | type   | possible_keys   | key     | key_len | ref                            | rows | Extra
        1  | SIMPLE      | tags        | ALL    | PRIMARY         | NULL    | NULL    | NULL                           | 4    | Using temporary; Using filesort
        1  | SIMPLE      | product_tag | ref    | tagid,productid | tagid   | 4       | cs_final.tags.id               | 2    |
        1  | SIMPLE      | products    | eq_ref | PRIMARY,cid     | PRIMARY | 4       | cs_final.product_tag.productid | 1    | Using where
        1  | SIMPLE      | stores      | ALL    | siteid          | NULL    | NULL    | NULL                           | 7    | Using where; Using join buffer

    Can anyone help?


  • Deleting from 2 arrays of dictionaries

    - by Matt Winters
    I have an array of dictionaries loaded from a plist (below) called arrayHistory.

        <plist version="1.0">
        <array>
            <dict>
                <key>item</key>
                <string>1</string>
                <key>result</key>
                <string>8.1</string>
                <key>date</key>
                <date>2009-12-15T19:36:59Z</date>
            </dict>
            ...
        </array>
        </plist>

    I filter this array based on 'item', so that a second array, arrayHistoryDetail, has the same structure as arrayHistory but only contains 'item's equal to, e.g., '1'. These detail items are successfully displayed in a tableView. I then want to select an item from the tableView and delete it from the tableView data source, arrayHistoryDetail (line 2 in the code below) - works; then I want to delete the item from the tableView itself (line 3 in the code below) - also works.

    My problem is that I also need to delete it from the original arrayHistory, so I tried the following. I created a temporary dictionary as an ivar:

        NSMutableDictionary *tempDict;
        @property (nonatomic, retain) NSMutableDictionary *tempDict;

    Then my thinking was to make a copy in line 1 and remove it from the original array in line 4:

        1 tempDict = [arrayHistoryDetail objectAtIndex: indexPath.row];
        2 [arrayHistoryDetail removeObjectAtIndex: indexPath.row];
        3 [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade];
        4 [arrayHistory removeObject:tempDict];

    It didn't work. Could someone please guide me in the right direction? I'm thinking that tempDict is a pointer and that removeObject needs a copy? I don't know. Thanks.
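    For what it's worth, a hedged sketch of the usual resolution: removeObject: matches by isEqual: (content comparison for dictionaries), while removeObjectIdenticalTo: matches by pointer; which one is needed depends on whether the filter copied the dictionaries or kept references to the originals:

        // Sketch: capture the selected dictionary first, then delete everywhere.
        NSDictionary *selected = [[arrayHistoryDetail objectAtIndex:indexPath.row] retain];

        [arrayHistoryDetail removeObjectAtIndex:indexPath.row];
        [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                         withRowAnimation:UITableViewRowAnimationFade];

        // If both arrays hold the same instances, pointer identity works:
        [arrayHistory removeObjectIdenticalTo:selected];
        // If the filter produced copies, fall back to content equality:
        // [arrayHistory removeObject:selected];

        [selected release];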


  • How can I exclude LEFT JOINed tables from TOP in SQL Server?

    - by Kalessin
    Let's say I have two tables of books and two tables of their corresponding editions. I have a query as follows:

        SELECT TOP 10 *
        FROM (
            SELECT hbID, hbTitle, hbPublisherID, hbPublishDate, hbedID, hbedDate
            FROM hardback
            LEFT JOIN hardbackEdition ON hbID = hbedID
            UNION
            SELECT pbID, pbTitle, pbPublisher, pbPublishDate, pbedID, pbedDate
            FROM paperback
            LEFT JOIN paperbackEdition ON pbID = pbedID
        ) books
        WHERE hbPublisherID = 7
        ORDER BY hbPublishDate DESC

    If there are 5 editions of the first two hardback and/or paperback books, this query only returns two books. However, I want the TOP 10 to apply only to the number of actual book records returned. Is there a way I can select 10 actual books and still get all of their associated edition records? In case it's relevant, I do not have database permissions to CREATE and DROP temporary tables. Thanks for reading!

    Update. To clarify: the paperback table has an associated table of paperback editions, and the hardback table has an associated table of hardback editions. The hardback and paperback tables are not related to each other, except insofar as the user will (hopefully!) see them displayed together.
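    A hedged sketch of one approach that needs no temp tables: limit to 10 books first in a derived table, then join the editions onto that result, so TOP counts books rather than book-edition rows (hardback half shown; the paperback half mirrors it):

        -- Sketch: TOP applies to books only; editions attach afterwards.
        SELECT b.hbID, b.hbTitle, b.hbPublishDate, ed.hbedID, ed.hbedDate
        FROM (
            SELECT TOP 10 hbID, hbTitle, hbPublisherID, hbPublishDate
            FROM hardback
            WHERE hbPublisherID = 7
            ORDER BY hbPublishDate DESC
        ) AS b
        LEFT JOIN hardbackEdition ed ON ed.hbedID = b.hbID
        ORDER BY b.hbPublishDate DESC;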


  • Sheet and thread memory problem

    - by Xident
    Hi guys, recently I started a project which can export some precalculated graphics/audio to files for post-processing. All I did was put a new window (with a progress indicator and an abort button) in my main xib and open it using the following code:

        [NSApp beginSheet:REC_Sheet modalForWindow:MOTHER_WINDOW
            modalDelegate:self didEndSelector:nil contextInfo:nil];

        NSModalSession session = [NSApp beginModalSessionForWindow:REC_Sheet];
        RECISNOTDONE = YES;
        while (RECISNOTDONE) {
            if ([NSApp runModalSession:session] != NSRunContinuesResponse)
                break;
            usleep(100);
        }
        [NSApp endModalSession:session];

    A background thread (pthread) was started earlier to actually perform the work and save all the targas/wave files. This worked great, but after a while it turned out that the main thread was no longer responding and my memory footprint was rising unstoppably. I tried to debug it with Instruments and saw a lot of CFHash etc. stuff growing towards infinity. By accident I clicked below the sheet, and temporarily it helped: the main thread (AppKit?) released its stuff, but only for a little while.

    I can't explain it. At first I thought it was the access from my thread to the progress bar to update the progress (at 0.5 s intervals), so I cut that out. But even if I'm not updating anything and do nothing with the progress bar, my application eats up all the memory, because the main thread never releases its "main event" (or whatever) stuff.

    Is there any possibility to drain this main-thread memory (a run loop / NSApp call?), and why on earth doesn't the main thread respond anymore after this simple task? I don't have a clue anymore, please help! Thanks in advance!

    P.S. How do you guys implement "threaded long task" stuff and update your GUI?
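    A plausible culprit (my reading, not confirmed here): each spin of the modal session pumps events whose temporaries land in an autorelease pool that never drains, because control never returns to the run loop's pool boundary. A sketch of draining one manually per iteration:

        // Sketch: drain an autorelease pool on every spin of the modal
        // session so per-event temporaries are released, not accumulated.
        NSModalSession session = [NSApp beginModalSessionForWindow:REC_Sheet];
        RECISNOTDONE = YES;
        while (RECISNOTDONE) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSInteger response = [NSApp runModalSession:session];
            [pool drain];
            if (response != NSRunContinuesResponse)
                break;
            usleep(100);
        }
        [NSApp endModalSession:session];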


  • Use `require()` with `node --eval`

    - by rentzsch
    When utilizing node.js's newish support for --eval, I get an error (ReferenceError: require is not defined) when I attempt to use require(). Here's an example of the failure:

        $ node --eval 'require("http");'
        undefined:1
        ^
        ReferenceError: require is not defined
            at eval at <anonymous> (node.js:762:36)
            at eval (native)
            at node.js:762:36
        $

    Here's a working example of using require() typed into the REPL:

        $ node
        > require("http");
        { STATUS_CODES:
           { '100': 'Continue'
           , '101': 'Switching Protocols'
           , '102': 'Processing'
           , '200': 'OK'
           , '201': 'Created'
           , '202': 'Accepted'
           , '203': 'Non-Authoritative Information'
           , '204': 'No Content'
           , '205': 'Reset Content'
           , '206': 'Partial Content'
           , '207': 'Multi-Status'
           , '300': 'Multiple Choices'
           , '301': 'Moved Permanently'
           , '302': 'Moved Temporarily'
           , '303': 'See Other'
           , '304': 'Not Modified'
           , '305': 'Use Proxy'
           , '307': 'Temporary Redirect'
           , '400': 'Bad Request'
           , '401': 'Unauthorized'
           , '402': 'Payment Required'
           , '403': 'Forbidden'
           , '404': 'Not Found'
           , '405': 'Method Not Allowed'
           , '406': 'Not Acceptable'
           , '407': 'Proxy Authentication Required'
           , '408': 'Request Time-out'
           , '409': 'Conflict'
           , '410': 'Gone'
           , '411': 'Length Required'
           , '412': 'Precondition Failed'
           , '413': 'Request Entity Too Large'
           , '414': 'Request-URI Too Large'
           , '415': 'Unsupported Media Type'
           , '416': 'Requested Range Not Satisfiable'
           , '417': 'Expectation Failed'
           , '418': 'I\'m a teapot'
           , '422': 'Unprocessable Entity'
           , '423': 'Locked'
           , '424': 'Failed Dependency'
           , '425': 'Unordered Collection'
           , '426': 'Upgrade Required'
           , '500': 'Internal Server Error'
           , '501': 'Not Implemented'
           , '502': 'Bad Gateway'
           , '503': 'Service Unavailable'
           , '504': 'Gateway Time-out'
           , '505': 'HTTP Version not supported'
           , '506': 'Variant Also Negotiates'
           , '507': 'Insufficient Storage'
           , '509': 'Bandwidth Limit Exceeded'
           , '510': 'Not Extended'
           }
        , IncomingMessage: { [Function: IncomingMessage] super_: [Function: EventEmitter] }
        , OutgoingMessage: { [Function: OutgoingMessage] super_: [Function: EventEmitter] }
        , ServerResponse: { [Function: ServerResponse] super_: [Circular] }
        , ClientRequest: { [Function: ClientRequest] super_: [Circular] }
        , Server: { [Function: Server] super_: { [Function: Server] super_: [Function: EventEmitter] } }
        , createServer: [Function]
        , Client: { [Function: Client] super_: { [Function: Stream] super_: [Function: EventEmitter] } }
        , createClient: [Function]
        , cat: [Function]
        }
        >

    Is there a way to use require() with node's --eval? I'm on node 0.2.6 on Mac OS X 10.6.5.
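    A hedged workaround for that node version (my suggestion, not from the thread): run the one-liner as a real script file, where module scope (and thus require) exists, instead of via --eval:

        # Sketch: execute the snippet as a module rather than with --eval.
        echo 'var http = require("http"); console.log(http.STATUS_CODES[404]);' > /tmp/snippet.js
        node /tmp/snippet.js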


  • Is it possible to make one click fire two or more events in JavaScript?

    - by NewInAlbert
    I am currently making a temporary download page for website visitors. The page includes a form; after the visitor fills the form in, the site takes them to the PDF download page. On the download page there are some PDF download links (just a tags). However, I want to attach an onclick event to those links, so that once one has been clicked, the page refreshes automatically or redirects to another page.

        <a href="/file.pdf" onclick="window.location.reload()">The File</a>

    I have tried the jQuery way as well:

        <a href="/file.pdf" id="FileDownload">The File</a>
        <script>
        $("#FileDownload").click(function(){
            location.reload();
        });
        </script>

    But none of them work. Do you masters have any good ideas about this? Many thanks. P.S. What if I wanted to add a countdown once the file download has started, and then reload the page when the countdown finishes? Looks like I have asked several questions... Thanks a ton in advance.
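    A likely explanation, offered as a hedged diagnosis: reloading synchronously inside the click handler can cancel the navigation that starts the download. Deferring the reload with a timer lets the download begin first, and the same timer can drive the countdown asked about in the P.S.; a sketch:

        // Sketch: let the download navigation start, then reload after a delay.
        $("#FileDownload").click(function () {
            var seconds = 5; // hypothetical countdown length
            var timer = setInterval(function () {
                seconds--;
                $("#countdown").text(seconds + "s"); // assumes a #countdown element
                if (seconds <= 0) {
                    clearInterval(timer);
                    window.location.reload();
                }
            }, 1000);
            // note: do NOT return false; the default action (the download) must proceed
        });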


  • Inexplicably slow query in MySQL

    - by Brandon M.
    Given this result set:

        mysql> EXPLAIN SELECT c.cust_name, SUM(l.line_subtotal) FROM customer c
            -> JOIN slip s ON s.cust_id = c.cust_id
            -> JOIN line l ON l.slip_id = s.slip_id
            -> JOIN vendor v ON v.vend_id = l.vend_id WHERE v.vend_name = 'blahblah'
            -> GROUP BY c.cust_name
            -> HAVING SUM(l.line_subtotal) > 49999
            -> ORDER BY c.cust_name;
        | id | select_type | table | type   | possible_keys                   | key           | key_len | ref                  | rows | Extra                                        |
        |  1 | SIMPLE      | v     | ref    | PRIMARY,idx_vend_name           | idx_vend_name | 12      | const                |    1 | Using where; Using temporary; Using filesort |
        |  1 | SIMPLE      | l     | ref    | idx_vend_id                     | idx_vend_id   | 4       | csv_import.v.vend_id |  446 |                                              |
        |  1 | SIMPLE      | s     | eq_ref | PRIMARY,idx_cust_id,idx_slip_id | PRIMARY       | 4       | csv_import.l.slip_id |    1 |                                              |
        |  1 | SIMPLE      | c     | eq_ref | PRIMARY,cIndex                  | PRIMARY       | 4       | csv_import.s.cust_id |    1 |                                              |
        4 rows in set (0.04 sec)

    I'm a bit baffled as to why the query referenced by this EXPLAIN statement is still taking about a minute to execute. Isn't it true that this query only has to search through 449 rows? Anyone have any idea as to what could be slowing it down so much?

