Search Results

Search found 10719 results on 429 pages for 'temp tables'.


  • Why does concatenating strings in the argument of EXEC sometimes cause a syntax error in T-SQL?

    - by Tim Goodman
    In MS SQL Server Management Studio 2005, running this code:

        EXEC('SELECT * FROM employees WHERE employeeID = ' + CAST(3 AS VARCHAR))

    gives this error: Incorrect syntax near 'CAST'. However, this works:

        DECLARE @temp VARCHAR(4000)
        SET @temp = 'SELECT * FROM employees WHERE employeeID = ' + CAST(3 AS VARCHAR)
        EXEC(@temp)

    I found an explanation here: http://stackoverflow.com/questions/1044831/t-sql-cannot-pass-concatenated-string-as-argument-to-stored-procedure According to the accepted answer, EXEC can take a local variable or a value as its argument, but not an expression. If that's the case, though, why does this work?

        DECLARE @temp VARCHAR(4000)
        SET @temp = CAST(3 AS VARCHAR)
        EXEC('SELECT * FROM employees WHERE employeeID = ' + @temp)

    'SELECT * FROM employees WHERE employeeID = ' + @temp certainly looks like an expression to me, yet the code executes with no errors.
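    For what it's worth, the EXEC grammar allows only string literals and string variables inside the parentheses, optionally concatenated with +; any other expression, including a function call like CAST, fails to parse, which is consistent with both observations above. A minimal T-SQL sketch of the boundary:

        -- Allowed: string literal + string variable, joined with +
        DECLARE @id VARCHAR(10)
        SET @id = '3'
        EXEC('SELECT * FROM employees WHERE employeeID = ' + @id)

        -- Not allowed: a function call inside EXEC(...)
        -- EXEC('SELECT * FROM employees WHERE employeeID = ' + CAST(3 AS VARCHAR))
        -- fails with "Incorrect syntax near 'CAST'"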


  • recv sometimes receiving incomplete data

    - by milo
    Hi all, I have the following issue. Here is the chunk of code:

        void get_all_buf(int sock, std::string & inStr)
        {
            int n = 1;
            char c;
            char temp[1024*1024];
            bzero(temp, sizeof(temp));
            n = recv(sock, temp, sizeof(temp), 0);
            inStr = temp;
        };

    But sometimes recv returns only part of the data (the length is always less than sizeof(temp)), not the whole thing. The sending side always sends the complete data (I confirmed it with a sniffer). What's the matter? Thanks. P.S. I know good manners suggest checking n (if (n < 0) perror("error while receiving data")), but that's not the cause of my problem. P.S.2 I forgot to mention: it's a blocking socket.
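    This is expected behaviour for a TCP stream socket: recv returns whatever bytes have arrived so far, which can be any fraction of what the peer wrote, so the read must loop until the application-level message is complete. A minimal sketch of such a loop, assuming the peer signals completion by closing the connection (real protocols usually use a length prefix or delimiter instead):

        #include <sys/socket.h>
        #include <string>

        // Read until the peer performs an orderly shutdown; returns false on error.
        bool get_all_buf(int sock, std::string &inStr)
        {
            char chunk[4096];
            inStr.clear();
            for (;;) {
                ssize_t n = recv(sock, chunk, sizeof(chunk), 0);
                if (n > 0)
                    inStr.append(chunk, static_cast<size_t>(n)); // safe even if data contains '\0'
                else if (n == 0)
                    return true;   // connection closed: message complete
                else
                    return false;  // error: inspect errno
            }
        }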


  • [CODE GENERATION] How to generate DELETE statements in PL/SQL, based on the tables' FK relations?

    - by The chicken in the kitchen
    Is it possible, via script or tool, to automatically generate many DELETE statements based on the tables' FK relations, using Oracle PL/SQL? For example: I have the table CHICKEN (CHICKEN_CODE NUMBER), and there are 30 tables with FK references to its CHICKEN_CODE from which I need to delete; there are also another 150 tables foreign-key-linked to those 30 tables from which I need to delete first. Is there some PL/SQL tool/script I can run to generate all the necessary DELETE statements, based on the FK relations, for me? (By the way, I know about cascade delete on the relations, but please pay attention: I CAN'T USE IT IN MY PRODUCTION DATABASE, because it's dangerous!) I'm using Oracle Database 10g R2. This is a view I have previously written, but of course it is not recursive - and it contains the error of using DBA_CONSTRAINTS instead of ALL_CONSTRAINTS:

        CREATE OR REPLACE FORCE VIEW RUN (OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME, VINCOLO) AS
        SELECT OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME,
               '(' || LTRIM(EXTRACT(XMLAGG(XMLELEMENT("x", ',' || COLUMN_NAME)), '/x/text()'), ',') || ')' VINCOLO
          FROM (SELECT CON1.OWNER OWNER_1,
                       CON1.TABLE_NAME TABLE_NAME_1,
                       CON1.CONSTRAINT_NAME CONSTRAINT_NAME_1,
                       CON1.DELETE_RULE,
                       CON1.STATUS,
                       CON.TABLE_NAME,
                       CON.CONSTRAINT_NAME,
                       COL.POSITION,
                       COL.COLUMN_NAME
                  FROM DBA_CONSTRAINTS CON, DBA_CONS_COLUMNS COL, DBA_CONSTRAINTS CON1
                 WHERE CON.OWNER = 'TABLE_OWNER'
                   AND CON.TABLE_NAME = 'TABLE_OWNED'
                   AND ((CON.CONSTRAINT_TYPE = 'P') OR (CON.CONSTRAINT_TYPE = 'U'))
                   AND COL.TABLE_NAME = CON1.TABLE_NAME
                   AND COL.CONSTRAINT_NAME = CON1.CONSTRAINT_NAME
                   --AND CON1.OWNER = CON.OWNER
                   AND CON1.R_CONSTRAINT_NAME = CON.CONSTRAINT_NAME
                   AND CON1.CONSTRAINT_TYPE = 'R'
                 GROUP BY CON1.OWNER, CON1.TABLE_NAME, CON1.CONSTRAINT_NAME, CON1.DELETE_RULE,
                          CON1.STATUS, CON.TABLE_NAME, CON.CONSTRAINT_NAME, COL.POSITION, COL.COLUMN_NAME)
         GROUP BY OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME;
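    One way to make the walk recursive is a hierarchical (CONNECT BY) query over the FK graph in the data dictionary. A sketch, with the root table name as an assumption to adapt; it only emits skeleton statements, and each generated DELETE still needs a WHERE clause tracing its FK path back to CHICKEN_CODE:

        -- Generate DELETEs children-first (deepest LEVEL first)
        SELECT LPAD(' ', 2 * (LEVEL - 1)) || 'DELETE FROM ' || table_name || ';' AS stmt
          FROM (SELECT child.table_name,
                       parent.table_name AS parent_table
                  FROM all_constraints child
                  JOIN all_constraints parent
                    ON child.r_constraint_name = parent.constraint_name
                   AND child.r_owner = parent.owner
                 WHERE child.constraint_type = 'R')
         START WITH parent_table = 'CHICKEN'
        CONNECT BY NOCYCLE PRIOR table_name = parent_table
         ORDER BY LEVEL DESC;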


  • How can I exclude LEFT JOINed tables from TOP in SQL Server?

    - by Kalessin
    Let's say I have two tables of books and two tables of their corresponding editions. I have a query as follows:

        SELECT TOP 10 *
        FROM (SELECT hbID, hbTitle, hbPublisherID, hbPublishDate, hbedID, hbedDate
                FROM hardback
                LEFT JOIN hardbackEdition ON hbID = hbedID
              UNION
              SELECT pbID, pbTitle, pbPublisher, pbPublishDate, pbedID, pbedDate
                FROM paperback
                LEFT JOIN paperbackEdition ON pbID = pbedID
             ) books
        WHERE hbPublisherID = 7
        ORDER BY hbPublishDate DESC

    If there are 5 editions of the first two hardback and/or paperback books, this query only returns two books. However, I want the TOP 10 to apply only to the number of actual book records returned. Is there a way I can select 10 actual books and still get all of their associated edition records? In case it's relevant, I do not have database permissions to CREATE and DROP temporary tables. Thanks for reading! Update - to clarify: the paperback table has an associated table of paperback editions, and the hardback table has an associated table of hardback editions. The hardback and paperback tables are not related to each other, except through the user who will (hopefully!) see them displayed together.
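    One hedged workaround, sketched here for the hardback half only (the UNION version can be nested in place of the hardback table): apply the TOP inside a derived table that returns books alone, then LEFT JOIN the editions onto those ten rows, so edition counts no longer eat into the TOP:

        SELECT b.hbID, b.hbTitle, b.hbPublishDate, e.hbedID, e.hbedDate
        FROM (SELECT TOP 10 hbID, hbTitle, hbPublishDate
                FROM hardback
               WHERE hbPublisherID = 7
               ORDER BY hbPublishDate DESC) b
        LEFT JOIN hardbackEdition e ON b.hbID = e.hbedID
        ORDER BY b.hbPublishDate DESC;

    No temporary tables are involved, so the missing CREATE/DROP permissions don't matter.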


  • MySQL: inserting records from table A into tables B and C (linked by foreign key) depending on column values in table A

    - by Chez
    Hi all, I have been searching high and low for a simple solution to a MySQL insert problem. The problem is as follows: I am putting together an organisational database consisting of departments and desks. A department may or may not have n desks. Both departments and desks have their own table, linked by a foreign key in each desk record pointing to the relevant record in departments (i.e. its PK). I have a temporary table into which I place all new department data (n records long). In this table, a department's desk records follow the department record directly. In the TEMP table, if the column department_name has a value, the record is a department; if it doesn't, it will have a value in the column desk and is therefore a desk belonging to the department above it. As I said, there may be several desk records before you get to the next department record. OK, so what I want to do is the following: insert the departments into the departments table and their desks into the desks table, generating in each desk record a foreign key to the relevant department's id. In pseudo-ish code:

        for each record in TEMP table
            if department
                INSERT the record into departments
                get the id of the newly created departments record and store it somewhere
            if desk
                INSERT the desk into the desks table, with the relevant department's id as the foreign key

    Note once again that all of a department's desks directly follow that department in the TEMP table. Many thanks
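    The usual MySQL tool for "store the id somewhere" is LAST_INSERT_ID(), which returns the AUTO_INCREMENT value generated by the most recent INSERT on the current connection. A sketch of the per-row logic (column names are assumptions); inside a stored procedure, a cursor over TEMP in its stored order can apply this pattern row by row, refreshing @dept_id whenever department_name is non-null:

        INSERT INTO departments (department_name)
        VALUES ('Accounting');
        SET @dept_id = LAST_INSERT_ID();   -- id of the department just created

        INSERT INTO desks (department_id, desk)
        VALUES (@dept_id, 'Desk 1'),
               (@dept_id, 'Desk 2');       -- the desk rows that followed it in TEMP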


  • How can I join 3 tables with mysql & php?

    - by steven
    Check out the page http://www.mujak.com/test/test3.php. It pulls the user's post, username, xbc/xlk tags etc., which is perfect... BUT since I am pulling information from a MyBB bulletin board system, it's quite different. When replying, people are allowed to change the "Thread Subject" simply by replying and changing it. I don't want it to SHOW the changed subject title, just the original title of all posts in that thread. By default a reply gets the subject "RE: thread title". People can easily edit this, it will show up in the "Subject" cell, and readers won't know which thread it was posted in because the subject was changed when replying. So I just want to keep the original thread title on replies. Make sense?

        Tables: mybb_users      Fields: uid, username
        Tables: mybb_userfields Fields: ufid
        Tables: mybb_posts      Fields: pid, tid, replyto, subject, ufid, username, uid, message
        Tables: mybb_threads    Fields: tid, fid, subject, uid, username, lastpost, lastposter, lastposteruid

    I have tried multiple queries with no success:

        $result = mysql_query("
            SELECT * FROM mybb_users
            LEFT JOIN (mybb_posts, mybb_userfields, mybb_threads)
              ON (mybb_userfields.ufid = mybb_posts.uid
                  AND mybb_threads.tid = mybb_posts.tid
                  AND mybb_users.uid = mybb_userfields.ufid)
            WHERE mybb_posts.fid = 42");

        $result = mysql_query("
            SELECT * FROM mybb_users
            LEFT JOIN (mybb_posts, mybb_userfields, mybb_threads)
              ON (mybb_userfields.ufid = mybb_posts.uid
                  AND mybb_threads.tid = mybb_posts.tid
                  AND mybb_users.uid = mybb_posts.uid)
            WHERE mybb_threads.fid = 42");

        $result = mysql_query("
            SELECT * FROM mybb_posts
            LEFT JOIN (mybb_userfields, mybb_threads)
              ON (mybb_userfields.ufid = mybb_posts.uid
                  AND mybb_threads.tid = mybb_posts.tid)
            WHERE mybb_posts.fid = 42");
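    Since mybb_threads normally holds the subject the thread was created with, one sketch is to join each post to its thread and select the thread's subject while ignoring the per-post subject entirely (only fields listed above are used):

        SELECT u.username, p.message, t.subject AS thread_subject
          FROM mybb_posts p
          JOIN mybb_threads t ON t.tid = p.tid
          JOIN mybb_users u   ON u.uid = p.uid
         WHERE t.fid = 42;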


  • MySQL NDB cluster - node restart

    - by Arafat
    Hi guys! I just set up a MySQL cluster on a fairly decent baby (IBM x3650 M3) with 24GB memory, a 6-core Xeon, and SAS 6Gbps HDDs, running Debian Lenny 5, 64-bit. The NDB version is 7.1.9a. Our database size on MyISAM is around 3.2 GB; the ndb_size estimate is 58GB for the NDB engine. A little info about my database: 150 common tables for global purposes, plus 130 tables for each client. So it goes like this: 130 x 115 (clients) = 14,950 tables. Is it normal or usual to have 14,000 tables in one database? The reasons we did this were easy maintenance and per-client customization. Now, the problem: NDB cluster can only support 20,320 tables, but it can support 5,000,000,000 rows in one table if I'm not wrong. My real headache is that my cluster data node takes less than two minutes to start up with no data, but as soon as I convert my tables to NDB - only 2,000 tables so far - the data node takes at least 30 to 40 minutes to start up. Is that normal? If I convert all my tables to NDB, will it take even longer? Or, say I consolidate my 14,000 tables' data into the base set, which is 130 tables - will that help? Or is there anything idiotically wrong in what I'm doing? I'll attach my config.ini file soon; here's the simple overview of my config:

        DataMemory = 14G
        IndexMemory = 3G
        MaxNoOfTables = 14000
        MaxNoOfAttributes = 78000

    I'm just testing these values with 2,000 tables first. Please advise how to increase the startup speed, and point out where I'm going wrong. Thanks in advance guys!


  • Does Lapping a CPU / Heatsink actually drop the temp?

    - by Pure.Krome
    Hi folks, I've been watching some YouTube videos about lapping a CPU. I'd never heard of this modding technique before and, though extreme, I was wondering whether it actually works. Assuming you lap your CPU and/or heatsink correctly, will the temps drop? When I say drop, at least a 1 degree drop counts as success (for the sake of this debate). To keep this topic clean, please refrain from commenting on the overkill of labour just for a 1 degree (worst case) drop, etc. This is a discussion about the theory and concept, not personal opinion on whether to lap or not.


  • Apache AliasMatch and DirectoryMatch not working?

    - by Alex
    I have the following config. Please notice the Alias and Directory equivalents: uncommented, they work as expected, but the dynamic/regex-based versions don't. Any ideas?

        <VirtualHost *:80>
            ServerName temp.dev.local
            ServerAlias temp.dev.local
            DocumentRoot "C:\wamp\www\temp\public"
            <Directory "C:\wamp\www\temp\public">
                AllowOverride all
                Order Allow,Deny
                Allow from all
            </Directory>

            # Alias /private/application/core/page/assets/images/ "C:/wamp/www/temp/private/application/core/page/assets/images/"
            # <Directory "C:/wamp/www/temp/private/application/core/page/assets/images/">
            AliasMatch ^/private/application/(.*)/(.*)/assets/images/ /private/application/$1/$2/assets/images/
            <DirectoryMatch "^/private/application/(.*)/(.*)/assets/images/">
                Options Indexes FollowSymlinks MultiViews Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </DirectoryMatch>
        </VirtualHost>
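    Two details are worth checking here. The target of an AliasMatch must be a full filesystem path with any needed captures substituted in (the regex version above maps back to a URL-style path, unlike the commented-out Alias), and DirectoryMatch matches filesystem directories, not URLs. A sketch of the corrected pair, assuming the same layout as the commented-out lines:

        AliasMatch ^/private/application/([^/]+)/([^/]+)/assets/images/(.*)$ "C:/wamp/www/temp/private/application/$1/$2/assets/images/$3"
        <DirectoryMatch "^C:/wamp/www/temp/private/application/[^/]+/[^/]+/assets/images/">
            Options Indexes FollowSymlinks MultiViews Includes
            AllowOverride None
            Order allow,deny
            Allow from all
        </DirectoryMatch>

    Unlike Alias, AliasMatch does not append the rest of the URL automatically, hence the third capture group.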


  • MySQL query, 2 similar servers, 2 minute difference in execution times

    - by mr12086
    I had a similar question on Stack Overflow, but it seems to be more server/MySQL-setup related than coding. The queries below all execute instantly on our development server, whereas on production they can take up to 2 minutes 20 seconds. The execution time seems to be affected by how ambiguous the LIKE strings are: if they closely match a country that has few matches it takes less time, and if you use something like 'ge' for Germany it takes longer to execute. But it doesn't always work out like that; at times it's quite erratic. "Sending data" appears to be the culprit - but why, and what does that mean? Also, free memory on production looks quite low.

    Production: Intel Quad Xeon E3-1220 3.1GHz, 4GB DDR3, 2x 1TB SATA in RAID1, network speed 100Mb, Ubuntu.
    Development: Intel Core i3-2100 (2C/4T, 3.10GHz), 500 GB SATA (no RAID), 4GB DDR3.

    UPDATE 2: mysqltuner output:

        [prod]
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.61-0ubuntu0.10.04.1
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 103M (Tables: 180)
        [--] Data in InnoDB tables: 491M (Tables: 19)
        [!!] Total fragmented tables: 38
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 77d 4h 6m 1s (53M q [7.968 qps], 14M conn, TX: 87B, RX: 12B)
        [--] Reads / Writes: 98% / 2%
        [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads)
        [OK] Maximum possible memory usage: 463.8M (11% of installed RAM)
        [OK] Slow queries: 0% (12K/53M)
        [OK] Highest usage of available connections: 22% (34/151)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/10.6M
        [OK] Key buffer hit rate: 98.7% (162M cached / 2M reads)
        [OK] Query cache efficiency: 20.7% (7M cached / 36M selects)
        [!!] Query cache prunes per day: 3934
        [OK] Sorts requiring temporary tables: 1% (3K temp sorts / 230K sorts)
        [!!] Joins performed without indexes: 71068
        [OK] Temporary tables created on disk: 24% (3M on disk / 13M total)
        [OK] Thread cache hit rate: 99% (690 created / 14M connections)
        [!!] Table cache hit rate: 0% (64 open / 85M opened)
        [OK] Open file limit used: 12% (128/1K)
        [OK] Table locks acquired immediately: 99% (16M immediate / 16M locks)
        [!!] InnoDB data size / buffer pool: 491.9M/8.0M
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Enable the slow query log to troubleshoot bad queries
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            query_cache_size (> 16M)
            join_buffer_size (> 128.0K, or always use indexes with joins)
            table_cache (> 64)
            innodb_buffer_pool_size (>= 491M)

        [dev]
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.62-0ubuntu0.11.10.1
        [!!] Switch to 64-bit OS - MySQL cannot currently use all of your RAM
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 185M (Tables: 632)
        [--] Data in InnoDB tables: 967M (Tables: 38)
        [!!] Total fragmented tables: 73
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1d 2h 26m 9s (5K q [0.058 qps], 1K conn, TX: 4M, RX: 1M)
        [--] Reads / Writes: 99% / 1%
        [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads)
        [OK] Maximum possible memory usage: 463.8M (11% of installed RAM)
        [OK] Slow queries: 0% (0/5K)
        [OK] Highest usage of available connections: 1% (2/151)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/18.6M
        [OK] Key buffer hit rate: 99.9% (60K cached / 36 reads)
        [OK] Query cache efficiency: 44.5% (1K cached / 2K selects)
        [OK] Query cache prunes per day: 0
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 44 sorts)
        [OK] Temporary tables created on disk: 24% (162 on disk / 666 total)
        [OK] Thread cache hit rate: 99% (2 created / 1K connections)
        [!!] Table cache hit rate: 1% (64 open / 4K opened)
        [OK] Open file limit used: 8% (88/1K)
        [OK] Table locks acquired immediately: 100% (1K immediate / 1K locks)
        [!!] InnoDB data size / buffer pool: 967.7M/8.0M
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Enable the slow query log to troubleshoot bad queries
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            table_cache (> 64)
            innodb_buffer_pool_size (>= 967M)

    UPDATE 1: When testing the queries listed here there is usually no more than one other query taking place, and usually none. Production is handling Apache requests that development sees very few of (only myself and one other person access development) - could the 4GB of RAM be getting exhausted by running both Apache and the MySQL server on a single machine?

    Production:

        sudo hdparm -tT /dev/sda
        /dev/sda:
        Timing cached reads:   24872 MB in 2.00 seconds = 12450.72 MB/sec
        Timing buffered disk reads:  368 MB in 3.00 seconds = 122.49 MB/sec
        sudo hdparm -tT /dev/sdb
        /dev/sdb:
        Timing cached reads:   24786 MB in 2.00 seconds = 12407.22 MB/sec
        Timing buffered disk reads:  350 MB in 3.00 seconds = 116.53 MB/sec

    Server version (MySQL + Ubuntu): 5.1.61-0ubuntu0.10.04.1

    Development:

        sudo hdparm -tT /dev/sda
        /dev/sda:
        Timing cached reads:   10632 MB in 2.00 seconds = 5319.40 MB/sec
        Timing buffered disk reads:  400 MB in 3.01 seconds = 132.85 MB/sec

    Server version (MySQL + Ubuntu): 5.1.62-0ubuntu0.11.10.1

    ORIGINAL DATA: This query is NOT the query in question, but is related, so I'll post it:

        SELECT f.form_question_has_answer_id
        FROM form_question_has_answer f
        INNER JOIN project_company_has_user p
                ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id
        INNER JOIN company c
                ON p.project_company_has_user_company_id = c.company_id
        INNER JOIN project p2
                ON p.project_company_has_user_project_id = p2.project_id
        INNER JOIN user u
                ON p.project_company_has_user_user_id = u.user_id
        INNER JOIN form f2
                ON p.project_company_has_user_project_id = f2.form_project_id
        WHERE (f2.form_template_name = 'custom'
               AND p.project_company_has_user_garbage_collection = 0
               AND p.project_company_has_user_project_id = '29')
          AND (LCASE(c.company_country) LIKE '%ge%'
               OR LCASE(c.company_country) LIKE '%abcde%')
          AND f.form_question_has_answer_form_id = '174'

    The explain plan for the above query, run on both dev and production, produces the same plan (condensed from the tabular output):

        id | select_type | table | type   | possible_keys | key | key_len | ref | rows | Extra
        1  | SIMPLE      | p2    | const  | PRIMARY | PRIMARY | 4 | const | 1 | Using index
        1  | SIMPLE      | f     | ref    | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | const | 796 | Using where
        1  | SIMPLE      | u     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using index
        1  | SIMPLE      | p     | ref    | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where
        1  | SIMPLE      | f2    | ref    | form_project_id | form_project_id | 4 | const | 15 | Using where
        1  | SIMPLE      | c     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where

    This query takes 2 minutes ~20 seconds to execute. The query that is ACTUALLY being run on the server is this one:

        SELECT COUNT(*) AS num_results
        FROM (SELECT f.form_question_has_answer_id
              FROM form_question_has_answer f
              INNER JOIN project_company_has_user p
                      ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id
              INNER JOIN company c
                      ON p.project_company_has_user_company_id = c.company_id
              INNER JOIN project p2
                      ON p.project_company_has_user_project_id = p2.project_id
              INNER JOIN user u
                      ON p.project_company_has_user_user_id = u.user_id
              INNER JOIN form f2
                      ON p.project_company_has_user_project_id = f2.form_project_id
              WHERE (f2.form_template_name = 'custom'
                     AND p.project_company_has_user_garbage_collection = 0
                     AND p.project_company_has_user_project_id = '29')
                AND (LCASE(c.company_country) LIKE '%ge%'
                     OR LCASE(c.company_country) LIKE '%abcde%')
                AND f.form_question_has_answer_form_id = '174'
              GROUP BY f.form_question_has_answer_id) dctrn_count_query;

    With explain plans (again the same on dev and production):

        id | select_type | table | type   | possible_keys | key | key_len | ref | rows | Extra
        1  | PRIMARY     | NULL  | NULL   | NULL | NULL | NULL | NULL | NULL | Select tables optimized away
        2  | DERIVED     | p2    | const  | PRIMARY | PRIMARY | 4 | | 1 | Using index
        2  | DERIVED     | f     | ref    | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | | 797 | Using where
        2  | DERIVED     | p     | ref    | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id,project_company_has_user_garbage_collection | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where
        2  | DERIVED     | f2    | ref    | form_project_id | form_project_id | 4 | | 15 | Using where
        2  | DERIVED     | c     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where
        2  | DERIVED     | u     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_user_id | 1 | Using where; Using index

    On the production server, the information I have is as follows. Upon execution:

        +-------------+
        | num_results |
        +-------------+
        |           3 |
        +-------------+
        1 row in set (2 min 14.28 sec)

    Show profile:

        Status                         | Duration
        starting                       | 0.000016
        checking query cache for query | 0.000057
        Opening tables                 | 0.004388
        System lock                    | 0.000003
        Table lock                     | 0.000036
        init                           | 0.000030
        optimizing                     | 0.000016
        statistics                     | 0.000111
        preparing                      | 0.000022
        executing                      | 0.000004
        Sorting result                 | 0.000002
        Sending data                   | 136.213836
        end                            | 0.000007
        query end                      | 0.000002
        freeing items                  | 0.004273
        storing result in query cache  | 0.000010
        logging slow query             | 0.000001
        logging slow query             | 0.000002
        cleaning up                    | 0.000002

    On development the results are as follows:

        +-------------+
        | num_results |
        +-------------+
        |           3 |
        +-------------+
        1 row in set (0.08 sec)

    Again, the profile for this query:

        Status                         | Duration
        starting                       | 0.000022
        checking query cache for query | 0.000148
        Opening tables                 | 0.000025
        System lock                    | 0.000008
        Table lock                     | 0.000101
        optimizing                     | 0.000035
        statistics                     | 0.001019
        preparing                      | 0.000047
        executing                      | 0.000008
        Sorting result                 | 0.000005
        Sending data                   | 0.086565
        init                           | 0.000015
        optimizing                     | 0.000006
        executing                      | 0.000020
        end                            | 0.000004
        query end                      | 0.000004
        freeing items                  | 0.000028
        storing result in query cache  | 0.000005
        removing tmp table             | 0.000008
        closing tables                 | 0.000008
        logging slow query             | 0.000002
        cleaning up                    | 0.000005

    If I remove the user and/or project inner joins, the query time drops to 30s. Last bit of information I have: the MySQL server and Apache are on the same box, and there is only one box for production. Production output from top, before and after running the query:

        top - 15:43:25 up 78 days, 12:11,  4 users,  load average: 1.42, 0.99, 0.78
        Tasks: 162 total,   2 running, 160 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.1%us, 50.4%sy,  0.0%ni, 49.5%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   4037868k total,  3772580k used,   265288k free,   243704k buffers
        Swap:  3905528k total,   265384k used,  3640144k free,  1207944k cached

        top - 15:44:31 up 78 days, 12:13,  4 users,  load average: 1.94, 1.23, 0.87
        Tasks: 160 total,   2 running, 157 sleeping,   0 stopped,   1 zombie
        Cpu(s):  0.2%us, 50.6%sy,  0.0%ni, 49.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   4037868k total,  3834300k used,   203568k free,   243736k buffers
        Swap:  3905528k total,   265384k used,  3640144k free,  1207804k cached

    But this isn't a good representation of production's normal status, so here is a grab from today, outside of executing the queries:

        top - 11:04:58 up 79 days,  7:33,  4 users,  load average: 0.39, 0.58, 0.76
        Tasks: 156 total,   1 running, 155 sleeping,   0 stopped,   0 zombie
        Cpu(s):  3.3%us,  2.8%sy,  0.0%ni, 93.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   4037868k total,  3676136k used,   361732k free,   271480k buffers
        Swap:  3905528k total,   268736k used,  3636792k free,  1063432k cached

    Development (this one doesn't change during or after):

        top - 15:47:07 up 110 days, 22:11,  7 users,  load average: 0.17, 0.07, 0.06
        Tasks: 210 total,   2 running, 208 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.1%us,  0.2%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   4111972k total,  1821100k used,  2290872k free,   238860k buffers
        Swap:  4183036k total,    66472k used,  4116564k free,   921072k cached
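    For what it's worth, both tuner reports flag the same pair of numbers: an 8.0M InnoDB buffer pool holding roughly 0.5-1GB of InnoDB data, and a table cache hit rate of 0-1%. In addition, leading-wildcard predicates like LIKE '%ge%' cannot use an index, so the resulting scans are what show up as time spent in "Sending data". A hedged sketch of the my.cnf changes the tuner itself suggests (values are starting points to test, not drop-in settings):

        [mysqld]
        # InnoDB data should fit in the buffer pool where possible (tuner: >= 491M/967M)
        innodb_buffer_pool_size = 1G
        # MySQL 5.1 name; renamed table_open_cache in 5.5+ (tuner: > 64)
        table_cache = 512
        # tuner: > 16M, to cut the daily query cache prunes
        query_cache_size = 32M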


  • Del/Erase Commands

    - by Robert A Palmer
    I'm currently trying to use the del or erase command in an RSM Telnet session to delete temp files on users' computers. The problem I'm running into is that the command runs but won't delete any of the files located in the temp folder. The command I'm using:

        erase c:\users\[username]\appdata\local\temp

    I have used the command with /p to prompt me, but some of these temp folders have thousands of files in them, and sitting there pressing Y and then Enter endlessly is not going to work, because I have around 90 computers to clean temp files on. Is there something wrong with the command, or is there a simpler command to use to delete the temp files on the computer? Thanks
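    A sketch of the usual non-interactive form: /q suppresses the prompt, /s recurses into subdirectories, and the wildcard is needed because pointing erase at a bare directory only clears that directory's top level. The for /d loop then removes the emptied subfolders ([username] stays a placeholder, as in the question; double the % signs if this goes in a .bat file):

        del /q /s "c:\users\[username]\appdata\local\temp\*.*"
        for /d %d in ("c:\users\[username]\appdata\local\temp\*") do rd /s /q "%d"

    Files currently locked by running programs will still be skipped with an access-denied message, which may be why an interactive attempt looks like it "won't delete" anything.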


  • Max. Temp. on Intel Burn Test for Stock Dell Precision T3500

    - by HK1
    I'm troubleshooting an issue on a Dell Precision T3500. As part of my troubleshooting I've decided to run a stress test using the Intel Burn Test software. This machine is a stock configuration with 12GB of RAM and a Xeon W3670 processor (nothing overclocked). When I run IBT in standard mode, SpeedFan reports a processor temperature in excess of 80C. I've seen numbers as high as 90C, but even at that temperature the machine does not become unstable or crash. Still, it seems way too high. This processor has a TCase of 67.9C according to Intel's website, so I'm guessing I'm in the danger zone any time I go over that temperature. I've checked the cooling system and everything looks fine. I even took out the heatsink and reinstalled it with new thermal compound, which did not appear to make the problem better or worse. Is there a discrepancy somewhere here in the way temperatures are measured or displayed? I've also tried HWMonitor from CPUID, and it reports the same temperatures. Should I just let the standard test run and disregard the temperature outputs?


  • Quaternion Camera

    - by Alex_Hyzer_Kenoyer
    Can someone help me figure out how to use a Quaternion with the PerspectiveCamera in libGDX, or in general? I am trying to rotate my camera around a sphere that is being drawn at (0,0,0). I am not sure how to go about setting up the quaternion correctly, manipulating it, and then applying it to the camera. Edit: here is what I have tried to do so far.

        // This is how I set it up
        Quaternion orientation = new Quaternion();
        orientation.setFromAxis(Vector3.Y, 45);

        // This is how I am trying to update the rotations
        public void rotateX(float amount) {
            Quaternion temp = new Quaternion();
            temp.set(Vector3.X, amount);
            orientation.mul(temp);
        }

        public void rotateY(float amount) {
            Quaternion temp = new Quaternion();
            temp.set(Vector3.Y, amount);
            orientation.mul(temp);
        }

        public void updateCamera() {
            // This is where I am unsure how to apply the rotations to the camera
            // I think I should update the view and projection matrices?
            camera.view.mul(orientation);
            ...
        }
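    One common approach for an orbit camera is to leave camera.view alone (update() rebuilds it) and instead use the quaternion to rotate the camera's position offset around the target. A sketch, assuming libGDX's Quaternion.transform and a radius field for the distance to the sphere:

        public void updateCamera() {
            Vector3 pos = new Vector3(0f, 0f, radius); // radius: assumed field, distance from origin
            orientation.transform(pos);                // rotate the offset by the current orientation
            camera.position.set(pos);
            camera.up.set(Vector3.Y);
            camera.lookAt(0f, 0f, 0f);                 // aim at the sphere's centre
            camera.update();                           // recomputes the view and combined matrices
        }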


  • Which of these design patterns is superior?

    - by durron597
    I find I tend to design class structures where several subclasses have nearly identical functionality, but one piece of it is different. So I write nearly all the code in the abstract class, and then create several subclasses to do the one different thing. Does this pattern have a name? Is this the best way to handle this sort of scenario? Option 1:

        public interface TaxCalc {
            double calcTaxes(UserFinancials data);
        }

        public abstract class AbstractTaxCalc implements TaxCalc {
            // most constructors and fields are here

            public double calcTaxes(UserFinancials data) {
                // code
                double diffNumber = getNumber(data);
                // more code
            }

            abstract protected double getNumber(UserFinancials data);

            protected double initialTaxes(double grossIncome) {
                // code
                return initialNumber;
            }
        }

        public class SimpleTaxCalc extends AbstractTaxCalc {
            protected double getNumber(UserFinancials data) {
                double temp = initialTaxes(data.getGrossIncome());
                // do other stuff
                return temp;
            }
        }

        public class FancyTaxCalc extends AbstractTaxCalc {
            protected double getNumber(UserFinancials data) {
                double temp = initialTaxes(data.getGrossIncome());
                // Do fancier math
                return temp;
            }
        }

    Option 2: This version is more like the Strategy pattern, and should be able to do essentially the same sorts of tasks.

        public class TaxCalcImpl implements TaxCalc {
            private final TaxMath worker;

            public TaxCalcImpl(TaxMath worker) {
                this.worker = worker;
            }

            public double calcTaxes(UserFinancials data) {
                // code
                double analyzedDouble = initialNumber;
                double diffNumber = worker.getNumber(data, initialNumber);
                // more code
            }

            protected double initialTaxes(double grossIncome) {
                // code
                return initialNumber;
            }
        }

        public interface TaxMath {
            double getNumber(UserFinancials data, double initial);
        }

    Then I could do:

        TaxCalc dum = new TaxCalcImpl(new TaxMath() {
            @Override
            public double getNumber(UserFinancials data, double initial) {
                double temp = data.getGrossIncome();
                // do math
                return temp;
            }
        });

    And I could make specific implementations of TaxMath for things I use a lot, or I could make a stateless singleton for certain kinds of workers I use a lot. So the question I'm asking is: which of these patterns is superior, when, and why? Or, alternately, is there an even better third option?


  • One of my most frequently used commands

    - by Kevin Smith
    On a Linux or UNIX server this is one of my most frequently used commands:

        find . -name "*.htm" -exec grep -iH "alter session" {} \;

    It is an easy way to find a string you know is in a group of files, but don't know or can't remember which file it is in. For the example above, I knew that WebCenter Content sends a bunch of ALTER SESSION commands to the database when it opens a new database connection. I wanted to find where these were defined and what all the ALTER SESSION commands were. So, I ran these commands:

        cd /opt/oracle/middleware/Oracle_ECM1/ucm/idc/resources/core
        find . -name "*.htm" -exec grep -iH "alter session" {} \;

    And the results were:

        ./tables/query.htm: ALTER SESSION SET optimizer_mode = ?
        ./tables/query.htm: ALTER SESSION SET NLS_LENGTH_SEMANTICS = ?
        ./tables/query.htm: ALTER SESSION SET NLS_SORT = ?
        ./tables/query.htm: ALTER SESSION SET NLS_COMP = ?
        ./tables/query.htm: ALTER SESSION SET CURSOR_SHARING = ?
        ./tables/query.htm: ALTER SESSION SET EVENTS '30579 trace name context forever, level 2'
        ./tables/query.htm: ALTER SESSION SET NLS_DATE_FORMAT = ?
        ./tables/query.htm: alter session set events '30579 trace name context forever, level 2'

    I could then go edit the query.htm file and find the include that contained all the ALTER SESSION commands.
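    On systems with GNU grep, the same search works without find; a sketch (the -r and --include options assume GNU grep):

        grep -riH --include="*.htm" "alter session" .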


  • What the Hekaton?

    - by Tony Davis
    Hekaton, the power behind SQL Server 2014's In-Memory OLTP technology, is intended to make data operations run orders of magnitude faster on SQL Server. This works its magic partly by serving database workloads entirely from main memory, using memory-optimized table structures. It replaces the relational engine's standard locking model with an optimistic concurrency model based on time-stamped row versions. Deeper down, the Hekaton engine uses new, 'latch-free' data structures. So far, so good, but performance improvements on this scale require a compromise, and the compromise is that these aren't tables as we understand them. For the database developer, these differences are painful because they involve sacrificing some very important bits of the relational model. Most importantly, Hekaton tables don't currently support FOREIGN KEY constraints or CHECK constraints, and you can't put the checks in triggers because there aren't any DML triggers either. Constraints allow a relational designer to enforce relational integrity and data integrity. Without them, of course, 'bad data' can get into our Hekaton tables. There is no easy way of preventing it. For several classes of database and data, this is a show-stopper. One may regard all these restrictions regretfully, seeing limited opportunity to try out Hekaton with current databases, but perhaps there is also a sudden glow of recognition. Isn't this how we all originally imagined table variables were going to be, back in SQL 2005? And they have much the same restrictions. Maybe, instead of pretending that a currently-designed database can be 'Hekatonized' with a few mouse clicks, we should redesign databases for SQL 2014 to replace table variables with Hekaton tables, exploiting this technology for fast intermediate processing, and for the most part forget, for now, the idea of trying to convert our base relational tables into Hekaton tables. Few database developers would be averse to having their working tables running an order of magnitude faster, as long as it didn't compromise the integrity of the data in the base tables.


  • Layout Columns - Equal Height

    - by Kyle
    I remember first starting out using tables for layouts and learning that I should not be doing that. I am working on a new site and cannot seem to do equal-height columns without using tables. Here is an example of the attempt with div tags:

        <div class="row">
            <div class="column">column1</div>
            <div class="column">column2</div>
            <div class="column">column3</div>
            <div style="clear:both"></div>
        </div>

    What I tried was making the columns float left and setting their widths to 33%, which works fine. I use the clear:both div so that the row would be the size of the biggest column, but the columns end up different heights based on how much content they have. I have found many fixes which mostly involve CSS hacks and just making it look right, but that's not what I want. I thought of doing it in JavaScript, but then it would look different for those who disable their JavaScript. The only true way of doing it that I can think of is using tables, since the cells all have equal heights in the same row. But I know it's bad to use tables. After searching forever I came across this: http://intangiblestyle.com/lab/equal-height-columns-with-css/ What it seems to do is exactly the same as tables, since it's just setting its display exactly like tables. Would using that be just as bad as using tables? I honestly can't find anything else that I could do. Edit @Su': I have looked into "faux columns" and do not think that is what I want. I think I would be able to implement better designs for my site using the display:table method. I posted this question because I just wasn't sure if I should, since I have always heard it's bad to use tables in website layouts.
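    For reference, the linked technique boils down to a few lines of CSS on the markup above; a sketch using the question's class names:

        /* Table-style layout without table markup; cells share the row's height */
        .row    { display: table; width: 100%; table-layout: fixed; }
        .column { display: table-cell; width: 33.3%; }
        /* the clear:both spacer div is no longer needed with this approach */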


  • Specializing function templates outside the class template definition - what is the correct way of doing this?

    - by LoudNPossiblyRight
    I am attempting to specialize a function template that is a member of a template class; the two have different template parameters. The function specialization inside the class definition is never called, and the one outside the class definition does not even compile. Should I expect this to work in the first place, and if so, what do I have to change in this code to make it both compile and work correctly? Using VS2010:

        #include <iostream>
        using namespace std;

        template <typename T>
        class klass {
        public:
            template <typename U>
            void func(const U &u) {
                cout << "I AM A TEMPLATE FUNC" << endl;
            }

            // THIS NEVER GETS CALLED !!!
            template <>
            void klass<T>::func(const string &s) {
                cout << "I AM A STRING SPECIALIST" << endl;
            }
        };

        // THIS SPECIALIZATION WILL NOT COMPILE !!!
        template <typename T>
        template <>
        void klass<T>::func(const double &s) {
            cout << "I AM A DOUBLE SPECIALIST" << endl;
        }

        int main() {
            double d = 3.14159265;
            klass<int> k;
            k.func(1234567890);
            k.func("string");
            k.func(3.14159265);
            return 0;
        }
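    For context: the standard does not allow explicitly specializing a member template without also specializing the enclosing class, which is why the out-of-class version fails to compile and the in-class version only parses as a Microsoft extension. A sketch of the usual workaround - plain overloads, which overload resolution prefers over the member template on an exact match:

        #include <iostream>
        #include <string>

        template <typename T>
        class klass {
        public:
            template <typename U>
            void func(const U &u) { std::cout << "I AM A TEMPLATE FUNC" << std::endl; }

            // Non-template overloads stand in for the specializations.
            void func(const std::string &s) { std::cout << "I AM A STRING SPECIALIST" << std::endl; }
            void func(const double &d) { std::cout << "I AM A DOUBLE SPECIALIST" << std::endl; }
        };

        int main() {
            klass<int> k;
            k.func(1234567890);            // template
            k.func(std::string("string")); // string overload (a bare string literal would
                                           // deduce const char[7] and still pick the template)
            k.func(3.14159265);            // double overload
            return 0;
        }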


  • Alright to truncate database tables when also using Hibernate?

    - by Marcus
    Is it OK to truncate tables while at the same time using Hibernate to insert data? We parse a big XML file with many relationships into Hibernate POJOs and persist them to the DB. We are now planning on purging existing data at certain points in time by truncating the tables. Is this OK? It seems to work fine. We don't use Hibernate's second-level cache. One thing I did notice, which is fine, is that when inserting, we generate primary keys using Hibernate's @GeneratedValue, where Hibernate just uses a key value one greater than the highest value in the table - and even though we are truncating the tables, Hibernate remembers the prior value and uses prior value + 1, as opposed to starting over at 1. This is fine, just unexpected. Note that the reason we truncate, as opposed to calling delete() on the Hibernate POJOs, is speed. We have gazillions of rows of data, and truncate is just so much faster.


  • Database design - one link table or multiple link tables?

    - by David
    Hi there, I'm working on a front end for a database where each table essentially has a many-to-many relationship with all other tables. I'm not a DB admin - I've just taken a few basic DB courses. The typical solution in this case, as I understand it, would be multiple link tables, one joining each pair of 'real' tables. Here's what I'm proposing instead: one link table that has foreign key dependencies on the PKs of all the other tables. Is there any reason this could turn out badly in terms of scalability, flexibility, etc. down the road?
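    To make the trade-off concrete, here is a sketch of the proposed single link table (names are hypothetical). Every FK column except the two participating in a given link has to be NULL, each new 'real' table forces an ALTER of the link table, and the invariant "exactly two columns are non-NULL per row" needs its own enforcement - which is where this design usually starts to hurt:

        CREATE TABLE link (
            a_id INT NULL REFERENCES table_a (id),
            b_id INT NULL REFERENCES table_b (id),
            c_id INT NULL REFERENCES table_c (id)
            -- ...one nullable FK column per additional table
        );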


  • Storing n-grams in a database in fewer than n tables

    - by kurige
    If I were writing a piece of software that attempted to predict what word a user was going to type next, using the two previous words the user had typed, I would create two tables. Like so:

        == 1-gram table ==
        Token | NextWord | Frequency
        ------+----------+-----------
        "I"   | "like"   | 15
        "I"   | "hate"   | 20

        == 2-gram table ==
        Token    | NextWord   | Frequency
        ---------+------------+-----------
        "I like" | "apples"   | 8
        "I like" | "tomatoes" | 12
        "I hate" | "tomatoes" | 20
        "I hate" | "apples"   | 2

    Following this example implementation, the user types "I" and the software, using the above database, predicts that the next word the user is going to type is "hate". If the user does type "hate", the software will then predict that the next word is "tomatoes". However, this implementation would require a table for each additional n-gram that I choose to take into account. If I decided I wanted to take the 5 or 6 preceding words into account when predicting the next word, I would need 5-6 tables, and an exponential increase in space per n-gram. What would be the best way to represent this in only one or two tables, with no upper limit on the number of n-grams I can support?
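    Since the token column already encodes the context length, a single table can hold every n-gram order at once; a sketch (the schema is an assumption, not a canonical design):

        -- One table for all orders; 'token' holds the whole context string
        CREATE TABLE ngram (
            token     VARCHAR(255) NOT NULL,  -- 'I', 'I like', 'I really like', ...
            next_word VARCHAR(64)  NOT NULL,
            frequency INT          NOT NULL,
            PRIMARY KEY (token, next_word)
        );

        -- The lookup is identical for any order n:
        SELECT next_word
          FROM ngram
         WHERE token = 'I like'
         ORDER BY frequency DESC
         LIMIT 1;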


  • How do I create a SELECT SQL statement that produces a "merged" dataset from two tables (Oracle DBMS)?

    - by Roman Kagan
    Here's my original question: merging two data sets. Unfortunately I omitted some intricacies that I'd like to elaborate on here. I have two tables, events_source_1 and events_source_2, and I have to produce a result set from those tables (which I'd then insert into a third table, but that's irrelevant). events_source_1 contains historic event data, and I have to get the most recent event; for that I'm doing the following:

        SELECT event_type, b, c, MAX(event_date), NULL next_event_date
          FROM events_source_1
         GROUP BY event_type, b, c;

    events_source_2 contains the future event data, and I have to do the following:

        SELECT event_type, b, c, NULL event_date, next_event_date
          FROM events_source_2
         WHERE b > SYSDATE;

    How do I add an outer-join statement to fill the void - i.e. when the same (event_type, b, c) is found in events_source_2, next_event_date should be filled with the first date found? GREATLY APPRECIATE YOUR HELP IN ADVANCE.
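    A sketch of one way to stitch the two aggregates together with a FULL OUTER JOIN, taking MIN() as "the first date found" (column roles are assumptions based on the queries above):

        SELECT COALESCE(h.event_type, f.event_type) AS event_type,
               COALESCE(h.b, f.b)                   AS b,
               COALESCE(h.c, f.c)                   AS c,
               h.event_date,
               f.next_event_date
          FROM (SELECT event_type, b, c, MAX(event_date) AS event_date
                  FROM events_source_1
                 GROUP BY event_type, b, c) h
          FULL OUTER JOIN
               (SELECT event_type, b, c, MIN(next_event_date) AS next_event_date
                  FROM events_source_2
                 WHERE b > SYSDATE
                 GROUP BY event_type, b, c) f
            ON (h.event_type = f.event_type AND h.b = f.b AND h.c = f.c);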


  • Can you hide tables from a MySQL user in phpMyAdmin?

    - by AK
    I have a MySQL user added to a database, and I would like to prevent that user from viewing certain tables. I can limit their privileges through MySQL by preventing them from running statements like DROP or ALTER. But is it possible to prevent them from viewing certain tables in phpMyAdmin? If there isn't a MySQL privilege that controls this (I wouldn't imagine there would be), is there a configuration in phpMyAdmin that allows it? I understand one workaround is to move the tables to a new database that the user isn't added to. This isn't an option for my application.
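    There is a MySQL-level answer after all: privileges can be granted per table, and SHOW TABLES (which phpMyAdmin uses to build its table list) only shows tables the account holds some privilege on. A sketch with hypothetical names:

        -- Remove the database-wide grant, then grant table by table
        REVOKE ALL PRIVILEGES ON mydb.* FROM 'appuser'@'localhost';
        GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.visible_table
           TO 'appuser'@'localhost';
        -- Tables with no grant no longer appear in SHOW TABLES for appuser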


  • Read an XML file containing multiple tables and import only one table into a DataSet at a time

    - by Harikrishna
    I want to store data in an XML file and retrieve it from there. I have defined more than one table in that XML file. To read the tables I am using a DataSet:

        DataSet ds = new DataSet();
        ds.ReadXml(xmlfilepath);

    When we read the XML file, this DataSet contains all the tables that are in the file. But I want only one specified table at a time in the DataSet, selected by some condition. For example, there are PersonalInfo, OtherInfo and PropertiesInfo tables in the XML file, but I want only the OtherInfo table in the DataSet. What should I do?
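    If reading the whole file first is acceptable, the simplest route is to load everything and keep only the DataTable you need; a sketch (copying into a fresh DataSet is optional):

        // Read all tables, then isolate the one of interest
        DataSet ds = new DataSet();
        ds.ReadXml(xmlfilepath);

        DataTable other = ds.Tables["OtherInfo"];   // null if no such table exists

        // Optional: a DataSet containing only that table
        DataSet onlyOther = new DataSet();
        onlyOther.Tables.Add(other.Copy());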


  • How do I list all tables in all databases in SQL Server in a single result set?

    - by msorens
    I am looking for T-SQL code to list all tables in all databases in SQL Server (at least in SS2005 and SS2008; it would be nice if it also applied to SS2000). The catch, however, is that I want a single result set. This precludes the otherwise excellent answer from Pinal Dave:

        sp_msforeachdb 'select "?" AS db, * from [?].sys.tables'

    The above generates one result set per database, which is fine if you are in an IDE like SSMS that can display multiple result sets. However, I want a single result set because I want a query that is essentially a "find" tool: if I add a clause like WHERE tablename LIKE '%accounts', it would tell me where to find my BillAccounts, ClientAccounts, and VendorAccounts tables regardless of which database they reside in.
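    One sketch that keeps the sp_msforeachdb approach but funnels every database's rows into a single result set via a temp table (sp_msforeachdb is undocumented, and sys.tables limits this to SS2005+):

        CREATE TABLE #alltables (dbname SYSNAME, tablename SYSNAME);

        EXEC sp_msforeachdb
            'INSERT INTO #alltables SELECT ''?'', name FROM [?].sys.tables';

        SELECT dbname, tablename
          FROM #alltables
         WHERE tablename LIKE '%accounts'
         ORDER BY dbname, tablename;

        DROP TABLE #alltables;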

