Search Results

Search found 34207 results on 1369 pages for 'query output'.


  • SQL SERVER – Weekly Series – Memory Lane – #048

    - by Pinal Dave
Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

2007

Order of Result Set of SELECT Statement on Clustered Indexed Table When ORDER BY is Not Used
The above theory is true in most of the cases. However, SQL Server does not use that logic when returning the resultset. SQL Server always returns the resultset which it can return fastest. In most of the cases, the resultset which can be returned fastest is the resultset which is returned using the clustered index.

Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT
One of the Jr. Developers asked me this question (What will be the effect of a TRANSACTION on a local variable – after ROLLBACK and after COMMIT?) while I was rushing to an important meeting. I was getting late, so I asked him to talk with his Application Tech Lead. When I came back from the meeting, both of them were looking for me. They said they were confused. I quickly wrote down the following example for them.

2008

SQL SERVER – Guidelines and Coding Standards Complete List Download
Coding standards and guidelines are very important for any developer on the path of a successful career. A coding standard is a set of guidelines, rules and regulations on how to write code. Coding standards should be flexible enough to take care of situations where they would otherwise prevent best practices for coding. They are basically the guidelines that one should follow for better understanding.

Get Answer in Float When Dividing of Two Integer
Many times we have requirements for calculations amongst different fields in tables. One of the software developers here was trying to divide fields having integer values, which gave incorrect results in integer where accurate results including decimals were expected.

Puzzle – Computed Columns Datatype Explanation
SQL Server automatically does a cast to the data type having the highest precedence. So the result of INT and INT will be INT, but INT and FLOAT will be FLOAT because FLOAT has a higher precedence. If you want a different data type, you need to do an EXPLICIT cast.

Renaming SP is Not Good Idea – Renaming Stored Procedure Does Not Update sys.procedures
I have written many articles about renaming tables, columns and procedures (SQL SERVER – How to Rename a Column Name or Table Name); here I found something interesting about renaming stored procedures and felt like sharing it with you all. The interesting fact is that when we rename a stored procedure using the SP_Rename command, the stored procedure is successfully renamed. But when we try to test the procedure using sp_helptext, the procedure still has the old name instead of the new one.

2009

Insert Values of Stored Procedure in Table – Use Table Valued Function
It is clear from the result set that the version where I have converted the stored procedure logic into a table valued function is much better in terms of logic, as it saves a large number of operations. However, this option should be used carefully. The performance of a stored procedure is "usually" better than that of functions.

Interesting Observation – Index on Index View Used in Similar Query
Recently, I was working on an optimization project for one of the largest organizations. While working on one of the queries, we came across a very interesting observation. We found that there was a query on the base table and when the query was run, it used an index which did not exist on the base table. On careful examination, we found that the query was using an index that was on another view. This was very interesting, as I had personally never experienced a scenario like this. In simple words, "a query on the base table can use the index created on the indexed view of the same base table."

Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries
Working with SQL Server has never seemed monotonous – no matter how long one has worked with it. Quite often, I come across some excellent comments that I feel like acknowledging as blog posts. Recently, I wrote an article on SQL SERVER – Execution Plan and Results of Aggregate Concatenation Queries Depend Upon Expression Location, which was well received in the community.

2010

I encourage all of you to go through the complete series and write your own on the subject. If you write an article and send it to me, I will publish it on this blog with due credit to you. If you write on your own blog, I will update this blog post to point to your blog post.

SQL SERVER – ORDER BY Does Not Work – Limitation of the View 1
SQL SERVER – Adding Column is Expensive by Joining Table Outside View – Limitation of the View 2
SQL SERVER – Index Created on View not Used Often – Limitation of the View 3
SQL SERVER – SELECT * and Adding Column Issue in View – Limitation of the View 4
SQL SERVER – COUNT(*) Not Allowed but COUNT_BIG(*) Allowed – Limitation of the View 5
SQL SERVER – UNION Not Allowed but OR Allowed in Index View – Limitation of the View 6
SQL SERVER – Cross Database Queries Not Allowed in Indexed View – Limitation of the View 7
SQL SERVER – Outer Join Not Allowed in Indexed Views – Limitation of the View 8
SQL SERVER – SELF JOIN Not Allowed in Indexed View – Limitation of the View 9
SQL SERVER – Keywords View Definition Must Not Contain for Indexed View – Limitation of the View 10
SQL SERVER – View Over the View Not Possible with Index View – Limitations of the View 11

2011

Startup Parameters Easy to Configure
If you are a regular reader of this blog, you must be aware that I have written about SQL Server Denali recently. Here is the quickest way to reach the screen where we can change the startup parameters: go to SQL Server Configuration Manager >> SQL Server Services >> Right Click on the Server >> Properties >> Startup Parameters.

2012

Validating Unique Columnname Across Whole Database
I sometimes come across very strange requirements, and often I do not receive a proper explanation of them. Here is one of those examples: "Our business requirement is that when we add a new column, we want it unique across the current database." Read the solution to this strange request in this blog post.

Excel Losing Decimal Values When Value Pasted from SSMS ResultSet
It is very common that when users are copying a resultset to Excel, the floating point or decimal values are lost. The solution is very simple and requires a small adjustment in Excel. By default Excel is very smart, and when it detects that the value being pasted is numeric, it changes the column format to accommodate that.

Basic Calculation and PEMDAS Order of Operation
Read this interesting blog post for a fantastic conversation about the subject.

Copy Column Headers from Resultset – SQL in Sixty Seconds #027 – Video
http://www.youtube.com/watch?v=x_-3tLqTRv0

Delete From Multiple Table – Update Multiple Table in Single Statement
There are two questions which I get every single day, multiple times. In my Gmail, I have created a standard canned reply for them. Let us see the questions here: I want to delete from multiple tables in a single statement – how will I do it? I want to update multiple tables in a single statement – how will I do it? Read the answer in the blog post.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
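
    As a quick illustration of the integer-division item above (my sketch, not from the original post):

    SELECT 19 / 5;                 -- 3: INT / INT stays INT, decimals are truncated
    SELECT 19 / CAST(5 AS FLOAT);  -- 3.8: FLOAT has higher precedence, so the result is FLOAT
    SELECT 19 * 1.0 / 5;           -- 3.8: multiplying by 1.0 is a common shortcut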

    Read the article

  • SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints

    - by pinaldave
I earlier wrote about the SQL BI Quiz over here and here. The details of the quiz are here: Working with huge data is very common when it is about Data Warehousing. It is necessary to create Cubes on the data to make it meaningful and consumable. There are cases when retrieving the data from a cube takes lots of time. Let us assume that your cube is returning data very quickly. Suddenly one day it is returning the data very slowly. What are the three things you will do to diagnose this? After diagnosing, what will you do to resolve the performance issue? Participate in my question over here. I requested BI Expert Jason Thomas to help with a few hints for blog readers. He is one of the leading SSAS experts and writes about complicated subjects in simple words. If queries were executing properly before but now take a long time to return the data, it means that there has been a change in the environment in which it is running. Some possible changes are listed below:

1) Data factors: Compare the data size then and now. An increase in data can result in different execution times. Poorly written queries as well as poor design will not start showing issues till the data grows. How to find it out? (Ans: SQL Server Profiler and Perfmon counters can be used for identifying the issues and performance tuning the MDX queries)

2) Internal factors: Is some slow MDX query, or are multiple MDX queries, running at the same time which were not running when you had tested it before? Is there any locking happening due to proactive caching or processing operations? Are the measure group caches being cleared by processing operations? (Ans: Again, Profiler and Perfmon counters will help in finding it out. Load testing can be done using AS Performance Workbench (http://asperfwb.codeplex.com/) by running multiple queries at once)

3) External factors: Is some other application competing for the same resources?

HINT: Read "Identifying and Resolving MDX Query Performance Bottlenecks in SQL Server 2005 Analysis Services" (http://sqlcat.com/whitepapers/archive/2007/12/16/identifying-and-resolving-mdx-query-performance-bottlenecks-in-sql-server-2005-analysis-services.aspx)

Well, these are great tips. Now win big prizes by participating in my question over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQLbits London 2012 - Demos

    - by Adam Machanic
Thanks to everyone who attended my sessions last Friday and Saturday at SQLbits! It was great to meet many new people, not to mention spending some time exploring one of my favorite cities, London. Attached are the demos for each of the two talks I delivered: Query Tuning Mastery: The Art and Science of Manhandling Parallelism As a database developer, your job boils down to one word: performance. And in today's multi-core-driven world, query performance is very much determined by how well you're...(read more)

    Read the article

  • Is there an equivalent of jsqlparser but for SPARQL instead of SQL?

    - by Programmer
I'm trying to use Java to construct a SPARQL query and then send it off to a remote database. However, I'm new to both Java and SPARQL, so I was wondering if anyone could explain how to do this, rather than just posting a link. I heard there is a tool called jsqlparser for a similar task, except that it does SQL-to-SPARQL conversion using Java. Neither conversion nor a parser will be necessary, though - just a method for constructing a query and querying the database provided by the user.
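
    For what it's worth, a common way to do this in Java is the Apache Jena ARQ API (my suggestion - the asker named no library); a minimal sketch, assuming a Jena 3.x dependency and a user-supplied endpoint URL:

    import org.apache.jena.query.*;

    public class SparqlClient {
        public static void main(String[] args) {
            // Build the query from a string; QueryFactory validates the syntax.
            String queryString =
                "PREFIX foaf: <http://xmlns.com/foaf/0.1/> " +
                "SELECT ?name WHERE { ?person foaf:name ?name } LIMIT 10";
            Query query = QueryFactory.create(queryString);

            // Run it against a remote SPARQL endpoint (the URL is a placeholder).
            try (QueryExecution qexec =
                     QueryExecutionFactory.sparqlService("http://example.org/sparql", query)) {
                ResultSet results = qexec.execSelect();
                while (results.hasNext()) {
                    QuerySolution soln = results.nextSolution();
                    System.out.println(soln.get("name"));
                }
            }
        }
    }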

    Read the article

  • Why isn't my query using any indices when I use a subquery?

    - by sfussenegger
I have the following tables (removed columns that aren't used for my examples):

CREATE TABLE `person` (
  `id` int(11) NOT NULL,
  `name` varchar(1024) NOT NULL,
  `sortname` varchar(1024) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `sortname` (`sortname`(255)),
  KEY `name` (`name`(255))
);

CREATE TABLE `personalias` (
  `id` int(11) NOT NULL,
  `person` int(11) NOT NULL,
  `name` varchar(1024) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `person` (`person`),
  KEY `name` (`name`(255))
)

Currently, I'm using this query, which works just fine:

select p.* from person p where name = 'John Mayer' or sortname = 'John Mayer';

mysql> explain select p.* from person p where name = 'John Mayer' or sortname = 'John Mayer';
+----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+
| 1 | SIMPLE | p | index_merge | name,sortname | name,sortname | 767,767 | NULL | 3 | Using sort_union(name,sortname); Using where |
+----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+
1 row in set (0.00 sec)

Now I'd like to extend this query to also consider aliases. First, I've tried using a join:

select p.* from person p join personalias a where p.name = 'John Mayer' or p.sortname = 'John Mayer' or a.name = 'John Mayer';

mysql> explain select p.* from person p join personalias a on p.id = a.person where p.name = 'John Mayer' or p.sortname = 'John Mayer' or a.name = 'John Mayer';
+----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+
| 1 | SIMPLE | a | ALL | ref,name | NULL | NULL | NULL | 87401 | Using temporary |
| 1 | SIMPLE | p | eq_ref | PRIMARY,name,sortname | PRIMARY | 4 | musicbrainz.a.ref | 1 | Using where |
+----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+
2 rows in set (0.00 sec)

This looks bad: no index, 87401 rows, Using temporary. Using temporary only appears when I use distinct, but as an alias might be the same as the name, I can't really get rid of it.

Next, I've tried to replace the join with a subquery:

select p.* from person p where p.name = 'John Mayer' or p.sortname = 'John Mayer' or p.id in (select person from personalias a where a.name = 'John Mayer');

mysql> explain select p.* from person p where p.name = 'John Mayer' or p.sortname = 'John Mayer' or p.id in (select id from personalias a where a.name = 'John Mayer');
+----+--------------------+-------+----------------+------------------+--------+---------+------+--------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-------+----------------+------------------+--------+---------+------+--------+-------------+
| 1 | PRIMARY | p | ALL | name,sortname | NULL | NULL | NULL | 540309 | Using where |
| 2 | DEPENDENT SUBQUERY | a | index_subquery | person,name | person | 4 | func | 1 | Using where |
+----+--------------------+-------+----------------+------------------+--------+---------+------+--------+-------------+
2 rows in set (0.00 sec)

Again, this looks pretty bad: no index, 540309 rows. Interestingly, both queries (select p.* from person ... or p.id in (4711,12345) and select id from personalias a where a.name = 'John Mayer') work extremely well. Why doesn't MySQL use any indices for either of my queries? What else could I do? Currently, it looks best to fetch person.ids for aliases and add them statically as an in(...) to the second query. There certainly has to be another way to do this with a single query. I'm currently out of ideas though. Could I somehow force MySQL into using another (better) query plan?
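
    A rewrite often suggested for this pattern (my sketch, not from the original question) is to split the OR across a UNION so that each branch can use its own index:

    select p.* from person p where p.name = 'John Mayer'
    union
    select p.* from person p where p.sortname = 'John Mayer'
    union
    select p.*
      from person p
      join personalias a on a.person = p.id
     where a.name = 'John Mayer';

    Each branch is independently indexable (the name, sortname and personalias.name keys respectively), and UNION removes duplicates, which also covers the case where an alias equals the name.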

    Read the article

  • How to add Spatial Solr to a Solrnet query

    - by Flo
Hi, I am running Solr on my Windows machine using Jetty. I have downloaded the Spatial Solr Plugin, which I finally managed to get up and running. I am also using SolrNet to query against Solr from my asp.net mvc project. Now, adding data into my index seems to work fine and the SpatialTierUpdateProcessorFactory does work as well. The problem is: how do I add the spatial query to my normal query using the SolrNet library? I have tried adding it using the "ExtraParams" parameter but that didn't work very well. Here is an example of me trying to combine the spatial query with a date range query. The date range query works fine without the spatial query attached to it:

new SolrQuery("{!spatial lat=51.5224 long=-2.6257 radius=10000 unit=km calc=arc threadCount=2}") && new SolrQuery(MyCustomQuery.Query) && new SolrQuery(DateRangeQuery);

which results in the following query against Solr:

(({!spatial lat=51.5224 long=-2.6257 radius=100 unit=km calc=arc threadCount=2} AND *:*) AND _date:[2010-05-07T13:13:37Z TO 2011-05-07T13:13:37Z])

And the error message I get back is:

The remote server returned an error: (400) Bad Request.
SEVERE: org.apache.solr.common.SolrException: org.apache.lucene.queryParser.ParseException: Cannot parse '(({!spatial lat=51.5224 lng=-2.6257 radius=10000 unit=km calc=arc threadCount=2} AND *:*) AND _date:[2010-05-07T13:09:49Z TO 2011-05-07T13:09:49Z])': Encountered " <RANGEEX_GOOP> "lng=-2.6257 "" at line 1, column 24. Was expecting: "}" ...

Now, the thing is if I use the Solr Web Admin page and execute the following query against it, everything works fine:

{!spatial lat=50.8371 long=4.35536 radius=100 calc=arc unit=km threadcount=2}text:London

What is the best/correct way to call the spatial function using SolrNet? Is the best way to somehow add that bit of the query manually to the query string, and if so, how? Any help is much appreciated!
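
    Two things stand out here (my observations, not from the original post). First, the query that fails contains lng=-2.6257 while the working admin query uses long=, and the parser error points exactly at "lng", so whatever builds the final query string appears to be rewriting the parameter name. Second, rather than AND-ing a SolrQuery that embeds the {! ... } prefix (which gets parenthesised away from the start of the query), SolrNet's LocalParams type is meant to prepend local params to a query. A sketch, assuming the spatial plugin accepts {!type=spatial ...} as the equivalent of {!spatial ...}:

    var spatial = new LocalParams {
        {"type", "spatial"},     // {!spatial ...} is shorthand for {!type=spatial ...}
        {"lat", "51.5224"},
        {"long", "-2.6257"},     // 'long', not 'lng', matching the working admin query
        {"radius", "10000"},
        {"unit", "km"},
        {"calc", "arc"},
        {"threadCount", "2"}
    };
    // LocalParams + query renders as "{!...}query" at the start of the query string.
    var results = solr.Query(spatial + (new SolrQuery(MyCustomQuery.Query) && new SolrQuery(DateRangeQuery)));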

    Read the article

  • function using cl-who:with-html-output ignoring parameter

    - by shanked
I'm not sure whether this is an issue with my use of cl-who (specifically with-html-output-to-string and with-html-output) or an issue with my understanding of Common Lisp (as this is my first project using Lisp). I created a function to create form fields:

(defun form-field (type name label)
  (cl-who:with-html-output (*standard-output* nil)
    (:div :class "field"
          (:label :for name label)
          (:input :type type :name name))))

When using this function, i.e. (form-field "text" "username" "Username"), the parameter label seems to be ignored... the HTML output is:

<div class="field"><label for="username"></label> <input type="text" name="username"/></div>

instead of the expected output:

<div class="field"><label for="username">Username</label> <input type="text" name="username"/></div>

If I modify the function and add a print statement:

(defun form-field (type name label)
  (cl-who:with-html-output (*standard-output* nil)
    (print label)
    (:div :class "field"
          (:label :for name label)
          (:input :type type :name name))))

The "Username" string is successfully output (but still ignored in the HTML)... any ideas what might cause this? Keep in mind, I'm calling this function within a cl-who:with-html-output-to-string for use with hunchentoot.
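
    This matches documented cl-who behavior rather than a bug: inside with-html-output, attribute values (like :for name) are printed automatically, but arbitrary Lisp forms in an element's body are evaluated for side effects only and their return values are discarded - which is also why the (print label) experiment produces output without changing the HTML. Wrapping the variable in cl-who:str (or esc, for HTML escaping) tells cl-who to emit its value; a sketch of the corrected function:

    (defun form-field (type name label)
      (cl-who:with-html-output (*standard-output* nil)
        (:div :class "field"
              ;; STR emits the runtime value of LABEL into the element body;
              ;; a bare variable is evaluated but its value is discarded.
              (:label :for name (cl-who:str label))
              (:input :type type :name name))))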

    Read the article

  • Is there a way to make PHP progressively output as the script executes?

    - by Iain Fraser
So I'm writing a disposable script for my own personal single use and I want to be able to see how the process is going. Basically I'm processing a couple of thousand media releases and sending them to our new CMS. So I don't hammer the CMS, I'm making the script sleep for a couple of seconds after every 5 requests. I would like - as the script is executing - to be able to see my echoes telling me the script is going to sleep or that the last transaction with the webservice was successful. Is this possible in PHP? Thanks for your help! Iain
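
    Yes - the usual approach is to turn off or flush PHP's output buffering so each echo reaches the browser immediately. A minimal sketch ($releases and send_to_cms() are hypothetical placeholders for the script's own logic; behavior also depends on the web server and any output_buffering/compression settings):

    @ini_set('zlib.output_compression', 'off');
    while (ob_get_level() > 0) { ob_end_flush(); }  // drop any active output buffers
    ob_implicit_flush(true);                        // flush automatically after each output call

    foreach ($releases as $i => $release) {
        send_to_cms($release);                      // hypothetical: one webservice transaction
        echo "Sent release $i\n";
        flush();                                    // push the echo out to the browser now
        if (($i + 1) % 5 === 0) {
            echo "Sleeping for a couple of seconds...\n";
            flush();
            sleep(2);
        }
    }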

    Read the article

  • how to get doctype tag with url using xsl:output

    - by keshav.veerapaneni
Hi, I have added the below xsl:output tag in my XSLT:

<xsl:output method="html" indent="yes" encoding="utf-8" doctype-public="-//W3C//DTD HTML 4.0 Transitional//EN"/>

As a result I get the below doctype tag in the HTML output:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">

How can I mention the URL in the doctype tag using xsl:output, so that it outputs a doctype tag that looks like the one below?

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

Best Regards, Keshav
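
    For reference, the doctype-system attribute of xsl:output supplies the system identifier (the URL) alongside doctype-public; a minimal sketch:

    <xsl:output method="html" indent="yes" encoding="utf-8"
                doctype-public="-//W3C//DTD XHTML 1.0 Transitional//EN"
                doctype-system="http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"/>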

    Read the article

  • MySQL Query That Can Pull the Data I am Seeking?

    - by Amy
On the project I am working on, I am stuck with the table structure from Hades. Two things to keep in mind: I can't change the table structure right now; I'm stuck with it for the time being. The queries are dynamically generated and not hard coded. So, while I am asking for a query that can pull this data, what I am really working toward is an algorithm that will generate the query I need. Hopefully, I can explain the problem without making your eyes glaze over and your brain implode. We have an instance table that looks (simplified) along these lines:

Instances
InstanceID   active
1            Y
2            Y
3            Y
4            N
5            Y
6            Y

Then, there are multiple data tables along these lines:

Table1
InstanceID   field1   reference_field2
1            John     5
2            Sally    NULL
3            Fred     6
4            Joe      NULL

Table2
InstanceID   field3
5            1
6            1

Table3
InstanceID   fieldID   field4
5            1         Howard
5            2         James
6            2         Betty

Please note that reference_field2 in Table1 contains a reference to another instance. Field3 in Table2 is a bit more complicated: it contains a fieldID for Table3. What I need is a query that will get me a list as follows:

InstanceID   field1   field4
1            John     Howard
2            Sally
3            Fred

The problem is, in the query I currently have, I do not get Fred because there is no entry in Table3 for fieldID 1 and InstanceID 6. So, the very best list I have been able to get thus far is:

InstanceID   field1   field4
1            John     Howard
2            Sally

In essence, if there is an entry in Table1 for field2, and there is not an entry in Table3 that has the instanceID contained in field2 and the fieldID contained in field3, I don't get the data from field1. I have looked at joins till I'm blue in the face, and I can't see a way to handle the case when Table3 has no entry.
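
    One shape that handles the missing Table3 row (a sketch against the simplified tables above, untested): chain LEFT JOINs through the reference so that a missing lookup row yields NULLs instead of dropping the Table1 row.

    SELECT i.InstanceID, t1.field1, t3.field4
    FROM Instances i
    JOIN Table1 t1 ON t1.InstanceID = i.InstanceID
    LEFT JOIN Table2 t2 ON t2.InstanceID = t1.reference_field2
    LEFT JOIN Table3 t3 ON t3.InstanceID = t1.reference_field2
                       AND t3.fieldID = t2.field3
    WHERE i.active = 'Y';

    Fred survives because the LEFT JOINs simply produce NULL for field4 when Table2 or Table3 has no matching row, and a query generator only needs to emit one LEFT JOIN pair per reference field.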

    Read the article

  • How do I get PHP variables from this MySQL query?

    - by CT
I am working on an Asset Database problem using PHP / MySQL. In this script I would like to search my assets by an asset id and have it return all related fields. First I query the database asset table and find the asset's type. Then depending on the type I run 1 of 3 queries.

<?php
//make database connect
mysql_connect("localhost", "asset_db", "asset_db") or die(mysql_error());
mysql_select_db("asset_db") or die(mysql_error());

//get type of asset
$type = mysql_query("
    SELECT asset.type From asset
    WHERE asset.id = 93120
") or die(mysql_error());

switch ($type){
case "Server":
    //do some stuff that involves a mysql query
    mysql_query("
        SELECT asset.id, asset.company, asset.location, asset.purchase_date,
               asset.purchase_order, asset.value, asset.type, asset.notes,
               server.manufacturer, server.model, server.serial_number,
               server.esc, server.user, server.prev_user, server.warranty
        FROM asset
        LEFT JOIN server ON server.id = asset.id
        WHERE asset.id = 93120
    ");
    break;
case "Laptop":
    //do some stuff that involves a mysql query
    mysql_query("
        SELECT asset.id, asset.company, asset.location, asset.purchase_date,
               asset.purchase_order, asset.value, asset.type, asset.notes,
               laptop.manufacturer, laptop.model, laptop.serial_number,
               laptop.esc, laptop.user, laptop.prev_user, laptop.warranty
        FROM asset
        LEFT JOIN laptop ON laptop.id = asset.id
        WHERE asset.id = 93120
    ");
    break;
case "Desktop":
    //do some stuff that involves a mysql query
    mysql_query("
        SELECT asset.id, asset.company, asset.location, asset.purchase_date,
               asset.purchase_order, asset.value, asset.type, asset.notes,
               desktop.manufacturer, desktop.model, desktop.serial_number,
               desktop.esc, desktop.user, desktop.prev_user, desktop.warranty
        FROM asset
        LEFT JOIN desktop ON desktop.id = asset.id
        WHERE asset.id = 93120
    ");
    break;
}
?>

So far I am able to get asset.type into $type. How would I go about getting the rest of the variables (laptop.model to $model, asset.notes to $notes and so on)? Thank you.
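
    One thing worth noting about the code above: mysql_query() returns a result resource, not a value, so $type here is a resource and the switch cases would never match "Server". Both steps need a fetch; a sketch (the variable names are my suggestion):

    // fetch the type value out of the first result
    $result = mysql_query("SELECT asset.type FROM asset WHERE asset.id = 93120") or die(mysql_error());
    $row = mysql_fetch_assoc($result);
    $type = $row['type'];

    // ...after running the type-specific query, pull named columns into variables
    $result = mysql_query($sql) or die(mysql_error());  // $sql = whichever SELECT matched $type
    if ($row = mysql_fetch_assoc($result)) {
        $model  = $row['model'];
        $notes  = $row['notes'];
        $serial = $row['serial_number'];
        // ...and so on for the remaining columns
    }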

    Read the article

  • Insert array to mysql database php

    - by ganjan
Hi. I want to add an array to my db. I have set up a function that checks if a value in the db (e.g. health and money) has changed. If the value is different from the original I add the new value to the $db array, like this: $db['money'] = $money_input + $money_db;.

function modify_user_info($conn, $money_input, $health_input){
    (...)
    if ($result = $conn->query($query)) {
        while ($user = $result->fetch_assoc()) {
            $money_db = $user["money"];
            $health_db = $user["health"];
        }
        $result->close();
        //build an array for the db with the columns to be filled in as keys
        if ($user["money"] != $money_input){
            $db['money'] = $money_input + $money_db;
            //0 - 20
            if (!preg_match("/^[[0-9]{0,20}$/i", $db['money'])){
                echo "error";
                return false;
            }
        }
        if ($user["health"] != $health_input){
            $db['health'] = $health_input + $health_db;
            //0 - 4
            if (!preg_match("/^[[0-9]{0,4}$/i", $db['health'])){
                echo "error";
                return false;
            }
            if (($db['health'] < 1) or ($db['health'] > 1000)) {
                echo "error";
                return false;
            }
        }

The keys in $db represent columns in my database. Now I want to make a function that takes the keys in the array $db and inserts them into the db. Something like this?

$query = "INSERT INTO `main_log` ( `id` , ";
foreach(range(0, x) as $num) {
    $query .= array_key.", ";
}
$query = substr($query, 0, -3);
$query .= " VALUES ('', ";
foreach(range(0, x) as $num) {
    $query .= array_value.", ";
}
$query = substr($query, 0, -3);
$query .= ")";
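
    Something like implode() over the array's keys and values keeps the query building compact; a sketch matching the mysqli-style $conn used above (values are escaped as strings, which suits the numeric checks in the function):

    $keys = array_keys($db);
    $vals = array_map(array($conn, 'real_escape_string'), array_values($db));
    $query = "INSERT INTO `main_log` (`id`, `" . implode('`, `', $keys) . "`) "
           . "VALUES ('', '" . implode("', '", $vals) . "')";
    $conn->query($query);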

    Read the article

  • Compare output of program to correct program using bash script, without using text files

    - by Doug
I've been trying to compare the output of a program to known correct output by using a bash script, without piping the output of the program to a file and then using diff on the output file and a correct output file. I've tried setting the variables to the output and the correct output, and I believe that's been successful, but I can't get the string comparison to work correctly. I may be wrong about the variable setting, so it could be that. What I've been writing:

TEST=`./convert testdata.txt < somesampledata.txt`
CORRECT="some correct output"
if [ "$TEST"!="$CORRECT" ]; then
    echo "failed"
fi
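
    The variable capture is fine; the comparison is the problem. Without spaces, [ "$TEST"!="$CORRECT" ] is parsed as a single non-empty string rather than a comparison, and a non-empty string tests as true, so the branch runs every time. A corrected sketch:

    TEST=$(./convert testdata.txt < somesampledata.txt)
    CORRECT="some correct output"
    if [ "$TEST" != "$CORRECT" ]; then
        echo "failed"
    fi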

    Read the article

  • sql statement question. Need to query 3 tables in one go!

    - by Stefan
Hey there, I have an SQL database. In this database are 3 tables I need to query. The first table has all the item info, called item, and the other two tables hold comments (userComment) and votes (userItem). I currently have a function which uses this sql query to get the latest, most popular items (in terms of both votes and comments):

$sql = "SELECT itemID, COUNT(*) AS cnt
FROM (
    SELECT `itemID` FROM `userItem`
    WHERE FROM_UNIXTIME( `time` ) >= NOW() - INTERVAL 1 DAY
    UNION ALL
    SELECT `itemID` FROM `userComment`
    WHERE FROM_UNIXTIME( `time` ) >= NOW() - INTERVAL 1 DAY AND `itemID` > 0
) q
GROUP BY `itemID`
ORDER BY cnt DESC";

I know how to change this for either votes alone or comments alone.... HOWEVER - I need the query to only return the itemIDs of the items which satisfy specific conditions in the item table, namely: WHERE categoryID = 'xx' AND typeID = 'xx'. If the sql ninja could please help me on this one? Do I have to first return the results from the above query, then for each item in the fetched array check it against the item table to see if it fits the conditions, building a new array - or is that overkill? Thanks, Stefan
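
    One option (a sketch, assuming item's primary key column is named id) is to join the aggregated IDs back to item inside the same query instead of post-filtering in PHP:

    $sql = "SELECT q.itemID, COUNT(*) AS cnt
    FROM (
        SELECT `itemID` FROM `userItem`
        WHERE FROM_UNIXTIME( `time` ) >= NOW() - INTERVAL 1 DAY
        UNION ALL
        SELECT `itemID` FROM `userComment`
        WHERE FROM_UNIXTIME( `time` ) >= NOW() - INTERVAL 1 DAY AND `itemID` > 0
    ) q
    JOIN item i ON i.id = q.itemID
    WHERE i.categoryID = 'xx' AND i.typeID = 'xx'
    GROUP BY q.itemID
    ORDER BY cnt DESC";

    That keeps it a single round trip and avoids the per-row check loop.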

    Read the article

  • Convert charset in mysql query

    - by Yousf
Hi, I have a question about converting charsets from inside a MySQL query. I have 2 databases: one for the website (Joomla), the other for the forum (IPB). I am querying from inside Joomla, which by default has "SET NAMES UTF8". I want to query a table inside the forum database, a table called "ibf_topics". This table has latin1 encoding. I do the following to select anything from the non-UTF8 table:

//convert connection to handle latin1.
$query = "SET NAMES latin1";
$db->setQuery($query);
$db->query();

$query = "select id, title from other_database.ibf_topics";
$db->setQuery($query);
$db->query();
//read result into an array.

//return connection to handle UTF8.
$query = "SET NAMES UTF8";
$db->setQuery($query);
$db->query();

After that, when I want to use the selected title, I use the following:

echo iconv("CP1256", "UTF-8", $topic['title']);

The question is: is there any way to avoid all this hassle? For now, I can't change the forum database to UTF8 and I can't change the Joomla database to latin1 :S
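
    One way to skip the SET NAMES dance entirely (a sketch of the usual trick for columns whose declared charset doesn't match the stored bytes - here the latin1 column apparently holds CP1256 data, given the iconv() call): strip the latin1 label with a BINARY cast, relabel the bytes as cp1256, and let the server convert to the connection's UTF-8 on the way out:

    SELECT id, CONVERT(CAST(title AS BINARY) USING cp1256) AS title
    FROM other_database.ibf_topics;

    If that works, the PHP-side iconv() call becomes unnecessary as well.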

    Read the article

  • what is a fast way to output h5py dataset to text?

    - by user362761
I am using the h5py python package to read files in HDF5 format (e.g. somefile.h5). I would like to write the contents of a dataset to a text file. For example, I would like to create a text file with the following contents: 1,20,31,75,142,324,78,12,3,90,8,21,1. I am able to access the dataset in python using this code:

import h5py
f = h5py.File('/Users/Me/Desktop/thefile.h5', 'r')
group = f['/level1/level2/level3']
dset = group['dsetname']

My naive approach is too slow, because my dataset has over 20000 entries:

# write all values to file
for index in range(len(dset)):
    # do not add comma after last value
    if index == len(dset)-1:
        txtfile.write(repr(dset[index]))
    else:
        txtfile.write(repr(dset[index])+',')
txtfile.close()
return None

Is there a faster way to write this to a file? Perhaps I could convert the dataset into a NumPy array or even a Python list, and then use some file-writing tool? (I could experiment with concatenating the values into a larger string before writing to file, but I'm hoping there's something entirely more elegant.)
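
    h5py datasets support NumPy-style slicing, so one bulk read plus a single join is usually much faster than 20000 per-item reads; a minimal sketch (the output filename is a placeholder):

    import h5py

    with h5py.File('/Users/Me/Desktop/thefile.h5', 'r') as f:
        values = f['/level1/level2/level3/dsetname'][...]  # one read into a NumPy array

    with open('output.txt', 'w') as txtfile:
        txtfile.write(','.join(repr(v) for v in values))   # no trailing comma to special-case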

    Read the article

  • How to column-ify an output from a certain program?

    - by mbaitoff
I have a program that generates and outputs a sequence of simple sample math homework tasks, like:

1 + 1 = ...
3 + 3 = ...
2 + 5 = ...
3 + 7 = ...
4 + 2 = ...

A sequence can be quite long, and I'd like to save space when this sequence is printed by converting it as follows:

1 + 1 = ...     3 + 7 = ...
3 + 3 = ...     4 + 2 = ...
2 + 5 = ...

that is, wrapping the lines into two or more columns. I was expecting the column Linux utility to do the job using the -c N option with N=2; however, it still outputs the lines in one column whatever N is. How would I do the column-ifying of the sequence of lines?
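
    The catch is that column's -c option takes the output width in characters, not a column count, so -c 2 squeezes everything into a single column. Either give it a realistic width, or use pr, which takes a column count directly; a sketch (./homework stands in for the generating program):

    ./homework | column -c 80    # -c is the display width; the column count is derived from it
    ./homework | pr -2 -t        # exactly two columns, column-major order, -t drops headers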

    Read the article

  • How can I use rows in a lookup table as columns in a MySQL query?

    - by TomH
I'm trying to build a MySQL query that uses the rows in a lookup table as the columns in my result set.

LookupTable
id | AnalysisString
1  | color
2  | size
3  | weight
4  | speed

ScoreTable
id | lookupID | score | customerID
1  | 1        | A     | 1
2  | 2        | C     | 1
3  | 4        | B     | 1
4  | 2        | A     | 2
5  | 3        | A     | 2
6  | 1        | A     | 3
7  | 2        | F     | 3

I'd like a query that would use the relevant LookupTable rows as columns in a query so that I can get a result like this:

customerID | color | size | weight | speed
1            A       C               D
2                    A      A
3            A       F

The kicker of the problem is that there may be additional rows added to the LookupTable, and the query should be dynamic and not have the lookup IDs hardcoded. That is, this will work:

SELECT st.customerID,
  (SELECT st1.score FROM ScoreTable st1 WHERE lookupID=1 AND st.customerID = st1.customerID) AS color,
  (SELECT st1.score FROM ScoreTable st1 WHERE lookupID=2 AND st.customerID = st1.customerID) AS size,
  (SELECT st1.score FROM ScoreTable st1 WHERE lookupID=3 AND st.customerID = st1.customerID) AS weight,
  (SELECT st1.score FROM ScoreTable st1 WHERE lookupID=4 AND st.customerID = st1.customerID) AS speed
FROM ScoreTable st
GROUP BY st.customerID

Until there is a fifth row added to the LookupTable... Perhaps I'm breaking the whole relational model and will have to resolve this in the backend PHP code? Thanks for pointers/guidance. tom
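
    One way to keep the lookup IDs out of the SQL (a sketch) is to pivot on the lookup names via a join and let the backend build the column list from the lookup table itself:

    SELECT st.customerID,
           MAX(CASE WHEN lt.AnalysisString = 'color'  THEN st.score END) AS color,
           MAX(CASE WHEN lt.AnalysisString = 'size'   THEN st.score END) AS size,
           MAX(CASE WHEN lt.AnalysisString = 'weight' THEN st.score END) AS weight,
           MAX(CASE WHEN lt.AnalysisString = 'speed'  THEN st.score END) AS speed
    FROM ScoreTable st
    JOIN LookupTable lt ON lt.id = st.lookupID
    GROUP BY st.customerID;

    Since the queries are generated anyway, the PHP side can first run SELECT AnalysisString FROM LookupTable and emit one MAX(CASE ...) line per row, so a fifth lookup row simply becomes a fifth column without touching the relational model.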

    Read the article

  • httpd high cpu usage slowing down server response

    - by max
My client has an image sharing website with about 100,000 visitors per day. It has slowed down considerably since this morning. When I checked processes I noticed high CPU usage from httpd... some have suggested a DDoS attack... I'm not a webmaster and I've no idea what's going on.

top

top - 20:13:30 up 5:04, 4 users, load average: 4.56, 4.69, 4.59
Tasks: 284 total, 3 running, 281 sleeping, 0 stopped, 0 zombie
Cpu(s): 12.1%us, 0.9%sy, 1.7%ni, 69.0%id, 16.4%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 16037152k total, 15875096k used, 162056k free, 360468k buffers
Swap: 4194288k total, 888k used, 4193400k free, 14050008k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4151 apache 20 0 277m 84m 3784 R 50.2 0.5 0:01.98 httpd
4115 apache 20 0 210m 16m 4480 S 18.3 0.1 0:00.60 httpd
12885 root 39 19 4296 692 308 S 13.0 0.0 11:09.53 gzip
4177 apache 20 0 214m 20m 3700 R 12.3 0.1 0:00.37 httpd
2219 mysql 20 0 4257m 198m 5668 S 11.0 1.3 42:49.70 mysqld
3691 apache 20 0 206m 14m 6416 S 1.7 0.1 0:03.38 httpd
3934 apache 20 0 211m 17m 4836 S 1.0 0.1 0:03.61 httpd
4098 apache 20 0 209m 17m 3912 S 1.0 0.1 0:04.17 httpd
4116 apache 20 0 211m 17m 4476 S 1.0 0.1 0:00.43 httpd
3867 apache 20 0 217m 23m 4672 S 0.7 0.1 1:03.87 httpd
4146 apache 20 0 209m 15m 3628 S 0.7 0.1 0:00.02 httpd
4149 apache 20 0 209m 15m 3616 S 0.7 0.1 0:00.02 httpd
12884 root 39 19 22336 2356 944 D 0.7 0.0 0:19.21 tar
4054 apache 20 0 206m 12m 4576 S 0.3 0.1 0:00.32 httpd

another top

top - 15:46:45 up 5:08, 4 users, load average: 5.02, 4.81, 4.64
Tasks: 288 total, 6 running, 281 sleeping, 0 stopped, 1 zombie
Cpu(s): 18.4%us, 0.9%sy, 2.3%ni, 56.5%id, 21.8%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 16037152k total, 15792196k used, 244956k free, 360924k buffers
Swap: 4194288k total, 888k used, 4193400k free, 13983368k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4622 apache 20 0 209m 16m 3868 S 54.2 0.1 0:03.99 httpd
4514 apache 20 0 213m 20m 3924 R 50.8 0.1 0:04.93 httpd
4627 apache 20 0 221m 27m 4560 R 18.9 0.2 0:01.20 httpd
12885 root 39 19 4296 692 308 S 18.9 0.0 11:51.79 gzip
2219 mysql 20 0 4257m 199m 5668 S 18.3 1.3 43:19.04 mysqld
4512 apache 20 0 227m 33m 4736 R 5.6 0.2 0:01.93 httpd
4520 apache 20 0 213m 19m 4640 S 1.3 0.1 0:01.48 httpd
4590 apache 20 0 212m 19m 3932 S 1.3 0.1 0:00.06 httpd
4573 apache 20 0 210m 16m 3556 R 1.0 0.1 0:00.03 httpd
4562 root 20 0 15164 1388 952 R 0.7 0.0 0:00.08 top
98 root 20 0 0 0 0 S 0.3 0.0 0:04.89 kswapd0
100 root 39 19 0 0 0 S 0.3 0.0 0:02.85 khugepaged
4579 apache 20 0 209m 16m 3900 S 0.3 0.1 0:00.83 httpd
4637 apache 20 0 209m 15m 3668 S 0.3 0.1 0:00.03 httpd

ps aux

[root@server ~]# ps aux | grep httpd
root 2236 0.0 0.0 207524 10124 ? Ss 15:09 0:03 /usr/sbin/http d -k start -DSSL
apache 3087 2.7 0.1 226968 28232 ? S 20:04 0:06 /usr/sbin/http d -k start -DSSL
apache 3170 2.6 0.1 221296 22292 ? R 20:05 0:05 /usr/sbin/http d -k start -DSSL
apache 3171 9.0 0.1 225044 26768 ? R 20:05 0:17 /usr/sbin/http d -k start -DSSL
apache 3188 1.5 0.1 223644 24724 ? S 20:05 0:03 /usr/sbin/http d -k start -DSSL
apache 3197 2.3 0.1 215908 17520 ? S 20:05 0:04 /usr/sbin/http d -k start -DSSL
apache 3198 1.1 0.0 211700 13000 ? S 20:05 0:02 /usr/sbin/http d -k start -DSSL
apache 3272 2.4 0.1 219960 21540 ? S 20:06 0:03 /usr/sbin/http d -k start -DSSL
apache 3273 2.0 0.0 211600 12804 ? S 20:06 0:03 /usr/sbin/http d -k start -DSSL
apache 3279 3.7 0.1 229024 29900 ? S 20:06 0:05 /usr/sbin/http d -k start -DSSL
apache 3280 1.2 0.0 0 0 ? Z 20:06 0:01 [httpd] <defunct>
apache 3285 2.9 0.1 218532 21604 ?
S 20:06 0:04 /usr/sbin/http d -k start -DSSL apache 3287 30.5 0.4 265084 65948 ? R 20:06 0:43 /usr/sbin/http d -k start -DSSL apache 3297 1.9 0.1 216068 17332 ? S 20:06 0:02 /usr/sbin/http d -k start -DSSL apache 3342 2.7 0.1 216716 17828 ? S 20:06 0:03 /usr/sbin/http d -k start -DSSL apache 3356 1.6 0.1 217244 18296 ? S 20:07 0:01 /usr/sbin/http d -k start -DSSL apache 3365 6.4 0.1 226044 27428 ? S 20:07 0:06 /usr/sbin/http d -k start -DSSL apache 3396 0.0 0.1 213844 16120 ? S 20:07 0:00 /usr/sbin/http d -k start -DSSL apache 3399 5.8 0.1 215664 16772 ? S 20:07 0:05 /usr/sbin/http d -k start -DSSL apache 3422 0.7 0.1 214860 17380 ? S 20:07 0:00 /usr/sbin/http d -k start -DSSL apache 3435 3.3 0.1 216220 17460 ? S 20:07 0:02 /usr/sbin/http d -k start -DSSL apache 3463 0.1 0.0 212732 15076 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3492 0.0 0.0 207660 7552 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3493 1.4 0.1 218092 19188 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3500 1.9 0.1 224204 26100 ? R 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3501 1.7 0.1 216916 17916 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3502 0.0 0.0 207796 7732 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3505 0.0 0.0 207660 7548 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3529 0.0 0.0 207660 7524 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3531 4.0 0.1 216180 17280 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3532 0.0 0.0 207656 7464 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3543 1.4 0.1 217088 18648 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3544 0.0 0.0 207656 7548 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3545 0.0 0.0 207656 7560 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3546 0.0 0.0 207660 7540 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3547 0.0 0.0 207660 7544 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3548 2.3 0.1 216904 17888 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3550 0.0 0.0 207660 7540 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3551 0.0 0.0 207660 7536 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3552 0.2 0.0 214104 15972 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3553 6.5 0.1 216740 17712 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3554 6.3 0.1 216156 17260 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3555 0.0 0.0 207796 7716 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3556 1.8 0.0 211588 12580 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3557 0.0 0.0 207660 7544 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3565 0.0 0.0 207660 7520 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3570 0.0 0.0 207660 7516 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3571 0.0 0.0 207660 7504 ? 
S 20:08 0:00 /usr/sbin/http d -k start -DSSL root 3577 0.0 0.0 103316 860 pts/2 S+ 20:08 0:00 grep httpd httpd error log [Mon Jul 01 18:53:38 2013] [error] [client 2.178.12.67] request failed: error reading the headers, referer: http://akstube.com/image/show/27023/%D9%86%DB%8C%D9%88%D8%B4%D8%A7-%D8%B6%DB%8C%D8%BA%D9%85%DB%8C-%D9%88-%D8%AE%D9%88%D8%A7%D9%87%D8%B1-%D9%88-%D9%87%D9%85%D8%B3%D8%B1%D8%B4 [Mon Jul 01 18:55:33 2013] [error] [client 91.229.215.240] request failed: error reading the headers, referer: http://akstube.com/image/show/44924 [Mon Jul 01 18:57:02 2013] [error] [client 2.178.12.67] Invalid method in request [Mon Jul 01 18:57:02 2013] [error] [client 2.178.12.67] File does not exist: /var/www/html/501.shtml [Mon Jul 01 19:21:36 2013] [error] [client 127.0.0.1] client denied by server configuration: /var/www/html/server-status [Mon Jul 01 19:21:36 2013] [error] [client 127.0.0.1] File does not exist: /var/www/html/403.shtml [Mon Jul 01 19:23:57 2013] [error] [client 151.242.14.31] request failed: error reading the headers [Mon Jul 01 19:37:16 2013] [error] [client 2.190.16.65] request failed: error reading the headers [Mon Jul 01 19:56:00 2013] [error] [client 151.242.14.31] request failed: error reading the headers Not a JPEG file: starts with 0x89 0x50 also there is lots of these in the messages log Jul 1 20:15:47 server named[2426]: client 203.88.6.9#11926: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 20:15:47 server named[2426]: client 203.88.6.9#26255: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 20:15:48 server named[2426]: client 203.88.6.9#20093: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 20:15:48 server named[2426]: client 203.88.6.9#8672: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:07 server named[2426]: client 203.88.6.9#39352: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:08 server named[2426]: client 203.88.6.9#25382: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:08 server named[2426]: client 203.88.6.9#9064: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.23.9#35375: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.6.9#61932: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.23.9#4423: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.6.9#40229: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#46128: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#62128: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#35240: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#36774: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#28361: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#14970: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#20216: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.10#31794: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#23042: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#11333: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 
203.88.23.10#41807: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#20092: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#43526: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.9#17173: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.9#62412: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.10#63961: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.10#64345: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.10#31030: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#17098: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#17197: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#18114: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#59138: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:17 server named[2426]: client 203.88.6.9#28715: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:48:33 server named[2426]: client 203.88.23.9#26355: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:34 server named[2426]: client 203.88.23.9#34473: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:34 server named[2426]: client 203.88.23.9#62658: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:34 server named[2426]: client 203.88.23.9#51631: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:35 server named[2426]: client 203.88.23.9#54701: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:36 server named[2426]: client 203.88.6.10#63694: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:36 server named[2426]: client 203.88.6.10#18203: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:37 server named[2426]: client 203.88.6.10#9029: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:38 server named[2426]: client 203.88.6.10#58981: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:38 server named[2426]: client 203.88.6.10#29321: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:47 server named[2426]: client 119.160.127.42#42355: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:49 server named[2426]: client 119.160.120.42#46285: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:53 server named[2426]: client 119.160.120.42#30696: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:54 server named[2426]: client 119.160.127.42#14038: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:55 server named[2426]: client 119.160.120.42#33586: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:56 server named[2426]: client 119.160.127.42#55114: query (cache) 'xxxmaza.com/A/IN' denied

    Read the article

  • SQL SERVER – Beginning of SQL Server Architecture – Terminology – Guest Post

    - by pinaldave
SQL Server Architecture is a very deep subject. Covering it in a single post is an almost impossible task. However, it is a very popular topic among beginners and advanced users. I have requested my friend Anil Kumar, who is an expert in the SQL domain, to help me write a simple post about Beginning SQL Server Architecture. As stated earlier, this is a very deep subject, and in this first article of the series he has covered basic terminology. In future articles he will explore the subject further. Anil Kumar Yadav is Trainer, SQL Domain, Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc.

In this article we will discuss MS SQL Server architecture. The major components of SQL Server are:

Relational Engine
Storage Engine
SQL OS

Now we will discuss and understand each one of them.

1) Relational Engine: Also called the query processor, the Relational Engine includes the components of SQL Server that determine exactly what your query needs to do and the best way to do it. It manages the execution of queries as it requests data from the storage engine and processes the results returned. Different tasks of the Relational Engine:

Query Processing
Memory Management
Thread and Task Management
Buffer Management
Distributed Query Processing

2) Storage Engine: The Storage Engine is responsible for storage and retrieval of the data on the storage system (disk, SAN etc.). To understand more, let's focus on the following diagram. When we talk about any database in SQL Server, there are 2 types of files that are created at the disk level – a data file and a log file. The data file physically stores the data in data pages. Log files, also known as write-ahead logs, are used for storing transactions performed on the database. Let's understand the data file and log file in more detail:

Data File: The data file stores data in the form of data pages (8 KB each), and these data pages are logically organized in extents.

Extents: Extents are logical units in the database. A combination of 8 data pages, i.e. 64 KB, forms an extent. Extents can be of two types, mixed and uniform. Mixed extents hold different types of pages like index, system, object data etc. Uniform extents, on the other hand, are dedicated to only one type.

Pages: Below are some of the page types that can be stored in SQL Server:

Data Page: It holds the data entered by the user, but not data of type text, ntext, nvarchar(max), varchar(max), varbinary(max), image or xml.
Index: It stores the index entries.
Text/Image: It stores LOB (Large Object) data like text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml data.
GAM & SGAM (Global Allocation Map & Shared Global Allocation Map): Used for saving information related to the allocation of extents.
PFS (Page Free Space): Information related to page allocation and unused space available on pages.
IAM (Index Allocation Map): Information pertaining to extents that are used by a table or index per allocation unit.
BCM (Bulk Changed Map): Keeps information about the extents changed in a bulk operation.
DCM (Differential Change Map): Information about extents that have been modified since the last BACKUP DATABASE statement, per allocation unit.

Log File: Also known as the write-ahead log, it stores modifications to the database (DML and DDL). Sufficient information is logged to be able to:

Roll back transactions if requested
Recover the database in case of failure

Write-ahead logging is used to create log entries. Transaction logs are written in chronological order in a circular way, and the truncation policy for logs is based on the recovery model.

SQL OS: This lies between the host machine (Windows OS) and SQL Server. All the activities performed on the database engine are taken care of by SQL OS. It is a highly configurable operating system with a powerful API (application programming interface), enabling automatic locality and advanced parallelism. SQL OS provides various operating system services, such as memory management, which deals with the buffer pool, log buffer and deadlock detection using the blocking and locking structure. Other services include exception handling and hosting for external components like the Common Language Runtime (CLR).

I guess this brief article gives you an idea about the various terminology related to SQL Server architecture. In future articles we will explore it further.

Guest Author: The author of this article is Anil Kumar Yadav, Trainer, SQL Domain, Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology
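
    To poke at the data and log files described above, a simple catalog query works (a quick sketch; size is stored in 8 KB pages):

    -- Lists each file behind the current database, with size converted to MB
    SELECT name, type_desc, physical_name, size * 8 / 1024 AS size_mb
    FROM sys.database_files;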

    Read the article

  • SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
Earlier I asked a puzzle about why statistics are not updated. Read the complete details over here: Statistics are not Updated but are Created Once. In the question I demonstrated that even though statistics should have been updated after lots of inserts into the table, they are not. (Read the details: SQL SERVER – When are Statistics Updated – What triggers Statistics to Update) In this example I have created the following situation: Create Table; Insert 1000 Records; Check the Statistics; Now insert 10 times more, 10,000 records; Check the Statistics – they will NOT be updated; Auto Update Statistics and Auto Create Statistics for the database are TRUE. Now I have requested two things in the example: 1) Why is this happening? 2) How to fix this issue? I have many answers – here is how I fixed it, which resolved the issue for me. NOTE: There are multiple answers to this problem and I will do my best to list all. Solution: Create a nonclustered index on column City. Here is the working example for the same. Let us understand this script; there is added explanation at the end.

-- Execution Plans Difference
-- Estimated Execution Plan Vs Actual Execution Plan
-- Create Sample Database
CREATE DATABASE SampleDB
GO
USE SampleDB
GO
-- Create Table
CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
GO
CREATE NONCLUSTERED INDEX IX_ExecTable1 ON ExecTable (City);
GO
-- Insert One Thousand Records
-- INSERT 1
INSERT INTO ExecTable (ID,FirstName,LastName,City)
SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
'Bob',
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
ELSE 'Houston' END
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Display statistics of the table
sp_helpstats N'ExecTable', 'ALL'
GO
-- Select Statement
SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
GO
-- Display statistics of the table
sp_helpstats N'ExecTable', 'ALL'
GO
-- Replace your Statistics over here
DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
GO
--------------------------------------------------------------
-- Round 2
-- Insert One Thousand Records
-- INSERT 2
INSERT INTO ExecTable (ID,FirstName,LastName,City)
SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
'Bob',
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
ELSE 'Houston' END
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Select Statement
SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
GO
-- Display statistics of the table
sp_helpstats N'ExecTable', 'ALL'
GO
-- Replace your Statistics over here
DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
GO
-- Clean up Database
DROP TABLE ExecTable
GO

When I created the nonclustered index on the column City, it also created statistics on the same column with the same name as the index. When we populate the data in the column, the index is updated – invalidating the execution plan – which leads to the statistics being updated on the next execution of the SELECT. This behavior does not happen on a heap or on a column whose statistics were auto created. If you explicitly update the index, you can often see that the statistics are updated as well. You can see that this is for sure happening if you follow the explanation of John Sansom.

John Sansom's suggestion: That was fun! Although the column statistics are invalidated by the time the second select statement is executed, the query is not compiled/recompiled; instead, the existing query plan is reused. It is the "next" compiled query against the column statistics that will see that they are out of date and will then in turn instantiate the action of updating statistics. You can see this in action by forcing the second statement to recompile:

SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York' OPTION(RECOMPILE)
GO

Kevin Cross also has another suggestion: I agree with John. It is reusing the Execution Plan. Aside from OPTION(RECOMPILE), clearing the Execution Plan Cache before the subsequent tests will also work. I.e., run this before round 2:

--------------------------------------------------------------
-- Clear execution plan cache before next test
DBCC FREEPROCCACHE WITH NO_INFOMSGS;
--------------------------------------------------------------

Nice puzzle! Kevin

As this was a puzzle, John and Kevin both got the correct answer; there was no condition that the answer be part of best practices. I know John, and he is one of the finest DBAs around – his tremendous knowledge has always impressed me. John and Kevin both will agree that clearing the cache using DBCC FREEPROCCACHE, or recompiling each query every time, is for sure not good advice on a production server. It is a correct answer but not a best practice. By the way, if you have a better solution or a better suggestion, please advise. I am open to changing my answer and publishing further improvements to this solution. On a very separate note, I like to have a clustered index on my primary key, which I have not mentioned here as it is out of the scope of this puzzle. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Index, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Statistics

    Read the article

  • SQL SERVER – Parsing SSIS Catalog Messages – Notes from the Field #030

    - by Pinal Dave
[Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is one of the key essential parts of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications. The tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data. In this episode of the Notes from the Field series I requested SSIS Expert Andy Leonard to discuss one of the most interesting concepts, SSIS catalog messages. There is plenty of interesting and useful information captured in the SSIS catalog, and we will learn together how to explore it.

The SSIS Catalog captures a lot of cool information by default. Here's a query I use to parse messages from the catalog.operation_messages table in the SSISDB database, where the logged messages are stored. This query is set up to parse a default message transmitted by the Lookup Transformation. It's one of my favorite messages in the SSIS log because it gives me excellent information when I'm tuning SSIS data flows. The message reads similar to:

Data Flow Task:Information: The Lookup processed 4485 rows in the cache. The processing time was 0.015 seconds. The cache used 1376895 bytes of memory.

The query:

USE SSISDB
GO
DECLARE @MessageSourceType INT = 60
DECLARE @StartOfIDString VARCHAR(100) = 'The Lookup processed '
DECLARE @ProcessingTimeString VARCHAR(100) = 'The processing time was '
DECLARE @CacheUsedString VARCHAR(100) = 'The cache used '
DECLARE @StartOfIDSearchString VARCHAR(100) = '%' + @StartOfIDString + '%'
DECLARE @ProcessingTimeSearchString VARCHAR(100) = '%' + @ProcessingTimeString + '%'
DECLARE @CacheUsedSearchString VARCHAR(100) = '%' + @CacheUsedString + '%'
SELECT operation_id
, SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))) AS LookupRowsCount
, SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))) AS LookupProcessingTime
, CASE WHEN (CONVERT(numeric(3,3),SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))))) = 0 THEN 0 ELSE CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))) / CONVERT(numeric(3,3),SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1)))) END AS LookupRowsPerSecond
, SUBSTRING(MESSAGE, (PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1)) - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))) AS LookupBytesUsed
, CASE WHEN (CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))))= 0 THEN 0 ELSE CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1)) - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1)))) / CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))) END AS LookupBytesPerRow
FROM [catalog].[operation_messages]
WHERE message_source_type = @MessageSourceType
AND MESSAGE LIKE @StartOfIDSearchString
GO

Note that you have to set some parameter values:

@MessageSourceType [int] – represents the message source type value from the following results:

Value  Description
10     Entry APIs, such as T-SQL and CLR Stored procedures
20     External process used to run package (ISServerExec.exe)
30     Package-level objects
40     Control Flow tasks
50     Control Flow containers
60     Data Flow task
70     Custom execution message

Note: Taken from Reza Rad's (excellent!) helper.MessageSourceType table found here.

@StartOfIDString [VarChar(100)] – use this to uniquely identify the message field value you wish to parse. In this case, the string 'The Lookup processed ' identifies all the Lookup Transformation messages I desire to parse.

@ProcessingTimeString [VarChar(100)] – this parameter is message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Processing Time value. For this execution, I use the string 'The processing time was '.

@CacheUsedString [VarChar(100)] – this parameter is also message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Cache Used value. It returns the memory used, in bytes. For this execution, I use the string 'The cache used '.

The other parameters are built from variations of the parameters listed above. The query parses the values into text. The string values are converted to numeric values for the ratio calculations, LookupRowsPerSecond and LookupBytesPerRow. Since ratios involve division, CASE statements check for denominators that equal 0. The results appear in an SSMS grid.

This is not the only way to retrieve this information, and much of the code lends itself to conversion to functions. If there is interest, I will share the functions in an upcoming post. If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS

    Read the article
