Search Results

Search found 20931 results on 838 pages for 'mysql insert'.

Page 460 of 838

  • Unable to boot fedora 11

    - by csunwold
    I had been running Fedora 11 for several months without a hiccup, but two days ago I ran "yum update" and installed whatever updates were available (I didn't pay attention to what they were). I was having problems with mysql, so I tried "yum remove mysql", which removed mysql as well as quite a few unexpected dependencies. I then ran "yum install mysql" without a hitch and went about my way. However, when I next booted the machine it got to "Starting preload daemon [OK]" and then hung with a flashing cursor on the screen. I tried following http://dailypackage.fedorabook.com/i...ling-Grub.html but it didn't seem to make any difference. I put a new hard drive with WinXP on it into the same machine and booted from that, and I tried to use Ext2 Installable File System for Windows, but when I run it, it only seems to see /boot and nothing else on the hard drive. Any ideas?

    Read the article

  • yum list installed including version of all installed packages CentOS 5.4

    - by Andy
    I have a list of packages installed with yum on CentOS 5.4 [root@server ~]# yum list installed ... Installed Packages GConf2.x86_64 2.14.0-9.el5 installed ImageMagick.x86_64 6.2.8.0-4.el5_1.1 installed MAKEDEV.x86_64 3.23-1.2 installed MySQL-python.x86_64 1.2.1-1 installed I would like to download these rpms locally using yumdownloader --resolve MySQL-python-1.2.1-1.x86_64 etc. However the package formatting is different (MySQL-python.x86_64 1.2.1-1 vs MySQL-python-1.2.1-1.x86_64) so I am unable to download them using the above command. I don't want to have to parse the output of yum list installed, and I also don't want to use the contents of /var/log/yum.log* as I'll have to account for erased packages and version discrepancies. However /var/log/yum.log* does have the formatting I require... May 25 14:58:15 Installed: groff-1.18.1.1-11.1.x86_64 May 25 14:58:15 Installed: bzip2-1.0.3-4.el5_2.x86_64 Any suggestions?
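    One approach, sketched here rather than taken from the original question, is to skip yum list entirely and have rpm print the installed packages in exactly the format yumdownloader expects:

        # Print every installed package as name-version-release.arch,
        # the form yumdownloader accepts (gpg-pubkey pseudo-packages may need filtering out).
        rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort > installed-packages.txt

        # Download the corresponding RPMs (plus dependencies) into the current directory.
        yumdownloader --resolve $(cat installed-packages.txt)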

    Read the article

  • I recompiled dozens of times, so why are my OpenSSL Library and Header versions still not the same?

    - by Doug
    From the PHP (5.4.4) phpinfo() output, this is the problem I have: openssl: OpenSSL support enabled; OpenSSL Library Version: OpenSSL 0.9.8o 01 Jun 2010; OpenSSL Header Version: OpenSSL 1.0.1 14 Mar 2012. I am completely out of ideas and cannot understand why it isn't working. This was my configure line: ./configure '--with-apxs2=/etc/apache24/bin/apxs' '--with-mysql' '--prefix=/etc/apache24/php' '--with-config-file-path=/etc/apache24/php' '--enable-force-cgi-redirect' '--disable-cgi' '--with-zlib' '--with-gettext' '--with-curl' '--with-mcrypt' '--with-gd' '--with-pdo' '--with-pdo-mysql' '--with-mysql-sock=/var/run/mysqld/mysqld.sock' '--with-libdir=lib32' '--with-openssl=shared,/usr' '--with-mysqli'
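    A mismatch like this usually means the compiled openssl extension picked up one set of headers at build time but links against a different shared library at run time. A quick check, sketched here with assumed paths (the module location depends on your Apache layout):

        # Which libssl/libcrypto does the Apache PHP module actually load?
        ldd /etc/apache24/modules/libphp5.so | grep -iE 'ssl|crypto'

        # Which OpenSSL headers live under /usr/include (what --with-openssl=shared,/usr saw)?
        grep OPENSSL_VERSION_TEXT /usr/include/openssl/opensslv.h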

    Read the article

  • overusage of RAM in Hypervm VPS

    - by Mac Taylor
    Hey guys, I have a VPS running on HyperVM. In the process list I have something like this: /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/ (user: mysql), which takes 150 MB of RAM, and then /usr/sbin/named -u named -t /var/named/chroot (user: named), which takes another 50 MB. How can I reduce this RAM usage? I have root and SSH access.
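    For the MySQL side, a trimmed-down /etc/my.cnf is the usual lever. The values below are only an illustrative sketch for a small VPS, not recommendations from the original post, so tune them to the actual workload:

        [mysqld]
        skip-innodb             # only if no tables use the InnoDB engine
        key_buffer_size = 16M
        max_connections = 30
        query_cache_size = 8M
        tmp_table_size = 16M
        max_allowed_packet = 4M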

    Read the article

  • LAMP Server without single failure point + Global Server Load Balancing?

    - by José Nobile
    I want to implement a LAMP server (Linux, Apache, MySQL, PHP) without a single point of failure and with Global Server Load Balancing. I have a server in Cali, Colombia, and another server will be installed in Melbourne, Australia; users in America can use the Cali server, and users in Europe, Asia, Africa or Oceania the Melbourne server. If either server fails (or its load is excessively high), the other server must answer all requests. The data in MySQL must be in sync, and the PHP files and any configuration on both servers must be in sync as well. I have read about Google's DNS servers 8.8.8.8 and 8.8.4.4 and anycast, and also about MySQL semisynchronous replication and MySQL Cluster, but what about other things, such as crontabs and the server configuration? The solution can't depend on APNIC or BGP, only open source software running on Linux.
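    MySQL replication covers the data, but the PHP files, crontabs and configuration need their own sync mechanism. A minimal sketch (host name and paths are assumptions) is a periodic one-way rsync from the primary; a configuration management tool is the more robust alternative for two-way consistency:

        # crontab entries on the Cali server, pushing web files and vhost config to Melbourne
        */5 * * * * rsync -az --delete /var/www/ melbourne.example.com:/var/www/
        */5 * * * * rsync -az /etc/httpd/conf.d/ melbourne.example.com:/etc/httpd/conf.d/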

    Read the article

  • How can I transfer a Zabbix item between hosts and keep its statistics?

    - by Stepchik
    There are two servers (srv1 and srv2), each with a MySQL server installed. The srv1 MySQL instance contains a database (db1). The Zabbix server collects statistics through a configured agent user parameter (https://www.zabbix.com/documentation/2.0/manual/config/items/userparameters). Yesterday I copied database db1 from MySQL on srv1 to MySQL on srv2. I can clone the Zabbix server item (https://www.zabbix.com/documentation/2.0/manual/config/items) to srv2, but that loses all of the srv1 db1 statistics. Can you advise how to keep them?

    Read the article

  • Building PHP For MacOS

    - by Eray
    I was using XAMPP and decided to uninstall it and use MacOS' built-in Apache and PHP modules. While uninstalling XAMPP I accidentally deleted /usr/bin/php and the other PHP-CLI files, and I decided to install the newest version of PHP (5.5.12) instead of rebuilding the current version (5.4.24). I downloaded and unpacked it, then ran configure as described in this guide: ./configure '--with-apxs2=/usr/sbin/apxs' '--enable-cli' '--with-config-file-path=/etc' '--with-zlib=/usr' '--enable-bcmath' '--with-bz2=/usr' '--enable-calendar' '--disable-cgi' '--with-curl=/usr' '--enable-dba' '--enable-ndbm=/usr' '--enable-exif' '--enable-fpm' '--enable-ftp' '--with-gd' '--enable-gd-native-ttf' '--enable-mbregex' '--with-mysql=mysqlnd' '--with-mysqli=mysqlnd' '--with-pear' '--with-pdo-mysql=mysqlnd' '--with-mysql-sock=/var/mysql/mysql.sock' '--with-tidy' '--enable-wddx' '--with-xmlrpc' '--enable-zip', followed by make and make install. When I check phpinfo(), it still shows version 5.4.24. This line from my httpd.conf, LoadModule php5_module libexec/apache2/libphp5.so, points at /usr/libexec/apache2/libphp5.so, which comes from the old version, and I couldn't find a libphp5.so for the new version; there is no libphp5.so file inside the modules dir. How can I use the new PHP build with Apache? UPDATE: results of the php -v command: PHP 5.5.12 (cli) (built: May 27 2014 05:17:21) Copyright (c) 1997-2014 The PHP Group Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
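    A sketch of one way to track this down (the paths are guesses based on the configure line above): make sure the freshly built module actually got installed, then point Apache at it explicitly and restart.

        # Where did the new build leave its module?
        sudo find / -name libphp5.so -newer /usr/libexec/apache2/libphp5.so 2>/dev/null

        # If it never got installed, apxs can install and activate it straight from the source tree:
        cd php-5.5.12
        sudo /usr/sbin/apxs -i -a -n php5 libs/libphp5.so
        sudo apachectl restart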

    Read the article

  • Add directory to $PATH if it's not already there

    - by Doug Harris
    Has anybody written a bash function to add a directory to $PATH only if it's not already there? I typically add to PATH using something like: export PATH=/usr/local/mysql/bin:$PATH If I construct my PATH in .bash_profile, then it's not read unless the session I'm in is a login session -- which isn't always true. If I construct my PATH in .bashrc, then it runs with each subshell. So if I launch a Terminal window and then run screen and then run a shell script, I get: $ echo $PATH /usr/local/mysql/bin:/usr/local/mysql/bin:/usr/local/mysql/bin:.... I'm going to try building a bash function called add_to_path() which only adds the directory if it's not there. But, if anybody has already written (or found) such a thing, I won't spend the time on it.
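    For reference, a minimal sketch of such a function (one possible implementation, not the only one):

        # Add a directory to the front of PATH only if it is not already present.
        add_to_path() {
            case ":$PATH:" in
                *":$1:"*) ;;                     # already there, do nothing
                *) export PATH="$1:$PATH" ;;     # otherwise prepend it
            esac
        }

        add_to_path /usr/local/mysql/bin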

    Read the article

  • How to disable text overwrite mode in Netbeans (CentOS)?

    - by Kevin Lee
    Every time I type some text, it overwrites what I have already typed. I assume the editor is set to overwrite mode; I want to insert text, not overwrite it, but I can't disable the mode because my Insert key is combined with my Delete key, so every time I press Insert to disable overwrite mode it just deletes what I type. How can I disable this? I'm using CentOS, and the problem seems to be specific to NetBeans: when I type here it is in insert mode, but in NetBeans it just overwrites the code. Help!

    Read the article

  • Cannot destroy ZFS snapshot: dataset already exists

    - by Morven
    I have a server (T5220, though I doubt it matters) running Solaris 10 8/07 and I have a ZFS pool, "mysql", on internal disk. Within it I have a filesystem "mysql/data/4.1.12", which I snapshot hourly with a script from cron. I have one snapshot, created as one of those hourly snaps, that will not destroy. I have renamed it out of sequence to be "mysql/data/4.1.12@wibble" so that my script will not try and fail to destroy it, but it was originally within the sequence, though I doubt that matters. It renames successfully. The snapshot can be successfully navigated and read from through the .zfs/snapshots directory. It has no clones based on it. Trying to destroy it does this: (265) root@web-mysql4:/# zfs destroy mysql/data/4.1.12@wibble cannot destroy 'mysql/data/4.1.12@wibble': dataset already exists (266) root@web-mysql4:/# which is apparently nonsensical: of course it already exists, that's the point! Anyone seen anything like this before? Web searches show nothing obviously similar. I can provide patches installed if necessary.

    Read the article

  • How to limit network usage for a specific application running in Linux?

    - by B14D3
    I'm looking for something like nice for CPU, but for network usage: something that will limit an application's network consumption to a level that I configure. I have a problem with xapian-replicate-server, which is consuming 80% of my network bandwidth. It's causing MySQL connection problems (the MySQL server is running on this machine too). I can't move Xapian or MySQL to another machine, so I need to limit Xapian's network usage to a decent level. Is there any tool that will help me do this?
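    One userspace option, sketched here with purely illustrative rates, is trickle, which shapes the bandwidth of a single process without any kernel configuration; a tc class matched on the daemon's port is the heavier-weight kernel-level alternative:

        # Limit the replication server to roughly 2 MB/s down and 512 KB/s up (rates are in KB/s).
        trickle -s -d 2048 -u 512 xapian-replicate-server ...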

    Read the article

  • Linux: how to restore config file using apt-get/aptitude?

    - by o_O Tync
    I've accidentally lost my config file "/etc/mysql/my.cnf" and want to restore it. The file belongs to the package mysql-common, which is needed for some vital functionality, so I can't just purge && install it: the dependencies would also be uninstalled (or, if I ignore them temporarily, they won't be working). Is there a way to restore the config file from a package without un-ar-ing the package file? dpkg-reconfigure mysql-common did not restore it.
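    A sketch of two ways this is commonly handled (neither is taken from the original question): let dpkg recreate the missing conffile during a reinstall, or pull the single file out of the cached .deb.

        # Reinstall only mysql-common and recreate any missing conffiles.
        apt-get install --reinstall -o Dpkg::Options::="--force-confmiss" mysql-common

        # Or extract just my.cnf from the cached package file (this does unpack the .deb).
        dpkg-deb --fsys-tarfile /var/cache/apt/archives/mysql-common_*.deb \
            | tar -xO ./etc/mysql/my.cnf > /etc/mysql/my.cnf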

    Read the article

  • Character encoding problem

    - by out_sider
    I have a file named index.php which fetches a simple username from a MySQL server. The MySQL server is running on CentOS, and I have two different systems running Apache as web servers. One is my own Windows PC using a "WAMP" stack, which uses the MySQL server referred to before; the other is the CentOS server itself. I use this setup so I can develop on my laptop and run the final version on the CentOS box. The problem is this: accessing the CentOS box I get (on hxxp://centos): out_sider 1lu?s 2oi, while using WAMP on Windows I get (on hxxp://localhost): out_sider 1luís 2oi. The MySQL database is configured correctly, seeing that both use the same one, and I used an SVN repository to move the files from Windows to CentOS, so the file is the same. Does anyone have any suggestions? Thanks in advance
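    A question mark in one output and a correct í in the other usually points at a connection character set mismatch rather than bad data. A hedged PHP sketch (connection details are placeholders) that pins everything to UTF-8 on both machines:

        <?php
        // Send the page as UTF-8 and make the MySQL connection charset explicit,
        // so Windows/WAMP and CentOS behave identically.
        header('Content-Type: text/html; charset=utf-8');
        $link = mysql_connect('dbhost', 'user', 'pass');
        mysql_set_charset('utf8', $link);
        ?>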

    Read the article

  • Object reference not set to an instance of an object - Linked List Example

    - by Zoro Roronoa
    I am seeing the following error: Object reference not set to an instance of an object! Check to determine if the object is null before calling the method! I'm new to C#, and I made a program for sorted linked lists. Here is the code where the error occurs: public void Insert(double data) { Link newLink = new Link(data); Link current = first; Link previous = null; if (first == null) { first = newLink; } else { while (data > current.DData && current != null) { previous = current; current = current.Next; } previous.Next = newLink; newLink.Next = current; } } It says that the current reference is null at while (data > current.DData && current != null), but I assigned it with current = first; Please help! The rest is the complete code of the program: class Link { double dData; Link next=null; public Link Next { get { return next; } set { next = value; } } public double DData { get { return dData; } set { dData = value; } } public Link(double dData) { this.dData = dData; } public void DisplayLink() { Console.WriteLine("Link : "+ dData); } } class SortedList { Link first; public SortedList() { first = null; } public bool IsEmpty() { return (this.first == null); } public void Insert(double data) { Link newLink = new Link(data); Link current = first; Link previous = null; if (first == null) { first = newLink; } else { while (data > current.DData && current != null) { previous = current; current = current.Next; } previous.Next = newLink; newLink.Next = current; } } public Link Remove() { Link temp = first; first = first.Next; return temp; } public void DisplayList() { Link current; current = first; Console.WriteLine("Display the List!"); while (current != null) { current.DisplayLink(); current = current.Next; } } } class SortedListApp { public void TestSortedList() { SortedList newList = new SortedList(); newList.Insert(20); newList.Insert(22); newList.Insert(100); newList.Insert(1000); newList.Insert(15); newList.Insert(11); newList.DisplayList(); newList.Remove(); newList.DisplayList(); } }
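    For what it's worth, a likely fix (a sketch, not from the original post): && evaluates left to right, so the null check has to come before the dereference, and previous also needs a guard for the case where the new value is smaller than every existing element.

        // Test current for null before touching current.DData.
        while (current != null && data > current.DData)
        {
            previous = current;
            current = current.Next;
        }

        // If we never advanced, the new node becomes the new head.
        if (previous == null)
            first = newLink;
        else
            previous.Next = newLink;
        newLink.Next = current;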

    Read the article

  • SQL SERVER – Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video

    - by pinaldave
    Importing data into a database is one of the most important tasks. I often receive questions about the quickest way to insert CSV data, or how to import CSV data into a SQL Server table. Honestly, the process is very simple and the script is even simpler. In today’s SQL in Sixty Seconds video we will learn how quickly we can insert CSV data into SQL Server. The steps to import CSV are very simple: create the table, use BULK INSERT to import the data, verify the data, done! Absolutely, it is that simple. More on importing CSV data: SQL SERVER – Import CSV File Into SQL Server Using Bulk Insert – Load Comma Delimited File Into SQL Server SQL SERVER – Import CSV File into Database Table Using SSIS SQL SERVER – Create a Comma Delimited List Using SELECT Clause From Table Column SQL SERVER – Comma Separated Values (CSV) from Table Column SQL SERVER – Comma Separated Values (CSV) from Table Column – Part 2 I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share with you educational material. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
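    For readers who want the script itself, here is a minimal sketch of those steps (the file path and column layout are illustrative, not taken from the video):

        -- 1. Create the table
        CREATE TABLE dbo.CSVTest (ID INT, FirstName VARCHAR(40), LastName VARCHAR(40));

        -- 2. Use BULK INSERT to import the data
        BULK INSERT dbo.CSVTest
        FROM 'C:\csvtest.txt'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

        -- 3. Verify the data
        SELECT * FROM dbo.CSVTest;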

    Read the article

  • Using a "white list" for extracting terms for Text Mining, Part 2

    - by [email protected]
    In my last post, we set the groundwork for extracting specific tokens from a white list using a CTXRULE index. In this post, we will populate a table with the extracted tokens and produce a case table suitable for clustering with Oracle Data Mining. Our corpus of documents will be stored in a database table that is defined as create table documents(id NUMBER, text VARCHAR2(4000)); However, any suitable Oracle Text-accepted data type can be used for the text. We then create a table to contain the extracted tokens. The id column contains the unique identifier (or case id) of the document. The token column contains the extracted token. Note that a given document many have many tokens, so there will be one row per token for a given document. create table extracted_tokens (id NUMBER, token VARCHAR2(4000)); The next step is to iterate over the documents and extract the matching tokens using the index and insert them into our token table. We use the MATCHES function for matching the query_string from my_thesaurus_rules with the text. DECLARE     cursor c2 is       select id, text       from documents; BEGIN     for r_c2 in c2 loop        insert into extracted_tokens          select r_c2.id id, main_term token          from my_thesaurus_rules          where matches(query_string,                        r_c2.text)>0;     end loop; END; Now that we have the tokens, we can compute the term frequency - inverse document frequency (TF-IDF) for each token of each document. create table extracted_tokens_tfidf as   with num_docs as (select count(distinct id) doc_cnt                     from extracted_tokens),        tf       as (select a.id, a.token,                            a.token_cnt/b.num_tokens token_freq                     from                        (select id, token, count(*) token_cnt                        from extracted_tokens                        group by id, token) a,                       (select id, count(*) num_tokens                        from extracted_tokens                        group by id) b                     where a.id=b.id),        doc_freq as (select token, count(*) overall_token_cnt                     from extracted_tokens                     group by token)   select tf.id, tf.token,          token_freq * ln(doc_cnt/df.overall_token_cnt) tf_idf   from num_docs,        tf,        doc_freq df   where df.token=tf.token; From the WITH clause, the num_docs query simply counts the number of documents in the corpus. The tf query computes the term (token) frequency by computing the number of times each token appears in a document and divides that by the number of tokens found in the document. The doc_req query counts the number of times the token appears overall in the corpus. In the SELECT clause, we compute the tf_idf. Next, we create the nested table required to produce one record per case, where a case corresponds to an individual document. Here, we COLLECT all the tokens for a given document into the nested column extracted_tokens_tfidf_1. CREATE TABLE extracted_tokens_tfidf_nt              NESTED TABLE extracted_tokens_tfidf_1                  STORE AS extracted_tokens_tfidf_tab AS              select id,                     cast(collect(DM_NESTED_NUMERICAL(token,tf_idf)) as DM_NESTED_NUMERICALS) extracted_tokens_tfidf_1              from extracted_tokens_tfidf              group by id;   To build the clustering model, we create a settings table and then insert the various settings. 
Most notable are the number of clusters (20), using cosine distance which is better for text, turning off auto data preparation since the values are ready for mining, the number of iterations (20) to get a better model, and the split criterion of size for clusters that are roughly balanced in number of cases assigned. CREATE TABLE km_settings (setting_name VARCHAR2(30), setting_value VARCHAR2(30)); BEGIN   INSERT INTO km_settings (setting_name, setting_value)     VALUES (dbms_data_mining.clus_num_clusters, 20);   INSERT INTO km_settings (setting_name, setting_value)     VALUES (dbms_data_mining.kmns_distance, dbms_data_mining.kmns_cosine);   INSERT INTO km_settings (setting_name, setting_value)     VALUES (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_off);   INSERT INTO km_settings (setting_name, setting_value)     VALUES (dbms_data_mining.kmns_iterations, 20);   INSERT INTO km_settings (setting_name, setting_value)     VALUES (dbms_data_mining.kmns_split_criterion, dbms_data_mining.kmns_size);   COMMIT; END; With this in place, we can now build the clustering model. BEGIN     DBMS_DATA_MINING.CREATE_MODEL(     model_name          => 'TEXT_CLUSTERING_MODEL',     mining_function     => dbms_data_mining.clustering,     data_table_name     => 'extracted_tokens_tfidf_nt',     case_id_column_name => 'id',     settings_table_name => 'km_settings'); END; To generate cluster names from this model, check out my earlier post on that topic.

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
All of these are correct, but that was before cost-based optimization got involved :) Cost-based optimization When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. It’s job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, is Weekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘the Year = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that. New Cardinality Estimates The new memo groups require two new cardinality estimates to be derived. First, LogOp_Idx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER; The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!): DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM; This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM; Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 22, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 21. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365 row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate. 
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Installation

    - by Stuart Brierley
    I have previously detailed the installation of MySQL, the configuration of MySQL and the installation of the ODBC Data Connector for MySQL.  The reason I needed to install and configure these servers was to provide a test environment for a BizTalk Server 2009 solution I am working on, where BizTalk will be querying and populating a MySQL database. To do this I then needed to install and add the Community ODBC adapter from Two Connect: "The Community BizTalk Adapter for ODBC is based on the code that was first made available on GotDotNet a few years ago. TwoConnect has refreshed this code, added an installer, and tested it against the latest BizTalk editions. We are releasing the updates back to the BizTalk developer, user and partner community as part of our ongoing community initiatives. This is the second adapter package that TwoConnect makes available to the community after the very successful release of the BizTalk WSE 3 adapter a couple of years ago. This adapter is useful in all ODBC based integration scenarios. The following are the new features added and fixes made to the old code base on GotDotNet." Detailed below are the installation instructions for this adapter.  Downloading and running the installer will load up the splash screen. Next you need to select the installation location for the adapter. You then need to confirm the installation, following which you will be shown the installation progress. Assuming all has gone well, you should see the installation complete screen. Once the installation has completed successfully you will then need to add the adapter to your BizTalk Server.  To do this, open the BizTalk Administration console, expand the Platform Settings, right click on Adapters and select New\Adapter. You should then be able to select the ODBC adapter and choose the display name for the adapter. This adapter will then be shown in the BizTalk Administration console. Next I will be looking at using the ODBC Adapter when: Generating schemas Creating a receive port Creating a send port

    Read the article

  • How to Omit the Page Number From the First Page of a Word 2013 Document Without Using Sections

    - by Lori Kaufman
    Normally, the first page, or cover page, of a document does not have a page number or other header or footer text. You can avoid putting a page number on the first page using sections, but there is an easier way to do this. If you don’t plan to use sections in any other part of your document, you may want to avoid using them completely. We will show you how to easily take the page number off the cover page and start the page numbering at one on the second page of your document by simply using a footer (or a header) and changing one setting. Click the Page Layout tab. In the Page Setup section of the Page Layout tab, click the Page Setup dialog box launcher icon in the lower, right corner of the section. On the Page Setup dialog box, click the Layout tab and select the Different first page check box in the Headers and footers section so there is a check mark in the box. Click OK. You’ll notice there is no page number on the first page of your document now. However, you might want the second page to be page one of your document, only to find it is currently page two. To change the page number on the second page to one, click the Insert tab. In the Header & Footer section of the Insert tab, click Page Number and select Format Page Numbers from the drop-down menu. On the Page Number Format dialog box, select Start at in the Page numbering section. Enter 0 in the edit box and click OK. This allows the second page of your document to be labeled as page one. You can use the drop-down menu on the Format Page Numbers button in the Header & Footer section of the Insert tab to add page numbers to your document as well. Easily insert formatted page numbers at the top or bottom of the page or in the page margins. Use the same menu to remove page numbers from your document.     

    Read the article

  • New database profiling support in ANTS Performance Profiler

    - by Ben Emmett
    In May last year, the ANTS Performance Profiler team added the ability to profile database requests your application makes to SQL Server or Oracle. The really cool thing is that you’re shown those requests in the application’s call tree, so you can see what .NET code caused those queries to run. It’s particularly helpful if you’re using an ORM which automagically generates and runs queries for you, but which doesn’t necessarily do it in the most efficient way possible. Now by popular demand, we’ve added support for profiling MySQL (or MariaDB) and PostgreSQL, so you can see queries run against those databases too. Some of you have also said that you’re using the Devart dotConnect data providers instead of the native .NET ones, so we’ve added support for those drivers too. Hope it helps! For the record, here’s a list of supported connectors (ones in bold are new): SQL Server .NET Framework Data Provider Devart dotConnect for SQL Server Oracle .NET Framework Data Provider Oracle Data Provider for .NET Devart dotConnect for Oracle MySQL / MariaDB MySQL Connector/Net Devart dotConnect for MySQL PostgreSQL Npgsql .NET Data Provider for PostgreSQL Devart dotConnect for PostgreSQL SQL Server Compact Edition .NET Framework Data Provider for SQL Server Compact Edition Devart dotConnect for SQL Server Pro Have we missed a connector or database which you’d find useful? Tell us about it in the comments or by emailing [email protected]. Ben

    Read the article

  • how to troubleshoot sql server issues

    - by joe
    I have an ASP.NET application with a SQL Server database, and I am wondering if you can give your ideas on how to troubleshoot the following issue: I can insert / update / delete from any table, but I have one page that uses transactions to insert into different tables. The C# code is correct and very simple, but it fails. I used SQL Profiler to see how my app interacts with the DB (especially since the app uses stored procedures); I can catch the exec procedure statement and run it manually from SSMS and it works fine, but the same stored procedure fails from the application! That leads me to think the issue is coming from the user account and its settings. I am no expert in SQL Server and wonder if anyone can explain how to verify the required settings for the user account. Thanks. EDIT: here is how I reference my connection in web.config: <connectionStrings> <add name="Conn" connectionString="Data Source=localhost;Initial Catalog=myDB;Persist Security Info=True;User ID=DbUser;Password=password1254_3" providerName="System.Data.SqlClient" /> </connectionStrings> EDIT: I will try to describe the process here: 1- I begin a transaction. 2- I call a stored proc to insert (which succeeds) and return the scope identity (to be used in the next step). 3- I call another stored procedure to insert some more info plus the scope identity from step 2, which is a foreign key here. 4- I get a foreign key violation error. 5- The transaction is rolled back, and the tables are empty again... Thanks.
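    A hedged sketch of the pattern being described (procedure and parameter names are made up): the identity from step 2 has to be captured on the same connection and transaction and passed explicitly into step 3, otherwise the second insert sees a NULL or unrelated key and the foreign key check fails.

        // using System.Data; using System.Data.SqlClient;
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tran = conn.BeginTransaction())
            {
                // Step 2: parent proc is assumed to end with SELECT SCOPE_IDENTITY().
                var insertParent = new SqlCommand("usp_InsertParent", conn, tran) { CommandType = CommandType.StoredProcedure };
                int parentId = Convert.ToInt32(insertParent.ExecuteScalar());

                // Step 3: child insert re-uses that key on the same connection and transaction.
                var insertChild = new SqlCommand("usp_InsertChild", conn, tran) { CommandType = CommandType.StoredProcedure };
                insertChild.Parameters.AddWithValue("@ParentId", parentId);
                insertChild.ExecuteNonQuery();

                tran.Commit();  // commit only when both succeed; disposing the transaction rolls back on failure
            }
        }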

    Read the article

  • Today's Links (6/28/2011)

    - by Bob Rhubart
    Connecting People, Processes, and Content: An Online Event | Brian Dirking Dirking shares information on an Oracle Online Forum coming up on July 19. Social Relationships don't count until they count | Steve Jones "It's actually the interactions that matter to back up the social experience rather than the existence of a social link," says Jones. ORACLENERD: KScope 11: Cary Millsap Commenting on Cary Millsap's KScope presentation on Agile, Oracle ACE Chet Justice says, "I fight with methodology on a daily basis, mostly resulting in me hitting my head against the closest wall." The Sage Kings of Antiquity | Richard Veryard "Given that the empirical evidence for enterprise architecture is fairly weak, anecdotal and inconclusive, we are still more dependent than we might like on the authority of experts," says Veryard, "whether this be semi-anonymous committees (such as TOGAF) or famous consultants (such as Zachman)." Oracle Business Intelligence Blog: New BI Mobile Demos "These are short videos that showcase some of the capabilities in our mobile app," says Abhinav Agarwal. "One focuses on the Oracle BI platform, while the other showcases what is possible with the mobile app accessing Oracle Business Intelligence Applications, like Financial Analytics." MySQL HA Events in the UK, Germany & France | Oracle's MySQL Blog Oracle is running MySQL High Availability breakfast seminars in London (June 29), Düsseldorf (July 13) and Paris (September 7). "During these free seminars, we will review the various options and technologies at your disposal to implement highly available and highly scalable MySQL infrastructures, as well as best practices in terms of architectures," says Bertrand Matthelié. VENNSTER BLOG: User Experience in Fusion apps "When I heard about the Fusion Applications User Experience efforts, I was skeptical," says Oracle ACE Director Lonneke Dikmans of Vennster "My view of Oracle and User Experience has changed drastically today." Power Your Cloud with Oracle Fusion Middleware Running in over 50 cities across the globe, this event is aimed at Architects, IT Managers, and technical leaders like you who are using Fusion Middleware or trying to learn more about middleware in the context of Cloud computing.

    Read the article

  • New features in SQL Prompt 6.4

    - by Tom Crossman
    We’re pleased to announce a new beta version of SQL Prompt. We’ve been trying out a few new core technologies, and used them to add features and bug fixes suggested by users on the SQL Prompt forum and suggestions forum. You can download the SQL Prompt 6.4 beta here (zip file). Let us know what you think! New features Execute current statement In a query window, you can now execute the SQL statement under your cursor by pressing Shift + F5. For example, if you have a query containing two statements and your cursor is placed on the second statement: When you press Shift + F5, only the second statement is executed:   Insert semicolons You can now use SQL Prompt to automatically insert missing semicolons after each statement in a query. To insert semicolons, go to the SQL Prompt menu and click Insert Semicolons. Alternatively, hold Ctrl and press B then C. BEGIN…END block highlighting When you place your cursor over a BEGIN or END keyword, SQL Prompt now automatically highlights the matching keyword: Rename variables and aliases You can now use SQL Prompt to rename all occurrences of a variable or alias in a query. To rename a variable or alias, place your cursor over an instance of the variable or alias you want to rename and press F2: Improved loading dialog box The database loading dialog box now shows actual progress, and you can cancel loading databases:   Single suggestion improvement SQL Prompt no longer suggests keywords if the keyword has been typed and no other suggestions exist. Performance improvement SQL Prompt now has less impact on Management Studio start up time. What do you think? We want to hear your feedback about the beta. If you have any suggestions, or bugs to report, tell us on the SQL Prompt forum or our suggestions forum.

    Read the article

  • 12c - Invisible Columns...

    - by noreply(at)blogger.com (Thomas Kyte)
    Remember when 11g first came out and we had "invisible indexes"?  It seemed like a confusing feature - indexes that would be maintained by modifications (hence slowing them down), but would not be used by queries (hence never speeding them up).  But - after you looked at them a while, you could see how they can be useful.  For example - to add an index in a running production system, an index used by the next version of the code to be introduced later that week - but not tested against the queries in version one of the application in place now.  We all know that when you add an index - one of three things can happen - a given query will go much faster, it won't affect a given query at all, or... It will make some untested query go much much slower than it used to.  So - invisible indexes allowed us to modify the schema in a 'safe' manner - hiding the change until we were ready for it.Invisible columns accomplish the same thing - the ability to introduce a change while minimizing any negative side effects of that change.  Normally when you add a column to a table - any program with a SELECT * would start seeing that column, and programs with an INSERT INTO T VALUES (...) would pretty much immediately break (an INSERT without a list of columns in it).  Now we can add a column to a table in an invisible fashion, the column will not show up in a DESCRIBE command in SQL*Plus, it will not be returned with a SELECT *, it will not be considered in an INSERT INTO T VALUES statement.  It can be accessed by any query that asks for it, it can be populated by an INSERT statement that references it, but you won't see it otherwise.For example, let's start with a simple two column table:ops$tkyte%ORA12CR1> create table t  2  ( x int,  3    y int  4  )  5  /Table created.ops$tkyte%ORA12CR1> insert into t values ( 1, 2 );1 row created.Now, we will add an invisible column to it:ops$tkyte%ORA12CR1> alter table t add                     ( z int INVISIBLE );Table altered.Notice that a DESCRIBE will not show us this column:ops$tkyte%ORA12CR1> desc t Name              Null?    Type ----------------- -------- ------------ X                          NUMBER(38) Y                          NUMBER(38)and existing inserts are unaffected by it:ops$tkyte%ORA12CR1> insert into t values ( 3, 4 );1 row created.A SELECT * won't see it either:ops$tkyte%ORA12CR1> select * from t;         X          Y---------- ----------         1          2         3          4But we have full access to it (in well written programs! The ones that use a column list in the insert and select - never relying on "defaults":ops$tkyte%ORA12CR1> insert into t (x,y,z)                         values ( 5,6,7 );1 row created.ops$tkyte%ORA12CR1> select x, y, z from t;         X          Y          Z---------- ---------- ----------         1          2         3          4         5          6          7and when we are sure that we are ready to go with this column, we can just modify it:ops$tkyte%ORA12CR1> alter table t modify z visible;Table altered.ops$tkyte%ORA12CR1> select * from t;         X          Y          Z---------- ---------- ----------         1          2         3          4         5          6          7I will say that a better approach to this - one that is available in 11gR2 and above - would be to use editioning views (part of Edition Based Redefinition - EBR ).  
I would rather use EBR over this approach, but in an environment where EBR is not being used, or the editioning views are not in place, this will achieve much the same.Read these for information on EBR:http://www.oracle.com/technetwork/issue-archive/2010/10-jan/o10asktom-172777.htmlhttp://www.oracle.com/technetwork/issue-archive/2010/10-mar/o20asktom-098897.htmlhttp://www.oracle.com/technetwork/issue-archive/2010/10-may/o30asktom-082672.html

    Read the article

  • SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot

    - by pinaldave
    Snapshot database is one of the most interesting concepts that I have used at some places recently. Here is a quick definition of the subject from Book On Line: A Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database and can always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot’s creation. A snapshot persists until it is explicitly dropped by the database owner. If you do not know how Snapshot database work, here is a quick note on the subject. However, please refer to the official description on Book-on-Line for accuracy. Snapshot database is a read-only database created from an original database called the “source database”. This database operates at page level. When Snapshot database is created, it is produced on sparse files; in fact, it does not occupy any space (or occupies very little space) in the Operating System. When any data page is modified in the source database, that data page is copied to Snapshot database, making the sparse file size increases. When an unmodified data page is read in the Snapshot database, it actually reads the pages of the original database. In other words, the changes that happen in the source database are reflected in the Snapshot database. Let us see a simple example of Snapshot. In the following exercise, we will do a few operations. Please note that this script is for demo purposes only- there are a few considerations of CPU, DISK I/O and memory, which will be discussed in the future posts. Create Snapshot Delete Data from Original DB Restore Data from Snapshot First, let us create the first Snapshot database and observe the sparse file details. USE master GO -- Create Regular Database CREATE DATABASE RegularDB GO USE RegularDB GO -- Populate Regular Database with Sample Table CREATE TABLE FirstTable (ID INT, Value VARCHAR(10)) INSERT INTO FirstTable VALUES(1, 'First'); INSERT INTO FirstTable VALUES(2, 'Second'); INSERT INTO FirstTable VALUES(3, 'Third'); GO -- Create Snapshot Database CREATE DATABASE SnapshotDB ON (Name ='RegularDB', FileName='c:\SSDB.ss1') AS SNAPSHOT OF RegularDB; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO Now let us see the resultset for the same. Now let us do delete something from the Original DB and check the same details we checked before. -- Delete from Regular Database DELETE FROM RegularDB.dbo.FirstTable; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO When we check the details of sparse file created by Snapshot database, we will find some interesting details. The details of Regular DB remain the same. It clearly shows that when we delete data from Regular/Source DB, it copies the data pages to Snapshot database. This is the reason why the size of the snapshot DB is increased. Now let us take this small exercise to  the next level and restore our deleted data from Snapshot DB to Original Source DB. 
-- Restore Data from Snapshot Database USE master GO RESTORE DATABASE RegularDB FROM DATABASE_SNAPSHOT = 'SnapshotDB'; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Clean up DROP DATABASE [SnapshotDB]; DROP DATABASE [RegularDB]; GO Now let us check the details of the select statement and we can see that we are successful able to restore the database from Snapshot Database. We can clearly see that this is a very useful feature in case you would encounter a good business that needs it. I would like to request the readers to suggest more details if they are using this feature in their business. Also, let me know if you think it can be potentially used to achieve any tasks. Complete Script of the afore- mentioned operation for easy reference is as follows: USE master GO -- Create Regular Database CREATE DATABASE RegularDB GO USE RegularDB GO -- Populate Regular Database with Sample Table CREATE TABLE FirstTable (ID INT, Value VARCHAR(10)) INSERT INTO FirstTable VALUES(1, 'First'); INSERT INTO FirstTable VALUES(2, 'Second'); INSERT INTO FirstTable VALUES(3, 'Third'); GO -- Create Snapshot Database CREATE DATABASE SnapshotDB ON (Name ='RegularDB', FileName='c:\SSDB.ss1') AS SNAPSHOT OF RegularDB; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Delete from Regular Database DELETE FROM RegularDB.dbo.FirstTable; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Restore Data from Snapshot Database USE master GO RESTORE DATABASE RegularDB FROM DATABASE_SNAPSHOT = 'SnapshotDB'; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Clean up DROP DATABASE [SnapshotDB]; DROP DATABASE [RegularDB]; GO Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article
