Search Results

Search found 13909 results on 557 pages for 'old man'.


  • Reading input files in FORTRAN

    - by lollygagger
    Purpose: create a program that takes two separate files, opens and reads them, assigns their contents to arrays, does some math with those arrays, creates a new array with the product numbers, and prints it to a new file. Simple enough, right? My input files have comment characters at the beginning. One trouble is, they are '#', which is a comment character for most plotting programs, but not for FORTRAN. What is a simple way to tell the computer not to look at these characters? Since I have no previous FORTRAN experience, I am plowing through this with two test files. Here is what I have so far:

    ```fortran
    PROGRAM gain
      IMPLICIT NONE
      REAL, DIMENSION (1:4, 1:8) :: X, Y, Z
      OPEN(1, FILE='test.out', &
           STATUS='OLD', ACTION='READ')   ! opens the first file
      READ(1,*) X
      OPEN(2, FILE='test2.out', &
           STATUS='OLD', ACTION='READ')   ! opens the second file
      READ(2,*) Y
      PRINT*, X, Y
      Z = X*Y
      ! PRINT*, Z
      OPEN(3, FILE='test3.out', STATUS='NEW', ACTION='WRITE')   ! creates a new file
      WRITE(3,*) Z
      CLOSE(1)
      CLOSE(2)
      CLOSE(3)
    END PROGRAM
    ```

    PS: Please do not overwhelm me with a bunch of code-monkey gobbledygook. I am a total programming novice. I do not understand all the lingo; that is why I came here instead of searching for help on existing websites. Thanks.
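
    A minimal sketch of one common way to skip the leading '#' lines (assuming the comments appear only at the top of the file): read the first character of each line, and once a non-comment line is reached, step back one record with BACKSPACE and read the data as usual.

    ```fortran
    PROGRAM skip_comments
      IMPLICIT NONE
      REAL, DIMENSION (1:4, 1:8) :: X
      CHARACTER(LEN=1) :: c

      OPEN(1, FILE='test.out', STATUS='OLD', ACTION='READ')
      DO
         READ(1,'(A1)') c          ! look at the first character of the line
         IF (c /= '#') EXIT        ! first real data line reached
      END DO
      BACKSPACE(1)                 ! go back one record to that data line
      READ(1,*) X                  ! now read the array as before
      CLOSE(1)
    END PROGRAM skip_comments
    ```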

    Read the article

  • ASP.NET DynamicData: What's happening during an update?

    - by Jens A.
    I am using ASP.NET DynamicData (based on LINQ to SQL) on my site for basic scaffolding. On one table I have added additional properties, that are not stored in the table, but are retrieved from somewhere else. (Profile information for a user account, in this case). They are displayed just fine, but when editing these values and pressing "Update", they are not changed. Here's what the properties look like, the table is the standard aspnet_Users table: public String Address { get { UserProfile profile = UserProfile.GetUserProfile(UserName); return profile.Address; } set { UserProfile profile = UserProfile.GetUserProfile(UserName); profile.Address = value; profile.Save(); } } When I fired up the debugger, I've noticed that for each update the set accessor is called three times. Once with the new value, but on a newly created instance of user, then once with the old value, again on an new instance, and finally with the old value on the existing instance. Wondering a bit, I checked with the properties created by the designer, and they, too, are called three times in (almost) the same fashion. The only difference is, that the last call contains the new value for the property. I am a bit stumped here. Why three times, and why are my new properties behaving differently? I'd be grateful for any help on that matter! =)

    Read the article

  • Updating a composite primary key

    - by VBCSharp
    I am struggling with the philosophical discussions about whether or not to use composite primary keys on my SQL Server database. I have always used surrogate keys in the past, and I am challenging myself by leaving my comfort zone to try something different. I have read many discussions but can't come to any kind of conclusion yet. The struggle I am having is when I have to update a record with the composite PK. For example, the record in question looks like this: ContactID, RoleID, EffectiveDate, TerminationDT. The PK in this case is ContactID, RoleID, and EffectiveDate; TerminationDT can be null. If, in my UI, the user changes the RoleID, then I need to update the record. Using a surrogate key I can do an UPDATE Table SET RoleID = 1 WHERE surrogateID = Z. However, using the composite-key approach, once one of the fields in the composite key changes I have no way to reference the old record to update it without maintaining a reference to the old values somewhere in the UI. I do not bind data sources in my UI; I open a connection, get the data and store it in a bucket, then close the connection. What are everyone's opinions? Thanks.
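
    A sketch of what the update typically ends up looking like with a natural key (the table and parameter names here are hypothetical): the old key values captured when the row was read are what go in the WHERE clause, and only then is the key column changed.

    ```sql
    -- @Old* values were captured when the record was loaded into the UI bucket
    UPDATE ContactRole
    SET    RoleID        = @NewRoleID
    WHERE  ContactID     = @OldContactID
      AND  RoleID        = @OldRoleID
      AND  EffectiveDate = @OldEffectiveDate;
    ```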

    Read the article

  • jQuery UI dialog does not disappear

    - by iamrohitbanga
    I am using jquery-ui tabs and dialog functionality. Each tab has a button on the page which opens a dialog. This works for one of the tabs. However if I go the second tab, the button does not work there. When I come back to the first tab, the dialog does show up but the problem is I notice as I make switches back and forth to the first tab, it keeps on inserting new div's while the old div's have display:none set on them. I am doing this using JSP. This is how the reusable jsp looks like: <script> $(function() { var formData = null; $.ajax({ url : "addFormGenerator.html", success : function(data) { formData = data; $("#addFormDialog").html(data); $("#addFormDialog").dialog({ autoOpen : false, height : 300, width : 350, modal : true, buttons : { "Add" : function() { $(this).dialog("close"); }, Cancel : function() { $(this).dialog("close"); } }, close : function() { } }); } }); $("#addButton").button().click(function() { $("#addFormDialog").html(formData); $("#addFormDialog").dialog("open"); }); }); </script> <button id="addButton">Click here to Add New</button> <div id="addFormDialog" title="Add New"></div> This jsp fragment is included in other jsp pages as well. I was assuming as I switch between tabs the old button will be garbage collected. Can you help me understand the problem and fix it?
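
    A possible sketch of the kind of guard that avoids piling up hidden divs (assuming the fragment really is included once per tab): only turn #addFormDialog into a dialog if it has not been initialised yet, and rebind the button handler instead of stacking another one.

    ```javascript
    $(function() {
        var $dlg = $("#addFormDialog");

        // jQuery UI adds this class the first time .dialog() runs,
        // so a second include of the fragment will not create another dialog.
        if (!$dlg.hasClass("ui-dialog-content")) {
            $dlg.dialog({ autoOpen: false, height: 300, width: 350, modal: true });
        }

        // unbind first so switching tabs does not stack click handlers
        $("#addButton").unbind("click").click(function() {
            $dlg.dialog("open");
        });
    });
    ```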

    Read the article

  • Error: undefined method `contact' for nil:NilClass

    - by user275801
    I have a ruby/rails/hobo system that someone wrote a couple years ago, that I need to port to the latest version of ruby/rails/hobo. It seems that ruby doesn't care about backward compatibility, so code that used to work in the old app doesn't work anymore: In the observation.rb model file, the old app has this: belongs_to :survey has_one :site, :through => :survey def create_permitted? acting_user == self.survey.contact or acting_user.administrator? end survey.rb model file has this: belongs_to :contact, :class_name => 'User', :creator => true Unfortunately the code in observation.rb doesn't work under the new ruby/rails/hobo, and it gives me the error: NoMethodError in Observations#index Showing controller: observations; dryml-tag: index-page where line #1 raised: undefined method `contact' for nil:NilClass Extracted source (around line #1): 0 Rails.root: /home/simon/ruby/frogwatch2 Application Trace | Framework Trace | Full Trace app/models/observation.rb:48:in `create_permitted?' How should the "create_permitted" method be changed? I'm finding that the documentation for ruby/rails/hobo is pretty atrocious (which is fair enough as it is free software). Also I don't even know how to begin searching for this on google (i've been trying for days). Please help! :)
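
    One hedged way to make create_permitted? survive the index page, where an Observation may have no survey yet, is to guard against the nil association (a sketch only; the real fix may lie elsewhere in the Hobo permission model):

    ```ruby
    # app/models/observation.rb
    def create_permitted?
      return true if acting_user.administrator?
      survey.present? && acting_user == survey.contact   # no NoMethodError when survey is nil
    end
    ```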

    Read the article

  • Can't Add XSD to Class Project

    - by Jeff
    Background: I started with a large solution with many applications in it in VS 2008 and I'm trying to split it up. Steps to repeat:
    1. I create a new VS 2010 C# class project.
    2. I right-click and choose Add Existing Item.
    3. I choose the XSD file from my old project and import it.
    The original file is 67KB; the imported file is 18KB. Lines 134-135 of the original file:

    ```xml
    <Mapping SourceColumn="ConfigType" DataSetColumn="ConfigType" />
    <Mapping SourceColumn="ConfigValue" DataSetColumn="ConfigValue" />
    ```

    Lines 135-136 of the resulting file:

    ```xml
    <Mapping SourceColumn="ConfigType" DataSetColumn="ConfigType" />
    <Mappi
    ```

    Part way through its life my old project was upgraded from 2.0 to 3.5, so some of the code is. Manually copying and pasting the XSD source into the new file and updating the 2.0.0.0 to 4.0.0.0 allowed me to open it in the GUI for editing XSD files. After fixing all the connection strings and right-clicking on every query and clicking Configure then Finish, I was able to gain access to one of the TableAdapters out of 6. I'm stumped as to how to get this to compile. Once it compiles I'm open-sourcing it, so ask if you want to see the code.

    Read the article

  • Best way to implement some type of ITaggable interface

    - by Jack
    I've got a program I'm creating that reports on another certain programs backup xml files. I've gotten to the point where I need to implement some type of ITaggable interface - but am unsure how to go about it code wise. My idea is that each item (BackupClient, BackupVersion, and BackupFile) should implement an ITaggable interface for highlighting old, out of date, or non-existent files in their HTML or Excel report. The user will be able to specify tags in the settings. My question is this, how can a user dynamically specify a "tag" such as File Date 3 days old? - Background Color = Red. Actually I guess my question is more, how can I, the programmer, implement this dynamically? I was thinking Expression trees, but am unsure this is the way to go as I havn't studied them much. I know my ITaggable interface would have methods such as AddTag(T tag), RemoveTag(T tag), but what exactly specifies the criteria for the tag to be added? I realize this may be subjective, and can be marked as wiki if need be, but I truly am stuck. Any input would be greatly helpful!
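
    A minimal sketch of one way to let user-specified criteria drive the tagging: a tag is just a name, a predicate, and some formatting. The BackupFile type and its FileDate property below are assumptions used only for illustration.

    ```csharp
    using System;
    using System.Collections.Generic;

    public class Tag<T>
    {
        public string Name { get; set; }
        public Func<T, bool> Criteria { get; set; }   // the rule the user configured
        public string BackgroundColor { get; set; }
    }

    public interface ITaggable<T>
    {
        void AddTag(Tag<T> tag);
        void RemoveTag(Tag<T> tag);
        IEnumerable<Tag<T>> MatchingTags { get; }     // tags whose Criteria returns true for this item
    }

    // e.g. "file date 3 days old or more" => red background (BackupFile is hypothetical)
    var staleTag = new Tag<BackupFile>
    {
        Name = "Older than 3 days",
        Criteria = f => f.FileDate <= DateTime.Now.AddDays(-3),
        BackgroundColor = "Red"
    };
    ```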

    Read the article

  • Mercurial: Class library that will exist for both .NET 3.5 and 4.0?

    - by Lasse V. Karlsen
    I have a rather big class library written in .NET 3.5 that I'd like to upgrade to make available for .NET 4.0 as well. In that process, I will rip out a lot of old junk, and rewrite some code to better take advantage of the new classes and support in .NET 4.0 (like TPL.) The class libraries will thus diverge, but still be similar enough that some bug-fixes can be done to both in the same manner. How should I best organize this class library in Mercurial? I'm using Kiln (fogbugz) if that matters. I'm thinking: Named branches in one repository, can then transplant any bugfixes from one to the other Unnamed branches in one repository, can also transplant, but I think this will look messy Separate repositories, will have to reimplement the bugfixes (or use a non-mercurial-integraded compare tool to help me) What would you do? (any other alternatives that I haven't though of is welcome as well.) Note that the class libraries will diverge pretty heavily in areas, I have some remnants of old collection-type code that does something similar to Linq that I will remove, and some code that uses it that I will rewrite to use the Linq-methods instead. As such, just copying the project files and using #if NET40..#endif sections is not going to work out. Also, the 3.5 version of the class library will not be getting many new features, mostly just critical bug-fixes, so keeping both versions equally "alive" isn't really necessary. Thus, separate copies of all the files are good enough.
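
    For the named-branches option, a hedged sketch of the day-to-day commands (hg graft needs Mercurial 2.0 or newer; the older transplant extension does roughly the same job):

    ```
    hg branch net35                 # keep the .NET 3.5 line as a named branch
    hg commit -m "open 3.5 maintenance branch"

    hg branch net40                 # the diverging .NET 4.0 line
    hg commit -m "open 4.0 branch"

    # later: a bug-fix committed on net35 as revision 1234
    hg update net40
    hg graft -r 1234                # replay that changeset onto net40
    ```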

    Read the article

  • Random alphanumeric generator

    - by AAA
    Hi, I want to give our users in the database a unique alpha-numeric id. I am using the code below, will this always generate a unique id? Below is the old and updated version of the code: New php: // Generate Guid function NewGuid() { $s = strtoupper(md5(uniqid("something",true))); $guidText = substr($s,0,8) . '-' . substr($s,8,4) . '-' . substr($s,12,4). '-' . substr($s,16,4). '-' . substr($s,20); return $guidText; } // End Generate Guid $Guid = NewGuid(); echo $Guid; echo "<br><br><br>"; Old PHP: // Generate Guid function NewGuid() { $s = strtoupper(md5(uniqid("something",true))); $guidText = substr($s,0,8) . '-' . substr($s,8,4) . '-' . substr($s,12,4). '-' . substr($s,16,4). '-' . substr($s,20); return $guidText; } // End Generate Guid $Guid = NewGuid(); echo $Guid; echo "<br><br><br>"; Will this first code guarantee uniqueness?
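
    For what it's worth, md5(uniqid()) makes collisions unlikely but does not guarantee uniqueness. A common belt-and-braces pattern is a stronger random source plus a UNIQUE index on the id column, retrying on a duplicate-key error. A minimal sketch, assuming PHP 7+ for random_bytes():

    ```php
    <?php
    // Not guaranteed unique, only overwhelmingly unlikely to collide.
    // Pair it with a UNIQUE index on the id column and retry on a duplicate key.
    function newUserId() {
        return bin2hex(random_bytes(16));   // 32 hex characters of CSPRNG output
    }

    echo newUserId();
    ```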

    Read the article

  • What else must I do to allow my method to handle any type of object?

    - by NewHelpNeeder
    So, to allow objects of any type I must use generics in my code. I have rewritten this method to do so, but when I create an object, for example Milk, it won't let me pass it to my method. Either there's something wrong with my generic revision, or the Milk object I created is not good. How should I pass my object correctly and add it to the linked list? This is the method that causes an error when I insert an item:

    ```java
    public void insertFirst(T dd)            // insert at front of list
    {
        Link newLink = new Link(dd);         // make new link
        if( isEmpty() )                      // if empty list,
            last = newLink;                  // newLink <-- last
        else
            first.previous = newLink;        // newLink <-- old first
        newLink.next = first;                // newLink --> old first
        first = newLink;                     // first --> newLink
    }
    ```

    This is the class I try to insert into the linked list:

    ```java
    class Milk
    {
        String brand;
        double size;
        double price;

        Milk(String a, double b, double c)
        {
            brand = a;
            size = b;
            price = c;
        }
    }
    ```

    This is the test method that inserts the data:

    ```java
    public static void main(String[] args)
    {
        // make a new list
        DoublyLinkedList theList = new DoublyLinkedList();

        // this causes:
        // The method insertFirst(Comparable) in the type DoublyLinkedList is not applicable for the arguments (Milk)
        theList.insertFirst(new Milk("brand", 1, 2));   // insert at front

        theList.displayForward();    // display list forward
        theList.displayBackward();   // display list backward
    }  // end main()
    }  // end class DoublyLinkedApp
    ```

    Declarations:

    ```java
    class Link<T extends Comparable<T>> {}
    class DoublyLinkedList<T extends Comparable<T>> {}
    ```
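
    Since the list declares <T extends Comparable<T>>, a sketch of what Milk would need in order to be accepted (the ordering by price is just an illustrative choice):

    ```java
    class Milk implements Comparable<Milk> {
        String brand;
        double size;
        double price;

        Milk(String brand, double size, double price) {
            this.brand = brand;
            this.size = size;
            this.price = price;
        }

        @Override
        public int compareTo(Milk other) {
            return Double.compare(this.price, other.price);   // define some natural order
        }
    }

    // DoublyLinkedList<Milk> theList = new DoublyLinkedList<Milk>();
    // theList.insertFirst(new Milk("brand", 1, 2));   // now satisfies the bound
    ```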

    Read the article

  • Please help me with a PowerShell script that rearranges paths

    - by Hamish Grubijan
    Hi, I have both Sybase and MSFT SQL Servers installed. There are times when Sybase interferes with MS SQL because they have some overlapping commands. So, I need two scripts: A) When it runs, script A backs up the current path, grabs all path entries that contain sybase or SYBASE or SyBASE (you get the point) and moves them all to the very end of the path, while preserving the order. B) When it runs, script B restores the path from the backup. Both script A and script B should affect the path immediately. So, an a.bat that calls patha.ps1 and pathb.ps1 would look like this:

    ```
    @REM Old path here
    call patha.ps1
    @REM At this point the effective path should be different.
    call pathb.ps1
    @REM Effective old path again
    ```

    Please let me know if this does not make sense. I am not sure if the call command is the best one to use. I have never used PowerShell before. I can try to formulate the same thing in Python (I know S.O. users tend to ask "What have you tried so far?"). Well, at this point I am VERY slow at writing anything in the PowerShell language. Please help.
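
    A minimal sketch of what patha.ps1 / pathb.ps1 could look like. One caveat to verify: environment changes made inside a PowerShell process do not automatically propagate back to the cmd.exe that called it, so in practice both steps may need to run inside the same PowerShell session (or the batch file has to capture and re-apply the output).

    ```powershell
    # patha.ps1 -- back up the current PATH, then move every Sybase entry to the end
    #              while preserving the relative order of both groups.
    $env:PATH_BACKUP = $env:Path                                # simple in-session backup
    $parts  = $env:Path -split ';'
    $sybase = $parts | Where-Object { $_ -match 'sybase' }      # -match is case-insensitive
    $others = $parts | Where-Object { $_ -notmatch 'sybase' }
    $env:Path = ($others + $sybase) -join ';'

    # pathb.ps1 -- restore the backed-up PATH
    $env:Path = $env:PATH_BACKUP
    ```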

    Read the article

  • Select list modified on the fly doesn't fire onChange() for the new first element

    - by staremperor
    In the code below, I'm using jQuery 1.4.1 to modify the options in a select list when the user clicks on the list (replacing the single Old item with three New items). Selecting either New 2 or New 3 correctly fires the change() method (and shows the alert), but selecting "New 1" does not. What am I missing? Thanks.

    ```html
    <html>
    <head>
      <script type="text/javascript" src="js/jquery-1.4.1.min.js"></script>
      <script>
        $(document).ready(function() {
          $("#dropdown").mousedown(function() {
            $(this).empty();
            $(this).append($("<option></option>").attr("value",100).text("New 1"));
            $(this).append($("<option></option>").attr("value",200).text("New 2"));
            $(this).append($("<option></option>").attr("value",300).text("New 3"));
          });
          $("#dropdown").change(function() {
            alert($(this).val());
          });
        });
      </script>
    </head>
    <body>
      <select id="dropdown"><option value="1">Old 1</option></select>
    </body>
    </html>
    ```
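
    One likely explanation, worth verifying: after the rebuild, "New 1" is the first option and therefore already the selected value, so picking it again never changes the select's value and no change event fires. A sketch of one workaround, assuming a placeholder entry is acceptable, is to rebuild the list with a dummy first option:

    ```javascript
    $("#dropdown").mousedown(function() {
        $(this).empty();
        // placeholder becomes the pre-selected entry, so "New 1" is now a real change
        $(this).append($("<option></option>").attr("value","").text("-- choose --"));
        $(this).append($("<option></option>").attr("value",100).text("New 1"));
        $(this).append($("<option></option>").attr("value",200).text("New 2"));
        $(this).append($("<option></option>").attr("value",300).text("New 3"));
    });
    ```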

    Read the article

  • Best way to store this data?

    - by Malfist
    I have just been assigned to renovate an old website, and I get to move it from some archaic system to Drupal. The only problem is that it's a real-estate system and a lot of data is stored. Currently all the information is stored in a single table: an id represents the house, and everything else is key/value pairs. There are a possible 243 keys per estate, and there are 23,840 estates in the system. As you can imagine the system is slow and difficult to query. I don't think a table with 243 columns would be a very good idea, and probably worse than the current situation. I've done some investigating and here's what I've found out:
    - Missing data does not indicate a 0 value; data is merged from two unique sources/formats, so some guessing is involved. I have no control over the source of the data.
    - There are 4 keys that are common to all estates; all of their values look like something that is commonly searched for and could be indexed.
    - There are 10 keys in the [90-100)% range. 8 of these are information like who's selling it and its address. The other two seem to belong with the range below.
    - There are 80 keys in the [80-90)% range. This range seems to mostly list room types and how many the house has (e.g. bedrooms_possible, bathrooms, family_room_3rd, etc). It also includes some minor information like school districts and one or two more pieces of data on the address.
    - The 179 keys in the [0-80)% range include all sorts of miscellaneous information about the estate.
    My best idea was a hybrid approach: create a table that stores the important, common information and keep a smaller key/value table. How would you store this information?
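
    A rough sketch of that hybrid layout (the column names are made up for illustration): the handful of near-universal, searchable keys become real indexed columns, and the long tail stays key/value.

    ```sql
    -- Core table: the keys present on (almost) every estate, indexed and queryable.
    CREATE TABLE estate (
      estate_id       INT UNSIGNED PRIMARY KEY,
      price           DECIMAL(12,2),
      address         VARCHAR(255),
      seller          VARCHAR(255),
      school_district VARCHAR(100),
      INDEX idx_price (price),
      INDEX idx_school (school_district)
    );

    -- Everything else stays key/value, one row per attribute.
    CREATE TABLE estate_attribute (
      estate_id  INT UNSIGNED NOT NULL,
      attr_key   VARCHAR(64)  NOT NULL,
      attr_value TEXT,
      PRIMARY KEY (estate_id, attr_key),
      FOREIGN KEY (estate_id) REFERENCES estate (estate_id)
    );
    ```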

    Read the article

  • Replacing mysql user authentication with openid

    - by David
    So, I'm working with a really old system which uses a person's mysql database credentials to authenticate to a web site (the database was originally only accessed from the command line, but is now accessed from a php frontend). Because of some internal reasons (and to preserve the user's history), I have to leave the old authentication intact. I've been charged with adding openid authentication to this system. Somehow I need to be able to retrieve a users mysql username and password upon logging into the site through openid (using the Zend framework, by the way). I've thought of simply requiring registration at the first login, where the user must provide their mysql credentials, but I'd rather not store the password plain text. I've also considered blanking everyone's mysql passwords, and just setting the user's mysql username manually (rather than having the user provide this, since they could provide any username). This is turning into a security nightmare. Does anyone have any suggestions for alternatives? This is running on a Linux server, by the way. Also, I can't use mysql pluggable authentication because the mysql version is 5.0 (pluggable authentication requires mysql 5.5), and no, I can't update it.

    Read the article

  • Pointers to a typed variable in C#: Interface, Generic, object or Class? (Boxing/Unboxing)

    - by PaulG
    First of all, I apologize if this has been asked a thousand times. I read my C# book, I googled it, but I can't seem to find the answer I am looking for, or I am missing the point big time. I am very confused with the whole boxing/unboxing issue. Say I have fields of different classes, all returning typed variables (e.g. 'double') and I would like to have a variable point to any of these fields. In plain old C I would do something like: double * newVar; newVar = &oldVar; newVar = &anotherVar; ... In C#, it seems I could do an interfase, but would require that all fields be properties and named the same. Breaks apart when one of the properties doesn't have the same name or is not a property. I could also create a generic class returning double, but seems a bit absurd to create a class to represent a 'double', when a 'double' class already exists. If I am not mistaken, it doesn't even need to be generic, could be a simple class returning double. I could create an object and box the typed variable to the newly created object, but then I would have to cast every time I use it. Of course, I always have the unsafe option... but afraid of getting to unknown memory space, divide by zero and bring an end to this world. None of these seem to be the same as the old simple 'double * variable'. Am I missing something here?
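
    A small sketch of the delegate route, which keeps everything strongly typed and involves no boxing; objA/objB and their properties are made-up placeholders:

    ```csharp
    // "Pointer" = a pair of delegates that read and write whatever the current target is.
    Func<double>   read  = () => objA.SomeValue;
    Action<double> write = v  => objA.SomeValue = v;

    // re-point it later, much like newVar = &anotherVar in C
    read  = () => objB.OtherValue;
    write = v  => objB.OtherValue = v;

    double current = read();   // no casts, no boxing
    write(current * 2.0);
    ```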

    Read the article

  • How is covariance cooler than polymorphism...and not redundant?

    - by P.Brian.Mackey
    .NET 4 introduces covariance. I guess it is useful. After all, MS went through all the trouble of adding it to the C# language. But, why is Covariance more useful than good old polymorphism? I wrote this example to understand why I should implement Covariance, but I still don't get it. Please enlighten me. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Sample { class Demo { public delegate void ContraAction<in T>(T a); public interface IContainer<out T> { T GetItem(); void Do(ContraAction<T> action); } public class Container<T> : IContainer<T> { private T item; public Container(T item) { this.item = item; } public T GetItem() { return item; } public void Do(ContraAction<T> action) { action(item); } } public class Shape { public void Draw() { Console.WriteLine("Shape Drawn"); } } public class Circle:Shape { public void DrawCircle() { Console.WriteLine("Circle Drawn"); } } public static void Main() { Circle circle = new Circle(); IContainer<Shape> container = new Container<Circle>(circle); container.Do(s => s.Draw());//calls shape //Old school polymorphism...how is this not the same thing? Shape shape = new Circle(); shape.Draw(); } } }

    Read the article

  • Migrating just the article content of Joomla 1.0 to 2.5.x / 3.x?

    - by user2919408
    I have a simple website using Joomla 1.0.15 that just has articles in some categories. When I try to install or remove components from the admin area, I get "You are not authorised to view this resource" or something like that. This is unusual; the site is about 5 years old and has never shown an error message like that. Has my website been hacked? I have set safe_mode = off in php.ini, turned off sh404sef, removed the .htaccess file, etc., and it still does not work. I then tried to upgrade to Joomla 2.5.x / 3.x. I found that I must migrate to Joomla 1.5.x first, and then from there to 2.5.x. I have a problem installing the "migration.zip" component in my Joomla 1.0.x (an alert/error pop-up is always shown). Is there another way to migrate the website? Maybe just extract the article sections, categories, article ids and content from Joomla 1.0.x, then import them into Joomla 2.5.x / 3.x? I don't need the components, modules, or mambots (if any) of the old site. How can I do it? Thanks

    Read the article

  • How are the clients (client sockets) identified?

    - by Roman
    To my understanding by serverSocket = new ServerSocket(portNumber) we create an object which potentially can "listen" to the indicated port. By clientSocket = serverSocket.accept() we force the server socket to "listen" to its port and to "accept" a connection from any client which tries to connect to the server through the port associated with the server. When I say "client tries to connect to the server" I mean that client program executes "nameSocket = new Socket(serverIP,serverPort)". If client is trying to connect to the server, the server "accepts" this client (i.e. creates a "client socket" associated with this client). If a new client tries to connect to the server, the server creates another client socket (associated with the new client). But how the server knows if it is a "new" client or an "old" one which has already its socket? Or, in other words, how the clients are identified? By their IP? By their IP and port? By some "signatures"? What happens if an "old" client tries to use Socket(serverIP,serverIP) again? Will server create the second socket associated with this client?
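
    A small sketch of what the server sees with the standard java.net API: each accept() hands back a distinct Socket, and the remote address and port of that connection are what distinguish one connection from another (the OS keys the connection on the client IP, client port, server IP, server port tuple). So an "old" client calling new Socket(serverIP, serverPort) again simply opens a new connection from a new local port, and the server creates a second client socket for it.

    ```java
    import java.net.ServerSocket;
    import java.net.Socket;

    public class WhoIsThis {
        public static void main(String[] args) throws Exception {
            ServerSocket serverSocket = new ServerSocket(4444);
            while (true) {
                Socket clientSocket = serverSocket.accept();          // one Socket per connection
                System.out.println("connection from "
                        + clientSocket.getInetAddress().getHostAddress()
                        + ":" + clientSocket.getPort());              // remote IP and remote port
            }
        }
    }
    ```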

    Read the article

  • Is there a command-line tool to extract typedefs, structures, enumerations, variables, and functions from a C or C++ file?

    - by FooF
    I am desiring a command-line tool to extract a definition or declaration (typedef, structure, enumeration, variable, or function) from a C or C++ source file. Also a way to replace an existing definition/declaration would be handy (after transforming the extracted definition by a user-submitted script). Is there such generic tool available, or is some resonably close approximation of such a tool? Scriptability and ability to hook-up with user created scripts or programs is of importance here, although I am academically curious of GUI programs too. Open source solutions for Unix/Linux camp are preferred (although I am curious of Windows and OS X tools too). Primary language interests are C and C++ but more generic solution would be even better (I think we do not need super accurate parsing capabilities for finding, extracting and replacing a definition in a program source file). Sample Use Cases (extra - for the curious mind): Given deeply nested structs and variable (array) initializations of these types, suppose there is a need to change a struct definition by adding or reordering fields or rewriting the variable/array definitions in more readable format without introducing errors resulting from manual labor. This would work by extracting the old initializations, then using a script/program to write the new initializations to replace the old ones. For implementing a code browsing tool - extract a definition. Decorative code generation (e.g. logging function entries / returns). Scripted code structuring (e.g. extract this and that thing and put in different place without change - version control commit comment could document the command to perform this operation to make it evident and verifiable that nothing changed). Note about tags: More accurate tag than code-generation would be code-transformation but it does not exist.

    Read the article

  • Rails 3 upgrade: will_paginate wrong number of arguments (2 for 1)

    - by user1452541
    I am in the process of upgrading my rails app from 2.3.5 to 3.2.5 on ruby 1.9.3. In the old app I was using the will_paginate plugin, which I have converted to a gem. Now after the upgrade I am getting the following error : wrong number of arguments (2 for 1) A few lines from application trace: Application Trace | Framework Trace | Full Trace will_paginate (3.0.3) lib/will_paginate/active_record.rb:124:in `paginate' app/models/activity.rb:28:in `dashboard_activities' app/controllers/dashboard_controller.rb:10:in `index' actionpack (3.2.5) lib/action_controller/metal/implicit_render.rb:4:in `send_action' actionpack (3.2.5) l I believe the issue is in the old code in the activity Model where I am using pagination. Can anyone help? The code: def dashboard_activities(page, total_records, date_range1 = nil, date_range2 = nil ) unless date_range2.nil? x =[ "is_delete = false AND status = 'open' AND date(due_date) between ? and ?", date_range1, date_range2] else x =[ "is_delete = false AND status = 'open' AND date(due_date) = ? ", date_range1] end paginate(:all, :page =>page, :per_page =>total_records, :conditions => x, :order =>"due_date asc") end
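
    A hedged sketch of the same method rewritten for will_paginate 3.x, which no longer accepts the old finder-style paginate(:all, :conditions => ...) call and instead expects an ActiveRecord relation plus only the pagination options:

    ```ruby
    def self.dashboard_activities(page, total_records, date_range1 = nil, date_range2 = nil)
      scope = where(:is_delete => false, :status => 'open')
      scope = if date_range2
                scope.where('date(due_date) BETWEEN ? AND ?', date_range1, date_range2)
              else
                scope.where('date(due_date) = ?', date_range1)
              end
      # paginate now takes a single options hash, hence "wrong number of arguments (2 for 1)"
      scope.order('due_date asc').paginate(:page => page, :per_page => total_records)
    end
    ```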

    Read the article

  • Unexpected '{' with IF statement

    - by user1926208
    This is my code to move all the files in a directory that are 1 hour old or more to another directory:

    ```php
    <?php
    $srcDir = 'code';
    $destDir = 'code/old';

    if (file_exists($destDir)) {
        if (is_dir($destDir)) {
            if (is_writable($destDir)) {
                if ($handle = opendir($srcDir)) {
                    while (false !== ($file = readdir($handle))) {
                        if (is_file($srcDir . '/' . $file)) {
                            if(date("U",filectime($srcDir . '/' . $file) >= time() - 3600) {
                                rename($srcDir . '/' . $file, $destDir . '/' . $file);
                            }
                        }
                    }
                    closedir($handle);
                } else {
                    echo "$srcDir could not be opened.\n";
                }
            } else {
                echo "$destDir is not writable!\n";
            }
        } else {
            echo "$destDir is not a directory!\n";
        }
    } else {
        echo "$destDir does not exist\n";
    }
    ?>
    ```

    and I am getting this error: Parse error: syntax error, unexpected '{' in /home/tcity/public_html/myDir/movefiles.php on line 11
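
    For what it's worth, the unexpected '{' usually comes from an unbalanced parenthesis on that inner if: the date()/filectime() call is never closed before the block starts. A sketch of the corrected check, noting that filectime() already returns a Unix timestamp and that "an hour old or more" means the file time should be at most time() - 3600:

    ```php
    // balanced parentheses, and the comparison flipped to match "1 hour old or more"
    if (filectime($srcDir . '/' . $file) <= time() - 3600) {
        rename($srcDir . '/' . $file, $destDir . '/' . $file);
    }
    ```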

    Read the article

  • Can a PC with a 1.2 GHz dual core run VS 2013? [on hold]

    - by moo2
    Edit: This question is about a tool used for programming, Microsoft Visual Studio 2013. I already read the minimum requirements for VS 2013 before I posted or I wouldn't have asked the question. I searched on the web for an answer for a few hours before posting. Why should I go out and spend $400 on a laptop if I can spend less than $100 and use it to learn how to program? In the context of the question it is clear I was not asking for advice on what laptop to buy and it was also clear I was not asking for someone to walk me through the system requirements of VS 2013. This is a vast community and I was wondering if someone in this vast community would have experience in dealing with my question. In other words has anyone ever tried anything like it? If I had an old computer sitting around I would have tested it. At the moment all I have is a friend's Chromebook I'm borrowing. End Edit. I know the requirements for Microsoft Visual Studio 2013 state 1.6 ghz processor. I'm looking at getting a laptop for school and I found one that is old but cheap and would work for college. The Dell D430 I'm looking at getting has a 1.2ghz Core 2 Duo CPU, 80GB HD, 2GB RAM, and Windows 7 Home Premium. It's refurbished. I can get it for less than most phone's cost and from a Microsoft Authorized Refurbisher. I know it's not a skyrim rig but it's lightweight and can handle being a college laptop and doing word processing. Would Visual Studio 2013 run at all? If it is slower I'm not concerned. I just want to know if it would work and compile my assignments and run them? I'd be using this laptop for doing assignments and learning programming languages not for coding the next social media sensation.

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there has been several reports on TFS databases growing too fast and growing too big.  Notable this has been observed when one has started to use more features of the Testing system.  Also, the TFS 2010 handles test results differently from TFS 2008, and this leads to more data stored in the TFS databases. As a consequence of this there has been released some tools to remove unneeded data in the database, and also some fixes to correct for bugs which has been found and corrected during this process.  Further some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among these are: Anu’s very important blog post here describes both the problem and solutions to handle it.  She describes both the Test Attachment Cleaner tool, and also some QFE/CU releases to fix some underlying bugs which prevented the tool from being fully effective. Brian Harry’s blog post here describes the problem too This forum thread describes the problem with some solution hints. Ravi Shanker’s blog post here describes best practices on solving this (TBP) Grant Holidays blogpost here describes strategies to use the Test Attachment Cleaner both to detect space problems and how to rectify them.   The problem can be divided into the following areas: Publishing of test results from builds Publishing of manual test results and their attachments in particular Publishing of deployment binaries for use during a test run Bugs in SQL server preventing total cleanup of data (All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run.  Some of this data can grow rather large, like IntelliTrace logs and video recordings.   Also the pushing of binaries which happen for automated test runs, including tests run during a build using code coverage which will include all the files in the deployment folder, contributes a lot to the size of the attached data.   In order to handle this systematically, I have set up a 3-stage process: Find out if you have a database space issue Set up your TFS server to minimize potential database issues If you have the “problem”, clean up the database and otherwise keep it clean   Analyze the data Are your database( s) growing ?  Are unused test results growing out of proportion ? To find out about this you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some  more detailed information. If you don’t have too many databases you can use the SQL Server reports from within the Management Studio to analyze the database and table sizes. Or, you can use a set of queries . I find queries often faster to use because I can tweak them the way I want them.  But be aware that these queries are non-documented and non-supported and may change when the product team wants to change them. If you have multiple Project Collections, find out which might have problems: (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure have been changed.  I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS Databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.  
use master select DB_NAME(database_id) AS DBName, (size/128) SizeInMB FROM sys.master_files where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration' order by size desc Doing this on one of our SQL servers gives the following results: It is pretty easy to see on which collection to start the work   Find out which tables are possibly too large Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant’s blog to find the table sizes in descending size order. In our case we got this result: From Grant’s blog we learnt that the tbl_Content is in the Version Control category, so the major only big issue we have here is the tbl_AttachmentContent.   Find out which team projects have possibly too large attachments In order to use the TAC to find and eventually delete attachment data we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replace the collection database name with whatever applies in your case:   use Tfs_DefaultCollection select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by p.projectname order by sum(a.compressedlength) desc In our case we got this result (had to remove some names), out of more than 100 team projects accumulated over quite some years: As can be seen here it is pretty obvious the “Byggtjeneste – Projects” are the main team project to take care of, with the ones on lines 2-4 as the next ones.  Check which attachment types takes up the most space It can be nice to know which attachment types takes up the space, so run the following query: use Tfs_DefaultCollection select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by a.attachmenttype order by sum(a.compressedlength) desc We then got this result: From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu’s blog. Check which file types, by their extension, takes up the most space Run the following query use Tfs_DefaultCollection select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)as Extension, sum(compressedlength)/1024 as SizeInKB from tbl_Attachment group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) order by sum(compressedlength) desc This gives a result like this:   Now you should have collected enough information to tell you what to do – if you got to do something, and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.    Get your TFS server and environment properly set up Even if you have got the problem or if have yet not got the problem, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized.  To ensure this you should install the following set of updates and components. The assumption is that your TFS Server is at SP1 level. Install the QFE for KB2608743 – which also contains detailed instructions on its use, download from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs. 
Binaries will still be uploaded if: Code coverage is enabled in the test settings. You change the UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user which haven't installed this QFE. The hotfix should be installed to The build servers (the build agents) The machine hosting the Test Controller Local development computers (Visual Studio) Local test computers (MTM) It is not required to install it to the TFS Server, test agents or the build controller – it has no effect on these programs. If you use the SQL Server 2008 R2 you should also install the CU 10 (or later).  This CU fixes a potential problem of hanging “ghost” files.  This seems to happen only in certain trigger situations, but to ensure it doesn’t bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2 Work around:  If you suspect hanging ghost files, they can be – with some mental effort, deduced from the ghost counters using the following SQL query: use master SELECT DB_NAME(database_id) as 'database',OBJECT_NAME(object_id) as 'objectname', index_type_desc,ghost_record_count,version_ghost_record_count,record_count,avg_record_size_in_bytes FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL , 'DETAILED') The problem is a stalled ghost cleanup process.  Restarting the SQL server after having stopped all components that depends on it, like the TFS Server and SPS services – that is all applications that connect to the SQL server. Then restart the SQL server, and finally start up all dependent processes again.  (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1.  The R2 pre-SP1 and R2 SP1 have separate maintenance cycles, and are maintained individually. Each have its own set of CU’s. When it comes I will add the link here to that CU. The "hanging ghost file” issue came up after one have run the TAC, and deleted enourmes amount of data.  The SQL Server can get into this hanging state (without the QFE) in certain cases due to this. And of course, install and set up the Test Attachment Cleaner command line power tool.  This should be done following some guidelines from Ravi Shanker: “When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed. “ This rule minimizes the risk of the ghosted hang problem to occur, and further makes it easier for the SQL server ghosting process to work smoothly. “Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system” This is the last step in a 3 step process of removing SQL server data. First they are logically deleted. Then they are cleaned out by the ghosting process, and finally removed using the shrinkdb command. Cleaning out the attachments The TAC is run from the command line using a set of parameters and controlled by a settingsfile.  The parameters point out a server uri including the team project collection and also point at a specific team project. So in order to run this for multiple team projects regularly one has to set up a script to run the TAC multiple times, once for each team project.  
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means much more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days , as long as there are no active workitems connected to them. This setting can be useful to clean out all items, both in a clean-up once operation, and in a general There are two scenarios we need to consider: Cleaning up an existing overgrown database Maintaining a server to avoid an overgrown database using scheduled TAC   1. Cleaning up a database which has grown too big due to these attachments. This job is a “Once” job.  We do this once and then move on to make sure it won’t happen again, by taking the actions in 2) below.  In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and don’t bother about  the smaller stuff. That can be left a scheduled TAC cleanup ( 2 below). Here you can use a very general settings file, and just remove the large attachments, or you can choose to remove any old items.  Grant’s settings file is an example of the last one.  A settings file to remove only large attachments could look like this: <!-- Scenario : Remove large files --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> </Attachment> </DeletionCriteria> Or like this: If you want only to remove dll’s and pdb’s about that size, add an Extensions-section.  Without that section, all extensions will be deleted. <!-- Scenario : Remove large files of type dll's and pdb's --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="dll" /> <Include value="pdb" /> </Extensions> </Attachment> </DeletionCriteria> Before you start up your scheduled maintenance, you should clear out all older items. 2. Scheduled maintenance using the TAC If you run a schedule every night, and remove old items, and also remove them in small batches.  It is important to run this often, like every night, in order to keep the number of deleted items low. That way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let’s say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs.  Doing this every night ensures that only small amounts of data is deleted. <!-- Scenario : Remove old items except if they have active or resolved bugs --> <DeletionCriteria> <TestRun> <AgeInDays OlderThan="180" /> </TestRun> <Attachment /> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved"/> </LinkedBugs> </DeletionCriteria> In my experience there are projects which are left with active or resolved workitems, akthough no further work is done.  It can be wise to have a cleanup process with no restrictions on linked bugs at all. Note that you then have to remove the whole LinkedBugs section. A approach which could work better here is to do a two step approach, use the schedule above to with no LinkedBugs as a sweeper cleaning task taking away all data older than you could care about.  Then have another scheduled TAC task to take out more specifically attachments that you are not likely to use. 
This task could be much more specific, and based on your analysis clean out what you know is troublesome data. <!-- Scenario : Remove specific files early --> <DeletionCriteria> <TestRun > <AgeInDays OlderThan="30" /> </TestRun> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="iTrace"/> <Include value="dll"/> <Include value="pdb"/> <Include value="wmv"/> </Extensions> </Attachment> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved" /> </LinkedBugs> </DeletionCriteria> The readme document for the TAC says that it recognizes “internal” extensions, but it does recognize any extension. To run the tool do the following command: tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete   Shrinking the database You could run a shrink database command after the TAC has run in cases where there are a lot of data being deleted.  In this case you SHOULD do it, to free up all that space.  But, after the shrink operation you should do a rebuild indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild indexes, reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL server when it need to add more records.  In fact, it is regarded as a bad practice to shrink the database regularly.  So on a daily maintenance schedule you should NOT shrink the database. To shrink the database you do a DBCC SHRINKDATABASE command, and then follow up with a DBCC INDEXDEFRAG afterwards.  I find the easiest way to do this is to create a SQL Maintenance plan including the Shrink Database Task and the Rebuild Index Task and just execute it when you need to do this.

    Read the article

  • You should NOT be writing jQuery in SharePoint if…

    - by Mark Rackley
    Yes… another one of these posts. What can I say? I’m a pot stirrer.. a rabble rouser *rabble rabble* jQuery in SharePoint seems to be a fairly polarizing issue with one side thinking it is the most awesome thing since Princess Leia as the slave girl in Return of the Jedi and the other half thinking it is the worst idea since Mannequin 2: On the Move. The correct answer is OF COURSE “it depends”. But what are those deciding factors that make jQuery an awesome fit or leave a bad taste in your mouth? Let’s see if I can drive the discussion here with some polarizing comments of my own… I know some of you are getting ready to leave your comments even now before reading the rest of the blog, which is great! Iron sharpens iron… These discussions hopefully open us up to understanding the entire process better and think about things in a different way. You should not be writing jQuery in SharePoint if you are not a developer… Let’s start off with my most polarizing and rant filled portion of the blog post. If you don’t know what you are doing or you don’t have a background that helps you understand the implications of what you are writing then you should not be writing jQuery in SharePoint! I truly believe that one of the biggest reasons for the jQuery haters is because of all the bad jQuery out there. If you don’t know what you are doing you can do some NASTY things! One of the best stories I’ve heard about this is from my good friend John Ferringer (@ferringer). John tells this story during our Mythbusters session we do together. One of his clients was undergoing a Denial of Service attack and they couldn’t figure out what was going on! After much searching they found that some genius jQuery developer wrote some code for an image rotator, but did not take into account what happens when there are no images to load! The code just kept hitting the servers over and over and over again which prevented anything else from getting done! Now, I’m NOT saying that I have not done the same sort of thing in the past or am immune from such mistakes. My point is that if you don’t know what you are doing, there are very REAL consequences that can have a major impact on your organization AND they will be hard to track down.  Think how happy your boss will be after you copy and pasted some jQuery from a blog without understanding what it does, it brings down the farm, AND it takes them 3 days to track it back to you.  :/ Good times will not be had. Like it or not JavaScript/jQuery is a programming language. While you .NET people sit on your high horses because your code is compiled and “runs faster” (also debatable), the rest of us will be actually getting work done and delivering solutions while you are trying to figure out why your widget won’t deploy. I can pick at that scab because I write .NET code too and speak from experience. I can do both, and do both well. So, I am not speaking from ignorance here. In JavaScript/jQuery you have variables, loops, conditionals, functions, arrays, events, and built in methods. If you are not a developer you just aren’t going to take advantage of all of that and use it correctly. Ahhh.. but there is hope! There is a lot of jQuery resources out there to help you learn and learn well! There are many experts on the subject that will gladly tell you when you are smoking crack. I just this minute saw a tweet from @cquick with a link to: “jQuery Fundamentals”. I just glanced through it and this may be a great primer for you aspiring jQuery devs. 
Take advantage of all the resources and become a developer! Hey, it will look awesome on your resume right? You should not be writing jQuery in SharePoint if it depends too much on client resources for a good user experience I’ve said it once and I’ll say it over and over until you understand. jQuery is executed on the client’s computer. Got it? If you are looping through hundreds of rows of data, searching through an enormous DOM, or performing many calculations it is going to take some time! AND if your user happens to be sitting on some old PC somewhere that they picked up at a garage sale their experience will be that much worse! If you can’t give the user a good experience they will not use the site. So, if jQuery is causing the user to have a bad experience, don’t use it. I sometimes go as far to say that you should NOT go to jQuery as a first option for external facing web sites because you have ZERO control over what the end user’s computer will be. You just can’t guarantee an awesome user experience all of the time. Ahhh… but you have no choice? (where have I heard that before?). Well… if you really have no choice, here are some tips to help improve the experience: Avoid screen scraping This is not 1999 and SharePoint is not an old green screen from a mainframe… so why are you treating it like it is? Screen scraping is time consuming and client intensive. Take advantage of tools like SPServices to do your data retrieval when possible. Fine tune your DOM searches A lot of time can be eaten up just searching the DOM and ignoring table rows that you don’t need. Write better jQuery to only loop through tables rows that you need, or only access specific elements you need. Take advantage of Element ID’s to return the one element you are looking for instead of looping through all the DOM over and over again. Write better jQuery Remember this is development. Think about how you can write cleaner, faster jQuery. This directly relates to the previous point of improving your DOM searches, but also when using arrays, variables and loops. Do you REALLY need to loop through that array 3 times? How can you knock it down to 2 times or even 1? When you have lots of calculations and data that you are manipulating every operation adds up. Think about how you can streamline it. Back in the old days before RAM was abundant, Cores were plentiful and dinosaurs roamed the earth, us developers had to take performance into account in everything we did. It’s a lost art that really needs to be used here. You should not be writing jQuery in SharePoint if you are sending a lot of data over the wire… Developer:  “Awesome… you can easily call SharePoint’s web services to retrieve and write data using SPServices!” Administrator: “Crap! you can easily call SharePoint’s web services to retrieve and write data using SPServices!” SPServices may indeed be the best thing that happened to SharePoint since the invention of SharePoint Saturdays by Godfather Lotter… BUT you HAVE to use it wisely! (I REFUSE to make the Spiderman reference). If you do not know what you are doing your code will bring back EVERY field and EVERY row from a list and push that over the internet with all that lovely XML wrapped around it. That can be a HUGE amount of data and will GREATLY impact performance! Calling several web service methods at the same time can cause the same problem and can negatively impact your SharePoint servers. 
These problems, thankfully, are not difficult to rectify if you are careful: Limit list data retrieved Use CAML to reduce the number of rows returned and limit the fields returned using ViewFields.  You should definitely be doing this regardless. If you aren’t I hope your admin thumps you upside the head. Batch large list updates You may or may not have noticed that if you try to do large updates (hundreds of rows) that the performance is either completely abysmal or it fails over half the time. You can greatly improve performance and avoid timeouts by breaking up your updates into several smaller updates. I don’t know if there is a magic number for best performance, it really depends on how much data you are sending back more than the number of rows. However, I have found that 200 rows generally works well.  Play around and find the right number for your situation. Delay Web Service calls when possible One of the cool things about jQuery and SPServices is that you can delay queries to the server until they are actually needed instead of doing them all at once. This can lead to performance improvements over DataViewWebParts and even .NET code in the right situations. So, don’t load the data until it’s needed. In some instances you may not need to retrieve the data at all, so why retrieve it ALL the time? You should not be writing jQuery in SharePoint if there is a better solution… jQuery is NOT the silver bullet in SharePoint, it is not the answer to every question, it is just another tool in the developers toolkit. I urge all developers to know what options exist out there and choose the right one! Sometimes it will be jQuery, sometimes it will be .NET,  sometimes it will be XSL, and sometimes it will be some other choice… So, when is there a better solution to jQuery? When you can’t get away from performance problems Sometimes jQuery will just give you horrible performance regardless of what you do because of unavoidable obstacles. In these situations you are going to have to figure out an alternative. Can I do it with a DVWP or do I have to crack open Visual Studio? When you need to do something that jQuery can’t do There are lots of things you can’t do in jQuery like elevate privileges, event handlers, workflows, or interact with back end systems that have no web service interface. It just can’t do everything. When it can be done faster and more efficiently another way Why are you spending time to write jQuery to do a DataViewWebPart that would take 5 minutes? Or why are you trying to implement complicated logic that would be simple to do in .NET? If your answer is that you don’t have the option, okay. BUT if you do have the option don’t reinvent the wheel! Take advantage of the other tools. The answer is not always jQuery… sorry… the kool-aid tastes good, but sweet tea is pretty awesome too. You should not be using jQuery in SharePoint if you are a moron… Let’s finish up the blog on a high note… Yes.. it’s true, I sometimes type things just to get a reaction… guess this section title might be a good example, but it feels good sometimes just to type the words that a lot of us think… So.. don’t be that guy! Another good buddy of mine that works for Microsoft told me. “I loved jQuery in SharePoint…. until I had to support it.”. He went on to explain that some user was making several web service calls on a page using jQuery and then was calling Microsoft and COMPLAINING because the page took so long to load… DUH! 
What do you expect to happen when you are pushing that much data over the wire and are making that many web service calls at once!! It’s one thing to write that kind of code and accept it’s just going to take a while, it’s COMPLETELY another issue to do that and then complain when it’s not lightning fast!  Someone’s gene pool needs some chlorine. So, I think this is a nice summary of the blog… DON’T be that guy… don’t be a moron. How can you stop yourself from being a moron? Ah.. glad you asked, here are some tips: Think Is jQuery the right solution to my problem? Is there a better approach? What are the implications and pitfalls of using jQuery in this situation? Search What are others doing? Does someone have a better solution? Is there a third party library that does the same thing I need? Plan Write good jQuery. Limit calculations and data sent over the wire and don’t reinvent the wheel when possible. Test Okay, it works well on your machine. Try it on others ESPECIALLY if this is for an external site. Test with empty data. Test with hundreds of rows of data. Test as many scenarios as possible. Monitor those server resources to see the impact there as well. Ask the experts As smart as you are, there are people smarter than you. Even the experts talk to each other to make sure they aren't doing something stupid. And for the MOST part they are pretty nice guys. Marc Anderson and Christophe Humbert are two guys who regularly keep me in line. Make sure you aren’t doing something stupid. Repeat So, when you think you have the best solution possible, repeat the steps above just to be safe.  Conclusion jQuery is an awesome tool and has come in handy on many occasions. I’m even teaching a 1/2 day SharePoint & jQuery workshop at the upcoming SPTechCon in Boston if you want to berate me in person. However, it’s only as awesome as the developer behind the keyboard. It IS development and has its pitfalls. Knowledge and experience are invaluable to giving the user the best experience possible.  Let’s face it, in the end, no matter our opinions, prejudices, or ego providing our clients, customers, and users with the best solution possible is what counts. Period… end of sentence…

    Read the article

  • CBO cardinality estimation and frequency histograms

    - by Liu Maclean
    A question about CBO cardinality estimation came up on the ITPub forum; the test case boils down to the following:

SQL> create table maclean1 as select * from dba_objects;
Table created.

SQL> update maclean1 set status='INVALID' where owner='MACLEAN';
2 rows updated.

SQL> commit;
Commit complete.

SQL> create index ind_maclean1 on maclean1(status);
Index created.

SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN1',cascade=>true);
PL/SQL procedure successfully completed.

SQL> explain plan for select * from maclean1 where status='INVALID';
Explained.

SQL> set linesize 140 pagesize 1400
SQL> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------
Plan hash value: 987568083

------------------------------------------------------------------------------
| Id | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |          | 11320 | 1028K |    85   (0)| 00:00:02 |
|* 1 |  TABLE ACCESS FULL| MACLEAN1 | 11320 | 1028K |    85   (0)| 00:00:02 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("STATUS"='INVALID')

13 rows selected.

The 10053 trace for the statement shows:

Access path analysis for MACLEAN1
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for MACLEAN1[MACLEAN1]
  Column (#10): STATUS(
    AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.500000
  Table: MACLEAN1  Alias: MACLEAN1
    Card: Original: 22639.000000  Rounded: 11320  Computed: 11319.50  Non Adjusted: 11319.50
  Access Path: TableScan
    Cost:  85.33  Resp: 85.33  Degree: 0
      Cost_io: 85.00  Cost_cpu: 11935345
      Resp_io: 85.00  Resp_cpu: 11935345
  Access Path: index (AllEqRange)
    Index: IND_MACLEAN1
    resc_io: 185.00  resc_cpu: 8449916
    ix_sel: 0.500000  ix_sel_with_filters: 0.500000
    Cost: 185.24  Resp: 185.24  Degree: 1
  Best:: AccessPath: TableScan
         Cost: 85.33  Degree: 1  Resp: 85.33  Card: 11319.50  Bytes: 0

The trace makes the problem clear: with no histogram on STATUS, the optimizer falls back to Density = 1/NDV = 0.5 and assumes the two distinct values are evenly distributed, so the estimated cardinality is 0.5 * 22639, rounded to 11320. In reality the predicate "STATUS"='INVALID' matches only 2 rows; the column is heavily skewed, but because dbms_stats collected no histogram on STATUS, the CBO never finds the INDEX RANGE SCAN on ind_maclean1 attractive and picks the full table scan. The skew has misled the optimizer.

Let's repeat the test, this time gathering a histogram, and see whether the cardinality estimate for status='INVALID' becomes accurate. The test environment:

[oracle@vrh4 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Mon Oct 17 19:15:45 2011
Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL> show parameter optimizer_fea

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_features_enable            string      11.2.0.2

SQL> select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
www.oracledatabase12g.com & www.askmaclean.com

SQL> drop table maclean;
Table dropped.
SQL> create table maclean as select * from dba_objects;
Table created.

SQL> update maclean set status='INVALID' where owner='MACLEAN';
2 rows updated.

SQL> commit;
Commit complete.

SQL> create index ind_maclean on maclean(status);
Index created.

SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN',cascade=>true, method_opt=>'FOR ALL COLUMNS SIZE 2');
PL/SQL procedure successfully completed.

This time method_opt=>'FOR ALL COLUMNS SIZE 2' asks dbms_stats for a 2-bucket histogram on every column, which is enough to capture the two distinct STATUS values as a FREQUENCY histogram.

The following script, from Guy Harrison of Quest, prints the contents of a FREQUENCY histogram as recorded in dba_tab_histograms:

rem
rem Generate a histogram of data distribution in a column as recorded
rem in dba_tab_histograms
rem
rem Guy Harrison Jan 2010 : www.guyharrison.net
rem
rem hexstr function is from From http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:707586567563

set pagesize 10000
set lines 120
set verify off
col char_value format a10 heading "Endpoint|value"
col bucket_count format 99,999,999 heading "bucket|count"
col pct format 999.99 heading "Pct"
col pct_of_max format a62 heading "Pct of|Max value"
rem col endpoint_value format 9999999999999 heading "endpoint|value"

CREATE OR REPLACE FUNCTION hexstr (p_number IN NUMBER)
   RETURN VARCHAR2
AS
   l_str    LONG := TO_CHAR (p_number, 'fm' || RPAD ('x', 50, 'x'));
   l_return VARCHAR2 (4000);
BEGIN
   WHILE (l_str IS NOT NULL)
   LOOP
      l_return := l_return || CHR (TO_NUMBER (SUBSTR (l_str, 1, 2), 'xx'));
      l_str := SUBSTR (l_str, 3);
   END LOOP;
   RETURN (SUBSTR (l_return, 1, 6));
END;
/

WITH hist_data AS (
   SELECT endpoint_value, endpoint_actual_value,
          NVL(LAG (endpoint_value) OVER (ORDER BY endpoint_value),' ') prev_value,
          endpoint_number, endpoint_number,
          endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count
     FROM dba_tab_histograms
     JOIN dba_tab_col_statistics USING (owner, table_name, column_name)
    WHERE owner = '&owner'
      AND table_name = '&table'
      AND column_name = '&column'
      AND histogram='FREQUENCY')
SELECT nvl(endpoint_actual_value,endpoint_value) endpoint_value,
       bucket_count,
       ROUND(bucket_count*100/SUM(bucket_count) OVER(),2) PCT,
       RPAD(' ',ROUND(bucket_count*50/MAX(bucket_count) OVER()),'*') pct_of_max
  FROM hist_data;

WITH hist_data AS (
   SELECT endpoint_value, endpoint_actual_value,
          NVL(LAG (endpoint_value) OVER (ORDER BY endpoint_value),' ') prev_value,
          endpoint_number, endpoint_number,
          endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count
     FROM dba_tab_histograms
     JOIN dba_tab_col_statistics USING (owner, table_name, column_name)
    WHERE owner = '&owner'
      AND table_name = '&table'
      AND column_name = '&column'
      AND histogram='FREQUENCY')
SELECT hexstr(endpoint_value) char_value,
       bucket_count,
       ROUND(bucket_count*100/SUM(bucket_count) OVER(),2) PCT,
       RPAD(' ',ROUND(bucket_count*50/MAX(bucket_count) OVER()),'*') pct_of_max
  FROM hist_data
 ORDER BY endpoint_value;

Running the script against SYS.MACLEAN.STATUS confirms that a FREQUENCY histogram now exists: dbms_stats recorded bucket count = 9 (percent = 0.04) for STATUS='INVALID'. With the histogram in place, explain the query again and look at the 10053 trace:

SQL> explain plan for select * from maclean where status='INVALID';
Explained.
SQL> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
-------------------------------------
Plan hash value: 3087014066

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |     9 |   837 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| MACLEAN     |     9 |   837 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IND_MACLEAN |     9 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("STATUS"='INVALID')

With the histogram available, the CBO estimates the cardinality of STATUS='INVALID' as 9 instead of 11320, which is close enough to reality for it to prefer the index range scan over the full table scan. The 10053 trace shows where the numbers come from:

SQL> alter system flush shared_pool;
System altered.

SQL> oradebug setmypid;
Statement processed.

SQL> oradebug event 10053 trace name context forever ,level 1;
Statement processed.

SQL> explain plan for select * from maclean where status='INVALID';
Explained.

SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for MACLEAN[MACLEAN]
  Column (#10):
    NewDensity:0.000199, OldDensity:0.000022 BktCnt:22640, PopBktCnt:22640, PopValCnt:2, NDV:2

(Here NewDensity = bucket_count / SUM(bucket_count) / 2 = 9 / 22640 / 2 ≈ 0.000199.)

  Column (#10): STATUS(
    AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.000199
    Histogram: Freq  #Bkts: 2  UncompBkts: 22640  EndPtVals: 2
  Table: MACLEAN  Alias: MACLEAN
    Card: Original: 22640.000000  Rounded: 9  Computed: 9.00  Non Adjusted: 9.00
  Access Path: TableScan
    Cost:  85.30  Resp: 85.30  Degree: 0
      Cost_io: 85.00  Cost_cpu: 10804625
      Resp_io: 85.00  Resp_cpu: 10804625
  Access Path: index (AllEqRange)
    Index: IND_MACLEAN
    resc_io: 2.00  resc_cpu: 20763
    ix_sel: 0.000398  ix_sel_with_filters: 0.000398
    Cost: 2.00  Resp: 2.00  Degree: 1
  Best:: AccessPath: IndexRange
  Index: IND_MACLEAN
         Cost: 2.00  Degree: 1  Resp: 2.00  Card: 9.00  Bytes: 0

With the 2-bucket FREQUENCY histogram the CBO gets the estimate right. In practice, however, we rarely hard-code method_opt; most databases rely on the dbms_stats default (DEFAULT_METHOD_OPT, i.e. FOR ALL COLUMNS SIZE AUTO), which lets dbms_stats decide on its own whether a column deserves a histogram.

With SIZE AUTO, dbms_stats bases that decision on the column usage recorded in col_usage$, which tracks the kinds of predicates that have actually been applied to each column (see the separate post on col_usage$ and how SMON maintains it for the details). If a column has never been referenced in a predicate, col_usage$ contains nothing for it, and dbms_stats will not build a histogram even when the data is badly skewed. For example:

SQL> drop table maclean;
Table dropped.

SQL> create table maclean as select * from dba_objects;
Table created.

SQL> update maclean set status='INVALID' where owner='MACLEAN';
2 rows updated.

SQL> commit;
Commit complete.

SQL> create index ind_maclean on maclean(status);
Index created.

Gather statistics on MACLEAN without specifying method_opt:

SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN');
PL/SQL procedure successfully completed.

@histogram.sql
Enter value for owner: SYS
old  12:    WHERE owner = '&owner'
new  12:    WHERE owner = 'SYS'
Enter value for table: MACLEAN
old  13:      AND table_name = '&table'
new  13:      AND table_name = 'MACLEAN'
Enter value for column: STATUS
old  14:      AND column_name = '&column'
new  14:      AND column_name = 'STATUS'

no rows selected

No histogram was created on STATUS, because col_usage$ has no record of the column ever being used in a predicate. Now drive some usage through the column and try again:
declare
begin
  for i in 1..500 loop
    execute immediate ' alter system flush shared_pool';
    DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
    execute immediate 'select count(*) from maclean where status=''INVALID'' ';
  end loop;
end;
/
PL/SQL procedure successfully completed.

SQL> select obj# from obj$ where name='MACLEAN';

      OBJ#
----------
     97215

SQL> select * from col_usage$ where OBJ#=97215;

      OBJ#    INTCOL# EQUALITY_PREDS EQUIJOIN_PREDS NONEQUIJOIN_PREDS RANGE_PREDS LIKE_PREDS NULL_PREDS TIMESTAMP
---------- ---------- -------------- -------------- ----------------- ----------- ---------- ---------- ---------
     97215          1              1              0                 0           0          0          0 17-OCT-11
     97215         10            499              0                 0           0          0          0 17-OCT-11

The 499 equality predicates recorded against column 10 (STATUS) are now visible to dbms_stats, so a default gather builds the histogram:

SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN');
PL/SQL procedure successfully completed.

@histogram.sql
Enter value for owner: SYS
Enter value for table: MACLEAN
Enter value for column: STATUS

Endpoint        bucket              Pct of
value            count     Pct      Max value
---------- ----------- ------- --------------------------------------------------------------
INVALI               2     .04
VALIC3           5,453   99.96  *************************************************
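To tie the two traces together, here is the back-of-the-envelope arithmetic behind the two cardinality estimates shown above. This is a rough reconstruction from the trace numbers, not the full CBO computation, which also involves NewDensity, rounding, and sampling adjustments:

\[
\text{No histogram:}\quad
\text{sel} = \frac{1}{\text{NDV}} = \frac{1}{2} = 0.5,\qquad
\text{card} = 0.5 \times 22639 \approx 11320
\]

\[
\text{Frequency histogram:}\quad
\text{sel}_{\texttt{INVALID}} = \frac{\text{endpoint rows}}{\text{UncompBkts}} = \frac{9}{22640} \approx 0.000398,\qquad
\text{card} = 0.000398 \times 22640 \approx 9
\]

These match the ix_sel and Card values reported in the two 10053 traces, and explain why the plan flips from a full table scan to an index range scan once the histogram exists.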

    Read the article
