Search Results

Search found 5224 results on 209 pages for 'modify'.


  • User Input That Involves A ' ' Causes A Substring Out Of Range Error

    - by Greenhouse Gases
Hi Stack Overflow people. You have already helped me quite a bit but near the end of writing this program I have somewhat of a bug. You see, in order to read in city names with a space in them from a text file, I use a '/' that is then replaced by the program with a ' ' (and when the serializer runs the opposite happens for the next time the program is run). The problem is when a user inputs a name to add, search for, or delete that contains a space, for instance 'New York', I get a Debug Assertion Error with a substring out of range expression. I have a feeling it's to do with my correctCase function, or setElementsNull that looks at the string until it reaches a null element in the array; however ' ' is not null, so I'm not sure how to fix this and I'm going a bit insane. Any help would be much appreciated. Here is my code: // U08221.cpp : main project file. #include "stdafx.h" #include <iostream> #include <string> #include <fstream> #include <cmath> using namespace std; class locationNode { public: string nodeCityName; double nodeLati; double nodeLongi; locationNode* Next; locationNode(string nameOf, double lat, double lon) { this->nodeCityName = nameOf; this->nodeLati = lat; this->nodeLongi = lon; this->Next = NULL; } locationNode() // NULL constructor { } void swapProps(locationNode *node2) { locationNode place; place.nodeCityName = this->nodeCityName; place.nodeLati = this->nodeLati; place.nodeLongi = this->nodeLongi; this->nodeCityName = node2->nodeCityName; this->nodeLati = node2->nodeLati; this->nodeLongi = node2->nodeLongi; node2->nodeCityName = place.nodeCityName; node2->nodeLati = place.nodeLati; node2->nodeLongi = place.nodeLongi; } void modify(string name) { this->nodeCityName = name; } void modify(double latlon, int mod) { switch(mod) { case 2: this->nodeLati = latlon; break; case 3: this->nodeLongi = latlon; break; } } void correctCase() // Correct upper and lower case letters of input { int MAX_SIZE = 35; int firstLetVal = this->nodeCityName[0], letVal; int n = 1; // variable for name index from second letter onwards if((this->nodeCityName[0] >90) && (this->nodeCityName[0] < 123)) // First letter is lower case { firstLetVal = firstLetVal - 32; // Capitalise first letter this->nodeCityName[0] = firstLetVal; } while(this->nodeCityName[n] != NULL) { if((this->nodeCityName[n] >= 65) && (this->nodeCityName[n] <= 90)) { if(this->nodeCityName[n - 1] != 32) { letVal = this->nodeCityName[n] + 32; this->nodeCityName[n] = letVal; } } n++; } } }; Here is the main part of the program: // U08221.cpp : main project file.
#include "stdafx.h" #include "Locations2.h" #include <_iostream> #include <_string> #include <_fstream> #include <_cmath> using namespace std; #define pi 3.14159265358979323846264338327950288 #define radius 6371 #define gig 1073741824 //size of a gigabyte in bytes int n = 0,x, locationCount = 0, MAX_SIZE = 35 , g = 0, i = 0, modKey = 0, xx; string cityNameInput, alter; char targetCity[35], skipKey = ' '; double lat1, lon1, lat2, lon2, dist, dummy, modVal, result; bool acceptedInput = false, match = false, nodeExists = false;// note: addLocation(), set to true to enable user input as opposed to txt file locationNode *temp, *temp2, *example, *seek, *bridge, *start_ptr = NULL; class Menu { int junction; public: /* Convert decimal degrees to radians */ public: void setElementsNull(char cityParam[]) { int y=0; while(cityParam[y] != NULL) { y++; } while(y < MAX_SIZE) { cityParam[y] = NULL; y++; } } void correctCase(string name) // Correct upper and lower case letters of input { int MAX_SIZE = 35; int firstLetVal = name[0], letVal; int n = 1; // variable for name index from second letter onwards if((name[0] >90) && (name[0] < 123)) // First letter is lower case { firstLetVal = firstLetVal - 32; // Capitalise first letter name[0] = firstLetVal; } while(name[n] != NULL) { if((name[n] >= 65) && (name[n] <= 90)) { letVal = name[n] + 32; name[n] = letVal; } n++; } for(n = 0; targetCity[n] != NULL; n++) { targetCity[n] = name[n]; } } bool nodeExistTest(char targetCity[]) // see if entry is present in the database { match = false; seek = start_ptr; int letters = 0, letters2 = 0, x = 0, y = 0; while(targetCity[y] != NULL) { letters2++; y++; } while(x <= locationCount) // locationCount is number of entries currently in list { y=0, letters = 0; while(seek->nodeCityName[y] != NULL) // count letters in the current name { letters++; y++; } if(letters == letters2) // same amount of letters in the name { y = 0; while(y <= letters) // compare each letter against one another { if(targetCity[y] == seek->nodeCityName[y]) { match = true; y++; } else { match = false; y = letters + 1; // no match, terminate comparison } } } if(match) { x = locationCount + 1; //found match so terminate loop } else{ if(seek->Next != NULL) { bridge = seek; seek = seek->Next; x++; } else { x = locationCount + 1; // end of list so terminate loop } } } return match; } double deg2rad(double deg) { return (deg * pi / 180); } /* Convert radians to decimal degrees */ double rad2deg(double rad) { return (rad * 180 / pi); } /* Do the calculation */ double distance(double lat1, double lon1, double lat2, double lon2, double dist) { dist = sin(deg2rad(lat1)) * sin(deg2rad(lat2)) + cos(deg2rad(lat1)) * cos(deg2rad(lat2)) * cos(deg2rad(lon1 - lon2)); dist = acos(dist); dist = rad2deg(dist); dist = (radius * pi * dist) / 180; return dist; } void serialise() { // Serialize to format that can be written to text file fstream outfile; outfile.open("locations.txt",ios::out); temp = start_ptr; do { for(xx = 0; temp->nodeCityName[xx] != NULL; xx++) { if(temp->nodeCityName[xx] == 32) { temp->nodeCityName[xx] = 47; } } outfile << endl << temp->nodeCityName<< " "; outfile<<temp->nodeLati<< " "; outfile<<temp->nodeLongi; temp = temp->Next; }while(temp != NULL); outfile.close(); } void sortList() // do this { int changes = 1; locationNode *node1, *node2; while(changes > 0) // while changes are still being made to the list execute { node1 = start_ptr; node2 = node1->Next; changes = 0; do { xx = 1; if(node1->nodeCityName[0] > node2->nodeCityName[0]) //compare first 
letter of name with next in list { node1->swapProps(node2); // should come after the next in the list changes++; } else if(node1->nodeCityName[0] == node2->nodeCityName[0]) // if same first letter { while(node1->nodeCityName[xx] == node2->nodeCityName[xx]) // check next letter of name { if((node1->nodeCityName[xx + 1] != NULL) && (node2->nodeCityName[xx + 1] != NULL)) // check next letter until not the same { xx++; } else break; } if(node1->nodeCityName[xx] > node2->nodeCityName[xx]) { node1->swapProps(node2); // should come after the next in the list changes++; } } node1 = node2; node2 = node2->Next; // move to next pair in list } while(node2 != NULL); } } void initialise() { cout << "Populating List..."; ifstream inputFile; inputFile.open ("locations.txt", ios::in); char inputName[35] = " "; double inputLati = 0, inputLongi = 0; //temp = new locationNode(inputName, inputLati, inputLongi); do { inputFile.get(inputName, 35, ' '); inputFile >> inputLati; inputFile >> inputLongi; if(inputName[0] == 10 || 13) //remove linefeed from input { for(int i = 0; inputName[i] != NULL; i++) { inputName[i] = inputName[i + 1]; } } for(xx = 0; inputName[xx] != NULL; xx++) { if(inputName[xx] == 47) // if it is a '/' { inputName[xx] = 32; // replace it for a space } } temp = new locationNode(inputName, inputLati, inputLongi); if(start_ptr == NULL){ // if list is currently empty, start_ptr will point to this node start_ptr = temp; } else { temp2 = start_ptr; // We know this is not NULL - list not empty! while (temp2->Next != NULL) { temp2 = temp2->Next; // Move to next link in chain until reach end of list } temp2->Next = temp; } ++locationCount; // increment counter for number of records in list } while(!inputFile.eof()); cout << "Successful!" << endl << "List contains: " << locationCount << " entries" << endl; inputFile.close(); cout << endl << "*******************************************************************" << endl << "DISTANCE CALCULATOR v2.0\tAuthors: Darius Hodaei, Joe Clifton" << endl; } void menuInput() { char menuChoice = ' '; while(menuChoice != 'Q') { // Menu if(skipKey != 'X') // This is set by case 'S' below if a searched term does not exist but wants to be added { cout << endl << "*******************************************************************" << endl; cout << "Please enter a choice for the menu..." 
<< endl << endl; cout << "(P) To print out the list" << endl << "(O) To order the list alphabetically" << endl << "(A) To add a location" << endl << "(D) To delete a record" << endl << "(C) To calculate distance between two points" << endl << "(S) To search for a location in the list" << endl << "(M) To check memory usage" << endl << "(U) To update a record" << endl << "(Q) To quit" << endl; cout << endl << "*******************************************************************" << endl; cin >> menuChoice; if(menuChoice >= 97) { menuChoice = menuChoice - 32; // Turn the lower case letter into an upper case letter } } skipKey = ' '; //Reset skipKey so that it does not skip the menu switch(menuChoice) { case 'P': temp = start_ptr; // set temp to the start of the list do { if (temp == NULL) { cout << "You have reached the end of the database" << endl; } else { // Display details for what temp points to at that stage cout << "Location : " << temp->nodeCityName << endl; cout << "Latitude : " << temp->nodeLati << endl; cout << "Longitude : " << temp->nodeLongi << endl; cout << endl; // Move on to next locationNode if one exists temp = temp->Next; } } while (temp != NULL); break; case 'O': { sortList(); // pass by reference??? cout << "List reordered alphabetically" << endl; } break; case 'A': char cityName[35]; double lati, longi; cout << endl << "Enter the name of the location: "; cin >> cityName; for(xx = 0; cityName[xx] != NULL; xx++) { if(cityName[xx] == 47) // if it is a '/' { cityName[xx] = 32; // replace it for a space } } if(!nodeExistTest(cityName)) { cout << endl << "Please enter the latitude value for this location: "; cin >> lati; cout << endl << "Please enter the longitude value for this location: "; cin >> longi; cout << endl; temp = new locationNode(cityName, lati, longi); temp->correctCase(); //start_ptr allignment if(start_ptr == NULL){ // if list is currently empty, start_ptr will point to this node start_ptr = temp; } else { temp2 = start_ptr; // We know this is not NULL - list not empty! while (temp2->Next != NULL) { temp2 = temp2->Next; // Move to next link in chain until reach end of list } temp2->Next = temp; } ++locationCount; // increment counter for number of records in list cout << "Location sucessfully added to the database! There are " << locationCount << " location(s) stored" << endl; } else { cout << "Node is already present in the list and so cannot be added again" << endl; } break; case 'D': { junction = 0; locationNode *place; cout << "Enter the name of the city you wish to remove" << endl; cin >> targetCity; setElementsNull(targetCity); correctCase(targetCity); for(xx = 0; targetCity[xx] != NULL; xx++) { if(targetCity[xx] == 47) { targetCity[xx] = 32; } } if(nodeExistTest(targetCity)) //if this node does exist { if(seek == start_ptr) // if it is the first in the list { junction = 1; } if(seek->Next == NULL) // if it is last in the list { junction = 2; } switch(junction) // will alter list accordingly dependant on where the searched for link is { case 1: start_ptr = start_ptr->Next; delete seek; --locationCount; break; case 2: place = seek; seek = bridge; seek->Next = NULL; delete place; --locationCount; break; default: bridge->Next = seek->Next; delete seek; --locationCount; break; } cout << endl << "Link deleted. There are now " << locationCount << " locations." 
<< endl; } else { cout << "That entry does not currently exist" << endl << endl << endl; } } break; case 'C': { char city1[35], city2[35]; cout << "Enter the first city name" << endl; cin >> city1; setElementsNull(city1); correctCase(targetCity); if(nodeExistTest(city1)) { lat1 = seek->nodeLati; lon1 = seek->nodeLongi; cout << "Lati = " << seek->nodeLati << endl << "Longi = " << seek->nodeLongi << endl << endl; } cout << "Enter the second city name" << endl; cin >> city2; setElementsNull(city2); correctCase(targetCity); if(nodeExistTest(city2)) { lat2 = seek->nodeLati; lon2 = seek->nodeLongi; cout << "Lati = " << seek->nodeLati << endl << "Longi = " << seek->nodeLongi << endl << endl; } result = distance (lat1, lon1, lat2, lon2, dist); cout << "The distance between these two locations is " << result << " kilometres." << endl; } break; case 'S': { char choice; cout << "Enter search term..." << endl; cin >> targetCity; setElementsNull(targetCity); correctCase(targetCity); if(nodeExistTest(targetCity)) { cout << "Latitude: " << seek->nodeLati << endl << "Longitude: " << seek->nodeLongi << endl; } else { cout << "Sorry, that city is not currently present in the list." << endl << "Would you like to add this city now Y/N?" << endl; cin >> choice; /*while(choice != ('Y' || 'N')) { cout << "Please enter a valid choice..." << endl; cin >> choice; }*/ switch(choice) { case 'Y': skipKey = 'X'; menuChoice = 'A'; break; case 'N': break; default : cout << "Invalid choice" << endl; break; } } break; } case 'M': { cout << "Locations currently stored: " << locationCount << endl << "Memory used for this: " << (sizeof(start_ptr) * locationCount) << " bytes" << endl << endl << "You can store " << ((gig - (sizeof(start_ptr) * locationCount)) / sizeof(start_ptr)) << " more locations" << endl ; break; } case 'U': { cout << "Enter the name of the Location you would like to update: "; cin >> targetCity; setElementsNull(targetCity); correctCase(targetCity); if(nodeExistTest(targetCity)) { cout << "Select (1) to alter City Name, (2) to alter Longitude, (3) to alter Latitude" << endl; cin >> modKey; switch(modKey) { case 1: cout << "Enter the new name: "; cin >> alter; cout << endl; seek->modify(alter); break; case 2: cout << "Enter the new latitude: "; cin >> modVal; cout << endl; seek->modify(modVal, modKey); break; case 3: cout << "Enter the new longitude: "; cin >> modVal; cout << endl; seek->modify(modVal, modKey); break; default: break; } } else cout << "Location not found" << endl; break; } } } } }; int main(array<System::String ^> ^args) { Menu mm; //mm.initialise(); mm.menuInput(); mm.serialise(); }


  • jQuery and Windows Azure

    - by Stephen Walther
The goal of this blog entry is to describe how you can host a simple Ajax application created with jQuery in the Windows Azure cloud. In this blog entry, I make no assumptions. I assume that you have never used Windows Azure and I am going to walk through the steps required to host the application in the cloud in agonizing detail. Our application will consist of a single HTML page and a single service. The HTML page will contain jQuery code that invokes the service to retrieve and display a set of records. There are five steps that you must complete to host the jQuery application: Sign up for Windows Azure; Create a Hosted Service; Install the Windows Azure Tools for Visual Studio; Create a Windows Azure Cloud Service; Deploy the Cloud Service. Sign Up for Windows Azure Go to http://www.microsoft.com/windowsazure/ and click the Sign up Now button. Select one of the offers. I selected the Introductory Special offer because it is free and I just wanted to experiment with Windows Azure for the purposes of this blog entry. To sign up, you will need a Windows Live ID and you will need to enter a credit card number. After you finish the sign up process, you will receive an email that explains how to activate your account. Accessing the Developer Portal After you create your account and your account is activated, you can access the Windows Azure developer portal by visiting the following URL: http://windows.azure.com/ When you first visit the developer portal, you will see the one project that you created when you set up your Windows Azure account (In a fit of creativity, I named my project StephenWalther). Creating a New Windows Azure Hosted Service Before you can host an application in the cloud, you must first add a hosted service to your project. Click your project on the summary page and click the New Service link. You are presented with the option of creating either a new Storage Account or a new Hosted Service. Because we have code that we want to run in the cloud – the WCF service – we want to select the Hosted Services option. After you select this option, you must provide a name and description for your service. This information is used on the developer portal so you can distinguish your services. When you create a new hosted service, you must enter a unique name for your service (I selected jQueryApp) and you must select a region for this service (I selected Anywhere US). Click the Create button to create the new hosted service. Install the Windows Azure Tools for Visual Studio We’ll use Visual Studio to create our jQuery project. Before you can use Visual Studio with Windows Azure, you must first install the Windows Azure Tools for Visual Studio. Go to http://www.microsoft.com/windowsazure/ and click the Get Tools and SDK button. The Windows Azure Tools for Visual Studio works with both Visual Studio 2008 and Visual Studio 2010. Installation of the Windows Azure Tools for Visual Studio is painless. You just need to check some agreement checkboxes and click the Next button a few times and installation will begin. Creating a Windows Azure Application After you install the Windows Azure Tools for Visual Studio, you can choose to create a Windows Azure Cloud Service by selecting the menu option File, New Project and selecting the Windows Azure Cloud Service project template. I named my new Cloud Service jQueryApp. Next, you need to select the type of Cloud Service project that you want to create from the New Cloud Service Project dialog.
I selected the C# ASP.NET Web Role option. Alternatively, I could have picked the ASP.NET MVC 2 Web Role option if I wanted to use jQuery with ASP.NET MVC or even the CGI Web Role option if I wanted to use jQuery with PHP. After you complete these steps, you end up with two projects in your Visual Studio solution. The project named WebRole1 represents your ASP.NET application and we will use this project to create our jQuery application. Creating the jQuery Application in the Cloud We are now ready to create the jQuery application. We’ll create a super simple application that displays a list of records retrieved from a WCF service (hosted in the cloud). Create a new page in the WebRole1 project named Default.htm and add the following code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Products</title> <style type="text/css"> #productContainer div { border:solid 1px black; padding:5px; margin:5px; } </style> </head> <body> <h1>Product Catalog</h1> <div id="productContainer"></div> <script id="productTemplate" type="text/html"> <div> Name: {{= name }} <br /> Price: {{= price }} </div> </script> <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script> <script type="text/javascript"> var products = [ {name:"Milk", price:4.55}, {name:"Yogurt", price:2.99}, {name:"Steak", price:23.44} ]; $("#productTemplate").render(products).appendTo("#productContainer"); </script> </body> </html> The jQuery code in this page simply displays a list of products by using a template. I am using a jQuery template to format each product. You can learn more about using jQuery templates by reading the following blog entry by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2010/05/07/jquery-templates-and-data-linking-and-microsoft-contributing-to-jquery.aspx You can test whether the Default.htm page is working correctly by running your application (hit the F5 key). The first time that you run your application, a database is set up on your local machine to simulate cloud storage. You will see the following dialog: If the Default.htm page works as expected, you should see the list of three products: Adding an Ajax-Enabled WCF Service In the previous section, we created a simple jQuery application that displays an array by using a template. The application is a little too simple because the data is static. In this section, we’ll modify the page so that the data is retrieved from a WCF service instead of an array. First, we need to add a new Ajax-enabled WCF Service to the WebRole1 project. Select the menu option Project, Add New Item and select the Ajax-enabled WCF Service project item. Name the new service ProductService.svc. Modify the service so that it returns a static collection of products. 
The final code for the ProductService.svc should look like this: using System.Collections.Generic; using System.ServiceModel; using System.ServiceModel.Activation; namespace WebRole1 { public class Product { public string name { get; set; } public decimal price { get; set; } } [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class ProductService { [OperationContract] public IList<Product> SelectProducts() { var products = new List<Product>(); products.Add(new Product {name="Milk", price=4.55m} ); products.Add(new Product { name = "Yogurt", price = 2.99m }); products.Add(new Product { name = "Steak", price = 23.44m }); return products; } } } In real life, you would want to retrieve the list of products from storage instead of a static array. We are being lazy here. Next you need to modify the Default.htm page to use the ProductService.svc. The jQuery script in the following updated Default.htm page makes an Ajax call to the WCF service. The data retrieved from the ProductService.svc is displayed in the client template. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Products</title> <style type="text/css"> #productContainer div { border:solid 1px black; padding:5px; margin:5px; } </style> </head> <body> <h1>Product Catalog</h1> <div id="productContainer"></div> <script id="productTemplate" type="text/html"> <div> Name: {{= name }} <br /> Price: {{= price }} </div> </script> <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script> <script type="text/javascript"> $.post("ProductService.svc/SelectProducts", function (results) { var products = results["d"]; $("#productTemplate").render(products).appendTo("#productContainer"); }); </script> </body> </html> Deploying the jQuery Application to the Cloud Now that we have created our jQuery application, we are ready to deploy our application to the cloud so that the whole world can use it. Right-click your jQueryApp project in the Solution Explorer window and select the Publish menu option. When you select publish, your application and your application configuration information are packaged up into two files named jQueryApp.cspkg and ServiceConfiguration.cscfg. Visual Studio opens the directory that contains the two files. In order to deploy these files to the Windows Azure cloud, you must upload them yourself. Return to the Windows Azure Developers Portal at the following address: http://windows.azure.com/ Select your project and select the jQueryApp service. You will see a mysterious cube. Click the Deploy button to upload your application. Next, you need to browse to the location on your hard drive where the jQueryApp project was published and select both the packaged application and the packaged application configuration file. Supply the deployment with a name and click the Deploy button. While your application is in the process of being deployed, you can view a progress bar. Running the jQuery Application in the Cloud Finally, you can run your jQuery application in the cloud by clicking the Run button. It might take several minutes for your application to initialize (go grab a coffee).
After WebRole1 finishes initializing, you can navigate to the following URL to view your live jQuery application in the cloud: http://jqueryapp.cloudapp.net/default.htm The page is hosted on the Windows Azure cloud and the WCF service executes every time that you request the page to retrieve the list of products. Summary Because we started from scratch, we needed to complete several steps to create and deploy our jQuery application to the Windows Azure cloud. We needed to create a Windows Azure account, create a hosted service, install the Windows Azure Tools for Visual Studio, create the jQuery application, and deploy it to the cloud. Now that we have finished this process once, modifying our existing cloud application or creating a new cloud application is easy. jQuery and Windows Azure work nicely together. We can take advantage of jQuery to build applications that run in the browser and we can take advantage of Windows Azure to host the backend services required by our jQuery application. The big benefit of Windows Azure is that it enables us to scale. If, all of a sudden, our jQuery application explodes in popularity, Windows Azure enables us to easily scale up to meet the demand. We can handle anything that the Internet might throw at us.
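One of the things that makes that scaling easy is the instance count in the ServiceConfiguration.cscfg file that Visual Studio generated for us. As a sketch (this follows the standard Windows Azure service configuration schema; the DiagnosticsConnectionString setting below is illustrative and should match whatever your generated file contains), raising the count from 1 to 2 is all it takes to run a second load-balanced instance of WebRole1:

<?xml version="1.0"?>
<ServiceConfiguration serviceName="jQueryApp" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <!-- Raise this count and redeploy (or edit the configuration in the portal) to scale out -->
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>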


  • Customize the SimpleMembership in ASP.NET MVC 4.0

    - by thangchung
As we know, .NET 4.5 has arrived, and it comes along with a lot of interesting new features. Visual Studio 2012 was also introduced some days ago, and its cool improvements made us very happy. Performance when loading the code editor is very good at the moment (immediately after clicking on the solution). I have explored some of the cool features in recent days: Json.NET integrated in ASP.NET MVC 4.0, improvements to asynchronous actions, the new lightweight theme in Visual Studio, very good support for mobile development, improvements to authentication… Reviewing them, I found that in this version of .NET Microsoft not only developed new features suggested by the community but also focused on improving the performance of existing features and components. Besides that, they also open sourced more projects, like Entity Framework, Reactive Extensions, ASP.NET Web Stack… At the moment, I feel Microsoft wants to open source more and more of their projects. Today, I am going to dive in deep on the new SimpleMembership model. It is really good, because in this security model Microsoft actually focused on development needs. As we know, in the past they introduced providers for coding security, like MembershipProvider and RoleProvider… I don't need to say much; everyone who has ever used them knows that they were hard to use, and not easy to maintain and unit test. Why? Because every time you inherit from one, you need to override all the methods inside it. Some people try to work around this by introducing more virtual methods that implement basic behavior, so a subclass only needs to override the methods its business requires. But to me, that is only a workaround. The ASP.NET and Web Matrix teams knew about this, so they built the new features on top of existing components in the .NET framework. The components that came out of this are SimpleMembership and SimpleRole. They implemented the Façade pattern on top of those and called it WebSecurity. In the web application, we can call WebSecurity anywhere we want, and it delegates to the wrappers inside. I have read a lot about it on blogs, on technical news sites, and on MSDN as well. Matthew Osborn has an excellent article about it on his blog. Jon Galloway has a similar article here; he analyzed why the old membership provider did not fit ASP.NET MVC well and how to get past it. Those articles were very useful to me. They showed me how SimpleMembership works and how to use it in a new ASP.NET MVC web application. But one thing they didn't tell me was how to use it with an existing security model (meaning we already have Users and Roles in a legacy system and want to integrate them), and that's the reason I am writing this today. I spent a couple of hours digging into what's inside it, and tried to build an example to clarify my concern. Luckily, I got it working well. First, we need to create a new ASP.NET MVC application in Visual Studio 2012, choosing the Internet type for the web application. ASP.NET MVC actually creates all the needed components for basic membership and basic roles. A cool bonus is that DotNetOpenAuth comes along with it, which means we can log in using Facebook, Twitter or Windows Live if we want. But it is wired up only for LocalDb, so we need to change it to work with our existing database model on SQL Server.
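Before wiring anything up, the application needs a connection string that points at that existing database. A minimal sketch (the server and catalog names below are placeholders, not from the original post; the name DefaultDb is the one the code later in this post refers to):

<connectionStrings>
  <add name="DefaultDb"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=ExistingSecurityDb;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>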
The next step is to make SimpleMembership understand which database we use and show it which columns to point to for the ID and UserName. I really like this feature, because SimpleMembership only needs to know about the ID and UserName; it doesn't care about the rest. I assume that we have an existing database model with a User table. We point SimpleMembership at it in code, in an InitializeSimpleMembershipAttribute, like this: [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)]     public sealed class InitializeSimpleMembershipAttribute : ActionFilterAttribute     {         private static SimpleMembershipInitializer _initializer;         private static object _initializerLock = new object();         private static bool _isInitialized;         public override void OnActionExecuting(ActionExecutingContext filterContext)         {             // Ensure ASP.NET Simple Membership is initialized only once per app start             LazyInitializer.EnsureInitialized(ref _initializer, ref _isInitialized, ref _initializerLock);         }         private class SimpleMembershipInitializer         {             public SimpleMembershipInitializer()             {                 try                 {                     WebSecurity.InitializeDatabaseConnection("DefaultDb", "User", "Id", "UserName", autoCreateTables: true);                 }                 catch (Exception ex)                 {                     throw new InvalidOperationException("The ASP.NET Simple Membership database could not be initialized. For more information, please see http://go.microsoft.com/fwlink/?LinkId=256588", ex);                 }             }         }     } And we decorate the AccountController with it, as below: [Authorize]     [InitializeSimpleMembership]     public class AccountController : Controller In this case, assume that we need to override ValidateUser to point it at the existing User database table and validate against that.
We have to add one more class, like this: public class CustomAdminMembershipProvider : SimpleMembershipProvider     {         // TODO: will do a better way         private const string SELECT_ALL_USER_SCRIPT = "select * from [dbo].[User] where UserName = '{0}'";         private readonly IEncrypting _encryptor;         private readonly SimpleSecurityContext _simpleSecurityContext;         public CustomAdminMembershipProvider(SimpleSecurityContext simpleSecurityContext)             : this(new Encryptor(), new SimpleSecurityContext("DefaultDb"))         {         }         public CustomAdminMembershipProvider(IEncrypting encryptor, SimpleSecurityContext simpleSecurityContext)         {             _encryptor = encryptor;             _simpleSecurityContext = simpleSecurityContext;         }         public override bool ValidateUser(string username, string password)         {             if (string.IsNullOrEmpty(username))             {                 throw new ArgumentException("Argument cannot be null or empty", "username");             }             if (string.IsNullOrEmpty(password))             {                 throw new ArgumentException("Argument cannot be null or empty", "password");             }             var hash = _encryptor.Encode(password);             using (_simpleSecurityContext)             {                 var users =                     _simpleSecurityContext.Users.SqlQuery(                         string.Format(SELECT_ALL_USER_SCRIPT, username));                 if (users == null || !users.Any())                 {                     return false;                 }                 return users.FirstOrDefault().Password == hash;             }         }     } SimpleSecurityContext is here: public class SimpleSecurityContext : DbContext     {         public DbSet<User> Users { get; set; }         public SimpleSecurityContext(string connStringName) :             base(connStringName)         {             this.Configuration.LazyLoadingEnabled = true;             this.Configuration.ProxyCreationEnabled = false;         }         protected override void OnModelCreating(DbModelBuilder modelBuilder)         {             base.OnModelCreating(modelBuilder);                          modelBuilder.Configurations.Add(new UserMapping());         }     } And the mapping for User, as below: public class UserMapping : EntityMappingBase<User>     {         public UserMapping()         {             this.Property(x => x.UserName);             this.Property(x => x.DisplayName);             this.Property(x => x.Password);             this.Property(x => x.Email);             this.ToTable("User");         }     } One important thing: you need to modify the web.config to point to our customized SimpleMembership <membership defaultProvider="AdminMemberProvider" userIsOnlineTimeWindow="15">       <providers>         <clear/>         <add name="AdminMemberProvider" type="CIK.News.Web.Infras.Security.CustomAdminMembershipProvider, CIK.News.Web.Infras" />       </providers>     </membership>     <roleManager enabled="false">       <providers>         <clear />         <add name="AdminRoleProvider" type="CIK.News.Web.Infras.Security.AdminRoleProvider, CIK.News.Web.Infras" />       </providers>     </roleManager> The good thing here is that we don't need to modify the code in AccountController. We only need to modify SimpleMembership and SimpleRole (if needed). Now build the solution and run it.
We should see a screen like this. If I click the Twitter button at the bottom of the page, we are transferred to the Twitter authentication page. You have to wait for a moment, and afterwards it transfers you back to your admin screen. You can find all the source code in my MSDN code sample. I will be really happy if you feel free to put some comments below; they will help me improve the code in the future. Thanks for reading.
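For reference, the post never shows the User entity that UserMapping maps. A minimal sketch consistent with the mapping and the WebSecurity.InitializeDatabaseConnection call above (the namespace is assumed to match the provider classes) would be:

namespace CIK.News.Web.Infras.Security
{
    // Sketch only: property names are taken from UserMapping and from
    // the "Id"/"UserName" columns passed to InitializeDatabaseConnection.
    public class User
    {
        public int Id { get; set; }
        public string UserName { get; set; }
        public string DisplayName { get; set; }
        public string Password { get; set; }  // stores the hashed password compared in ValidateUser
        public string Email { get; set; }
    }
}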


  • ASP.NET MVC 3: Layouts and Sections with Razor

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features: Introducing Razor (July 2nd) New @model keyword in Razor (Oct 19th) Layouts with Razor (Oct 22nd) Server-Side Comments with Razor (Nov 12th) Razor’s @: and <text> syntax (Dec 15th) Implicit and Explicit code nuggets with Razor (Dec 16th) Layouts and Sections with Razor (Today) In today’s post I’m going to go into more details about how Layout pages work with Razor.  In particular, I’m going to cover how you can have multiple, non-contiguous, replaceable “sections” within a layout file – and enable views based on layouts to optionally “fill in” these different sections at runtime.  The Razor syntax for doing this is clean and concise. I’ll also show how you can dynamically check at runtime whether a particular layout section has been defined, and how you can provide alternate content (or even an alternate layout) in the event that a section isn’t specified within a view template.  This provides a powerful and easy way to customize the UI of your site and make it clean and DRY from an implementation perspective. What are Layouts? You typically want to maintain a consistent look and feel across all of the pages within your web-site/application.  ASP.NET 2.0 introduced the concept of “master pages” which helps enable this when using .aspx based pages or templates.  Razor also supports this concept with a feature called “layouts” – which allow you to define a common site template, and then inherit its look and feel across all the views/pages on your site. I previously discussed the basics of how layout files work with Razor in my ASP.NET MVC 3: Layouts with Razor blog post.  Today’s post will go deeper and discuss how you can define multiple, non-contiguous, replaceable regions within a layout file that you can then optionally “fill in” at runtime. Site Layout Scenario Let’s look at how we can implement a common site layout scenario with ASP.NET MVC 3 and Razor.  Specifically, we’ll implement some site UI where we have a common header and footer on all of our pages.  We’ll also add a “sidebar” section to the right of our common site layout.  On some pages we’ll customize the SideBar to contain content specific to the page it is included on: And on other pages (that do not have custom sidebar content) we will fall back and provide some “default content” to the sidebar: We’ll use ASP.NET MVC 3 and Razor to enable this customization in a nice, clean way.  Below are some step-by-step tutorial instructions on how to build the above site with ASP.NET MVC 3 and Razor. Part 1: Create a New Project with a Layout for the “Body” section We’ll begin by using the “File->New Project” menu command within Visual Studio to create a new ASP.NET MVC 3 Project.  We’ll create the new project using the “Empty” template option: This will create a new project that has no default controllers in it: Creating a HomeController We will then right-click on the “Controllers” folder of our newly created project and choose the “Add->Controller” context menu command.  This will bring up the “Add Controller” dialog: We’ll name the new controller we create “HomeController”.  When we click the “Add” button Visual Studio will add a HomeController class to our project with a default “Index” action method that returns a view: We won’t need to write any Controller logic to implement this sample – so we’ll leave the default code as-is.  
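The generated controller appears as a screenshot in the original post; for readers following along in text, the default code Visual Studio adds looks essentially like this (the namespace will match whatever you named your project):

using System.Web.Mvc;

namespace MyMvc3Project.Controllers
{
    public class HomeController : Controller
    {
        // GET: /Home/
        public ActionResult Index()
        {
            return View();
        }
    }
}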
Creating a View Template Our next step will be to implement the view template associated with the HomeController’s Index action method.  To implement the view template, we will right-click within the “HomeController.Index()” method and select the “Add View” command to create a view template for our home page: This will bring up the “Add View” dialog within Visual Studio.  We do not need to change any of the default settings within the above dialog (the name of the template was auto-populated to Index because we invoked the “Add View” context menu command within the Index method).  When we click the “Add” button within the dialog, a Razor-based “Index.cshtml” view template will be added to the \Views\Home\ folder within our project.  Let’s add some simple default static content to it: Notice above how we don’t have an <html> or <body> section defined within our view template.  This is because we are going to rely on a layout template to supply these elements and use it to define the common site layout and structure for our site (ensuring that it is consistent across all pages and URLs within the site).  Customizing our Layout File Let’s open and customize the default “_Layout.cshtml” file that was automatically added to the \Views\Shared folder when we created our new project: The default layout file (shown above) is pretty basic and simply outputs a title (if specified in either the Controller or the View template) and adds links to a stylesheet and jQuery.  The call to “RenderBody()” indicates where the main body content of our Index.cshtml file will be merged into the output sent back to the browser. Let’s modify the Layout template to add a common header, footer and sidebar to the site: We’ll then edit the “Site.css” file within the \Content folder of our project and add 4 CSS rules to it: And now when we run the project and browse to the home “/” URL of our project we’ll see a page like below: Notice how the content of the HomeController’s Index view template and the site’s Shared Layout template have been merged together into a single HTML response.  Below is what the HTML sent back from the server looks like: Part 2: Adding a “SideBar” Section Our site so far has a layout template that has only one “section” in it – what we call the main “body” section of the response.  Razor also supports the ability to add additional “named sections” to layout templates as well.  These sections can be defined anywhere in the layout file (including within the <head> section of the HTML), and allow you to output dynamic content to multiple, non-contiguous, regions of the final response. Defining the “SideBar” section in our Layout Let’s update our Layout template to define an additional “SideBar” section of content that will be rendered within the <div id=”sidebar”> region of our HTML.  We can do this by calling the RenderSection(string sectionName, bool required) helper method within our Layout.cshtml file like below:   The first parameter to the “RenderSection()” helper method specifies the name of the section we want to render at that location in the layout template.  The second parameter is optional, and allows us to define whether the section we are rendering is required or not.  If a section is “required”, then Razor will throw an error at runtime if that section is not implemented within a view template that is based on the layout file (which can make it easier to track down content errors).  
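(The original post shows the layout as a screenshot. As a text sketch, with div ids that are assumed to match the four CSS rules mentioned above, the relevant part of _Layout.cshtml looks roughly like this:)

<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
    <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
</head>
<body>
    <div id="header">Common site header</div>
    <div id="content">
        @RenderBody()
    </div>
    <div id="sidebar">
        @RenderSection("SideBar", required: false)
    </div>
    <div id="footer">Common site footer</div>
</body>
</html>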
If a section is not required, then its presence within a view template is optional, and the above RenderSection() code will render nothing at runtime if it isn’t defined. Now that we’ve made the above change to our layout file, let’s hit refresh in our browser and see what our Home page now looks like: Notice how we currently have no content within our SideBar <div> – that is because the Index.cshtml view template doesn’t implement our new “SideBar” section yet. Implementing the “SideBar” Section in our View Template Let’s change our home-page so that it has a SideBar section that outputs some custom content.  We can do that by opening up the Index.cshtml view template, and by adding a new “SideBar” section to it.  We’ll do this using Razor’s @section SectionName { } syntax: We could have put our SideBar @section declaration anywhere within the view template.  I think it looks cleaner when defined at the top or bottom of the file – but that is simply personal preference.  You can include any content or code you want within @section declarations.  Notice how I have a C# code nugget that outputs the current time at the bottom of the SideBar section – see the sketch below.  I could have also written code that used ASP.NET MVC’s HTML/AJAX helper methods and/or accessed any strongly-typed model objects passed to the Index.cshtml view template. Now that we’ve made the above template changes, when we hit refresh in our browser again we’ll see that our SideBar content – that is specific to the Home Page of our site – is now included in the page response sent back from the server: The SideBar section content has been merged into the proper location of the HTML response. Part 3: Conditionally Detecting if a Layout Section Has Been Implemented Razor provides the ability for you to conditionally check (from within a layout file) whether a section has been defined within a view template, and enables you to output an alternative response in the event that the section has not been defined.  This provides a convenient way to specify default UI for optional layout sections.  Let’s modify our Layout file to take advantage of this capability.  Below we are conditionally checking whether the “SideBar” section has been defined within the view template being rendered (using the IsSectionDefined() method), and if so we render the section.  If the section has not been defined, then we instead render some default content for the SideBar.  Note: You want to make sure you prefix calls to the RenderSection() helper method with a @ character – which will tell Razor to execute the HelperResult it returns and merge in the section content in the appropriate place of the output.  Notice how we write @RenderSection(“SideBar”) instead of just RenderSection(“SideBar”) – otherwise you’ll get an error. In the else branch we are simply rendering an inline static string (<p>Default SideBar Content</p>) if the section is not defined.  A real-world site would more likely refactor this default content to be stored within a separate partial template (which we’d render using the Html.RenderPartial() helper method within the else block) or alternatively use the Html.Action() helper method within the else block to encapsulate both the logic and rendering of the default sidebar. When we hit refresh on our home-page, we will still see the same custom SideBar content we had before (a sketch of both templates follows below).  
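For readers without the screenshots, here is a sketch of the two templates as they stand at this point (markup and wording are illustrative, not the post’s exact code). The Index.cshtml view template declares the section, including the code nugget that outputs the current time:

@section SideBar {
    <p>Custom sidebar content for the home page.</p>
    <p>Current time: @DateTime.Now</p>
}

<p>Main body content for the home page goes here.</p>

And the sidebar region of _Layout.cshtml conditionally falls back to default content:

<div id="sidebar">
    @if (IsSectionDefined("SideBar")) {
        @RenderSection("SideBar")
    }
    else {
        <p>Default SideBar Content</p>
    }
</div>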
This is because we implemented the SideBar section within our Index.cshtml view template (and so our Layout rendered it): Let’s now implement a “/Home/About” URL for our site by adding a new “About” action method to our HomeController: The About() action method above simply renders a view back to the client when invoked.  We can implement the corresponding view template for this action by right-clicking within the “About()” method and using the “Add View” menu command (like before) to create a new About.cshtml view template.  We’ll implement the About.cshtml view template like below. Notice that we are not defining a “SideBar” section within it: When we browse the /Home/About URL we’ll see the content we supplied above in the main body section of our response, and the default SideBar content will be rendered: The layout file determined at runtime that a custom SideBar section wasn’t present in the About.cshtml view template, and instead rendered the default sidebar content. One Last Tweak… Let’s suppose that at a later point we decide that instead of rendering default side-bar content, we just want to hide the side-bar entirely from pages that don’t have any custom sidebar content defined.  We could implement this change simply by making a small modification to our layout so that the sidebar content (and its surrounding HTML chrome) is only rendered if the SideBar section is defined.  The code to do this is below: Razor is flexible enough so that we can make changes like this and not have to modify any of our view templates (nor make any Controller logic changes) to accommodate this.  We can instead make just this one modification to our Layout file and the rest happens cleanly.  This type of flexibility makes Razor incredibly powerful and productive. Summary Razor’s layout capability enables you to define a common site template, and then inherit its look and feel across all the views/pages on your site. Razor enables you to define multiple, non-contiguous, “sections” within layout templates that can be “filled-in” by view templates.  The @section {} syntax for doing this is clean and concise.  Razor also supports the ability to dynamically check at runtime whether a particular section has been defined, and to provide alternate content (or even an alternate layout) in the event that it isn’t specified.  This provides a powerful and easy way to customize the UI of your site - and make it clean and DRY from an implementation perspective. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Generating EF Code First model classes from an existing database

    - by Jon Galloway
Entity Framework Code First is a lightweight way to "turn on" data access for a simple CLR class. As the name implies, the intended use is that you're writing the code first and thinking about the database later. However, I really like the way Entity Framework Code First works, and I want to use it in existing projects and projects with pre-existing databases. For example, MVC Music Store comes with a SQL Express database that's pre-loaded with a catalog of music (including genres, artists, and songs), and while it may eventually make sense to load that seed data from a different source, for the MVC 3 release we wanted to keep using the existing database. While I'm not getting the full benefit of Code First - writing code which drives the database schema - I can still benefit from the simplicity of the lightweight code approach. Scott Guthrie blogged about how to use Entity Framework with an existing database, looking at how you can override the Entity Framework Code First conventions so that it can work with a database which was created following other conventions. That gives you the information you need to create the model classes manually. However, it turns out that with Entity Framework 4 CTP 5, there's a way to generate the model classes from the database schema. Once the grunt work is done, of course, you can go in and modify the model classes as you'd like, but you can save the time and frustration of figuring out things like mapping SQL database types to .NET types. Note that this template requires Entity Framework 4 CTP 5 or later. You can install EF 4 CTP 5 here. Step One: Generate an EF Model from your existing database The code generation system in Entity Framework works from a model. You can add a model to your existing project and delete it when you're done, but I think it's simpler to just spin up a separate project to generate the model classes. When you're done, you can delete the project without affecting your application, or you may choose to keep it around in case you have other database schema updates which require model changes. I chose to add the Model classes to the Models folder of a new MVC 3 application. Right-click the folder and select "Add / New Item..." Next, select ADO.NET Entity Data Model from the Data Templates list, and name it whatever you want (the name is unimportant). Next, select "Generate from database." This is important - it's what kicks off the next few steps, which read your database's schema. Now it's time to point the Entity Data Model Wizard at your existing database. I'll assume you know how to find your database - if not, I covered that a bit in the MVC Music Store tutorial section on Models and Data. Select your database, uncheck the "Save entity connection settings in Web.config" (since we won't be using them within the application), and click Next. Now you can select the database objects you'd like modeled. I just selected all tables and clicked Finish. And there's your model. If you want, you can make additional changes here before going on to generate the code. Step Two: Add the DbContext Generator Like most code generation systems in Visual Studio lately, Entity Framework uses T4 templates which allow for some control over how the code is generated. K Scott Allen wrote a detailed article on T4 Templates and the Entity Framework on MSDN recently, if you'd like to know more. Fortunately for us, there's already a template that does just what we need without any customization.
Right-click a blank space in the Entity Framework model surface and select "Add Code Generation Item..." Select the Code group in the Installed Templates section and pick the ADO.NET DbContext Generator. If you don't see this listed, make sure you've got EF 4 CTP 5 installed and that you're looking at the Code templates group. Note that the DbContext Generator template is similar to the EF POCO template which came out last year, but with "fix up" code (unnecessary in EF Code First) removed. As soon as you do this, you'll see two terrifying Security Warnings - unless you click the "Do not show this message again" checkbox the first time. It will also be displayed (twice) every time you rebuild the project, so I checked the box and no immediate harm befell my computer (fingers crossed!). Here's the payoff: two templates (filenames ending with .tt) have been added to the project, and they've generated the code I needed. The "MusicStoreEntities.Context.tt" template built a DbContext class which holds the entity collections, and the "MusicStoreEntities.tt" template built a separate class for each table I selected earlier. We'll customize them in the next step. I recommend copying all the generated .cs files into your application at this point, since accidentally rebuilding the generation project will overwrite your changes if you leave them there. Step Three: Modify and use your POCO entity classes Note: I made a bunch of tweaks to my POCO classes after they were generated. You don't have to do any of this, but I think it's important that you can - they're your classes, and EF Code First respects that. Modify them as you need for your application, or don't. The Context class derives from DbContext, which is what turns on the EF Code First features. It holds a DbSet for each entity. Think of a DbSet as a simple List, but with Entity Framework features turned on.   //------------------------------------------------------------------------------ // <auto-generated> // This code was generated from a template. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace EF_CodeFirst_From_Existing_Database.Models { using System; using System.Data.Entity; public partial class Entities : DbContext { public Entities() : base("name=Entities") { } public DbSet<Album> Albums { get; set; } public DbSet<Artist> Artists { get; set; } public DbSet<Cart> Carts { get; set; } public DbSet<Genre> Genres { get; set; } public DbSet<OrderDetail> OrderDetails { get; set; } public DbSet<Order> Orders { get; set; } } } It's a pretty lightweight class as generated, so I just took out the comments, set the namespace, removed the constructor, and formatted it a bit. Done. If I wanted, though, I could have added or removed DbSets, overridden conventions, etc. using System.Data.Entity; namespace MvcMusicStore.Models { public class MusicStoreEntities : DbContext { public DbSet<Album> Albums { get; set; } public DbSet<Genre> Genres { get; set; } public DbSet<Artist> Artists { get; set; } public DbSet<Cart> Carts { get; set; } public DbSet<Order> Orders { get; set; } public DbSet<OrderDetail> OrderDetails { get; set; } } } Next, it's time to look at the individual classes. Some of mine were pretty simple - for the Cart class, I just needed to remove the header and clean up the namespace. //------------------------------------------------------------------------------ // // This code was generated from a template. 
// // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // //------------------------------------------------------------------------------ namespace EF_CodeFirst_From_Existing_Database.Models { using System; using System.Collections.Generic; public partial class Cart { // Primitive properties public int RecordId { get; set; } public string CartId { get; set; } public int AlbumId { get; set; } public int Count { get; set; } public System.DateTime DateCreated { get; set; } // Navigation properties public virtual Album Album { get; set; } } } I did a bit more customization on the Album class. Here's what was generated: //------------------------------------------------------------------------------ // // This code was generated from a template. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // //------------------------------------------------------------------------------ namespace EF_CodeFirst_From_Existing_Database.Models { using System; using System.Collections.Generic; public partial class Album { public Album() { this.Carts = new HashSet<Cart>(); this.OrderDetails = new HashSet<OrderDetail>(); } // Primitive properties public int AlbumId { get; set; } public int GenreId { get; set; } public int ArtistId { get; set; } public string Title { get; set; } public decimal Price { get; set; } public string AlbumArtUrl { get; set; } // Navigation properties public virtual Artist Artist { get; set; } public virtual Genre Genre { get; set; } public virtual ICollection<Cart> Carts { get; set; } public virtual ICollection<OrderDetail> OrderDetails { get; set; } } } I removed the header, changed the namespace, and removed some of the navigation properties. One nice thing about EF Code First is that you don't have to have a property for each database column or foreign key. In the Music Store sample, for instance, we build the app up using code first and start with just a few columns, adding in fields and navigation properties as the application needs them. EF Code First handles the columns we've told it about and doesn't complain about the others. Here's the basic class: using System.ComponentModel; using System.ComponentModel.DataAnnotations; using System.Web.Mvc; using System.Collections.Generic; namespace MvcMusicStore.Models { public class Album { public int AlbumId { get; set; } public int GenreId { get; set; } public int ArtistId { get; set; } public string Title { get; set; } public decimal Price { get; set; } public string AlbumArtUrl { get; set; } public virtual Genre Genre { get; set; } public virtual Artist Artist { get; set; } public virtual List<OrderDetail> OrderDetails { get; set; } } } It's my class, not Entity Framework's, so I'm free to do what I want with it. 
I added a bunch of MVC 3 annotations for scaffolding and validation support, as shown below:

using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;
using System.Collections.Generic;

namespace MvcMusicStore.Models
{
    [Bind(Exclude = "AlbumId")]
    public class Album
    {
        [ScaffoldColumn(false)]
        public int AlbumId { get; set; }

        [DisplayName("Genre")]
        public int GenreId { get; set; }

        [DisplayName("Artist")]
        public int ArtistId { get; set; }

        [Required(ErrorMessage = "An Album Title is required")]
        [StringLength(160)]
        public string Title { get; set; }

        [Required(ErrorMessage = "Price is required")]
        [Range(0.01, 100.00, ErrorMessage = "Price must be between 0.01 and 100.00")]
        public decimal Price { get; set; }

        [DisplayName("Album Art URL")]
        [StringLength(1024)]
        public string AlbumArtUrl { get; set; }

        public virtual Genre Genre { get; set; }
        public virtual Artist Artist { get; set; }
        public virtual List<OrderDetail> OrderDetails { get; set; }
    }
}

The end result was that I had working EF Code First model code for the finished application. You can follow along through the tutorial to see how I built up to the finished model classes, starting with simple 2-3 property classes and building up to the full working schema. Thanks to Diego Vega (on the Entity Framework team) for pointing me to the DbContext template.

    Read the article

  • AIX Checklist for stable obiee deployment

    - by user554629
Common AIX configuration issues ( last updated 27 Aug 2012 )

OBIEE is a complicated system with many moving parts and connection points. The purpose of this article is to provide a checklist to discuss OBIEE deployment with your systems administrators. The information in this article is time sensitive, and updated as I discover new issues or details.

What makes OBIEE different?

When Tech Support suggests AIX component upgrades to a stable, locked-down production AIX environment, it is common to get "push back". "Why is this necessary? Why aren't we seeing issues with other software?" It's a fair question that I have often struggled to answer; here are the talking points:

OBIEE is memory intensive. It is the entire purpose of the software to trade memory for repetitive, more expensive database requests across a network.

OBIEE is implemented in C++ and is very dependent on the C++ runtime to behave correctly.

OBIEE is aggressively thread efficient; if atomic operations on a particular architecture do not work correctly, the software crashes.

OBIEE dynamically loads third-party database client libraries directly into the nqsserver process. If the library is not thread-safe, or corrupts process memory, the OBIEE crash happens in an unrelated part of the code. These are extremely difficult bugs to find.

OBIEE software uses 99% common source across multiple platforms: Windows, Linux, AIX, Solaris and HPUX. If a crash happens on only one platform, we begin to suspect other factors: load intensity, system differences, configuration choices, hardware failures. It is rare to have a single product require so many diverse technical skills.

My role in support is to understand system configurations, performance issues, and crashes. An analyst trained in Business Analytics can't be expected to know AIX internals in the depth required to make configuration choices. Here are some guidelines.

1. AIX C++ Runtime must be at version 11.1.0.4
$ lslpp -L | grep xlC.aix
obiee software will crash if xlC.aix.rte is downlevel; this is not a "try it" suggestion. The Nov 2011 11.1.0.4 version is appropriate for all AIX versions ( 5, 6, 7 ). Download from here:
https://www-304.ibm.com/support/docview.wss?uid=swg24031426
No reboot is necessary to install; it can even be installed while applications are using the current version. Restart the apps, and they will pick up the latest version.

2. AIX 5.3 Technology Level 12 is required when running on Power5, 6, 7 processors
AIX 6.1 was introduced with the newer Power chips, and we have seen no issues with 6.1 or 7.1 versions. Customers with an unstable deployment, dozens of unexplained crashes, became stable after the upgrade. If your AIX system is 5.3, the minimum TL level should be at or higher than this:
$ oslevel -s
5300-12-03-1107
IBM typically supports only the two latest versions of AIX ( 6.1 and 7.1, for example ). AIX 5.3 is still supported and popular running in an LPAR.

3. obiee userid limits
$ ulimit -Ha   ( hard limits )
$ ulimit -a    ( default limits )
core file size (blocks)     unlimited
data seg size (kbytes)      unlimited
file size (blocks)          unlimited
max memory size (kbytes)    unlimited
open files                  10240
cpu time (seconds)          unlimited
virtual memory (kbytes)     unlimited
It is best to establish the values in /etc/security/limits; the root user is needed to observe and modify this file. If you modify a limit in your shell, you will need to log in again before you can change it again.
For example:
$ ulimit -c 0
$ ulimit -c 2097151
cannot modify limit: Operation not permitted
$ ulimit -c unlimited
$ ulimit -c
0
There are only two meaningful values for ulimit -c: zero or unlimited. Anything else is likely to produce a truncated core file that cannot be analyzed.

4. Deploy 32-bit or 64-bit?
Early versions of OBIEE offered a 32-bit or 64-bit choice to AIX customers. The 32-bit choice was needed if a database vendor did not supply a 64-bit client library. That's no longer an issue, and beginning with OBIEE 11, 32-bit code is no longer shipped. A common error that leads to "out of memory" conditions is to accept the 32-bit memory configuration choices on 64-bit deployments. The significant configuration choices are:

Maximum process data (heap) size is in an AIX environment variable:
LDR_CNTRL=IGNOREUNLOAD@LOADPUBLIC@PREREAD_SHLIB@MAXDATA=0x...

Two thread stack sizes are set in obiee NQSConfig.INI:
[ SERVER ]
SERVER_THREAD_STACK_SIZE = 0;
DB_GATEWAY_THREAD_STACK_SIZE = 0;

Sort memory in NQSConfig.INI:
[ GENERAL ]
SORT_MEMORY_SIZE = 4 MB ;
SORT_BUFFER_INCREMENT_SIZE = 256 KB ;

Choosing a value for MAXDATA:
0x080000000   2GB  Default maximum 32-bit heap size ( 8 with 7 zeros )
0x100000000   4GB  64-bit breaking even with 32-bit ( 1 with 8 zeros )
0x200000000   8GB  64-bit double 32-bit max
0x400000000  16GB  64-bit safety

Using a 2GB heap size for a 64-bit process will almost certainly lead to an out-of-memory situation. Registers are twice as big ... consume twice as much memory in the heap. Upgrading to a 4GB heap for a 64-bit process is just "breaking even" with 32-bit. A 32-bit process is constrained by the 32-bit virtual addressing limits. Heap memory is used for dynamic requirements of obiee software, thread stacks for each of the configured threads, and sometimes for shared libraries. 64-bit processes are not constrained in this way; extra heap space can be configured for safety against a query that might create a sudden requirement for excessive storage. If the storage is not available, this query might crash the whole server and disrupt existing users. There is no performance penalty on AIX for configuring more memory than required; extra memory can be configured for safety. If there are no other considerations, start with 8GB.

Choosing a value for thread stack size:
Zero is the value documented to select an appropriate default for thread stack size. My preference is to change this to an absolute value, even if you intend to use the documented default; it provides better documentation and removes the "surprise" factor. There are two thread types that can be configured.

GATEWAY is used by a thread pool to call a database client library to establish a DB connection. The default size is 256KB; many customers raise this to 512KB ( no performance penalty for over-configuring ). This value must be set to 1 MB if Teradata connections are used.

SERVER threads are used to run queries. OBIEE uses recursive algorithms during the analysis of query structures which can consume significant thread stack storage. It's difficult to provide guidance on a value that depends on data and complexity. The general notion is to provide more space than you think you need, "double down" and increase the value if you run out, otherwise inspect the query to understand why it is too complex for the thread stack. There are protections built into the software to abort a single user query that is too complex, but the algorithms don't cover all situations.

256 KB  The default 32-bit stack size.
        Many customers increased this to 512KB on 32-bit. A 64-bit server is very likely to crash with this value; the stack contains mostly register values, which are twice as big.
512 KB  The documented 64-bit default. Some early releases of obiee didn't set this correctly, resulting in 256KB stacks.
1 MB    The recommended 64-bit setting. If your system only ever uses 512KB of stack space, there is no performance penalty for using a 1MB stack size.
2 MB    Many large customers use this value for safety. No performance penalty.

nqscheduler does not use the NQSConfig.INI file to set thread stack size. If this process crashes because the thread stack is too small, use this to set 2MB:
export OBI_BACKGROUND_STACK_SIZE=2048

5. Shared libraries are not (shared)
When application libraries are loaded at run-time, AIX makes a decision on whether to load the libraries in a "public" memory segment. If the filesystem library permissions do not have the "Read-Other" permission bit, AIX loads the library into private process memory with two significant side-effects:
* The libraries reduce the heap storage available. Might be significant in 32-bit processes; irrelevant in 64-bit processes.
* Library code is loaded into multiple real pages for execution; one copy for each process.
Multiple execution images is a significant issue for both 32- and 64-bit processes. The "real memory pages" saved by using public memory segments is a minor concern; today's machines typically have plenty of real memory. The real problem with private copies of libraries is that they consume processor cache blocks, which are limited. The same library instructions executing in different real pages will cause memory delays as the i-cache ( instruction cache, 128KB blocks ) is refreshed from real memory. Performance loss because instructions are delayed is something that is difficult to measure without access to low-level cache fault data. The machine just appears to be running slowly for no observable reason.

This is an easy problem to detect, and an easy problem to correct.
Detection: the "genld -l" AIX command produces a list of the libraries used by each process and the AIX memory address where they are loaded. The 32-bit public segment is 13 ( "dxxxxxxx" ); private segments are 2-a. The 64-bit public segment is 9 ( "9xxxxxxxxxxxxxxx" ); the private segment is 8.
genld -l | egrep -v ' d| 9' | sort +2
provides a list of privately loaded libraries.
Repair: chmod o+r <libname>
AIX shared libraries will have a suffix of ".so" or ".a". Another technique is to change all libraries in a selected directory to repair those that might not be currently loaded. The usual directories that need repair are obiee code, httpd code and plugins, database client libraries and java.
chmod o+r /shr/dir/*.a /shr/dir/*.so

6. Configure your system for diagnostics
Production systems shouldn't crash, and yet bad things happen to good software. If obiee software crashes and produces a core, you should configure your system for reliable transfer of the failing conditions to Oracle Tech Support. Here's what we need to be able to diagnose a core file from your system:
* fullcore enabled:  chdev -l sys0 -a fullcore=true
* core naming enabled:  chcore -n on -d
* ulimit must not truncate core; see item 3.
* pstack.sh is used to capture core documentation.
* obidoc is used to capture the current AIX configuration.
* snapcore, the AIX utility, captures the core and libraries. Use the proper syntax:
$ snapcore -r corename executable-fullpath
/tmp/snapcore will contain the .pax.Z output file.
It is compressed.
* If cores are directed to a common directory, ensure the obiee userid can write to the directory. ( chcore -p /cores -d ; chmod 777 /cores )
The filesystem must have sufficient space to hold a crashing obiee application. Use: df -k and check the "Free" column ( not "% Used" ); 8388608 KB is 8GB.

7. Disable Oracle Client Library signal handling
The Oracle DB Client Library is frequently distributed with the sqlplus development kit. By default, the library enables a signal handler, which will document a call stack if the application crashes. The signal handler is not needed, and definitely disruptive to obiee diagnostics. It needs to be disabled. sqlnet.ora is typically located at:
$ORACLE_HOME/network/admin/sqlnet.ora
Add this line at the top of the file:
DIAG_SIGHANDLER_ENABLED=FALSE

8. Disable async query in the RPD connection pool
This might be an obiee 10.1.3.4 issue only ( still checking ). "async query" must be disabled in the connection pools. It was designed to enable query cancellation to a database, and turned out to have too many edge conditions in normal communication that produced random corruption of data and crashes. Please ensure it is turned off in the RPD.

9. Check the AIX error report (errpt)
Errors external to obiee applications can trigger crashes.
$ /bin/errpt -a
Hardware errors ( firmware, adapters, disks ) should be reported to IBM support. All application core files are recorded by AIX; the most recent ones are listed first.
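Pulling the checklist together: a quick pre-flight script, run on each AIX host as the obiee userid, can verify most of the items above in one pass. This is a hedged sketch - the thresholds mirror the checklist guidance, and lsattr/lscore simply display the diagnostic settings discussed in item 6:

#!/bin/sh
# sketch: OBIEE-on-AIX pre-flight checks, per the checklist above
echo "== item 1: C++ runtime (xlC.aix*.rte must be 11.1.0.4) =="
lslpp -L | grep xlC
echo "== item 2: AIX level (5.3 must be >= 5300-12) =="
oslevel -s
echo "== item 3: ulimits (core: 0 or unlimited; open files >= 10240) =="
ulimit -c ; ulimit -n
echo "== item 5: privately loaded libraries (ideally no output) =="
genld -l | egrep -v ' d| 9' | sort +2
echo "== item 6: core diagnostics settings =="
lsattr -El sys0 -a fullcore
lscore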

    Read the article

  • How to safely reboot via First Boot script

    - by unixman
With the cost and performance benefits of the SPARC T4 and SPARC T5 systems undeniably validated, the banking sector is actively moving to Solaris 11. I was recently asked to help a banking customer of ours look at migrating some of their Solaris 10 logic over to Solaris 11. While we've introduced a number of holistic improvements in Solaris 11, in terms of how we ease long-term software lifecycle management, it is important to appreciate that customers may not be able to move all of their Solaris 10 scripts and procedures at once; there are years of scripts that reflect fine-tuned requirements of proprietary banking software that gets layered on top of the operating system.

One of these requirements is to go through a cycle of reboots after the system is installed, in order to ensure appropriate software dependencies and various configuration files are in place. While Solaris 10 introduced a facility that aids here, namely SMF, many of our customers simply haven't yet taken the time to take advantage of it - proceeding with logic that, while functional, isn't taking advantage of all the niceties bundled into Solaris 11 at no extra cost.

When looking at Solaris 11, we recognize that one of the vehicles that bridges the gap between getting the operating system image payload delivered and the customized banking software installed is the notion of a First Boot script. I had a working example of this at one of the Oracle OpenWorld sessions a few years ago - we've since improved our documentation and have introduced sections where this is described in better detail.

If you're looking at this for the first time and you've not worked with IPS and SMF previously, you might get the sense that the tasks are daunting. There is a set of technologies involved that are jointly engineered in order to make the process reliable, predictable and extensible. As you go down the path of writing your first boot script, you'll be faced with a need to wrap it in an SMF service, which is then packaged into an IPS package. The IPS package then needs to be placed onto your IPS repository, in order to subsequently be made available to all of your AI (Automated Install) clients (i.e. the systems that you're installing Solaris and your software onto).

With this blog post, I wanted to create a single place that outlines the entire process (simplistically), and provide a hint of how a good old "at" command may make the requirement of forcing an initial reboot handy. The syntax and references to commands here are based on running this on a version of Solaris 11 that has been updated since its initial release in 2011 (i.e. I am writing this on Solaris 11.1).

Assuming you've built an AI server (see this How To article for an example), you might be asking yourself: "Ok, I've got some logic that I need executed AFTER Solaris is deployed and I need my own little script that would make that happen. How do I go about hooking that script into the Solaris 11 AI framework?" You might start here, in Chapter 13 of the "Installing Oracle Solaris 11.1 Systems" guide, which talks about "Running a Custom Script During First Boot". And as you do, you'll be confronted with a command that might be unfamiliar to you if you're new to Solaris 11, like our dear new friend: svcbundle.

svcbundle is an aid to creating manifests and profiles. It is awesome, but don't let its awesomeness overwhelm you.
(See this How To article by my colleague Glynn Foster for a nice working example.) In order to get your script's logic integrated into the Solaris 11 deployment process, you need to wrap your (shell) script into 2 manifests - an SMF service manifest and an IPS package manifest. ....and if you're new to XML, well then -- buckle up.

We have some examples of small first boot scripts shown here, as templates to build upon. The necessary structure of the script, particularly in leveraging SMF interfaces, is key. I won't go into that here as it is covered nicely in the doc link above. Let's say your script ends up looking like this (btw: if things appear to be cut off in your browser, just select them, copy and paste into your editor and it'll be grabbed - the source gets captured even though the browser may not render it "correctly" - ah, computers).

#!/bin/sh

# Load SMF shell support definitions
. /lib/svc/share/smf_include.sh

# If nothing to do, exit with temporary disable
completed=`svcprop -p config/completed site/first-boot-script-svc:default`
[ "${completed}" = "true" ] && \
    smf_method_exit $SMF_EXIT_TEMP_DISABLE completed "Configuration completed"

# Obtain the active BE name from beadm: The active BE on reboot has an R in
# the third column of 'beadm list' output. Its name is in column one.
bename=`beadm list -Hd|nawk -F ';' '$3 ~ /R/ {print $1}'`
beadm create ${bename}.orig
echo "Original boot environment saved as ${bename}.orig"

# ---- Place your one-time configuration tasks here ----

# For example, if you have to pull some files from your own pre-existing system:
/usr/bin/wget -P /var/tmp/ $PULL_DOWN_ADDITIONAL_SCRIPTS_FROM_A_CORPORATE_SYSTEM
/usr/bin/chmod 755 /var/tmp/$SCRIPTS_THAT_GOT_PULLED_DOWN_IN_STEP_ABOVE
# Clearly the above 2 lines represent some logic that you'd have to customize to fit your needs.
#
# Perhaps additional things you may want to do here might be of use, like
# (gasp!) configuring ssh server for root login and X11 forwarding (for testing), and the like...
#
# Oh and by the way, after we're done executing all of our proprietary scripts we need to reboot
# the system in accordance with our operational software requirements to ensure all layered bits
# get initialized properly and pull in their own modules and components in the right sequence,
# subsequently.
# We need to set a "time bomb" reboot that takes place upon completion of this script.
# We already know that *this* script depends on the multi-user-server SMF milestone, so it should be
# safe for us to schedule a reboot for 5 minutes from now. The "at" job gets scheduled in the queue
# while our little script continues thru the rest of the logic.
/usr/bin/at now + 5 minutes <<REBOOT
/usr/bin/sync
/usr/sbin/reboot
REBOOT

# ---- End of your customizations ----

# Record that this script's work is done
svccfg -s site/first-boot-script-svc:default setprop config/completed = true
svcadm refresh site/first-boot-script-svc:default
smf_method_exit $SMF_EXIT_TEMP_DISABLE method_completed "Configuration completed"

...and you're happy with it and are ready to move on. Where do you go and what do you do? The next step is creating the IPS package for your script. Since running the logic of your script constitutes a service, you need to create a service manifest. This is described in the middle of Chapter 13, under "Creating an IPS package for the script and service".
Assuming the name of your shell script is first-boot-script.sh, you could end up doing the following:

$ cd some_working_directory_for_this_project
$ mkdir -p proto/lib/svc/manifest/site
$ mkdir -p proto/opt/site
$ cp first-boot-script.sh proto/opt/site

Then you would create the service manifest file like so:

$ svcbundle -s service-name=site/first-boot-script-svc \
    -s start-method=/opt/site/first-boot-script.sh \
    -s instance-property=config:completed:boolean:false -o \
    first-boot-script-svc-manifest.xml

...as described here, and place it into the directory hierarchy above. But before you place it into the directory, make sure to inspect the manifest and adjust the appropriate service dependencies. That is to say, you want to properly specify what milestone should be reached before your service runs. There's a <dependency> section that looks like this, before you modify it:

<dependency restart_on="none" type="service" name="multi_user_dependency" grouping="require_all">
    <service_fmri value="svc:/milestone/multi-user"/>
</dependency>

So if you'd like to have your service run AFTER the multi-user-server milestone has been reached (i.e. later, as multi-user-server has more dependencies than multi-user, and our intent to reboot the system may have significant ramifications if done prematurely), you would modify that section to read:

<dependency restart_on="none" type="service" name="multi_user_server_dependency" grouping="require_all">
    <service_fmri value="svc:/milestone/multi-user-server"/>
</dependency>

Save the file and validate it:

$ svccfg validate first-boot-script-svc-manifest.xml

Assuming there are no errors returned, copy the file over into the directory hierarchy:

$ cp first-boot-script-svc-manifest.xml proto/lib/svc/manifest/site

Now that we've created the service manifest (.xml), create the package manifest (.p5m) file named first-boot-script.p5m. Populate it as follows:

set name=pkg.fmri value=first-boot-script@1.0,5.11-0
set name=pkg.summary value="AI first-boot script"
set name=pkg.description value="Script that runs at first boot after AI installation"
set name=info.classification value=\
    "org.opensolaris.category.2008:System/Administration and Configuration"
file lib/svc/manifest/site/first-boot-script-svc-manifest.xml \
    path=lib/svc/manifest/site/first-boot-script-svc-manifest.xml owner=root \
    group=sys mode=0444
dir path=opt/site owner=root group=sys mode=0755
file opt/site/first-boot-script.sh path=opt/site/first-boot-script.sh \
    owner=root group=sys mode=0555

Now we are going to publish this package into an IPS repository. If you don't have one yet, don't worry. You have 2 choices: you can either publish this package into your mirror of the Oracle Solaris IPS repo or create your own customized repo. The best practice is to create your own customized repo, leaving your mirror of the Oracle Solaris IPS repo untouched. From this point, you have 2 choices as well - you can either create a repo that will be accessible by your clients via HTTP or via NFS. Since HTTP is how the default Solaris repo is accessed, we'll go with HTTP for your own IPS repo. This nice and comprehensive How To by Albert White describes how to create multiple internal IPS repos for Solaris 11. We'll zero in on the basic elements for our needs here: we'll create the IPS repo directory structure hanging off a separate ZFS file system, and we'll tie it into an instance of pkg.depotd.
We do this because we want our IPS repo to be accessible to our AI clients through HTTP, and the pkg.depotd SMF service bundled in Solaris 11 can help us do this. We proceed as follows:

# zfs create rpool/export/MyIPSrepo
# pkgrepo create /export/MyIPSrepo
# svccfg -s pkg/server add MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo addpg pkg application
# svccfg -s pkg/server:MyIPSrepo setprop pkg/port=10081
# svccfg -s pkg/server:MyIPSrepo setprop pkg/inst_root=/export/MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo addpg general framework
# svccfg -s pkg/server:MyIPSrepo addpropvalue general/complete astring: MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo addpropvalue general/enabled boolean: true
# svccfg -s pkg/server:MyIPSrepo setprop pkg/readonly=true
# svccfg -s pkg/server:MyIPSrepo setprop pkg/proxy_base = astring: http://your_internal_websrvr/MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo setprop pkg/threads = 200
# svcadm refresh application/pkg/server:MyIPSrepo
# svcadm enable application/pkg/server:MyIPSrepo

Now that the IPS repo is created, we need to publish our package into it:

# pkgsend publish -d ./proto -s /export/MyIPSrepo first-boot-script.p5m

If you find yourself making changes to your script, remember to up-rev the version in the .p5m file (which is your IPS package manifest) and re-publish the IPS package.

Next, you need to go to your AI install server (which might be the same machine) and modify the AI manifest to include a reference to your newly created package. We do that by listing an additional publisher, which would look like this (replacing the IP address and port with your own, from the "svccfg" commands up above):

<publisher name="firstboot">
    <origin name="http://192.168.1.222:10081"/>
</publisher>

Further down, in the <software_data action="install"> section, add:

<name>pkg:/first-boot-script</name>

Make sure to update your Automated Install service with the new AI manifest via the installadm update-manifest command. Don't forget to boot your client from the network to watch the entire process unfold and your script get tested. Once the system makes the initial reboot, the first boot script will be executed and whatever logic you've specified in it should be executed, too, followed by a nice reboot. When the system comes up, your service should stay in a disabled state, as specified by the tailing lines of your SMF script - this is normal and should be left as is, as it helps provide an auditing trail for you.

Because the reboot is quite a significant action for the system, you may want to add additional logic to the script that places and then checks for the presence of certain lock files, in order to avoid doing a reboot unnecessarily. You may also want to, alternatively, remove the SMF service entirely - if you're unsure of the potential for someone to accidentally enable that service - even though its role in life is to only run once, upon the system's first boot.

That is how I spent a good chunk of my pre-Halloween time this week, hope yours was just as SPARCkly^H^H^H^H fun!
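Once the client has gone through its first boot and the scheduled reboot, a quick sanity check confirms the whole chain worked; a sketch using the package and service names from this example:

# pkg list first-boot-script
    ( did the package make it onto the client? )
# svcs -l site/first-boot-script-svc:default
    ( the service should exist and be disabled )
# svcprop -p config/completed site/first-boot-script-svc:default
    ( should print: true )
# beadm list
    ( the saved <bename>.orig boot environment should be listed )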

    Read the article

  • Oracle Database 12c New Features: Information Lifecycle Management (ILM) Storage Enhancements

    - by Liu Maclean
    Oracle Database 12c introduces a set of Information Lifecycle Management (ILM) Storage Enhancements. The headline ILM features are Heat Map and Automatic Data Optimization (ADO, automatic data placement). 12c also adds Online Move Datafile: a datafile can now be moved while it is open and in use, with no impact on running applications.

Restrictions on Automatic Data Optimization and Heat Map in the current release (12.1.0.1), including in a multitenant container database (CDB):

Row-level policies for ADO are not supported for Temporal Validity. Partition-level ADO and compression are supported if partitioned on the end-time columns.
Row-level policies for ADO are not supported for in-database archiving. Partition-level ADO and compression are supported if partitioned on the ORA_ARCHIVE_STATE column.
Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level.
ADO does not perform checks for storage space in a target tablespace when using storage tiering.
ADO is not supported on tables with object types or materialized views.
ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler. If a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later.
Policies for ADO are only run in the Oracle Scheduler maintenance windows. Outside of the maintenance windows all policies are stopped. The only exceptions are those jobs for rebuilding indexes in ADO offline mode.
ADO has restrictions related to moving tables and table partitions.

ADO policies can be defined at the row or segment level through CREATE TABLE or ALTER TABLE. A policy names a condition (for example, a period of no access or no modification) and an action (compression, or movement to another storage tier); policies can be inherited from the tablespace level, and can be listed, disabled and deleted per table or per partition. Some examples:

CREATE TABLE sales_ado
(PROD_ID NUMBER NOT NULL,
 CUST_ID NUMBER NOT NULL,
 TIME_ID DATE NOT NULL,
 CHANNEL_ID NUMBER NOT NULL,
 PROMO_ID NUMBER NOT NULL,
 QUANTITY_SOLD NUMBER(10,2) NOT NULL,
 AMOUNT_SOLD NUMBER(10,2) NOT NULL
)
ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
AFTER 6 MONTHS OF NO ACCESS;

SQL> SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
  2  FROM USER_ILMPOLICIES;

POLICY_NAME          POLICY_TYPE                ENABLED
-------------------- -------------------------- --------------
P41                  DATA MOVEMENT              YES

ALTER TABLE sales MODIFY PARTITION sales_1995
ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
AFTER 6 MONTHS OF NO ACCESS;

SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
FROM USER_ILMPOLICIES;

POLICY_NAME              POLICY_TYPE   ENABLE
------------------------ ------------- ------
P1                       DATA MOVEMENT YES
P2                       DATA MOVEMENT YES

/* You can disable an ADO policy with the following */
ALTER TABLE sales_ado ILM DISABLE POLICY P1;

/* You can delete an ADO policy with the following */
ALTER TABLE sales_ado ILM DELETE POLICY P1;

/* You can disable all ADO policies with the following */
ALTER TABLE sales_ado ILM DISABLE_ALL;

/* You can delete all ADO policies with the following */
ALTER TABLE sales_ado ILM DELETE_ALL;

/* You can disable an ADO policy in a partition with the following */
ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DISABLE POLICY P2;

/* You can delete an ADO policy in a partition with the following */
ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DELETE POLICY P2;

ILM basics: before ADO can act on a policy, the database needs to know how the data is actually being used. That is the job of
activity tracking, which comes in two flavors that can be enabled per table:

SEGMENT-LEVEL tracking records the most recent access and modification times for the segment as a whole.
ROW-LEVEL tracking records modification times at row granularity (aggregated per block).

Examples:

1. Enable segment-level activity tracking:

ALTER TABLE interval_sales ILM ENABLE ACTIVITY TRACKING SEGMENT ACCESS

This enables segment-level activity tracking on the INTERVAL_SALES table, recording the time of the most recent access against the segment.

2. Track creation and modification times:

ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (CREATE TIME , WRITE TIME);

3. Track read times:

ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (READ TIME);

In 12.1.0.1, tracking is controlled by the HEAT_MAP parameter, which can be set at the system or session level. To enable heat map tracking:

ALTER SYSTEM SET HEAT_MAP = ON;

When HEAT_MAP is enabled, the database tracks reads and writes against segments; objects in the SYSTEM and SYSAUX tablespaces are not tracked. To disable it:

ALTER SYSTEM SET HEAT_MAP = OFF;

The HEAT_MAP parameter also controls Automatic Data Optimization (ADO): ADO requires Heat Map to be enabled. The V$HEAT_MAP_SEGMENT view exposes the real-time heat map data:

SQL> select * from V$heat_map_segment;

no rows selected

SQL> alter session set heat_map=on;

Session altered.

SQL> select * from scott.emp;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000                    20
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

SQL> select * from v$heat_map_segment;

OBJECT_NAME          SUBOBJECT_NAME             OBJ#   DATAOBJ# TRACK_TIM SEG SEG FUL LOO     CON_ID
-------------------- -------------------- ---------- ---------- --------- --- --- --- --- ----------
EMP                                            92997      92997 23-JUL-13 NO  NO  YES NO           0

The v$heat_map_segment view is built on the fixed table X$HEATMAPSEGMENT. V$HEAT_MAP_SEGMENT displays real-time segment access information; its columns are:

OBJECT_NAME     VARCHAR2(128)  Name of the object
SUBOBJECT_NAME  VARCHAR2(128)  Name of the subobject
OBJ#            NUMBER         Object number
DATAOBJ#        NUMBER         Data object number
TRACK_TIME      DATE           Timestamp of current activity tracking
SEGMENT_WRITE   VARCHAR2(3)    Indicates whether the segment has write access: (YES or NO)
SEGMENT_READ    VARCHAR2(3)    Indicates whether the segment has read access: (YES or NO)
FULL_SCAN       VARCHAR2(3)    Indicates whether the segment has full table scan: (YES or NO)
LOOKUP_SCAN     VARCHAR2(3)    Indicates whether the segment has lookup scan: (YES or NO)
CON_ID          NUMBER         The ID of the container to which the data pertains. Possible values include:
                               0: rows containing data that pertain to the entire CDB, or rows in non-CDBs
                               1: rows containing data that pertain to only the root
                               n: the applicable container ID for the rows containing data
The Heat Map feature is not supported in CDBs in Oracle Database 12c, so the value in this column can be ignored.

Historical heat map data is persisted to disk and is visible through DBA_HEAT_MAP_SEGMENT, which is fed from the underlying HEAT_MAP_STAT$ base table.

A worked Automatic Data Optimization example follows.

Use case 1:

SQL> alter system set heat_map=on;

System altered.

Install the demo SCOTT schema if needed: http://www.askmaclean.com/archives/scott-schema-script.html

SQL> grant all on dbms_lock to scott;

Grant succeeded.
SQL> grant dba to scott;

Grant succeeded.

@ilm_setup_basic C:\APP\XIANGBLI\ORADATA\MACLEAN\ilm.dbf
@tktgilm_demo_env_setup

SQL> connect scott/tiger ;
Connected.

SQL> select count(*) from scott.employee;

  COUNT(*)
----------
      3072

1 row selected.

SQL> set serveroutput on
SQL> exec print_compression_stats('SCOTT','EMPLOYEE');

Compression Stats
------------------
Uncmpressed          : 3072
Adv/basic compressed : 0
Others               : 0

PL/SQL procedure successfully completed.

All 3072 rows are currently uncompressed. Now add a row-level ADO compression policy:

alter table employee ilm add policy
row store compress advanced row
after 3 days of no modification
/

SQL> set serveroutput on
SQL> execute list_ilm_policies;

--------------------------------------------------
Policies defined for SCOTT
--------------------------------------------------
Object Name------ : EMPLOYEE
Subobject Name--- :
Object Type------ : TABLE
Inherited from--- : POLICY NOT INHERITED
Policy Name------ : P1
Action Type------ : COMPRESSION
Scope------------ : ROW
Compression level : ADVANCED
Tier Tablespace-- :
Condition type--- : LAST MODIFICATION TIME
Condition days--- : 3
Enabled---------- : YES
--------------------------------------------------

PL/SQL procedure successfully completed.

SQL> select sysdate from dual;

SYSDATE
--------------
29-JUL-13

SQL> execute set_back_chktime(get_policy_name('EMPLOYEE',null,'COMPRESSION','ROW','ADVANCED',3,null,null),'EMPLOYEE',null,6);

Object check time reset ...
--------------------------------------
Object Name    : EMPLOYEE
Object Number  : 93123
D.Object Numbr : 93123
Policy Number  : 1
Object chktime : 23-JUL-13 08.13.42.000000 AM
Distnt chktime : 0
--------------------------------------

PL/SQL procedure successfully completed.

The policy check time has been moved back 6 days: set_back_chktime is a demo helper that "time travels" the policy evaluation clock, so we don't have to wait for the condition to age naturally.

Flush the caches and open the maintenance window so the ADO background jobs run:

alter system flush buffer_cache;
alter system flush buffer_cache;
alter system flush shared_pool;
alter system flush shared_pool;

SQL> execute set_window('MONDAY_WINDOW','OPEN');

Set Maint. Window OPEN
-----------------------------
Window Name : MONDAY_WINDOW
Enabled?    : TRUE
Active?     : TRUE
-----------------------------

PL/SQL procedure successfully completed.

SQL> exec dbms_lock.sleep(60) ;

PL/SQL procedure successfully completed.

SQL> exec print_compression_stats('SCOTT', 'EMPLOYEE');

Compression Stats
------------------
Uncmpressed          : 338
Adv/basic compressed : 2734
Others               : 0

PL/SQL procedure successfully completed.

Most of the rows have now been compressed by the policy ( Adv/basic compressed : 2734 ).

SQL> col object_name for a20
SQL> select object_id,object_name from dba_objects where object_name='EMPLOYEE';

 OBJECT_ID OBJECT_NAME
---------- --------------------
     93123 EMPLOYEE

SQL> execute list_ilm_policy_executions ;

--------------------------------------------------
Policies execution details for SCOTT
--------------------------------------------------
Policy Name------ : P22
Job Name--------- : ILMJOB48
Start time------- : 29-JUL-13 08.37.45.061000 AM
End time--------- : 29-JUL-13 08.37.48.629000 AM
-----------------
Object Name------ : EMPLOYEE
Sub_obj Name----- :
Obj Type--------- : TABLE
-----------------
Exec-state------- : SELECTED FOR EXECUTION
Job state-------- : COMPLETED SUCCESSFULLY
Exec comments---- :
Results comments- : ---
--------------------------------------------------

PL/SQL procedure successfully completed.

ILMJOB48 is the scheduler job that executed the policy; in 12.1.0.1 these run under J00x slave processes. The evaluation itself is done by MMON slaves (M00x), roughly every 15 minutes:

select sample_time,program,module,action
from v$active_session_history
where action ='KDILM background EXEcution'
order by sample_time;

29-JUL-13 08.16.38.369000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.17.38.388000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.17.39.390000000 AM
ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.23.38.681000000 AM ORACLE.EXE (M002) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.32.38.968000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.33.39.993000000 AM ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.33.40.993000000 AM ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.36.40.066000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.37.42.258000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.37.43.258000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.37.44.258000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
29-JUL-13 08.38.42.386000000 AM ORACLE.EXE (M001) MMON_SLAVE KDILM background EXEcution

select distinct action from v$active_session_history where action like 'KDILM%'

KDILM background CLeaNup
KDILM background EXEcution

Close the maintenance window and clean up the demo:

SQL> execute set_window('MONDAY_WINDOW','CLOSE');

Set Maint. Window CLOSE
-----------------------------
Window Name : MONDAY_WINDOW
Enabled?    : TRUE
Active?     : FALSE
-----------------------------

PL/SQL procedure successfully completed.

SQL> drop table employee purge ;

Table dropped.

spool ilm_usecase_1_cleanup.lst
@ilm_demo_cleanup ;
spool off

    Read the article

  • Firewall predefined rule property cannot be modified

    - by Sami-L
    Using a standalone Windows Server 2012 Standard edition (no Active Directory), I tried to set up simple Remote Desktop access with a custom port number, but I could not modify the port number in the firewall inbound rule. When I open the inbound rule's properties I get the following message: "This is a predefined rule and some of its properties cannot be modified". I have tried to set it up like this: New Rule - Predefined drop-down list - Remote Desktop - check-mark the rules - Allow the connection, but I still get "This is a predefined rule and some of its properties cannot be modified". Thank you in advance.
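A common workaround, since the predefined rule's port can't be edited: change the RDP listener port in the registry and add your own (non-predefined) inbound rule for the new port. A sketch, assuming port 3390 as the custom port, run from an elevated command prompt; restart the Remote Desktop Services service (or reboot) afterwards:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber /t REG_DWORD /d 3390 /f
netsh advfirewall firewall add rule name="Remote Desktop (custom port 3390)" dir=in action=allow protocol=TCP localport=3390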

    Read the article

  • EMC ESRS stops working when it is VMotioned

    - by makerofthings7
    EMC is on site and told me: the ESRS SAN monitoring solution will cease to function if that host is VMotioned. In case anyone doesn't know, ESRS is a dial-home solution that works over IP. An EMC SecurID is required to add or modify the list of devices that are monitored, and the ESRS software is installed on the customer premises. Question: if ESRS truly fails to work, as the EMC engineer stated and as our own experience suggests, what is it within VMware that is exposed to the virtualized host that allows this behavior to happen?

    Read the article

  • How to secure JBoss application server using SELinux

    - by Jakub Elias
    I want to secure a RedHat 5.4 application server with SELinux (targeted policy) and have several questions:
1. Where can I get the SELinux sources (/etc/selinux//src/policy/)? There seems to be no such package on the install CD.
2. How do I restrict user rights (for example, so that the user jboss could not modify /etc/my.cnf)?
3. How do I configure the JBoss application server to work under SELinux?
Although I have read many documents from the NSA, the whole topic is still not clear to me. What I want is to basically protect the filesystem in case one account is broken. I cannot find any materials about securing a JBoss server using either a chroot jail, ACLs or SELinux ....
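For question 3, one iterative approach on RHEL 5 is to exercise JBoss in permissive mode, collect the AVC denials, and build a local policy module from them with audit2allow. A rough sketch - it assumes auditd is running, and the module name jbosslocal is made up for illustration:

# setenforce 0                 ( permissive while collecting denials )
# ...start JBoss and exercise the application...
# grep avc /var/log/audit/audit.log | audit2allow -M jbosslocal
# semodule -i jbosslocal.pp    ( load the generated module )
# setenforce 1                 ( back to enforcing )

For question 2, note that SELinux only ever adds restrictions on top of the normal file permissions - a root-owned /etc/my.cnf with mode 0644 already cannot be modified by the jboss user.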

    Read the article

  • I am looking for a good site and I don’t mind paying?. [closed]

    - by Vijeta Negi
    Hi, I'm a programmer. While writing my code I prepared a manual and also took a printout. For some reason the original file is gone, and the only thing I am left with is this printed material, which I also need to modify. I have scanned it and it is in TIFF format now. Does anyone know a good site where I can convert a scanned manual into text format? I am looking for a good site and I don't mind paying.

    Read the article

  • KDE4: Share Nepomuk data

    - by Frank Becker
    Hi all, is it possible to share Nepomuk data? I have several users here, and the users should be able to read and modify the tags and comments for each file with public access. It should also be possible to tag and comment on the files in each private home directory. Is this possible with the current KDE 4.3.2? Thanks for your help. Frank

    Read the article

  • Access IIS Admin without local administrator rights

    - by Carl
    We are running Microsoft Windows Server 2003 with IIS. We would like to give our developers access to manage IIS (through IIS Admin) but do not want them to be administrators of the entire machine. Putting them in the "Power Users" group does not seem to work. What permissions should we grant our developers to allow them to manage IIS (e.g. add websites, modify app pools, etc.) without giving them full admin rights to the server?

    Read the article

  • Insufficient Permissions on UNC Path for Physical Path in IIS7

    - by Eric C
    I've got a multi-server setup where Server A hosts the HTML files and Server B runs IIS 7.5. I've specified a UNC path as the physical path of the website on Server B. When I try to hit localhost, I receive the following error: Cannot read configuration file due to insufficient permissions. I am able to browse and modify files in the UNC path from Server B. I'm guessing it has something to do with IIS_IUSRS on Server B not having permissions, but I'm unsure how to grant them access to the shared directory on Server A.
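One common fix is to make the virtual directory connect to the share as a specific account instead of using pass-through authentication, then grant that account rights on Server A. A sketch - the domain, account name and password below are placeholders, and the attribute names come from IIS 7.5's virtualDirectory configuration:

%windir%\system32\inetsrv\appcmd set vdir "Default Web Site/" /userName:MYDOMAIN\svc_web /password:P@ssw0rd

Then, on Server A, grant that account read access on both the share and the NTFS folder, e.g.:

icacls D:\webfiles /grant MYDOMAIN\svc_web:(OI)(CI)RX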

    Read the article

  • How to change default boot order ubuntu 10.04 ?

    - by Sako Christian
    Hello, how can I change the default boot order in Ubuntu 10.04 from Ubuntu to Windows 7? I already edited the grub file with sudo gedit /etc/default/grub, set GRUB_DEFAULT=4, and updated grub with sudo update-grub. I even installed a graphical tool to reorder the boot entries (sudo startupmanager). But still, after a restart the default boot choice is Ubuntu ... Thank you, Sako Christian P.S: I am using Ubuntu 10.04 with grub version 1.98
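One thing to check: GRUB2 counts menu entries from 0, so GRUB_DEFAULT=4 selects the fifth line of the menu - if Windows isn't exactly the fifth entry, the index silently picks the wrong one. A sketch for verifying the index, or for using the entry's title instead of a number (the Windows title below is only an example; copy yours exactly as grep prints it):

grep ^menuentry /boot/grub/grub.cfg      # list the titles; the first is index 0
sudo gedit /etc/default/grub             # e.g. GRUB_DEFAULT="Windows 7 (loader) (on /dev/sda1)"
sudo update-grub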

    Read the article

  • Modifying IP Whois results?

    - by Rob
    Okay, I recently bought a dedicated server from santrex.net. If you go here: http://domaintoip.com/ip.php?domain=188.72.215.27 it displays the whois information for that IP, santrex.net. I want to modify my whois information so that it displays my information instead, and doesn't link to santrex or santrex's ISP at all. Is this possible? If so, how would I go about doing it?

    Read the article

  • Windows 7: Creating a password-protected task (NOT a programming question)

    - by Matthias
    Hello, I would like to configure a task like "child control" software, so it would hibernate the PC at certain times. Is it possible to prevent modification (here: pausing) of a task by requiring the admin password, even though the currently logged-in (and only) user is the admin account itself? (Do you know of any child control software that does NOT require an additional account yet is able to hibernate the system at certain times?) Thanks a lot! Matthias

    Read the article

  • Nvidia Gefore 9800 GT Low FPS [closed]

    - by AskaGamer
    I have an Nvidia GeForce 9800 GT and would like to improve its performance, especially with regard to FPS. Is there some setting on my Nvidia control panel that I can modify to change my FPS? Does anyone know of a good overclocking tool that will work the way it should?

    Read the article

  • Are these the correct instructions to backup TFS 2010?

    - by Keith Sirmons
    Howdy, I am working on a backup plan for TFS 2010. I found this site http://msdn.microsoft.com/en-us/library/ms253070(VS.100).aspx that details a complex backup solution. Has anyone tested these procedures and can confirm they are accurate? There are a couple of steps that violate the SharePoint rule "Do Not Modify the Database!" Thank you, Keith

    Read the article

  • Overwrite old hmtl - with new html -

    - by SolarAngellis
    Is there some way to modify multiple HTML files in multiple folders? I want to apply a new HTML design template over an old HTML website. The website is quite big, and doing this manually will take me quite a long time. For example, take Template A (the old site) and Template B (the new design): Template B is a complete website, but I want a program to overwrite parts of the code. I need the title, meta tags, and content elements to remain, while the rest of the template changes. I know about Notepad++ but I don't know how to set it up to give me the right output.
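One scriptable approach is to stamp each old page's title and meta tags into a fresh copy of the new template. A rough shell sketch - it assumes the new template contains a placeholder line <!--HEAD--> where those tags should land, that filenames contain no spaces, and that the title/meta tags each sit on their own line (regex over HTML is fragile, so treat this as a starting point only):

for f in $(find oldsite -name '*.html'); do
  out="newsite/${f#oldsite/}"
  mkdir -p "$(dirname "$out")"
  sed -n '/<title>/p; /<meta/p' "$f" > /tmp/head.$$        # keep the title + meta lines
  sed -e "/<!--HEAD-->/r /tmp/head.$$" -e '/<!--HEAD-->/d' template-b.html > "$out"
done
rm -f /tmp/head.$$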

    Read the article

  • How to copy remote machines text to local machines clipboard through SSH?

    - by recluze
    I work on a remote machine through ssh. I have a very large text file there (approx. 500 lines) which I usually need to modify, then copy the contents of that file and paste it in my local browser. The way I usually do this is cat filename and then select/copy the ssh output. That takes a lot of time. I was wondering if there is a utility that will put the remote file's contents in my local clipboard.
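The usual trick is to run cat on the remote side and pipe its output into a local clipboard tool - a sketch, with host and path as placeholders (xclip on Linux, pbcopy on a Mac):

ssh user@remotehost 'cat /path/to/filename' | xclip -selection clipboard
ssh user@remotehost 'cat /path/to/filename' | pbcopy

If the file changes often, mounting the remote directory locally with sshfs and copying from a local editor is another option.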

    Read the article

  • hide toolbar buttons in Chrome / Chromium

    - by romant
    I'm a heavy keyboard user, and I never hit the back/forward/refresh or favourites buttons in my browsers. In Safari, I can choose which of those buttons appear. In Chrome, I'd like to see only the address bar and the page. Is this possible?

    Read the article

< Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >