Search Results

Search found 59036 results on 2362 pages for 'fake data'.

Page 123/2362 | < Previous Page | 119 120 121 122 123 124 125 126 127 128 129 130  | Next Page >

  • Fast Data: Go Big. Go Fast.

    - by Dain C. Hansen
    For those of you who may have missed it, today’s second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci from EMC outlined the human face of big data with real examples of how big data is transforming our world. And no, not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes as well as folks just trying to get a better hair cut. Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle’s Cloud and, more specifically, Oracle’s expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle’s Cloud applications. What this means is that Oracle’s Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It’s an important element of our strategy at Oracle, supporting the idea that whether your requirements are private or public, Oracle has a solution in the Cloud for all of your applications, and we give you more deployment choice than any other vendor. If this wasn’t enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi took an important step in the definition of this term, explaining that he believes it’s a convergence of four essential technology elements: event filtering and business rules, with Oracle Event Processing; data transformation and loading, with Oracle Data Integrator; real-time replication and integration, with Oracle GoldenGate; and analytics and data discovery, with Oracle Business Intelligence. Each of these four elements can be considered (and architected) together on a single integrated platform that can help customers integrate any type of data (structured, semi-structured), leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results. Fast data processing (and especially real-time) has always been our credo at Oracle with each one of these products in Fusion Middleware. For example, Oracle GoldenGate continues to be made even faster with the recent 11g R2 release of Oracle GoldenGate, which gives us even greater optimization for Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we’re seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we’re seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using Event Processing and Big Data together. If you missed any of these sessions and keynotes, not to worry. There are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast where we will outline some of these new breakthroughs in Data Integration technologies for Big Data, Cloud, and Real-time in more detail.

    Read the article

  • How to Visualize your Audit Data with BI Publisher?

    - by kanichiro.nishida
      Do you know how many reports on your BI Publisher server were accessed yesterday? Or how many users accessed the reports yesterday, or what the average number of users accessing the reports is during the week vs. the weekend, or morning vs. afternoon? With BI Publisher 11G, you can now audit your users’ report access and understand the state of the reporting environment at the server, user, or report level. In the previous post I talked about BI Publisher’s auditing functionality and how to enable it so that BI Publisher can start collecting such data. (How to Audit and Monitor BI Publisher Reports Access?) Now, how can you visualize such auditing data to gain a better understanding and more insights? With the Fusion Middleware Audit Framework you have the option to store the auditing data in a database instead of a log file, which is the default. Once you enable the database storage option, you have your auditing data (that is, user report access data) in database tables, and from there it’s a no-brainer to start visualizing the data, creating reports, analyzing, and sharing with BI Publisher. So, first, let’s take a look at how to enable the database storage option for the auditing data. How to Feed the Auditing Data into the Database First you need to create a database schema for the Fusion Middleware Audit Framework with RCU (Repository Creation Utility). If you have already installed BI Publisher 11G you should be familiar with RCU. It creates any database schema necessary to run Fusion Middleware products, including the BI ones. And you can use the same RCU that you used for your BI or BI Publisher installation to create this Audit schema. Create Audit Schema with RCU Here are the steps: Go to $RCU_HOME/bin and execute the ‘rcu’ command. Choose Create at the starting screen and click Next. Enter your database details and click Next. Choose the option to create a new prefix, for example ‘BIP’, ‘KAN’, etc. Select 'Audit Services' from the list of schemas. Click Next and accept the tablespace creation. Click Finish to start the process. After this, the following three Audit-related schemas should be created in your database: <prefix>_IAU (e.g. KAN_IAU), <prefix>_IAU_APPEND (e.g. KAN_IAU_APPEND), <prefix>_IAU_VIEWER (e.g. KAN_IAU_VIEWER). Setup Datasource at WebLogic After you create a database schema for your auditing data, you need to create a JDBC connection on your WebLogic Server so the Audit Framework can access the database schema that was created with RCU in the previous step. Connect to the Oracle WebLogic Server administration console: http://hostname:port/console (e.g. http://report.oracle.com:7001/console) Under Services, click the Data Sources link. Click ‘Lock & Edit’ so that you can make changes. Click New –> ‘Generic Datasource’ to create a new data source. Enter the following details for the new data source:  Name: Enter a name such as Audit Data Source-0.  JNDI Name: jdbc/AuditDB  Database Type: Oracle  Click Next and select ‘Oracle's Driver (Thin XA) Versions: 9.0.1 or later’ as the Database Driver (if you’re using an Oracle database), and click Next. The Connection Properties page appears. Enter the following information: Database Name: Enter the name of the database (SID) to which you will connect. Host Name: Enter the hostname of the database.  Port: Enter the database port.  Database User Name: This is the name of the audit schema that you created in RCU. The suffix is always IAU for the audit schema. 
For example, if you gave the prefix as ‘BIP’, then the schema name would be ‘BIP_IAU’.  Password: This is the password for the audit schema that you created in RCU.  Click Next. Accept the defaults, and click Test Configuration to verify the connection. Click Next. Check the listed servers where you want to make this JDBC connection available. Click ‘Finish’! After that, make sure you click ‘Activate Changes’ at the top left-hand side to put the new JDBC connection into effect. Register your Audit Data Storing Database to your Domain Finally, you can register the JNDI/JDBC datasource as your auditing data storage with Fusion Middleware Control (EM). Here are the steps: 1. Log in to Fusion Middleware Control. 2. Navigate to the WebLogic Domain, right-click on ‘bifoundation…..’, select Security, then Audit Store. 3. Click the searchlight icon next to the Datasource JNDI Name field. 4. Select the Audit JNDI/JDBC datasource you created in the previous step in the pop-up window and click OK. 5. Click Apply to continue. 6. Restart all the WebLogic Servers in the domain. After this, BI Publisher should start feeding all the auditing data into the database table called ‘IAU_BASE’. Try logging in to BI Publisher and opening a couple of reports; you should see the activity audited in the ‘IAU_BASE’ table. If it is not working, you might want to check the log file, which is located at $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/AdminServer-diagnostic.log, to see if there is any error. Once you have the data in the database table, it’s time to visualize it with BI Publisher reports! Create a First BI Publisher Auditing Report Register the Auditing Datasource as a JNDI datasource The first thing you need to do is register the audit datasource (JNDI/JDBC connection) you created in the previous step as a JNDI data source in BI Publisher. Since it is a JDBC connection registered as JNDI, you don’t need to create a new JDBC connection by typing the connection URL, username/password, etc. You can just register it using the JNDI name (e.g. jdbc/AuditDB). Log in to BI Publisher as Administrator (e.g. weblogic). Go to the Administration page. Click ‘JNDI Connection’ under Data Sources and click ‘New’. Type the Data Source Name and JNDI Name. The JNDI Name is the one you created in the WebLogic Console as the auditing datasource (e.g. jdbc/AuditDB). Click ‘Test Connection’ to make sure the datasource connection works. Provide appropriate roles so that report developers or viewers can share this data source to view reports. Click ‘Apply’ to save. Create Data Model Select Data Model from the toolbar menu ‘New’. Set ‘Default Data Source’ to the audit JNDI data source you created in the previous step. Select ‘SQL Query’ for your data set. Use the Query Builder to build a query or just type a SQL query. Either way, the table you want to report against is ‘IAU_BASE’. This IAU_BASE table also contains the auditing data for other products running on the WebLogic Server such as JPS, OID, etc. So, if you care only about BI Publisher, you will want to filter using the ‘IAU_COMPONENTTYPE’ column, which contains the product name (e.g. ’xmlpserver’ for BI Publisher). Here is my sample SQL query. 
select "IAU_BASE"."IAU_COMPONENTTYPE" as "IAU_COMPONENTTYPE", "IAU_BASE"."IAU_EVENTTYPE" as "IAU_EVENTTYPE", "IAU_BASE"."IAU_EVENTCATEGORY" as "IAU_EVENTCATEGORY", "IAU_BASE"."IAU_TSTZORIGINATING" as "IAU_TSTZORIGINATING", to_char("IAU_TSTZORIGINATING", 'YYYY-MM-DD') IAU_DATE, to_char("IAU_TSTZORIGINATING", 'DAY') as IAU_DAY, to_char("IAU_TSTZORIGINATING", 'HH24') as IAU_HH24, to_char("IAU_TSTZORIGINATING", 'WW') as IAU_WEEK_OF_YEAR, "IAU_BASE"."IAU_INITIATOR" as "IAU_INITIATOR", "IAU_BASE"."IAU_RESOURCE" as "IAU_RESOURCE", "IAU_BASE"."IAU_TARGET" as "IAU_TARGET", "IAU_BASE"."IAU_MESSAGETEXT" as "IAU_MESSAGETEXT", "IAU_BASE"."IAU_FAILURECODE" as "IAU_FAILURECODE", "IAU_BASE"."IAU_REMOTEIP" as "IAU_REMOTEIP" from "KAN3_IAU"."IAU_BASE" "IAU_BASE" where "IAU_BASE"."IAU_COMPONENTTYPE" = 'xmlpserver' Once you have saved a sample XML for this data model, you can create a report with it. Create Report Now you can use one of BI Publisher’s layout options to design the report layout and visualize the auditing data. I’m a big fan of the Online Layout Editor; it’s just so easy and simple to create reports, and on top of that, all the reports created with the Online Layout Editor have the Interactive View with automatic data linking and filtering, without any setting or coding. If you haven’t checked out the Interactive View or the Online Layout Editor you might want to look at these previous blog posts. (Interactive Reporting with BI Publisher 11G, Interactive Master Detail Report Just A Few Clicks Away!) But of course, you can use other layout design options such as RTF templates. Here are some sample screenshots of my report design with the Online Layout Editor. Visualize and Gain More Insights about your Customers (Users)! Now you can visualize your auditing data to get a better understanding of, and gain more insights into, the reporting environment you manage. It has actually been helping me personally to answer questions like the ones below.  How many reports were accessed or opened yesterday, today, last week? Who is accessing which report at what time? What are the time windows when most of the report access happens? What are the most viewed reports? Who are the active users? What is the report access or user access trend for the last month, last 6 months, last 12 months, etc.? I was talking with one of the best concierges in the world at a hotel the other day, and he was telling me that the best concierges know their customers inside out, and therefore they can provide a very personal service that is customized to meet each customer’s specific needs. Well, the same is true when it comes to administering and managing your reporting environment, right? The best way to serve your customers (report users, including both viewers and developers) is to understand how they use it, what they use, and when they use it. Auditing is not just about compliance; it’s also a way to improve customer service. The BI Publisher 11G auditing feature enables just that, helping you understand your customers better. Happy customer service, and be the best reporting concierge! P.S. Please share with us what other information about auditing would be helpful for you! As always, any feedback is of great value and an inspiration for us!  
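    To go one step further, a query along the lines of the sketch below answers the “how many reports were accessed per day, and by how many users” questions directly. It is illustrative only: it reuses the same KAN3_IAU schema prefix and IAU_BASE columns as the sample query above, and depending on which events you care about you may also want to filter on IAU_EVENTTYPE.

        -- Daily access counts and distinct users from the BI Publisher audit rows (illustrative sketch)
        select to_char("IAU_TSTZORIGINATING", 'YYYY-MM-DD') as IAU_DATE,
               count(*)                                     as ACCESS_COUNT,
               count(distinct "IAU_INITIATOR")              as DISTINCT_USERS
        from   "KAN3_IAU"."IAU_BASE"
        where  "IAU_COMPONENTTYPE" = 'xmlpserver'
        group by to_char("IAU_TSTZORIGINATING", 'YYYY-MM-DD')
        order by 1 desc;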

    Read the article

  • SQL SERVER – Installing SQL Server Data Tools and SSRS

    - by Pinal Dave
    This example is from the Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. If you have installed SQL Server, but are missing the Data Tools or Reporting Services Double-click the SQL Server 2012 installation media. Click the Installation link on the left to view the Installation options. Click the top link New SQL Server stand-alone installation or add features to an existing installation. Follow the SQL Server Setup wizard until you get to the Installation Type screen. At that screen, select Add features to an existing instance of SQL Server 2012. Click Next to move to the Feature Selection page. Select Reporting Services – Native and SQL Server Data Tools. If the Management Tools have not been installed, go ahead and choose them as well. Continue through the wizard and reboot the computer at the end of the installation if instructed to do so. Configure Reporting Services If you installed Reporting Services during the installation of the SQL Server instance, SSRS will be configured automatically for you. If you install SSRS later, then you will have to go back and configure it as a subsequent step. Click Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools > Reporting Services Configuration Manager > Connect on the Reporting Services Configuration Connection dialog box. On the left-hand side of the Reporting Services Configuration Manager, click Database. Click the Change Database button on the right side of the screen. Select Create a new report server database and click Next. Click through the rest of the wizard accepting the defaults. This wizard creates two databases: ReportServer, used to store report definitions and security, and ReportServerTempDB which is used as scratch space when preparing reports for user requests. Now click Web Service URL on the left-hand side of the Reporting Services Configuration Manager. Click the Apply button to accept the defaults. If the Apply button has been grayed out, move on to the next step. This step sets up the SSRS web service. The web service is the program that runs in the background that communicates between the web page, which you will set up next, and the databases. The final configuration step is to select the Report Manager URL link on the left. Accept the default settings and click Apply. If the Apply button was already grayed out, this means the SSRS was already configured. This step sets up the Report Manager web site where you will publish reports. You may be wondering if you also must install a web server on your computer. SQL Server does not require that the Internet Information Server (IIS), the Microsoft web server, be installed to run Report Manager. Click Exit to dismiss the Reporting Services Configuration Manager dialog box. Tomorrow’s Post Tomorrow’s blog post will show how to create your first report using the Report Wizard. If you want to learn SSRS in easy to simple words – I strongly recommend you to get Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS
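    As a quick sanity check after the configuration wizard finishes, a query like the following (assuming the default database names were accepted in the wizard) confirms that both Reporting Services databases were created on the instance:

        -- Lists the two SSRS databases the Change Database wizard creates by default
        SELECT name, create_date
        FROM sys.databases
        WHERE name IN ('ReportServer', 'ReportServerTempDB');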

    Read the article

  • database design help for game / user levels / progress

    - by sprugman
    Sorry this got long and all prose-y. I'm creating my first truly gamified web app and could use some help thinking about how to structure the data. The Set-up Users need to accomplish tasks in each of several categories before they can move up a level. I've got my Users, Tasks, and Categories tables, and a UserTasks table which joins the three. ("User 3 has added Task 42 in Category 8. Now they've completed it.") That's all fine and working wonderfully. The Challenge I'm not sure of the best way to track the progress in the individual categories toward each level. The "business" rules are: You have to achieve a certain number of points in each category to move up. If you get the number of points needed in Cat 8, but still have other work to do to complete the level, any new Cat 8 points count toward your overall score, but don't "roll over" into the next level. The number of Categories is small (five currently) and unlikely to change often, but by no means absolutely fixed. The number of points needed to level-up will vary per level, probably by a formula, or perhaps a lookup table. So the challenge is to track each user's progress toward the next level in each category. I've thought of a few potential approaches: Possible Solutions Add a column to the users table for each category and reset them all to zero each time a user levels-up. Have a separate UserProgress table with a row for each category for each user and the number of points they have. (Basically a Many-to-Many version of #1.) Add a userLevel column to the UserTasks table and use that to derive their progress with some kind of SUM statement. Their current level will be a simple int in the User table. Pros & Cons (1) seems like by far the most straightforward, but it's also the least flexible. Perhaps I could use a naming convention based on the category ids to help overcome some of that. (With code like "select cats; for each cat, get the value from Users.progress_{cat.id}.") It's also the one where I lose the most data -- I won't know which points counted toward leveling up. I don't have a need in mind for that, so maybe I don't care about that. (2) seems complicated: every time I add or subtract a user or a category, I have to maintain the other table. I foresee synchronization challenges. (3) Is somewhere in between -- cleaner than #2, but less intuitive than #1. In order to find out where a user is, I'd have mildly complex SQL like: SELECT categoryId, SUM(points) from UserTasks WHERE userId={user.id} & countsTowardLevel={user.level} groupBy categoryId Hmm... that doesn't seem so bad. I think I'm talking myself into #3 here, but would love any input, advice or other ideas. P.S. Sorry for the cross-post. I wrote this up on SO and then remembered that there was a game dev-focused one. Curious to see if I get different answers one place than the other....
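    For what it's worth, here is a rough T-SQL sketch of option #3. The column names are hypothetical (the question doesn't show the real UserTasks definition, and the points value may actually live on the Tasks table), and the query is simply the one from the question with AND and GROUP BY written out:

        -- Hypothetical shape of UserTasks for option #3: stamp each completed task
        -- with the level the user was on, then derive per-category progress from it.
        CREATE TABLE UserTasks (
            userId            INT      NOT NULL,
            taskId            INT      NOT NULL,
            categoryId        INT      NOT NULL,
            points            INT      NOT NULL,           -- points earned for this task
            countsTowardLevel INT      NOT NULL,           -- user's level when the task was completed
            completedAt       DATETIME NOT NULL DEFAULT GETDATE()
        );

        -- Progress toward the current level, per category
        SELECT categoryId, SUM(points) AS pointsThisLevel
        FROM UserTasks
        WHERE userId = @userId
          AND countsTowardLevel = @userLevel
        GROUP BY categoryId;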

    Read the article

  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
    I’ve mentioned before that I’m not a fan of the word “Cloud”. It’s too marketing-oriented, gimmicky and non-specific. A better definition (in many cases) is “Distributed Computing”. That means that some or all of the computing functions are handled somewhere other than under your specific control. But there is a current use of the word “Cloud” that does not necessarily mean that the computing is done somewhere else. In fact, it’s a vector of Cloud Computing that can better be termed “Utility Computing”. This has to do with the provisioning of a computing resource. That means the setup, configuration, management, balancing and so on that is needed so that a user – which might actually be a developer – can do some computing work. To that person, the resource is just “there” and works like they expect, like the phone system or any other utility. The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It’s got a cool new trendy term – “Private Cloud”, but the fact is, if you have your setup automated, the HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a “Cloud Provider”. A good example here is your E-Mail system. Your users – pretty much your whole company – just log into e-mail and expect it to work. To them, you are the “Cloud” provider. On your side, the more you automate and provision the system, the more you act like a Cloud Provider. Another example is a database server. In this case, the “end user” is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time. The more this is automated, the more you’re acting like a Cloud Provider. Lots of companies help you do this in your own data centers, from VMWare to IBM and many others. Microsoft's offering in this is based around System Center – they have a “cloud in a box” provisioning system that’s actually pretty slick. The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows and SQL Azure, we handle this in multiple ways – and we're happy to share how we do it. It’s not magic, and the algorithms for balancing (like the one we started with called Paxos) are well known. The key is the knowledge, infrastructure and people. Sure, you can do this yourself, and in many cases such as top-secret or private systems, you probably should. But there are times where you should evaluate using Azure or other vendors, or even multiple vendors to spread your risk. All of this should be based on client need, not on what you know how to do already. So congrats on your new role as a “Cloud Provider”. If you have an E-mail system or a database platform, you can just put that right on your resume.

    Read the article

  • How to display Sharepoint Data in a Windows Forms Application

    - by Michael M. Bangoy
    In this post I'm going to demonstrate how to retrieve Sharepoint data and display it on a Windows Forms Application. 1. Open Visual Studio 2010 and create a new Project. 2. In the project template select Windows Forms Application. 3. In order to communicate with Sharepoint from a Windows Forms Application we need to add the 2 Sharepoint Client DLL located in c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI. 4. Select the Microsoft.Sharepoint.Client.dll and Microsoft.Sharepoint.Client.Runtime.dll. (Your solution should look like the one below) 5. Open the Form1 in design view and from the Toolbox menu Add a Button, TextBox, Label and DataGridView on the form. 6. Next double click on the Load Button, this will open the code view of the form. Add Using statement to reference the Sharepoint Client Library then create two method for the Load Site Title and LoadList. See below:   using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Security; using System.Windows.Forms; using SP = Microsoft.SharePoint.Client;   namespace ClientObjectModel {     public partial class Form1 : Form     {         // url of the Sharepoint site         const string _context = "theurlofthesharepointsite";         public Form1()         {             InitializeComponent();         }         private void Form1_Load(object sender, EventArgs e)         {                    }         private void getsitetitle()         {             SP.ClientContext context = new SP.ClientContext(_context);             SP.Web _site = context.Web;             context.Load(_site);             context.ExecuteQuery();             txttitle.Text = _site.Title;             context.Dispose();         }                 private void loadlist()         {             using (SP.ClientContext _clientcontext = new SP.ClientContext(_context))             {                 SP.Web _web = _clientcontext.Web;                 SP.ListCollection _lists = _clientcontext.Web.Lists;                 _clientcontext.Load(_lists);                 _clientcontext.ExecuteQuery();                 DataTable dt = new DataTable();                 DataColumn column;                 DataRow row;                 column = new DataColumn();                 column.DataType = Type.GetType("System.String");                 column.ColumnName = "List Title";                 dt.Columns.Add(column);                 foreach (SP.List listitem in _lists)                 {                     row = dt.NewRow();                     row["List Title"] = listitem.Title;                     dt.Rows.Add(row);                 }                 dataGridView1.DataSource = dt;             }                   }       private void cmdload_Click(object sender, EventArgs e)         {             getsitetitle();             loadlist();          }     } } 7. That’s it. Hit F5 to run the application then click the Load Button. Your screen should like the one below. Hope this helps.

    Read the article

  • Where to store front-end data for "object calculator"

    - by Justin Grahn
    I recently have completed a language library that acts as a giant filter for food items, and flows a bit like this :Products -> Recipes -> MenuItems -> Meals and finally, upon submission, creates an Order. I have also completed a database structure that stores all the pertinent information to each class, and seems to fit my needs. The issue I'm having is linking the two. I imagined all of the information being local to each instance of the product, where there exists one backend user who edits and manipulates data, and multiple front end users who select their Meal(s) to create an Order. Ideally, all of the front end users would have all of this information stored locally within the library, and would update the library on startup from a database. How should I go about storing the data so that I can load it into the library every time the user opens the application? Do I package a database onboard and just load and populate every time? The only method I can currently conceive of doing this, even if I only have 500 possible Product objects, would require me to foreach the list for every Product that I need to match to a Recipe and so on and so forth every time I relaunch the program, which seems like a lot of wasteful loading. Here is a general flow of my architecture: Products: public class Product : IPortionable { public Product(string n, uint pNumber = 0) { name = n; productNumber = pNumber; } public string name { get; set; } public uint productNumber { get; set; } } Recipes: public Recipe(string n, decimal yieldAmt, Volume.Unit unit) { name = n; yield = new Volume(yieldAmt, unit); yield.ConvertUnit(); } /// <summary> /// Creates a new ingredient object /// </summary> /// <param name="n">Name</param> /// <param name="yieldAmt">Recipe Yield</param> /// <param name="unit">Unit of Yield</param> public Recipe(string n, decimal yieldAmt, Weight.Unit unit) { name = n; yield = new Weight(yieldAmt, unit); } public Recipe(Recipe r) { name = r.name; yield = r.yield; ingredients = r.ingredients; } public string name { get; set; } public IMeasure yield; public Dictionary<IPortionable, IMeasure> ingredients = new Dictionary<IPortionable,IMeasure>(); MenuItems: public abstract class MenuItem : IScalable { public static string title = null; public string name { get; set; } public decimal maxPortionSize { get; set; } public decimal minPortionSize { get; set; } public Dictionary<IPortionable, IMeasure> ingredients = new Dictionary<IPortionable, IMeasure>(); and Meal: public class Meal { public Meal(int guests) { guestCount = guests; } public int guestCount { get; private set; } //TODO: Make a new MainCourse class that holds pasta and Entree public Dictionary<string, int> counts = new Dictionary<string, int>(){ {MainCourse.title, 0}, {Side.title , 0}, {Appetizer.title, 0} }; public List<MenuItem> items = new List<MenuItem>(); The Database just stores and links each of these basic names and amounts together usings ID's (RecipeID, ProductID and MenuItemID)

    Read the article

  • T-SQL (SCD) Slowly Changing Dimension Type 2 using a merge statement

    - by AtulThakor
    Working recently on a stored procedure which loads records into a data warehouse, I found that the existing record was being expired using an update statement followed by an insert to add the new active record. Playing around with the merge statement, you can actually expire the current record and insert a new record within one clean statement. This is how the statement works: we do the normal merge statement to insert a record when there is no match; if we match the record, we update the existing record by expiring and deactivating it. At the end of the merge statement we use the output clause to output the staging values for the update, then we wrap the whole merge statement within an insert statement and add the new active rows for the records which were updated (expired). I’ve added the full script at the bottom so you can paste it and play around.   1: INSERT INTO ExampleFactUpdate 2: (PolicyID, 3: Status) 4: SELECT -- these columns are returned from the output statement 5: PolicyID, 6: Status 7: FROM 8: ( 9: -- merge statement on unique id in this case Policy_ID 10: MERGE dbo.ExampleFactUpdate dp 11: USING dbo.ExampleStag s 12: ON dp.PolicyID = s.PolicyID 13: WHEN NOT MATCHED THEN -- when we can't match the record we insert a new record and this is all that happens 14: INSERT (PolicyID,Status) 15: VALUES (s.PolicyID, s.Status) 16: WHEN MATCHED --if it already exists 17: AND ExpiryDate IS NULL -- and the Expiry Date is null 18: THEN 19: UPDATE 20: SET 21: dp.ExpiryDate = getdate(), --we set the expiry on the existing record 22: dp.Active = 0 -- and deactivate the existing record 23: OUTPUT $Action MergeAction, s.PolicyID, s.Status -- the output statement returns a merge action which can 24: ) MergeOutput -- be insert/update/delete, on our example where a record has been updated (or expired in our case 25: WHERE -- we'll filter using a where clause 26: MergeAction = 'Update'; -- here   Complete source for example 1: if OBJECT_ID('ExampleFactUpdate') > 0 2: drop table ExampleFactUpdate 3: go 4: Create Table ExampleFactUpdate( 5: ID int identity(1,1), 6: PolicyID varchar(100), 7: Status varchar(100), 8: EffectiveDate datetime default getdate(), 9: ExpiryDate datetime, 10: Active bit default 1 11: ) 12:  13:  14: insert into ExampleFactUpdate( 15: PolicyID, 16: Status) 17: select 18: 1, 19: 'Live' 20:  21: /*Create Staging Table*/ 22: if OBJECT_ID('ExampleStag') > 0 23: drop table ExampleStag 24: go 25:  26: /*Create example staging table*/ 27: Create Table ExampleStag( 28: PolicyID varchar(100), 29: Status varchar(100)) 30:  31: --add some data 32: insert into ExampleStag( 33: PolicyID, 34: Status) 35: select 36: 1, 37: 'Lapsed' 38: union all 39: select 40: 2, 41: 'Quote' 42:  43: select * 44: from ExampleFactUpdate 45:  46: select * 47: from ExampleStag 48:  49:  50: INSERT INTO ExampleFactUpdate 51: (PolicyID, 52: Status) 53: SELECT -- these columns are returned from the output statement 54: PolicyID, 55: Status 56: FROM 57: ( 58: -- merge statement on unique id in this case Policy_ID 59: MERGE dbo.ExampleFactUpdate dp 60: USING dbo.ExampleStag s 61: ON dp.PolicyID = s.PolicyID 62: WHEN NOT MATCHED THEN -- when we can't match the record we insert a new record and this is all that happens 63: INSERT (PolicyID,Status) 64: VALUES (s.PolicyID, s.Status) 65: WHEN MATCHED --if it already exists 66: AND ExpiryDate IS NULL -- and the Expiry Date is null 67: THEN 68: UPDATE 69: SET 70: dp.ExpiryDate = getdate(), --we set the expiry on the existing record 71: dp.Active = 0 -- and deactivate the existing record 72: OUTPUT $Action MergeAction, s.PolicyID, s.Status -- the output statement returns a merge action which can 73: ) MergeOutput -- be insert/update/delete, on our example where a record has been updated (or expired in our case 74: WHERE -- we'll filter using a where clause 75: MergeAction = 'Update'; -- here 76:  77:  78: select * 79: from ExampleFactUpdate 80: 

    Read the article

  • Passing data between the VirtualBox Host and the Guest

    - by Fat Bloke
    Here's a good question: "How can you figure out the VM name from within the VM itself?" While this data is not automatically available, the general purpose, and very powerful VirtualBox "GuestProperty" APIs can be used from the host and guest to pass arbitrary data, in key/value pairs format, in and out of the guest. Note that this does require that the VirtualBox Guest Additions have been installed in the guest. To play with this, try using the "VBoxManage" command line on your VirtualBox host machine, and "VBoxControl" in the guest. Host syntax VBoxManage guestproperty get <vmname>|<uuid> <property> [--verbose] VBoxManage guestproperty set <vmname>|<uuid> <property> [<value> [--flags <flags>]] VBoxManage guestproperty enumerate <vmname>|<uuid> [--patterns <patterns>] VBoxManage guestproperty wait <vmname>|<uuid> <patterns> [--timeout <msec>] [--fail-on-timeout]   Guest syntax VBoxControl.exe guestproperty        get <property> [-verbose] VBoxControl.exe guestproperty        set <property> [<value> [-flags <flags>]] VBoxControl.exe guestproperty        enumerate [-patterns <patterns>] VBoxControl.exe guestproperty        wait <patterns>                                      [-timestamp <last timestamp>]                                      [-timeout <timeout in ms>  So to solve our problem above, we set the vm name in the Host system on an arbitrary key like this: $ VBoxManage guestproperty set "Windows 7 (x64)" /MyData/VMname "Windows 7 (x64)" And within the guest we can use: C:\Program Files\Oracle\VirtualBox Guest Additions>VBoxControl.exe guestproperty get /MyData/VMname Oracle VM VirtualBox Guest Additions Command Line Management Interface Version 4.1.14 (C) 2008-2012 Oracle Corporation All rights reserved. Value: Windows 7 (x64) The GuestProperty API is pretty powerful, so for the interested, get more info in the User Manual. - FB 

    Read the article

  • How to save data from model without any association in cakephp [on hold]

    - by Abhishek
    I have base model in that i use dataManipulation method for updation in my code ,so i want to save data in receipt, receiptline model also in OpeningBankStatement.but i create association for receipt and receiptline not for OpeningBankStatement. So i want to save data this OpeningBankStatement model without any association.my demo code is. Array ( [Receipt] => Array ( [ID] => 566 [ObjectType] => 84 [TXNName] => bbnm [TXNDate] => 03-06-2014 [BranchID] => 1 [Narration1] => 267 [Narration] => Cheque Received [ExecutiveID] => 805 [AccountType] => 104 [Account] => 68 [ReferenceNo] => [TXNCurrencyID] => 3 [ExchangeRate] => 1.00000 [ManualAdiustment] => 0 [RevisionNumber] => 1 [CompanyID] => 1 [Status] => 633 ) [ReceiptLine] => Array ( [0] => Array ( [TXNID] => 566 [LineNo] => 0 [LineType_072] => 429 [BranchID] => 1 [AccountID] => 68 [ContactID] => [Amount] => 0 [CancelAmount] => 0 [OpenAmount] => 0 [Narration] => Cheque Received [CreatedBy] => 229 [ModifiedBy] => 229 [CreatedDate] => 2014-06-03 00:00:00 [ModifiedDate] => 2014-06-03 00:00:00 [Status] => 1 [RevisionNumber] => 1 [RowState] => [tmpInstrumentDate] => ) [1] => Array ( [LineNo] => 0 [RowState] => 436 [TXNID] => 0 [BranchID] => 1 [ContactID] => [AccountID] => 68 [Narration] => Cheque Received [Amount] => 0 [RevisionNumber] => 1 [LineType_072] => 460 [CancelAmount] => 0 [OpenAmount] => 0 [Status] => 1 ) ) [OpeningBankStatement] => Array ( [ObjectType] => 131 [TXNSeries] => 1 [TXNNo] => 12345 [TXNName] => bbnm [TXNDate] => 03-06-2014 [CompanyID] => 1 [AccountID] => 68 [ExecutiveID] => 805 [Narration] => Cheque Received [ReferenceNo] => [ParentObjectType] => 84 [ParentTXNID] => 1 [CancelledBy] => 1 [CancelledDate] => 2014-02-02 [CancellationRemarks] => hfg [Status] => 1 [RevisionNumber] => 1 ) ) By any dyanamic model association or callback method it solve? suggest solution.

    Read the article

  • Using data input from pop-up page to current with partial refresh

    - by dpDesignz
    I'm building a product editor webpage using visual C#. I've got an image uploader popping up using fancybox, and I need to get the info from my fancybox once submitted to go back to the first page without clearing any info. I know I need to use ajax but how would I do it? <%@ Page Language="C#" AutoEventWireup="true" CodeFile="uploader.aspx.cs" Inherits="uploader" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head id="Head1" runat="server"> <title></title> </head> <body style="width:350px; height:70px;"> <form id="form1" runat="server"> <asp:ScriptManager ID="ScriptManager1" runat="server"> </asp:ScriptManager> <div> <div style="width:312px; height:20px; background-color:Gray; color:White; padding-left:8px; margin-bottom:4px; text-transform:uppercase; font-weight:bold;">Uploader</div> <asp:FileUpload id="fileUp" runat="server" /> <asp:Button runat="server" id="UploadButton" text="Upload" onclick="UploadButton_Click" /> <br /><asp:Label ID="txtFile" runat="server"></asp:Label> <div style="width:312px; height:15px; background-color:#CCCCCC; color:#4d4d4d; padding-right:8px; margin-top:4px; text-align:right; font-size:x-small;">Click upload to insert your image into your product</div> </div> </form> </body> </html> CS so far using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Configuration; // Add to page using System.Web.UI; using System.Web.UI.WebControls; using System.Data; // Add to the page using System.Data.SqlClient; // Add to the page using System.Text; // Add to Page public partial class uploader : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } protected void UploadButton_Click(object sender, EventArgs e) { if (fileUp.HasFile) try { fileUp.SaveAs("\\\\london\\users\\DP006\\Websites\\images\\" + fileUp.FileName); string imagePath = fileUp.PostedFile.FileName; } catch (Exception ex) { txtFile.Text = "ERROR: " + ex.Message.ToString(); } finally { } else { txtFile.Text = "You have not specified a file."; } } }

    Read the article

  • DB2 Integrity Checks and Exception Tables

    - by imthefirestartr
    I am working on planning a migration of a DB2 8.1 database from a horrible IBM encoding to UTF-8 to support further languages etc. I am encountering an issue that I am stuck on. A few notes on this migration: We are using db2move to export and load the data and db2look to get the details fo the database (tablespaces, tables, keys etc). We found the loading process worked nicely with db2move import, however, the data takes 7 hours to load and this was unacceptable downtime when we actually complete the conversion on the main database. We are now using db2move load, which is much faster as it seems to simply throw the data in without integrity checks. Which leads to my current issue. After completing the db2move load process, several tables are in a check pending state and require integrity checks. Integrity checks are done via the following: set integrity for . immediate checked This works for most tables, however, some tables give an error: DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL3603N Check data processing through the SET INTEGRITY statement has found integrity violation involving a constraint with name "blah.SQL120124110232400". SQLSTATE=23514 The internets tell me that the solution to this issue is to create an exception table based on the actual table and tell the SET INTEGRITY command to send any exceptions to that table (as below): db2 create table blah_EXCEPTION like blah db2 SET INTEGRITY FOR blah IMMEDIATE CHECKED FOR EXCEPTION IN blah USE blah_EXCEPTION NOW, here is the specific issue I am having! The above forces all the rows with issues to the specified exception table. Well that's just super, buuuuuut I can not lose data in this conversion, its simply unacceptable. The internets and IBM has a vague description of sending the violations to the exception tables and then "dealing with the data" that is in the exception table. Unfortunately, I am not clear what this means and I was hoping that some wise individual knows and could help me out and let me know how I can retrieve this data from these tables and place the data in the original/proper table rather than these exception tables. Let me know if you have any questions. Thanks!
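    For what it's worth, the usual pattern for "dealing with the data" is roughly the sketch below — not verified against DB2 8.1, and blah/blah_EXCEPTION are the placeholder names from the question. Because the exception table was created with LIKE, the columns line up one-to-one, but the rows in it genuinely violate a constraint, so the offending values (or the parent rows they reference) have to be corrected before moving them back, or the next integrity check will simply throw them out again:

        -- 1. Inspect what was thrown out during SET INTEGRITY
        SELECT * FROM blah_EXCEPTION;

        -- 2. Correct the offending values in blah_EXCEPTION (or add the missing parent rows)

        -- 3. Move the corrected rows back and re-run the integrity check
        INSERT INTO blah SELECT * FROM blah_EXCEPTION;
        DELETE FROM blah_EXCEPTION;
        SET INTEGRITY FOR blah IMMEDIATE CHECKED;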

    Read the article

  • How to copy files from HDD to HDD with integrity checking

    - by RafaelM
    I am moving data from an almost dead HDD to an external USB drive using Linux, because for some reason Windows cannot see the data. I want to copy a large amount of data over from the HDD to the USB drive with integrity checking. I thought about copying everything over and then checking with md5summer, but this would take a really long time because it's a lot of data and this is not a very powerful PC. What tool can I use to do this on Linux?

    Read the article

  • Suggestions for sharing and using data between Ubuntu and Windows 7 dual boot

    - by wizjany
    Note: TL;DR, scroll to bottom for summary. I recently set up my computer for a dual boot between Ubuntu 9.10 and Windows 7. My current drive setup is as follows. | A1 | A2 | | B1 | B2 | B3 | A1: 100 mb, windows 7 "System Reserved" boot partition A2: 230 gb, data section, this one needs to be shared between the operating systems B1: 125 gb, windows 7 OS B2: 123 gb, ubuntu OS B3: 2gb, linux swap space Pretty much i want to have my documents, music, pictures, videos, etc accessible from both operating systems. My first attempt involved making the data (a2) partition NTFS, and moving my home folder from ubuntu to the data partition. However, as I read NTFS does not work nice with permission, and it messed up my home folder. My next idea is one of the following: 1) format the data partition to ext2/3/4 and move my home folder from linux there, and get a driver to read ext partitions in windows 7. The problem with that is that most of the ext drivers/software are not compatible with windows 7 or do not integrate with windows explorer (I really don't want to open a separate software window just to access my data, plus it's probably not compatible with other software.) http://www.fs-driver.org/ looks promising, but I'm not sure how it works with ext4 and windows 7 (not officially supported, when trying in vista compatibility mode, it tells me I need to format the ext drive to use it). My next idea, 2) keep the home folder in ubuntu where it is, but create symlinks for the Documents, Music, etc folders to an NTFS formatted Data (A2) partition, and add those locations to the windows 7 libraries. I'm not totally sure how the permissions would work out, but it should be fine since it's only the documents, music, etc and not the important config files in the rest of /home/user/. Correct me if I'm wrong. Currently, symlinks is my best idea, although i'm not sure how it will work. Any suggestions, additions to my ideas, links, pointers, whatever would be greatly appreciated. Even if it means i should reformat both my drives and repartition (2 250gb drives if you want to suggest a setup for that), I won't be too opposed if that's the best suggestion (I've gone through the format/install/format/reinstall process 5 times over the past 3 days, once more won't hurt me). TL;DR, summary: I have two hard drives. One is partitioned for Ubuntu and Windows 7, the second one I want accessible to both operating systems to store documents, music, pictures, videos, etc. Suggestions on how to set up the data drive please =) P.S. bonus if I can get an apache server document root folder working between the two OS's as well (permissions could become very complicated, so don't worry too much about that) P.P.S. Related question, but data viewing is one way: http://superuser.com/questions/84586/partition-scheme-and-size-for-dual-boot-windows-7-and-ubuntu-9-10-with-separate-p

    Read the article

  • Maximum speed of data transmission

    - by JavaRocky
    All major data carriers these days use fiber optic links. And data transmission occurs at the speed of light, is that correct? If data transmission does indeed occur at the speed of light, is it also correct to assume that data transmission will never ever be faster?

    Read the article

  • WebHost Manager - Apache's VHost isn't matching the DNS entry

    - by Trans
    I've used CPanel's WebHost Manager to create a new host on my VPS. I then used my HOSTS file to point fake.com to the relevant IP address. The problem I'm having now is, Apache isn't recognizing the VHost,or something, as it's just loading the default entry and 404'ing every document I try to GET. Here's the VHost entry NameVirtualHost 0.0.0.209:80 NameVirtualHost 0.0.0.211:80 <VirtualHost 0.0.0.209:80> ServerName fake.com ServerAlias www.fake.com DocumentRoot /home/fakecom/public_html ServerAdmin [email protected] ## User fakecom # Needed for Cpanel::ApacheConf <IfModule mod_suphp.c> suPHP_UserGroup fakecom fakecom </IfModule> <IfModule !mod_disable_suexec.c> SuexecUserGroup fakecom fakecom </IfModule> CustomLog /usr/local/apache/domlogs/fake.com-bytes_log "%{%s}t %I .\n%{%s}t %O ." CustomLog /usr/local/apache/domlogs/fake.com combined Options -ExecCGI -Includes RemoveHandler cgi-script .cgi .pl .plx .ppl .perl ScriptAlias /cgi-bin/ /home/fakecom/public_html/cgi-bin/ </VirtualHost> I've Google'd this profusely and all that's being returned is 'DNS errors. Wait for it to propagate'. That's obviously not my problem, since I'm using HOSTS. What else could be causing this? :/ EDIT: Forgot to mention. http://fake.com/~fakecom/test.html loads just fine. So the fake.com is pointing to the right IP.

    Read the article

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions and would like to be able to restore files that were deleted regardless of the reason. I have read about the versioning feature for S3 buckets, but I cannot seem to find if recovery is possible for files with no modification history. See the AWS docs here on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded, but never modified, and then deleted. Are files deleted in this scenario recoverable? Then, we thought we may just backup the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But, it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to backup S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time. Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on Day 1. Then using something like duplicity (http://duplicity.nongnu.org/) we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage, and create a new backup bucket using a new, current copy of the original bucket...and repeat this process. This seems like it would work and minimize the storage / transfer costs, but I'm not sure if duplicity allows bucket-to-bucket transfers directly without bringing data down to the controlling client first. So, I guess there are a couple questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!

    Read the article

  • T-SQL for autogrowth of multiple data files

    - by ddono25
    I can't seem to figure out the problems with my script to alter SQL Server 2008 database and file growth. There are two data files and a log file, all which need to have Autogrowth ON. Does this look completely wrong? Thanks! USE MASTER GO ALTER DATABASE BigDB MODIFY FILE ( NAME = BIGDBPPE, FILENAME = "H:\MSSQL\Data\BigDB.mdf", MAXSIZE = UNLIMITED, FILEGROWTH = 2000MB) USE MASTER GO ALTER DATABASE BigDB MODIFY FILE ( NAME = BIGDBPPE1, FILENAME = "K:\MSSQL\Data\BigDB_data1.ndf", MAXSIZE = UNLIMITED, FILEGROWTH = 2000MB) USE MASTER GO ALTER DATABASE BigDB MODIFY FILE ( NAME = BIGDBPPE_log, FILENAME = "O:\MSSQL\Data\BigDB_log.ldf", MAXSIZE = UNLIMITED, FILEGROWTH = 200MB) GO
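    For comparison, a version along these lines should be closer to what SQL Server expects: string literals in T-SQL take single quotes rather than double quotes, and since only the growth settings are changing, the FILENAME clause can simply be left out (the logical file names below are assumed from the script above):

        USE master;
        GO
        -- Only NAME plus the properties being changed are required
        ALTER DATABASE BigDB MODIFY FILE (NAME = BIGDBPPE,     MAXSIZE = UNLIMITED, FILEGROWTH = 2000MB);
        ALTER DATABASE BigDB MODIFY FILE (NAME = BIGDBPPE1,    MAXSIZE = UNLIMITED, FILEGROWTH = 2000MB);
        ALTER DATABASE BigDB MODIFY FILE (NAME = BIGDBPPE_log, MAXSIZE = UNLIMITED, FILEGROWTH = 200MB);
        GO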

    Read the article

  • Extract structured data from many MS Word files

    - by Mark
    I have ~160 MS Word files that contain structured data. The data is formatted identically across all files and resides in a tabular format. I'd like to extract the data into a database, XML or just an aggregate table without opening each of the files independently. Is there a tool or method I can use to extract this data?

    Read the article

  • SQL SERVER – Retrieve and Explore Database Backup without Restoring Database – Idera virtual databas

    - by pinaldave
    I recently downloaded Idera’s SQL virtual database, and tested it. There are a few things about this tool which caught my attention. My Scenario It is quite common in real life that sometimes observing or retrieving older data is necessary; however, it had changed as time passed by. The full database backup was 40 GB in size, and, to restore it on our production server, it usually takes around 16 to 22 minutes, depending on the load server that is usually present. This range in time varies from one server to another as per the configuration of the computer. Some other issues we used to have are the following: When we try to restore a large 40-GB database, we needed at least that much space on our production server. Once in a while, we even had to make changes in the restored database, and use the said changed and restored database for our purpose, making it more time-consuming. My Solution I have heard a lot about the Idera’s SQL virtual database tool.. Well, right after we started to test this tool, we found out that it really delivers what it promises. Using this software was very easy and we were able to restore our database from backup in less than 2 minutes, sparing us from the usual longer time of 16–22 minutes. The needful was finished in a total of 10 minutes. Another interesting observation is that there is no need to have an additional space for restoring the database. For complete database restoration, the single additional MB on the drive is not required anymore. We can use the database in the same way as our regular database, and there is no need for any additional configuration and setup. Let us look at the most relevant points of this product based on my initial experience: Quick restoration of the database backup No additional space required for database restoration virtual database has no physical .MDF or .LDF The database which is restored is, in fact, the backup file converted in the virtual database. DDL and DML queries can be executed against this virtually restored database. Regular backup operation can be implemented against virtual database, creating a physical .bak file that can be used for future use. There was no observed degradation in performance on the original database as well the restored virtual database. Additional T-SQL queries can be let off on the virtual database. Well, this summarizes my quick review. And, as I was saying, I am very impressed with the product and I plan to explore it more. There are many features that I have noticed in this tool, which I think can be very useful if properly understood. I had taken a few screenshots using my demo database afterwards. Let us see what other things this tool can do besides the mentioned activities. I am surprised with its performance so I want to know how exactly this feature works, specifically in the matter of why it does not create any additional files and yet, it still allows update on the virtually restored database. I guess I will have to send an e-mail to the developers of Idera and try to figure this out from them. I think this tool is very useful, and it delivers a high level of performance way more than what I expected. Soon, I will write a review for additional uses of SQL virtual database.. If you are using SQL virtual database in your production environment, I am eager to learn more about it and your experience while using it. The ‘Virtual’ Part of virtual database When I set out to test this software, I thought virtual database had something to do with Hyper-V or visualization. 
In fact, the virtual database is a kind of database which shows up in your SQL Server Management Studio without actually restoring or even creating it. This tool creates a database in SSMS from the backup of the same database. The backup, however, works virtually the same way as original database. Potential Usage of virtual database: As soon as I described this tool to my teammate, I think his very first reaction was, “hey, if we have this then there is no need for log shipping.” I find his comment very interesting as log shipping is something where logs are moved to another server. In fact, there are no updates on the database from log; I would rather compare it with Snapshot Replication. In fact, whatever we use, snapshot replicated database can be similarly used and configured with virtual database. I totally believe that we can use it for reporting purpose. In fact, after this database was configured, I think the uses of this tool are unlimited. I will have to spend some more time studying it and will get back to you. Click on images to see larger images. virtual database Console Harddrive Space before virtual database Setup Attach Full Backup Screen Backup on Harddrive Attach Full Backup Screen with Settings virtual database Setup – less than 60 sec virtual database Setup – Online Harddrive Space after virtual database Setup Point in Time Recovery Option – Timeline View virtual database Summary No Performance Difference between Regular DB vs Virtual DB Please note that all SQL Server MVP gets free license of this software. Reference: Pinal Dave (http://blog.SQLAuthority.com), Idera (virtual database) Filed under: Database, Pinal Dave, SQL, SQL Add-On, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, SQLAuthority News, T SQL, Technology Tagged: Idera
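    To make the “works like a regular database” point concrete, here is an illustrative snippet only — the database name and backup path are made up. Once the backup is attached as a virtual database it appears in SSMS like any other database, so ordinary T-SQL runs against it, and a regular BACKUP of it produces a normal physical .bak for future use:

        -- Query the virtually "restored" database like any other database
        USE AdventureWorks_Virtual;
        SELECT COUNT(*) AS OrderCount FROM Sales.SalesOrderHeader;

        -- Take a regular backup of it, producing a normal physical .bak file
        BACKUP DATABASE AdventureWorks_Virtual
        TO DISK = 'D:\Backups\AdventureWorks_FromVirtual.bak';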

    Read the article

  • SQL SERVER – Size of Index Table for Each Index – Solution 2

    - by pinaldave
    Earlier I ran a puzzle where I asked a question regarding the size of the index table for each index in a database, over here: SQL SERVER – Size of Index Table – A Puzzle to Find Index Size for Each Index on Table. I received a good amount of answers and blogged about them here: SQL SERVER – Size of Index Table for Each Index – Solution. As a comment on that blog I received another very interesting comment, which provides a near accurate answer to the original question. Many thanks to Rama Mathanmohan for providing a wonderful solution. SELECT OBJECT_NAME(i.OBJECT_ID) AS TableName, i.name AS IndexName, i.index_id AS IndexID, 8 * SUM(a.used_pages) AS 'Indexsize(KB)' FROM sys.indexes AS i JOIN sys.partitions AS p ON p.OBJECT_ID = i.OBJECT_ID AND p.index_id = i.index_id JOIN sys.allocation_units AS a ON a.container_id = p.partition_id GROUP BY i.OBJECT_ID,i.index_id,i.name ORDER BY OBJECT_NAME(i.OBJECT_ID),i.index_id Let me know if you have any better script for the same. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, Readers Contribution, SQL, SQL Authority, SQL Data Storage, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
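    As a small usage example, the same query narrows easily to a single table and can report in MB instead of KB — 'YourTableName' is a placeholder:

        -- Per-index size in MB for one table (same joins as Rama's query)
        SELECT i.name AS IndexName,
               8 * SUM(a.used_pages) / 1024.0 AS [IndexSize(MB)]
        FROM sys.indexes AS i
        JOIN sys.partitions AS p
            ON p.OBJECT_ID = i.OBJECT_ID AND p.index_id = i.index_id
        JOIN sys.allocation_units AS a
            ON a.container_id = p.partition_id
        WHERE OBJECT_NAME(i.OBJECT_ID) = 'YourTableName'
        GROUP BY i.OBJECT_ID, i.index_id, i.name
        ORDER BY i.index_id;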

    Read the article

  • SQLAuthority News – Storage and SQL Server Capacity Planning and configuration – SharePoint Server 2

    - by pinaldave
    Just a day ago, I was asked how to plan SQL Server storage capacity. Here is the excellent article published by Microsoft regarding SQL Server capacity planning for SharePoint 2010. This article touches on all the vital areas of this subject. Here are the bullet points for the same: Gather storage and SQL Server space and I/O requirements; Choose SQL Server version and edition; Design storage architecture based on capacity and IO requirements; Determine memory requirements; Understand network topology requirements; Configure SQL Server; Validate storage performance and reliability. Read the original article published by Microsoft here: Storage and SQL Server Capacity Planning and configuration – SharePoint Server 2010. A question for all SharePoint developers and administrators: do you use the whitepapers and articles to decide the capacity, or do you just start with the application and plan the storage as you progress? Please let me know your opinion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology Tagged: SharePoint

    Read the article

  • Manchester UG Presentation Video

    In July I was invited to speak at the UK SQL Server UG event in Manchester.  I spoke about Excel being a good data mining client.  I was a little rushed at the end as Chris Testa-ONeill told me I had only 5 minutes to go when I had only been talking for 10 minutes.  Apparently I have a reputation for running over my time allocation.  At the event we also had a product demo from SQL Sentry around their BI monitoring dashboard solution.  This includes SSIS, but the main thrust was SSAS. Then came Chris with a look at Analysis Services.  If you have never heard Chris talk then take the opportunity now; he is a top-class presenter and I am often found sat at the back of his classes. Here is the video link

    Read the article

  • Oracle President Mark Hurd Highlights How Data-driven HR Decisions Help Maximize Business Performance

    - by Scott Ewart
    HR Intelligence Can Help Companies Win the Race for Talent Today during a keynote at Taleo World 2012, Oracle President Mark Hurd outlined the ways that executives can use HR intelligence to help them make better business decisions, shape the future of their organizations and improve the bottom line. He highlighted that talent management is one of the top three focus areas for CEOs, and explained how HR intelligence can help drive decisions to meet business objectives. Hurd urged HR leaders to use data to make fact-based decisions about hiring, talent management and succession to drive strategic growth. To win the race for talent, Hurd explained that organizations need powerful technology that provides fact-based valuable insight that is needed to proactively manage talent, drive strategic initiatives that promote innovation, and enhance business performance. To view the full story and press release, click here.

    Read the article

  • Introducing a new Umbraco datatype for Multi-lingual websites.

    - by Vizioz Limited
    Over the last 6 months we have been building various multi-lingual sites for different clients and for some of the clients they have 1 to 1 relationships between some or all of their pages.Within Umbraco, you can copy a page ( or whole tree of pages ) and keep a relationship between each of the pages and their new copy, this allows content editors to subscribe to change notifications that Umbraco can create if one of the linked pages is changed.Unfortunately one thing that is missing in Umbraco is any way to see which pages are related to each other and to have a quick and easy way to jump between the related pages.We created a datatype that solves these problems and thought we would release it as an open source project ( which we are still maintaining )Currently you can:1) See current relationships2) Add relationships3) Limit the number of relationships that can be added ( by the data type )4) See the Country flag ( assuming a culture has been set on each of your top level site nodes for each country site )5) Link between the documents6) Change or delete the linksAn example where multiple languages are allowed:An example where only 2 languages exist (1 relationship):You can download the datatype from the Umbraco project page:Vizioz Relationships for UmbracoPlease do let us know what you think :)

    Read the article
