Search Results

Search found 6455 results on 259 pages for 'excel vba guy'.


  • Is it possible to have a conditional formatting cell "visually cycle" through all the formats that evaluated true?

    - by Ben
    Like the title says: "In Excel, when a cell has multiple conditional formatting rules that evaluate true, is it possible to have the cell 'visually cycle' through all the formats that evaluated true? If not, suggestions on what to do would be appreciated!"

    I'm creating an employee schedule for a business that has multiple job areas that each need an employee assigned to cover them. The schedule is currently set up with the date on the top row, the employee list down the left column, and each employee's assigned "job area" cross-referenced with the date on the top row. Originally it was set up so that if any required job area didn't have someone assigned to it, the date would (via conditional formatting) change to red. I've now set it up so that if a condition isn't met, the date changes to the color of the job area that doesn't have an employee assigned to it. However, there are cases where multiple job areas have no one assigned, but the date only changes color based on the first condition that isn't met.

    It'd be nice if there were some way for the date cell to cycle through the different colors that correspond to the job areas where no one is assigned. I have a hunch that's not possible, though. If it is possible, I'd love to know how to do it. And if it isn't, I'd appreciate any suggestions on how to modify the sheet to make it easier to identify the job areas that don't have anyone assigned. FYI, this schedule goes out months in advance.
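
    Native conditional formatting can't animate a cell, so the usual workaround is to move the color rules into VBA and repaint the date cell on a timer. A minimal sketch (the sheet name, cell address, and color list are all hypothetical; in practice the array would be rebuilt by re-checking the same conditions the formatting rules use, and the painted color only shows once the competing conditional formatting rules are removed):

        Dim mStep As Long   ' module-level counter for the colour cycle

        Sub CycleDateCell()
            Dim uncovered As Variant
            uncovered = Array(vbRed, vbBlue, vbGreen)   ' one colour per unstaffed job area
            Worksheets("Schedule").Range("B1").Interior.Color = _
                uncovered(mStep Mod (UBound(uncovered) + 1))
            mStep = mStep + 1
            ' Repaint once a second (OnTime's finest resolution); calling OnTime
            ' again with Schedule:=False is how you would stop the loop.
            Application.OnTime Now + TimeSerial(0, 0, 1), "CycleDateCell"
        End Sub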

    Read the article

  • Change date in a SQL query to reference a cell in Excel

    - by Adil
    I have the following code that returns the needed data into Excel, and manually changing the date changes the returned data; however, I'd like to reference a cell instead, which would make the query a bit more user friendly. I've tried using my limited knowledge of cell references, but nothing has worked. The query text below is in cell A1, and the query is run from cell A2 with the formula =wwQuery("STKAP03", $A$1):

        SET QUOTED_IDENTIFIER OFF
        SELECT * FROM OPENQUERY(INSQL,
            "SELECT DateTime, [40_MOTORS.MI436423.CIN], [40_MOTORS.MI436425.CIN]
             FROM WideHistory
             WHERE [40_MOTORS.MI436423.CIN] IS NOT NULL
               AND wwRetrievalMode = 'Delta'
               AND wwVersion = 'Latest'
               AND DateTime >= '20120409 07:00:00'
               AND DateTime <= '20120416 07:00:00'")

    It's the two DateTime literals that I'd like to pull from cells on a different sheet.
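
    For what it's worth, a sketch of one way to wire the dates up (the sheet and cell references here are hypothetical, and it assumes wwQuery simply reads whatever text is currently in A1): keep the start and end times as real date-times in, say, Sheet2!A1 and Sheet2!A2, and build the query text in A1 by concatenation so it updates whenever those cells change:

        ="SET QUOTED_IDENTIFIER OFF SELECT * FROM OPENQUERY(INSQL, ""SELECT DateTime,
          [40_MOTORS.MI436423.CIN], [40_MOTORS.MI436425.CIN] FROM WideHistory
          WHERE [40_MOTORS.MI436423.CIN] IS NOT NULL AND wwRetrievalMode = 'Delta'
          AND wwVersion = 'Latest' AND DateTime >= '" & TEXT(Sheet2!A1, "yyyymmdd hh:mm:ss")
          & "' AND DateTime <= '" & TEXT(Sheet2!A2, "yyyymmdd hh:mm:ss") & "'"")"

    (Entered as a single formula; it's broken across lines here only for readability. Depending on how the wwQuery add-in caches results, a recalculation or requery may be needed before the new range is fetched.)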

    Read the article

  • Excel IP address and subnet to network and inverse mask [closed]

    - by Steve Dailey
    We need a script, macro, or something in Excel that can take a list like this:

        interface Vlan100
         ip address 192.168.1.3 255.255.255.0
        interface Vlan101
         ip address 192.168.2.3 255.255.255.128
        interface Vlan102
         ip address 192.168.2.130 255.255.255.128
        interface Vlan103
         ip address 192.168.3.3 255.255.255.240
        etc...

    and produce a list like this:

        ospf 1
         undo silent-interface Vlan-interface100
         undo silent-interface Vlan-interface101
         undo silent-interface Vlan-interface102
         undo silent-interface Vlan-interface103
         area 0.0.0.0
          network 192.168.1.0 0.0.0.255
          network 192.168.2.0 0.0.0.127
          network 192.168.2.128 0.0.0.127
          network 192.168.3.0 0.0.0.15

    So it needs to take an IP address/subnet mask and convert them to a network number/inverse mask. I believe I can handle the Vlan manipulation with a substitution, so no need to spend time on that.
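
    For the address math itself, a minimal VBA sketch (the function name is mine): AND each octet of the address with the mask to get the network number, and subtract each mask octet from 255 to get the inverse (wildcard) mask:

        Function NetworkAndWildcard(ByVal ip As String, ByVal mask As String) As String
            ' "192.168.2.130", "255.255.255.128" -> "192.168.2.128 0.0.0.127"
            Dim o() As String, m() As String
            Dim net As String, wild As String, i As Long
            o = Split(ip, "."): m = Split(mask, ".")
            For i = 0 To 3
                net = net & IIf(i > 0, ".", "") & (CLng(o(i)) And CLng(m(i)))   ' bitwise AND per octet
                wild = wild & IIf(i > 0, ".", "") & (255 - CLng(m(i)))          ' inverse-mask octet
            Next i
            NetworkAndWildcard = net & " " & wild
        End Function

    Used from the sheet, =NetworkAndWildcard("192.168.3.3", "255.255.255.240") returns 192.168.3.0 0.0.0.15, which can then be prefixed with "network " and combined with the Vlan substitution.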

    Read the article

  • Converting a DWG/DXF to CSV or Excel

    - by Menno Gouw
    I'm using ZWcad and I need to get the coordinates of hundreds of blocks into an Excel sheet or .CSV file so I can import them into the GPS hardware. I know there are plenty of tools for AutoCAD; I could probably even write one myself, but as far as ZWcad goes I seem to be out of options. However, ZWcad saves to DWG too, and exports to all the other familiar CAD extensions. So I was wondering: if I just save the blocks I need to export to a certain file format, might there be a tool/program to convert that directly into .CSV?

    Read the article

  • Pivot Table grand total across columns

    - by Jon
    I'm using Excel 2010 and Power Pivot. I'm trying to calculate confidence and velocity for a development team. Each day I extract some information from our time and defect system and add it to a data set: one row per task in the current project, with the estimate for that task and the time spent on that task. What I want Excel to calculate is actual/estimate for each task, but also for each person. The trouble is that the actual is cumulative from day to day, so I need to pick out the maximum value for each task; the estimate should remain unchanged. I can make this work at the task level with a calculated measure (=MAX(worked)/MAX(estimate)), but I don't know how to total this up for a person: I need the sum of the max worked for each task. A dataset might look like this:

        Name  Task  Estimate  Worked
        N1    T1    3         1
        N2    T2    3         1
        N3    T3    4         1
        N1    T1    3         2
        N2    T4    5         1
        N3    T3    4         2
        N1    T5    1         2
        N2    T6    2         3
        N3    T7    3         2

    What I want to see is that for task T1, 2 days were worked against an estimate of 3 days, so 2/3. For person N1, I want to see that they worked a total of 4 days against an estimate of 4 days, so 4/4. For person N2, they worked 5 days against an estimate of 10 days. Any ideas on how I can achieve this?
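
    For what it's worth, the standard DAX pattern for "sum of a per-task maximum" is an iterator over the distinct tasks in the current filter context. A sketch, assuming the PowerPivot table is named Timesheet (table and measure names here are mine):

        SumMaxWorked := SUMX(VALUES(Timesheet[Task]), CALCULATE(MAX(Timesheet[Worked])))
        SumEstimate  := SUMX(VALUES(Timesheet[Task]), CALCULATE(MAX(Timesheet[Estimate])))
        Confidence   := [SumMaxWorked] / [SumEstimate]

    With Name on the pivot rows, VALUES(Timesheet[Task]) sees only that person's tasks, CALCULATE(MAX(...)) picks the latest cumulative value per task, and SUMX adds them up, giving 4/4 for N1 and 5/10 for N2 in the sample above.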

    Read the article

  • Spreadsheet application that can handle big data OS X

    - by Peter
    I've been working with Excel for quite a while for some statistical analysis that I do regularly. The size of the data that I'm working with has gotten much larger as of late, however. The layout of the files in question is quite simple, usually just a few columns: a UNIX timestamp, an EST value, a proprietary numeric value, and finally an average of the rows whose timestamps are within +/- 1000 of that row's timestamp (a little AVERAGEIFS() formula). That formula and the EST conversion are the only formulas in the sheet. I'm beginning to work with files of 500,000+ rows, and running the average formula down the entire column takes forever. The end result is the production of print-worthy graphs. I'm looking for either a UNIX command-line utility or a separate spreadsheet/database application that can handle this amount of data without melting my CPU or making me wait an hour. Is there anything out there? TL;DR: A simple Excel sheet with over half a million rows is getting too slow to work with. OS X alternatives?

    Read the article

  • SharePoint 2010 and Excel Calculation Services

    - by Cj Anderson
    I'm curious what the requirements are for Excel Calculation Services in SharePoint 2010. I found an architecture document, but it doesn't list the specific hardware requirements. I understand that you can install all the services on one server but that it isn't recommended; the document then talks about how you can scale up the application servers and web front ends. What should the hardware look like for both the application server and the web front ends? Do I need to set up a standalone box for each application server?

    Read the article

  • How to use the outcome of a formula as the value for VLOOKUP or another IF formula

    - by Steven
    OK, I will try to explain my issue effectively. I am making a GPA sheet in which a grade out of 100 is converted into a GPA value and then into a letter. In cell N5 I have the total of all the grades (formula: =H3+H4+H5). In cell J6 I have a formula which assigns a GPA number depending on the value calculated in N5:

        =IF(AND(N5>=60,N5<=63.999),"2.0",IF(AND(N5>=64,N5<=66.999),"2.25",IF(AND(N5>=67,N5<=69.999),"2.4",IF(AND(N5>=70,N5<=73.999),"2.5",IF(AND(N5>=74,N5<=76.999),"2.75",IF(AND(N5>=77,N5<=79.999),"2.9",IF(AND(N5>=80,N5<=83.999),"3.0",IF(AND(N5>=84,N5<=86.999),"3.25",IF(AND(N5>=87,N5<=89.999),"3.4",IF(AND(N5>=90,N5<=93.999),"3.50",IF(AND(N5>=94,N5<=96.999),"3.75",IF(AND(N5>=97,N5<=100),"4",IF(AND(N5<=59.999),"0")))))))))))))

    Still no problem, as the value I was looking for comes out (for example, 84.2 shows up as 3.25, as I wanted). However, here comes the problem: I have tried to use the outcome in J6 in a VLOOKUP or another IF formula, but Excel does not seem to recognize the value in J6. For example, =VLOOKUP(J6,B3:C15,2,FALSE) returns N/A, yet if I enter =VLOOKUP(3.25,B3:C15,2,FALSE) it gives me what I'm looking for. It seems that Excel will not register the outcome of my formula as a number. What can I do, please?
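
    The symptom fits the IF chain returning text: "2.0", "3.25", etc. are quoted strings, so J6 contains something that merely looks like a number, and VLOOKUP's exact numeric match finds nothing. Two sketches of a fix, either of which should do: remove the quotes inside the IF formula so it returns real numbers (3.25 instead of "3.25"), or coerce the text at lookup time:

        =VLOOKUP(VALUE(J6), B3:C15, 2, FALSE)

    (=VLOOKUP(J6+0, B3:C15, 2, FALSE) is the same coercion trick.)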

    Read the article

  • How can I check cells for number series?

    - by Stephen Younger
    I have a bit of a problem evaluating an Excel cell. I have a number of cells which contain numbers (months), and in the first column I have a series of those numbers. I want to use conditional formatting to color the corresponding cells in the columns to the right. Example:

                  M  M  M  M  M  M  M  M  M
                  1  2  3  4  5  6  7  8  9
        2;5;7
        1;9
        3;5;7;9

    If correctly colored, I would get something like this (X marks a colored cell):

                  M  M  M  M  M  M  M  M  M
                  1  2  3  4  5  6  7  8  9
        2;5;7        X        X     X
        1;9       X                       X
        3;5;7;9         X     X     X     X

    The formula I have now is IF(ISNUMBER(FIND(L$22;$K23));$H23;""), but the problem is that cells are also colored which contain only part of a number. If I enter 10;15 as input I get this:

                  M  M  M  M  M  M  M  M  M  M  M  M  M  M  M
                  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
        10;15     X           X             X              X

    because 1 and 5 are found too. I only want columns 10 and 15 to be marked. How can I change the formula or the input?
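
    One standard fix, sketched here: search for the number wrapped in its delimiters, padding both the needle and the haystack with semicolons so that 1 can no longer match inside 10:

        =IF(ISNUMBER(FIND(";"&L$22&";";";"&$K23&";"));$H23;"")

    With $K23 containing 10;15, the padded text is ";10;15;", so only ";10;" and ";15;" are found.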

    Read the article

  • Excel file has disappeared from SharePoint document library

    - by user40389
    Two days ago an Excel file went missing from a document library. The library held only this one file, and now it's gone. Yet when I go to All Site Content → Document Libraries, it still shows one file in the library, so something seems screwy. Is there anything I can do to get this item to reappear? This is MOSS 2007.

    Recycle Bin: must be at the defaults; I never changed anything and really don't know how to find the settings for this.

    Document Library Settings, Versioning Settings:
        content approval: no
        document version history: create major versions
        require check out: yes

    Read the article

  • Exception from HRESULT: 0x800A03EC

    - by Daniel
    Any help is appreciated: I'm developing a C# .NET app in VS2010 that interacts with Excel. The app works correctly on my local machine. Uploading to a remote Windows 2003 server, however, breaks the app. Originally, I received the following message:

        Retrieving the COM class factory for component with CLSID {00024500-0000-0000-C000-000000000046} failed due to the following error: 80070005

    After Googling the problem (which suggested a permissions problem) I tried this:

    - Installing Excel 2007
    - Going into Component Services on the remote server and following the instructions here: http://blog.crowe.co.nz/archive/2006/03/02/589.aspx

    Now I get this message on the same operation:

        Exception from HRESULT: 0x800A03EC

    Google searches seem to suggest that this is a version mismatch error. However, both the local machine and the remote server use Excel 2007. Any suggestions would be very welcome. Thanks in advance. -Daniel

    Read the article

  • How can I tell if a CSV is in UTF-7 or UTF-8

    - by dru-zod
    Excel seems to save CSV files in (what I think is) UTF-7, despite the fact that most information I have read suggests that, in general, you should not use UTF-7. Indeed, other applications (TextPad, which lets me choose) save things in UTF-8 (or Unicode etc.; UTF-7 is not even an option). Using .NET, I read the stream and have to provide the encoding. If I get it wrong, accented characters are replaced with question marks. If I try to let StreamReader work it out (using detectEncodingFromByteOrderMarks), it gets it wrong (at least it does if the file has been saved in Excel). It is unlikely that anything other than Excel will be used, so I could just assume UTF-7. Are there any other options? I need to support French (accented), German, Dutch, and Norwegian characters.

    Read the article

  • Exporting a SQL Server table to CSV: issues with commas, tabs and quotes

    - by cyberpine
    After we export to a flat CSV file, columns containing commas, quotes, and tabs cause problems in Excel. The vendor needs to open the file in Excel to make manual changes, and then needs it back in flat CSV format to load into an Oracle table using PL/SQL. I can remove those characters from the table in SQL Server, but is there a smarter way? Does it make sense to save to CSV when done in Excel, and will that cause problems when attempting to load the file into Oracle anyway? Also, we need the first row to have column names: is there any SQL way to generate all the files in one swoop (with the titles in the first row) rather than using export to flat file?
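
    For what it's worth, the usual smarter way is quoting rather than stripping: in RFC 4180-style CSV, any field containing a comma, quote, tab, or line break is wrapped in double quotes and embedded quotes are doubled, and Excel reads such fields back intact. A sketch of the rule in VBA (the function name is mine), in case the escaping ends up being done on the Excel side of the round trip:

        Function QuoteCsvField(ByVal s As String) As String
            ' Wrap the field in quotes if it contains anything that would
            ' break a naive comma-separated parse; double embedded quotes.
            If InStr(s, ",") > 0 Or InStr(s, """") > 0 Or InStr(s, vbTab) > 0 _
               Or InStr(s, vbCr) > 0 Or InStr(s, vbLf) > 0 Then
                QuoteCsvField = """" & Replace(s, """", """""") & """"
            Else
                QuoteCsvField = s
            End If
        End Function

    Whether the Oracle side accepts quoted CSV depends on the SQL*Loader/PL/SQL settings in use, so that end needs checking too.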

    Read the article

  • reading an Excel file in VB.NET

    - by Mark
    Can anyone help me figure out how to detect the EOF of an Excel sheet using VB.NET? I have code, but it crashes when I try to delete rows from row 6 downward: my code still reads the null values of the rows I deleted in Excel. This is my code:

        Dim xlsConn As New OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & pathName & ";Extended Properties=Excel 8.0")
        Dim xlsAdapter As New OleDbDataAdapter
        Dim xlsDataSet As New DataSet
        strSQL = "SELECT * FROM [Sheet1$]"
        xlsAdapter.SelectCommand = New OleDbCommand(strSQL, xlsConn)
        xlsDataSet.Clear()
        xlsAdapter.Fill(xlsDataSet)
        ListView1.Items.Clear()
        Dim listItem As ListViewItem
        For ctr As Integer = 0 To xlsDataSet.Tables(0).Rows.Count - 1
            ' Rows deleted in Excel can still come back as empty rows, so skip
            ' any row whose key column is null or blank instead of binding it.
            Dim empNo As Object = xlsDataSet.Tables(0).Rows(ctr).Item("EmpNo")
            If Not IsDBNull(empNo) AndAlso empNo.ToString().Trim() <> "" Then
                listItem = ListView1.Items.Add(empNo.ToString())
                listItem.SubItems.Add(xlsDataSet.Tables(0).Rows(ctr).Item("EmpName").ToString())
            End If
        Next

    Can anyone help me fix this bug?

    Read the article

  • How would you start automating my job? - Part 2

    - by Jurily
    (Follow-up to this question.) After surviving the first wave of incoming shipments (9 hours of copy/paste), I now believe I have all the requirements. Here is the updated workflow:

    1. Monkey collects email attachments (4 Excel spreadsheets, 1 PDF)
    2. Monkey creates central database, does complex calculations (right now this is also an Excel spreadsheet)
    3. Monkey sends data to two bosses, who set the retail prices independently; first one to reply wins
    4. Monkey sends order form to our other warehouses, also Excel
    5. Monkey sends spreadsheets to VIP customers, carefully sanitized and formatted (4 different discount categories)
    6. Jurily enters the data into the accounting system. I've given up on automating this part; there's too much business logic involved, and the database is a pile of sh^W legacy

    My question: what technologies would you use for a quick and dirty solution? I'm mostly sold on C#, but coming from a Linux/C++ background, I'm horribly confused about my choices in Microsoft-land. For bonus points: how would you redesign the whole system from the ground up? P.S. In case you were wondering, my job title is System Administrator.

    Read the article

  • Shimmed Automation add-in

    - by Sandy
    I am developing an Excel add-in that has a UDF and calls it as a worksheet function. It is a COM add-in where I use the IDTExtensibility2 interface, set the class interface type to AutoDual, and then shim it as in the link http://blogs.msdn.com/andreww/archive/2006/07/23/excel-interop-types-in-shimmed-automation-add-ins.aspx Things work fine on my development machine, but I am having a tough time packaging the assembly: the add-in registers, but the formula does not appear in Excel's formula bar after installation. Can someone help me with the right way to get this done? I use VS 2008 and Office 2007 for my testing. Thanks in advance.

    Read the article

  • DataGrid In Java Struts Web Application

    - by Anand
    Hi. After scouring the web, I have edited my question from the one below to what it is now. I seem to understand that I don't need all the capabilities of Excel right now; I think I am satisfied with having a data grid to display the data. Basically, I am working on Struts 2 and I want my JSP page to have an Excel-like feel, so it looks like even a data grid is sufficient. I came across this technology, but I am not sure whether I should go ahead and use it. Any other suggestions or alternatives are welcome.

    The older version of the question: "I have a Java web application currently running on Windows. I may host it in the future on a Linux server. My application allows people to upload data. I want to display the data they have uploaded in an Excel file and render it in a portion of my webpage. How do I go about this?"

    Read the article

  • Validate data before uploading through SSIS

    - by The King
    I have an SSIS package to upload data from an Excel file into a SQL Server 2005 table. The Excel file will have a varying number of rows, ranging from 20k to 30k lines. The upload works fine when all the data is correct, but it obviously fails when there is even a small problem in a single row: examples include mandatory values presented as null, inconvertible values (data type mismatches), etc. I want to validate the Excel file before the upload and tell the user which row and column has the error. Any idea how to accomplish this without consuming much time and resources? Thanks
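
    One cheap pre-flight option is to validate inside Excel before the package ever runs (the other common route is SSIS error-output row redirection on the data flow). A minimal VBA sketch; the sheet name, column positions, and rules are hypothetical stand-ins for the real mandatory/type checks:

        Sub ValidateSheet()
            Dim ws As Worksheet, r As Long, lastRow As Long, problems As String
            Set ws = Worksheets("Sheet1")                       ' hypothetical sheet name
            lastRow = ws.Cells(ws.Rows.Count, 1).End(xlUp).Row
            For r = 2 To lastRow                                ' row 1 = headers
                If Trim$(CStr(ws.Cells(r, 1).Value)) = "" Then  ' column A is mandatory
                    problems = problems & "A" & r & ": missing value" & vbCrLf
                End If
                If Not IsNumeric(ws.Cells(r, 3).Value) Then     ' column C must be numeric
                    problems = problems & "C" & r & ": not a number" & vbCrLf
                End If
            Next r
            If problems <> "" Then MsgBox problems Else MsgBox "Sheet looks clean."
        End Sub

    Reporting the offending cell addresses up front answers the "which row and column" requirement without a round trip through the package.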

    Read the article

  • What encoding to use for exporting to CSV?

    - by Michael Borgwardt
    I'm developing a java app that exports data to CSV files, intended to be opened in Excel by end users. We just noticed that the export function uses Java's platform default encoding. This causes umlaut characters to be lost and unit test to fail on the build server (which is configured to have US-ASCII as its platform default encoding exactly to catch such potential problems). The question is: which would be the best encoding to use? How does Excel determine what encoding to use? Does it use something platform-specific that presumably matches Java's platform default? I'm currently leaning towards hardcoding Cp1252 - that should cover the target machines (the deployment environment is actually specified) and would fix the test problem. From googling around, Excel does not seem to handle UTF-8 well, so that's out, and sticking to the platform default encoding would require some sort of workaround hack for the tests.

    Read the article

  • How to read, edit and write xls files, and then export to SQL Server

    - by tuanvt
    I have an Excel file containing a list of contacts (about 10k of them) that I need to push into my SQL Server database. So I am writing a .NET Windows program, using Visual Studio 2008, to read the file, generate a random password for each contact, and then push this information into my SQL Server database. It was easy to handle Excel files in Office 2003, but now my computer has Office 2007 on it and things seem to have changed. I am digging into Microsoft.Office.Interop.Excel, but it seems to be a lot more complicated than before.

    Read the article

  • Using Large Arrays in VB.NET

    - by Tim
    I want to extract large amounts of data from Excel, manipulate it, and put it back. I have found the best way to do this is to extract the data from an Excel Range into a large array, change the contents of the array, and write it back to the Excel Range. I am now rewriting the application using VB.NET 2008/2010 and wish to take advantage of any new features. Currently I have to loop through the contents of the array to find elements with certain values; also, sorting large arrays is cumbersome. I am looking to use the new features, including LINQ, to manipulate the data in my array. Does anybody have advice on the easiest ways to filter, query, sort, etc. the data in a large array? Also, what are the reasonable limits to the size of the array? Many thanks

    Read the article

  • Getting "Specified cast is not valid" while importing data from Excel using LINQ to SQL

    - by niceoneishere
    This is my second post. After learning from my first post how fantastic it is to use LINQ to SQL, I wanted to try to import data from an Excel sheet into my SQL database. Everything happens on the click of a button.

    First, my Excel sheet: it contains 4 columns, namely ItemNo, ItemSize, ItemPrice, and UnitsSold. I created a database table with the following fields:

        table name: ProductsSold
        Id int not null identity        -- with auto increment set to true
        ItemNo VarChar(10) not null
        ItemSize VarChar(4) not null
        ItemPrice Decimal(18,2) not null
        UnitsSold int not null

    Now I created a dal.dbml file based on my database, and I am trying to import the data from the Excel sheet into the table using the code below.

        private const string forecast_query = "SELECT ItemNo, ItemSize, ItemPrice, UnitsSold FROM [Sheet1$]";

        protected void btnUpload_Click(object sender, EventArgs e)
        {
            var importer = new LinqSqlModelImporter();
            if (fileUpload.HasFile)
            {
                var uploadFile = new UploadFile(fileUpload.FileName);
                try
                {
                    fileUpload.SaveAs(uploadFile.SavePath);
                    if (File.Exists(uploadFile.SavePath))
                    {
                        importer.SourceConnectionString = uploadFile.GetOleDbConnectionString();
                        importer.Import(forecast_query);
                        gvDisplay.DataBind();
                        pnDisplay.Visible = true;
                    }
                }
                catch (Exception ex)
                {
                    Response.Write(ex.Source.ToString());
                    lblInfo.Text = ex.Message;
                }
                finally
                {
                    uploadFile.DeleteFileNoException();
                }
            }
        }

    And here is the code for LinqSqlModelImporter:

        public class LinqSqlModelImporter : SqlImporter
        {
            public override void Import(string query)
            {
                // importing data using an OleDb command and inserting into the db using LINQ to SQL
                using (var context = new WSDALDataContext())
                {
                    using (var myConnection = new OleDbConnection(base.SourceConnectionString))
                    using (var myCommand = new OleDbCommand(query, myConnection))
                    {
                        myConnection.Open();
                        var myReader = myCommand.ExecuteReader();
                        while (myReader.Read())
                        {
                            context.ProductsSolds.InsertOnSubmit(new ProductsSold()
                            {
                                // Note: the Jet/ACE provider typically hands numeric Excel
                                // cells back as Double, so the typed GetDecimal/GetInt32
                                // accessors throw "Specified cast is not valid";
                                // Convert.* coerces whatever type actually comes back.
                                ItemNo = myReader.GetString(0),
                                ItemSize = myReader.GetString(1),
                                ItemPrice = Convert.ToDecimal(myReader.GetValue(2)),
                                UnitsSold = Convert.ToInt32(myReader.GetValue(3))
                            });
                        }
                    }
                    context.SubmitChanges();
                }
            }
        }

    Can someone please tell me where I am making the error, or whether I am missing something? This is driving me nuts. When I debugged, I got this error: "when casting from a number, the value must be a number less than infinity". I really appreciate it.

    Read the article

  • looking for advice on importing Excel into MySQL with PHP

    - by Ole Media
    Alright, let's see if I can pick your brains. I'm currently working on a project where all the information comes from different clients; the only thing they have in common is that the data arrives in Excel. The Excel spreadsheets they send are just a bunch of references and codes, and the problem I'm facing is that the references and codes need to be entered in a certain format for the website to work. The perfect situation would be to go to each client and teach them how I need the data, but I can't do that because of the large number of clients and, more importantly, because I would be interrupting their workflow. Each client has its own codes and reference model, and they are not willing to change their process. The good news is that there is a standard pattern for the codes, but I'm talking about close to 200 thousand codes with a bunch of combinations. The way we currently solve the problem is that a person checks each received Excel sheet, runs a few macros, and manually fixes the codes the macros were not able to fix. The person doing this is already burnt out and frustrated, and I would like to automate this process with PHP. Suggestions?

    Read the article

  • DSOFramer closing Excel doc in another window; if unsaved data in file, DSOFramer fails to open with "Attempt to access invalid address"

    - by Steve
    I'm using Microsoft's DSOFramer control to allow me to embed an Excel file in my dialog so the user can choose his sheet, then select his range of cells; it's used with an import button on my dialog. The problem is that when I call the DSOFramer's Open function, if I have Excel open in another window, it closes that Excel document (but leaves Excel running). If the document it tries to close has unsaved data, DSOFramer fails to open, with a message box: "Attempt to access invalid address". I built the source and stepped through, and it's making a call in its CDsoDocObject::CreateFromFile function, calling BindToObject on an object of class IMoniker. The HRESULT is 0x8001010A, "The message filter indicated that the application is busy". On that failure, it tries InstantiateDocObjectServer by the class id of CLSID Microsoft Excel Worksheet... this fails with an HRESULT of 0x80040154, "Class not registered". InstantiateDocObjectServer just calls CoCreateInstance on the class id, first with CLSCTX_LOCAL_SERVER, then (if that fails) with CLSCTX_INPROC_SERVER. I know DSOFramer is a popular sample project for embedding Office apps in various dialogs and forms. I'm hoping someone else has had this problem and might have some insight on how I can solve this. I really don't want it to close any other open Excel documents, and I really don't want it to error out if it can't close the document due to unsaved data.

    Update 1: I've tried changing the class id that's passed in to "Excel.Application" (I know that class will resolve), but that didn't work. In CDsoDocObject, it tries to open the key HKEY_CLASSES_ROOT\CLSID\{00024500-0000-0000-C000-000000000046}\DocObject, but fails. I've visually confirmed that the key is not present in my registry; the key is present for the GUID, but there's no DocObject subkey. It then produces an error message box: "The associated COM server does not support ActiveX document embedding". I get similar results (a different key, of course) when I try to use the Excel.Workbook progid.

    Update 2: I tried starting a second instance of Excel, hoping that my automation would bind to it (being the most recently invoked) instead of the problem Excel instance, but it didn't seem to do that; results were the same. My problem seems to have boiled down to this: I'm calling BindToObject on an object of class IMoniker and receiving 0x8001010A (RPC_E_SERVERCALL_RETRYLATER), "The message filter indicated that the application is busy". I've tried playing with the flags passed to BindToObject (via SetBindOptions), but nothing seems to make any difference.

    Update 3: It first tries to bind using an IMoniker class. If that fails, it calls CoCreateInstance for the CLSID as a "fallback" method. This may work for other MS Office objects, but when it's Excel, the class is for the Worksheet. I modified the sample to CoCreateInstance _Application, then got the Workbooks collection, then called Workbooks::Open for the target file, which returns a Workbook object. I then returned that pointer and merged back with the original sample code path. All working now.

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick off my new "401 – Internals" tag that identifies posts where I pull back the curtains a bit and let you peek into what's going on inside the Extended Events engine.

    In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula:

        max memory / # of buffers = buffer size

    If it was that simple I wouldn't be writing this post.

    I'll take "boundary" for 64K, Alex

    For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary, and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs.

    Note: This test was run on a 2-core machine using per_cpu partitioning, which results in 5 buffers. (See my previous post referenced above for the math behind buffer counts.)

        input_memory_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        637              5                      130867               654335
        638              5                      130867               654335
        639              5                      130867               654335
        640              5                      196403               982015
        641              5                      196403               982015
        642              5                      196403               982015

    This is just a segment of the results that shows one of the "jumps" between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes:

        196403 – 130867 = 65536 bytes
        65536 / 1024 = 64 KB

    The relationship between the input for max_memory and when the regular_buffer_size jumps from one 64K boundary to the next will change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of NONE, you always get three buffers, regardless of machine configuration, so I generated a "range table" for max_memory settings between 1 KB and 4096 KB as an example.
        start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        1                      191                  NULL                   NULL                 NULL
        192                    383                  3                      130867               392601
        384                    575                  3                      196403               589209
        576                    767                  3                      261939               785817
        768                    959                  3                      327475               982425
        960                    1151                 3                      393011               1179033
        1152                   1343                 3                      458547               1375641
        1344                   1535                 3                      524083               1572249
        1536                   1727                 3                      589619               1768857
        1728                   1919                 3                      655155               1965465
        1920                   2111                 3                      720691               2162073
        2112                   2303                 3                      786227               2358681
        2304                   2495                 3                      851763               2555289
        2496                   2687                 3                      917299               2751897
        2688                   2879                 3                      982835               2948505
        2880                   3071                 3                      1048371              3145113
        3072                   3263                 3                      1113907              3341721
        3264                   3455                 3                      1179443              3538329
        3456                   3647                 3                      1244979              3734937
        3648                   3839                 3                      1310515              3931545
        3840                   4031                 3                      1376051              4128153
        4032                   4096                 3                      1441587              4324761

    As you can see, there are 21 "steps" within this range, and max_memory values below 192 KB fall below the 64K-per-buffer limit, so they generate an error when you attempt to specify them.

    Max approximates True as memory approaches 64K

    The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory really only refers to the event session buffer memory) but is more of an estimate of total buffer size, to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (the 576–767 row in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get "stepped up" for every 191 KB block of initial max_memory, which isn't likely to cause a problem for most machines. Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets "stepped up" when you break a boundary, the delta can get much larger because it's multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or if you have 8 NUMA nodes configured on that machine, you would have 24 buffers when using per_node. If you've just broken through a 64K boundary and get "stepped up" to the next buffer size, you'll end up with a total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than the max_memory value you might think you're getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you, then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers), you'll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The "buffer steps" for any given hardware configuration should be static within each partition mode, so if you want to have a handy reference available when you configure your event sessions, you can use the following code to generate a range table similar to the one above that is applicable for your specific machine and chosen partition mode.
        DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint)
        DECLARE @buf_size int, @part_mode varchar(8)
        SET @buf_size = 1          -- Set to the beginning of your max_memory range (KB)
        SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate

        WHILE @buf_size <= 4096    -- Set to the end of your max_memory range (KB)
        BEGIN
            BEGIN TRY
                IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'buffer_size_test')
                    DROP EVENT SESSION buffer_size_test ON SERVER

                DECLARE @session nvarchar(max)
                SET @session = 'create event session buffer_size_test on server
                                add event sql_statement_completed
                                add target ring_buffer
                                with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'
                EXEC sp_executesql @session

                SET @session = 'alter event session buffer_size_test on server
                                state = start'
                EXEC sp_executesql @session

                INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
                    SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size
                    FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'
            END TRY
            BEGIN CATCH
                INSERT @buf_size_output (input_memory_kb)
                    SELECT @buf_size
            END CATCH

            SET @buf_size = @buf_size + 1
        END

        DROP EVENT SESSION buffer_size_test ON SERVER

        SELECT MIN(input_memory_kb) start_memory_range_kb,
               MAX(input_memory_kb) end_memory_range_kb,
               total_regular_buffers, regular_buffer_size, total_buffer_size
        FROM @buf_size_output
        GROUP BY total_regular_buffers, regular_buffer_size, total_buffer_size

    Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Events internals. - Mike

    Read the article
