Search Results

Search found 312 results on 13 pages for 'formulas'.

Page 11/13 | < Previous Page | 7 8 9 10 11 12 13  | Next Page >

  • Crystal Reports: cannot determine the queries necessary to get data for this report

    - by phill
    I've recently come across the following error in one of my Crystal Reports following an accounting system update: "Group #1: ? - A : This group section cannot be printed because its condition field is nonexistent or invalid. Format the section to choose another condition field." I've verified every database field being used to ensure it still exists, and checked the formulas for that section. No dice. So, hoping to fix the problem, I removed the section using the Section Expert and ran the same database checks; it then complains with the same error for Group #5, so I removed that as well. Now I get a new and unusual error when I attempt to run 'Show Query': "Cannot determine the queries necessary to get data for this report". I have tried logging off and back on to the database and running Verify Database; there are no complaints until I try Show Query. When I attempt to run the report, it also throws the same error. Any ideas? Am I approaching this incorrectly? This is in Crystal Reports 10. Note: the report is run as the SQL sa user to eliminate any permission issues.

    Read the article

  • Attempting to find a formula for tessellating rectangles onto a board, where the middle square can't be used

    - by timemirror
    I'm working on a spatial stacking problem... at the moment I'm trying to solve it in 2D, but I will eventually have to make this work in 3D. I divide space up into n x n squares around a central block (so n is always odd), and I'm trying to find the number of locations where a rectangle of any dimensions up to n x n (e.g. 1x1, 1x2, 2x2, etc.) can be placed, given that the middle square is not available. So far I've got this: total number of rectangles = ((n^2 + n)^2) / 4, and total number of squares = (n (n+1) (2n+1)) / 6. However, I'm stuck on a formula for how many of those locations are impossible because the middle square would be occupied. So for example:
      [] [] []
      [] [x] []
      [] [] []
    A 3 x 3 board, with 8 possible squares for storing stuff because the mid square is in use. I can use 1x1 shapes, 1x2 shapes, 2x1, 3x1, etc. The formula gives me the number of rectangles as (9+3)^2 / 4 = 144/4 = 36 stacking locations. However, as the middle square is unoccupiable, these cannot all be realised. By hand I can see the impossible options: 1x1 shapes = 1 impossible (the mid square), 2x1 shapes = 4 impossible (anything which uses the mid square), 3x1 = 2 impossible, 2x2 = 4 impossible, etc. Total impossible combinations = 16. Therefore the solution I'm after is 36 - 16 = 20 possible rectangular stacking locations on a 3x3 board. I've coded this in C# to solve it through trial and error, but I'm really after a formula, as I want to solve for massive values of n and eventually make this 3D. Can anyone point me to any formulas for this kind of spatial / tessellation problem? Also, any idea on how to take the total-rectangle formula into 3D is very welcome. Thanks!
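
    A closed form worth checking (my own derivation, not part of the original post): a rectangle covers a fixed cell (r, c) of an n x n grid in r(n-r+1) * c(n-c+1) ways, so for odd n the placements that hit the centre number ((n+1)^2/4)^2, giving (n(n+1)/2)^2 - ((n+1)^2/4)^2 allowed placements. A C# sketch that checks it against brute force for small n:

        static long RectanglesAvoidingCentre(long n)
        {
            long total = (n * (n + 1) / 2) * (n * (n + 1) / 2);          // all sub-rectangles of the n x n grid
            long m = (n + 1) / 2;                                         // centre row/column, 1-based (n odd)
            long throughCentre = (m * (n + 1 - m)) * (m * (n + 1 - m));   // placements that cover the centre cell
            return total - throughCentre;                                 // n = 3 gives 36 - 16 = 20
        }

        static long RectanglesAvoidingCentreBruteForce(int n)
        {
            int m = (n + 1) / 2;
            long count = 0;
            for (int r1 = 1; r1 <= n; r1++)
                for (int r2 = r1; r2 <= n; r2++)
                    for (int c1 = 1; c1 <= n; c1++)
                        for (int c2 = c1; c2 <= n; c2++)
                            if (!(r1 <= m && m <= r2 && c1 <= m && m <= c2))
                                count++;                                  // rectangle [r1..r2] x [c1..c2] misses the centre
            return count;
        }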

    Read the article

  • Visual Studio Conversion Suite

    - by KingPop
    I have this as my conversion program for "Length"; how can I do it the simplest way, instead of repeating If / ElseIf / Else so much? I do not have much experience and am trying to improve my programming skills in Visual Studio 2008. Basically, I get annoyed with the formulas because I don't know if they are right; I use Google, but it doesn't help because I don't know how to get it right when the program converts from type to type.
    Public Class Form2
        Dim Metres As Integer
        Dim Centimetres As Integer
        Dim Inches As Integer
        Dim Feet As Integer
        Dim Total As Integer
        Private Sub Form2_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
            ErrorMsg.Hide()
        End Sub
        Private Sub btnConvert_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnConvert.Click
            Metres = 1
            Centimetres = 0.01
            Inches = 0.0254
            Feet = 0.3048
            txtTo.Text = 0
            If txtFrom.Text <> "" Then
                If IsNumeric(txtFrom.Text) And IsNumeric(txtTo.Text) Then
                    If cbFrom.Text = "Metres" And cbTo.Text = "Centimetres" Then
                        Total = txtFrom.Text * Metres
                        txtTo.Text = Total
                    ElseIf cbFrom.Text = "Metres" And cbTo.Text = "Inches" Then
                        Total = txtFrom.Text * 100
                        txtTo.Text = Total
                    ElseIf cbFrom.Text = "Metres" And cbTo.Text = "Feet" Then
                    ElseIf cbFrom.Text = "Centimetres" And cbTo.Text = "Metres" Then
                    ElseIf cbFrom.Text = "Centimetres" And cbTo.Text = "Inches" Then
                    ElseIf cbFrom.Text = "Centimetres" And cbTo.Text = "Feet" Then
                    ElseIf cbFrom.Text = "Inches" And cbTo.Text = "Metres" Then
                    ElseIf cbFrom.Text = "Inches" And cbTo.Text = "Centimetres" Then
                    ElseIf cbFrom.Text = "Inches" And cbTo.Text = "Feet" Then
                    ElseIf cbFrom.Text = "Feet" And cbTo.Text = "Metres" Then
                    ElseIf cbFrom.Text = "Feet" And cbTo.Text = "Centimetres" Then
                    ElseIf cbFrom.Text = "Feet" And cbTo.Text = "Inches" Then
                    End If
                End If
            End If
        End Sub
    End Class
    This is the source for what I have done at the moment.
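
    For what it's worth, the usual way to avoid the If/ElseIf ladder is to store one conversion factor per unit and convert through a base unit. A sketch in C# (the same shape works in VB.NET with a Dictionary(Of String, Double)); the class and method names are mine, not from the post:

        using System.Collections.Generic;

        static class LengthConverter
        {
            // metres per one unit of each supported length unit
            static readonly Dictionary<string, double> MetresPerUnit = new Dictionary<string, double>
            {
                { "Metres", 1.0 },
                { "Centimetres", 0.01 },
                { "Inches", 0.0254 },
                { "Feet", 0.3048 }
            };

            public static double Convert(double value, string fromUnit, string toUnit)
            {
                double metres = value * MetresPerUnit[fromUnit];   // normalise to the base unit
                return metres / MetresPerUnit[toUnit];             // then scale to the target unit
            }
        }

        // e.g. LengthConverter.Convert(2, "Metres", "Centimetres") -> 200,
        //      LengthConverter.Convert(12, "Inches", "Feet") -> 1 (up to floating-point rounding)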

    Read the article

  • Project Euler considered harmful

    - by xxxxxxx
    Hi, I've done some Project Euler problems and was able to solve them very fast because I already knew all the theory behind them; I learned it "accidentally", because I also had to learn it for university. I also used to solve olympiad problems; I wasn't very good, but I solved some of them. I've reached the conclusion that Project Euler problems are taken out of their context (and olympiad problems as well), and that's why they are hard. Mathematics and its theory are taught in order to make the problems easy, yet Project Euler apparently invites making them hard again. Why? I honestly think this is a complete waste of time. Mathematicians had centuries at their disposal to solve math problems, and they developed theories to explain properly why certain things happen. I think olympiad problems and Project Euler problems are really useless. What's your take on Project Euler? Do you get something out of it, or do you just find formulas on some websites, implement the code fast, get the result and solve the problem?

    Read the article

  • Improving Performance of Crystal Reports using Stored Procedures

    - by mjh41
    Recently I updated a Crystal Report that was doing all of its work on the client side (selects, formulas, etc.) and changed all of the logic to be done on the server side through stored procedures, using an Oracle 11g database. Now the report is only being used to display the output of the stored procedures and nothing else. Everything I have read on this subject says that utilizing stored procedures should greatly reduce the running time of the report, but it still takes roughly the same amount of time to retrieve the data from the server. Is there something wrong with the stored procedure I have written, or is the issue in the Crystal Report itself? Here is the stored procedure code along with the package that defines the necessary REF CURSOR.
    CREATE OR REPLACE PROCEDURE SP90_INVENTORYDATA_ALL
    (
        invdata_cur IN OUT sftnecm.inv_data_all_pkg.inv_data_all_type,
        dCurrentEndDate IN vw_METADATA.CASEENTRCVDDATE%type,
        dCurrentStartDate IN vw_METADATA.CASEENTRCVDDATE%type
    )
    AS
    BEGIN
        OPEN invdata_cur FOR
            SELECT vw_METADATA.CREATIONTIME,
                   vw_METADATA.RESRESOLUTIONDATE,
                   vw_METADATA.CASEENTRCVDDATE,
                   vw_METADATA.CASESTATUS,
                   vw_METADATA.CASENUMBER,
                   (CASE WHEN vw_METADATA.CASEENTRCVDDATE < dCurrentStartDate
                          AND ((vw_METADATA.CASESTATUS is null OR vw_METADATA.CASESTATUS != 'Closed')
                               OR TO_DATE(vw_METADATA.RESRESOLUTIONDATE, 'MM/DD/YYYY') >= dCurrentStartDate)
                         then 1 else 0 end) InventoryBegin,
                   (CASE WHEN (to_date(vw_METADATA.RESRESOLUTIONDATE, 'MM/DD/YYYY') BETWEEN dCurrentStartDate AND dCurrentEndDate)
                          AND vw_METADATA.RESRESOLUTIONDATE is not null
                          AND vw_METADATA.CASESTATUS is not null
                         then 1 else 0 end) CaseClosed,
                   (CASE WHEN vw_METADATA.CASEENTRCVDDATE BETWEEN dCurrentStartDate AND dCurrentEndDate
                         then 1 else 0 end) CaseCreated
              FROM vw_METADATA
             WHERE vw_METADATA.CASEENTRCVDDATE <= dCurrentEndDate
             ORDER BY vw_METADATA.CREATIONTIME, vw_METADATA.CASESTATUS;
    END SP90_INVENTORYDATA_ALL;
    And the package:
    CREATE OR REPLACE PACKAGE inv_data_all_pkg AS
        TYPE inv_data_all_type IS REF CURSOR RETURN inv_data_all_temp%ROWTYPE;
    END inv_data_all_pkg;

    Read the article

  • Mathematical annotations in a PDF file

    - by kvaruni
    I like to annotate the papers I read digitally. Numerous programs exist to help with this; for example, on OS X one can use programs such as Skim or even Preview. However, making annotations is dreadful when one wishes to add mathematical content, such as formulas or Greek letters. A cumbersome "solution" is to select the desired symbols one by one using the Special Characters palette, though this considerably slows down the annotation process. Is there any way to add mathematical annotations to a PDF? The only two limitations I would impose on a solution are that 1) the mathematical text needs to be selectable, i.e. it must be text, and 2) I want to limit the number of programs needed, to make the process as painless as possible. Some of the more promising solutions I have tried include generating LaTeX with LaTeXiT, but it seems to be impossible to add a PDF on top of another PDF. Another attempt was to use jsMath to generate the symbols and copy-paste these as annotations using one of the jsMath fonts; this results in unreadable, incorrect characters.

    Read the article

  • Formula parsing / evaluation routine or library with generic DLookup functionality

    - by tbone
    I am writing a .NET application where I must support user-defined formulas that can perform basic mathematics as well as access data from any arbitrary table in the database. I have the math part working, using JScript Eval(). What I haven't decided on is a nice way to do the generic table lookups. For example, I may have a formula something like:
    Column: BonusAmount
    Formula: {CurrentSalary} * 1.5 * {[SystemSettings][Value][SettingName=CorpBonus AND Year={Year}]}
    So, in this example I would replace {xxx} and {Year} with the value of column xxx from the current row, and I would replace the second part with the value of (SELECT Value FROM SystemSettings WHERE SettingName='CorpBonus' AND Year=2008). So, basically, I am looking for something very much like the MS Access DLookup function: DLookup(expression, domain, [criteria]), e.g. DLookup("[UnitPrice]", "Order Details", "OrderID = 10248"). But I also need an overall parsing routine that can tell whether to just look up a value in the current row or to look into another table. It would also be nice to support aggregate functions (i.e. DAvg, DMax, etc.) and to have all the weird edge cases handled. So I wonder if anyone knows of any sort of existing library, or has a nice routine, that can handle these formula-parsing and database lookup / aggregate resolution requirements.
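
    A rough sketch of the substitution step for this token syntax (my own handling, not an existing library): expand plain {Column} tokens from the current row first (so criteria like Year={Year} get filled in), then turn each {[Table][Column][Criteria]} token into a DLookup-style scalar query, and hand the expanded string to the existing Eval():

        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        static class FormulaExpander
        {
            public static string ExpandTokens(string formula, IDictionary<string, object> currentRow)
            {
                // {CurrentSalary}, {Year} ... -> values from the row being evaluated
                formula = Regex.Replace(formula, @"\{(\w+)\}",
                    m => currentRow[m.Groups[1].Value].ToString());

                // {[SystemSettings][Value][SettingName=CorpBonus AND Year=2008]} -> scalar lookup in another table
                formula = Regex.Replace(formula, @"\{\[(\w+)\]\[(\w+)\]\[(.+?)\]\}",
                    m => LookupScalar(m.Groups[1].Value, m.Groups[2].Value, m.Groups[3].Value).ToString());

                return formula;   // ready for the JScript Eval() step
            }

            // Placeholder: would run something like SELECT <column> FROM <table> WHERE <criteria>
            // (validated and parameterised in real code) and return the single value.
            static object LookupScalar(string table, string column, string criteria)
            {
                return 0;
            }
        }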

    Read the article

  • How to analyze the efficiency of this algorithm Part 2

    - by Leonardo Lopez
    I found an error in the way I explained this question before, so here it goes again:
    FUNCTION SEEK(A, X)
    1. FOUND = FALSE
    2. K = 1
    3. WHILE (NOT FOUND) AND (K < N)
       a. IF (A[K] = X) THEN
          1. FOUND = TRUE
       b. ELSE
          1. K = K + 1
    4. RETURN
    Analyzing this algorithm (pseudocode), I can count the number of steps it takes to finish and analyze its efficiency in theta notation, T(n): a linear algorithm. OK. The following code depends on the formulas inside the loop in order to finish; the deal is that there is no variable N in the code, so the efficiency of this algorithm will always be the same, since we're assigning the value 1 to both the A and B variables:
    1. A = 1
    2. B = 1
    3. UNTIL (B > 100)
       a. B = 2A - 2
       b. A = A + 3
    Now I believe this algorithm performs in constant time, always. But how can I use algebra to find out how many steps it takes to finish?
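
    For the second fragment, the algebra can be done by writing the loop variables as functions of the pass number (worked out here, not given in the post): before pass k, A = 3k - 2, so pass k sets B = 2(3k - 2) - 2 = 6k - 6, and B > 100 first holds at k = 18. The loop therefore always runs exactly 18 passes, i.e. constant time, which a quick C# sketch confirms:

        static int CountPasses()
        {
            int a = 1, b = 1, passes = 0;
            do
            {
                b = 2 * a - 2;   // on pass k this is 6k - 6
                a = a + 3;       // after pass k, a = 3k + 1
                passes++;
            } while (b <= 100);  // UNTIL (B > 100)
            return passes;       // always 18, independent of any input
        }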

    Read the article

  • System.Windows.Forms.DataVisualization Namespace Fine in One Class but Not in Another

    - by jxpx777
    Hi. I'm getting this error:
    The type or namespace name 'DataVisualization' does not exist in the namespace 'System.Windows.Forms' (are you missing an assembly reference?)
    Here is the using section of the class:
    using System;
    using System.Collections;
    using System.Collections.Generic;
    using System.Windows.Forms.DataVisualization.Charting;
    using System.Windows.Forms.DataVisualization.Charting.Borders3D;
    using System.Windows.Forms.DataVisualization.Charting.ChartTypes;
    using System.Windows.Forms.DataVisualization.Charting.Data;
    using System.Windows.Forms.DataVisualization.Charting.Formulas;
    using System.Windows.Forms.DataVisualization.Charting.Utilities;
    namespace myNamespace
    {
        public class myClass
        {
            // Usual class stuff
        }
    }
    The thing is that I am using the same DataVisualization usings in another class. The only difference I can think of is that the classes giving this missing-namespace error are Solution Items rather than members of a specific project; the projects reference them by link. Anyone have thoughts on what the problem is? I've installed the chart component, .NET 3.5 SP1, and the Chart Add-in for Visual Studio 2008. UPDATE: I moved the items from Solution Items to be regular members of my project and I'm still seeing the same behavior. UPDATE 2: Removing the items from the Solution Items and placing them under my project worked; another project was still referencing the files, which is why I didn't think it worked previously. I'm still curious, though, why I couldn't use the namespace when the classes were Solution Items, but moving them underneath a project (with no modifications, mind you) instantly made them recognizable.

    Read the article

  • How to get levels for Fry Graph readability formula?

    - by Vic
    Hi, I'm working on an application (C#) that applies some readability formulas to a text, like Gunning-Fog, Precise SMOG, and Flesch-Kincaid. Now I need to implement the Fry-based Grade formula in my program. I understand the formula's logic: pretty much, you take three 100-word samples, calculate the average sentences per 100 words and syllables per 100 words, and then use a graph to plot the values. Here is a more detailed explanation of how this formula works. I already have the averages, but I have no idea how I can tell my program to "go check the graph, plot the values and give me a level." I don't have to show the graph to the user, only the level. I was thinking that maybe I could keep all the values in memory, divided into levels, for example: Level 1: values whose sentence average is between 10.0 and 25+, and whose syllable average is between 108 and 132; Level 2: values whose sentence average is between 7.7 and 10.0, and... so on. But the problem is that so far, the only place where I have found the values that define a level is the graph itself, and they aren't very accurate, so if I apply the approach above, taking the values from the graph, my level estimations would be too imprecise, and thus the Fry-based Grade would not be accurate. So, maybe one of you knows of some place where I can find exact values for the different levels of the Fry-based Grade, or maybe you can help me think of a way to work around this. Thanks
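
    A sketch of the table-driven idea described above, in C#. The bounds below are placeholders only (the Level 1 numbers are the ones quoted above; the rest are NOT the real Fry graph boundaries and would have to be read off a published version of the graph before this is usable):

        sealed class FryBand
        {
            public int Grade;
            public double MinSentences, MaxSentences;   // sentences per 100 words
            public double MinSyllables, MaxSyllables;   // syllables per 100 words
        }

        static readonly FryBand[] Bands =
        {
            new FryBand { Grade = 1, MinSentences = 10.0, MaxSentences = 25.0, MinSyllables = 108, MaxSyllables = 132 },
            new FryBand { Grade = 2, MinSentences = 7.7,  MaxSentences = 10.0, MinSyllables = 108, MaxSyllables = 146 },
            // ... remaining grades, filled in from the graph ...
        };

        static int? GradeFor(double avgSentences, double avgSyllables)
        {
            foreach (var band in Bands)
                if (avgSentences >= band.MinSentences && avgSentences <= band.MaxSentences &&
                    avgSyllables >= band.MinSyllables && avgSyllables <= band.MaxSyllables)
                    return band.Grade;
            return null;   // falls in the "invalid" region of the graph
        }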

    Read the article

  • How to compute the average probe length for success and failure - linear probing (hash tables)

    - by fang_dejavu
    Hi everyone, I'm doing an assignment for my Data Structures class. We were asked to study linear probing with load factors of .1, .2, .3, ..., and .9. The formulas for testing are: the average probe length using linear probing is roughly (1 + 1/(1-L))/2 for success and (1 + 1/(1-L)^2)/2 for failure. We are required to find the theoretical values using the formulas above, which I did (just plug the load factor into the formula); then we have to calculate the empirical values, which I am not quite sure how to do. Here is the rest of the requirements: For each load factor, 10,000 randomly generated positive ints between 1 and 50000 (inclusive) will be inserted into a table of the "right" size, where "right" is strictly based upon the load factor you are testing. Repeats are allowed. Be sure that your formula for randomly generated ints is correct. There is a class called Random in java.util. USE it! After a table of the right (based upon L) size is loaded with 10,000 ints, do 100 searches of newly generated random ints from the range of 1 to 50000. Compute the average probe length for each of the two formulas and indicate the denominators used in each calculation. So, for example, each test for a .5 load would have a table of size approximately 20,000 (adjusted to be prime), and similarly each test for a .9 load would have a table of approximate size 10,000/.9 (again adjusted to be prime). The program should run displaying the various load factors tested, the average probe length for each search (the two denominators used to compute the averages will add to 100), and the theoretical answers using the formulas above. How do I calculate the empirical success?
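
    Empirically, the probe length of a single search is just the number of slots examined before the key (success) or an empty slot (failure) is found. A sketch of the counting, written in C# for brevity (the same shape works in Java); it assumes keys are >= 1 so that 0 can mark an empty slot, and a table that is never completely full:

        static int ProbeLength(int[] table, int key, out bool found)
        {
            int m = table.Length;
            int index = key % m;
            int probes = 0;
            while (true)
            {
                probes++;
                if (table[index] == 0) { found = false; return probes; }   // empty slot: unsuccessful search
                if (table[index] == key) { found = true; return probes; }  // key found: successful search
                index = (index + 1) % m;                                    // linear probing: next slot
            }
        }

        // Run the 100 searches, keep separate running totals (and counts) of the probe lengths
        // for successful and unsuccessful searches, and divide each total by its count; those two
        // averages are the empirical counterparts of the formulas above.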

    Read the article

  • Truly declarative language?

    - by gjvdkamp
    Hi all, does anyone know of a truly declarative language? The behaviour I'm looking for is kind of what Excel does, where I can define variables and formulas, and have a formula's result change when its inputs change (without having to set the answer again myself). The behaviour I'm looking for is best shown with this pseudo-code:
    X = 10      // define and assign two variables
    Y = 20
    Z = X + Y   // declare a formula that uses these two variables
    X = 50      // change one of the input variables
    ?Z          // asking for Z should now give 70 (50 + 20)
    I've tried this in a lot of languages like F#, Python, Matlab etc., but every time they come up with 30 instead of 70. Which is correct from an imperative point of view, but I'm looking for more declarative behaviour, if you know what I mean. And this is just a very simple calculation; when things get more difficult it should handle stuff like recursion and memoization automagically. The code below would obviously work in C#, but it's just so much code for the job; I'm looking for something a bit more to the point, without all that 'technical noise':
    class BlaBla
    {
        public int X { get; set; }   // this used to be even worse before 3.0
        public int Y { get; set; }
        public int Z { get { return X + Y; } }
    }
    static void main()
    {
        BlaBla bla = new BlaBla();
        bla.X = 10;
        bla.Y = 20;
        // can't define anything here
        bla.X = 50;   // bit pointless here but I'll do it anyway.
        Console.WriteLine(bla.Z);   // 70, hurray!
    }
    This just seems like so much code, curly braces and semicolons that add nothing. Is there a language/application (apart from Excel) that does this? Maybe I'm not doing it right in the mentioned languages, or I've completely missed an app that does just this. I prototyped a language/application that does this (along with some other stuff) and am thinking of productizing it. I just can't believe it's not there yet. Don't want to waste my time. Thanks in advance, Gert-Jan

    Read the article

  • Scala and Java BigDecimal

    - by geejay
    I want to switch from Java to a scripting language for the math-based modules in my app, due to the readability and functional limitations of mathy Java. For example, in Java I have this:
    BigDecimal x = new BigDecimal("1.1");
    BigDecimal y = new BigDecimal("1.1");
    BigDecimal z = x.multiply(y.pow(2));
    As you can see, without BigDecimal operator overloading, simple formulas get complicated real quick. With doubles this looks fine, but I need the precision. I was hoping that in Scala I could do this:
    var x = 1.1;
    var y = 0.1;
    print(x + y);
    and by default get decimal-like behaviour; alas, Scala doesn't use decimal calculation by default. Then I do this in Scala:
    var x = BigDecimal(1.1);
    var y = BigDecimal(0.1);
    println(x + y);
    and I still get an imprecise result. Is there something I am not doing right in Scala? Maybe I should use Groovy to maximise readability (it uses decimals by default)?

    Read the article

  • How to compare 2 complex spreadsheets running in parallel for consistency with each other?

    - by tbone
    I am working on converting a large number of spreadsheets to use a new third-party data access library (converting from third-party library #1 to third-party library #2). FYI: a call to a UDF (user-defined function) is placed in a cell, and when that is refreshed it pulls the data into a pivot table below the formula. Both libraries behave the same and produce the same output, except that small irregularities can arise, such as an additional field being shown in the output pivot table using library #2, which can affect formulas on the sheet if data is being read from the pivot table without using GetPivotData. So I have ~100 of these very complicated (20+ worksheets per workbook) spreadsheets that I have to convert and run in parallel for a period of time, to see if the output using the new data access library matches the old one. Is there some clever approach to this, so I don't have to spend a large amount of time analyzing each sheet to determine the specific elements to compare? Two rough ideas come to mind:
    1. Create a Validator workbook that has the same number of worksheets, and simply do a Workbook1!Worksheet1!A1 - Workbook2!Worksheet3!A1 for every possible cell on each sheet.
    2. Roughly the equivalent of #1, but traverse the cells in the two books using VBA and log any cells that do not match.
    I don't particularly like either idea; can anyone think of something better, maybe some third-party utility I could buy?

    Read the article

  • Is it possible to filter data used by a pivot table based on filtering the rows in a source table in Excel?

    - by Geoffrey Stoel
    I have developed a dashboard in Excel 2007 that uses one source table in a sheet (filled by a query on our data warehouse) and multiple pivot tables making different cross-sections of this data. I use GETPIVOTDATA in almost a hundred formulas to give me the right value for a specific indicator in my dashboard. This all works fine. However, I have now been asked to make the dashboard for 5 different segments. As you can imagine, I don't want to create 5 different workbooks for this and have to maintain the dashboard logic in all of them. So my question is the following: is it possible to automatically (through VBA or any other means) filter the results in my source table, which is the source for my pivot tables and thus for my dashboard values? So schematically: DATABASE_VIEW -> SOURCE_TABLE -> 12 pivot tables -> 100 GETPIVOTDATA functions. Preferably I would like to load all the segments into the source table (one view on my database) and then filter the data in the source table, which results in filtered source data for my pivots. This way I can quickly change between segments in the dashboard (refreshing pivots only, without requerying the database). The source table has the column CUSTOMER_SEGMENT available to filter on. Any help is appreciated. Geoffrey

    Read the article

  • How to change the meaning of pointer access operator

    - by kumar_m_kiran
    Hi all, this may be a very obvious question; pardon me if so. I have the below code snippet out of my project:
    #include <iostream>
    using namespace std;

    class X
    {
    public:
        int i;
        X() : i(0) {}
    };

    int main(int argc, char *argv[])
    {
        X *ptr = new X[10];
        unsigned index = 5;
        cout << ptr[index].i << endl;
        return 0;
    }
    Question: can I change the meaning of ptr[index]? I need it to return the value of ptr[a[index]], where a is an array used for sub-indexing. I do not want to modify the existing source code; any new function added which can change the behaviour is fine. The access through the index operator appears in too many places (536, to be precise) in my code, with complex formulas inside the subscript, so I am not inclined to change the code in all those locations.
    PS:
    1. I tried operator overloading and came to the conclusion that it is not possible.
    2. Also, p[i] will be transformed into *(p + i), and I cannot redefine the basic operator '+'.
    So I just want to reconfirm my understanding and ask whether there are any possible shortcuts to achieve this. Otherwise I need to fix it by the royal method of changing every line of code :).

    Read the article

  • Why is Excel's 'Evaluate' method a general expression evaluator?

    - by jtolle
    A few questions have come up recently involving the Application.Evaluate method callable from Excel VBA. The old XLM macro language also exposes an EVALUATE() function. Both can be quite useful. Does anyone know why the evaluator that is exposed can handle general expressions, though? My own hunch is that Excel needed to give people a way to get ranges from string addresses, and to get the value of named formulas, and just opening a portal to the expression evaluator was the easiest way. (The help for the VBA version does say its purpose is to "convert a Microsoft Excel name to an object or a value".) But of course you don't need the ability to evaluate arbitrary expressions just to do that. (That is, Excel could provide a Name.Evaluate method or something instead.) Application.Evaluate seems kind of... unfinished. Its full behavior isn't very well documented, and there are quite a few quirks and limitations (as described by Charles Williams here: http://www.decisionmodels.com/calcsecretsh.htm) with what is exposed. I suppose the answer could be simply "why not expose it?", but I'd be interested to know what design decisions led to this feature taking the form that it does. Failing that, I'd be interested to hear other hunches.
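
    For reference, a small illustration of that dual behaviour, shown here via C# interop (the same two calls work from VBA); it assumes a running Excel Application object with an active worksheet:

        using System;
        using Excel = Microsoft.Office.Interop.Excel;

        static void ShowEvaluate(Excel.Application app)
        {
            object v = app.Evaluate("2*3+4");    // an arbitrary expression comes back as a value (10.0)
            object r = app.Evaluate("A1:B2");    // a name or address comes back as a Range object
            Console.WriteLine("{0} / a range of {1} cells", v, ((Excel.Range)r).Count);
        }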

    Read the article

  • Scalably processing large amounts of complicated database data in PHP, many times a day.

    - by Eph
    I'm soon to be working on a project that poses a problem for me. It's going to require, at regular intervals throughout the day, processing tens of thousands of records, potentially over a million. Processing is going to involve several (potentially complicated) formulas, the generation of several random factors, writing some new data to a separate table, and updating the original records with some results. This needs to occur for all records, ideally, every three hours. Each new user to the site will be adding between 50 and 500 records that need to be processed in such a fashion, so the number will not be steady. The code hasn't been written yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need to use cron jobs, but I'm concerned that processing records of this size may cause the site to freeze up, perform slowly, or just annoy my hosting company every three hours. I'd like to know if anyone has any experience or tips on similar subjects. I've never worked at this magnitude before, and for all I know this will be trivial for the server and not pose much of an issue. As long as ALL records are processed before the next three-hour period occurs, I don't care if they aren't processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch), so I've been wondering whether I should process in batches every 5 minutes, 15 minutes, hour, whatever works, and how best to approach this (and make it scalable in a way that is fair to all users)?

    Read the article

  • Sort ranges in an array in Google Apps Script

    - by user1637113
    I have a timesheet spreadsheet for our company and I need to sort the employees by each timesheet block (15 rows by 20 columns). I have the following code, which I had help with, but the array quits sorting once it comes to a block without an employee name (I would like these to be shuffled to the bottom). Another complication I am having is that there are numerous formulas in these cells, and when I run the script as is, it removes them; I would like to keep these intact if at all possible. Here's the code:
    function sortSections() {
      var activeSheet = SpreadsheetApp.getActiveSheet();
      //SETTINGS
      var sheetName = activeSheet.getSheetName(); //name of sheet to be sorted
      var headerRows = 53;     //number of header rows
      var pageHeaderRows = 5;  //page totals to top of next emp section
      var sortColumn = 11;     //index of column to be sorted by; 1 = column A
      var pageSize = 65;
      var sectionSize = 15;    //number of rows in each section
      var col = sortColumn - 1;
      var sheet = SpreadsheetApp.getActive().getSheetByName(sheetName);
      var data = sheet.getRange(headerRows + 1, 1, sheet.getMaxRows() - headerRows, sheet.getLastColumn()).getValues();
      var data3d = [];
      var dataLength = data.length / sectionSize;
      for (var i = 0; i < dataLength; i++) {
        data3d[i] = data.splice(0, sectionSize);
      }
      data3d.sort(function(a, b) {
        return (((a[0][col] < b[0][col]) && a[0][col]) ? -1 : ((a[0][col] > b[0][col]) ? 1 : 0));
      });
      var sortedData = [];
      for (var k in data3d) {
        for (var l in data3d[k]) {
          sortedData.push(data3d[k][l]);
        }
      }
      sheet.getRange(headerRows + 1, 1, sortedData.length, sortedData[0].length).setValues(sortedData);
    }

    Read the article

  • C# and Excel best practices

    - by rlp
    I am doing a lot of MS Excel interop in C# (Visual Studio 2012) using Microsoft.Office.Interop.Excel. It requires a lot of tiresome manual code to include Excel formulas, do formatting of text and numbers, and make graphs. I would very much appreciate any input on how to do this better. I have been looking at Visual Studio Tools for Office, but I am uncertain about its functionality; I get that it is required for making Excel add-ins, but does it help with Excel automation? I have desperately been trying to find information on working with Excel in Visual Studio 2012 using C#. I did find some good but short tutorials, but I would really like a book on the subject to learn the field more in depth with regard to functionality and best practices. Searching Amazon with my limited knowledge only gives me books on VSTO using older versions of Visual Studio. I would not like to use VBA. My applications use Excel mainly for visualizing data compiled from different sources; I also do data processing where Excel is not required. Furthermore, I can write C# but not VB.
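
    In the meantime, a minimal sketch of the kind of helper that cuts down the repetitive interop code (the helper names and format strings are mine, not an established library):

        using Excel = Microsoft.Office.Interop.Excel;

        static class SheetHelper
        {
            public static void WriteFormula(Excel.Worksheet ws, string cell, string formula, string numberFormat = null)
            {
                Excel.Range range = ws.Range[cell];
                range.Formula = formula;                    // e.g. "=SUM(B2:B10)"
                if (numberFormat != null)
                    range.NumberFormat = numberFormat;      // e.g. "#,##0.00"
            }

            public static void Bold(Excel.Worksheet ws, string cell)
            {
                ws.Range[cell].Font.Bold = true;
            }
        }

        // Usage: SheetHelper.WriteFormula(ws, "B11", "=SUM(B2:B10)", "#,##0.00");
        //        SheetHelper.Bold(ws, "B11");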

    Read the article

  • Why is Excel 2010/2013 taking 10 seconds to open any file?

    - by jbkly
    I have a fast Windows 7 PC with two SSDs and 16GB of RAM, so I'm used to programs loading very fast. But recently, for no reason I can figure out, Excel has started taking way too long to open Excel files (of any size, even blank files). This is occurring with Excel 2010 and with Excel 2013 after I upgraded, hoping to solve the problem. Here are a couple of scenarios:
    - If I start Excel directly, it opens almost instantly. No problem there.
    - If I start Excel directly and then open any Excel file (.xls or .xlsx), it loads almost instantly. Still no problem.
    - BUT if I attempt to open any Excel file directly, with Excel not running, it consistently takes 10-11 seconds for Excel to start. I get no error messages, just a spinning cursor for 10-11 seconds, and then the file opens.
    During the delay while Excel is trying to start, I'm not really seeing any discernible spike in CPU or memory usage, other than explorer.exe. This problem is only occurring with Excel, not Word or any other program I'm aware of. I've searched around quite a bit on this question and found various others who have experienced it, but the solutions that worked for them are not working for me. For a few people it was a problem with scanning network drives, but my problem is purely with local files; I have no network drives, and the problem persists even with all network connections disabled. Some people suggested worksheets with corrupted formulas or links, but I'm experiencing this with ANY Excel file, even blank worksheets. Others thought it was a problem with add-ins, but I have all Excel add-ins disabled (as far as I can tell). One person solved it by disabling a "clipboard manager" process that was running in the background, but I don't have that. I've disabled as many startup and background processes as I can, but the problem persists. I've run malware scans, disk cleanup, CCleaner, and installed Excel 2013. I've deleted temporary files, enabled SuperFetch, and edited registry keys. Still can't get rid of the problem. Any ideas? My system details: Windows 7 Professional SP1 64-bit, Excel 2013 32-bit, 16GB RAM, all programs installed on SSD.

    Read the article

  • How to set 2 conditions / criteria for VLOOKUP / LOOKUP / etc. in OpenOffice Calc (or Excel)

    - by MestreLion
    I have this spreadsheet that started as a silly aid for a game (Mafia Wars 2) but grew into a tricky spreadsheet question. In the game your character has 9 "slots" for weapons and armor, one for each "type": Light Weapon, Heavy Weapon, Body Armor, Head Armor, etc. So I made a list of all weapons and armor available in the game, one item per row. Example:
    SHOP         ITEM TYPE       ITEM NAME       ATK  DEF  PRICE   EQUIPPED?
    Marketplace  Weapon Light    Konrad Knife    16   5    5.500
    Marketplace  Weapon Light    Ice Queen       19   6    8.200
    Marketplace  Armor Body Up   Layered Polym   0    31   8.600
    Marketplace  Armor Body Up   Full Shield     7    42   17.650
    Marketplace  Weapon Heavy    Konrad Bullpup  53   25   24.500
    Marketplace  Weapon Heavy    Full Moon Blow  73   12   24.500  x
    Marketplace  Armor Body Low  Knee Pads       17   26   14.200  x
    Marketplace  Armor Body Low  Army Boots      15   55   24.500
    Bone Yard    Weapon Light    Bone Launcher   41   2    9.400   x
    Neon Strip   Vehicle Ground  Supercharged    41   34   24.500
    Dead End     Weapon Heavy    Sharp Sickle    21   5    24.500
    Dead End     Armor Body Low  Unholy Boots    5    36   15.000
    Dead End     Armor Head      Hockey Mask     5    18   15.900  x
    The last column indicates the items I have already bought and equipped (marked with "x"). What I need is a formula that, for each "slot" (item type), returns the info for the item of that kind that I am using. That would be:
    ITEM TYPE       SHOP NAME    ITEM NAME       ATK  DEF  PRICE
    Weapon Light    Bone Yard    Bone Launcher   41   2    9.400
    Weapon Heavy    Marketplace  Full Moon Blow  73   12   24.500
    Weapon Special  --           --              --   --   --
    Armor Body Up   --           --              --   --   --
    Armor Body Low  Marketplace  Knee Pads       17   26   14.200
    Armor Head      Dead End     Hockey Mask     5    18   15.900
    Vehicle Ground  --           --              --   --   --
    Vehicle Water   --           --              --   --   --
    Vehicle Air     --           --              --   --   --
    The item types are fixed, so they can be hard-coded, one row per item type. So, for the first result line, it would return data from the row where the second column is "Weapon Light" and the last column is "x". Basically, I need a LOOKUP (or VLOOKUP, or anything else) that uses two criteria to find a given row: the item type and the "x" marker. The question is: how? I am using OpenOffice Calc 3.2.1, but since it shares so many functions with MS Excel, answers for Excel are also fine (as long as they only use regular formulas; no VBScript, macros or VBA). Last but not least, suggestions or solutions for rearranging the data so that this problem becomes easier to solve are also welcome. Thanks!

    Read the article

  • Why does Excel now give me an "already existing name range" error on Copy Sheet?

    - by WilliamKF
    I've been working on a Microsoft Excel 2007 spreadsheet for several days. I'm working from a master template-like sheet and copying it to a new sheet repeatedly. Up until today, this was happening with no issues; however, in the middle of today this suddenly changed and I do not know why. Now, whenever I try to copy a worksheet I get about ten dialogs, each one with a different named range (shown below as 'XXXX'), and I click Yes for each one:
    A formula or sheet you want to move or copy contains the name 'XXXX', which already exists on the destination worksheet. Do you want to use this version of the name? To use the name as defined in destination sheet, click Yes. To rename the range referred to in the formula or worksheet, click No, and enter a new name in the Name Conflict dialog box.
    The named ranges refer to cells in the sheet. For example, E6 is named PRE on multiple sheets (and has been all along), and some of the formulas refer to PRE instead of $E$6; one of the 'XXXX' names above is this PRE. These names should only be resolved within the sheet in which they appear. This was not an issue before, despite the same name existing on multiple sheets before. I want to keep my named ranges.
    What could have changed in my spreadsheet to cause this change in behavior? I've gone back to prior sheets created this way and now they give the message too when copied. I tried a different computer and a different user, and the same behavior is seen everywhere, so I can only conclude something in the spreadsheet has changed. What could it be, and how can I get back the old behavior whereby I can copy sheets with named ranges and not get any errors?
    Looking in the Name Manager, I see that the names being complained about show up twice, once with scope Template and again with scope Workbook. If I delete the Template-scoped ones, the error goes away on copy; however, I get a bunch of #REF! errors. If I delete the Workbook-scoped ones, all seems okay and the errors on copy go away too, so perhaps this is the answer, but I'm nervous about what effect this deletion will have and wonder how the Workbook-scoped names came into existence in the first place. Will it be safe to just delete the Workbook-scoped entries in the Name Manager, and how might these have come into existence without my knowing it to begin with?

    Read the article

  • Advice on learning programming languages and math.

    - by Joris Ooms
    I feel like I'm getting stuck lately when it comes to learning about programming-related things; I thought I'd ask a question here and write it all down in the hope to get some pointers/advice from people. Perhaps writing it down helps me put things in perspective for myself aswell. I study Interactive Multimedia Design. This course is based on two things: graphic design on one hand, and web development on the other hand. I have quite a decent knowledge of web-related languages (the usual HTML/JS/PHP) and I'll be getting a course on ASP.NET next year. In my free time, I have learnt how to work with CodeIgniter, aswell as some diving into Ruby (and Rails) and basic iOS programming. In my first year of college I also did a class on Java (19/20 on the end result). This grade doesn't really mean anything though; I have the basics of OOP down but Java-wise, we learnt next to nothing. Considering the time I have been programming in, for example, PHP.. I can't say I'm bad at it. I'm definitely not good or great at it, but I'm decent. My teachers tell me I have the programming thing down. They just tell me I should keep on learning. So that's what I do, and I try to take in as much as possible; however, sometimes I'm unsure where to start and I have this tendency to always doubt myself. Now, for the 'question'. I want to get into iOS programming. I know iOS programming boils down to programming in Cocoa Touch and Objective-C. I also know Obj-C is a superset of C. I have done a class on C a couple of years ago, but I failed miserably. I got stuck at pointers and never really understood them.. Until like a month ago. I suddenly 'got' it. I have been working through a book on Objective-C for a week or so now, and I understand the basics (I'm at like.. chapter 6 or so). However, I keep running into similar problems as the ones I had when I did the C class: I suck at math. No, really. I come from a Latin-Modern Languages background in high school and I had nearly no math classes back then. I wanted to study Computer Science, but I failed there because of the miserable state of my mathematics knowledge. I can't explain why I'm suddenly talking about math here though, because it isn't directly related to programming.. yet it is. For example, the examples in the book I'm reading now are about programming a fraction-calculator. All good, I can do the programming when I get the formulas down.. but it takes me a full day or more to actually get to that point. I also find it hard to come up with ideas for myself. I made one small iOS app the other day and it's just a button / label kind of thing. When I press the button, it generates a random number. That's really all I could come up with. Can you 'learn' that? It probably comes down to creativity, but evidently, I'm not too great at being creative. Are there any sites or resources out there that provide something like a basic list of things you can program when you're just starting out? Maybe I'm focusing on too many things at once. I want to keep my HTML/CSS at a decent level, while learning PHP and CodeIgniter, while diving into Ruby on Rails and learning Objective-C and the iOS SDK at the same time. I just want to be good at something, I guess. The problem is that I can't seem to be happy with my PHP stuff. I want more, something 'harder'; that's why I decided to pick up the iOS thing. Like I said, I have the basics down of a lot of different languages. I can program something simple in Java, in C, in Objective-C as of this week.. but it ends there. 
Mostly because I can't come up with ideas for more complex applications, and also because I just doubt myself: 'Oh, that's too complex, I can never do that'. And then it ends there. To conclude my rant, let me basically rephrase my questions into a 'tl;dr' part. A. I want to get into iOS programming and I have basic knowledge of C/Objective-C. However, I struggle to come up with ideas of my own and implement them and I also suck at math which is something that isn't directly related to, yet often needed while programming. What can I do? B. I have an interest in a lot of different programming languages and I can't stop reading/learning. However, I don't feel like I'm good in anything. Should I perhaps focus on just one language for a year or longer, or keep taking it all in at the same time and hope I'll finally get them all down? C. Are there any resources out there that provide basic ideas of things I can program? I'm thinking about 'simple' command-line applications here to help me while studying C/Obj-C away from the whole iPhone SDK. Like I said, the examples in my book are mainly math-based (fraction calculator) and it's kinda hard. :( Thanks a lot for reading my post. I didn't plan it to be this long but oh well. Thanks in advance for any answers.

    Read the article

  • SQL SERVER – Number-Crunching with SQL Server – Exceed the Functionality of Excel

    - by Pinal Dave
    Imagine this. Your users have developed an Excel spreadsheet that extracts data from your SQL Server database, manipulates that data through the use of Excel formulas and, possibly, some VBA code which is then used to calculate P&L, hedging requirements or even risk numbers. Management comes to you and tells you that they need to get rid of the spreadsheet and that the results of the spreadsheet calculations need to be persisted on the database. SQL Server has a very small set of functions for analyzing data. Excel has hundreds of functions for analyzing data, with many of them focused on specific financial and statistical calculations. Is it even remotely possible that you can use SQL Server to replace the complex calculations being done in a spreadsheet? Westclintech has developed a library of functions that match or exceed the functionality of Excel’s functions and contains many functions that are not available in EXCEL. Their XLeratorDB library of functions contains over 700 functions that can be incorporated into T-SQL statements. XLeratorDB takes advantage of the SQL CLR architecture introduced in SQL Server 2005. SQL CLR permits managed code to be compiled into the database and run alongside built-in SQL Server functions like COUNT or SUM. The Westclintech developers have taken advantage of this architecture to bring robust analytical functions to the database. In our hypothetical spreadsheet, let’s assume that our users are using the YIELD function and that the data are extracted from a table in our database called BONDS. Here’s what the spreadsheet might look like. We go to column G and see that it contains the following formula. Obviously, SQL Server does not offer a native YIELD function. However, with XLeratorDB we can replicate this calculation in SQL Server with the following statement: SELECT *, wct.YIELD(CAST(GETDATE() AS date),Maturity,Rate,Price,100,Frequency,Basis) AS YIELD FROM BONDS This produces the following result. This illustrates one of the best features about XLeratorDB; it is so easy to use. Since I knew that the spreadsheet was using the YIELD function I could use the same function with the same calling structure to do the calculation in SQL Server. I didn’t need to know anything at all about the mechanics of calculating the yield on a bond. It was pretty close to cut and paste. In fact, that’s one way to construct the SQL. Just copy the function call from the cell in the spreadsheet and paste it into SMS and change the cell references to column names. I built the SQL for this query by starting with this. SELECT * ,YIELD(TODAY(),B2,C2,D2,100,E2,F2) FROM BONDS I then changed the cell references to column names. SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) ,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS Finally, I replicated the TODAY() function using GETDATE() and added the schema name to the function name. SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) --,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) ,wct.YIELD(GETDATE(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS Then I am able to execute the statement returning the results seen above. The XLeratorDB libraries are heavy on financial, statistical, and mathematical functions. Where there is an analog to an Excel function, the XLeratorDB function uses the same naming conventions and calling structure as the Excel function, but there are also hundreds of additional functions for SQL Server that are not found in Excel. 
    You can find the functions by opening Object Explorer in SQL Server Management Studio (SSMS) and expanding the Programmability folder under the database where the functions have been installed. The Functions folder expands to show four sub-folders: Table-valued Functions, Scalar-valued Functions, Aggregate Functions, and System Functions. You can expand any of the first three folders to see the XLeratorDB functions. Since the wct.YIELD function is a scalar function, we open the Scalar-valued Functions folder, scroll down to the wct.YIELD function and click the plus sign (+) to display the input parameters. The functions are also IntelliSense-enabled, with the input parameters displayed directly in the query tab. The Westclintech website contains documentation for all the functions, including examples that can be copied directly into a query window and executed. There are also more than one hundred articles on the site which go into more detail about how some of the functions work and demonstrate some of the extensive business processes that can be done in SQL Server using XLeratorDB functions and some T-SQL. XLeratorDB is organized into libraries: finance, statistics, math, strings, engineering, and financial options. There is also a windowing library for SQL Server 2005, 2008, and 2012 which provides functions for calculating things like running and moving averages (which were introduced in SQL Server 2012), FIFO inventory calculations, financial ratios and more, without having to use triangular joins. To get started you can download the XLeratorDB 15-day free trial from the Westclintech web site; it is a fully functioning, unrestricted version of the software, and if you need more than 15 days to evaluate it you can simply download another 15-day free trial. XLeratorDB is an easy and cost-effective way to start adding sophisticated data analysis to your SQL Server database without having to know anything more than T-SQL. Get XLeratorDB Today and Now! Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

< Previous Page | 7 8 9 10 11 12 13  | Next Page >