Search Results

Search found 1145 results on 46 pages for 'numeric'.

Page 11/46 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • SQL SERVER – GUID vs INT – Your Opinion

    - by pinaldave
    I think the title makes it clear what I am going to write in this post. This is an age-old problem, and I want to compile a list stating the advantages and disadvantages of using GUID and INT as a Primary Key or Clustered Index or both (the usual case). Let me start the list by suggesting one advantage and one disadvantage for each. INT Advantage: Numeric values (and specifically integers) are better for performance when used in joins, indexes and conditions. Numeric values are easier for application users to understand if they are displayed. Disadvantage: If your table is large, it is quite possible to run out of values; beyond some numeric value there will be no additional identity to use. GUID Advantage: Unique across the server. Disadvantage: String values are not as optimal as integer values for performance when used in joins, indexes and conditions. More storage space is required than for INT. Please note that I am looking to create a list of generic comparisons. There can be special cases where the stated information does not hold; feel free to comment on those. Please leave your opinion and advice in the comment section. I will compile a final list and update this blog after a week. I will also give due credit by listing your name in the post. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Constraint and Keys, SQL Data Storage, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
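    For illustration, here is a minimal T-SQL sketch (not from the original post; table and column names are hypothetical) of the two keying styles being compared:

        -- INT identity key: compact (4 bytes), monotonically increasing,
        -- but the range is finite and values are only unique per table.
        CREATE TABLE OrdersInt (
            OrderID INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
            OrderDate DATETIME NOT NULL
        );

        -- GUID key: 16 bytes, unique across the server, but random values
        -- fragment a clustered index on insert.
        CREATE TABLE OrdersGuid (
            OrderID UNIQUEIDENTIFIER DEFAULT NEWID() PRIMARY KEY CLUSTERED,
            OrderDate DATETIME NOT NULL
        );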

    Read the article

  • Converting raw data type to enumerated type

    - by Jim Lahman
    There are times when an enumerated type is preferred over the raw data type. An example is when we need to check the health of x-ray gauges in use on a production line. Rather than using raw values like 0, 1 and 2, we can use an enumerated type:

        /// <summary>
        /// POR Healthy status indicator
        /// </summary>
        /// <remarks>The healthy status is for each POR x-ray gauge; each has its own status.</remarks>
        [Flags]
        public enum POR_HEALTH : short
        {
            /// <summary>
            /// POR1 healthy status indicator
            /// </summary>
            POR1 = 0,
            /// <summary>
            /// POR2 healthy status indicator
            /// </summary>
            POR2 = 1,
            /// <summary>
            /// Both POR1 and POR2 healthy status indicator
            /// </summary>
            BOTH = 2
        }

    By using the [Flags] attribute, we are treating the enumerated type as a bit mask. We can then use bitwise operations such as AND, OR, NOT, etc. Now, when we want to check the health of a specific gauge, we would rather use the name of the gauge than the numeric identity; it makes for better reading and programming practice. To translate the numeric identity to the enumerated value, we use the Parse method of the Enum class:

        POR_HEALTH GaugeHealth = (POR_HEALTH)Enum.Parse(typeof(POR_HEALTH), XrayMsg.Gauge_ID.ToString());

    The Parse method creates an instance of the enumerated type. Now, we can use the name of the gauge rather than the numeric identity:

        if (GaugeHealth == POR_HEALTH.POR1 || GaugeHealth == POR_HEALTH.BOTH)
        {
            XrayHealthyTag.Name = Properties.Settings.Default.POR1XRayHealthyTag;
        }
        else if (GaugeHealth == POR_HEALTH.POR2)
        {
            XrayHealthyTag.Name = Properties.Settings.Default.POR2XRayHealthyTag;
        }
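    Note that with the values above, [Flags] cannot actually combine members bitwise: POR1 = 0 cannot be tested with AND, and BOTH = 2 is not POR1 | POR2. A hedged sketch of a power-of-two layout that does behave as a bit mask (an alternative, not the author's code):

        [Flags]
        public enum POR_HEALTH : short
        {
            None = 0,              // no gauge healthy
            POR1 = 1,              // bit 0: POR1 healthy
            POR2 = 2,              // bit 1: POR2 healthy
            BOTH = POR1 | POR2     // both bits set
        }

        // (health & POR_HEALTH.POR1) != 0 now tests POR1 independently of POR2.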

    Read the article

  • System.InvalidOperationException with SqlBulkCopy

    - by Pandiya Chendur
    I got the following error when executing a bulk copy: System.InvalidOperationException - "The given value of type String from the data source cannot be converted to type decimal of the specified target column." I use the following code:

        DataTable empTable = DataTemplate.GetEmployees();
        DataRow row;
        for (int i = 0; i < gv.Rows.Count; i++)
        {
            row = empTable.NewRow();
            string empName = gv.DataKeys[i].Values[0].ToString();    // first key
            string hourSalary = gv.DataKeys[i].Values[1].ToString(); // second key
            row["Emp_Name"] = empName;
            row["Hour_Salary"] = Convert.ToDecimal(hourSalary);
            row["Advance_amount"] = Convert.ToDecimal(0);
            row["Created_Date"] = Convert.ToDateTime(System.DateTime.Now.ToString());
            row["Created_By"] = Convert.ToInt64(1);
            row["Is_Deleted"] = Convert.ToInt64(0);
            empTable.Rows.Add(row);
        }
        InsertintoEmployees(empTable, "Employee");

    My SQL data types for the above fields are:

        Emp_Name nvarchar(50)
        Hour_Salary numeric(18, 2)
        Advance_amount numeric(18, 2)
        Created_Date datetime
        Created_By numeric(18, 0)
        Is_Deleted numeric(18, 0)

    I don't know what I am doing wrong.
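    A common cause of this exception is that SqlBulkCopy maps columns by ordinal position when no mappings are given, so a string column in the DataTable can land on a decimal column in the table. A hedged sketch (not from the original question; the connection string is a placeholder) of mapping columns by name:

        using (var bulkCopy = new SqlBulkCopy("Server=.;Database=HR;Integrated Security=true"))
        {
            bulkCopy.DestinationTableName = "Employee";
            // Map by name so the DataTable column order no longer has to match the table.
            bulkCopy.ColumnMappings.Add("Emp_Name", "Emp_Name");
            bulkCopy.ColumnMappings.Add("Hour_Salary", "Hour_Salary");
            bulkCopy.ColumnMappings.Add("Advance_amount", "Advance_amount");
            bulkCopy.ColumnMappings.Add("Created_Date", "Created_Date");
            bulkCopy.ColumnMappings.Add("Created_By", "Created_By");
            bulkCopy.ColumnMappings.Add("Is_Deleted", "Is_Deleted");
            bulkCopy.WriteToServer(empTable);
        }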

    Read the article

  • Implementing a 1 to many relationship with SQLite

    - by Patrick
    I have the following schema implemented successfully in my application. The application connects desk unit channels to IO unit channels. The DeskUnits and IOUnits tables are basically just a list of desk/IO units and the number of channels on each. For example, a desk could be 4 or 12 channels.

        CREATE TABLE DeskUnits (Name TEXT, NumChannels NUMERIC);
        CREATE TABLE IOUnits (Name TEXT, NumChannels NUMERIC);
        CREATE TABLE RoutingTable (DeskUnitName TEXT, DeskUnitChannel NUMERIC, IOUnitName TEXT, IOUnitChannel NUMERIC);

    The RoutingTable 'table' then connects each DeskUnit channel to an IOUnit channel. For example, the DeskUnit called "Desk1", channel 1, may route to the IOUnit named "IOUnit1", channel 2, etc. So far I hope this is pretty straightforward and understandable. The problem is, however, this is a strictly 1 to 1 relationship: any DeskUnit channel can route to only 1 IOUnit channel. Now, I need to implement a 1 to many relationship, where any DeskUnit channel can connect to multiple IOUnit channels. I realise I may have to rearrange the tables completely, but I am not sure of the best way to go about this. I am fairly new to SQLite and databases in general, so any help would be appreciated. Thanks Patrick
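    One observation, as a hedged sketch (not from the original question): RoutingTable is already shaped like a junction table, so a 1-to-many routing can be expressed simply by storing one row per connection, as long as no UNIQUE constraint restricts (DeskUnitName, DeskUnitChannel) to a single row:

        -- "Desk1" channel 1 feeds two different IO channels:
        INSERT INTO RoutingTable VALUES ('Desk1', 1, 'IOUnit1', 2);
        INSERT INTO RoutingTable VALUES ('Desk1', 1, 'IOUnit2', 5);

        -- Look up every IO channel a given desk channel routes to:
        SELECT IOUnitName, IOUnitChannel
        FROM RoutingTable
        WHERE DeskUnitName = 'Desk1' AND DeskUnitChannel = 1;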

    Read the article

  • Program repeats each time a character is scanned. How to stop it?

    - by ZaZu
    Hello there, I have a program with this code:

        #include <stdio.h>

        int main(void)
        {
            char input;   /* %c needs a char, not an int */
            int g;
            do {
                printf("Choose a numeric value");
                printf(">");
                scanf("\n%c", &input);
                g = input - '0';
            } while ((g >= -16 && g <= -1) || (g >= 10 && g <= 42) || (g >= 43 && g <= 79));
            return 0;
        }

    It basically uses ASCII manipulation to allow the program to accept numbers only. '0' has the value 48 by default; the ASCII value minus 48 gives the ranges of numbers tested in the while statement. Anyway, whenever a user inputs numbers AND alphabetic characters, such as abr39293afakvmienb23, the program ignores a, b, r, but takes '3' as the first input. For a, b and r, the code inside the do loop repeats. So for the above example, I get:

        Choose a numeric value
        >Choose a numeric value>
        Choose a numeric value
        >3

    Is there a way I can stop this??? I tried using \n%c to scan the character and account for whitespace, but that didn't work :( Please help, thank you very much!
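    A hedged sketch of one possible fix (not from the original thread): discard the rest of the input line after reading one character, so leftover characters cannot re-trigger the prompt.

        #include <stdio.h>

        int main(void)
        {
            char input;
            int c, g;
            do {
                printf("Choose a numeric value>");
                scanf(" %c", &input);
                /* flush whatever else is buffered on this line */
                while ((c = getchar()) != '\n' && c != EOF)
                    ;
                g = input - '0';
            } while (g < 0 || g > 9);  /* simplified check: repeat until a digit; adapt to the original ranges as needed */
            return 0;
        }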

    Read the article

  • Sending one record from cursor to another function Postgres

    - by PylonsN00b
    FYI: I am completely new to using cursors... So I have one function that returns a cursor:

        CREATE FUNCTION get_all_product_promos(refcursor, cursor_object_id integer)
        RETURNS refcursor AS '
        BEGIN
            OPEN $1 FOR
                SELECT *
                FROM promos prom1
                JOIN promo_objects ON (prom1.promo_id = promo_objects.promotion_id)
                WHERE prom1.active = true AND now() BETWEEN prom1.start_date AND prom1.end_date
                    AND promo_objects.object_id = cursor_object_id
                UNION
                SELECT prom2.promo_id
                FROM promos prom2
                JOIN promo_buy_objects ON (prom2.promo_id = promo_buy_objects.promo_id)
                LEFT JOIN promo_get_objects ON prom2.promo_id = promo_get_objects.promo_id
                WHERE (prom2.buy_quantity IS NOT NULL OR prom2.buy_quantity > 0)
                    AND prom2.active = true AND now() BETWEEN prom2.start_date AND prom2.end_date
                    AND promo_buy_objects.object_id = cursor_object_id;
            RETURN $1;
        END;
        ' LANGUAGE plpgsql;

    Then, in another function, I call it and need to process it:

        ...
        --Get the promotions from the cursor
        SELECT get_all_product_promos('promo_cursor', this_object_id);
        updated := FALSE;
        IF FOUND THEN
            --Then loop through your results
            LOOP
                FETCH promo_cursor INTO this_promotion;
                --Perform comparison logic - this is necessary as this logic is used in other contexts from other functions
                SELECT * INTO best_promo_results
                FROM get_best_product_promos(this_promotion, this_object_id, get_free_promotion,
                     get_free_promotion_value, current_promotion_value, current_promotion);
        ...

    So the idea here is to select from the cursor, loop using FETCH (FETCH NEXT is assumed correct?) and put the fetched record into this_promotion, then send the record in this_promotion to another function. I can't figure out what type to declare this_promotion as in get_best_product_promos. Here is what I have:

        CREATE OR REPLACE FUNCTION get_best_product_promos(this_promotion record, this_object_id integer,
            get_free_promotion integer, get_free_promotion_value numeric(10,2),
            current_promotion_value numeric(10,2), current_promotion integer) RETURNS...

    It tells me: ERROR: plpgsql functions cannot take type record. OK, first I tried:

        CREATE OR REPLACE FUNCTION get_best_product_promos(this_promotion get_all_product_promos, this_object_id integer,
            get_free_promotion integer, get_free_promotion_value numeric(10,2),
            current_promotion_value numeric(10,2), current_promotion integer) RETURNS...

    because I saw syntax in the Postgres docs showing a function created with an input parameter typed after a table name. That works, but it has to be a table name, not a function :( I know I am so close; I was told to use cursors to pass records around, so I studied up. Please help.
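    A hedged sketch of one way around "cannot take type record" (not from the original post; the type name and column list are hypothetical and would need to match what the cursor actually returns): declare a composite type and use it as the parameter type.

        -- Composite type describing the columns the cursor yields (illustrative subset):
        CREATE TYPE promo_row AS (
            promo_id  integer,
            object_id integer
        );

        CREATE OR REPLACE FUNCTION get_best_product_promos(
            this_promotion promo_row,
            this_object_id integer
        ) RETURNS numeric LANGUAGE plpgsql AS $$
        BEGIN
            -- comparison logic would go here; fields are accessed as this_promotion.promo_id etc.
            RETURN this_promotion.promo_id;
        END;
        $$;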

    Read the article

  • How to reset keyboard for an entry field?

    - by David.Chu.ca
    I am using the tag field as a flag on text fields and text view fields for auto-jumping to the next field:

        - (BOOL)findNextEntryFieldAsResponder:(UIControl *)field
        {
            BOOL retVal = NO;
            for (UIView *aView in mEntryFields) {
                if (aView.tag == (field.tag + 1)) {
                    [aView becomeFirstResponder];
                    retVal = YES;
                    break;
                }
            }
            return retVal;
        }

    It works fine in terms of auto-jumping to the next field when the Next key is pressed. However, in my case the keyboards differ between some fields. For example, one field is numeric & punctuation, and the next one is default (alphabetic keys). The numeric & punctuation keyboard is OK, but the next field stays with the same layout. It requires the user to press 123 to go back to the ABC keyboard. I am not sure if there is any way to reset the keyboard for a field to the keyboard defined in the xib? Not sure if there is any API available? I guess I have to do something in the following delegate?

        - (void)textFieldDidBeginEditing:(UITextField *)textField
        {
            // reset the keyboard to request a specific keyboard view?
            ....
        }

    OK. I found a solution close to my case by slatvik:

        - (void)textFieldDidBeginEditing:(UITextField *)textField
        {
            textField.keyboardType = UIKeyboardTypeAlphabet;
        }

    However, when the previous text field is numeric, the keyboard stays numeric when auto-jumped to the next field. Is there any way to set the keyboard to alphabet mode?
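    A hedged sketch of one possible approach (not from the original thread): if the keyboard type is changed while the field is already first responder, the keyboard is not rebuilt until the input views are reloaded, so forcing a reload may help.

        - (void)textFieldDidBeginEditing:(UITextField *)textField
        {
            textField.keyboardType = UIKeyboardTypeDefault;
            // Ask UIKit to rebuild the keyboard for the current first responder.
            [textField reloadInputViews];
        }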

    Read the article

  • Can MySQL filter by date if the date is stored as text? e.g. "02/10/1984"

    - by Roeland
    Hello! I am trying to modify an app for a client which already has a database of over 1000 items. The dates are stored as text in the database in the format "02/10/1984". The system allows you to add and remove fields in the catalog dynamically, and it also allows specific fields to be enabled for the advanced search. The problem is that it wasn't designed with dates in mind, so when I set a field as a date and try to search by a range, the query ends up doing AND (cfv0.value >= 01/02/2004 AND cfv0.value <= 05/03/2008). I can make the date range passed in a numeric time value. Is there a way that, when sending the query, it takes the text fields (with the date) and converts them to a numeric time value, so at that point I am basically just comparing numbers, which would work fine? I do not have the option to change all the current dates to numeric values due to the way the dynamic fields are set up. Thanks guys!
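    A hedged sketch (not from the original question; assumes the text format is month/day/year and the table alias from the quoted query): MySQL's STR_TO_DATE can convert the stored text to a real date inside the query, so the comparison happens on dates rather than strings.

        SELECT *
        FROM custom_field_values cfv0   -- hypothetical table name behind the cfv0 alias
        WHERE STR_TO_DATE(cfv0.value, '%m/%d/%Y')
              BETWEEN '2004-01-02' AND '2008-05-03';

    Note that wrapping the column in a function prevents index use, which may start to matter beyond the current ~1000 rows.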

    Read the article

  • R - Specifying colClasses in the read.csv

    - by Derek
    Hi, I am trying to specify the colClasses option in the read.csv function in R. In my data, the first column "time" is basically a character vector while the rest of the columns are numeric.

        data <- read.csv("test.csv", comment.char="",
                         colClasses=c(time="character", "numeric"),
                         strip.white=FALSE)

    In the above command, I would want R to read in the "time" column as "character" and the rest as numeric. Although the "data" variable did have the correct result after the command completed, R returned the following warnings. I am wondering how I could fix these warnings?

        Warning messages:
        1: In read.table(file = file, header = header, sep = sep, quote = quote, :
           not all columns named in 'colClasses' exist
        2: In tmp[i[i > 0L]] <- colClasses :
           number of items to replace is not a multiple of replacement length

    Thanks in advance, Derek
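    A hedged sketch of one possible fix (not from the original question; assumes six columns total, an invented count): supply an unnamed class for every column instead of mixing one named entry with unnamed ones, which is what triggers both warnings.

        # "character" for the first column, "numeric" for the remaining five
        data <- read.csv("test.csv",
                         colClasses = c("character", rep("numeric", 5)),
                         strip.white = FALSE)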

    Read the article

  • DBCC CHECKDB WITH DATA_PURITY gives out of range error

    - by Mark Allison
    Hi there, I have restored a SQL Server 2000 database onto SQL Server 2005 and then run DBCC CHECKDB WITH DATA_PURITY, and I get this error:

        Msg 2570, Level 16, State 3, Line 2
        Page (1:19558), slot 13 in object ID 181575685, index ID 1, partition ID 293374720802816,
        alloc unit ID 11899744092160 (type "In-row data"). Column "NumberOfShares" value is out of
        range for data type "numeric". Update column to a legal value.

    The column NumberOfShares is a numeric(19,6) data type. If I run the following:

        SELECT MAX(NumberOfShares) FROM AUDIT_Table
        SELECT MIN(NumberOfShares) FROM AUDIT_Table

    I get:

        22678647.839110
        -1845953000.000000

    These values are inside the bounds of a numeric(19,6), so I'm not sure why the DBCC check fails. Any ideas how to find out why it fails? Do I need to use DBCC PAGE? How would you troubleshoot this? Thanks, Mark.
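    A hedged sketch of how the flagged page could be inspected (DBCC PAGE is undocumented, and the database name here is a placeholder):

        -- Send DBCC PAGE output to the client instead of the error log
        DBCC TRACEON(3604);
        -- Arguments: database, file id, page id, print option (3 = row details)
        DBCC PAGE('MyRestoredDb', 1, 19558, 3);

    Slot 13 in the output can then be checked for a stored value whose raw bytes fall outside what numeric(19,6) allows; MAX/MIN may not surface such a value because the aggregates can interpret the bytes differently from the purity check.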

    Read the article

  • Java - JPA - Generators - @SequenceGenerator

    - by Yatendra Goel
    I am learning JPA and have confusion about the @SequenceGenerator annotation. To my understanding, it automatically assigns a value to numeric identity fields/properties of an entity. Q1. Does this sequence generator make use of the database's capability to generate increasing numeric values, or does it generate the number on its own? Q2. If JPA uses the database's auto-increment feature, will it work with datastores that don't have an auto-increment feature? Q3. If JPA generates the numeric value on its own, how does the JPA implementation know which value to generate next? Does it consult the database first to see what value was stored last, so as to generate the value (last + 1)? Q4. Please also throw some light on the sequenceName and allocationSize properties of the @SequenceGenerator annotation.
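    For reference, a minimal sketch of the standard JPA usage (not from the original question; the sequence and entity names are hypothetical): @SequenceGenerator names a database sequence, and allocationSize lets the provider reserve a block of ids per round-trip.

        import javax.persistence.*;

        @Entity
        public class Order {

            @Id
            @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "order_gen")
            @SequenceGenerator(
                name = "order_gen",            // generator name referenced above
                sequenceName = "order_id_seq", // sequence object in the database
                allocationSize = 50)           // ids pre-allocated per sequence call
            private Long id;
        }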

    Read the article

  • Unique constraint on more than 10 columns

    - by tk
    I have a time-series simulation model which has more than 10 input variables. The number of distinct simulation instances will be more than 1 million, and each simulation instance generates a few output rows every day. To save the simulation results in a relational database, I designed tables like this:

        Table SimulationModel {
            simul_id : integer (primary key),
            input0 : string or numeric,
            input1 : string or numeric,
            ...}

        Table SimulationOutput {
            dt : DateTime (primary key),
            simul_id : integer (primary key),
            output0 : numeric,
            ...}

    My question is: is it fine to put a unique constraint on all of the input columns of the SimulationModel table? If it is not a good idea, what other options do I have to make sure each model is unique?
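    A hedged sketch of the two usual options (not from the original question; syntax shown is PostgreSQL-flavoured and the column list is abbreviated):

        -- Option 1: a straight multi-column unique constraint.
        ALTER TABLE SimulationModel
            ADD CONSTRAINT uq_simulation_inputs UNIQUE (input0, input1, input2 /* ... all inputs */);

        -- Option 2: a unique index over a hash of the inputs, which keeps the index key
        -- small when there are many wide columns (NULL inputs need coalescing first,
        -- and hash collisions are theoretically possible).
        CREATE UNIQUE INDEX uq_simulation_inputs_hash
            ON SimulationModel (md5(input0::text || '|' || input1::text || '|' || input2::text));

    Option 1 is simpler and self-documenting; option 2 trades that for a compact index when the concatenated inputs are long.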

    Read the article

  • Exporting to CSV with dynamic field type handling

    - by serhio
    I have to do an export from DB to CSV:

        field; field; field...

    There are 3 types of fields: Alpha, Numeric and Bool, represented as "alphaValue", intValue and True/False. I am trying to encapsulate this in a fields collection so that on export: if Alpha, wrap the value in quotes; if Bool, write True/False; if Numeric, leave it as is. I am trying to build a CsvField class:

        Public Structure?Class CsvField(Of T As ???)
        End Structure

        Enum FieldType
            Alpha
            Bool
            Numeric
        End Enum

    Any suggestions welcomed.
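    A hedged sketch of one possible shape (not from the original question): a plain class with a type enum rather than generics, since it is the formatting rule, not the CLR type, that varies per field.

        Public Enum FieldType
            Alpha
            Bool
            Numeric
        End Enum

        Public Class CsvField
            Public Property Value As Object
            Public Property Type As FieldType

            ' Render the field according to its declared CSV type.
            Public Function ToCsv() As String
                Select Case Type
                    Case FieldType.Alpha
                        Return """" & CStr(Value) & """"
                    Case FieldType.Bool
                        Return If(CBool(Value), "True", "False")
                    Case Else
                        Return CStr(Value)
                End Select
            End Function
        End Class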

    Read the article

  • user specific persistent cck types in drupal

    - by Fion
    Is there a way to do the following? We need to have a few basic cck types that will allow users to track their chosen parameters over a length of time. For example, one cck type may be called "numeric tracker". It would have a field for labeling the type and a field for entering a number. User A might label one numeric tracker "miles driven"; then each day user A would use this type to enter a number. User B might label a numeric tracker "hours slept"; each day user B would enter a number. Is there a way to use cck in this way?

    Read the article

  • Passing report values to a query

    - by Beavis
    I'm a novice with Microsoft Access, as my background is mostly .NET. I'm sure what I'm trying to accomplish is dead simple, but I need some direction. I have a report and a query. The query returns a single numeric value based on a single numeric criterion:

        SELECT total FROM table WHERE id = [topic]

    I have placed a text box on my report so I can feed the id to this query and in return get the total. It seems like DLookUp is what I want, but no matter how I construct it, I get "#Error" in the text box when I run the report. Currently my DLookUp looks like this (I've just hard-coded the value for simplicity):

        =DLookUp("[total]","myquery","[topic] = 3")

    How can I pass a value from a field on my report to a query so I can return the query's single numeric value? Thanks.
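    A hedged sketch (not from the original question; assumes the report has a field or text box named [topic] holding the id): the criteria string can be built by concatenating the control's value.

        =DLookUp("[total]", "myquery", "[topic] = " & [topic])

    One caveat: if the underlying query itself still contains the [topic] parameter, DLookUp cannot supply it; the query would need to be parameter-free, with the filtering done in the criteria argument as above.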

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand whether this new processor lives up to its promises. Several tests were performed, mainly focused on:

    - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
    - Overall throughput of the SPARC T4-1 server using multiple threads

    The tests consisted of reading large amounts of data --tens of gigabytes--, processing them and writing them back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided in two ZFS pools for read and write operations.

    Multi-thread: Testing throughput on the Oracle T4-1

    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS as the filesystem and volume manager. Each thread read a dedicated 1 GB file containing 12.5M lines with the following structure:

        customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
        1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
        2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
        3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
        4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
        [...]

    The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache, a.k.a. the L2ARC: once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck; having a good IO subsystem is thus key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline. For the database load tests, only the best IO configuration --using external storage devices-- was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

    Single-thread: Testing the single-thread performance

    Seven different tests were performed on both servers.
    Given the fact that only one thread, thus one file, was read, no IO bottleneck was involved, all data being served from the ZFS cache.

    1. Read File → Filter → Write File: Read file, filter data, write the filtered data to a new file. The filter is set on the "Status" column: only lines with status set to "A" are selected. This limits each output file to about 500 MB.
    2. Read File → Load Database Table: Read file, insert into a single Oracle table.
    3. Average: Read file, compute the average of a numeric column, write the result to a new file.
    4. Division & Square Root: Read file, perform a division and square root on a numeric column, write the resulting data to a new file.
    5. Oracle DB Dump: Dump the content of an Oracle table (12.5M rows) into a CSV file.
    6. Transform: Read file, transform, write the resulting data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    7. Sort: Read file, sort a numeric and an alphanumeric column, write the resulting data to a new file.

    The following table presents the final results of the tests. The throughput unit is thousands of lines per second processed (K lines/second). Improvement is the % of improvement between the T5140 and the T4-1.

        Test                      T4-1 (Time s.)   T5140 (Time s.)   Improvement   T4-1 (Throughput)   T5140 (Throughput)
        Read/Filter/Write              125               806             645%             100                  16
        Read/Load Database             195              1111             570%              64                  11
        Average                         96               557             580%             130                  22
        Division & Square Root         161              1054             655%              78                  12
        Oracle DB Dump                 164               945             576%              76                  13
        Transform                      159              1124             707%              79                  11
        Sort                           251              1336             532%              50                   9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way towards filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!

    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row-by-row insertion in an Oracle DB is faster, with more than 650,000 rows per second, without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.

    Read the article

  • SQL SERVER – Introduction to Function SIGN

    - by pinaldave
    Yesterday I received an email from a friend asking how the SIGN function works. Well, SIGN is a very fundamental function. It returns 1, -1 or 0: if your value is negative it returns -1, if it is positive it returns +1, and if it is zero it returns 0. Let us start with a simple small example:

        DECLARE @IntVal1 INT, @IntVal2 INT, @IntVal3 INT
        DECLARE @NumVal1 DECIMAL(4,2), @NumVal2 DECIMAL(4,2), @NumVal3 DECIMAL(4,2)
        SET @IntVal1 = 9;
        SET @IntVal2 = -9;
        SET @IntVal3 = 0;
        SET @NumVal1 = 9.0;
        SET @NumVal2 = -9.0;
        SET @NumVal3 = 0.0;
        SELECT SIGN(@IntVal1) IntVal1, SIGN(@IntVal2) IntVal2, SIGN(@IntVal3) IntVal3
        SELECT SIGN(@NumVal1) NumVal1, SIGN(@NumVal2) NumVal2, SIGN(@NumVal3) NumVal3

    The above script gives us the following result set. You will notice that when the value is positive the function returns a positive value, and when the value is negative it returns a negative value. You will also notice that when the data type is INT the return value is INT, and when the value passed to the function is numeric the result matches it. Not every data type is compatible with this function. Here is a quick lookup of the return types:

        bigint               -> bigint
        int/smallint/tinyint -> int
        money/smallmoney     -> money
        numeric/decimal      -> numeric/decimal
        everybody else       -> float

    The best illustration of this function's usage is that it saves you from writing a CASE statement. Here is an example of the CASE statement usage and the same logic replaced with the SIGN function:

        USE tempdb
        GO
        CREATE TABLE TestTable (Date1 SMALLDATETIME, Date2 SMALLDATETIME)
        INSERT INTO TestTable (Date1, Date2)
        SELECT '2012-06-22 16:15', '2012-06-20 16:15'
        UNION ALL
        SELECT '2012-06-24 16:15', '2012-06-22 16:15'
        UNION ALL
        SELECT '2012-06-22 16:15', '2012-06-22 16:15'
        GO
        -- Using a CASE statement
        SELECT CASE
                   WHEN DATEDIFF(d, Date1, Date2) > 0 THEN 1
                   WHEN DATEDIFF(d, Date1, Date2) < 0 THEN -1
                   ELSE 0
               END AS Col
        FROM TestTable
        GO
        -- Using the SIGN function
        SELECT SIGN(DATEDIFF(d, Date1, Date2)) AS Col
        FROM TestTable
        GO
        DROP TABLE TestTable
        GO

    This was an interesting blog post for me to write. Let me know your opinion. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Tips for adapting Date table to Power View forecasting #powerview #powerbi

    - by Marco Russo (SQLBI)
    During the keynote of the PASS Business Analytics Conference, Amir Netz presented the new forecasting capabilities in Power View for Office 365. I immediately tried the new feature (which was immediately available, a welcome surprise in a Microsoft announcement for a new release) and I had several issues trying to use existing data models. The forecasting has a few requirements that are not compatible with the “best practices” commonly used for a calendar table until this announcement. For example, if you have a Year-Month-Day hierarchy and you want to display a line chart aggregating data at the month level, you use a column containing month and year as a string (e.g. May 2014) sorted by a numeric column (such as 201405). Such a column cannot be used in the x-axis of a line chart for forecasting, because you need a date or numeric column. There are also other requirements and I wrote the article Prepare Data for Power View Forecasting in Power BI on SQLBI, describing how to create columns that can be used with the new forecasting capabilities in Power View for Office 365.
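    A hedged sketch of the kind of column the article describes (not taken from it; the table and column names are hypothetical DAX): a real date column representing each month lets the line chart keep a date-typed x-axis that forecasting accepts.

        Month Start = DATE ( YEAR ( 'Date'[Date] ), MONTH ( 'Date'[Date] ), 1 )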

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of this project. To re-iterate: does it seem sane to break 1NF in this way? I am not looking for debugging of the code so much as for where people see problems this might lead.

    The Problem

    In double-entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex.

    The Write Model

    The journal entries are append-only. There is a possibility we will add a delete model, but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated, but updates and deletes do not for this task.

    My Proposed Solution

    My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.

        CREATE TABLE journal_line (
            entry_id bigserial primary key,
            account_id int not null references account(id),
            journal_entry_id bigint not null, -- adding references later
            amount numeric not null
        );

    I would then add "table methods" to extract debits and credits for reporting purposes:

        CREATE OR REPLACE FUNCTION debits(journal_line)
        RETURNS numeric LANGUAGE sql IMMUTABLE AS $$
            SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END;
        $$;

        CREATE OR REPLACE FUNCTION credits(journal_line)
        RETURNS numeric LANGUAGE sql IMMUTABLE AS $$
            SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END;
        $$;

    Then the journal entry table (simplified for this example):

        CREATE TABLE journal_entry (
            entry_id bigserial primary key, -- no natural keys :-(
            journal_id int not null references journal(id),
            date_posted date not null,
            reference text not null,
            description text not null,
            journal_lines journal_line[] not null
        );

    Then a table method and check constraint:

        CREATE OR REPLACE FUNCTION running_total(journal_entry)
        RETURNS numeric LANGUAGE sql IMMUTABLE AS $$
            SELECT sum(amount) FROM unnest($1.journal_lines);
        $$;

        ALTER TABLE journal_entry ADD CHECK ((journal_entry.running_total) = 0);

        ALTER TABLE journal_line ADD FOREIGN KEY (journal_entry_id) REFERENCES journal_entry(entry_id);

    And finally we'd have a breakout trigger:

        CREATE OR REPLACE FUNCTION je_breakout()
        RETURNS TRIGGER LANGUAGE plpgsql AS $$
        BEGIN
            IF TG_OP = 'INSERT' THEN
                INSERT INTO journal_line (journal_entry_id, account_id, amount)
                SELECT NEW.entry_id, account_id, amount
                FROM unnest(NEW.journal_lines);
                RETURN NEW;
            ELSE
                RAISE EXCEPTION 'Operation Not Allowed';
            END IF;
        END;
        $$;

    And finally:

        CREATE TRIGGER je_breakout_trigger
        AFTER INSERT OR UPDATE OR DELETE ON journal_entry
        FOR EACH ROW EXECUTE PROCEDURE je_breakout();

    Of course the example above is simplified. There will be a status table that will track approval status, allowing for separation of duties, etc. However, the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane?

    Standard Solutions?

    In getting to this point I have to say I have looked at four different current ERP solutions to this problem:

    1. Represent every line item as a debit and a credit against different accounts.
    2. Use foreign keys against the line item table to enforce an eventual running total of 0.
    3. Use constraint triggers in PostgreSQL.
    4. Force all validation solely through the app logic.

    My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer-transparent, and so it strikes me as difficult to work with in the future. The second strikes me as very complex, requiring a series of constraints and foreign keys against self to make work, and therefore hard to sort out, at least in my mind, and thus hard to work with. The fourth could be done, as we force all access through stored procedures anyway, and it is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomalies rather than solving them. If this is a solved problem, it isn't the case that everyone agrees on the solution...

    Read the article

  • Unobtrusive Maximum Input Lengths with JQuery and FluentValidation

    - by Steve Wilkes
    If you use FluentValidation and set a maximum length for a string or a maximum value for a numeric property, JQuery validation is used to show an error message when the user inputs too many characters or a numeric value which is too big. On a recent project we wanted to use the input's maxlength attribute to prevent a user from entering too many characters, rather than cure the problem with an error message, so I added this JQuery to add maxlength attributes based on JQuery validation's data- attributes:

        $(function () {
            $("input[data-val-range-max],input[data-val-length-max]").each(function (i, e) {
                var input = $(e);
                var maxlength = input.is("[data-val-range-max]")
                    ? input.data("valRangeMax").toString().length
                    : input.data("valLengthMax");
                input.attr("maxlength", maxlength);
            });
        });

    Presto!
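    For context, a hedged sketch of the kind of validator that emits those attributes (not from the original post; class and property names are invented): FluentValidation rules like these surface as data-val-length-max and data-val-range-max on the rendered inputs when MVC's unobtrusive validation is enabled.

        public class CustomerValidator : AbstractValidator<Customer>
        {
            public CustomerValidator()
            {
                // emits data-val-length-max="50" on the Name input
                RuleFor(c => c.Name).Length(0, 50);
                // emits data-val-range-max="120" on the Age input
                RuleFor(c => c.Age).InclusiveBetween(0, 120);
            }
        }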

    Read the article

  • Multiple vulnerabilities in Firefox web browser

    - by chandan
    CVE ID          Description                                                                                        CVSSv2 Base Score
    CVE-2011-3062   Numeric Errors vulnerability                                                                             6.8
    CVE-2012-0467   Denial of service (DoS) vulnerability                                                                   10.0
    CVE-2012-0468   Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability                   10.0
    CVE-2012-0469   Resource Management Errors vulnerability                                                                10.0
    CVE-2012-0470   Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability                   10.0
    CVE-2012-0471   Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability       4.3
    CVE-2012-0473   Numeric Errors vulnerability                                                                             5.0
    CVE-2012-0474   Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability       4.3
    CVE-2012-0477   Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability       4.3
    CVE-2012-0478   Permissions, Privileges, and Access Controls vulnerability                                               9.3
    CVE-2012-0479   Identity spoofing vulnerability                                                                          4.3

    Component: Firefox web browser. Product and Resolution: Solaris 11 11/11 SRU 9.5; Solaris 10 SPARC: 145080-11, X86: 145081-10.

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • How to reproduce the behavior of Mac OS X's dead keys on Windows 7?

    - by Pascal Qyy
    I'm French, but I've chosen a QWERTY keyboard for my MacBook Pro for many reasons: first of all, the AZERTY keyboard is not at all ergonomic, since it has no numeric keypad and I must use MAJ (Shift) or CAPS LOCK to access the numeric keys; secondly, I bought this Mac for development, and chars like {, }, etc. are not directly accessible on the Apple AZERTY keyboard; the last thing is that diacritics are VERY easy to produce on an Apple keyboard with Mac OS X: ⌥ + c for a ç, for example, and many dead keys are easy to use (e.g. ⌥ + e, then e, gives you an é). So I have no difficulty writing in my native language with this keyboard under Mac OS X. BUT, when I boot into the Windows 7 Boot Camp partition, or when I use applications from it through VMware Unity, it is no longer as comfortable! Without a numeric keypad, it's impossible to use Alt codes to produce special characters (e.g.: Alt + 0231 for the ç). I've tried many solutions, like auto-replacement in Microsoft Office (e.g.: ,,c being replaced by ç), but for all my diacritics I must type a space, then a backspace, before the replacement works. I've also tried third-party software, such as Texter, but it is very buggy and doesn't work properly (or doesn't work at all) in many cases! So, is there a solution somewhere to get Mac OS X's nice and comfortable way of producing diacritics on Windows 7? Thanks in advance for your help and your time!

    Read the article
