Search Results

Search found 21301 results on 853 pages for 'duplicate values'.

  • SQL SERVER – Automated Type Conversion using Expressor Studio

    - by pinaldave
    Recently I had an interesting situation during my consultation project. Let me share with you how I solved the problem using Expressor Studio. Consider a situation in which you need to read a field, such as customer_identifier, from a text file and pass that field into a database table. In the source file's metadata structure, customer_identifier is described as a string; however, in the target database table, customer_identifier is described as an integer. Legitimately, all the source values for customer_identifier are valid numbers, such as "109380". To implement this in an ETL application, you probably would have hard-coded a type conversion function call, such as: output.customer_identifier=stringToInteger(input.customer_identifier) That wasn't so bad, was it? For this instance, programming this hard-coded type conversion function call was relatively easy. However, hard-coding, whether type conversion code or other business rule code, almost always means that the application containing hard-coded fields, function calls, and values is: a) specific to an instance of use; b) difficult to adapt to new situations; and c) short on reusable sub-parts. Therefore, in the long run, applications with hard-coded type conversion function calls don't scale well. In addition, they increase the overall level of effort and degree of difficulty involved in writing and maintaining ETL applications. To get around the trappings of hard-coded type conversion function calls, developers need access to smarter typing systems. The Expressor Studio product offers exactly this feature, providing developers with a type conversion automation engine based on type abstraction. The theory behind the engine is quite simple. A user specifies abstract data fields in the engine, and then writes applications against the abstractions (whereas in most ETL software, developers develop applications against the physical model). When a Studio-built application is run, Studio's engine automatically converts the source type to the abstracted data field's type and converts the abstracted data field's type to the target type. The engine can do this because it has a couple of built-in rules for type conversions. So, using the example above, a developer could specify customer_identifier as an abstract data field with a type of integer when using Expressor Studio. Upon reading the string value from the text file, Studio's type conversion engine automatically converts the source field from the type specified in the source's metadata structure to the abstract field's type. At the time of writing the data value to the target database, the engine doesn't have any work to do because the abstract data type and the target data type are the same. Had they been different, the engine would have automatically provided the conversion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Database, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: SSIS
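    The same hard-coded conversion pattern shows up in plain T-SQL load scripts as well. Below is a minimal, hypothetical sketch of that pattern (the staging and target table names are assumptions for illustration, not from the article): a string column is loaded into an integer column with an explicit CAST, which works but ties the load script to this one field and type.

        -- Hypothetical staging and target tables
        CREATE TABLE dbo.Staging_Customer (customer_identifier VARCHAR(20));
        CREATE TABLE dbo.Customer (customer_identifier INT);

        -- Hard-coded type conversion, analogous to stringToInteger() above
        INSERT INTO dbo.Customer (customer_identifier)
        SELECT CAST(customer_identifier AS INT)
        FROM dbo.Staging_Customer
        WHERE customer_identifier NOT LIKE '%[^0-9]%'; -- skip rows that are not purely numeric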

    Read the article

  • Returning Identity Value in SQL Server: @@IDENTITY Vs SCOPE_IDENTITY Vs IDENT_CURRENT

    - by Arefin Ali
    There are some common misconceptions about returning the last inserted identity value from tables. To return the last inserted identity value we can use the @@IDENTITY, SCOPE_IDENTITY or IDENT_CURRENT function depending on the requirement, but it can create a real mess if anybody uses one of these functions without knowing its exact purpose. So here I want to share my thoughts on this. @@IDENTITY, SCOPE_IDENTITY and IDENT_CURRENT are similar in that they all return values that were inserted into an identity column. Earlier, in SQL Server 7, we used @@IDENTITY to return the last inserted identity value because in those days we didn't have functions like SCOPE_IDENTITY or IDENT_CURRENT, but now we have all three. So let's check out which one is responsible for what. IDENT_CURRENT returns the last inserted identity value in a particular table. It does not depend on the connection or the scope of the insert statement. The IDENT_CURRENT function takes a table name as a parameter. Here is the syntax to get the last inserted identity value in a particular table using the IDENT_CURRENT function: SELECT IDENT_CURRENT('Employee') Both @@IDENTITY and SCOPE_IDENTITY return the last identity value created in any table in the current session. But there is a small difference between the two: SCOPE_IDENTITY returns the value inserted only within the current scope, whereas @@IDENTITY is not limited to any particular scope. Here are the syntaxes to get the last inserted identity value using these functions: SELECT @@IDENTITY SELECT SCOPE_IDENTITY() Now let's have a look at the following example. Suppose I have two tables called Employee and EmployeeLog. CREATE TABLE Employee ( EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL, EmpName VARCHAR(100) NOT NULL, EmpSal FLOAT NOT NULL, DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE()) ) CREATE TABLE EmployeeLog ( EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL, EmpName VARCHAR(100) NOT NULL, EmpSal FLOAT NOT NULL, DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE()) ) I have an insert trigger defined on the table Employee which inserts a new record into EmployeeLog whenever a record is inserted into the Employee table. Suppose I insert a new record into the Employee table using the following statement: INSERT INTO Employee (EmpName,EmpSal) VALUES ('Arefin','1') The trigger will fire automatically and insert a record into EmployeeLog. Here the scope of the insert statement and the trigger are different. In this situation, if I retrieve the last inserted identity value using @@IDENTITY, it will return the identity value from EmployeeLog, because @@IDENTITY is not limited to a particular scope. If I want the Employee table's identity value, I need to use SCOPE_IDENTITY in this scenario. So the moral is: always use SCOPE_IDENTITY to return the identity value of a recently created record in a SQL statement or stored procedure. It's safe and helps ensure bug-free code.
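    A minimal runnable sketch of that scenario, using the tables above. The trigger body is an assumption, since the article does not show it, but it reproduces the behaviour described:

        -- Assumed trigger: copy every new Employee row into EmployeeLog
        CREATE TRIGGER trg_Employee_Insert ON Employee
        AFTER INSERT
        AS
        BEGIN
            INSERT INTO EmployeeLog (EmpName, EmpSal)
            SELECT EmpName, EmpSal FROM inserted;
        END
        GO

        INSERT INTO Employee (EmpName, EmpSal) VALUES ('Arefin', 1);

        SELECT SCOPE_IDENTITY()             AS ScopeIdentity,   -- identity from Employee (current scope)
               @@IDENTITY                   AS AtAtIdentity,    -- identity from EmployeeLog (set by the trigger's insert)
               IDENT_CURRENT('Employee')    AS IdentCurrentEmp, -- last identity in Employee, any session
               IDENT_CURRENT('EmployeeLog') AS IdentCurrentLog; -- last identity in EmployeeLog, any session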

    Read the article

  • Modify Build Failure Work Item in TFS 2010 Build

    - by Jakob Ehn
    The default behaviour in TFS Team Build (all versions) is to create a bug work item when a build fails. The main benefit of this is that you get a work item for something that needs to be done, namely fixing the build. When the developer responsible for the build failure has fixed the problem, he/she can associate that check-in with the work item that was created from the previous build failure. In TFS 2005/2008 you could modify the information in the created work item by changing some predefined properties in the TFSBuild.proj file: <!-- WorkItemType The type of the work item created on a build failure. --> <WorkItemType>Bug</WorkItemType> <!-- WorkItemFieldValues Fields and values of the work item created on a build failure. Note: Use reference names for fields if you want the build to be resistant to field name changes. Reference names are language independent while friendly names are changed depending on the installed language. For example, "System.Reason" is the reference name for the "Reason" field. --> <WorkItemFieldValues>System.Reason=Build Failure;System.Description=Start the build using Team Build</WorkItemFieldValues> <!-- WorkItemTitle Title of the work item created on build failure. --> <WorkItemTitle>Build failure in build:</WorkItemTitle> <!-- DescriptionText History comment of the work item created on a build failure. --> <DescriptionText>This work item was created by Team Build on a build failure.</DescriptionText> <!-- BuildLogText Additional comment text for the work item created on a build failure. --> <BuildlogText>The build log file is at:</BuildlogText> <!-- ErrorWarningLogText Additional comment text for the work item created on a build failure. This text will only be added if there were errors or warnings. --> <ErrorWarningLogText>The errors/warnings log file is at:</ErrorWarningLogText> In TFS 2010, with Windows Workflow, you change this by modifying the properties on the OpenWorkItem activity. The hardest part is actually finding where this activity is located in the build process workflow. If you open the build definition in XAML you can just search for OpenWorkItem. If you use the designer you need to click your way down to the Catch section of the Try to Compile the Project sequence: To change the default values of the created work item, select the Created Work Item activity and look at the Properties window: Note the CustomFields property, which is a dictionary of keys (work item field names) and values. If you add custom fields to your work item, you can supply values for them here by adding new entries to the dictionary.

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP lets us create memory-optimized tables, which in turn offer significant performance improvements for typical OLTP workloads. The main objective of memory-optimized tables is to ensure that highly transactional tables can live in memory and remain in memory without losing a single record. The most significant part is that they still support the majority of Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking by using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes. Points to remember: Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP. Disk-based tables refer to the normal tables we have created in SQL Server since its inception; these tables use fixed-size 8 KB pages that need to be read from and written to disk as a unit. Natively compiled stored procedures refer to a new object type, supported by the In-Memory OLTP engine, which compiles them into machine code and can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they can't be used to reference any disk-based table. Interpreted Transact-SQL stored procedures refer to what SQL Server has always used. Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables. Interop refers to interpreted Transact-SQL that references memory-optimized tables. Using In-Memory OLTP The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with 32-bit editions. Creating Databases Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA.
    Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables: CREATE DATABASE InMemoryDB ON PRIMARY(NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', size=500MB), FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA (NAME = [InMemoryDB_mod_dir], FILENAME = 'S:\data\InMemoryDB_mod_dir'), (NAME = [InMemoryDB_mod_dir], FILENAME = 'R:\data\InMemoryDB_mod_dir') LOG ON (name = [SampleDB_log], Filename='L:\log\InMemoryDB_log.ldf', size=500MB) COLLATE Latin1_General_100_BIN2; The example code above creates files on three different drives (D:, S: and R:) for the data files and in-memory storage, so if you would like to run this code kindly change the drive and folder locations as per your convenience. Also notice that a binary Windows (non-SQL) collation was specified; a BIN2 collation is the only collation supported at this point for indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the command below to achieve the same. ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA; GO ALTER DATABASE AdventureWorks2012 ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod') TO FILEGROUP hekaton_mod; GO Creating Tables There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE query example. DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY) A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means the data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that data is persisted along with the schema. Indexing a Memory-Optimized Table A memory-optimized table created with DURABILITY = SCHEMA_AND_DATA must always have an index, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified. CREATE TABLE Mem_Table ( [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000), [City] VARCHAR(32) NULL, [State_Province] VARCHAR(32) NULL, [LastModified] DATETIME NOT NULL, ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA); As you can see in the query example above, we have used the clause MEMORY_OPTIMIZED = ON to make sure the table is treated as a memory-optimized table and not just a normal table, and we have used DURABILITY = SCHEMA_AND_DATA, which means it will persist data along with the metadata. You can also notice that this table has a PRIMARY KEY declared up front, which is a mandatory clause for durable memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus on row and index storage in memory-optimized tables, so stay tuned for that as well. Now that we have covered the basics of memory-optimized tables and understood the key things to remember while using them, let's explore some examples to understand the performance gains from memory-optimized tables.
    I will be using the database which I created earlier in this article, i.e. InMemoryDB, in the demo exercise below. USE InMemoryDB GO -- Creating a disk based table CREATE TABLE dbo.Disktable ( Id INT IDENTITY, Name CHAR(40) ) GO CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id) GO -- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA CREATE TABLE dbo.Memorytable_durable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) GO -- Creating another memory optimized table with similar structure but DURABILITY = SCHEMA_ONLY CREATE TABLE dbo.Memorytable_nondurable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_only) GO -- Now insert 100000 records in dbo.Disktable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END -- Do the same inserts for Memory table dbo.Memorytable_durable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END -- Now finally do the same inserts for Memory table dbo.Memorytable_nondurable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END The above 3 inserts took 1.20 minutes, 54 secs, and 2 secs respectively to insert 100000 records on my machine with 8 GB RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and memory-optimized tables with durability SCHEMA_ONLY are even faster still, since they do not persist their data to disk at all. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now, I leave the decision on using memory-optimized tables to you. I hope you like this article and that it helped you understand the fundamentals of In-Memory OLTP. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig
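    As a follow-up to the demo above, here is a hedged sketch of a natively compiled stored procedure against dbo.Memorytable_durable, the object type the article mentions but does not demonstrate. The procedure name and the lookup logic are illustrative assumptions, not from the article:

        -- Natively compiled procedure: compiled to machine code, may only reference memory-optimized tables
        CREATE PROCEDURE dbo.usp_GetMemoryRow (@Id INT)
        WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
        AS
        BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
            SELECT Id, Name
            FROM dbo.Memorytable_durable
            WHERE Id = @Id;
        END
        GO

        -- Usage
        EXEC dbo.usp_GetMemoryRow @Id = 100;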

    Read the article

  • ViewStateMode in ASP.Net 4.0

    - by sreejukg
    When asp.net introduced the concept of viewstate, it changed the way developers maintain state for the controls on a web page. Until then (in classic ASP), it was the developer's responsibility to keep track of a control's state by manually assigning the posted content before rendering the control again. Viewstate allowed the developer to do this with ease; developers no longer have to worry about how controls keep their state on postback. Viewstate is rendered to the browser as a hidden variable __viewstate. Since viewstate stores the values of all controls, as the number of controls in the page increases, the content of viewstate grows larger, which causes some websites to load slowly. As developers we need viewstate, but we do not actually want it for all the controls on the page. Up to asp.net 3.5, if viewstate is disabled from web.config (using <pages enableViewState="false"> ... </pages>), then you cannot enable it at the page or control level. Both <%@ Page EnableViewState="true" .... %> and <asp:TextBox EnableViewState="true" ... /> will not work in this case. Lots of developers demanded more control over viewstate. It would be useful if developers were able to disable it for the entire page and enable it only for those controls that need viewstate. With ASP.NET 4.0, this is possible, which is happy news for developers. This is achieved by introducing a new property called ViewStateMode. Let us see what ViewStateMode is: a new property in asp.net 4.0 that allows developers to enable viewstate for an individual control even if its parent has disabled it. The ViewStateMode property can take one of three values. Enabled: enable view state for the control even if the parent control has view state disabled. Disabled: disable view state for this control even if the parent control has view state enabled. Inherit: inherit the value of ViewStateMode from the parent; this is the default value. To disable view state for a page and to enable it for a specific control on the page, you can set the EnableViewState property of the page to true, then set the ViewStateMode property of the page to Disabled, and then set the ViewStateMode property of the control to Enabled. Find the example below. Page directive - <%@ Page Language="C#" EnableViewState="True" ViewStateMode="Disabled" .......... %> Code for the control - <asp:TextBox runat="server" ViewStateMode="Enabled" ............../> Now the viewstate will be disabled for the whole page, but enabled for the TextBox. ViewStateMode gives developers more control over the viewstate.

    Read the article

  • How to create Adhoc workflow in UCM

    - by vijaykumar.yenne
    UCM has an inbuilt workflow engine that can handle document-centric approval/rejection processes to ensure that the right set of assets goes into the repository. Anybody who has gone through the documentation is aware that there are two types of workflows that can be defined using the Workflow Admin applet in UCM, namely Criteria and Basic. While a Criteria workflow is an automatic process based on certain metadata attributes (Security Group and one of the metadata fields), a Basic workflow is a manual workflow that needs to be initiated by the admin. Any workflow that can be put on a whiteboard can be translated into a UCM workflow process, and there are concepts like sub-workflows, tokens, events and Idoc scripting that can be introduced to handle any kind of complex workflow. There is a specific Workflow Implementation guide that explains the concepts in detail. One of the standard queries I come across is how to handle ad hoc workflows, where at the time of contributing the content the contributors would like to decide which workflow to initiate and which users to pick for approval in each step; hence this post. This is what I want to achieve: I would like to display on my check-in screen the kinds of workflows that a contributor could choose from. Based on the workflow the contributor chooses, the other metadata fields (Step One, Step Two and Step Three) need to be filled in, and these fields decide who the approvers are going to be.
    1. Create a criteria workflow called One_Step_Review.
    2. Create two tokens: StepOne <$wfAddUser(xWorkflowStepOne, "user")$> and OriginalAuthor <$wfAddUser(wfGet("OriginalAuthor"), "user")$>.
    3. Create two steps in the workflow created (One_Step_Review).
    4. Edit Step1 of the workflow, add the Step One token and select the review permission.
    5. In the exit conditions tab, require at least one reviewer.
    6. In the events tab, add an entry event <$wfSet("OriginalAuthor",dDocAuthor)$> to capture the contributor, who shall be notified in the second step of the workflow.
    7. Add the second step, Notify_Author, to the workflow.
    8. Add the original author token to the above step.
    9. Enable the workflow.
    10. Open the Configuration Manager applet and create a metadata field Workflow with the option list enabled, and add the list of values as shown here.
    11. Create another metadata field WorkflowStepOne with its option list configured to the Users view. This shall display all the users registered with UCM, which when selected shall be associated with the tokens of the workflow. Refer to the above token.
    As indicated in the above steps, you could create multiple workflows and associate the custom metadata field values with the tokens so that contributors can decide who can approve their content.

    Read the article

  • Why Doesn’t Partition Elimination Work?

    - by Paul White
    Given a partitioned table and a simple SELECT query that compares the partitioning column to a single literal value, why does SQL Server read all the partitions when it seems obvious that only one partition needs to be examined? Sample Data The following script creates a table, partitioned on the char(3) column ‘Div’, and populates it with 100,000 rows of data: USE Sandpit; GO CREATE PARTITION FUNCTION PF ( char (3)) AS RANGE RIGHT FOR VALUES ( '1' , '2' , '3' , '4' , '5' , '6' , '7' , '8' , '9'...(read more)

    Read the article

  • Adding Descriptive Flex Field (DFF) through OAF Personalization

    - by Manoj Madhusoodanan
    In this blog I will explain how to add a DFF to an existing OAF page through personalization. I am using the Supplier Quick Update page ( /oracle/apps/pos/supplier/webui/SuppSummPG ). If you want to see how to create a DFF please click here. In this scenario I am using a custom DFF. Following are the details.
    Application -> Payables ( Code: SQLAP )
    Name -> XXCUST_SUPPLIER_DFF
    Title -> XXCUST - Supplier DFF
    Table Name -> AP_SUPPLIERS
    DFV View Name -> XXCUST_SUPPLIER_DFV
    Reference Fields -> ATTRIBUTE_CATEGORY
    Following are the Context Field details.
    Prompt -> Supplier Type
    Value Set -> XXCUST_SUP_TYPE ( Values : External and Internal )
    Reference Field -> ATTRIBUTE_CATEGORY
    The table below shows the segment details of XXCUST_SUPPLIER_DFF (Code | Segment | Column | Value Set):
    Global Data Elements | Identification Number | ATTRIBUTE1 | 15 Characters
    External | Type | ATTRIBUTE2 | XXCUST_EXT_SUP_TYPE ( Values : Domestic and International )
    Internal | Department | ATTRIBUTE2 | 15 Characters
    You need to perform the following steps to create the flex item in the Quick Update page.
    1) Click on Personalize Page. In the Personalize Page click on Complete View.
    2) Click on Create Item (based on where you want to place the DFF, choose the appropriate layout).
    3) Create the flex item with the following details.
    4) If you want to arrange the item in the page, click on Reorder.
    Following is the output.

    Read the article

  • Customize Team Build 2010 – Part 16: Specify the relative reference path

    In the series the following parts have been published Part 1: Introduction Part 2: Add arguments and variables Part 3: Use more complex arguments Part 4: Create your own activity Part 5: Increase AssemblyVersion Part 6: Use custom type for an argument Part 7: How is the custom assembly found Part 8: Send information to the build log Part 9: Impersonate activities (run under other credentials) Part 10: Include Version Number in the Build Number Part 11: Speed up opening my build process template Part 12: How to debug my custom activities Part 13: Get control over the Build Output Part 14: Execute a PowerShell script Part 15: Fail a build based on the exit code of a console application Part 16: Specify the relative reference path As I have already blogged about, it is not intuitive how to specify the paths where the build server has to look for references that are stored in Source Control. It is a common practice to store 3rd party libraries in Source Control, so they are available to everyone, everyone uses the same version of the libraries and updating a library can be done centrally. In Team Build 2010 these paths are specified as a parameter for MSBuild. What we will do in this post is building the values for this parameter based on the values in an argument. You are now pretty aware how to customize the build template, so let’s do the modifications in another way. Instead of opening the xaml file in the workflow designer, we open it in the XML editor. You can open it in the XML Editor by either selecting the Open with menu (see the context menu), or by choosing the View code option. To add this functionality we need to: Specify a new argument Add the argument to the metadata Build the absolute paths for the references and add these paths to the MSBuild arguments 1. Specify a new argument Locate at the top of the document the Members (which are the arguments) of the XAML and add the following line <x:Property Name="ReferencePaths" Type="InArgument(s:String[])" /> 2. Add the argument to the metadata Then locate the line <mtbw:ProcessParameterMetadataCollection> and paste the following line <mtbw:ProcessParameterMetadata Category="Misc" Description="The list of reference paths, relative to the root path in the Workspace mapping." DisplayName="Reference paths" ParameterName="ReferencePaths" /> 3. Build the absolute paths for the references and add these paths to the MSBuild arguments Now locate the place where the assignments are done to the variables used in the agent. 
And add the following lines after the last Assign activity         <Sequence DisplayName="Initialize ReferencePath" sap:VirtualizedContainerService.HintSize="464,428">           <Sequence.Variables>             <Variable x:TypeArguments="x:String" Name="ReferencePathsArgument">               <Variable.Default>                 <Literal x:TypeArguments="x:String" Value="" />               </Variable.Default>             </Variable>           </Sequence.Variables>           <sap:WorkflowViewStateService.ViewState>             <scg:Dictionary x:TypeArguments="x:String, x:Object">               <x:Boolean x:Key="IsExpanded">True</x:Boolean>             </scg:Dictionary>           </sap:WorkflowViewStateService.ViewState>           <ForEach x:TypeArguments="x:String" DisplayName="Iterate through the paths" sap:VirtualizedContainerService.HintSize="287,206" mtbwt:BuildTrackingParticipant.Importance="Low" Values="[ReferencePaths]">             <ActivityAction x:TypeArguments="x:String">               <ActivityAction.Argument>                 <DelegateInArgument x:TypeArguments="x:String" Name="path" />               </ActivityAction.Argument>               <Assign x:TypeArguments="x:String" DisplayName="Build ReferencePath argument" sap:VirtualizedContainerService.HintSize="257,100" mtbwt:BuildTrackingParticipant.Importance="Low"  To="[ReferencePathsArgument]" Value="[If(String.IsNullOrEmpty(ReferencePathsArgument), &quot;&quot;, ReferencePathsArgument + &quot;;&quot;) + IO.Path.Combine(SourcesDirectory, path)]" />             </ActivityAction>           </ForEach>           <Assign DisplayName="Append the reference paths to the MSBuild Arguments" sap:VirtualizedContainerService.HintSize="287,58">             <Assign.To>               <OutArgument x:TypeArguments="x:String">[MSBuildArguments]</OutArgument>             </Assign.To>             <Assign.Value>               <InArgument x:TypeArguments="x:String">[String.Format("{0} /p:ReferencePath=""{1}""", MSBuildArguments, ReferencePathsArgument)]</InArgument>             </Assign.Value>           </Assign>         </Sequence> Now you can use the template to specify the paths relative to SourcesDirectory. You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.

    Read the article

  • Daily tech links for .net and related technologies - Mar 18-21, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - Mar 18-21, 2010 Web Development TDD kata for ASP.NET MVC controllers (part 2) -David Take Control Of Web Control ClientID Values in ASP.NET 4.0 - Scott Mitchell Inside the ASP.NET MVC Controller Factory - Dino Esposito Microsoft, jQuery, and Templating - stephen walther Cross Domain AJAX Request with YQL and jQuery - Jeffrey Way T4MVC Add-In to auto run template -Wayne Web Design Website Content Planning The Right Way - Kristin Wemmer Microsoft...(read more)

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #051

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2007 Explanation and Understanding NOT NULL Constraint NOT NULL is an integrity CONSTRAINT. It does not allow creating a row where the column contains a NULL value. The most discussed question about NULL is: what is NULL? I will not go into an in-depth analysis of it. Simply put, NULL is unknown or missing data. When NULL is present in database columns, it can affect the integrity of the database. I really do not prefer NULL in the database unless it is absolutely necessary. Three T-SQL Script to Create Primary Keys on Table I have always enjoyed writing about three topics: Constraints and Keys, Backup and Restore, and Datetime Functions. Primary Key constraints prevent duplicate values for columns, provide a unique identifier for each row, and create a clustered index on the columns. 2008 Get Numeric Value From Alpha Numeric String – UDF for Get Numeric Numbers Only SQL is great with string operations. Many times, I use T-SQL to do my string operations. Let us see a User Defined Function, which I wrote a few days ago, which will return only numeric values from alphanumeric values. Introduction and Example of UNION and UNION ALL It is very interesting when I get requests from blog readers to re-write my previous articles. I have received a few requests to rewrite my article SQL SERVER – Union vs. Union All – Which is better for performance? with examples. I request you to read my previous article first to understand the concept, and then read this article to understand the same concept with an example. Downgrade Database for Previous Version The main question is how can they downgrade from SQL Server 2005 to SQL Server 2000? The answer is: not possible. Get Common Records From Two Tables Without Using Join Following is my scenario: suppose Table 1 and Table 2 have the same column, e.g. Column1. Following are the queries: 1. Select column1,column2 From Table1 2. Select column1 From Table2 I want to find common records from these tables, but I don't want to use the Join clause, because for that I need to specify the column name for the Join condition. Will you help me to get common records without using a Join condition? I am using SQL Server 2005. Retrieve – Select Only Date Part From DateTime – Best Practice – Part 2 A year ago I wrote a post, SQL SERVER – Retrieve – Select Only Date Part From DateTime – Best Practice, where I discussed two different methods of getting the date part from datetime. Introduction to CLR – Simple Example of CLR Stored Procedure CLR is an abbreviation of Common Language Runtime. In SQL Server 2005 and later versions, database objects can be created in the CLR. Stored Procedures, Functions and Triggers can be coded in the CLR. CLR is faster than T-SQL in many cases. CLR is mainly used to accomplish tasks which are not possible in T-SQL or which would use lots of resources. The CLR is usually implemented where there are intense string operations, thread management or iteration methods which can be complicated for T-SQL. Implementing CLR also provides more security to Extended Stored Procedures.
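    For the "common records without a JOIN" question above, one approach that satisfies the constraint is INTERSECT; this sketch is an illustration rather than the exact solution from the original post:

        -- Returns values of Column1 that appear in both tables, without a JOIN
        SELECT Column1 FROM Table1
        INTERSECT
        SELECT Column1 FROM Table2;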
2009 Comic Slow Query – SQL Joke Before Presentation After Presentation Enable Automatic Statistic Update on Database In one of the recent projects, I found out that despite putting good indexes and optimizing the query, I could not achieve an optimized performance and I still received an unoptimized response from the SQL Server. On examination, I figured out that the culprit was statistics. The database that I was trying to optimize had auto update of the statistics was disabled. Recently Executed T-SQL Query Please refer to blog post  query to recently executed T-SQL query on database. Change Collation of Database Column – T-SQL Script – Consolidating Collations – Extention Script At some time in your DBA career, you may find yourself in a position when you sit back and realize that your database collations have somehow run amuck, or are faced with the ever annoying CANNOT RESOLVE COLLATION message when trying to join data of varying collation settings. 2010 Visiting Alma Mater – Delivering Session on Database Performance and Career – Nirma Institute of Technology Everyone always dreams of visiting their school and college, where they have studied once. It is a great feeling to see the college once again – where you have spent the wonderful golden years of your time. College time is filled with studies, education, emotions and several plans to build a future. I consider myself fortunate as I got the opportunity to study at some of the best places in the world. Change Column DataTypes There are times when I feel like writing that I am a day older in SQL Server. In fact, there are many who are looking for a solution that is simple enough. Have you ever searched online for something very simple. I often do and enjoy doing things which are straight forward and easy to change. 2011 Three DMVs – sys.dm_server_memory_dumps – sys.dm_server_services – sys.dm_server_registry In this blog post we will see three new DMVs which are introduced in Denali. The DMVs are very simple and there is not much to describe them. So here is the simple game. I will be asking a question back to you after seeing the result of the each of the DMV and you help me to complete this blog post. A Simple Quiz – T-SQL Brain Trick If you have some time, I strongly suggest you try this quiz out as it is for sure twists your brain. 2012 List All The Column With Specific Data Types in Database 5 years ago I wrote script SQL SERVER – 2005 – List All The Column With Specific Data Types, when I read it again, it is very much relevant and I liked it. This is one of the script which every developer would like to keep it handy. I have upgraded the script bit more. I have included few additional information which I believe I should have added from the beginning. It is difficult to visualize the final script when we are writing it first time. Find First Non-Numeric Character from String The function PATINDEX exists for quite a long time in SQL Server but I hardly see it being used. Well, at least I use it and I am comfortable using it. Here is a simple script which I use when I have to identify first non-numeric character. Finding Different ColumnName From Almost Identitical Tables Well here is the interesting example of how we can use sys.column catalogue views and get the details of the newly added column. I have previously written about EXCEPT over here which is very similar to MINUS of Oracle. 
Storing Data and Files in Cloud – Dropbox – Personal Technology Tip I thought long and hard about doing a Personal Technology Tips series for this blog.  I have so many tips I’d like to share.  I am on my computer almost all day, every day, so I have a treasure trove of interesting tidbits I like to share if given the chance.  The only thing holding me back – which tip to share first?  The first tip obviously has the weight of seeming like the most important.  But this would mean choosing amongst my favorite tricks and shortcuts.  This is a hard task. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
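    For the "Enable Automatic Statistic Update on Database" item in the 2009 list above, here is a minimal sketch of checking and enabling the setting; the database name is just a placeholder:

        -- Check whether automatic statistics updates are enabled
        SELECT name, is_auto_update_stats_on FROM sys.databases WHERE name = 'YourDatabase';

        -- Enable automatic statistics updates
        ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS ON;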

    Read the article

  • SQL SERVER – DMV – sys.dm_exec_query_optimizer_info – Statistics of Optimizer

    - by pinaldave
    Incredibly, SQL Server has so much information to share with us. Every single day, I am amazed by this SQL Server technology. Sometimes I find interesting information just by querying a few of the DMVs, and when I present this info in front of my clients during performance tuning consultancy, they are surprised by my findings. Today, I am going to share one of the hidden gems of the DMVs with you, the one which I frequently use to understand what's going on under the hood of SQL Server. SQL Server keeps a record of most of the operations of the Query Optimizer. We can learn many interesting details about the optimizer which can be utilized to improve the performance of the server. SELECT * FROM sys.dm_exec_query_optimizer_info WHERE counter IN ('optimizations', 'elapsed time','final cost', 'insert stmt','delete stmt','update stmt', 'merge stmt','contains subquery','tables', 'hints','order hint','join hint', 'view reference','remote query','maximum DOP', 'maximum recursion level','indexed views loaded', 'indexed views matched','indexed views used', 'indexed views updated','dynamic cursor request', 'fast forward cursor request') All occurrence values are cumulative and are set to 0 at system restart. All values in the value field are set to NULL at system restart. I have removed a few of the internal counters from the script above, and kept only the documented details. Let us check the result of the above query. As you can see, there is so much vital information revealed by the above query. I can easily say how many times the Optimizer was triggered and what the average time taken by it to optimize my queries was. Additionally, I can also determine how many times update, insert or delete statements were optimized. Using this dynamic management view, I was able to quickly figure out that my client was overusing Query Hints. If you have been reading my blog, I am sure you are aware of my series related to SQL Server Views, SQL SERVER – The Limitations of the Views – Eleven and more…. With this, I can take a quick look and figure out how many times Views were used in various solutions within the query. Moreover, you can easily learn what fraction of the optimizations involved a particular feature. For example, the following query tells me what fraction of all optimizations involved a view reference. As this also includes system Views and DMVs, the number is a bit higher on my machine. SELECT (SELECT CAST (occurrence AS FLOAT) FROM sys.dm_exec_query_optimizer_info WHERE counter = 'view reference') / (SELECT CAST (occurrence AS FLOAT) FROM sys.dm_exec_query_optimizer_info WHERE counter = 'optimizations') AS ViewReferencedFraction Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
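    As a small companion to the queries above, the 'elapsed time' counter can be read directly for the average optimization time the article mentions; per the DMV's documentation, occurrence is the number of optimizations and value is the average elapsed time per optimization in seconds:

        -- How many optimizations have run, and the average time spent per optimization
        SELECT counter, occurrence, value
        FROM sys.dm_exec_query_optimizer_info
        WHERE counter IN ('optimizations', 'elapsed time');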

    Read the article

  • A small, intra-app Object to String Serializer

    - by Rick Strahl
    On a few occasions I've needed a very compact serializer for small and simple, flat object serialization, typically for storage in Cookies or a FormsAuthentication ticket in ASP.NET. XML and JSON serialization are too verbose for those scenarios so a simple property serializer that strings together the values was needed. Originally I did this by hand, but here is a class that automates the process.

    Read the article

  • Building a better .NET Application Configuration Class - revisited

    - by Rick Strahl
    Managing configuration settings is an important part of successful applications. It should be easy to ensure that you can easily access and modify configuration values within your applications. If it's not - well things don't get parameterized as much as they should. In this post I discuss a custom Application Configuration class that makes it super easy to create reusable configuration objects in your applications using a code-first approach and the ability to persist configuration information into various types of configuration stores.

    Read the article

  • Easier ASP.NET MVC Routing

    - by Steve Wilkes
    I've recently refactored the way Routes are declared in an ASP.NET MVC application I'm working on, and I wanted to share part of the system I came up with; a really easy way to declare and keep track of ASP.NET MVC Routes, which then allows you to find the name of the Route which has been selected for the current request. Traditional MVC Route Declaration Traditionally, ASP.NET MVC Routes are added to the application's RouteCollection using overloads of the RouteCollection.MapRoute() method; for example, this is the standard way the default Route which matches /controller/action URLs is created: routes.MapRoute(     "Default",     "{controller}/{action}/{id}",     new { controller = "Home", action = "Index", id = UrlParameter.Optional }); The first argument declares that this Route is to be named 'Default', the second specifies the Route's URL pattern, and the third contains the URL pattern segments' default values. To then write a link to a URL which matches the default Route in a View, you can use the HtmlHelper.RouteLink() method, like this: @ this.Html.RouteLink("Default", new { controller = "Orders", action = "Index" }) ...that substitutes 'Orders' into the {controller} segment of the default Route's URL pattern, and 'Index' into the {action} segment. The {Id} segment was declared optional and isn't specified here. That's about the most basic thing you can do with MVC routing, and I already have reservations: I've duplicated the magic string "Default" between the Route declaration and the use of RouteLink(). This isn't likely to cause a problem for the default Route, but once you get to dozens of Routes the duplication is a pain. There's no easy way to get from the RouteLink() method call to the declaration of the Route itself, so getting the names of the Route's URL parameters correct requires some effort. The call to MapRoute() is quite verbose; with dozens of Routes this gets pretty ugly. If at some point during a request I want to find out the name of the Route has been matched.... and I can't. To get around these issues, I wanted to achieve the following: Make declaring a Route very easy, using as little code as possible. Introduce a direct link between where a Route is declared, where the Route is defined and where the Route's name is used, so I can use Visual Studio's Go To Definition to get from a call to RouteLink() to the declaration of the Route I'm using, making it easier to make sure I use the correct URL parameters. Create a way to access the currently-selected Route's name during the execution of a request. My first step was to come up with a quick and easy syntax for declaring Routes. 1 . An Easy Route Declaration Syntax I figured the easiest way of declaring a route was to put all the information in a single string with a special syntax. For example, the default MVC route would be declared like this: "{controller:Home}/{action:Index}/{Id}*" This contains the same information as the regular way of defining a Route, but is far more compact: The default values for each URL segment are specified in a colon-separated section after the segment name The {Id} segment is declared as optional simply by placing a * after it That's the default route - a pretty simple example - so how about this? 
routes.MapRoute(     "CustomerOrderList",     "Orders/{customerRef}/{pageNo}",     new { controller = "Orders", action = "List", pageNo = UrlParameter.Optional },     new { customerRef = "^[a-zA-Z0-9]+$", pageNo = "^[0-9]+$" }); This maps to the List action on the Orders controller URLs which: Start with the string Orders/ Then have a {customerRef} set of characters and numbers Then optionally a numeric {pageNo}. And again, it’s quite verbose. Here's my alternative: "Orders/{customerRef:^[a-zA-Z0-9]+$}/{pageNo:^[0-9]+$}*->Orders/List" Quite a bit more brief, and again, containing the same information as the regular way of declaring Routes: Regular expression constraints are declared after the colon separator, the same as default values The target controller and action are specified after the -> The {pageNo} is defined as optional by placing a * after it With an appropriate parser that gave me a nice, compact and clear way to declare routes. Next I wanted to have a single place where Routes were declared and accessed. 2. A Central Place to Declare and Access Routes I wanted all my Routes declared in one, dedicated place, which I would also use for Route names when calling RouteLink(). With this in mind I made a single class named Routes with a series of public, constant fields, each one relating to a particular Route. With this done, I figured a good place to actually declare each Route was in an attribute on the field defining the Route’s name; the attribute would parse the Route definition string and make the resulting Route object available as a property. I then made the Routes class examine its own fields during its static setup, and cache all the attribute-created Route objects in an internal Dictionary. Finally I made Routes use that cache to register the Routes when requested, and to access them later when required. So the Routes class declares its named Routes like this: public static class Routes{     [RouteDefinition("Orders/{customerName}->Orders/Index")]     public const string OrdersCustomerIndex = "OrdersCustomerIndex";     [RouteDefinition("Orders/{customerName}/{orderId:^([0-9]+)$}->Orders/Details")]     public const string OrdersDetails = "OrdersDetails";     [RouteDefinition("{controller:Home}*/{action:Index}*")]     public const string Default = "Default"; } ...which are then used like this: @ this.Html.RouteLink(Routes.Default, new { controller = "Orders", action = "Index" }) Now that using Go To Definition on the Routes.Default constant takes me to where the Route is actually defined, it's nice and easy to quickly check on the parameter names when using RouteLink(). Finally, I wanted to be able to access the name of the current Route during a request. 3. Recovering the Route Name The RouteDefinitionAttribute creates a NamedRoute class; a simple derivative of Route, but with a Name property. When the Routes class examines its fields and caches all the defined Routes, it has access to the name of the Route through the name of the field against which it is defined. It was therefore a pretty easy matter to have Routes give NamedRoute its name when it creates its cache of Routes. This means that the Route which is found in RequestContext.RouteData.Route is now a NamedRoute, and I can recover the Route's name during a request. 
    For visibility, I made NamedRoute.ToString() return the Route name and URL pattern, like this: The screenshot is from an example project I’ve made on bitbucket; it contains all the named route classes and an MVC 3 application which demonstrates their use. I’ve found this way of defining and using Routes much tidier than the default MVC system, and I hope you find it useful too.

    Read the article

  • How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel

    - by Javier Treviño
    Fetching data from a database to then get it into an Excel spreadsheet to do analysis, reporting, transforming, sharing, etc. is a very common task among users. There are several ways to extract data from a MySQL database to then import it to Excel; for example you can use the MySQL Connector/ODBC to configure an ODBC connection to a MySQL database, then in Excel use the Data Connection Wizard to select the database and table from which you want to extract data from, then specify what worksheet you want to put the data into.  Another way is to somehow dump a comma delimited text file with the data from a MySQL table (using the MySQL Command Line Client, MySQL Workbench, etc.) to then in Excel open the file using the Text Import Wizard to attempt to correctly split the data in columns. These methods are fine, but involve some degree of technical knowledge to make the magic happen and involve repeating several steps each time data needs to be imported from a MySQL table to an Excel spreadsheet. So, can this be done in an easier and faster way? With MySQL for Excel you can. MySQL for Excel features an Import MySQL Data action where you can import data from a MySQL Table, View or Stored Procedure literally with a few clicks within Excel.  Following is a quick guide describing how to import data using MySQL for Excel. This guide assumes you already have a working MySQL Server instance, Microsoft Office Excel 2007 or 2010 and MySQL for Excel installed. 1. Opening MySQL for Excel Being an Excel Add-In, MySQL for Excel is opened from within Excel, so to use it open Excel, go to the Data tab located in the Ribbon and click MySQL for Excel at the far right of the Ribbon. 2. Creating a MySQL Connection (may be optional) If you have MySQL Workbench installed you will automatically see the same connections that you can see in MySQL Workbench, so you can use any of those and there may be no need to create a new connection. If you want to create a new connection (which normally you will do only once), in the Welcome Panel click New Connection, which opens the Setup New Connection dialog. Here you only need to give your new connection a distinctive Connection Name, specify the Hostname (or IP address) where the MySQL Server instance is running on (if different than localhost), the Port to connect to and the Username for the login. If you wish to test if your setup is good to go, click Test Connection and an information dialog will pop-up stating if the connection is successful or errors were found. 3.Opening a connection to a MySQL Server To open a pre-configured connection to a MySQL Server you just need to double-click it, so the Connection Password dialog is displayed where you enter the password for the login. 4. Selecting a MySQL Schema After opening a connection to a MySQL Server, the Schema Selection Panel is shown, where you can select the Schema that contains the Tables, Views and Stored Procedures you want to work with. To do so, you just need to either double-click the desired Schema or select it and click Next >. 5. Importing data… All previous steps were really the basic minimum needed to drill-down to the DB Object Selection Panel  where you can see the Database Objects (grouped by type: Tables, Views and Procedures in that order) that you want to perform actions against; in the case of this guide, the action of importing data from them. a. 
From a MySQL Table To import from a Table you just need to select it from the list of Database Objects’ Tables group; after selecting it you will notice the actions below the list become available, then click Import MySQL Data. The Import Data dialog is displayed; you can see some basic information here like the name of the Excel worksheet the data will be imported to (in the window title), the Table Name, the total Row Count and a 10-row preview of the data meant for the user to see the columns that the table contains and to provide a way to select which columns to import. The Import Data dialog is designed with defaults in place so all data is imported (all rows and all columns) by just clicking Import; this is important to minimize the number of clicks needed to get the job done. After the import is performed you will have the data in the Excel worksheet formatted automatically. If you need to override the defaults in the Import Data dialog to change the columns selected for import or to change the number of imported rows you can easily do so before clicking Import. In the screenshot below the defaults are overridden to import only the first 3 columns and rows 10 – 60 (Limit to 50 Rows and Start with Row 10). If the number of rows to be imported exceeds the maximum number of rows Excel can hold in its worksheet, a warning will be displayed in the dialog, meaning the imported number of rows will be limited by that maximum number (65,535 rows if the worksheet is in Compatibility Mode). In the screenshot below you can see the Table contains 80,559 rows, but only 65,534 rows will be imported since the first row is used for the column names if the Include Column Names as Headers checkbox is checked. b. From a MySQL View Similar to the way of importing from a Table, to import from a View you just need to select it from the list of Database Objects’ Views group, then click Import MySQL Data. The Import Data dialog is displayed; identically to the way everything looks when importing from a table, the dialog displays the View Name, the total Row Count and the data preview grid. Since Views are really a filtered way to display data from Tables, it is actually as if we are extracting data from a Table; so the Import Data dialog is identical for those 2 Database Objects. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note that you can override the defaults in the Import Data dialog in the same way described above for importing data from Tables. Also the Compatibility Mode warning will be displayed if data exceeds the maximum number of rows explained before. c. From a MySQL Procedure To import from a Procedure you just need to select it from the list of Database Objects’ Procedures group (note you can see Procedures here but not Functions since these return a single value, so by design they are filtered out). After the selection is made, click Import MySQL Data. The Import Data dialog is displayed, but this time you can see it looks different from the one used for Tables and Views. Given the nature of Stored Procedures, they first require that values are supplied for their Parameters, and they can also return multiple Result Sets; so the Import Data dialog shows the Procedure Name and the Procedure Parameters in a grid where their values are input. After you supply the Parameter Values, click Call. 
After calling the Procedure, the Result Sets returned by it are displayed at the bottom of the dialog; output parameters and the return value of the Procedure are appended as the last Result Set of the group. You can see each Result Set is displayed as a tab so you can see a preview of the returned data.  You can specify if you want to import the Selected Result Set (default), All Result Sets – Arranged Horizontally or All Result Sets – Arranged Vertically using the Import drop-down list; then click Import. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot.  Note in this example all Result Sets were imported and arranged vertically. As you can see using MySQL for Excel importing data from a MySQL database becomes an easy task that requires very little technical knowledge, so it can be done by any type of user. Hope you enjoyed this guide! Remember that your feedback is very important for us, so drop us a message: MySQL on Windows (this) Blog - https://blogs.oracle.com/MySqlOnWindows/ Forum - http://forums.mysql.com/list.php?172 Facebook - http://www.facebook.com/mysql Cheers!
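To put the "degree of technical knowledge" the add-in saves you from into perspective, here is a minimal sketch of what the scripted alternative mentioned at the start of the guide (Connector/ODBC plus a comma-delimited file) might look like in C#. The DSN name "MyMySQL", the customers table and the output file name are assumptions made purely for illustration.

using System;
using System.Data.Odbc;
using System.IO;

class MySqlToCsvDump
{
    static void Main()
    {
        // Assumes an ODBC data source named "MyMySQL" configured with
        // MySQL Connector/ODBC and a table called "customers" - both are
        // placeholder names for this sketch.
        using (var connection = new OdbcConnection("DSN=MyMySQL"))
        using (var writer = new StreamWriter("customers.csv"))
        {
            connection.Open();
            var command = new OdbcCommand("SELECT * FROM customers", connection);

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    var fields = new string[reader.FieldCount];
                    for (int i = 0; i < reader.FieldCount; i++)
                    {
                        fields[i] = Convert.ToString(reader.GetValue(i));
                    }
                    // Naive CSV output; real data would need quoting/escaping.
                    writer.WriteLine(string.Join(",", fields));
                }
            }
        }
    }
}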

    Read the article

  • JavaFX 2.0 at Devoxx 2011

    - by Janice J. Heiss
    JavaFX Sessions Abound JavaFX had a big presence at Devoxx 2011 as witnessed by the number of sessions this year given by leading JavaFX movers and shakers.     “JavaFX 2.0 -- A Java Developer's Guide” by Java Champions Stephen Chin and Peter Pilgrim     “JavaFX 2.0 Hands On” by Jasper Potts and Richard Bair     “Animation Bringing your User Interfaces to Life” by Michael Heinrichs and John Yoong (JavaFX development team)     “Complete Guide to Writing Custom Bindings in JavaFX 2.0” by Michael Heinrichs (JavaFX development team)     “Java Rich Clients with JavaFX 2.0” by Jasper Potts and Richard Bair     “JavaFX Properties & Bindings for Experts” (and those who want to become experts) by Michael Heinrichs (JavaFX development team)     “JavaFX Under the Hood” by Richard Bair     “JavaFX Open Mic” with Jasper Potts and Richard Bair With the release of JavaFX 2.0 and Oracle’s move towards an open development model with an open bug database already created, it’s a great time for developers to take the JavaFX plunge. One Devoxx attendee, Mark Stephens, a developer at IDRsolutions blogged about a problem he was having setting up JavaFX on NetBeans to work on his Mac. He wrote: “I’ve tried desperate measures (I even read and reread the instructions) but it did not help. Luckily, I am at Devoxx at the moment and there seem to be a lot of JavaFX gurus here (and it is running on all their Macs). So I asked them… It turns out that sometimes the software does not automatically pickup the settings like it should do if you give it the JavaFX SDK path. The solution is actually really simple (isn’t it always once you know). Enter these values manually and it will work.” He simply entered certain values and his problem was solved. He thanked Java Champion Stephen Chin, “for a great talk at Devoxx and putting me out of my misery.” JavaFX in Java Magazine Over in the November/December 2011 issue of Java Magazine, Oracle’s Simon Ritter, well known for his creative Java inventions at JavaOne, has an article up titled “JavaFX and Swing Integration” in which he shows developers how to use the power of JavaFX to migrate Swing interfaces to JavaFX. The consensus among JavaFX experts is that JavaFX is the next step in the evolution of Java as a rich client platform. In the same issue Java Champion and JavaFX maven James Weaver has an article, “Using Transitions for Animation in JavaFX 2.0”. In addition, Oracle’s Vice President of Java Client Development, Nandini Ramani, provides the keys to unlock the mysteries of JavaFX 2.0 in her Java Magazine interview. Look for the JavaFX community to grow and flourish in coming years.

    Read the article

  • VS 2010 Debugger Improvements (BreakPoints, DataTips, Import/Export)

    - by ScottGu
This is the twenty-first in a series of blog posts I’m doing on the VS 2010 and .NET 4 release.  Today’s blog post covers a few of the nice usability improvements coming with the VS 2010 debugger.  The VS 2010 debugger has a ton of great new capabilities.  Features like Intellitrace (aka historical debugging), the new parallel/multithreaded debugging capabilities, and dump debugging support typically get a ton of (well-deserved) buzz and attention when people talk about the debugging improvements with this release.  I’ll be doing blog posts in the future that demonstrate how to take advantage of them as well.  With today’s post, though, I thought I’d start off by covering a few small, but nice, debugger usability improvements that were also included with the VS 2010 release, and which I think you’ll find useful. Breakpoint Labels VS 2010 includes new support for better managing debugger breakpoints.  One particularly useful feature is called “Breakpoint Labels” – it enables much better grouping and filtering of breakpoints within a project or across a solution.  With previous releases of Visual Studio you had to manage each debugger breakpoint as a separate item. Managing each breakpoint separately can be a pain with large projects and for cases when you want to maintain “logical groups” of breakpoints that you turn on/off depending on what you are debugging.  Using the new VS 2010 “breakpoint labeling” feature you can now name these “groups” of breakpoints and manage them as a unit. Grouping Multiple Breakpoints Together using a Label Below is a screen-shot of the breakpoints window within Visual Studio 2010.  This lists all of the breakpoints defined within my solution (which in this case is the ASP.NET MVC 2 code base): The first and last breakpoints in the list above break into the debugger when a Controller instance is created or released by the ASP.NET MVC Framework. Using VS 2010, I can now select these two breakpoints, right-click, and then select the new “Edit labels…” menu command to give them a common label/name (making them easier to find and manage): Below is the dialog that appears when I select the “Edit labels” command.  We can use it to create a new string label for our breakpoints or select an existing one we have already defined.  In this case we’ll create a new label called “Lifetime Management” to describe what these two breakpoints cover: When we press the OK button our two selected breakpoints will be grouped under the newly created “Lifetime Management” label: Filtering/Sorting Breakpoints by Label We can use the “Search” combobox to quickly filter/sort breakpoints by label.  Below we are only showing those breakpoints with the “Lifetime Management” label: Toggling Breakpoints On/Off by Label We can also toggle sets of breakpoints on/off by label group.  We can simply filter by the label group, do a Ctrl-A to select all the breakpoints, and then enable/disable all of them with a single click: Importing/Exporting Breakpoints VS 2010 now supports importing/exporting breakpoints to XML files – which you can then pass off to another developer, attach to a bug report, or simply re-load later.  To export only a subset of breakpoints, you can filter by a particular label and then click the “Export breakpoint” button in the Breakpoints window: Above I’ve filtered my breakpoint list to only export two particular breakpoints (specific to a bug that I’m chasing down).  
I can export these breakpoints to an XML file and then attach it to a bug report or email – which will enable another developer to easily setup the debugger in the correct state to investigate it on a separate machine.  Pinned DataTips Visual Studio 2010 also includes some nice new “DataTip pinning” features that enable you to better see and track variable and expression values when in the debugger.  Simply hover over a variable or expression within the debugger to expose its DataTip (which is a tooltip that displays its value)  – and then click the new “pin” button on it to make the DataTip always visible: You can “pin” any number of DataTips you want onto the screen.  In addition to pinning top-level variables, you can also drill into the sub-properties on variables and pin them as well.  Below I’ve “pinned” three variables: “category”, “Request.RawUrl” and “Request.LogonUserIdentity.Name”.  Note that these last two variable are sub-properties of the “Request” object.   Associating Comments with Pinned DataTips Hovering over a pinned DataTip exposes some additional UI within the debugger: Clicking the comment button at the bottom of this UI expands the DataTip - and allows you to optionally add a comment with it: This makes it really easy to attach and track debugging notes: Pinned DataTips are usable across both Debug Sessions and Visual Studio Sessions Pinned DataTips can be used across multiple debugger sessions.  This means that if you stop the debugger, make a code change, and then recompile and start a new debug session - any pinned DataTips will still be there, along with any comments you associate with them.  Pinned DataTips can also be used across multiple Visual Studio sessions.  This means that if you close your project, shutdown Visual Studio, and then later open the project up again – any pinned DataTips will still be there, along with any comments you associate with them. See the Value from Last Debug Session (Great Code Editor Feature) How many times have you ever stopped the debugger only to go back to your code and say: $#@! – what was the value of that variable again??? One of the nice things about pinned DataTips is that they keep track of their “last value from debug session” – and you can look these values up within the VB/C# code editor even when the debugger is no longer running.  DataTips are by default hidden when you are in the code editor and the debugger isn’t running.  On the left-hand margin of the code editor, though, you’ll find a push-pin for each pinned DataTip that you’ve previously setup: Hovering your mouse over a pinned DataTip will cause it to display on the screen.  Below you can see what happens when I hover over the first pin in the editor - it displays our debug session’s last values for the “Request” object DataTip along with the comment we associated with them: This makes it much easier to keep track of state and conditions as you toggle between code editing mode and debugging mode on your projects. Importing/Exporting Pinned DataTips As I mentioned earlier in this post, pinned DataTips are by default saved across Visual Studio sessions (you don’t need to do anything to enable this). VS 2010 also now supports importing/exporting pinned DataTips to XML files – which you can then pass off to other developers, attach to a bug report, or simply re-load later. Combined with the new support for importing/exporting breakpoints, this makes it much easier for multiple developers to share debugger configurations and collaborate across debug sessions. 
Summary Visual Studio 2010 includes a bunch of great new debugger features – both big and small.  Today’s post shared some of the nice debugger usability improvements. All of the features above are supported with the Visual Studio 2010 Professional edition (the Pinned DataTip features are also supported in the free Visual Studio 2010 Express Editions).  I’ll be covering some of the “big big” new debugging features like Intellitrace, parallel/multithreaded debugging, and dump file analysis in future blog posts.  Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • MVC's Html.DropDownList and "There is no ViewData item of type 'IEnumerable<SelectListItem>' that has the key '...'

    - by pjohnson
    ASP.NET MVC's HtmlHelper extension methods take out a lot of the HTML-by-hand drudgery to which MVC re-introduced us former WebForms programmers. Another thing to which MVC re-introduced us is poor documentation, after the excellent documentation for most of the rest of ASP.NET and the .NET Framework which I now realize I'd taken for granted. I'd come to regard using HtmlHelper methods instead of writing HTML by hand as a best practice. When I upgraded a project from MVC 3 to MVC 4, several hidden fields with boolean values broke, because MVC 3 called ToString() on those values implicitly, and MVC 4 threw an exception until you called ToString() explicitly. Fields that used HtmlHelper weren't affected. I then went through dozens of views and manually replaced hidden inputs that had been coded by hand with Html.Hidden calls. So for a dropdown list I was rendering on the initial page as empty, then populating via JavaScript after an AJAX call, I tried to use a HtmlHelper method: @Html.DropDownList("myDropdown") which threw an exception: System.InvalidOperationException: There is no ViewData item of type 'IEnumerable<SelectListItem>' that has the key 'myDropdown'. That's funny--I made no indication I wanted to use ViewData. Why was it looking there? Just render an empty select list for me. When I populated the list with items, it worked, but I didn't want to do that: @Html.DropDownList("myDropdown", new List<SelectListItem>() { new SelectListItem() { Text = "", Value = "" } }) I removed this dummy item in JavaScript after the AJAX call, so this worked fine, but I shouldn't have to give it a list with a dummy item when what I really want is an empty select. A bit of research with JetBrains dotPeek (helpfully recommended by Scott Hanselman) revealed the problem. Html.DropDownList requires some sort of data to render or it throws an error. The documentation hints at this but doesn't make it very clear. Behind the scenes, it checks if you've provided the DropDownList method any data. If you haven't, it looks in ViewData. If it's not there, you get the exception above. In my case, the helper wasn't doing much for me anyway, so I reverted to writing the HTML by hand (I ain't scared), and amended my best practice: When an HTML control has an associated HtmlHelper method and you're populating that control with data on the initial view, use the HtmlHelper method instead of writing by hand.
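If you would still rather keep helper-style syntax for the empty, JavaScript-populated case, one option is a tiny custom extension that renders an empty select element. This is just an illustrative sketch, not part of MVC itself:

using System.Web.Mvc;

public static class DropDownExtensions
{
    // Hypothetical helper: renders an empty <select> with the given name
    // and id, leaving population entirely to client-side script.
    public static MvcHtmlString EmptyDropDownList(this HtmlHelper html, string name)
    {
        var select = new TagBuilder("select");
        select.MergeAttribute("name", name);
        select.GenerateId(name);
        return MvcHtmlString.Create(select.ToString(TagRenderMode.Normal));
    }
}

Used as @Html.EmptyDropDownList("myDropdown"), it never touches ViewData, so it sidesteps the exception entirely.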

    Read the article

  • What I like about WIF&rsquo;s Claims-based Authorization

    - by Your DisplayName here!
In “traditional” .NET with its IPrincipal interface and IsInRole method, developers were encouraged to write code like this: public void AddCustomer(Customer customer) {     if (Thread.CurrentPrincipal.IsInRole("Sales"))     {         // add customer     } } In code reviews I’ve seen tons of code like this. What I don’t like about this is that two concerns in your application get tightly coupled: business and security logic. But what happens when the security requirements change – and they will (e.g. members of the sales role and some other people from different roles need to create customers)? Well – since your security logic is sprinkled across your project, you need to change the security checks in all relevant places (and make sure you don’t forget one) and you need to re-test, re-stage and re-deploy the complete app. This is clearly not what we want. WIF’s claims-based authorization encourages developers to separate business code and authorization policy evaluation. This is a good thing. So the same security check with WIF’s out-of-the-box APIs would look like this: public void AddCustomer(Customer customer) {     try     {         ClaimsPrincipalPermission.CheckAccess("Customer", "Add");           // add customer     }     catch (SecurityException ex)     {         // access denied     } } You notice the fundamental difference? The security check only describes what the code is doing (represented by a resource/action pair) – and does not state who is allowed to invoke the code. As I mentioned earlier – the who is most probably changing over time – the what most probably not. The call to ClaimsPrincipalPermission hands off to another class called the ClaimsAuthorizationManager. This class handles the evaluation of your security policy and is ideally in a separate assembly to allow updating the security logic independently from the application logic (and vice versa). The claims authorization manager features a method called CheckAccess that receives three values (wrapped inside an AuthorizationContext instance) – action (“add”), resource (“customer”) and the principal (including its claims) in question. CheckAccess then evaluates those three values and returns true/false. I really like the separation of concerns part here. Unfortunately there is not much support from Microsoft beyond that point. And without further tooling and abstractions the CheckAccess method quickly becomes *very* complex. But still I think that is the way to go. In the next post I will tell you what I don’t like about it (and how to fix it).
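To make that concrete, here is a rough sketch of a custom claims authorization manager using the WIF 1.0 (Microsoft.IdentityModel) types; the policy shown – members of the Sales role may add customers – is made up for illustration and would normally live in its own policy store rather than being hard-coded like this.

using System.Linq;
using Microsoft.IdentityModel.Claims;

public class AppClaimsAuthorizationManager : ClaimsAuthorizationManager
{
    public override bool CheckAccess(AuthorizationContext context)
    {
        // The action/resource pair describes *what* the calling code does.
        var action = context.Action.First().Value;     // e.g. "Add"
        var resource = context.Resource.First().Value; // e.g. "Customer"

        if (resource == "Customer" && action == "Add")
        {
            // The *who* lives here, so it can change without touching
            // (or redeploying) the business code.
            return context.Principal.Identities
                .SelectMany(identity => identity.Claims)
                .Any(claim => claim.ClaimType == ClaimTypes.Role &&
                              claim.Value == "Sales");
        }

        return false;
    }
}

The manager is registered via the claimsAuthorizationManager element in the microsoft.identityModel configuration section, which is what lets the policy sit in its own assembly and be updated independently of the application.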

    Read the article

  • Parallel LINQ - PLINQ

    - by nmarun
    Turns out now with .net 4.0 we can run a query like a multi-threaded application. Say you want to query a collection of objects and return only those that meet certain conditions. Until now, we basically had one ‘control’ that iterated over all the objects in the collection, checked the condition on each object and returned if it passed. We obviously agree that if we can ‘break’ this task into smaller ones, assign each task to a different ‘control’ and ask all the controls to do their job - in-parallel, the time taken the finish the entire task will be much lower. Welcome to PLINQ. Let’s take some examples. I have the following method that uses our good ol’ LINQ. 1: private static void Linq(int lowerLimit, int upperLimit) 2: { 3: // populate an array with int values from lowerLimit to the upperLimit 4: var source = Enumerable.Range(lowerLimit, upperLimit); 5:  6: // Start a timer 7: Stopwatch stopwatch = new Stopwatch(); 8: stopwatch.Start(); 9:  10: // set the expectation => build the expression tree 11: var evenNumbers =   from num in source 12: where IsDivisibleBy(num, 2) 13: select num; 14: 15: // iterate over and print the returned items 16: foreach (var number in evenNumbers) 17: { 18: Console.WriteLine(string.Format("** {0}", number)); 19: } 20:  21: stopwatch.Stop(); 22:  23: // check the metrics 24: Console.WriteLine(String.Format("Elapsed {0}ms", stopwatch.ElapsedMilliseconds)); 25: } I’ve added comments for the major steps, but the only thing I want to talk about here is the IsDivisibleBy() method. I know I could have just included the logic directly in the where clause. I called a method to add ‘delay’ to the execution of the query - to simulate a loooooooooong operation (will be easier to compare the results). 1: private static bool IsDivisibleBy(int number, int divisor) 2: { 3: // iterate over some database query 4: // to add time to the execution of this method; 5: // the TableB has around 10 records 6: for (int i = 0; i < 10; i++) 7: { 8: DataClasses1DataContext dataContext = new DataClasses1DataContext(); 9: var query = from b in dataContext.TableBs select b; 10: 11: foreach (var row in query) 12: { 13: // Do NOTHING (wish my job was like this) 14: } 15: } 16:  17: return number % divisor == 0; 18: } Now, let’s look at how to modify this to PLINQ. 1: private static void Plinq(int lowerLimit, int upperLimit) 2: { 3: // populate an array with int values from lowerLimit to the upperLimit 4: var source = Enumerable.Range(lowerLimit, upperLimit); 5:  6: // Start a timer 7: Stopwatch stopwatch = new Stopwatch(); 8: stopwatch.Start(); 9:  10: // set the expectation => build the expression tree 11: var evenNumbers = from num in source.AsParallel() 12: where IsDivisibleBy(num, 2) 13: select num; 14:  15: // iterate over and print the returned items 16: foreach (var number in evenNumbers) 17: { 18: Console.WriteLine(string.Format("** {0}", number)); 19: } 20:  21: stopwatch.Stop(); 22:  23: // check the metrics 24: Console.WriteLine(String.Format("Elapsed {0}ms", stopwatch.ElapsedMilliseconds)); 25: } That’s it, this is now in PLINQ format. Oh and if you haven’t found the difference, look line 11 a little more closely. You’ll see an extension method ‘AsParallel()’ added to the ‘source’ variable. Couldn’t be more simpler right? So this is going to improve the performance for us. Let’s test it. So in my Main method of the Console application that I’m working on, I make a call to both. 
1: static void Main(string[] args) 2: { 3: // set lower and upper limits 4: int lowerLimit = 1; 5: int upperLimit = 20; 6: // call the methods 7: Console.WriteLine("Calling Linq() method"); 8: Linq(lowerLimit, upperLimit); 9: 10: Console.WriteLine(); 11: Console.WriteLine("Calling Plinq() method"); 12: Plinq(lowerLimit, upperLimit); 13:  14: Console.ReadLine(); // just so I get enough time to read the output 15: } YMMV, but here are the results that I got:    It’s quite obvious from the above results that the Plinq() method is taking considerably less time than the Linq() version. I’m sure you’ve already noticed that the output of the Plinq() method is not in order. That’s because, each of the ‘control’s we sent to fetch the results, reported with values as and when they obtained them. This is something about parallel LINQ that one needs to remember – the collection cannot be guaranteed to be undisturbed. This could be counted as a negative about PLINQ (emphasize ‘could’). Nevertheless, if we want the collection to be sorted, we can use a SortedSet (.net 4.0) or build our own custom ‘sorter’. Either way we go, there’s a good chance we’ll end up with a better performance using PLINQ. And there’s another negative of PLINQ (depending on how you see it). This is regarding the CPU cycles. See the usage for Linq() method (used ResourceMonitor): I have dual CPU’s and see the height of the peak in the bottom two blocks and now compare to what happens when I run the Plinq() method. The difference is obvious. Higher usage, but for a shorter duration (width of the peak). Both these points make sense in both cases. Linq() runs for a longer time, but uses less resources whereas Plinq() runs for a shorter time and consumes more resources. Even after knowing all these, I’m still inclined towards PLINQ. PLINQ rocks! (no hard feelings LINQ)
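One more option worth noting alongside the SortedSet/custom-sorter ideas above: PLINQ can also be asked to preserve the source order itself via AsOrdered(), at some extra cost. A minimal variation of the Plinq() query:

// AsOrdered() tells PLINQ to preserve the ordering of 'source',
// trading a little of the parallel speed-up for ordered results.
var evenNumbers = from num in source.AsParallel().AsOrdered()
                  where IsDivisibleBy(num, 2)
                  select num;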

    Read the article

  • Heightmap generation

    - by Ziaix
I want to implement something like this to create a heightmap: 'Place a group of coordinates evenly across a map, and give them height values within a certain range. Repeatedly create coordinates between all of those coordinates, setting their height by deriving a value that was a mean value of all the surrounding coordinates.' However, I'm not sure how I would go about it - specifically, how I could code the part where I place the coordinates in between the existing coordinates. Can anyone give any help/advice?
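One possible way to code the 'in between' step is sketched below in C# (a rough, engine-agnostic illustration; the grid representation and jitter parameter are my own assumptions): seed a coarse grid, then repeatedly double its resolution, giving every new point the mean height of its seeded neighbours plus an optional shrinking random offset so the terrain isn't perfectly smooth.

using System;

static class HeightmapRefiner
{
    // Takes a coarse grid of seeded heights and returns a grid twice as
    // dense, filling each new point with the mean of its neighbouring
    // seed points plus a small random jitter.
    public static float[,] Refine(float[,] coarse, float jitter, Random rng)
    {
        int w = coarse.GetLength(0), h = coarse.GetLength(1);
        int nw = w * 2 - 1, nh = h * 2 - 1;
        var fine = new float[nw, nh];

        // Copy the existing coordinates onto the denser grid.
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                fine[x * 2, y * 2] = coarse[x, y];

        // Fill every in-between point with the mean of surrounding seeds.
        for (int x = 0; x < nw; x++)
        {
            for (int y = 0; y < nh; y++)
            {
                if (x % 2 == 0 && y % 2 == 0) continue; // already a seed

                float sum = 0;
                int count = 0;
                for (int dx = -1; dx <= 1; dx++)
                {
                    for (int dy = -1; dy <= 1; dy++)
                    {
                        int sx = x + dx, sy = y + dy;
                        if (sx < 0 || sy < 0 || sx >= nw || sy >= nh) continue;
                        if (sx % 2 != 0 || sy % 2 != 0) continue; // seeds only
                        sum += fine[sx, sy];
                        count++;
                    }
                }
                fine[x, y] = sum / count +
                             (float)(rng.NextDouble() * 2 - 1) * jitter;
            }
        }
        return fine;
    }
}

Calling Refine repeatedly with a shrinking jitter gives progressively finer detail, which is essentially the idea behind midpoint-displacement style terrain generation.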

    Read the article

  • Incremental Statistics Maintenance – what statistics will be gathered after DML occurs on the table?

    - by Maria Colgan
Incremental statistics maintenance was introduced in Oracle Database 11g to improve the performance of gathering statistics on large partitioned tables. When incremental statistics maintenance is enabled for a partitioned table, Oracle accurately generates global-level statistics by aggregating partition-level statistics. As more people begin to adopt this functionality we have gotten more questions around how they expect incremental statistics to behave in a given scenario. For example, last week we got a question around what partitions should have statistics gathered on them after DML has occurred on the table. The person who asked the question assumed that statistics would only be gathered on partitions that had stale statistics (10% of the rows in the partition had changed). However, what they actually saw when they did a DBMS_STATS.GATHER_TABLE_STATS was that all of the partitions that had been affected by the DML had statistics re-gathered on them. This is the expected behavior: incremental statistics maintenance is supposed to yield the same statistics as gathering table statistics from scratch, just faster. This means incremental statistics maintenance needs to gather statistics on any partition that will change the global or table level statistics. For instance, the min or max value for a column could change after just one row is inserted or updated in the table. It might be easier to demonstrate this using an example. Let’s take the ORDERS2 table, which is partitioned by month on order_date. We will begin by enabling incremental statistics for the table and gathering statistics on the table. After the statistics gather, the last_analyzed date for the table and all of the partitions now shows 13-Mar-12. And we now have the following column statistics for the ORDERS2 table. We can also confirm that we really did use incremental statistics by querying the dictionary table sys.HIST_HEAD$, which should have an entry for each column in the ORDERS2 table. So, now that we have established a good baseline, let’s move on to the DML. Information is loaded into the latest partition of the ORDERS2 table once a month. Existing orders may also be updated to reflect changes in their status. Let’s assume the following transactions take place on the ORDERS2 table this month. After these transactions have occurred we need to re-gather statistics since the partition ORDERS_MAR_2012 now has rows in it and the number of distinct values and the maximum value for the STATUS column have also changed. Now if we look at the last_analyzed date for the table and the partitions, we will see that the global statistics and the statistics on the partitions where rows have changed due to the update (ORDERS_FEB_2012) and the data load (ORDERS_MAR_2012) have been updated. The column statistics also reflect the changes, with the number of distinct values in the STATUS column increasing to reflect the update. So, incremental statistics maintenance will gather statistics on any partition whose data has changed, when that change will impact the global-level statistics.

    Read the article

  • BIEE Answer Parameter Passing

    - by Tim Dexter
    A little off BIP topic today but I spent some time researching how to pass parameters between Answer reports and knocked up a document for a client this morning and thought, what the heck someone might find it useful. If you have a source Answer request and you want to link to another Answer in another subject area and pass values to the target request, read this.

    Read the article

< Previous Page | 297 298 299 300 301 302 303 304 305 306 307 308  | Next Page >