Search Results

Search found 7311 results on 293 pages for 'rows'.


  • Formatting Columns in Excel created by af:exportCollectionActionListener

    - by Duncan Mills
    The af:exportCollectionActionListener behavior in ADF Faces Rich Client provides a very simple way of quickly dumping out the contents of, or the selected rows in, a table or treeTable to Excel. However, that simplicity comes at a price, as it is pretty much left up to Excel how to format the data. A common problem case is that of ID columns, which are often long numerics. You probably want to represent this data as a string; Excel, however, will probably have other ideas and render it as an exponent - not what you intended. In earlier releases of the framework you could sort of work around this by taking advantage of a bug which would allow you to surround the outputText in question with invisible outputText components that provided formatting hints to Excel. Something like this:

        <af:column headerText="Some wide label">
          <af:panelGroupLayout layout="horizontal">
            <af:outputText value="=TEXT(" visible="false"/>
            <af:outputText value="#{row.bigNumberValue}" rendered="true"/>
            <af:outputText value=",0)" visible="false"/>
          </af:panelGroupLayout>
        </af:column>

    However, this bug was fixed, so it can no longer be used as a trick; the export now ignores invisible columns. So, if you really need control over the formatting, there are several alternatives. First is the more powerful ADF Desktop Integration (ADFdi) package, which allows you to build fully transactional spreadsheets that "pull" the data and can update it. This gives you all the control that you might need over formatting, but it does need specific Excel add-ins on the client to work. For more information about ADFdi have a look at this tutorial on OTN. Or you can of course look at BI Publisher or Apache POI if you're happy with output-only spreadsheets.

    Read the article

  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS). The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek. You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful, are they? Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true). Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things. The first Seek performed 63 single-row seeking operations, and the second performed a ‘Range Scan’ (more on those later in this post). I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks? I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog):

    Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts. The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it. The right-hand side of the diagram shows a simple query, its associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet. Is this a seek or a scan? The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on. This fragment is what lies behind the single Index Seek icon shown above: You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation.

    Making Sense of Seeks

    Let’s forget all about scans for a moment, and think purely about seeks. Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level. A seek starts at the root and navigates down through the levels of the index to find the point of interest:

    Singleton Lookups

    The simplest sort of seek predicate performs this traversal to find (at most) a single record. This is the case when we search for a single value using a unique index and an equality predicate. It should be readily apparent that this type of search will either find one record, or none at all. This operation is known as a singleton lookup. Given the example table from before, the following query is an example of a singleton lookup seek: Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index. The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID Lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table). If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup.

    Range Scans

    The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.
    The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page. The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier. One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built. In the present case, the non-clustered primary key index was created as follows:

        CREATE TABLE dbo.Example
        (
            key_col INTEGER NOT NULL,
            data INTEGER NOT NULL,
            CONSTRAINT [PK dbo.Example key_col]
                PRIMARY KEY NONCLUSTERED (key_col ASC)
        )
        ;

    Notice that the primary key index specifies an ascending sort order for the single key column. This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order. If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order.

    A range scan seek predicate may have a Start condition, an End condition, or both. Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction. Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive. The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right:

    Query 1 specifies that all key_col values less than 5 should be returned in ascending order. The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher).

    Query 2 asks for key_col values greater than 95, in descending order. SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95. Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value. This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that. Oh well.

    Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order. This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met.

    Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.
    Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true. One final point to note: seek operations always have the Ordered: True attribute. This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward. You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally. In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example.

    Multiple Seek Predicates

    As we saw yesterday, a single index seek plan operator can contain one or more seek predicates. These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them. For example, you might expect the following query to contain two seek predicates: a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20:

        SELECT key_col
        FROM dbo.Example
        WHERE key_col = 10
        OR key_col BETWEEN 15 AND 20
        ORDER BY key_col ASC
        ;

    In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10]. This allows both range scans to be evaluated by a single seek operator. To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20.

    Final Thoughts

    That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek. On a heap. Not an index! If you would like to run the queries in this post for yourself, there’s a script below. Thanks for reading!
        IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
        BEGIN
            DROP TABLE dbo.Example;
        END
        ;
        -- Test table is a heap
        -- Non-clustered primary key on 'key_col'
        CREATE TABLE dbo.Example
        (
            key_col INTEGER NOT NULL,
            data INTEGER NOT NULL,
            CONSTRAINT [PK dbo.Example key_col]
                PRIMARY KEY NONCLUSTERED (key_col)
        )
        ;
        -- Non-unique non-clustered index on the 'data' column
        CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data)
        ;
        -- Add 100 rows
        INSERT dbo.Example WITH (TABLOCKX)
        (
            key_col,
            data
        )
        SELECT
            key_col = V.number,
            data = V.number
        FROM master.dbo.spt_values AS V
        WHERE V.[type] = N'P'
        AND V.number BETWEEN 1 AND 100
        ;
        -- ================
        -- Singleton lookup
        -- ================
        ;
        -- Single value equality seek in a unique index
        -- Scan count = 0 when STATISTICS IO is ON
        -- Check the XML SHOWPLAN
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col = 32
        ;
        -- ===========
        -- Range Scans
        -- ===========
        ;
        -- Query 1
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col <= 5
        ORDER BY E.key_col ASC
        ;
        -- Query 2
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col > 95
        ORDER BY E.key_col DESC
        ;
        -- Query 3
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col >= 10
        AND E.key_col < 15
        ORDER BY E.key_col ASC
        ;
        -- Query 4
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col >= 20
        AND E.key_col < 25
        ORDER BY E.key_col DESC
        ;
        -- Final query (singleton + range = 2 range scans)
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col = 10
        OR E.key_col BETWEEN 15 AND 20
        ORDER BY E.key_col ASC
        ;
        -- === TIDY UP ===
        DROP TABLE dbo.Example;

    © 2011 Paul White
    email: [email protected]
    twitter: @SQL_Kiwi

    Read the article

  • Practical MySQL schema advice for eCommerce store - Products & Attributes

    - by Gravy
    I am currently planning my first eCommerce application (MySQL & the Laravel framework). I have various products, which all have different attributes. Describing products very simply: some will have a manufacturer, some will not; some will have a diameter, others will have a width, height and depth, and others will have a volume.

    Option 1: Create a master products table, and separate tables for specific product types (polymorphic relations). That way, I will not have any unnecessary null fields in the products table.
    Option 2: Create a products table with all possible fields, despite the fact that there will be a lot of null values.
    Option 3: Normalise so that each attribute type has its own table.
    Option 4: Create an attributes table, as well as an attribute_values table with the value stored as varchar regardless of the actual data type. The products table would have a many:many relationship with the attributes table.
    Option 5: Put attributes common to all or most products in the products table, and attach attributes specific to a particular category of product to the categories table.

    My thoughts are that I would like to be able to allow easy product filtering and sorting by these attributes. I would also want the frontend to be fast; I am less concerned about the performance of inserting and updating product records. I'm a bit overwhelmed by the vast implementation options, and cannot find a suitable answer in terms of the best method of approach. Could somebody point me in the right direction? In an ideal world, I would like to offer the following kind of functionality - http://www.glassesdirect.co.uk/products/ - in my eCommerce store. As can be seen in the sidebar, you can select an attribute of the glasses to filter them, e.g. male / female or plastic / metal / titanium etc... Alternatively, should I just dump the MySQL relational database idea and learn MongoDB?
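
    To make Option 4 concrete, here is a rough sketch of what I have in mind (table and column names are just illustrative, not a finished design):

        -- Option 4 sketch: generic attributes stored as varchar,
        -- linked to products through a join table.
        CREATE TABLE products (
            id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            name  VARCHAR(255) NOT NULL,
            price DECIMAL(10,2) NOT NULL
        );

        CREATE TABLE attributes (
            id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100) NOT NULL UNIQUE   -- e.g. 'diameter', 'material'
        );

        CREATE TABLE attribute_values (
            product_id   INT UNSIGNED NOT NULL,
            attribute_id INT UNSIGNED NOT NULL,
            value        VARCHAR(255) NOT NULL, -- everything stored as text
            PRIMARY KEY (product_id, attribute_id),
            FOREIGN KEY (product_id)   REFERENCES products (id),
            FOREIGN KEY (attribute_id) REFERENCES attributes (id)
        );

        -- Filtering example: all products made of titanium
        SELECT p.*
        FROM products p
        JOIN attribute_values av ON av.product_id = p.id
        JOIN attributes a        ON a.id = av.attribute_id
        WHERE a.name = 'material' AND av.value = 'titanium';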

    Read the article

  • Optimize SUMMARIZE with ADDCOLUMNS in DAX #ssas #tabular #dax #powerpivot

    - by Marco Russo (SQLBI)
    If you started using DAX as a query language, you might have encountered some performance issues with SUMMARIZE. The problem is related to the calculations you put in SUMMARIZE as so-called extension columns, which compute their value within a filter context defined by the rows of the group that SUMMARIZE uses to produce each row in the output. Most of the time, for simple table expressions used in the first parameter of SUMMARIZE, you can optimize performance by removing the extended columns from SUMMARIZE and adding them with an ADDCOLUMNS function. In practice, instead of writing:

        SUMMARIZE( <table>, <group_by_column>, <column_name>, <expression> )

    you can write:

        ADDCOLUMNS(
            SUMMARIZE( <table>, <group_by_column> ),
            <column_name>, CALCULATE( <expression> )
        )

    The performance difference might be huge (orders of magnitude), but this optimization might produce different semantics, and in those cases it should not be used. A longer discussion of this topic is included in my Best Practices Using SUMMARIZE and ADDCOLUMNS article on SQLBI, which also includes several details about the DAX syntax with extended columns. For example, did you know that you can create an extended column in SUMMARIZE and ADDCOLUMNS with the same name as an existing measure? It is *not* a good thing to do, and by reading the article you will discover why. Enjoy DAX!

    Read the article

  • A Look at the GridView's New Sorting Styles in ASP.NET 4.0

    Like every Web control in the ASP.NET toolbox, the GridView includes a variety of style-related properties, including CssClass, Font, ForeColor, BackColor, Width, Height, and so on. The GridView also includes style properties that apply to certain classes of rows in the grid, such as RowStyle, AlternatingRowStyle, HeaderStyle, and PagerStyle. Each of these meta-style properties offers the standard style properties (CssClass, Font, etc.) as subproperties. In ASP.NET 4.0, Microsoft added four new style properties to the GridView control: SortedAscendingHeaderStyle, SortedAscendingCellStyle, SortedDescendingHeaderStyle, and SortedDescendingCellStyle. These four properties are meta-style properties like RowStyle and HeaderStyle, but apply to a column of cells rather than a row. These properties only apply when the GridView is sorted - if the grid's data is sorted in ascending order then the SortedAscendingHeaderStyle and SortedAscendingCellStyle properties define the styles for the column the data is sorted by. The SortedDescendingHeaderStyle and SortedDescendingCellStyle properties apply to the sorted column when the results are sorted in descending order. These four new properties make it easier to customize the appearance of the column by which the data is sorted. Using these properties along with a touch of Cascading Style Sheets (CSS), it is possible to add up and down arrows to the sorted column's header to indicate whether the data is sorted in ascending or descending order. Likewise, these properties can be used to shade the sorted column or make its text bold. This article shows how to use these four new properties to style the sorted column. Read on to learn more!

    Read the article

  • SQL SERVER – 2008 – Unused Index Script – Download

    - by pinaldave
    Download Missing Index Script with Unused Index Script

    Performance tuning is quite interesting, and indexes play a vital role in it. A proper index can improve performance and a bad index can hamper it. Here is the script from my script bank which I use to identify unused indexes on any database. Please note: you should not drop all of the unused indexes this script suggests; it is just for guidance. You should not create more than 5-10 indexes per table. Additionally, this script sometimes does not give accurate information, so use your common sense. Either way, the script is a good starting point. You should pay attention to User Scans, User Lookups and User Updates when you are going to drop an index. The general understanding is that if these values are all high and User Seeks is low, the index needs tuning. The index drop script is also provided in the last column.

    Download Missing Index Script with Unused Index Script

        -- Unused Index Script
        -- Original Author: Pinal Dave (C) 2011
        SELECT TOP 25
            o.name AS ObjectName,
            i.name AS IndexName,
            i.index_id AS IndexID,
            dm_ius.user_seeks AS UserSeek,
            dm_ius.user_scans AS UserScans,
            dm_ius.user_lookups AS UserLookups,
            dm_ius.user_updates AS UserUpdates,
            p.TableRows,
            'DROP INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(s.name) + '.'
                + QUOTENAME(OBJECT_NAME(dm_ius.OBJECT_ID)) AS 'drop statement'
        FROM sys.dm_db_index_usage_stats dm_ius
        INNER JOIN sys.indexes i
            ON i.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = i.OBJECT_ID
        INNER JOIN sys.objects o
            ON dm_ius.OBJECT_ID = o.OBJECT_ID
        INNER JOIN sys.schemas s
            ON o.schema_id = s.schema_id
        INNER JOIN (SELECT SUM(p.rows) TableRows, p.index_id, p.OBJECT_ID
                    FROM sys.partitions p
                    GROUP BY p.index_id, p.OBJECT_ID) p
            ON p.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = p.OBJECT_ID
        WHERE OBJECTPROPERTY(dm_ius.OBJECT_ID, 'IsUserTable') = 1
            AND dm_ius.database_id = DB_ID()
            AND i.type_desc = 'nonclustered'
            AND i.is_primary_key = 0
            AND i.is_unique_constraint = 0
        ORDER BY (dm_ius.user_seeks + dm_ius.user_scans + dm_ius.user_lookups) ASC
        GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Download, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
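
    As an additional check before dropping anything, you can also look at when each index was last touched (a minimal companion query, not part of the downloaded script; note that sys.dm_db_index_usage_stats is cleared when the instance restarts, so a recently restarted server can make indexes look unused):

        -- When was each index last used? Usage stats reset at instance
        -- restart, so confirm the server has been up long enough for
        -- these numbers to be meaningful.
        SELECT OBJECT_NAME(ius.OBJECT_ID) AS TableName,
            i.name AS IndexName,
            ius.last_user_seek,
            ius.last_user_scan,
            ius.last_user_lookup
        FROM sys.dm_db_index_usage_stats AS ius
        INNER JOIN sys.indexes AS i
            ON i.OBJECT_ID = ius.OBJECT_ID AND i.index_id = ius.index_id
        WHERE ius.database_id = DB_ID()
        ORDER BY ius.last_user_seek DESC;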

    Read the article

  • LogParser and PowerShell

    - by Michel Klomp
    LogParser in PowerShell

    One of the few examples of how to use LogParser in PowerShell comes from the Microsoft.com Operations blog. This script is a good base for creating more advanced LogParser scripts:

        $myQuery = new-object -com MSUtil.LogQuery
        $szQuery = "Select top 10 * from r:\ex07011210.log";
        $recordSet = $myQuery.Execute($szQuery)
        for(; !$recordSet.atEnd(); $recordSet.moveNext())
        {
            $record = $recordSet.getRecord();
            write-host ($record.GetValue(0) + "," + $record.GetValue(1));
        }
        $recordSet.Close();

    LogParser input formats

    The previous example uses the default LogParser object; you can extend this with the LogParser input formats. With these formats you can get information from the event log, different types of log files, Active Directory, the registry and XML files. Here are the different ProgIds you can use:

        Input Format   ProgId
        ADS            MSUtil.LogQuery.ADSInputFormat
        BIN            MSUtil.LogQuery.IISBINInputFormat
        CSV            MSUtil.LogQuery.CSVInputFormat
        ETW            MSUtil.LogQuery.ETWInputFormat
        EVT            MSUtil.LogQuery.EventLogInputFormat
        FS             MSUtil.LogQuery.FileSystemInputFormat
        HTTPERR        MSUtil.LogQuery.HttpErrorInputFormat
        IIS            MSUtil.LogQuery.IISIISInputFormat
        IISODBC        MSUtil.LogQuery.IISODBCInputFormat
        IISW3C         MSUtil.LogQuery.IISW3CInputFormat
        NCSA           MSUtil.LogQuery.IISNCSAInputFormat
        NETMON         MSUtil.LogQuery.NetMonInputFormat
        REG            MSUtil.LogQuery.RegistryInputFormat
        TEXTLINE       MSUtil.LogQuery.TextLineInputFormat
        TEXTWORD       MSUtil.LogQuery.TextWordInputFormat
        TSV            MSUtil.LogQuery.TSVInputFormat
        URLSCAN        MSUtil.LogQuery.URLScanLogInputFormat
        W3C            MSUtil.LogQuery.W3CInputFormat
        XML            MSUtil.LogQuery.XMLInputFormat

    Using LogParser to parse IIS logs

    If you use the IISW3CInputFormat you can use the field names instead of the row number to get the information from an IIS log file; it also skips the comment rows in the log file.

        $ObjLogparser = new-object -com MSUtil.LogQuery
        $objInputFormat = new-object -com MSUtil.LogQuery.IISW3CInputFormat
        $Query = "Select top 10 * from c:\temp\hb\ex071002.log";
        $recordSet = $ObjLogparser.Execute($Query, $objInputFormat)
        for(; !$recordSet.atEnd(); $recordSet.moveNext())
        {
            $record = $recordSet.getRecord();
            write-host ($record.GetValue("s-ip") + "," + $record.GetValue("cs-uri-query"));
        }
        $recordSet.Close();

    Read the article

  • Edit in desktop application with DataGridView

    - by SAMIR BHOGAYTA
        private void DataGridView_CellContentClick(object sender, DataGridViewCellEventArgs e)
        {
            if (e.ColumnIndex == 0)
            {
                // Read the record id from the clicked row and open a child form for it.
                string s = DataGridView.Rows[e.RowIndex].Cells[1].FormattedValue.ToString();
                srno = Convert.ToInt16(s); // srno is a class-level field
                FormName objFrm = new FormName(s);
                objFrm.MdiParent = this.MdiParent;
                objFrm.Show();
            }
        }

        // Into the new form
        public FormName(string id)
        {
            uid = id;                 // uid and i are class-level fields
            i = Convert.ToInt16(id);
            InitializeComponent();
        }

        // Get detail for the given id
        public void GetDetail()
        {
            // Note: concatenating the id into the SQL string works here,
            // but a parameterized query is safer against SQL injection.
            string detail = "SELECT fieldname1, fieldname2 FROM TableName WHERE PrimaryKeyField = " + uid;
            DataSet ds = new DataSet();
            ds = (DataSet)prm.RetriveData(detail); // prm is a helper object exposing RetriveData
        }

        // RetriveData function
        public object RetriveData(string query)
        {
            // If you have a SQL Server connection, use SqlConnection/SqlDataAdapter instead.
            OleDbConnection con = new OleDbConnection(constr);
            OleDbDataAdapter drap = new OleDbDataAdapter(query, con);
            con.Open();
            DataSet ds = new DataSet();
            drap.Fill(ds);
            con.Close();
            return ds;
        }

    Read the article

  • Trace flags - TF 1117

    - by Damian
    I had a session about trace flags this year at the SQL Day 2014 conference, held in Wroclaw at the end of April. The session topic is important to most DBAs, and the reason I did it was that I sometimes forget about various trace flags :). So I decided to prepare a presentation, but I think it is a good idea to write posts about trace flags, too. Let's start then - today I will describe TF 1117. I assume that we all know how to set up a TF using startup parameters, the registry, or at the session or query level. I will always write whether a trace flag is local or global, to make sure we know how to use it.

    Why do we need this trace flag? Let's create a test database first. This is a quite ordinary database, as it has two data files (4 MB each) and a log file of 1 MB. The data files are able to expand by 1 MB and the log file grows by 10%:

        USE [master]
        GO
        CREATE DATABASE [TF1117]
        ON PRIMARY
        ( NAME = N'TF1117',
          FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117.mdf',
          SIZE = 4096KB,
          MAXSIZE = UNLIMITED,
          FILEGROWTH = 1024KB ),
        ( NAME = N'TF1117_1',
          FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117_1.ndf',
          SIZE = 4096KB,
          MAXSIZE = UNLIMITED,
          FILEGROWTH = 1024KB )
        LOG ON
        ( NAME = N'TF1117_log',
          FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117_log.ldf',
          SIZE = 1024KB,
          MAXSIZE = 2048GB,
          FILEGROWTH = 10% )
        GO

    Without TF 1117 turned on, the data files don't all grow at once. When the first file is full the SQL Server expands it, but the other file is not expanded until it is full. Why is that so important? The SQL Server proportional fill algorithm directs new extent allocations to the file with the most available space, so new extents will be written to the file that was just expanded. When TF 1117 is enabled it causes all files to auto-grow by their specified increment. That means all files will have the same percentage of free space, so we still have the benefit of evenly distributed IO. TF 1117 is a global flag, so it affects all databases on the instance. Of course, if a filegroup contains only one file the TF does not have any effect on it.

    Now let's do a simple test. First let's create a table in which every row fills a single page. The table definition is pretty simple, as it has two integer columns and one character column of fixed size 8000 bytes:

        create table TF1117Tab
        (
            col1 int,
            col2 int,
            col3 char (8000)
        )
        go

    Now I load some data into the table to make sure that one of the data files must grow:

        declare @i int
        select @i = 1
        while (@i < 800)
        begin
            insert into TF1117Tab values (@i, @i+1000, 'hello')
            select @i = @i + 1
        end

    I can check the actual file sizes in the sys.database_files catalog view:

        SELECT name, (size*8)/1024 'Size in MB'
        FROM sys.database_files
        GO

    As you can see, only the first data file was expanded and the other still has the initial size:

        name                  Size in MB
        --------------------- -----------
        TF1117                5
        TF1117_log            1
        TF1117_1              4

    There are also other methods of looking at file autogrow events.
    One possibility is to create an Extended Events session, and the other is to look into the default trace file:

        DECLARE @path NVARCHAR(260);

        SELECT @path = REVERSE(SUBSTRING(REVERSE([path]),
               CHARINDEX('\', REVERSE([path])), 260)) + N'log.trc'
        FROM sys.traces
        WHERE is_default = 1;

        SELECT DatabaseName,
               [FileName],
               SPID,
               Duration,
               StartTime,
               EndTime,
               FileType = CASE EventClass
                              WHEN 92 THEN 'Data'
                              WHEN 93 THEN 'Log'
                          END
        FROM sys.fn_trace_gettable(@path, DEFAULT)
        WHERE EventClass IN (92, 93)
          AND StartTime > '2014-07-12'
          AND DatabaseName = N'TF1117'
        ORDER BY StartTime DESC;

    After running the query I can see that the file was expanded, and how long the process took, which might be useful from a performance perspective.

    Now it's time to turn on trace flag 1117:

        DBCC TRACEON(1117)

    I dropped the database and recreated it once again. Then I ran the queries and observed the results. After loading the records I see that both files were evenly expanded:

        name                  Size in MB
        --------------------- -----------
        TF1117                5
        TF1117_log            1
        TF1117_1              5

    I also found the information in the default trace. The query returned three rows. The last one relates to my first experiment, when the TF was turned off. The other two rows show that the first file was expanded by 1 MB and right after that operation the second file was expanded, too. That is what this TF is all about :)
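
    As an aside, the Extended Events alternative mentioned earlier could look something like this minimal sketch (it assumes the database_file_size_change event, available in recent versions of SQL Server; the session and file names are arbitrary):

        -- Capture file growth (and shrink) events for the test database
        CREATE EVENT SESSION [TrackFileGrowth] ON SERVER
        ADD EVENT sqlserver.database_file_size_change
            (WHERE (sqlserver.database_name = N'TF1117'))
        ADD TARGET package0.event_file (SET filename = N'TrackFileGrowth')
        GO
        ALTER EVENT SESSION [TrackFileGrowth] ON SERVER STATE = START
        GO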

    Read the article

  • Improving the performance of a db import process

    - by mmr
    I have a program in Microsoft Access that processes text and also inserts data into a MySQL database. This operation takes 30 minutes or less to finish. I translated it into VB.NET and it takes 2 hours to finish. The program goes like this: a text file contains individual swipes from a corresponding person; it contains their id, the time and date of the swipe in the machine, and an indicator of whether it is a time-in or a time-out. I process this text, segregate the information, and insert the time-in and time-out per row. I also check if there are double occurrences in the database. After checking, I simply merge the time-in and time-out of the corresponding person into one row only. This process takes 2 hours to finish in VB.NET, considering I have a table to compare against which contains 600,000+ rows. Now, I read on the internet that Python is best at text processing, and I already have a test, but I have doubts about its database performance. What do you think is the best programming language for this kind of problem? How can I speed up the process? My first idea was using Python instead of VB.NET, but since people here on SO are telling me this most probably won't help, I am searching for different solutions.
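
    For reference, the kind of set-based approach I am considering instead of row-by-row processing looks roughly like this (table and column names are made up for illustration):

        -- Bulk-load the raw swipe file into a staging table, then merge
        -- time-in/time-out pairs in one set-based statement instead of
        -- doing per-row lookups from the client.
        LOAD DATA LOCAL INFILE 'swipes.txt'
        INTO TABLE swipe_staging
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n'
        (person_id, swipe_time, direction);

        INSERT INTO attendance (person_id, work_date, time_in, time_out)
        SELECT s.person_id,
               DATE(s.swipe_time),
               MIN(CASE WHEN s.direction = 'IN'  THEN s.swipe_time END),
               MAX(CASE WHEN s.direction = 'OUT' THEN s.swipe_time END)
        FROM swipe_staging AS s
        LEFT JOIN attendance AS a
               ON a.person_id = s.person_id
              AND a.work_date = DATE(s.swipe_time)
        WHERE a.person_id IS NULL   -- skip swipes already merged earlier
        GROUP BY s.person_id, DATE(s.swipe_time);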

    Read the article

  • Getting SQL table row counts via sysindexes vs. sys.indexes

    - by Bill Osuch
    Among the many useful SQL snippets I regularly use is this little bit that will return row counts in a table:

        SELECT so.name as TableName,
            MAX(si.rows) as [RowCount]
        FROM sysobjects so
        JOIN sysindexes si ON si.id = OBJECT_ID(so.name)
        WHERE so.xtype = 'U'
        GROUP BY so.name
        ORDER BY [RowCount] DESC

    This is handy to find tables that have grown wildly, zero-row tables that could (possibly) be dropped, or other clues into the data. Right off the bat you may spot some "non-ideal" code - I'm using sysobjects rather than sys.objects. What's the difference? In SQL Server 2005 and later, sysobjects is no longer a table, but a "compatibility view", meant for backward compatibility only. SELECT * from each and you'll see the different data that each returns. Microsoft advises that sysindexes could be removed in a future version of SQL Server, but this has never really been an issue for me since my company is still using SQL 2000. However, there are murmurs that we may actually migrate to 2008 some year, so I might as well go ahead and start using an updated version of this snippet on the servers that can handle it:

        SELECT so.name as TableName,
            ddps.row_count as [RowCount]
        FROM sys.objects so
        JOIN sys.indexes si ON si.OBJECT_ID = so.OBJECT_ID
        JOIN sys.dm_db_partition_stats AS ddps ON si.OBJECT_ID = ddps.OBJECT_ID
            AND si.index_id = ddps.index_id
        WHERE si.index_id < 2
            AND so.is_ms_shipped = 0
        ORDER BY ddps.row_count DESC

    Read the article

  • How do I make complex SQL queries easier to write?

    - by DragonLord
    I'm finding it very difficult to write complex SQL queries involving joins across many (at least 3-4) tables and involving several nested conditions. The queries I'm being asked to write are easily described in a few sentences, but can require a deceptive amount of code to complete. I often find myself using temporary views to write these queries, which seems like a bit of a crutch. What tips can you provide that I can use to make these complex queries easier? More specifically, how do I break these queries down into the steps I need to use to actually write the SQL code? Note that the SQL I'm being asked to write is part of homework assignments for a database course, so I don't want software that will do the work for me. I want to actually understand the code I'm writing. More technical details: the database is hosted on a PostgreSQL server running on the local machine. The database is very small: there are no more than seven tables and the largest table has fewer than about 50 rows. The SQL queries are being passed unchanged to the server, via LibreOffice Base.
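
    For example, the kind of step-by-step decomposition I have in mind, expressed with common table expressions instead of temporary views (the tables here are invented just to illustrate):

        -- Each CTE names one logical step of the English description,
        -- so the final query reads like the sentence it came from.
        WITH honors_students AS (      -- step 1: students with a high GPA
            SELECT id, name
            FROM students
            WHERE gpa >= 3.5
        ),
        db_enrollments AS (            -- step 2: enrollments in database courses
            SELECT e.student_id
            FROM enrollments e
            JOIN courses c ON c.id = e.course_id
            WHERE c.title ILIKE '%database%'
        )
        SELECT hs.name                 -- step 3: combine the two sets
        FROM honors_students hs
        JOIN db_enrollments de ON de.student_id = hs.id;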

    Read the article

  • Output = MAXDOP 1

    - by Dave Ballantyne
    It is widely known that data modifications on table variables do not support parallelism; Peter Larsson has a good example of that here. Whilst tracking down a performance issue, I saw that using the OUTPUT clause also causes parallelism to not be used.

    By way of example, first let's create two tables with a simple parent and child (one to one) relationship, and then populate them with 1,000,000 rows:

        Drop table Parent
        Drop table Child
        go
        create table Parent
        (
            id integer identity Primary Key,
            data1 char(255)
        )
        Create Table Child
        (
            id integer Primary Key
        )
        go
        insert into Parent(data1)
        Select top 1000000 NULL
        from sys.columns a cross join sys.columns b

        insert into Child
        Select id from Parent
        go

    If we then execute

        update Parent
        set data1 = ''
        from Parent
        join Child on Parent.Id = Child.Id
        where Parent.Id % 100 = 1
          and Child.id % 100 = 1

    we should see an execution plan using parallelism such as

    However, if the OUTPUT clause is now used:

        update Parent
        set data1 = ''
        output inserted.id
        from Parent
        join Child on Parent.Id = Child.Id
        where Parent.Id % 100 = 1
          and Child.id % 100 = 1

    the execution plan shows that parallelism was not used. Make of that what you will, but I thought that this was a pretty unexpected outcome.

    Update: Laurence Hoff has mailed me to note that when the OUTPUT results are captured to a temporary table using the INTO clause, parallelism is used. Naturally, if you use a table variable then there is still no parallelism.
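
    Laurence's observation is easy to try for yourself; a minimal sketch (the #ids temporary table is just for illustration):

        -- Capturing OUTPUT into a temporary table (rather than streaming it
        -- back to the client) leaves the optimizer free to use parallelism.
        create table #ids (id integer not null)

        update Parent
        set data1 = ''
        output inserted.id into #ids (id)
        from Parent
        join Child on Parent.Id = Child.Id
        where Parent.Id % 100 = 1
          and Child.id % 100 = 1

        drop table #ids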

    Read the article

  • CodePlex Daily Summary for Monday, November 04, 2013

    CodePlex Daily Summary for Monday, November 04, 2013Popular ReleasesDNN Blog: 06.00.01: 06.00.01 ReleaseThis is the first bugfix release of the new v6 blog module. These are the changes: Added some robustness in v5-v6 scripts to cater for some rare upgrade scenarios Changed the name of the module definition to avoid clash with Evoq Social Addition of sitemap providerStock Track: Version 1.2 Stable: Overhaul and re-think of the user interface in normal mode. Added stock history view in normal mode. Allows user to enter orders in normal mode. Allow advanced user to run database queries within the program. Improved sales statistics feature, able to calculate against a single category.VG-Ripper & PG-Ripper: VG-Ripper 2.9.50: changes NEW: Added Support for "ImageHostHQ.com" links NEW: Added Support for "ImgMoney.net" links NEW: Added Support for "ImgSavy.com" links NEW: Added Support for "PixTreat.com" links Bug fixesVidCoder: 1.5.11 Beta: Added Encode Details window. Exposes elapsed time, ETA, current and average FPS, running file size, current pass and pass progress. Open it by going to Windows -> Encode Details while an encode is running. Subtitle dialog now disables the "Burn In" checkbox when it's either unavailable or it's the only option. It also disables the "Forced Only" when the subtitle type doesn't support the "Forced" flag. Updated HandBrake core to SVN 5872. Fixed crash in the preview window when a source fil...Wsus Package Publisher: Release v1.3.1311.02: Add three new Actions in Custom Updates : Work with Files (Copy, Delete, Rename), Work with Folders (Add, Delete, Rename) and Work with Registry Keys (Add, Delete, Rename). Fix a bug, where after resigning an update, the display is not refresh. Modify the way WPP sort rows in 'Updates Detail Viewer' and 'Computer List Viewer' so that dates are correctly sorted. Add a Tab in the settings form to set Proxy settings when WPP needs to go on Internet. Fix a bug where 'Manage Catalogs Subsc...uComponents: uComponents v6.0.0: This release of uComponents will compile against and support the new API in Umbraco v6.1.0. What's new in uComponents v6.0.0? New DataTypesImage Point XML DropDownList XPath Templatable List New features / Resolved issuesThe following workitems have been implemented and/or resolved: 14781 14805 14808 14818 14854 14827 14868 14859 14790 14853 14790 DataType Grid 14788 14810 14873 14833 14864 14855 / 14860 14816 14823 Drag & Drop support for rows Su...SharePoint 2013 Global Metadata Navigation: SP 2013 Metadata Navigation Sandbox Solution: SharePoint 2013 Global Metadata Navigation sandbox solution version 1.0. Upload to your site collection solution store and activate. See the Documentation tab for detailed instructions.SmartStore.NET - Free ASP.NET MVC Ecommerce Shopping Cart Solution: SmartStore.NET 1.2.1: New FeaturesAdded option Limit to current basket subtotal to HadSpentAmount discount rule Items in product lists can be labelled as NEW for a configurable period of time Product templates can optionally display a discount sign when discounts were applied Added the ability to set multiple favicons depending on stores and/or themes Plugin management: multiple plugins can now be (un)installed in one go Added a field for the HTML body id to store entity (Developer) New property 'Extra...CodeGen Code Generator: CodeGen 4.3.2: Changes in this release include: Removed old tag tokens from several example templates. 
Fixed a bug which was causing the default author and company names not to be picked up from the registry under .NET. Added several additional tag loop expressions: <IF FIRST_TAG>, <IF LAST_TAG>, <IF MULTIPLE_TAGS> and<IF SINGLE_TAG>. Upgraded to Synergy/DE 10.1.1b, Visual Studio 2013 and Installshield Limited Edition 2013.Dynamics CRM 2013 Easy Solution Importer: Dynamics CRM 2013 Easy Solution Importer 1.0.0.0: First Version of Easy Solution Importer contains: - Entity to handle solutions - PBL to deactivate fields in form - Business Process Flow to launch the Solution Import - Plugin to import solutions - ChartSQL Power Doc: Version 1.0.2.2 BETA 1: Fixes for issues introduced with PowerShell 4.0 with serialization/deserialization. Fixed an issue with the max length of an Excel cell being exceeded by AD groups with a large number of members.Community Forums NNTP bridge: Community Forums NNTP Bridge V54 (LiveConnect): This is the first release which can be used with the new LiveConnect authentication. Fixes the problem that the authentication will not work after 1 hour. Also a logfile will now be stored in "%AppData%\Community\CommunityForumsNNTPServer". If you have any problems please feel free to sent me the file "LogFile.txt".AutoAudit: AutoAudit 3.20f: Here is a high level list of the things I have changed between AutoAudit 2.00h and 3.20f ... Note: 3.20f corrects a minor bug found in 3.20e in the _RowHistory UDF when using column names with spaces. 1. AutoAudit has been tested on SQL Server 2005, 2008, 2008R2 and 2012. 2. Added the capability for AutoAudit to handle primary keys with up to 5 columns. 3. Added the capability for AutoAudit to save changes for a subset of the columns in a table. 4. Normalized the Audit table and created...Aricie - Friendlier Url Provider: Aricie - Friendlier Url Provider Version 2.5.3: This is mainly a maintenance release to stabilize the new Url Group paradigm. As usual, don't forget to install the Aricie - Shared extension first Highlights Fixed: UI bugs Min Requirements: .Net 3.5+ DotNetNuke 4.8.1+ Aricie - Shared 1.7.7+Aricie Shared: Aricie.Shared Version 1.7.7: This is mainly a maintenance version. Fixes in Property Editor: list import/export Min Requirements: DotNetNuke 4.8.1+ .Net 3.5+WPF Extended DataGrid: WPF Extended DataGrid 2.0.0.9 binaries: Fixed issue with ICollectionView containg null values (AutoFilter issue)SuperSocket, an extensible socket server framework: SuperSocket 1.6 stable: Changes included in this release: Process level isolation SuperSocket ServerManager (include server and client) Connect to client from server side initiatively Client certificate validation New configuration attributes "textEncoding", "defaultCulture", and "storeLocation" (certificate node) Many bug fixes http://docs.supersocket.net/v1-6/en-US/New-Features-and-Breaking-ChangesBarbaTunnel: BarbaTunnel 8.1: Check Version History for more information about this release.NAudio: NAudio 1.7: full release notes available at http://mark-dot-net.blogspot.co.uk/2013/10/naudio-17-release-notes.htmlDirectX Tool Kit: October 2013: October 28, 2013 Updated for Visual Studio 2013 and Windows 8.1 SDK RTM Added DGSLEffect, DGSLEffectFactory, VertexPositionNormalTangentColorTexture, and VertexPositionNormalTangentColorTextureSkinning Model loading and effect factories support loading skinned models MakeSpriteFont now has a smooth vs. 
sharp antialiasing option: /sharp Model loading from CMOs now handles UV transforms for texture coordinates A number of small fixes for EffectFactory Minor code and project cleanup ...New ProjectsActive Directory User Home Directory and Home Drive Management: The script is great for migrations and overall user management. Questions please send an email to delagardecodeplex@hotmail.com.Computational Mathematics in TSQL: Combinatorial mathematics is easily expressed with computational assistance. The SQL Server engine is the canvas.Demo3: this is a demoEraDeiFessi: Un parser per un certo sito molto pesante e scomodo da navigareFinditbyme Local Search: Multi-lingual search engine that can index entities with multiple attributes.MVC Demo Project - 6: Demo project showing how to use partial views inside an ASP.NET MVC application.OLM to PST Converters: An Ideal Approach for Mac Users: OLM to PST converter helps you to access MAC Outlook emails with Windows platformOpen Source Grow Pack Ship: The Open Source Grow Pack Ship system is for the produce industry. It will allow companies with low funds and infrastructure to operate inside their own budget.Pauli: Pauli is a small .NET based password manager for home use. The Software helps a user to organize passwords, PIN codes, login accounts and notes.Refraction: Member element sorting. Contains reusable Visual Studio Extensibility code in the 'CodeManifold' project.Sensarium Cybernetic Art: Sensarium Cybernetic Art will use Kinect and brainwave technologies to create a true cybernetic art system. SharePoint - Tetris WebPart: Tetris webpart for SharePoint 2013Software Product Licensing: Dot License is very easy for your software product licensing. Product activation based on AES encryption, Processor ID and a single GUID key.VBS Class Framework: The 'VBS Class Framework' is an experimental project whcih aims at delivering 'Visual Basic Script Classes' for some commonly used Objects / Components (COM).VisualME7Logger: Graphical tools for logging real time performance statistics from VW and Audi automobilesVM Role Authoring Tool: VM ROLE Authoring Tool is used to author consistent VM Role gallery workloads for Windows Azure Pack and Windows Azure. wallet: Wallet Windows 8 Store Application: Windows 8 Store Application with XAML and C#Windows Azure Cache Extension Library: Windows Azure Cache Extension LibraryWindows Firewall Notifier: Windows Firewall Notifier (WFN) extends the default Windows embedded firewall behavior, allowing to visualize and handle incoming or outgoing connections.

    Read the article

  • CodePlex Daily Summary for Tuesday, November 05, 2013

    CodePlex Daily Summary for Tuesday, November 05, 2013Popular ReleasesSocial Network Importer for NodeXL: SocialNetImporter(v.1.9.1): This new version includes: - Include me option is back - Fixed the login bug reported latelyCaptcha MVC: Captcha MVC 2.5: v 2.5: Added support for MVC 5. The DefaultCaptchaManager is no longer throws an error if the captcha values was entered incorrectly. Minor changes. v 2.4.1: Fixed issues with deleting incorrect values of the captcha token in the SessionStorageProvider. This could lead to a situation when the captcha was not working with the SessionStorageProvider. Minor changes. v 2.4: Changed the IIntelligencePolicy interface, added ICaptchaManager as parameter for all methods. Improved font size ...DNN Blog: 06.00.01: 06.00.01 ReleaseThis is the first bugfix release of the new v6 blog module. These are the changes: Added some robustness in v5-v6 scripts to cater for some rare upgrade scenarios Changed the name of the module definition to avoid clash with Evoq Social Addition of sitemap providerStock Track: Version 1.2 Stable: Overhaul and re-think of the user interface in normal mode. Added stock history view in normal mode. Allows user to enter orders in normal mode. Allow advanced user to run database queries within the program. Improved sales statistics feature, able to calculate against a single category. Updated database script file. Now compatible with lower version of SQL Server.VG-Ripper & PG-Ripper: VG-Ripper 2.9.50: changes NEW: Added Support for "ImageHostHQ.com" links NEW: Added Support for "ImgMoney.net" links NEW: Added Support for "ImgSavy.com" links NEW: Added Support for "PixTreat.com" links Bug fixesVidCoder: 1.5.11 Beta: Added Encode Details window. Exposes elapsed time, ETA, current and average FPS, running file size, current pass and pass progress. Open it by going to Windows -> Encode Details while an encode is running. Subtitle dialog now disables the "Burn In" checkbox when it's either unavailable or it's the only option. It also disables the "Forced Only" when the subtitle type doesn't support the "Forced" flag. Updated HandBrake core to SVN 5872. Fixed crash in the preview window when a source fil...Wsus Package Publisher: Release v1.3.1311.02: Add three new Actions in Custom Updates : Work with Files (Copy, Delete, Rename), Work with Folders (Add, Delete, Rename) and Work with Registry Keys (Add, Delete, Rename). Fix a bug, where after resigning an update, the display is not refresh. Modify the way WPP sort rows in 'Updates Detail Viewer' and 'Computer List Viewer' so that dates are correctly sorted. Add a Tab in the settings form to set Proxy settings when WPP needs to go on Internet. Fix a bug where 'Manage Catalogs Subsc...uComponents: uComponents v6.0.0: This release of uComponents will compile against and support the new API in Umbraco v6.1.0. What's new in uComponents v6.0.0? New DataTypesImage Point XML DropDownList XPath Templatable List New features / Resolved issuesThe following workitems have been implemented and/or resolved: 14781 14805 14808 14818 14854 14827 14868 14859 14790 14853 14790 DataType Grid 14788 14810 14873 14833 14864 14855 / 14860 14816 14823 Drag & Drop support for rows Su...SharePoint 2013 Global Metadata Navigation: SP 2013 Metadata Navigation Sandbox Solution: SharePoint 2013 Global Metadata Navigation sandbox solution version 1.0. Upload to your site collection solution store and activate. 
See the Documentation tab for detailed instructions.SmartStore.NET - Free ASP.NET MVC Ecommerce Shopping Cart Solution: SmartStore.NET 1.2.1: New FeaturesAdded option Limit to current basket subtotal to HadSpentAmount discount rule Items in product lists can be labelled as NEW for a configurable period of time Product templates can optionally display a discount sign when discounts were applied Added the ability to set multiple favicons depending on stores and/or themes Plugin management: multiple plugins can now be (un)installed in one go Added a field for the HTML body id to store entity (Developer) New property 'Extra...CodeGen Code Generator: CodeGen 4.3.2: Changes in this release include: Removed old tag tokens from several example templates. Fixed a bug which was causing the default author and company names not to be picked up from the registry under .NET. Added several additional tag loop expressions: <IF FIRST_TAG>, <IF LAST_TAG>, <IF MULTIPLE_TAGS> and<IF SINGLE_TAG>. Upgraded to Synergy/DE 10.1.1b, Visual Studio 2013 and Installshield Limited Edition 2013.SQL Power Doc: Version 1.0.2.2 BETA 1: Fixes for issues introduced with PowerShell 4.0 with serialization/deserialization. Fixed an issue with the max length of an Excel cell being exceeded by AD groups with a large number of members.Community Forums NNTP bridge: Community Forums NNTP Bridge V54 (LiveConnect): This is the first release which can be used with the new LiveConnect authentication. Fixes the problem that the authentication will not work after 1 hour. Also a logfile will now be stored in "%AppData%\Community\CommunityForumsNNTPServer". If you have any problems please feel free to sent me the file "LogFile.txt".AutoAudit: AutoAudit 3.20f: Here is a high level list of the things I have changed between AutoAudit 2.00h and 3.20f ... Note: 3.20f corrects a minor bug found in 3.20e in the _RowHistory UDF when using column names with spaces. 1. AutoAudit has been tested on SQL Server 2005, 2008, 2008R2 and 2012. 2. Added the capability for AutoAudit to handle primary keys with up to 5 columns. 3. Added the capability for AutoAudit to save changes for a subset of the columns in a table. 4. Normalized the Audit table and created...Aricie - Friendlier Url Provider: Aricie - Friendlier Url Provider Version 2.5.3: This is mainly a maintenance release to stabilize the new Url Group paradigm. As usual, don't forget to install the Aricie - Shared extension first Highlights Fixed: UI bugs Min Requirements: .Net 3.5+ DotNetNuke 4.8.1+ Aricie - Shared 1.7.7+Aricie Shared: Aricie.Shared Version 1.7.7: This is mainly a maintenance version. Fixes in Property Editor: list import/export Min Requirements: DotNetNuke 4.8.1+ .Net 3.5+WPF Extended DataGrid: WPF Extended DataGrid 2.0.0.9 binaries: Fixed issue with ICollectionView containg null values (AutoFilter issue)Community TFS Build Extensions: October 2013: The October 2013 release contains Scripts - a new addition to our delivery. These are a growing library of PowerShell scripts to use with VS2013. See our documentation for more on scripting. 
VS2010 Activities(target .NET 4.0) VS2012 Activities (target .NET 4.5) VS2013 Activities (target .NET 4.5.1) Community TFS Build Manager VS2012 Community TFS Build Manager VS2013 The Community TFS Build Managers for VS2010, 2012 and 2013 can also be found in the Visual Studio Gallery where upda...SuperSocket, an extensible socket server framework: SuperSocket 1.6 stable: Changes included in this release: Process level isolation SuperSocket ServerManager (include server and client) Connect to client from server side initiatively Client certificate validation New configuration attributes "textEncoding", "defaultCulture", and "storeLocation" (certificate node) Many bug fixes http://docs.supersocket.net/v1-6/en-US/New-Features-and-Breaking-ChangesBarbaTunnel: BarbaTunnel 8.1: Check Version History for more information about this release.New ProjectsAForge.NET FFmpeg - LGPL Version: AForge.Video.FFMPEG recompiled under LGPLv3 which can be distributed with proprietary projects, without requiring you to release all of your proprietary source.Albatross Shell: Albatross Shell is a WPF based Shell program that can host other WPF application modules.Analysis Services ServerActivity Monitor (ASSAM) 2014: Analysis Services Server Activity Monitor (ASSAM 2014) is an upgraded version of the Analysis Services Activity Monitor 2012.AsuraCode: AsuraCode 0.1Chimera - Automated Testing Suite: Chimera is an automated testing suite, intended to simplify and automate system and tests deployment. Oriented around the MVC architecture, it has web based UI.Create Sitemap for your Website (Sitemap Writer): Sitemap Writer help create sitemap for your website. It recursively look for links on your each page and add up in list to write a site map. DataSync: ????HCS Analyzer: HCS Analyzerk? toán thu? dona: chuong trình k? toán thu? donaLanguage improver: startedOrchard Abstractions: An Orchard module for providing higher-level abstractions to Orchard services.Orchard Abstractions - Examples: Examples for Orchard Abstractions (https://orchardabstractions.codeplex.com/)Role Based Views in Microsoft Dynamics CRM 2011: This tool will allow you to configure visibility and setting the default view of a logged in user based on security roles.SF.DotNet.Utilities: DotNet.UtilitiesShot: A real time C# web application framework designed in the image of ASP.NET.Soccer Managment System: Soccer Management SystemSTAR FOX XNA: Remake of a classic videogame. In this case, that videogame will be Star Fox (1993) for SNES game console.The Dargon Project: DargonTweetFeed: A small widget that enables you to display a twitter search or profile on your Sitecore siteWPF Themes HighRobotics: WPF Themes by High Robotics company????????? ????????: ?????????, ??????????? ?????????? ???????? ????????: ??????????? ????? ????? ??????????, ?????????? ????????? ??????.

    Read the article

  • What's the difference between General Ledger Transfer Program, Create Accounting and Submit Accounting?

    - by Oracle_EBS
    In Release 12, the General Ledger Transfer Program is no longer used. Use Create Accounting or Submit Accounting instead. Submit Accounting spawns the Revenue Recognition Process; the Create Accounting program does not. So if you create transactions with rules, you would want to run the Submit Accounting process, which spawns Revenue Recognition to create the distribution rows that Create Accounting then processes to the GL.

    Comparison of the two programs:

    - Short name for concurrent program: Create Accounting = XLAACCPB; Submit Accounting = ARACCPB
    - Specific to Receivables: Create Accounting = No; Submit Accounting = Yes
    - Runs Revenue Recognition automatically: Create Accounting = No; Submit Accounting = Yes
    - Can be run real-time for one Transaction/Receipt at a time: Create Accounting = Yes; Submit Accounting = No
    - Programs spawned by Create Accounting: 1) XLAACCPB module: Create Accounting, 2) XLAACCUP module: Accounting Program, 3) GLLEZL module: Journal Import
    - Programs spawned by Submit Accounting: 1) ARTERRPM module: Revenue Recognition Master Program, 2) ARTERRPW module: Revenue Recognition with parallel workers (could be numerous), 3) ARREVSWP module: Revenue Contingency Analyzer, 4) XLAACCPB module: Create Accounting, 5) XLAACCUP module: Accounting Program, 6) GLLEZL module: Journal Import

    Keep in mind that reports owned by the application 'Subledger Accounting' cannot be seen when running the report from a Receivables responsibility. You may want to request your sysadmin to attach the following SLA reports/programs to your AR responsibility, as you will need these for your AR closing process:

    XLAPEXRPT: Subledger Period Close Exception Report - shows transactions in status final, incomplete and unprocessed.
    XLAGLTRN: Transfer Journal Entries to GL - transfers transactions in final status and manually created transactions to GL.

    To add reports/programs owned by the application 'Subledger Accounting' (Subledger Period Close Exception Report and Transfer Journal Entries to GL), add them to the request group as follows. Let's use the Subledger Accounting report XLATBRPT: Open Account Balances Listing Report as an example.

    Responsibility: System Administrator
    Navigation: Security > Responsibility > Define
    Query the name of your Receivables responsibility and note the Request Group (i.e. Receivables All)
    Navigation: Security > Responsibility > Request
    Query the Request Group
    Go to the Request Zone and click on Add Record
    Enter the following: Type: Program; Name: Open Account Balances Listing
    Save

    Responsibility: Receivables Manager
    Navigation: Control > Requests > Run
    In the list of values you should now see the 'Open Account Balances Listing' report.

    References:
    Note 748999.1: How to add reports for application Subledger Accounting to a Receivables responsibility
    Note 759534.1: R12 ARGLTP General Ledger Transfer Program Errors Out
    Note 1121944.1: Understanding and Troubleshooting Revenue Recognition in Oracle Receivables

    Read the article

  • SQL SERVER – Introduction to PERCENT_RANK() – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic function PERCENT_RANK(). This function returns the relative standing of a value within a query result set or partition. It is very difficult to explain this in words alone, so I'd like to attempt to explain it through a brief example. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experimentation. Now let's have fun with the following query:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty,
            RANK() OVER(ORDER BY SalesOrderID) Rnk,
            PERCENT_RANK() OVER(ORDER BY SalesOrderID) AS PctDist
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY PctDist DESC
        GO

    The above query will give us the following result. Now let us understand the result set. You will notice that I have also included the RANK() function in this query. The reason for including the RANK() function is that PERCENT_RANK() in effect uses the RANK function to find the relative standing in the query. The formula for PERCENT_RANK() is as follows:

        PERCENT_RANK() = (RANK() - 1) / (Total Rows - 1)

    If you want to read more about this function, read here. Now let us attempt the same example with a PARTITION BY clause:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty, ProductID,
            RANK() OVER(PARTITION BY SalesOrderID ORDER BY ProductID) Rnk,
            PERCENT_RANK() OVER(PARTITION BY SalesOrderID ORDER BY ProductID) AS PctDist
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY PctDist DESC
        GO

    You will notice that the same logic is followed in the following result set. I now have a quick question for you - how many of you knew the logic/formula of PERCENT_RANK() before this blog post?

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
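
    You can also verify the formula directly by recomputing it by hand in the same query; a small sketch (the column alias names are arbitrary):

        -- Recompute PERCENT_RANK() from its definition:
        -- (RANK() - 1) / (total rows in the window - 1)
        SELECT SalesOrderID, OrderQty,
            PERCENT_RANK() OVER(ORDER BY SalesOrderID) AS PctRank,
            (RANK() OVER(ORDER BY SalesOrderID) - 1) * 1.0
                / (COUNT(*) OVER() - 1) AS PctRankByHand
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)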

    Read the article

  • JQGrid PDF Export

    - by thanigai
    Originally posted on: http://geekswithblogs.net/thanigai/archive/2013/06/17/jqgrdi-pdf-export.aspx JQGrid PDF Export The aim of this article is to address PDF export from client-side grid frameworks. The solution is built using ASP.Net MVC 4 and Visual Studio 2012. The article assumes the developer has a fair amount of knowledge of ASP.Net MVC and C#. Tools used: Visual Studio 2012, ASP.Net MVC 4, NuGet Package Manager. JQGrid is one of the client-side grid frameworks built on top of the jQuery framework. It helps in building a beautiful grid with paging, sorting and editing options. There are also other features available as extension plugins, and developers can write their own if needed. You can download JQGrid from the JQGrid homepage or as a NuGet package. I have given below the command to download JQGrid through the Package Manager Console. From the Tools menu select “Library Package Manager” and then select “Package Manager Console”. I have given the screenshot below. This command will pull down the latest JQGrid package and add it to the Scripts folder. Once the script is downloaded and referenced in the project, update the bundle config to add the script reference in the pages. BundleConfig can be found in the App_Start folder in the project structure. bundles.Add(new StyleBundle("~/Content/jqgrid").Include("~/Content/ui.jqgrid.css")); bundles.Add(new ScriptBundle("~/bundles/jquerygrid").Include("~/Scripts/jqGrid/jquery.jqGrid*")); Once the configs are added, reference the bundles in Views/Shared/LayoutPage.cshtml. Add the following line to the head section of the page: @Styles.Render("~/Content/jqgrid") Add the following lines to the end of the page, before the closing html tags: @Scripts.Render("~/bundles/jquery") @Scripts.Render("~/bundles/jqueryui") @Scripts.Render("~/bundles/jquerygrid") That's all to be done from the view perspective. Once these steps are done, the developer can start coding for the JQGrid. In this example we will modify the HomeController for the demo. The Index action will be the default action, and we will add an argument to it - a nullable bool, just to mark the PDF request. In Index.cshtml we will add a table tag with an id of "gridTable"; we will use this table for making the grid. Since JQGrid is an extension of jQuery, we initialize the grid settings in the script section of the page. This script section is placed at the end of the page, just below the bundle references for jQuery and jQueryUI, to improve performance - one of the improvement factors from Yahoo's YSlow guidelines. <table id="gridTable" class="scroll"></table> <input type="button" value="Export PDF" onclick="exportPDF();" /> @section scripts { <script type="text/javascript"> $(document).ready(function () { $("#gridTable").jqGrid({ datatype: "json", url: '@Url.Action("GetCustomerDetails")', mtype: 'GET', colNames: ["CustomerID", "CustomerName", "Location", "PrimaryBusiness"], colModel: [ { name: "CustomerID", width: 40, index: "CustomerID", align: "center" }, { name: "CustomerName", width: 40, index: "CustomerName", align: "center" }, { name: "Location", width: 40, index: "Location", align: "center" }, { name: "PrimaryBusiness", width: 40, index: "PrimaryBusiness", align: "center" } ], height: 250, autowidth: true, sortorder: "asc", rowNum: 10, rowList: [5, 10, 15, 20], sortname: "CustomerID", viewrecords: true }); }); function exportPDF() { document.location = '@Url.Action("Index")?pdf=true'; } </script> } The exportPDF method just sets the document location to the Index action method with the pdf flag set to true, marking the request as a PDF download. An in-memory list collection is used for demo purposes. The GetCustomerDetails method is the server-side action method that provides the data as a JSON list. We will see the method explanation below. [HttpGet] public JsonResult GetCustomerDetails() { var result = new { total = 1, page = 1, records = customerList.Count(), rows = (customerList.Select(e => new { id = e.CustomerID, cell = new string[] { e.CustomerID.ToString(), e.CustomerName, e.Location, e.PrimaryBusiness } })).ToArray() }; return Json(result, JsonRequestBehavior.AllowGet); } JQGrid understands response data from the server in a certain format, and the server method shown above takes care of formatting the response so that JQGrid understands the data properly. The response data should contain the total pages, the current page, the full record count, and the rows of data, each with an id and the remaining columns as a string array. The response is built using an anonymous object and is sent as an MVC JsonResult. Since we are using HTTP GET, it's better to mark the attribute as HttpGet and also set the JSON request behavior to AllowGet. The in-memory list is initialized in the HomeController constructor for reference. public class HomeController : Controller { private readonly IList<CustomerViewModel> customerList; public HomeController() { customerList = new List<CustomerViewModel>() { new CustomerViewModel { CustomerID = 100, CustomerName = "Sundar", Location = "Chennai", PrimaryBusiness = "Teaching" }, new CustomerViewModel { CustomerID = 101, CustomerName = "Sudhagar", Location = "Chennai", PrimaryBusiness = "Software" }, new CustomerViewModel { CustomerID = 102, CustomerName = "Thivagar", Location = "China", PrimaryBusiness = "SAP" } }; } public ActionResult Index(bool? pdf) { if (!pdf.HasValue) { return View(customerList); } else { string filePath = Server.MapPath("Content") + "Sample.pdf"; ExportPDF(customerList, new string[] { "CustomerID", "CustomerName", "Location", "PrimaryBusiness" }, filePath); return File(filePath, "application/pdf", "list.pdf"); } } The Index action method has a Boolean argument named "pdf", used to indicate a PDF download. When the application starts, this method is first hit for the initial page request. For the PDF operation a file name is generated and then passed to the ExportPDF method, which takes care of generating the PDF from the data source. The ExportPDF method is listed below. private static void ExportPDF<TSource>(IList<TSource> customerList, string[] columns, string filePath) { Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE); Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE); Document document = new Document(PageSize.A4); PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate)); document.Open(); PdfPTable table = new PdfPTable(columns.Length); foreach (var column in columns) { PdfPCell cell = new PdfPCell(new Phrase(column, headerFont)); cell.BackgroundColor = Color.BLACK; table.AddCell(cell); } foreach (var item in customerList) { foreach (var column in columns) { string value = item.GetType().GetProperty(column).GetValue(item).ToString(); PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont)); table.AddCell(cell5); } } document.Add(table); document.Close(); } iTextSharp is one of the pioneers in PDF export. It's an open-source library readily available as a NuGet package.
This command will pull down the latest available library. I am using version 4.1.2.0; the latest version may have changed. There are three main things in this library. Document - this is the document class, which takes care of creating the document sheet with a particular size. We have used A4 size; there is also an option to define a rectangle size. This document instance will be used in the next methods for reference. PdfWriter - PdfWriter takes the file name and the document as references. This class enables the document class to generate the PDF content and save it in a file. Font - using the Font class the developer can control the font features. Since I need a nice-looking font, I am using Verdana. Following this, PdfPTable and PdfPCell are used for generating the normal table layout. We have created two fonts, one for the header cells and one for the row cells. Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE); Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE); We get the header columns as a string array. The columns argument array is looped over and the header is generated, using headerFont for this purpose. PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate)); document.Open(); PdfPTable table = new PdfPTable(columns.Length); foreach (var column in columns) { PdfPCell cell = new PdfPCell(new Phrase(column, headerFont)); cell.BackgroundColor = Color.BLACK; table.AddCell(cell); } Then reflection is used to generate the row-wise details and form the grid. foreach (var item in customerList) { foreach (var column in columns) { string value = item.GetType().GetProperty(column).GetValue(item).ToString(); PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont)); table.AddCell(cell5); } } document.Add(table); document.Close(); Once the process is done, the PDF table is added to the document and the document is closed to write all the changes to the given file path. Control then moves back to the controller, which takes care of sending the response as a file result with a file name. If the file name is not given, the PDF will open in the same page; otherwise a popup will open asking whether to save or open the file. return File(filePath, "application/pdf", "list.pdf"); The final result screen is shown below. The PDF file is opened below to show the output. Conclusion: This is how PDF export is done for JQGrid. The problem addressed here is that client-side grid frameworks don't support PDF export on their own; in that case it's better to have fine-grained control over the data and the generated PDF. iTextSharp has helped us achieve our goal.

    Read the article

  • AceCypher is an Addictive Cypher Slide Puzzle Game

    - by Akemi Iwaya
    Are you ready for a game that will test your logical thinking skills while providing hours of fun? Then you may want to have a look at this awesome cypher slide puzzler! AceCypher is a great puzzle game for those times when you only have a few minutes to play or want a fun way to pass the time while relaxing. The overall premise and style of gameplay for AceCypher is simple. You move individual rows (left, right) or columns (up, down) one space at a time in order to shift the positions of numbers on the game board through ’round-a-bout’ trading. The goal is to make the four numbers in the red square match the code shown in the upper right corner (including positions). Sounds simple so far, right? But the challenge comes from the random boards you will be given to work with… some will not be too hard to solve while others will tax your brain (and patience!) quite well.
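    The ’round-a-bout’ trading amounts to a wrap-around shift of a single row or column. As a rough illustration only (this is not the game's actual code, and the int-array board representation is an assumption), a minimal Java sketch of the mechanic could look like this:

    // Illustrative sketch of a wrap-around row/column shift on a 2D board.
    public class CypherBoard {
        private final int[][] board;

        public CypherBoard(int[][] board) {
            this.board = board;
        }

        // Shift a row one space to the right; the value pushed off the right edge wraps to column 0.
        public void shiftRowRight(int row) {
            int[] r = board[row];
            int last = r[r.length - 1];
            System.arraycopy(r, 0, r, 1, r.length - 1);
            r[0] = last;
        }

        // Shift a column one space down; the bottom value wraps around to row 0.
        public void shiftColumnDown(int col) {
            int last = board[board.length - 1][col];
            for (int row = board.length - 1; row > 0; row--) {
                board[row][col] = board[row - 1][col];
            }
            board[0][col] = last;
        }
    }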

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package. set.seed(1) d <- data.frame(year = rep(2000:2014, each = 3),         count = round(runif(45, 0, 20))) dim(d) library(plyr) This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame. # Example 1 res <- ddply(d, "year", function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(cv.count = cv)   }) To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame. D <- ore.push(d) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(year=x$year[1], cv.count = cv)   }, FUN.VALUE=data.frame(year=1, cv.count=1)) You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same, however the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large. R> class(res) [1] "ore.frame" attr(,"package") [1] "OREbase" R> head(res)    year cv.count 1 2000 0.3984848 2 2001 0.6062178 3 2002 0.2309401 4 2003 0.5773503 5 2004 0.3069680 6 2005 0.3431743 To make ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.
# Example 2 ddply(d, "year", summarise, mean.count = mean(count)) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   data.frame(year=x$year[1], mean.count = mean.count)   }, FUN.VALUE=data.frame(year=1, mean.count=1)) R> head(res)    year mean.count 1 2000 7.666667 2 2001 13.333333 3 2002 15.000000 4 2003 3.000000 5 2004 12.333333 6 2005 14.666667 Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame. # Example 3 ddply(d, "year", transform, total.count = sum(count)) res <- ore.groupApply (D, D$year, function(x) {   total.count <- sum(x$count)   data.frame(year=x$year[1], count=x$count, total.count = total.count)   }, FUN.VALUE=data.frame(year=1, count=1, total.count=1)) > head(res)    year count total.count 1 2000 5 23 2 2000 7 23 3 2000 11 23 4 2001 18 40 5 2001 4 40 6 2001 18 40 In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns. # Example 4 ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),       cv = sigma/mu) res <- ore.groupApply (D, D$year, function(x) {   mu <- mean(x$count)   sigma <- sd(x$count)   cv <- sigma/mu   data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)   }, FUN.VALUE=data.frame(year=1, count=1, mu=1,sigma=1,cv=1)) R> head(res)    year count mu sigma cv 1 2000 5 7.666667 3.055050 0.3984848 2 2000 7 7.666667 3.055050 0.3984848 3 2000 11 7.666667 3.055050 0.3984848 4 2001 18 13.333333 8.082904 0.6062178 5 2001 4 13.333333 8.082904 0.6062178 6 2001 18 13.333333 8.082904 0.6062178 In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data. # Example 5 baseball.dat <- subset(baseball, year > 2000) # data from the plyr package x <- ddply(baseball.dat, c("year", "team"), summarize,            homeruns = sum(hr)) We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows. BB.DAT <- ore.push(baseball) BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+")) BB.DAT2 <- subset(BB.DAT, year > 2000) X <- ore.groupApply (BB.DAT2, BB.DAT2$index, function(x) {   data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))   }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE) res <- ore.sort(X, by=c("year","team")) R> head(res)    year team homeruns 1 2001 ANA 4 2 2001 ARI 155 3 2001 ATL 63 4 2001 BAL 58 5 2001 BOS 77 6 2001 CHA 63 Our next example is derived from the ggplot function documentation. This illustrates the use of ddply in combination with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database and invoke this on the database server. The graph will be returned to the client window, as depicted below.
# Example 6 with ggplot2 library(ggplot2) df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),                  y = rnorm(30)) # Compute sample mean and standard deviation in each group library(plyr) ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y)) # Set up a skeleton ggplot object and add layers: ggplot() +   geom_point(data = df, aes(x = gp, y = y)) +   geom_point(data = ds, aes(x = gp, y = mean),              colour = 'red', size = 3) +   geom_errorbar(data = ds, aes(x = gp, y = mean,                                ymin = mean - sd, ymax = mean + sd),              colour = 'red', width = 0.4) DF <- ore.push(df) ore.tableApply(DF, function(df) {   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4) }) But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element. df2 <- rbind(df,df,df) df2$y <- df2$y + rnorm(nrow(df2)) df2$index <- c(rep(1,30), rep(2,30), rep(3,30)) DF2 <- ore.push(df2) res <- ore.groupApply(DF2, DF2$index, function(df) {   df <- df[,1:2]   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4)   }, parallel=TRUE) res[[1]] res[[2]] res[[3]] To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • Final Release of Silverlight Tools for Visual Studio 2010 Released

    - by dwahlin
    If you haven’t already heard the news, the final release of the Silverlight Tools for Visual Studio 2010 has been released! That’s great news for Silverlight developers and, to top it off, the crew up at Microsoft even snuck in a few new features, including IntelliSense for styles (a big deal in my opinion) and the ability to easily manipulate Grid rows and columns. One of the most time-consuming (and boring) tasks experienced by developers is also covered with the new “Go To Value Definition” feature that allows you to jump directly to style definitions with ease. That feature alone is worth the upgrade, especially if you’re working with a large application that uses a lot of styles. Here’s a quick run-down of the features provided by the latest release from the Microsoft team: Support for targeting Silverlight 4 in the Silverlight designer and project system RIA Services application templates and libraries to simplify access to your data services (check out this Silverlight.tv video and whitepaper giving full details) Support for Silverlight 4 elevated trust and out-of-browser applications Enhanced support for other new Silverlight 4 features, including: Working with Implicit Styles Go To Value Definition - navigate directly from controls on your page to styles that are applied to them. Style Intellisense - easily modify styles you already have in XAML Working with Data Source Window outputs Data Source Selector - easily select and modify your data source information Grid Row and Column context menu - Add, remove, and re-sort DSW outputs and other Grid layouts Thickness Editor for editing Margins, Padding etc. Sample Data Support - see your item templates and bindings light up at design time Working with Silverlight 4 Out-of-Browser applications Automatically launch and debug your OOB app from inside the IDE Specify XAP signing for trusted OOB apps Set the OOB window characteristics If you’d like to see some of the new features in action check out this Channel 9 video with Mark Wilson-Thomas and John Papa.

    Read the article

  • How can I create a square 10x10 grid using nested for loops in Java? [migrated]

    - by help
    I'm trying to create a 10x10 grid using for loops in Java. I'm able to create rows going up and down but not repeating. for(int i = 1; i < temperatures.length; i++) { temperatures[i] = (temperatures[i-1] + 12) / 2; //takes average of 12 and previous temp } } public void paint(Graphics g) { for(int y = 1; y < 9; y++) { g.setColor(Color.black); g.drawRect(10, 10, 10, 10); g.drawRect(10, 10, 10, 20); g.drawRect(10, 10, 10, 30); g.drawRect(10, 10, 10, 40); g.drawRect(10, 10, 10, 50); g.drawRect(10, 10, 10, 60); g.drawRect(10, 10, 10, 70); g.drawRect(10, 10, 10, 80); g.drawRect(10, 10, 10, 90); g.drawRect(10, 10, 10, 100); for(int x = 1; x < 9; x++) { g.setColor(Color.black); g.drawRect(10, 10, 10, 10); g.drawRect(10, 10, 20, 10); g.drawRect(10, 10, 30, 10); g.drawRect(10, 10, 40, 10); g.drawRect(10, 10, 50, 10); g.drawRect(10, 10, 60, 10); g.drawRect(10, 10, 70, 10); g.drawRect(10, 10, 80, 10); g.drawRect(10, 10, 90, 10); g.drawRect(10, 10, 100, 10); } } } }
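    A minimal sketch of the nested-loop approach the question is asking about - one loop for rows, one for columns, with each cell's position computed from its indices. This is an illustrative rewrite, not the original poster's code: it assumes a Swing JPanel (instead of the applet-style paint method above) and uses the 10-pixel cell size implied by the question's drawRect calls.

    import java.awt.Color;
    import java.awt.Graphics;
    import javax.swing.JFrame;
    import javax.swing.JPanel;

    public class GridPanel extends JPanel {
        private static final int CELL = 10; // cell size in pixels

        @Override
        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            g.setColor(Color.black);
            // outer loop walks the rows, inner loop walks the columns;
            // each cell's x/y offset is derived from its column/row index
            for (int row = 0; row < 10; row++) {
                for (int col = 0; col < 10; col++) {
                    g.drawRect(10 + col * CELL, 10 + row * CELL, CELL, CELL);
                }
            }
        }

        public static void main(String[] args) {
            JFrame frame = new JFrame("10x10 grid");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new GridPanel());
            frame.setSize(160, 180);
            frame.setVisible(true);
        }
    }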

    Read the article

  • Transient VO : Powerful J2EE Design Pattern

    - by Vijay Mohan
    We had a use case wherein communication had to happen between regions residing under different task flows. Essentially, they had a common set of parameters to be used. Initially, we resorted to the use of pageFlowScope variables, but those are tightly coupled with the individual task flows. So, how should the communication happen..? Some of the alternatives we brainstormed are - 1. Usage of an ADF contextual event - This is a powerful feature indeed for such use cases, but there is a considerable cost involved with it, so before resorting to it make sure you have a good enough reason to use it. It actually does a server round trip, and the issuing of an event and the listening part also require your attention!! 2. Use a transient VO with shared data control scope - with shared data control scope, the transient VO rows will persist across the task flows in your application. All you have to do is create the attributes in the transient VO (preferably with the same names, for ease of conversion) and create some utility methods in the VOImpl for creating, updating and deleting a row (a sketch follows below). You also have to make sure that the VO row is initialized per HTTP request (this you can do in a bookmark method of your index.jspx, residing in adfc-config.xml), else the UI fields bound to the transient VO attributes won't render in the UI. Hope this helps - this should be a common use case across apps.
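    To make the second option concrete, here is a minimal sketch of what those VOImpl utility methods could look like. The class name, and the idea of a single shared parameter row, are illustrative assumptions, not code from the original application:

    import oracle.jbo.Row;
    import oracle.jbo.server.ViewObjectImpl;

    // Hypothetical Impl class for a transient VO used with shared data control scope.
    public class SharedParamsVOImpl extends ViewObjectImpl {

        // Create the single parameter row if it doesn't exist yet;
        // call this once per HTTP request (e.g. from a bookmark method).
        public Row initParamsRow() {
            Row row = first();
            if (row == null) {
                row = createRow();
                insertRow(row);
            }
            return row;
        }

        // Set one of the shared attributes on the parameter row.
        public void setParam(String attrName, Object value) {
            initParamsRow().setAttribute(attrName, value);
        }

        // Remove the parameter row when the shared state is no longer needed.
        public void clearParamsRow() {
            Row row = first();
            if (row != null) {
                row.remove();
            }
        }
    }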

    Read the article

  • Insert Special Characters & Coding in Online Forms in Firefox

    - by Asian Angel
    If you are active in forums or comment areas on different websites then you most likely use some type of special characters, HTML, or other code throughout the day. Now you can easily insert commonly used “items” with the SKeys extension for Firefox. Your New Special Text Edit Bar After installing the extension you will see the new toolbar that has been added to your browser. These are the kinds of text that can be added to online comment areas, forums, or other website areas that allow their use: Special characters HTML tags BB codes Wiki characters All that you will need to do is click on the appropriate special character or code to insert it into the website text area. The first two toolbar items are each singular in their function and insert the following types of text. A look at the special characters available for your use. The wiki code menu. The HTML menu… And the BB code menu. Here is a quick sample using the HTML menu…much better than doing it manually. This should definitely help speed things up throughout the day. Our only disappointment during testing was not being able to add additional items (i.e. characters, tags) to the toolbar at this time. Conclusion While a new toolbar may not be for everyone this extension can certainly prove useful when you need to quickly add special characters or coding in website text areas. Links Download the SKeys extension (Mozilla Add-ons)

    Read the article


< Previous Page | 166 167 168 169 170 171 172 173 174 175 176 177  | Next Page >