Search Results

Search found 6407 results on 257 pages for 'reorder columns'.


  • Tips for adapting Date table to Power View forecasting #powerview #powerbi

    - by Marco Russo (SQLBI)
    During the keynote of the PASS Business Analytics Conference, Amir Netz presented the new forecasting capabilities in Power View for Office 365. I immediately tried the new feature (which was immediately available, a welcome surprise in a Microsoft announcement for a new release) and I had several issues trying to use existing data models. The forecasting has a few requirements that are not compatible with the “best practices” commonly used for a calendar table until this announcement. For example, if you have a Year-Month-Day hierarchy and you want to display a line chart aggregating data at the month level, you use a column containing month and year as a string (e.g. May 2014) sorted by a numeric column (such as 201405). Such a column cannot be used in the x-axis of a line chart for forecasting, because you need a date or numeric column. There are also other requirements and I wrote the article Prepare Data for Power View Forecasting in Power BI on SQLBI, describing how to create columns that can be used with the new forecasting capabilities in Power View for Office 365.
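    For instance, one way to satisfy the date-or-numeric x-axis requirement is to add a real date column at month granularity to the calendar table. A sketch in T-SQL, assuming a hypothetical dbo.Calendar table and SQL Server 2012+ for DATEFROMPARTS (the article itself covers the data-model side in detail):

        -- Hypothetical calendar table; MonthStartDate is a true date column
        -- usable on the x-axis of a Power View forecasting line chart.
        ALTER TABLE dbo.Calendar
            ADD MonthStartDate AS DATEFROMPARTS(YEAR([Date]), MONTH([Date]), 1) PERSISTED;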

    Read the article

  • Logic to create common Servlet3 Login

    - by user3696143
    I am using Servlet 3 login to authenticate users on a website. I have these login types: normal website login (fill in the signup form), Facebook login (from the Facebook id), and Twitter login (from Twitter). I already authenticate users with the code below:

        HttpServletRequest request = (HttpServletRequest) FacesContext.getCurrentInstance().getExternalContext().getRequest();
        request.login(username, password);

    This works fine for the website login: the user gives his/her email id and password, and it is stored in the DB. Now I have modified the table and added more columns to save the Facebook id in the same user table; the Facebook id also works as the password for the Facebook login. I will do the same for Twitter, but I want the same Servlet 3 mechanism to authenticate the user. How can I achieve it? I have also added a context.xml file inside the META-INF folder:

        <Realm localDataSource="true" debug="99" className="org.apache.catalina.realm.JDBCRealm"
               connectionName="user" connectionPassword="password"
               connectionURL="jdbc:mysql://localhost:3306/ccc" digest="md5"
               driverName="com.mysql.jdbc.Driver" roleNameCol="role_name"
               userCredCol="password" userNameCol="email_id"
               userRoleTable="users_list" userTable="user_list_view" />

    Also, is it possible to check which query is fired by the Realm entry?

    Read the article

  • Building a Store Locator ASP.NET Application Using Google Maps API (Part 2)

    Last week's article, Building a Store Locator ASP.NET Application Using Google Maps API (Part 1) (http://www.4guysfromrolla.com/articles/051910-1.aspx), was the first in a multi-part article series exploring how to add store locator-type functionality to your ASP.NET website using the free Google Maps API (http://code.google.com/apis/maps/). Part 1 started with an examination of the database used to power the store locator, which contains a single table named Stores with columns capturing the store number, its address, and its latitude and longitude…

    Read the article

  • Best Practice - XML To Excel

    - by MemLeak
    I have to read a big XML file with a lot of information. Afterwards I extract the needed information (~20 points (columns) / ~80 relevant data sets (rows, some of them with sub-datasets)) and write it out to an Excel file. My question is how to handle the extraction of the unused data: should I copy the whole file, delete the unused parts, and then write it to Excel? Is it a good approach to create objects for each column? Or should I write the whole XML to Excel and start deleting rows in Excel? What would be a performant and acceptable solution?

    Read the article

  • Rails Easy Data Dumping

    - by Madhan ayyasamy
    Hi friends, with the following useful snippets you can find out the easiest way of dumping data in a Ruby on Rails environment. You'll often need to get data from production to dev, or dev to your local, or your local to another developer's local. One plug-in we use over and over is yaml_db. This nifty little plug-in enables you to dump or load data by issuing a Rake command. The data is persisted in a yaml file located in db/data.yml. This is very portable and easy to read if you need to examine the data.

        rake db:data:dump

    Example data found in db/data.yml:

        ---
        campaigns:
          columns:
          - id
          - client_id
          - name
          - created_at
          - updated_at
          - token
          records:
          - - "1"
            - "1"
            - First push
            - 2008-11-03 18:23:53
            - 2008-11-03 18:23:53
            - 3f2523f6a665
          - - "2"
            - "2"
            - First push
            - 2008-11-03 18:26:57
            - 2008-11-03 18:26:57
            - 9ee8bc427d94

    Read the article

  • Working with packed dates in SSIS

    - by Jim Giercyk
    One of the challenges recently thrown my way was to read an EBCDIC flat file, decode packed dates, and insert the dates into a SQL table. For those unfamiliar with packed data, it is a way to store data at the nibble level (half a byte), and was often used by mainframe programmers to conserve storage space. In the case of my input file, the dates were 2 bytes long and represented the number of days that have passed since 01/01/1950. My first thought was, in the words of Scooby, Hmmmmph? But, I love a good challenge, so I dove in. Reading in the flat file was rather simple. The only difference between reading an EBCDIC and an ASCII file is the Code Page option in the connection manager. In my case, I needed to use Code Page 1140 for EBCDIC (I could have also used Code Page 37). Once the code page is set correctly, SSIS can understand what it is reading and it will convert the output to the default code page, 1252. However, packed data is either unreadable or produces non-alphabetic characters, as we can see in the preview window. Column 1 is actually the packed date; columns 0 and 2 are the values in the rest of the file. We are only interested in Column 1, which is a 2 byte field representing a packed date. We know that 2 digits of packed data can be stored in 1 byte of character data, so we are working with 4 packed digits in 2 character bytes. If you are confused, stay tuned… this will make sense in a minute. Right-click on your Flat File Source shape and select "Show Advanced Editor". Here is where the magic begins. By changing the properties of the output columns, we can access the packed digits from each byte. By default, the Output Column data type is DT_STR. Since we want to look at the bytes individually and not the entire string, change the data type to DT_BYTES. Next, and most important, set UseBinaryFormat to TRUE. This will write the HEX VALUES of the output string instead of writing the character values. Now we are getting somewhere! Next, you will need to use a Data Conversion shape in your Data Flow to transform the 2 position byte stream to a 4 position Unicode string containing the packed data. You need the string to be 4 characters long because it will contain the 4 packed digits. Direct the output of your data flow to a test table or file to see the results. In my case, I created a test table. Hold on a second! That doesn't look like a date at all. No, of course not. It is a hex number which represents the days which have passed between 01/01/1950 and the date. We have to convert the hex value to a decimal value, and use the DATEADD function to get a date value.
    Luckily, I have created a function to convert hex to decimal:

        -- =============================================
        -- Author:      Jim Giercyk
        -- Create date: March, 2012
        -- Description: Converts a hex string to a decimal value
        -- =============================================
        CREATE FUNCTION [dbo].[ftn_HexToDec]
        (
            @hexValue NVARCHAR(6)
        )
        RETURNS DECIMAL
        AS
        BEGIN
            DECLARE @decValue DECIMAL

            IF @hexValue LIKE '0x%' SET @hexValue = SUBSTRING(@hexValue,3,4)

            DECLARE @decTab TABLE
            (
                decPos1 VARCHAR(2),
                decPos2 VARCHAR(2),
                decPos3 VARCHAR(2),
                decPos4 VARCHAR(2)
            )

            DECLARE @pos1 VARCHAR(1) = SUBSTRING(@hexValue,1,1)
            DECLARE @pos2 VARCHAR(1) = SUBSTRING(@hexValue,2,1)
            DECLARE @pos3 VARCHAR(1) = SUBSTRING(@hexValue,3,1)
            DECLARE @pos4 VARCHAR(1) = SUBSTRING(@hexValue,4,1)

            INSERT @decTab
            VALUES (CASE WHEN @pos1 = 'A' THEN '10'
                         WHEN @pos1 = 'B' THEN '11'
                         WHEN @pos1 = 'C' THEN '12'
                         WHEN @pos1 = 'D' THEN '13'
                         WHEN @pos1 = 'E' THEN '14'
                         WHEN @pos1 = 'F' THEN '15'
                         ELSE @pos1 END,
                    CASE WHEN @pos2 = 'A' THEN '10'
                         WHEN @pos2 = 'B' THEN '11'
                         WHEN @pos2 = 'C' THEN '12'
                         WHEN @pos2 = 'D' THEN '13'
                         WHEN @pos2 = 'E' THEN '14'
                         WHEN @pos2 = 'F' THEN '15'
                         ELSE @pos2 END,
                    CASE WHEN @pos3 = 'A' THEN '10'
                         WHEN @pos3 = 'B' THEN '11'
                         WHEN @pos3 = 'C' THEN '12'
                         WHEN @pos3 = 'D' THEN '13'
                         WHEN @pos3 = 'E' THEN '14'
                         WHEN @pos3 = 'F' THEN '15'
                         ELSE @pos3 END,
                    CASE WHEN @pos4 = 'A' THEN '10'
                         WHEN @pos4 = 'B' THEN '11'
                         WHEN @pos4 = 'C' THEN '12'
                         WHEN @pos4 = 'D' THEN '13'
                         WHEN @pos4 = 'E' THEN '14'
                         WHEN @pos4 = 'F' THEN '15'
                         ELSE @pos4 END)

            SET @decValue = (CONVERT(INT,(SELECT decPos4 FROM @decTab)))
                          + (CONVERT(INT,(SELECT decPos3 FROM @decTab)) * 16)
                          + (CONVERT(INT,(SELECT decPos2 FROM @decTab)) * (16*16))
                          + (CONVERT(INT,(SELECT decPos1 FROM @decTab)) * (16*16*16))

            RETURN @decValue
        END
        GO

    Making use of the function, I found the decimal conversion, added that number of days to 01/01/1950, and FINALLY arrived at my "unpacked relative date". Here is the query I used to retrieve the formatted date, and the result set which was returned:

        SELECT [packedDate] AS 'Hex Value',
               dbo.ftn_HexToDec([packedDate]) AS 'Decimal Value',
               CONVERT(DATE, DATEADD(day, dbo.ftn_HexToDec([packedDate]), '01/01/1950'), 101) AS 'Relative String Date'
        FROM   [dbo].[Output Table]

    This technique can be used any time you need to retrieve the hex value of a character string in SSIS. The date example may be a bit difficult to understand at first, but with SSIS becoming the preferred tool for enterprise level integration for many companies, there is no doubt that developers will encounter these types of requirements with regularity in the future. Please feel free to contact me if you have any questions.
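    As an aside, on newer servers the hand-rolled conversion can be skipped. A minimal sketch, assuming SQL Server 2008 or later (where CONVERT style 1 parses '0x'-prefixed hex strings into VARBINARY) and a made-up sample value:

        -- '0x5BA3' is a hypothetical packed value, for illustration only.
        DECLARE @hexValue VARCHAR(6) = '0x5BA3';
        SELECT DATEADD(DAY,
                       CONVERT(INT, CONVERT(VARBINARY(4), @hexValue, 1)),
                       '19500101') AS RelativeDate;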

    Read the article

  • Calendar like tool for managing multiple clients simultaneously?

    - by Yuji Tomita
    I've tried several systems to keep my clients' requests / work organized, but somehow things still fall through the cracks. I don't like the idea of a bug-tracker-like site for every client (I would not check them all). Ideally, it's all on one page but separated by client. What systems do you guys use? I'm about to just use a spreadsheet with columns for clients. Just curious if there's something better out there :) I've seen smartsheet in action, which is basically a really nice spreadsheet that shows bars of time between things that are due. This looks promising.

    Read the article

  • FAQ: Highlight GridView Row on Click and Retain Selected Row on Postback

    - by Vincent Maverick Durano
    A couple of months ago I wrote a simple demo about "Highlighting GridView Row on MouseOver". I've noticed many members in the forums (http://forums.asp.net) are asking how to highlight a row in a GridView and retain the selected row across postbacks. So I've decided to write this post to demonstrate how to implement it, as a reference for others who might need it. In this demo I'm going to use a combination of plain JavaScript and jQuery to do the client-side manipulation. I presume that you already know how to bind the grid with data, because I will not include the code for populating the GridView here. For binding the GridView you can refer to this post: Binding GridView with Data the ADO.Net way, or this one: GridView Custom Paging with LINQ. To get started, let's implement the highlighting of a GridView row on row click and retain the selected row on postback. For simplicity I set up the page like this:

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2>You have selected Row: (<asp:Label ID="Label1" runat="server" />)</h2>
            <asp:HiddenField ID="hfCurrentRowIndex" runat="server"></asp:HiddenField>
            <asp:HiddenField ID="hfParentContainer" runat="server"></asp:HiddenField>
            <asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Trigger Postback" />
            <asp:GridView ID="grdCustomer" runat="server" AutoGenerateColumns="false" onrowdatabound="grdCustomer_RowDataBound">
                <Columns>
                    <asp:BoundField DataField="Company" HeaderText="Company" />
                    <asp:BoundField DataField="Name" HeaderText="Name" />
                    <asp:BoundField DataField="Title" HeaderText="Title" />
                    <asp:BoundField DataField="Address" HeaderText="Address" />
                </Columns>
            </asp:GridView>
        </asp:Content>

    Note: Since the action is done at the client side, when we do a postback (like clicking on a button) the page will be re-created and you will lose the highlighted row. This is normal because the server doesn't know anything about the client/browser unless you do something to notify it that something has changed. To persist the settings we will use HiddenField controls to store the data, so that on postback we can reference the values from there.
    Now here are the JavaScript functions:

        <asp:content id="Content1" runat="server" contentplaceholderid="HeadContent">
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            var prevRowIndex;

            function ChangeRowColor(row, rowIndex) {
                var parent = document.getElementById(row);
                var currentRowIndex = parseInt(rowIndex) + 1;

                if (prevRowIndex == currentRowIndex) {
                    return;
                }
                else if (prevRowIndex != null) {
                    parent.rows[prevRowIndex].style.backgroundColor = "#FFFFFF";
                }

                parent.rows[currentRowIndex].style.backgroundColor = "#FFFFD6";

                prevRowIndex = currentRowIndex;

                $('#<%= Label1.ClientID %>').text(currentRowIndex);

                $('#<%= hfParentContainer.ClientID %>').val(row);
                $('#<%= hfCurrentRowIndex.ClientID %>').val(rowIndex);
            }

            $(function () {
                RetainSelectedRow();
            });

            function RetainSelectedRow() {
                var parent = $('#<%= hfParentContainer.ClientID %>').val();
                var currentIndex = $('#<%= hfCurrentRowIndex.ClientID %>').val();
                if (parent != null) {
                    ChangeRowColor(parent, currentIndex);
                }
            }
        </script>
        </asp:content>

    ChangeRowColor() is the function that sets the background color of the selected row. It is also where we store the previous row and rowIndex values in the HiddenFields. $(function(){}); is shorthand for the jQuery document.ready event. This event fires again once the page is posted back to the server, which is why we call RetainSelectedRow() there. The RetainSelectedRow() function reads the currently selected values stored in the HiddenFields and passes them to the ChangeRowColor() function to retain the highlighted row. Finally, here's the code-behind part:

        protected void grdCustomer_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                e.Row.Attributes.Add("onclick", string.Format("ChangeRowColor('{0}','{1}');", e.Row.ClientID, e.Row.RowIndex));
            }
        }

    The code above attaches the JavaScript onclick event to each row, calling the ChangeRowColor() function and passing e.Row.ClientID and e.Row.RowIndex to it. Here's the sample output below. That's it! I hope someone finds this post useful! Technorati Tags: jQuery,GridView,JavaScript,TipTricks

    Read the article

  • 2 min video about SQL_Compare

    - by CatherineRussell
    It is nice to start blogging again! I am working on a new project in a small company now. We do not have a full-time database admin, so I have to cover multiple roles: getting requirements, writing docs and creating diagrams, designing the app, writing code, testing, and the DBA role. I am not a DBA, but I have to do day-to-day database changes: adding new columns and tables. Check out this 2 min video about SQL_Compare. This tool saves time by automatically comparing and synchronizing database schemas; eliminates mistakes migrating database changes from dev, to test, to production; speeds up the deployment of new database schema updates; generates T-SQL scripts to update one database to match the schema of another; finds and fixes errors caused by differences between databases; and keeps an accurate history of all previous database records. http://www.red-gate.com/products/SQL_Compare/index.htm

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations. Why Columnstore? As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore & to document columnstore as a compelling reason to upgrade to SQL Server 2014. The Customer DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.) The App: DevCon Security Reporting: Optimized & Ad Hoc Queries DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching. SSRS, SSAS, & MDX Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements. Ad Hoc Queries Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner). The Details Classic vs. columnstore before-&-after metrics are impressive:

        Scenario        Conventional Structures          Columnstore      Improvement
        SSRS via SSAS   10 - 12 seconds                  1 second         >10x
        Ad Hoc          5 - 7 minutes (300 - 420 sec)    1 - 2 seconds    >100x

    Here are two charts characterizing this data graphically. The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.
    The Wins Performance: Even prior to columnstore implementation, at 10 - 12 seconds, canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators. As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude. Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired. PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014. DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways." Summary For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, as well as reducing ad hoc query times from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
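    For reference, the index behind a win like this is a one-liner. A minimal sketch with hypothetical table and column names, using SQL Server 2012 syntax (where nonclustered columnstore indexes were introduced; note they make the base table read-only, which is why refresh-by-partition-switching setups like DevCon's pair well with them):

        -- Hypothetical fact table and columns, for illustration only.
        CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSecurityEvents
            ON dbo.FactSecurityEvents (EventDate, CustomerID, SensorID, EventCount);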

    Read the article

  • ROracle support for TimesTen In-Memory Database

    - by Sam Drake
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database. Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database providing very fast response times and high throughput through its memory-centric architecture. TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database. ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC. The latest ROracle enhancements include:

    - Support for Oracle TimesTen In-Memory Database
    - Support for Date-Time using R's POSIXct/POSIXlt data types
    - RAW, BLOB and BFILE data type support
    - Option to specify number of rows per fetch operation
    - Option to prefetch LOB data
    - Break support using Ctrl-C
    - Statement caching support

    TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries:

    - Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE
    - Analytic clauses: OVER PARTITION BY and OVER ORDER BY
    - Multidimensional grouping operators: grouping clauses GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS; grouping functions GROUP, GROUPING_ID, GROUP_ID
    - WITH clause, which allows repeated references to a named subquery block
    - Aggregate expressions over DISTINCT expressions
    - General expressions that return a character string in the source or a pattern within the LIKE predicate
    - Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause)

    Note: Some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details. Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver.

        > install.packages("ROracle")
        > library(ROracle)
        Loading required package: DBI
        > drv <- dbDriver("Oracle")

    Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user.

        > conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct")

    You have the option to report the server type - Oracle or TimesTen?

        > print (paste ("Server type =", dbGetInfo (conn)$serverType))
        [1] "Server type = TimesTen IMDB"

    To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session.

        > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
        [1] TRUE

    Verify that the newly created IRIS table is available in the database.
    To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively.

        > dbListTables (conn)
        [1] "IRIS"
        > dbListFields (conn, "IRIS")
        [1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES"

    To retrieve a summary of the data from the database we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time.

        > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES,
            AVG ("SEPAL.LENGTH") AS AVG_SLENGTH,
            AVG ("SEPAL.WIDTH") AS AVG_SWIDTH,
            AVG ("PETAL.LENGTH") AS AVG_PLENGTH,
            AVG ("PETAL.WIDTH") AS AVG_PWIDTH
            FROM IRIS GROUP BY ROLLUP (SPECIES)')
        > iris.summary
             SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
        1     setosa    5.006000   3.428000       1.462   0.246000
        2 versicolor    5.936000   2.770000       4.260   1.326000
        3  virginica    6.588000   2.974000       5.552   2.026000
        4       <NA>    5.843333   3.057333       3.758   1.199333

    Finally, disconnect from the TimesTen database.

        > dbCommit (conn)
        [1] TRUE
        > dbDisconnect (conn)
        [1] TRUE

    We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.

    Read the article

  • Oracle Sequences

    - by jkrebsbach
    Reminder to myself - SQL Server has nice identity columns directly tied to their tables. Oracle has sequences that are islands unto themselves.

        select seq_name.currval from dual;
        select seq_name.nextval from dual;

    currval - return current number at top of sequence. nextval - increment sequence by 1, return new number. Therefore, to create functionality in Oracle similar to an identity column:

    OPTION A) Create an insert trigger:

        CREATE OR REPLACE TRIGGER dept_bir
        BEFORE INSERT ON departments
        FOR EACH ROW
        WHEN (new.id IS NULL)
        BEGIN
            SELECT dept_seq.NEXTVAL INTO :new.id FROM dual;
        END;

    This will handle creating a unique identity, but will not necessarily inform the process flow of the identity without additional logic.

    OPTION B) Select the identity into a temp variable, then insert the whole item into the table (see the sketch below).

    **** When attempting to query currval, the below error was being thrown:

        SELECT seq_name.currval from dual;
        ERROR: TABLE OR VIEW DOES NOT EXIST

    Although the Oracle sys tables may have access to the sequences, that isn't to say the Oracle user has access to those sequences - verify permissions when the system can't see objects that are being reported in the object explorer.
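    A minimal sketch of Option B in PL/SQL, reusing the departments/dept_seq names from above (the dept_name column and its value are made up for illustration):

        DECLARE
            v_id departments.id%TYPE;
        BEGIN
            -- Fetch the new id first, so the rest of the flow can use it.
            SELECT dept_seq.NEXTVAL INTO v_id FROM dual;
            INSERT INTO departments (id, dept_name)
            VALUES (v_id, 'Accounting');
        END;
        /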

    Read the article

  • How to select and deselect checkbox fields in the GridView

    - by SAMIR BHOGAYTA
    // JavaScript functions for selecting and deselecting checkbox fields in a GridView
        function SelectDeselectAll(chkAll) {
            var a = document.forms[0];
            var i = 0;
            for (i = 0; i < a.length; i++) {
                if (a[i].name.indexOf("chkItem") != -1) {
                    a[i].checked = chkAll.checked;
                }
            }
        }

        function DeselectChkAll(chk) {
            var c = 0;
            var d = 1;
            var a = document.forms[0];
            //alert(a.length);
            if (chk.checked == false) {
                document.getElementById("chkAll").checked = chk.checked;
            }
            else {
                for (i = 0; i < a.length; i++) {
                    if (a[i].name.indexOf("chkItem") != -1) {
                        if (a[i].checked == true) {
                            c = 1;
                        }
                        else {
                            d = 0;
                        }
                    }
                }
                if (d != 0) {
                    document.getElementById("chkAll").checked = true;
                }
            }
        }

    // How to use this function
        <asp:GridView ...>
            <Columns>
                <asp:TemplateField>
                    <HeaderTemplate>
                        <input id="chkAll" runat="server" onclick="javascript:SelectDeselectAll(this);" type="checkbox" />
                    </HeaderTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>

    Read the article

  • Strategies for Indexing Custom Fields in RavenDB

    - by Adrian Thompson Phillips
    In the relational database world, if I was developing a CRM system and wanted to let the user add their own custom fields that are searchable, I could have tables that store the name of the new column, the data type and the value, etc. (which would be less efficient to index), or I could use the less elegant (but more searchable) solution that software like Dynamics and SharePoint use, whereby I create a load of columns on my aggregate root called CustomInt1, CustomInt2, etc. (which looks dirty and limits how many custom fields a user can have, but has indexing advantages). But my question is this: in NoSQL databases, what would be the best way of achieving the same thing? My priority would be searchability. So what would be the best way to store this data? If I used a predefined set of properties (i.e. CustomData1, CustomData2, etc.), because these are all stored as JSON (i.e. strings) in the database, does this make it simpler because I don't have to worry about data types?

    Read the article

  • Some datatypes don't honor localization

    - by Peter Larsson
    This bug has haunted me for a while, until today when I decided to not accept it anymore. So I filed a bug over at connect.microsoft.com, https://connect.microsoft.com/SQLServer/feedback/details/636074/some-datatypes-doesnt-honor-localization, and if you feel the way I do, please vote for this bug to be fixed. Here is a very simple repro of the problem:

        DECLARE @Sample TABLE
                (
                    a DECIMAL(38, 19),
                    b FLOAT
                )

        INSERT  @Sample (a, b)
        VALUES  (1E / 7E, 1E / 7E)

        SELECT  *
        FROM    @Sample

    Here is the actual output:

                                              a                      b
        --------------------------------------- ----------------------
                          0.1428571428571428400      0,142857142857143

    I think that both columns should have the same decimal separator, don't you? //Peter
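    Until it is fixed, one workaround is to force a single culture at the query level. A sketch assuming SQL Server 2012 or later, where FORMAT is available (the culture name is just an example):

        SELECT FORMAT(a, 'G', 'en-US') AS a,
               FORMAT(b, 'G', 'en-US') AS b
        FROM   @Sample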

    Read the article

  • Demantra Partitioning and the First PK Column

    - by user702295
    We have found that it is necessary in Demantra to have an index that matches the partition key, although it does not have to be the PK. It is OK to create a new index instead of changing the PK. For example, if my PK on SALES_DATA is (ITEM_ID, LOCATION_ID, SALES_DATE) and I decide to partition by SALES_DATE, then I should add an index starting with the partition key, like this: (SALES_DATE, ITEM_ID, LOCATION_ID). Note that the first column of the new index matches the partition key. It might also be helpful to create a second index with the other PK columns reversed: (SALES_DATE, LOCATION_ID, ITEM_ID). Again, the first column matches the partition key.
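    A sketch of the two suggested indexes in Oracle SQL, using the SALES_DATA names from the example (the index names are hypothetical):

        -- First column of each index matches the partition key (SALES_DATE).
        CREATE INDEX sales_data_part_idx1
            ON sales_data (sales_date, item_id, location_id);

        CREATE INDEX sales_data_part_idx2
            ON sales_data (sales_date, location_id, item_id);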

    Read the article

  • Which tips helped you learn touch-typing? [closed]

    - by julien
    I've been learning touch-typing for about two weeks now, and I'm really committed to mastering this skill. Even though I'm doing OK with prose already, I'm struggling with programming syntax and even more with keybindings. Those take you away from the home row more than regular words, and aren't as easy to practice. So I often hunt and peck in order to just get it out, but when reverting to old habits like this, I find it hard to get back into the touch-typing mindframe quickly. One little trick that has helped me so far when getting lost is to reposition every finger on its home-row key, and mentally visualize the layout bias of the keyboard, i.e. the backslash-like alignment of the key columns. It's hard to describe, though, and probably a bit weird... Hope you guys have better tips!

    Read the article

  • Using the full tty real estate with sqlplus

    - by katsumii
    I believe 'rlwrap' is widely used for adding a history function and command line editing to 'sqlplus'. 'rlwrap' has other functions too, and here's my kludgy shell function to force sqlplus to use the full real estate of your tty, be it a PuTTY session from Windows or a Linux gnome terminal.

        $ declare -f sqlplus
        sqlplus ()
        {
            PRE_TEXT=$(resize | sed -n "1s/COLUMNS=/set linesize /p;2s/LINES=/set pagesize /p" | while read line; do printf "%s \ " "$line"; done);
            if [ -z "$1" ]; then
                rlwrap -m -P "$PRE_TEXT" sqlplus /nolog;
            else
                if ! echo $1 | grep '^-' > /dev/null; then
                    rlwrap -m -P "$PRE_TEXT connect $*" sqlplus /nolog;
                else
                    command sqlplus $*;
                fi;
            fi
        }

    Read the article

  • How to Make a 9 Layer Density Column [Video]

    - by Jason Fitzpatrick
    Density columns, layers of varying density liquid in a glass cylinder, are nothing new in the world of science demonstrations, but this nine layer one with seven floating objects is something to see. Courtesy of Steve Spangler Science, the experiment goes above and beyond the traditional five layer column by adding another four layers and sinking objects of varying density into the column. The end result is a colorful demonstration of the varying densities of liquids and solids. [via Boing Boing]

    Read the article

  • How can I get a list of installed programs and corresponding size of each in Ubuntu?

    - by Philip Baker
    I would like to have a list of the installed software on my machine, with the disk space consumed by each. A previous answer here says "you can do this via GUI in Synaptic". This doesn't mean anything to me. I don't know what a GUI is, and when I click on Synaptic, I do not get anything like the display shown in the answer, i.e. with "Settings → Preferences" and "Columns and Fonts". In Windows, you just select 'Programs and Applications' in the Control Panel, and the list comes up immediately, with sizes. Is there something similar and simple with Ubuntu? Could the size of each program be included on the list of installed software? This would be the most obvious place to put it.

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of changes against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging.   If you're not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn't normally get back without running another query (or with a trigger, I guess, but that's not pretty). That inserted table I referenced – that's part of the 'behind-the-scenes' work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn't mean that an update is a delete followed by an insert, it's just the way it's handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff. MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it's new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don't worry about the fact that I turned on IDENTITY_INSERT, that's just so that I could insert the values.) One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like "WHERE CURRENT OF …", and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient. But as cool as $action is, that's not the point of my post. If it were, I hope you'd all be disappointed, as you can't really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a 'src' field that wasn't used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you're needing to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn't need to insert that into the actual table, just into a table for audit.
This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you’re American) and using a regular INSERT statement. This is also doable if you’re using MERGE to just do INSERTs. In case you hadn’t realised, you can use MERGE in place of an INSERT statement. It’s just like the UPSERT-style statement we’ve just seen, except that we want nothing to match. That’s easy to do, we just use ON 1=2. This is obviously more convoluted than a straight INSERT. And it’s slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table’s columns... Yes, of course. That’s what deleted and inserted give you.
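    A minimal sketch of that INSERT-only pattern, with hypothetical table and column names (note the source-only src column surfacing in the OUTPUT clause, which a plain INSERT could not reference):

        -- ON 1=2 guarantees no rows match, so every source row is inserted,
        -- yet OUTPUT can still reach source columns such as src.
        MERGE dbo.TargetTable AS t
        USING dbo.SourceTable AS s
        ON 1 = 2
        WHEN NOT MATCHED THEN
            INSERT (id, name) VALUES (s.id, s.name)
        OUTPUT inserted.id, inserted.name, s.src
            INTO dbo.AuditTable (id, name, src);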

    Read the article

  • Program To Cascade/Tile Windows

    - by Richard
    I have perhaps ten or fifteen windows open. I'd like a program which automatically resizes all the windows and arranges them in columns and rows across the screen (a grid formation), figuring out the largest size for the windows so that they all still fit. This isn't an "Expose" type program - I want the windows to stay resized. I am using OpenBox to do my window management and am otherwise happy with it; I don't want to find a whole new window manager just to solve this problem. The program Tile is almost perfect, but it doesn't know how to lay the windows out in a grid formation. Any thoughts? Thanks!

    Read the article

  • Drawing a random x,y grid of objects within a perspective

    - by T Reddy
    I'm wrapping my head around OpenGL ES 2.0, and I'm trying to do something very simple, but I think the math may be eluding me. I created a simple, flat-ish cylinder in Blender that is 2 units in diameter. I want to create an arbitrary grid of these, edge to edge (think of a checker board). I'm using a 3D perspective with GLKit:

        CGSize size = [[self view] bounds].size;
        _projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(45.0f), size.width/size.height, 0.1f, 100.0f);

    So, I managed to manually get all of these cylinders drawn on the screen just fine. However, I would like to understand how I can programmatically "fit" all of these cylinders on the screen at the same time, given the camera location, screen size, cylinder diameter, and the number of rows/columns. So the net effect is that for small grids (i.e., 5x5) the objects are closer to the camera, but for large grids (i.e., 30x30) the objects are farther away. In either case, all of the cylinders are visible.
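    One way to reason about it (a sketch, assuming the camera looks straight at the grid's center along the z-axis): with vertical field of view fovY and aspect ratio a, the frustum's visible half-height at distance d is d * tan(fovY / 2), and its visible half-width is a * d * tan(fovY / 2). A rows x cols grid of diameter-2 cylinders has half-height = rows and half-width = cols, so every cylinder is visible when

        d >= max( rows / tan(fovY / 2), cols / (a * tan(fovY / 2)) )

    i.e., compute both candidate distances and place the camera at the larger of the two.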

    Read the article
