Search Results

Search found 4357 results on 175 pages for 'retrieve'.

Page 123/175

  • Packing a DBF

    - by Tom Hines
    I thought my days of dealing with DBFs as a "production data" source were over, but HA (no such luck). I recently had to retrieve, modify and replace some data that needed to be delivered in a DBF file. Everything was fine until I realized / remembered that the DBF driver does not ACTUALLY delete records from the data source -- it only marks them for deletion. You are responsible for handling the "chaff", either by using a utility to remove deleted records or by simply ignoring them. If imported into Excel, the marked-deleted records are ignored, but the file size will reflect the extra content. After several rounds of testing CRUD, the output DBF was huge.

    So I went hunting for a method to "pack" the records (removing deleted ones and resizing the DBF file) and eventually ran across the FoxPro driver at http://msdn.microsoft.com/en-us/vfoxpro/bb190233.aspx. Once installed, I changed the DSN in the code to the new one I created in the ODBC Administrator and ran some tests. Using MSQuery, I simply tested the raw SQL command PACK {tablename} and it WORKED! One really neat thing is that the PACK command is used like a regular SQL instruction; "PACK {tablename}" is all that is needed. It is necessary, however, to close all connections to the database (and re-open) before issuing the PACK command, or you will get the "File is in use" error.

    Here is some C# code for a Pack method.

        /// <summary>
        /// Pack the DBF, removing all deleted records
        /// </summary>
        /// <param name="strTableName">The table to pack</param>
        /// <param name="strError">output of any errors</param>
        /// <returns>bool (true if no errors)</returns>
        public static bool Pack(string strTableName, ref string strError)
        {
            bool blnRetVal = true;
            try
            {
                OdbcConnectionStringBuilder csbOdbc = new OdbcConnectionStringBuilder()
                {
                    Dsn = "PSAP_FOX_DBF"
                };
                string strSQL = "pack " + strTableName;
                using (OdbcConnection connOdbc = new OdbcConnection(csbOdbc.ToString()))
                {
                    connOdbc.Open();
                    OdbcCommand cmdOdbc = new OdbcCommand(strSQL, connOdbc);
                    cmdOdbc.ExecuteNonQuery();
                    connOdbc.Close();
                }
            }
            catch (Exception exc)
            {
                blnRetVal = false;
                strError = exc.Message;
            }
            return blnRetVal;
        }

    Read the article

  • Web.Config is Cached

    - by SGWellens
    There was a question from a student over on the ASP.NET forums about improving site performance. The concern was that every time an app setting was read from the Web.Config file, the disk would be accessed. With many app settings and many users, it was believed performance would suffer. Their intent was to create a class to hold all the settings, instantiate it and fill it from the Web.Config file on startup. Then, all the settings would be in RAM. I knew this was not correct and didn't want to just say so without any corroboration, so I did some searching. Surprisingly, this is a common misconception. I found other code postings that cached the app settings from Web.Config. Many people even thanked the posters for the code. In a later post, the student said their textbook recommended caching the Web.Config file.

    OK, here's the deal. The Web.Config file is already cached. You do not need to re-cache it. From this article http://msdn.microsoft.com/en-us/library/aa478432.aspx

        It is important to realize that the entire <appSettings> section is read, parsed, and cached the first time we retrieve a setting value. From that point forward, all requests for setting values come from an in-memory cache, so access is quite fast and doesn't incur any subsequent overhead for accessing the file or parsing the XML.

    The reason the misconception is prevalent may be because it's hard to search for Web.Config and cache without getting a lot of hits on how to set up caching in the Web.Config file. So here's a string for search engines to index on: "Is the Web.Config file Cached?" A follow-up question was: are the connection strings cached? Yes. http://msdn.microsoft.com/en-us/library/ms178683.aspx

        At run time, ASP.NET uses the Web.Config files to hierarchically compute a unique collection of configuration settings for each incoming URL request. These settings are calculated only once and then cached on the server.

    And, as everyone should know, if you modify the Web.Config file, the web application will restart. I hope this helps people to NOT write code!

    Steve Wellens
    CodeProject

    Read the article

  • Returning Identity Value in SQL Server: @@IDENTITY Vs SCOPE_IDENTITY Vs IDENT_CURRENT

    - by Arefin Ali
    There are some common misconceptions about returning the last inserted identity value from a table. To return the last inserted identity value we can use the @@IDENTITY, SCOPE_IDENTITY or IDENT_CURRENT function, depending on the requirement, but it will be a real mess if anybody uses any one of these functions without knowing its exact purpose. So here I want to share my thoughts on this. @@IDENTITY, SCOPE_IDENTITY and IDENT_CURRENT are almost similar functions in terms of returning an identity value: they all return values that were inserted into an identity column. Back in SQL Server 7 we used @@IDENTITY to return the last inserted identity value, because in those days we did not have functions like SCOPE_IDENTITY or IDENT_CURRENT, but now we have all three. So let's check out which one is responsible for what.

    IDENT_CURRENT returns the last inserted identity value in a particular table. It does not depend on the connection or on the scope of the insert statement. IDENT_CURRENT takes a table name as its parameter. Here is the syntax to get the last inserted identity value in a particular table using IDENT_CURRENT:

        SELECT IDENT_CURRENT('Employee')

    Both @@IDENTITY and SCOPE_IDENTITY return the last identity value created in any table in the current session. But there is a small difference between the two: SCOPE_IDENTITY returns the value inserted only within the current scope, whereas @@IDENTITY is not limited to any particular scope. Here are the syntaxes to get the last inserted identity value using these functions:

        SELECT @@IDENTITY
        SELECT SCOPE_IDENTITY()

    Now let's have a look at the following example. Suppose I have two tables called Employee and EmployeeLog.

        CREATE TABLE Employee
        (
            EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL,
            EmpName VARCHAR(100) NOT NULL,
            EmpSal FLOAT NOT NULL,
            DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE())
        )

        CREATE TABLE EmployeeLog
        (
            EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL,
            EmpName VARCHAR(100) NOT NULL,
            EmpSal FLOAT NOT NULL,
            DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE())
        )

    I have an insert trigger defined on the Employee table (sketched below) which inserts a new record into EmployeeLog whenever a record is inserted into the Employee table. So suppose I insert a new record into the Employee table using the following statement:

        INSERT INTO Employee (EmpName, EmpSal) VALUES ('Arefin', '1')

    The trigger fires automatically and inserts a record into EmployeeLog. Here the scope of the insert statement and the scope of the trigger are different. In this situation, if I retrieve the last inserted identity value using @@IDENTITY, it will simply return the identity value from EmployeeLog, because @@IDENTITY is not limited to a particular scope. If I want to get the Employee table's identity value, I need to use SCOPE_IDENTITY in this scenario. So the moral is: always use SCOPE_IDENTITY to return the identity value of a recently created record in a SQL statement or stored procedure. It is safe and helps ensure bug-free code.
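
    The insert trigger referenced above is not shown in the excerpt; here is a minimal sketch of what such a trigger could look like (the trigger name and body are my assumption, not taken from the original article):

        CREATE TRIGGER trgEmployeeInsert ON Employee
        AFTER INSERT
        AS
        BEGIN
            -- Copy every newly inserted Employee row into EmployeeLog
            INSERT INTO EmployeeLog (EmpName, EmpSal, DateOfJoining)
            SELECT EmpName, EmpSal, DateOfJoining
            FROM inserted;
        END
        GO

        -- With this trigger in place, the difference is easy to see:
        INSERT INTO Employee (EmpName, EmpSal) VALUES ('Arefin', '1');
        SELECT @@IDENTITY;        -- identity generated in EmployeeLog (by the trigger)
        SELECT SCOPE_IDENTITY();  -- identity generated in Employee (current scope)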

    Read the article

  • SQL SERVER – Challenge – Puzzle – Usage of FAST Hint

    - by pinaldave
    I was recently working with various SQL Server hints. After working for a day on various hints, I realized that for one hint I was not able to come up with a good example. The hint is FAST. Let us look at the definition of the FAST hint from Books Online:

        FAST number_rows
        Specifies that the query is optimized for fast retrieval of the first number_rows. This is a nonnegative integer. After the first number_rows are returned, the query continues execution and produces its full result set.

    Now the question is: in what conditions can this hint be useful? I have tried so many different combinations, and I have found this hint does not make much performance difference; in fact, I did not notice any change in the time taken to load the resultset. I also noticed that this hint does not change the number of pages read to return the result. Now, why would there be a difference in performance? If you read what the FAST hint does, it only optimizes returning the first few results FAST – which does not mean there will be a difference in overall performance. I also understand that this hint gives guidance/suggestions to the query optimizer that only 100 rows are expected in the resultset. This tricks the optimizer into thinking there are only 100 rows, which may lead it to render a different execution plan than the one it would have chosen in the normal case (without the hint). Again, this will not necessarily happen every time.

    Now if you read the above discussion, you will find that the basic understanding of the hint is very clear to me, but I still feel that I am missing something. Here are my questions:

    1) In what conditions can this hint be useful? When would someone want to see the first few rows early? My experience suggests that when the first few rows are rendered, the remaining rows are rendered as well.
    2) Is there any way an application can retrieve the fast-fetched rows from SQL Server?
    3) Do you use this hint in your application? Why? When? And how?

    Here are a few examples I attempted during my experiment. I found there is no difference in the execution plan except that the estimated number of rows is different, leading the optimizer to think that the cost is less, when in reality that is not the case.

        USE AdventureWorks
        GO
        SET STATISTICS IO ON
        SET STATISTICS TIME ON
        GO
        ---------------------------------------------
        -- Table Scan with FAST Hint
        SELECT * FROM Sales.SalesOrderDetail
        GO
        SELECT * FROM Sales.SalesOrderDetail OPTION (FAST 100)
        GO
        ---------------------------------------------
        -- Table Scan with Where on Index Key
        SELECT * FROM Sales.SalesOrderDetail WHERE OrderQty = 14
        GO
        SELECT * FROM Sales.SalesOrderDetail WHERE OrderQty = 14 OPTION (FAST 100)
        GO
        ---------------------------------------------
        -- Table Scan with Where on Index Key
        SELECT * FROM Sales.SalesOrderDetail WHERE SalesOrderDetailID < 1000
        GO
        SELECT * FROM Sales.SalesOrderDetail WHERE SalesOrderDetailID < 1000 OPTION (FAST 100)
        GO

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
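
    One way to observe the changed row estimate described above, without opening the graphical plan, is to request the estimated plan in text form and compare the EstimateRows column for the two statements. This is a small illustration of that idea, not something from the original article:

        -- Show the estimated plan (statements are not executed while SHOWPLAN_ALL is ON)
        SET SHOWPLAN_ALL ON
        GO
        SELECT * FROM Sales.SalesOrderDetail
        GO
        SELECT * FROM Sales.SalesOrderDetail OPTION (FAST 100)
        GO
        SET SHOWPLAN_ALL OFF
        GO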

    Read the article

  • Tom Cruise: Meet Fusion Apps UX and Feel the Speed

    - by ultan o'broin
    Unfortunately, I am old enough to remember, and now to admit that I really loved, the movie Top Gun. You know the one - Tom Cruise, US Navy F-14 ace pilot, Mr Maverick, crisis of confidence, meets woman, etc., etc. Anyway, one of more memorable lines (there were a few) was: "I feel the need, the need for speed." I was reminded of Tom Cruise recently. Paraphrasing a certain Senior Vice President talking about Oracle Fusion Applications and user experience at an all-hands meeting, I heard that: Applications can never be too easy to use. Performance can never be too fast. Developers, assume that your code is always "on". Perfect. You cannot overstate the user experience importance of application speed to users, or at least their perception of speed. We all want that super speed of execution and performance, and increasingly so as enterprise users bring the expectations of consumer IT into the work environment. Sten Vesterli (@stenvesterli), an Oracle Fusion Applications User Experience Advocate, also addressed the speed point artfully at an Oracle Usability Advisory Board meeting in Geneva. Sten asked us that when we next Googled something, to think about the message we see that Google has found hundreds of thousands or millions of results for us in a split second (for example, About 8,340,000 results (0.23 seconds)). Now, how many results can we see and how many can we use immediately? Yet, this simple message communicating the total results available to us works a special magic about speed, delight, and excitement that Google has made its own in the search space. And, guess what? The Oracle Application Development Framework table component relies on a similar "virtual performance boost", says Sten, when it displays the first 50 records in a table, and uses a scrollbar indicating the total size of the data record set. The user scrolls and the application automatically retrieves more records as needed. Application speed and its perception by users is worth bearing in mind the next time you're at a customer site and the IT Department demands that you retrieve every record from the database. Just think of... Dave Ensor: I'll give you all the rows you ask for in one second. If you promise to use them. (Again, hat tip to Sten.) And then maybe think of... Tom Cruise. And if you want to read about the speed of Oracle Fusion Applications, and what that really means in terms of user productivity for your entire business, then check out the Oracle Applications User Experience Oracle Fusion Applications white papers on the usable apps website.

    Read the article

  • Big Data – Buzz Words: What is NoSQL – Day 5 of 21

    - by Pinal Dave
    In yesterday's blog post we explored the basic architecture of Big Data. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – NoSQL.

    What is NoSQL? NoSQL stands for "Not Only SQL" (some also expand it as non-relational SQL). Lots of people think that NoSQL means there is no SQL, which is not true – the two sound the same but the meanings are totally different. A NoSQL system may still use SQL, but it uses more than SQL to achieve its goal. As per Wikipedia's NoSQL Database Definition – "A NoSQL database provides a mechanism for storage and retrieval of data that uses looser consistency models than traditional relational databases."

    Why use NoSQL? A traditional relational database usually deals with predictable, structured data. As the world has moved forward to unstructured data, we often see the limitations of the traditional relational database in dealing with it. For example, nowadays we have data in the form of SMS messages, wave files, photos and videos. It is a bit difficult to manage this data using a traditional relational database. I often see people using a BLOB field to store such data. A BLOB can store the data, but when we have to retrieve or process it, the BLOB is extremely slow at handling unstructured data. A NoSQL database is the type of database that can handle the unstructured, unorganized and unpredictable data that our business needs. Along with support for unstructured data, the other advantages of a NoSQL database are high performance and high availability.

    Eventual Consistency: Additionally, note that a NoSQL database may not provide 100% ACID (Atomicity, Consistency, Isolation, Durability) compliance. Though NoSQL databases may not support ACID, they provide eventual consistency. That means that over a long period of time all updates can be expected to propagate eventually through the system and the data will become consistent.

    Taxonomy: Taxonomy is the practice of classifying things or concepts and the principles behind it. The NoSQL taxonomy supports column stores, document stores, key-value stores, and graph databases. We will discuss the taxonomy in detail in later blog posts. Here are a few examples of each NoSQL category:

        Column: HBase, Cassandra, Accumulo
        Document: MongoDB, Couchbase, Raven
        Key-value: Dynamo, Riak, Azure, Redis, Cache, GT.m
        Graph: Neo4J, Allegro, Virtuoso, Bigdata

    As of now there are over 150 NoSQL databases, and you can read everything about them in this single link.

    Tomorrow: In tomorrow's blog post we will discuss the buzz word – Hadoop. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • [Windows 8] Application bar buttons symbols

    - by Benjamin Roux
    During the development of my current Windows 8 application, I wanted to add custom application bar buttons with symbols that were not available in the StandardStyle.xaml file created with the template project. First I tried to Bing some new symbols and I found this blog post by Tim Heuer with the list of all symbols available (supposedly), but the one I wanted was not there (a heart). In this blog post I'm going to show you how to retrieve all the symbols available without creating a custom path.

    First you have to start the "Character Map" tool and select "Segoe UI Symbol", then go to the end of the grid to see all the symbols available. When you want one, just select it and copy its code into the Content of your Button. In my case I wanted a heart and its code is "E0A5", so my button (or style in this case) became:

        <Style x:Key="LoveAppBarButtonStyle" TargetType="Button" BasedOn="{StaticResource AppBarButtonStyle}">
            <Setter Property="AutomationProperties.AutomationId" Value="LoveAppBarButtonStyle"/>
            <Setter Property="AutomationProperties.Name" Value="Love"/>
            <Setter Property="Content" Value="&#xE0A5;"/>
        </Style>

    Et voila. Hope this will help you (there are A LOT of symbols)!

    Read the article

  • SQL SERVER – Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video

    - by pinaldave
    You might have heard many times that one should not use SELECT *, as there are many disadvantages to its usage. I also believe that there are rare occasions when we do need every single column of the query. In most cases, we only need a few columns, and we should retrieve only those columns. SELECT * has many disadvantages. Let me list a few; the remaining ones you can add as a comment.

        - Retrieves unnecessary columns and increases network traffic
        - When new columns are added, views need to be refreshed manually
        - Leads to usage of a sub-optimal execution plan
        - Uses the clustered index in most cases instead of the optimal index
        - It is difficult to debug

    There are two quick tricks I have discussed in the video which explain how users can avoid using SELECT * and instead list the column names.

        1) Drag the Columns folder from SQL Server Management Studio into the Query Editor
        2) Right-click on the table name >> Script Table As >> SELECT To… >> select the option

    It is extremely easy to list the column names of a table. In today's sixty-second video, you will notice that I was able to demonstrate both methods very quickly. From now on there should be no excuse for not listing column names. Let me ask a question back – is there ever a reason to SELECT *? If yes, would you please share that as a comment?

    More on SELECT *:
        SQL SERVER – Solution – Puzzle – SELECT * vs SELECT COUNT(*)
        SQL SERVER – Puzzle – SELECT * vs SELECT COUNT(*)
        SQL SERVER – SELECT vs. SET Performance Comparison

    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea, we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
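
    A third, scriptable option – my own addition, not one of the two tricks shown in the video – is to generate the column list from the catalog views; the table name below is only an example:

        -- Build a comma-separated column list for a table
        SELECT STUFF((
            SELECT ', ' + c.name
            FROM sys.columns AS c
            WHERE c.object_id = OBJECT_ID('Sales.SalesOrderDetail')
            ORDER BY c.column_id
            FOR XML PATH('')), 1, 2, '') AS ColumnList;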

    Read the article

  • SQL SERVER – DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012

    - by pinaldave
    Have you ever faced a situation where something does not work, and when you go to fix it you end up liking the fix and appreciating the breaking change? Well, that is exactly how I felt yesterday. Before I begin yesterday's story, I want to state candidly that I do not encourage anybody to use * in the SELECT statement. One of my DBA friends, who regularly uses my performance tuning scripts, sent me an email yesterday asking the following question:

        "Every time I want to retrieve OS-related information in SQL Server, I use the DMV sys.dm_os_sys_info. I just upgraded from SQL Server 2008 R2 to SQL Server 2012 RC0 and it suddenly stopped working. This is not the production server, so the issue is not big yet, but eventually I need to resolve this error. Any suggestions?"

    The funny thing was that the original email was very long, but it did not say what the exact error was, besides that the query was not working. I think this is the disadvantage of being too friendly over email sometimes. Well, nevertheless, I quickly looked at the DMV on my SQL Server 2008 R2 and SQL Server 2012 RC0 instances. To my surprise, I found that a few columns were renamed in SQL Server 2012 RC0. Usually when people see breaking changes they do not like them, but when I saw these changes I was happy, as the new names are meaningful and, additionally, the new unit they report in is much more practical and useful. Here are the previous and new column names:

        Previous Column Name        New Column Name
        physical_memory_in_bytes    physical_memory_kb
        bpool_commit_target         committed_target_kb
        bpool_visible               visible_target_kb
        virtual_memory_in_bytes     virtual_memory_kb
        bpool_committed             committed_kb

    If you read the list carefully, you will notice that the new columns now report several values in KB, whereas the earlier results were in bytes. When I see results in bytes I always get confused, as I cannot easily guess what they convert to. I like to see results in KB, and I am glad the new columns now display them that way. I sent the details of the new columns to my friend and asked him to check the columns used in his application. From my comment he quickly realized why he was getting the error and fixed it immediately. Overall, all was well at the end and I learned something new. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
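
    As a quick check on SQL Server 2012, the renamed columns can be queried directly; this small query is my illustration and was not part of the original exchange:

        -- SQL Server 2012 and later: memory figures are reported in KB
        SELECT physical_memory_kb,
               committed_target_kb,
               visible_target_kb,
               virtual_memory_kb,
               committed_kb
        FROM sys.dm_os_sys_info;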

    Read the article

  • SQL SERVER – Saturday Fun Puzzle with SQL Server DATETIME2 and CAST

    - by pinaldave
    Note: I have used SQL Server 2012 for this small fun experiment. Here is what we are going to do: we will run the scripts one at a time instead of running them all together, and try to guess the answer. I am confident that many will get it correct, but if you do not, you will learn something new. Let us create a database and a sample table.

        CREATE DATABASE DB2012
        GO
        USE DB2012
        GO
        CREATE TABLE TableDT
        (DT1 VARCHAR(100), DT2 DATETIME2, DT1C AS DT1, DT2C AS DT2);
        INSERT INTO TableDT (DT1, DT2)
        SELECT GETDATE(), GETDATE()
        GO

    There are four columns in the table. The first column, DT1, is a regular VARCHAR and the second, DT2, is DATETIME2. Both columns are populated with the same data, as I have used the GETDATE() function. Now let us run a SELECT statement and get the result from both columns. Before running the query, please guess the answer and write it down on paper or in Notepad.

    Question 1: Guess the resultset

        SELECT DT1, DT2
        FROM TableDT
        GO

    Now once again run a SELECT statement on the same table, but this time retrieve the computed columns only. Once again I suggest you write down the result.

    Question 2: Guess the resultset

        SELECT DT1C, DT2C
        FROM TableDT
        GO

    Now here is the best part. Let us use the CAST function over the computed columns. Here I do want you to stop and guess the answer for sure. If you have not done it so far, stop and do it; believe me, you will like it.

    Question 3: Guess the resultset

        SELECT CAST(DT1C AS DATETIME2) CDT1C,
               CAST(DT2C AS DATETIME2) CDT1C
        FROM TableDT
        GO

    Now let us inspect all the answers together and see how many of you got it correct. Answer 1, Answer 2, Answer 3: (the result screenshots are in the original post). If you have not tried to run the scripts so far, you can execute all three of the above scripts together and see the results.

        SELECT CAST(DT1C AS DATETIME2) CDT1C,
               CAST(DT2C AS DATETIME2) CDT1C
        FROM TableDT
        GO

    Here is the Saturday Fun question for you – why do we get the same result from both expressions in Question 3, whereas in Question 2 the two expressions return different results? I will publish the answer with an explanation in a future blog post. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – OLEDB – Link Server – Wait Type – Day 23 of 28

    - by pinaldave
    When I decided to start writing about this wait type, the very first question that came to my mind was, "What does 'OLEDB' stand for?" A quick search on Wikipedia tells me that OLEDB means Object Linking and Embedding Database. (How many of you knew this?) Anyway, I found it very interesting that this wait type was in the top 10 wait types in many of the systems I have come across in my performance tuning experience.

    Books Online: OLEDB occurs when SQL Server calls the SQL Server Native Client OLE DB Provider. This wait type is not used for synchronization. Instead, it indicates the duration of calls to the OLE DB provider.

    OLEDB Explanation: This wait type primarily happens when a linked server or remote query has been executed. The most common case wherein this wait type is visible is during the execution of a linked server query. When SQL Server is retrieving data from the remote server, it uses the OLEDB API to retrieve the data. It is possible that the remote system is not quick enough or that the connection between them is not fast enough, leading SQL Server to wait for the results to return from the remote (or external) server. This is when the OLEDB wait type occurs.

    Reducing OLEDB wait:
        - Check the linked server configuration.
        - Check disk-related Perfmon counters:
            - Average Disk sec/Read (a consistently higher value than 4-8 milliseconds is not good)
            - Average Disk sec/Write (a consistently higher value than 4-8 milliseconds is not good)
            - Average Disk Read/Write Queue Length (a consistently higher value than your benchmark is not good)

    At this point in time, I am not able to think of any more ways of reducing this wait type. Do you have any opinion about this subject? Please share it here and I will share your comment with the rest of the community, and of course, with due credit unto you. Please read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of wait stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
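
    To see how much OLEDB wait a server has actually accumulated, the wait type can be checked in the wait-stats DMV; a small illustrative query (my addition, not from the post):

        -- Cumulative OLEDB wait time since the last SQL Server restart (or stats clear)
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'OLEDB';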

    Read the article

  • SharePoint 2010 Field Expression Builder

    - by Ricardo Peres
    OK, back to two of my favorite topics, expression builders and SharePoint. This time I wanted to be able to retrieve a field value from the current page declaratively in the markup, so that I can assign it to some control's property without the need for writing code. Of course, the most straightforward way to do it is through an expression builder. Here's the code:

        [ExpressionPrefix("SPField")]
        public class SPFieldExpressionBuilder : ExpressionBuilder
        {
            #region Public static methods
            public static Object GetFieldValue(String fieldName, PropertyInfo propertyInfo)
            {
                Object fieldValue = SPContext.Current.ListItem[fieldName];

                if (fieldValue != null)
                {
                    if ((fieldValue is IConvertible) && (typeof(IConvertible).IsAssignableFrom(propertyInfo.PropertyType) == true))
                    {
                        if (propertyInfo.PropertyType.IsAssignableFrom(fieldValue.GetType()) != true)
                        {
                            fieldValue = Convert.ChangeType(fieldValue, propertyInfo.PropertyType);
                        }
                    }
                }

                return (fieldValue);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetFieldValue(entry.Expression, entry.PropertyInfo));
            }

            public override CodeExpression GetCodeExpression(BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                if (String.IsNullOrEmpty(entry.Expression) == true)
                {
                    return (new CodePrimitiveExpression(String.Empty));
                }
                else
                {
                    return (new CodeMethodInvokeExpression(new CodeMethodReferenceExpression(new CodeTypeReferenceExpression(this.GetType()), "GetFieldValue"), new CodePrimitiveExpression(entry.Expression), new CodePropertyReferenceExpression(new CodeArgumentReferenceExpression("entry"), "PropertyInfo")));
                }
            }
            #endregion

            #region Public override properties
            public override Boolean SupportsEvaluate
            {
                get
                {
                    return (true);
                }
            }
            #endregion
        }

    You will notice that it will even try to convert the field value to the target property's type, through the use of the IConvertible interface and the Convert.ChangeType method. It must be placed in the Global Assembly Cache or you will get a security-related exception; the other alternative is to change the trust level of your web application to full trust. Here's how to register it in Web.config:

        <expressionBuilders>
            <!-- ... -->
            <add expressionPrefix="SPField" type="MyNamespace.SPFieldExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=29186a6b9e7b779f" />
        </expressionBuilders>

    And finally, here's how to use it in an ASPX or ASCX file inside a publishing page:

        <asp:Label runat="server" Text="<%$ SPField:Title %>"/>

    Read the article

  • Solaris Tips : Assembler, Format, File Descriptors, Ciphers & Mount Points

    - by Giri Mandalika
    1. Most Oracle software installers need the assembler
    The assembler (as) is not installed by default on Solaris 11. Find and install it, e.g.,

        # pkg search assembler
        INDEX ACTION VALUE PACKAGE
        pkg.fmri set solaris/developer/assembler pkg:/developer/[email protected]
        # pkg install pkg:/developer/assembler

    The assembler binary used to be under the /usr/ccs/bin directory on Solaris 10 and prior versions. There is no /usr/ccs/bin on Solaris 11; its contents were moved to /usr/bin.

    2. Non-interactive retrieval of the entire list of disks that format reports
    If the format utility cannot show the entire list of disks in a single screen on stdout, it shows some and prompts the user -- hit space for more or s to select -- to move to the next screen and show a few more disks. Run the following command(s) to retrieve the entire list of disks in a single shot.

        format

    3. Finding system-wide file descriptors/handles in use
    Run the following kstat command as any user (privileged or non-privileged).

        kstat -n file_cache -s buf_inuse

    Going through /proc (the process filesystem) is less efficient and may lead to inaccurate results due to the inclusion of duplicate file handles.

    4. ssh connection to a Solaris 11 host fails with the error "Couldn't agree a client-to-server cipher (available: aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,arcfour)"
    Solution: add 3des-cbc to the list of accepted ciphers in the sshd configuration file. Steps: append the following line to /etc/ssh/sshd_config

        Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,\
        arcfour,3des-cbc

    then restart the ssh daemon:

        svcadm -v restart ssh

    5. UFS: Finding the last mount point for a device
    The fsck utility reports the last mountpoint on which the filesystem was mounted (it won't show the mount options, though). The filesystem should be unmounted when running fsck. e.g.,

        # fsck -n /dev/dsk/c0t5000CCA0162F7BC0d0s6
        ** /dev/rdsk/c0t5000CCA0162F7BC0d0s6 (NO WRITE)
        ** Last Mounted on /export/oracle
        ** Phase 1 - Check Blocks and Sizes
        ...

    Read the article

  • Case Management API by Koen van Dijk

    - by JuergenKress
    Case Management is a new addition to Oracle BPM in release 11.1.1.1.7 (PS6). This new release contains the Case Management engine, see blog Léon at  http://leonsmiers.blogspot.nl/ for more details.  However, currently this release does not contain a case portal. The case management API's, just like the already existing Oracle BPM API's, help in developing a portal page with relative ease. This blog will use some real life examples from the EURent casemanagement application and portal application developed by Oracle. The Oracle BPM Case Management API is a Java Based API that enables developers to programmatically access the new Case Management functionalities. It is an elaborate API that can access all the functionalities of Oracle Case Management. I will describe two of those functionalities in this blog: retrieving case data as DOM (http://www.w3.org/DOM/) and attaching a document to a case. Libraries First of all when creating a Case Management project you will need to attach the following libraries: These contain all the classes that are in the Case Management API. Service client To do anything with the BPM CaseManagement API in general it is necessary to create a CaseManagementServiceClient Object. The Case Management service client is the central piece of the Case Management API. It can be used to retrieve two different types of services. The first is the case stream service and the case service. The case stream service contains functionality to upload and download documents to and from a case. The second one is the CaseService. This service contains all the other functionality acting upon a case including but not limited to: Read the complete article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: ACM API,adaptive Case Management,BPM,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Thinktecture.IdentityServer RC

    - by Your DisplayName here!
    I just uploaded the RC of IdentityServer to Codeplex. This release is feature complete and if I don’t get any bug reports this is also pretty much the final V1. Changes from B1 The configuration data access is now based on EF 4.1 code first. This makes it much easier to use different data stores. For RTM I will also provide a SQL script for SQL Server so you can move the configuration to a separate machine (e.g. for load balancing scenarios). I included the ASP.NET Universal Providers in the download. This adds official support for SQL Azure, SQL Server and SQL Compact for the membership, roles and profile features. Unfortunately the Universal Provider use a different schema than the original ASP.NET providers (that sucks btw!) – so I made them optional. If you want to use them go to web.config and uncomment the new provider. The relying party registration entries now have added fields to add extra data that you want to couple with the RP. One use case could be to give the UI a hint how the login experience should look like per RP. This allows to have a different look and feel for different relying parties. I also included a small helper API that you can use to retrieve the RP record based on the incoming WS-Federation query string. WS-Federation single sign out is now conforming to the spec. Certificate based endpoint identities for SSL endpoints are optional now. Added a initial configuration “wizard”. This sets up the signing certificate, issuer URI and site title on the first run. Installation This is still a “developer” release – that means it ships with source code that you have to build it etc. But from that point it should be a little more straightforward as it used to be: Make sure SSL is configured correctly for IIS Map the WebSite directory to a vdir in IIS Run the web site. This should bring up the initial configuration Make sure the worker process account has access to the signing certificate private key Make sure all your users are in the “IdentityServerUsers” role in your role store. Administrators need the “IdentityServerAdministrators” role That should be it. A proper documentation will be hopefully available soon (any volunteers?). Please provide feedback! thanks!

    Read the article

  • Mounting a Microsoft Azure CloudDrive in a VMRole

    - by SeanBarlow
    Mounting a drive in a VMRole is a little more complicated than in a web or worker role. The Web and Worker roles offer OnStart and OnStop events, which you can use to mount or unmount your drives. The VMRole does not have these same events, so you have to provide another way for the drives to be mounted or unmounted. The problem I have run into is: what if you have multiple drives and you only want to mount certain drives? How do you let your user mount the drive? I am not going to go into details on what kind of GUI to present to the user. I have done this in a simple WPF application as well as a console application.

    We are going to need to get the storage account details. One thing to note: when you are mounting cloud drives you cannot use https and have to use http. We force the use of http by passing false when we create the CloudStorageAccount.

        StorageCredentialsAccountAndKey credentials = new StorageCredentialsAccountAndKey("AccountName", "AccountKey");
        CloudStorageAccount storageAccount = new CloudStorageAccount(credentials, false);

    Next we need to get a reference to the container.

        var blobClient = storageAccount.CreateCloudBlobClient();
        var container = blobClient.GetContainerReference("ContainerName");

    Now we need to get a list of the drives in the container.

        var drives = container.ListBlobs();

    Now that we have a list of the drives in the container, we can let the user choose which drive they want to mount. I am just selecting the first drive in the list for the example and getting the Uri of the drive.

        var driveUri = drives.First().Uri;

    Now that we have the Uri, we need to get the reference to the drive.

        var drive = new CloudDrive(driveUri, storageAccount.Credentials);

    Now all that is left is to mount the drive.

        var driveLetter = drive.Mount(0, DriveMountOptions.None);

    To unmount the drive, all you have to do is call Unmount on the drive.

        drive.Unmount();

    You do need to make sure you unmount the drives when you are done with them. I have run into issues with the drives being locked until the VMRole is rebooted. I have also managed to have a drive be permanently locked, and I was forced to delete it and upload it again. I have been unable to reproduce the permanent lock but I am still trying. The CloudDrive class provides a handy method to retrieve all the mounted drives in the Role.

        foreach (var drive in CloudDrive.GetMountedDrives())
        {
            var mountedDrive = Account.CreateCloudDrive(drive.Value.PathAndQuery);
            mountedDrive.Unmount();
        }

    Read the article

  • Increase application performance

    - by Prayos
    I'm writing a program for a company that will generate a daily report for them. All of the data that they use for this report is stored in a local SQLite database. For this report, the utilize pretty much every bit of the information in the database. So currently, when I query the datbase, I retrieve everything, and store the information in lists. Here's what I've got: using (var dataReader = _connection.Select(query)) { if (dataReader.HasRows) { while (dataReader.Read()) { _date.Add(Convert.ToDateTime(dataReader["date"])); _measured.Add(Convert.ToDouble(dataReader["measured_dist"])); _bit.Add(Convert.ToDouble(dataReader["bit_loc"])); _psi.Add(Convert.ToDouble(dataReader["pump_press"])); _time.Add(Convert.ToDateTime(dataReader["timestamp"])); _fob.Add(Convert.ToDouble(dataReader["force_on_bit"])); _torque.Add(Convert.ToDouble(dataReader["torque"])); _rpm.Add(Convert.ToDouble(dataReader["rpm"])); _pumpOneSpm.Add(Convert.ToDouble(dataReader["pump_1_strokes_pm"])); _pumpTwoSpm.Add(Convert.ToDouble(dataReader["pump_2_strokes_pm"])); _pullForce.Add(Convert.ToDouble(dataReader["pull_force"])); _gpm.Add(Convert.ToDouble(dataReader["flow"])); } } } I then utilize these lists for the calculations. Obviously, the more information that is in this database, the longer the initial query will take. I'm curious if there is a way to increase the performance of the query at all? Thanks for any and all help. EDIT One of the report rows is called Daily Drilling Hours. For this calculation, I use this method: // Retrieves the timestamps where measured depth == bit depth and PSI >= 50 public double CalculateDailyProjectDrillingHours(DateTime date) { var dailyTimeStamps = _time.Where((t, i) => _date[i].Equals(date) && _measured[i].Equals(_bit[i]) && _psi[i] >= 50).ToList(); return _dailyDrillingHours = Convert.ToDouble(Math.Round(TimeCalculations(dailyTimeStamps).TotalHours, 2, MidpointRounding.AwayFromZero)); } // Checks that the interval is less than 10, then adds the interval to the total time private static TimeSpan TimeCalculations(IList<DateTime> timeStamps) { var interval = new TimeSpan(0, 0, 10); var totalTime = new TimeSpan(); TimeSpan timeDifference; for (var j = 0; j < timeStamps.Count - 1; j++) { if (timeStamps[j + 1].Subtract(timeStamps[j]) <= interval) { timeDifference = timeStamps[j + 1].Subtract(timeStamps[j]); totalTime = totalTime.Add(timeDifference); } } return totalTime; }
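
    One direction that might help, sketched here as an assumption rather than a tested fix: push the date filter (and even the measured-depth/PSI conditions) into the SQL query so SQLite returns only the rows each report needs, instead of loading every row into parallel lists. The table name below is hypothetical – the post never names it – and the columns are the ones read in the code above:

        -- Hypothetical table name; only the rows relevant to one report day are returned
        SELECT timestamp, measured_dist, bit_loc, pump_press
        FROM drilling_data
        WHERE date = @date
          AND measured_dist = bit_loc
          AND pump_press >= 50
        ORDER BY timestamp;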

    Read the article

  • AWS .NET SDK v2: the message-pump pattern

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/11/aws-.net-sdk-v2--the-message-pump-pattern.aspxVersion 2 of the AWS SDK for .NET has had a few pre-release iterations on NuGet and is stable, if a bit lacking in step-by-step guides. There’s at least one big reason to try it out: the SQS queue client now supports asynchronous reads, so you don’t need a clumsy polling mechanism to retrieve messages. The new approach  is easy to use, and lets you work with AWS queues in a similar way to the message-pump pattern used in the latest Azure SDK for Service Bus queues and topics. I’ve posted a simple wrapper class for subscribing to an SQS hub on gist here: A wrapper for the SQS client in the AWS SDK for.NET v2, which uses the message-pump pattern. Here’s the core functionality in the subscribe method: private async void Subscribe() { if (_isListening) { var request = new ReceiveMessageRequest { MaxNumberOfMessages = 10 }; request.QueueUrl = QueueUrl; var result = await _sqsClient.ReceiveMessageAsync(request, _cancellationTokenSource.Token); if (result.Messages.Count > 0) { foreach (var message in result.Messages) { if (_receiveAction != null && message != null) { _receiveAction(message.Body); DeleteMessage(message.ReceiptHandle); } } } } if (_isListening) { Subscribe(); } } which you call with something like this: client.Subscribe(x=>Log.Debug(x.Body)); The async SDK call returns when there is something in the queue, and will run your receive action for every message it gets in the batch (defaults to the maximum size of 10 messages per call). The listener will sit there awaiting messages until you stop it with: client.Unsubscribe(); Internally it has a cancellation token which it sets when you call unsubscribe, which cancels any in-flight call to SQS and stops the pump. The wrapper will also create the queue if it doesn’t exist at runtime. The Ensure() method gets called in the constructor so when you first use the client for a queue (sending or subscribing), it will set itself up: if (!Exists()) { var request = new CreateQueueRequest(); request.QueueName = QueueName; var response = _sqsClient.CreateQueue(request); QueueUrl = response.QueueUrl; } The Exists() check has to do make a call to ListQueues on the SNS client, as it doesn’t provide its own method to check if a queue exists. That call also populates the Amazon Resource Name, the unique identifier for this queue, which will be useful later. To use the wrapper, just instantiate and go: var queueClient = new QueueClient(“ProcessWorkflow”); queueClient.Subscribe(x=>Log.Debug(x.Body)); var message = {}; //etc. queueClient.Send(message);

    Read the article

  • The Oracle Cash Management Secret Very Few Customers Know About

    - by Theresa Hickman
    Did you know that Oracle Cash Management has a robust positioning feature? I had no idea. I was under the mistaken impression that Oracle Cash Management only did bank statement reconciliations. It seems I am not alone. In fact, many Oracle Financials customers are also not aware of this even though it is delivered for free with the Oracle Financials license. Even better, last week, Oracle released an enhancement to Oracle Cash Management for Release 12 that will greatly help customers with their cash positioning needs. As we all know, credit is tight these days. Companies need better visibility of their cash and other liquidity positions to make better use of their cash resources. Today, many customers are managing their cash positions manually using spreadsheets. We also hear how many of them are maintaining larger than normal balances in numerous bank accounts because they just do not have the visibility, and therefore the comfort they need. Although spreadsheets may work in the short-term, they are not the best way to manage your cash positions for the long-term especially if you have dozens, or even hundreds of bank and brokerage accounts. Also, spreadsheets are a lot more risky because they can be overwritten, deleted, difficult to audit, etc. With the newly enhanced positioning feature in Oracle Cash Management, customers can manage their daily cash positions using an excel-like interface that is very flexible and user-configurable. You can link the worksheet to an unlimited number of bank accounts to automatically retrieve your opening balances, the current/intra-day cash inflows and outflows, as well as your expected cash flows from your Fx, Investment and Debt positions if you have Oracle's Treasury module . Oracle Cash Management also has direct integration with Oracle Receivables, Oracle Payables, and Payroll, which adds to the comprehensive picture of what's happening with your organizations' cash in real-time. Here's a screen shot of what the cash positioning page looks like: View image As you can see, your Treasurers can obtain a holistic view of all cash positions across any number of bank accounts as well as other sources of cash flow movements. Depending on how they manage their accounts, they can also use this feature to initiate or monitor bank account sweeps or transfers between their zero balance accounts (ZBA) or cash pools. The cash position worksheet provide drill down for more detail and the ability to manually enter items directly into the worksheet for even greater flexibility and control. The enhancements to this feature were released last week. The following list the patches for Release 12.0.6 and 12.1.1: For more information, visit the following website. http://launch.oracle.com. PIN: yes2try

    Read the article

  • SQL SERVER – Working with FileTables in SQL Server 2012 – Part 2 – Methods to Insert Data Into Table

    - by pinaldave
    Read Part 1: Working with FileTables in SQL Server 2012 – Part 1 – Setting Up Environment. In this second part of the series, we will see how we can insert files into FileTables. There are two methods to insert data into FileTables:

    Method 1: Copy-paste data into the FileTable folder

    First, find the folder where the FileTable will be storing the files. Go to Databases >> Newly Created Database (FileTableDB) >> Expand Tables. Here you will see a new folder which says "FileTables". When expanded, it gives the name of the newly created "FileTableTb". Right-click on the newly created table and click on "Explore FileTable Directory". This will open up the folder where the FileTable data will be stored. On my local machine it opens the following folder:

        \\127.0.0.1\mssqlserver\FileTableDB\FileTableTb_Dir

    You can just copy your documents there. I copied a few Word documents there and ran a SELECT statement to see the result.

        USE [FileTableDB]
        GO
        SELECT * FROM FileTableTb
        GO

    SELECT * returns all the rows. Here is a SELECT statement which has only a few columns selected from the FileTable:

        SELECT [name]
              ,[file_type]
              ,CAST([file_stream] AS VARCHAR) FileContent
              ,[cached_file_size]
              ,[creation_time]
              ,[last_write_time]
              ,[last_access_time]
        FROM [dbo].[FileTableTb]
        GO

    I believe this is the simplest method to populate a FileTable, because you just have to move the files to the table's folder.

    Method 2: T-SQL INSERT statement

    There are always cases when you might want to programmatically insert files into a SQL Server FileTable. Here is a quick method which you can use to insert data into the FileTable. I have inserted a very small text file using T-SQL, and later read it back using the SELECT statement demonstrated in Method 1 above.

        INSERT INTO [dbo].[FileTableTb] ([name],[file_stream])
        SELECT 'NewFile.txt', *
        FROM OPENROWSET(BULK N'd:\NewFile.txt', SINGLE_BLOB) AS FileData
        GO

    The above T-SQL statement copies NewFile.txt to the new location. When you run the SELECT statement, it retrieves the file and lists it in the resultset; additionally, it returns the content in the SELECT statement as well. I think it is a pretty interesting way to insert data into a FileTable.

        SELECT [name]
              ,[file_type]
              ,CAST([file_stream] AS VARCHAR) FileContent
              ,[cached_file_size]
              ,[creation_time]
              ,[last_write_time]
              ,[last_access_time]
        FROM [dbo].[FileTableTb]
        GO

    There is more to FileTables, and we will see that in my future blog posts. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Filestream
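
    A related detail worth knowing (my addition, not covered in this excerpt): SQL Server 2012 also provides functions that return the UNC path of a FileTable and of each stored file, which avoids hard-coding the share path. A small sketch, assuming the same database and table names used above:

        -- Root UNC path of the FileTable and the full path of each stored file
        SELECT FileTableRootPath('dbo.FileTableTb') AS FileTableRoot;

        SELECT [name],
               file_stream.GetFileNamespacePath(1) AS FullUncPath
        FROM [dbo].[FileTableTb];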

    Read the article

  • Security Controls on data for P6 Analytics

    - by Jeffrey McDaniel
    The Star database and P6 Analytics calculate security based on P6 security, using OBS, global, project, cost, and resource security considerations. If there is some concern that users are not seeing expected data in P6 Analytics, here are some areas to review:

    1. Determining whether a user has cost security is based on the project-level security privileges – either View Project Costs/Financials or Edit EPS Financials. If you expect to see costs, make sure one of these permissions is allocated.
    2. The user must have OBS access on the project, not at the WBS level. WBS-level security is not supported, so make sure the user has OBS access at the project level.
    3. Resource access is determined by what is granted in P6. Verify the resource access granted to this user in P6. Resource security is hierarchical. Project access will override resource access based on the way security policies are applied.
    4. Module access must be given to a P6 user for that user to come over into Star/P6 Analytics. In earlier versions of RDB there was a report_user_flag on the Users table; this flag field is no longer used after P6 Reporting Database 2.1.
    5. For P6 Reporting Database versions 2.2 and higher, the Extended Schema Security service must be run to calculate all security. After any changes to privileges or security, this service must be rerun before any ETL.
    6. In P6 Analytics 2.0 or higher, a WebLogic user must exist that matches the P6 username. For example, the user Tim must exist in both P6 and the WebLogic users for Tim to be able to log into P6 Analytics and access data based on P6 security. In earlier versions the username needed to exist in the RPD.
    7. Cache in OBI is another area that can sometimes make it seem as if a user isn't seeing the data they expect. While the cache can be beneficial for performance in OBI, if the data is outdated it can return older, stale data. Clearing or turning off the cache when rerunning a query can determine whether the returned result set came from the cache or from the database.

    Read the article

  • Retrieving upcoming calendar events from a Google Calendar

    - by brian_ritchie
    Google has a great cloud-based calendar service that is part of their Gmail product. Besides using it as a personal calendar, you can use it to store events for display on your web site. The calendar is accessible through Google's GData API, for which they provide a C# SDK. Here's some code to retrieve the upcoming entries from the calendar:

        public class CalendarEvent
        {
            public string Title { get; set; }
            public DateTime StartTime { get; set; }
        }

        public class CalendarHelper
        {
            public static CalendarEvent[] GetUpcomingCalendarEvents(int numberofEvents)
            {
                CalendarService service = new CalendarService("youraccount");
                EventQuery query = new EventQuery();
                query.Uri = new Uri("http://www.google.com/calendar/feeds/userid/public/full");
                query.FutureEvents = true;
                query.SingleEvents = true;
                query.SortOrder = CalendarSortOrder.ascending;
                query.NumberToRetrieve = numberofEvents;
                query.ExtraParameters = "orderby=starttime";
                var events = service.Query(query);
                return (from e in events.Entries
                        select new CalendarEvent()
                        {
                            StartTime = (e as EventEntry).Times[0].StartTime,
                            Title = e.Title.Text
                        }).ToArray();
            }
        }

    There are a few special "tricks" to make this work:

        - The "SingleEvents" flag will flatten out recurring events
        - "FutureEvents", "SortOrder", and the "orderby" parameters will get the upcoming events
        - "NumberToRetrieve" will limit the amount coming back

    I then use LINQ to Objects to put the results into my own DTO for use by my model. It is always a good idea to place data into your own DTO for use within your MVC model. This protects the rest of your code from changes to the underlying calendar source or API.

    Read the article
