Search Results

Search found 21742 results on 870 pages for 'unique values'.


  • When I try to redefine a variable, I get an index out of bounds error

    - by user2770254
    I'm building a program to act as a calculator with memory, so you can define variables and their values. Whenever I try to redefine a variable, say a = 5 to a = 6, I get an index out of bounds error. I've included the whole program because I'm not sure where the error is arising from:

        public static void main(String args[]) {
            LinkedHashMap<String,Integer> map = new LinkedHashMap<String,Integer>();
            Scanner scan = new Scanner(System.in);
            ArrayList<Integer> values = new ArrayList<>();
            ArrayList<String> variables = new ArrayList<>();
            while (scan.hasNextLine()) {
                String line = scan.nextLine();
                String[] tokens = line.split(" ");
                if (!Character.isDigit(tokens[0].charAt(0)) && !line.equals("clear") && !line.equals("var")) {
                    int value = 0;
                    for (int i = 0; i < tokens.length; i++) {
                        if (tokens.length == 3) {
                            value = Integer.parseInt(tokens[2]);
                            System.out.printf("%5d\n", value);
                            if (map.containsKey(tokens[0])) {
                                values.set(values.indexOf(tokens[0]), value);
                                variables.set(variables.indexOf(tokens[0]), tokens[0]);
                            } else {
                                values.add(value);
                            }
                            break;
                        } else if (tokens[i].charAt(0) == '+') {
                            value = addition(tokens, value);
                            System.out.printf("%5d\n", value);
                            variables.add(tokens[0]);
                            if (map.containsKey(tokens[0])) {
                                values.set(values.indexOf(tokens[0]), value);
                                variables.set(variables.indexOf(tokens[0]), tokens[0]);
                            } else {
                                values.add(value);
                            }
                            break;
                        } else if (i == tokens.length - 1 && tokens.length != 3) {
                            System.out.println("No operation");
                            break;
                        }
                    }
                    map.put(tokens[0], value);
                }
                if (Character.isDigit(tokens[0].charAt(0))) {
                    int value = 0;
                    if (tokens.length == 1) {
                        System.out.printf("%5s\n", tokens[0]);
                    } else {
                        value = addition(tokens, value);
                        System.out.printf("%5d\n", value);
                    }
                }
                if (line.equals("clear")) {
                    clear(map);
                }
                if (line.equals("var")) {
                    variableList(variables, values);
                }
            }
        }

        public static int addition(String[] a, int b) {
            for (String item : a) {
                if (Character.isDigit(item.charAt(0))) {
                    int add = Integer.parseInt(item);
                    b = b + add;
                }
            }
            return b;
        }

        public static void clear(LinkedHashMap<String,Integer> b) {
            b.clear();
        }

        public static void variableList(ArrayList<String> a, ArrayList<Integer> b) {
            for (int i = 0; i < a.size(); i++) {
                System.out.printf("%5s: %d\n", a.get(i), b.get(i));
            }
        }
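    For anyone skimming, a hedged sketch of a likely failure mode (my reading of the code above, not a confirmed diagnosis): values is an ArrayList<Integer>, so values.indexOf(tokens[0]) searches an Integer list for a String, never finds it, and returns -1, which makes the following set() throw:

        import java.util.ArrayList;

        public class IndexProbe {
            public static void main(String[] args) {
                ArrayList<Integer> values = new ArrayList<>();
                values.add(5);
                int idx = values.indexOf("a");  // searching an Integer list for a String: always -1
                values.set(idx, 6);             // throws IndexOutOfBoundsException: index -1
            }
        }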

  • VirtualBox Issue: virtualbox changed my Computer Name's ip address in Windows

    - by suud
    I installed VirtualBox 4.2.2 on Windows 7. My computer name is MY-PC and my IP address (using the ipconfig /all command) is 192.168.1.101. My IP is dynamic and I set DNS to Google DNS (8.8.8.8). When I ping MY-PC, I get this result:

        Pinging MY-PC [192.168.56.1] with 32 bytes of data:
        Reply from 192.168.56.1: bytes=32 time<1ms TTL=128
        Reply from 192.168.56.1: bytes=32 time<1ms TTL=128
        Reply from 192.168.56.1: bytes=32 time<1ms TTL=128
        Reply from 192.168.56.1: bytes=32 time<1ms TTL=128

    My VirtualBox was not running, and I expected the IP address of MY-PC to be 192.168.1.101, not 192.168.56.1. Then I ran the command nbtstat -a MY-PC and got this result:

        VirtualBox Host-Only Network:
        Node IpAddress: [192.168.56.1] Scope Id: []
            NetBIOS Remote Machine Name Table
            Name          Type         Status
            ---------------------------------------------
            MY-PC         <00>  UNIQUE  Registered
            WORKGROUP     <00>  GROUP   Registered
            MY-PC         <20>  UNIQUE  Registered
            MAC Address = 08-00-27-00-60-B3

        Local Area Connection:
        Node IpAddress: [0.0.0.0] Scope Id: []
            Host not found.

        Wireless Network Connection:
        Node IpAddress: [192.168.1.101] Scope Id: []
            NetBIOS Remote Machine Name Table
            Name          Type         Status
            ---------------------------------------------
            MY-PC         <00>  UNIQUE  Registered
            WORKGROUP     <00>  GROUP   Registered
            MY-PC         <20>  UNIQUE  Registered
            MAC Address = 94-0C-6D-E5-6D-5D

    So it seems VirtualBox caused this problem. How can I change my computer name's IP address back to 192.168.1.101 (or whatever IP address my internet connection assigns)?
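    One hedged way to confirm the host-only adapter is the culprit is to disable it temporarily and re-test name resolution (these are the built-in netsh commands; the adapter name below is VirtualBox's default and may differ on your machine):

        :: Run from an elevated command prompt
        netsh interface set interface name="VirtualBox Host-Only Network" admin=disabled
        ping MY-PC
        :: Re-enable the adapter afterwards so VirtualBox networking keeps working
        netsh interface set interface name="VirtualBox Host-Only Network" admin=enabled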

  • Using Table-Valued Parameters in SQL Server

    - by Jesse
    I work with stored procedures in SQL Server pretty frequently and have often found myself needing to pass in a list of values at run-time. Quite often this list contains a set of ids on which the stored procedure needs to operate, the size and contents of which are not known at design time. In the past I've taken the collection of ids (which are usually integers), converted them to a string representation where each value is separated by a comma, and passed that string into a VARCHAR parameter of a stored procedure. The body of the stored procedure would then need to parse that string into a table variable, which could be easily consumed with set-based logic within the rest of the stored procedure. This approach works pretty well, but the VARCHAR variable has always felt like an unwanted "middle man" in this scenario. Of course, I could use a BULK INSERT operation to load the list of ids into a temporary table that the stored procedure could use, but that approach seems heavy-handed in situations where the list usually contains only a few dozen values. Fortunately, SQL Server 2008 introduced the concept of table-valued parameters, which effectively eliminates the need for the clumsy middle-man VARCHAR parameter.

    Example: Customer Transaction Summary Report

    Let's say we have a report that can summarize the transactions that we've conducted with customers over a period of time. The report returns a pretty simple dataset containing one row per customer with some key metrics about how much business that customer has conducted over the date range for which the report is being run. Sometimes the report is run for a single customer, sometimes it's run for all customers, and sometimes it's run for a handful of customers (i.e. a salesman runs it for the customers that fall into his sales territory). This report can be invoked from a website on-demand, or it can be scheduled for periodic delivery to certain users via SQL Server Reporting Services. Because the report can be created from different places and the query to generate the report is complex, it's been packed into a stored procedure that accepts three parameters:

        @startDate – The beginning of the date range for which the report should be run.
        @endDate – The end of the date range for which the report should be run.
        @customerIds – The customer ids for which the report should be run.

    Obviously, the @startDate and @endDate parameters are DATETIME variables. The @customerIds parameter, however, needs to contain a list of the identity values (primary key) from the Customers table representing the customers that were selected for this particular run of the report. In prior versions of SQL Server we might have made this parameter a VARCHAR variable, but with SQL Server 2008 we can make it into a table-valued parameter.

    Defining And Using The Table Type

    In order to use a table-valued parameter, we first need to tell SQL Server what the table will look like. We do this by creating a user defined type. For the purposes of this stored procedure we need a very simple type to model a table variable with a single integer column. We can create a generic type called 'IntegerListTableType' like this:

        CREATE TYPE IntegerListTableType AS TABLE (Value INT NOT NULL)

    Once defined, we can use this new type to define the @customerIds parameter in the signature of our stored procedure.
    The parameter list for the stored procedure definition might look like:

        CREATE PROCEDURE dbo.rpt_CustomerTransactionSummary
            @startDate datetime,
            @endDate datetime,
            @customerIds IntegerListTableType READONLY

    Note the 'READONLY' keyword following the declaration of the @customerIds parameter. SQL Server requires any table-valued parameter to be marked as 'READONLY', and no DML (INSERT/UPDATE/DELETE) statements can be performed on a table-valued parameter within the routine in which it's used. Aside from the DML restriction, however, you can do pretty much anything with a table-valued parameter that you could with a normal TABLE variable. With the user defined type and stored procedure defined as above, we could invoke it like this:

        DECLARE @customerIdList IntegerListTableType
        INSERT @customerIdList VALUES (1)
        INSERT @customerIdList VALUES (2)
        INSERT @customerIdList VALUES (3)

        EXEC dbo.rpt_CustomerTransactionSummary
            @startDate = '2012-05-01',
            @endDate = '2012-06-01',
            @customerIds = @customerIdList

    Note that we can simply declare a variable of type 'IntegerListTableType' just like any other normal variable and insert values into it just like a TABLE variable. We could also populate the variable with a SELECT … INTO or INSERT … SELECT statement if desired.

    Using The Table-Valued Parameter With ADO .NET

    Invoking a stored procedure with a table-valued parameter from ADO .NET is as simple as building a DataTable and passing it in as the Value of a SqlParameter. Here's some example code for how we would construct the SqlParameter for the @customerIds parameter in our stored procedure:

        var customerIdsParameter = new SqlParameter();
        customerIdsParameter.Direction = ParameterDirection.Input;
        customerIdsParameter.TypeName = "IntegerListTableType";
        customerIdsParameter.Value = selectedCustomerIds.ToIntegerListDataTable("Value");

    All we're doing here is newing up an instance of SqlParameter, setting the parameter's direction, specifying the name of the user defined type that this parameter uses, and setting its value. We're assuming here that we have an IEnumerable<int> variable called 'selectedCustomerIds' containing all of the customer ids for which the report should be run. The 'ToIntegerListDataTable' method is an extension method on IEnumerable<int> that looks like this:

        public static DataTable ToIntegerListDataTable(this IEnumerable<int> intValues, string columnName)
        {
            var integerListDataTable = new DataTable();
            integerListDataTable.Columns.Add(columnName);
            foreach (var intValue in intValues)
            {
                var nextRow = integerListDataTable.NewRow();
                nextRow[columnName] = intValue;
                integerListDataTable.Rows.Add(nextRow);
            }

            return integerListDataTable;
        }

    Since the 'IntegerListTableType' has a single int column called 'Value', we pass that in for the 'columnName' parameter to the extension method. The method creates a new single-columned DataTable using the provided column name, then iterates over the items in the IEnumerable<int> instance, adding one row for each value. We can then use this SqlParameter instance when invoking the stored procedure just like we would use any other parameter.

    Advanced Functionality

    Passing a list of integers into a stored procedure is a very simple usage scenario for the table-valued parameters feature, but I've found that it covers the majority of situations where I've needed to pass a collection of data for use in a query at run-time.
    I should note that the BULK INSERT feature still makes sense for passing large amounts of data to SQL Server for processing. MSDN seems to suggest that 1000 rows of data is the tipping point where the overhead of a BULK INSERT operation can pay dividends. I should also note that table-valued parameters can be used to deal with more complex data structures than single-columned tables of integers. A user defined type that backs a table-valued parameter can use things like identities and computed columns. That said, using some of these more advanced features might require the use of the SqlDataRecord and SqlMetaData classes instead of a simple DataTable. Erland Sommarskog has a great article on his website that describes when and how to use these classes for table-valued parameters.

    What About Reporting Services?

    Earlier in the post I referenced the fact that our example stored procedure would be called from both a web application and a SQL Server Reporting Services report. Unfortunately, using table-valued parameters from SSRS reports can be a bit tricky and warrants its own blog post, which I'll be putting together and posting sometime in the near future.
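    One thing the walkthrough above doesn't show is the body of the stored procedure actually consuming the parameter. A minimal sketch (the transactions table and its columns are illustrative assumptions, not from the original post):

        CREATE PROCEDURE dbo.rpt_CustomerTransactionSummary
            @startDate datetime,
            @endDate datetime,
            @customerIds IntegerListTableType READONLY
        AS
        BEGIN
            -- the TVP joins like any other table; remember it is read-only here
            SELECT t.CustomerId,
                   COUNT(*)      AS TransactionCount,
                   SUM(t.Amount) AS TotalAmount
            FROM dbo.CustomerTransactions t
            JOIN @customerIds ids ON ids.Value = t.CustomerId
            WHERE t.TransactionDate BETWEEN @startDate AND @endDate
            GROUP BY t.CustomerId;
        END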

  • Who could ask for more with LESS CSS? (Part 3 of 3 – Clrizr)

    - by ToString(theory);
    Welcome back! In the first two posts in this series, I covered some of the awesome features in CSS precompilers such as SASS and LESS, as well as how to get an initial project set up and running in ASP.Net MVC 4. In this post, I will cover an actual advanced example of using LESS in a project, and show some of the great productivity features we gain from its usage.

    Introduction

    In the first post, I mentioned two subjects that I will be using in this example: constants, and color functions. I've always enjoyed using online color scheme utilities such as Adobe Kuler or Color Scheme Designer to come up with a scheme based off of one primary color. Using these tools and requesting a complementary scheme, you can get a couple of shades of your primary color, and a couple of shades of a complementary/accent color to display. Because there is no way in regular css to do color operations or store variables, there was no way to define a primary color and have a site theme cascade off of that. However, with tools such as LESS, that impossibility becomes a reality! So, if you haven't guessed it by now, this post is on the creation of a plugin/module/less file to drop into your project: plug in one color, and have your primary theme cascade from it. I only went through the trouble of creating a module for getting complementary colors. However, it wouldn't be too much trouble to go through other options such as Triad or Monochromatic to get a module that you could use off of that.

    Step 1 – Analysis

    I decided to mimic Adobe Kuler's complementary theme algorithm as I liked its simplicity and aesthetics. Color Scheme Designer is great, but I do believe it can give you too many color options, which can lead to chaos and overload. The first thing I had to check was whether the complementary values for the color schemes were actually hues rotated by 180 degrees at all times; they aren't. Apparently Adobe applies some variance to the complementary colors to get colors that are actually more aesthetically appealing to users. So, I opened up Excel and began to plot complementary hues based on rotation in increments of 10 degrees. Long story short, I completed the same calculations for hue, saturation, and lightness. For hue, I only had to record the complementary hue values; for saturation and lightness, however, I had to record the values for ALL of the shades. Since the functions were too complicated to put into LESS (they aren't constant/linear but rather interval functions), I instead opted to extrapolate the HSL values using the trendline function for each major interval, onto intervals of spacing 1. For the hue extraction, the major intervals were 0-60, 60-140, 140-270 and 270-360, with a separate fitted function for each. Saturation and lightness were much worse, but in the end I finally had functions for all of the intervals, and then went the route of just grabbing each shade's value in intervals of 1.

    Step 2 – Mapping

    I declared variable names for each of these sections as something that shouldn't ever conflict with a variable someone would define in their own file.
    After I had each of the values, I extracted them and put them into files of their own for hue variables, saturation variables, and lightness variables. Example:

        /* HUE CONVERSIONS */
        @clrizr-hue-source-0deg: 133.43;
        @clrizr-hue-source-1deg: 135.601;
        @clrizr-hue-source-2deg: 137.772;
        @clrizr-hue-source-3deg: 139.943;
        @clrizr-hue-source-4deg: 142.114;
        ...

        /* SATURATION CONVERSIONS */
        @clrizr-saturation-s2SV0px: 0;
        @clrizr-saturation-s2SV1px: 0;
        @clrizr-saturation-s2SV2px: 0;
        @clrizr-saturation-s2SV3px: 0;
        @clrizr-saturation-s2SV4px: 0;
        ...

        /* LIGHTNESS CONVERSIONS */
        @clrizr-lightness-s2LV0px: 30;
        @clrizr-lightness-s2LV1px: 31;
        @clrizr-lightness-s2LV2px: 32;
        @clrizr-lightness-s2LV3px: 33;
        @clrizr-lightness-s2LV4px: 34;
        ...

    In the end, I have 973 lines of mapping/conversion from source HSL to shade HSL for two extra primary shades and two complementary shades. The last bit of the work was the file to compose each of the shades from these mappings.

    Step 3 – Clrizr Mapper

    The final step was the hardest to overcome, as I was still trying to understand LESS to its fullest extent.

    Imports

    As mentioned previously, I had separated the HSL mappings into different files, so the first necessary step is to import those for use into the Clrizr plugin:

        @import url("hue.less");
        @import url("saturation.less");
        @import url("lightness.less");

    Extract Component Values For Each Shade

    Next, I extracted the necessary information for each shade's HSL before shade composition:

        @clrizr-input-saturation: 1px+floor(saturation(@clrizr-input))-1;
        @clrizr-input-lightness:  1px+floor(lightness(@clrizr-input))-1;

        @clrizr-complementary-hue: formatstring("clrizr-hue-source-{0}", ceil(hue(@clrizr-input)));

        @clrizr-primary-2-saturation:       formatstring("clrizr-saturation-s2SV{0}",@clrizr-input-saturation);
        @clrizr-primary-1-saturation:       formatstring("clrizr-saturation-s1SV{0}",@clrizr-input-saturation);
        @clrizr-complementary-1-saturation: formatstring("clrizr-saturation-c1SV{0}",@clrizr-input-saturation);

        @clrizr-primary-2-lightness:        formatstring("clrizr-lightness-s2LV{0}",@clrizr-input-lightness);
        @clrizr-primary-1-lightness:        formatstring("clrizr-lightness-s1LV{0}",@clrizr-input-lightness);
        @clrizr-complementary-1-lightness:  formatstring("clrizr-lightness-c1LV{0}",@clrizr-input-lightness);

    Here you can see a couple of odd things. On the first line, I am using operations to add units to the saturation and lightness. This is due to some limitations in the operations that would give me saturation or lightness in %, which can't be in a variable name. So I first add 1px to it, which casts the result of the following functions as px instead of %, and then at the end I remove that pixel. You can also see here the formatstring method, which is exactly what it sounds like: something like String.Format(string str, params object[] obj).

    Get Primary & Complementary Shades

    Now that I have components for each of the different shades, I can compose them into each of their pieces.
    For this, I use the @@ operator, which will look for a variable with the name specified in a string, and then call that variable:

        @clrizr-primary-2:       hsl(hue(@clrizr-input), @@clrizr-primary-2-saturation, @@clrizr-primary-2-lightness);
        @clrizr-primary-1:       hsl(hue(@clrizr-input), @@clrizr-primary-1-saturation, @@clrizr-primary-1-lightness);
        @clrizr-primary:         @clrizr-input;
        @clrizr-complementary-1: hsl(@@clrizr-complementary-hue, @@clrizr-complementary-1-saturation, @@clrizr-complementary-1-lightness);
        @clrizr-complementary-2: hsl(@@clrizr-complementary-hue, saturation(@clrizr-input), lightness(@clrizr-input));

    That's it, for the most part. These variables now hold the theme for the one input color, @clrizr-input. However, I have one last addition...

    Perceptive Luminance

    Well, after I got the colors, I decided I also wanted the best font color to go on top of each of them: black or white, depending on whether the shade is light or dark. Now, I couldn't just go with checking the lightness, as that is only half the story. You see, the human eye doesn't see ALL colors equally well; rather, it has more cells for interpreting green light compared to blue or red. So, using that ratio, we can calculate the perceptive luminance of each of the shades, and get the font color that best matches it!

        @clrizr-perceptive-luminance-ps2: round(1 - ( (0.299 * red(@clrizr-primary-2) ) + ( 0.587 * green(@clrizr-primary-2) ) + (0.114 * blue(@clrizr-primary-2)))/255)*255;
        @clrizr-perceptive-luminance-ps1: round(1 - ( (0.299 * red(@clrizr-primary-1) ) + ( 0.587 * green(@clrizr-primary-1) ) + (0.114 * blue(@clrizr-primary-1)))/255)*255;
        @clrizr-perceptive-luminance-ps:  round(1 - ( (0.299 * red(@clrizr-primary) ) + ( 0.587 * green(@clrizr-primary) ) + (0.114 * blue(@clrizr-primary)))/255)*255;
        @clrizr-perceptive-luminance-pc1: round(1 - ( (0.299 * red(@clrizr-complementary-1)) + ( 0.587 * green(@clrizr-complementary-1)) + (0.114 * blue(@clrizr-complementary-1)))/255)*255;
        @clrizr-perceptive-luminance-pc2: round(1 - ( (0.299 * red(@clrizr-complementary-2)) + ( 0.587 * green(@clrizr-complementary-2)) + (0.114 * blue(@clrizr-complementary-2)))/255)*255;

        @clrizr-col-font-on-primary-2:       rgb(@clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2);
        @clrizr-col-font-on-primary-1:       rgb(@clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1);
        @clrizr-col-font-on-primary:         rgb(@clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps);
        @clrizr-col-font-on-complementary-1: rgb(@clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1);
        @clrizr-col-font-on-complementary-2: rgb(@clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2);

    Conclusion

    That's it! I have posted a project on clrizr.codePlex.com for this, and included a testing page for you to test out how it works. Feel free to use it in your own project, and if you have any questions, comments or suggestions, please feel free to leave them here as a comment, or on the contact page!
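    A hedged sketch of how a consumer might wire the module up (the file name, color value, and selectors are illustrative; the variable names come from the post above):

        @clrizr-input: #3b7a57;        /* the one primary color you supply */
        @import url("clrizr.less");    /* pulls in the mappings and shade composition */

        .header {
            background: @clrizr-primary;
            color: @clrizr-col-font-on-primary;
        }
        .callout {
            background: @clrizr-complementary-1;
            color: @clrizr-col-font-on-complementary-1;
        }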

  • SQL SERVER – Simple Example of Snapshot Isolation – Reduce the Blocking Transactions

    - by pinaldave
    To learn any technology and move to a more advanced level, it is very important to understand the fundamentals of the subject first. Today we will be talking about something that was introduced quite a long time ago but is still not properly explored: the isolation level. Snapshot Isolation was introduced in SQL Server 2005. However, the reality is that there are still many software shops using SQL Server 2000 that therefore cannot take advantage of Snapshot Isolation. Many software shops have upgraded to a later version of SQL Server, but their developers have not spent enough time getting up to date with the latest technology. "It works!" is a very common answer when they are asked why they stick with backward-compatible commands instead of utilizing the new features. In a recent consulting project I had the same experience: the developers had "heard about" snapshot isolation but had no real idea of it. They thought it was the same as Snapshot Replication, which is plain wrong. This is the demo I created for them, included here.

    In Snapshot Isolation, the updated row versions for each transaction are maintained in TempDB. Once a transaction has begun, it ignores all the newer rows inserted or updated in the table. Let us examine a simple demonstration. This transaction works on an optimistic concurrency model: since a reading transaction does not block a writing transaction, and a writing transaction does not block a reading transaction, blocking is reduced.

    First, enable the database to work with Snapshot Isolation, and check the existing values in the HumanResources.Shift table:

        ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON
        GO
        SELECT ModifiedDate FROM HumanResources.Shift
        GO

    Now we will need two different sessions to prove this example. In the first session, set the transaction isolation level to snapshot, begin the transaction, and update the ModifiedDate column to today's date:

        -- Session 1
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT
        BEGIN TRAN
        UPDATE HumanResources.Shift
        SET ModifiedDate = GETDATE()
        GO

    Please note that we have not yet committed the transaction. Now open the second session, set its isolation level to snapshot as well, begin a transaction with BEGIN TRAN, and run the following SELECT to check the values in the table:

        -- Session 2
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT
        BEGIN TRAN
        SELECT ModifiedDate FROM HumanResources.Shift
        GO

    You will notice that the values in the table are still the original values; they have not been modified yet. Once again, go back to session 1 and commit the transaction:

        -- Session 1
        COMMIT

    After that, go back to session 2 and see the values in the table:

        -- Session 2
        SELECT ModifiedDate FROM HumanResources.Shift
        GO

    You will notice that the values still have not changed; they are the same old values that were there at the beginning of the session's transaction. Now let us commit the transaction in session 2. Once committed, run the same SELECT statement once more and see what the result is:

        -- Session 2
        COMMIT
        SELECT ModifiedDate FROM HumanResources.Shift
        GO

    You will notice that it now reflects the new updated value. I hope this example is clear enough to give you a good idea of how the Snapshot Isolation level works.
    There is much more to write about a further option, READ_COMMITTED_SNAPSHOT, which we will discuss in another post soon. If you use this isolation level in your production database, I would appreciate your comments about its performance on your servers. For quick reference, I have included here the complete script used in this example:

        ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON
        GO
        SELECT ModifiedDate FROM HumanResources.Shift
        GO
        -- Session 1
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT
        BEGIN TRAN
        UPDATE HumanResources.Shift
        SET ModifiedDate = GETDATE()
        GO
        -- Session 2
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT
        BEGIN TRAN
        SELECT ModifiedDate FROM HumanResources.Shift
        GO
        -- Session 1
        COMMIT
        -- Session 2
        SELECT ModifiedDate FROM HumanResources.Shift
        GO
        -- Session 2
        COMMIT
        SELECT ModifiedDate FROM HumanResources.Shift
        GO

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Transaction Isolation
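    As a pointer ahead of that future post, enabling the related READ_COMMITTED_SNAPSHOT option has the same shape as the script above; note that the statement needs exclusive access, so run it when no other connections are active in the database:

        ALTER DATABASE AdventureWorks SET READ_COMMITTED_SNAPSHOT ON
        GO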

  • Getting selected row in inputListOfValues returnPopupListener

    - by Frank Nimphius
    Model-driven list-of-values in Oracle ADF are configured on the ADF Business Components attribute that should be updated with the user's value selection. The value lookup can be configured to be displayed as a select list, combo box, input list of values, or combo box with list of values. Displaying the list in an af:inputListOfValues component shows the attribute value in an input text field with an icon attached to it for the user to launch the list-of-values dialog. The list-of-values dialog allows users to use a search form to filter the lookup data list and to select an entry, whose return value is then added as the value of the af:inputListOfValues component.

    Note: the model-driven LOV can be configured in ADF Business Components to update multiple attributes with the user selection, though the most common use case is to update the value of a single attribute.

    A question on OTN was how to access the row of the selected return value on the ADF Faces front end. For this, you need to know that there is a Model property defined on the af:inputListOfValues that references the ListOfValuesModel implementation in the model. It is the value of this Model property that you need to get access to. The af:inputListOfValues has a ReturnPopupListener property that you can use to configure a managed bean method to receive notification when the user closes the LOV popup dialog by selecting the Ok button. This listener is not triggered when the Cancel button is pressed. The managed bean signature can be created declaratively in Oracle JDeveloper 11g using the Edit option in the context menu next to the ReturnPopupListener field in the Property Inspector. The empty method signature looks as shown below:

        public void returnListener(ReturnPopupEvent returnPopupEvent) {
        }

    The ReturnPopupEvent object gives you access to the RichInputListOfValues component instance, which represents the af:inputListOfValues component at runtime. From here you access the Model property of the component to then get a handle to the CollectionModel. The CollectionModel returns an instance of JUCtrlHierBinding in its getWrappedData method. Though there is no tree binding definition for the list-of-values dialog defined in the PageDef, it exists. Once you have access to this, you can read the row the user selected in the list-of-values dialog.
    See the following code:

        public void returnListener(ReturnPopupEvent returnPopupEvent) {
            // access the UI component instance from the return event
            RichInputListOfValues lovField =
                (RichInputListOfValues) returnPopupEvent.getSource();

            // the LOV model gives us access to the collection model and the
            // ADF tree binding used to populate the lookup table
            ListOfValuesModel lovModel = lovField.getModel();
            CollectionModel collectionModel =
                lovModel.getTableModel().getCollectionModel();

            // the collection model wraps an instance of the ADF
            // FacesCtrlHierBinding, which is cast to JUCtrlHierBinding
            JUCtrlHierBinding treeBinding =
                (JUCtrlHierBinding) collectionModel.getWrappedData();

            // the selected rows are defined in a RowKeySet; as the LOV table only
            // supports single selection, there is only one entry in the set
            RowKeySet rks = (RowKeySet) returnPopupEvent.getReturnValue();

            // the ADF Faces table row key is a List containing the oracle.jbo.Key
            List tableRowKey = (List) rks.iterator().next();

            // get the iterator binding for the LOV lookup table binding
            DCIteratorBinding dciter = treeBinding.getDCIteratorBinding();

            // get the selected row by its JBO key
            Key key = (Key) tableRowKey.get(0);
            Row rw = dciter.findRowByKeyString(key.toStringFormat(true));

            // work with the row
            // ...
        }

  • Using LogParser - part 2

    - by fatherjack
    Sample files: PersonAddress.csv, SalesOrderDetail.tsv

    In part 1 of this series we downloaded and installed LogParser and used it to list data from a csv file. That was a good start, and in this article we are going to see the different ways we can stream data and choose whether a whole file is selected. We are also going to take a brief look at what file types we can interrogate.

    If we take the query from part 1 and add a value for the output parameter as -o:datagrid, so that the query becomes

        LOGPARSER "SELECT top 15 * FROM C:\LP\person_address.csv" -o:datagrid

    and run that, we get a different result: a pop-up dialog that lets us view the results in a resizable grid. Notice that because we didn't specify the columns we wanted returned by LogParser (we used SELECT *) it has added two columns to the recordset - filename and rownumber. This behaviour can be very useful, as we will see in future parts of this series. You can click Next 10 rows or All rows, or close the datagrid once you are finished reviewing the data.

    You may have noticed that the files I am working with are different file types - one is a csv (comma separated values) and the other a tsv (tab separated values). If you want to convert a file from one to the other, LogParser makes it incredibly simple. Rather than using 'datagrid' as the value for the output parameter, use 'csv':

        logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\Sales_SalesOrderDetail.csv FROM C:\Sales_SalesOrderDetail.tsv" -i:tsv -o:csv

    Those familiar with SQL will not have to make a very big leap of faith to adjust the above query to filter records in or out of the source file. Let's get all the records from the same file where the Order Quantity (OrderQty) is more than 25:

        logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailOver25.csv FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty > 25" -i:tsv -o:csv

    Or we could find all those records where the Order Quantity is equal to 25 and output them to an xml file:

        logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailEq25.xml FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty = 25" -i:tsv -o:xml

    All the standard comparison operators are to be found in LogParser: >, <, =, LIKE, BETWEEN, OR, NOT, AND.

    Input and Output file formats

    LogParser has a pretty impressive list of file formats that it can parse, and a good selection of output formats that will let you generate output in a format usable by whatever process or application you may be using.

    From any of these:

        IISW3C: parses IIS log files in the W3C Extended Log File Format.
        IIS: parses IIS log files in the Microsoft IIS Log File Format.
        BIN: parses IIS log files in the Centralized Binary Log File Format.
        IISODBC: returns database records from the tables logged to by IIS when configured to log in the ODBC Log Format.
        HTTPERR: parses HTTP error log files generated by Http.sys.
        URLSCAN: parses log files generated by the URLScan IIS filter.
        CSV: parses comma-separated values text files.
        TSV: parses tab-separated and space-separated values text files.
        XML: parses XML text files.
        W3C: parses text files in the W3C Extended Log File Format.
        NCSA: parses web server log files in the NCSA Common, Combined, and Extended Log File Formats.
        TEXTLINE: returns lines from generic text files.
        TEXTWORD: returns words from generic text files.
        EVT: returns events from the Windows Event Log and from Event Log backup files (.evt files).
        FS: returns information on files and directories.
        REG: returns information on registry values.
        ADS: returns information on Active Directory objects.
        NETMON: parses network capture files created by NetMon.
        ETW: parses Enterprise Tracing for Windows trace log files and live sessions.
        COM: provides an interface to Custom Input Format COM Plugins.

    To any of these:

        NAT: formats output records as readable tabulated columns.
        CSV: formats output records as comma-separated values text.
        TSV: formats output records as tab-separated or space-separated values text.
        XML: formats output records as XML documents.
        W3C: formats output records in the W3C Extended Log File Format.
        TPL: formats output records following user-defined templates.
        IIS: formats output records in the Microsoft IIS Log File Format.
        SQL: uploads output records to a table in a SQL database.
        SYSLOG: sends output records to a Syslog server.
        DATAGRID: displays output records in a graphical user interface.
        CHART: creates image files containing charts.

    So, you can query data from any of the input formats and really easily get it into a format where it is ready for analysis by other tools. To a DBA or network administrator with an enquiring mind this is a treasure trove. In part 3 we will look at working with multiple sources and specifically outputting to SQL format. See you there!
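    As a quick sketch combining the operators listed above (this particular query is mine, not from the article, but it reuses the same sample file):

        logparser "SELECT SalesOrderID, OrderQty, LineTotal INTO C:\LP\Sales_SalesOrderDetail10to25.csv FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty BETWEEN 10 AND 25" -i:tsv -o:csv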

  • Good ol' fashioned debugging

    - by Tim Dexter
    I have been helping out one of our new customers over the last day or two, and I have even managed to get to the bottom of their problem, FTW! They use BIEE and BIP and wanted to mount a BIP report in a dashboard page. So far so good, BIP does that! Just follow the instructions in the BIEE user guide. The wrinkle is that they want to enter some fixed instruction strings into the dashboard prompts to help the user. These are added as fixed values to the prompt as the default values, so they appear first. Once the user makes a selection, the default strings disappear. It's a fair requirement, but the BIP report chokes.

    Now, the BIP report had been set up with the Autorun checkbox unchecked. I expected the BIP report to wait for the Go button to be hit, but it was trying to run immediately and failing. That was the first issue: you cannot stop the BIP report from trying to run in a dashboard. Even if Autorun is turned off, it seems that the dashboard still makes the request to BIP to run the report. Rather than BIP refusing because it's waiting for input, it goes ahead anyway; I guess the mechanism does not check the autorun flag when the request is coming from the dashboard. A bug? Might be - at least an enhancement request. It appears that between BIEE and BIP, they collectively ignore the autorun flag.

    With that in mind, how could we get BIP to at least not fail? This fact was stumping me on the parameter error: if the autorun flag were being respected, then why was BIP complaining about the parameter values? It should not even be doing anything until the Go button is clicked. Once I knew that the autorun flag was being ignored, it was a simple case of putting BIP into debug mode. I use the OC4J server on my laptop, so debug messages are routed through the DOS box used to start the OC4J container. When I changed a value on the dashboard prompt, I spotted some debug text rushing by that subsequently disappeared from the log once the operation was complete. Another bug? I needed to catch that text as it went by, using the print screen function with some software to grab multiple screens as the log appeared and then disappeared.

    The upshot is that when you change the dashboard prompt value, BIP validates the value against its own LOVs; if it's not in the list, it throws the error. Because 'Fill this first' and 'Fill this second', i.e. the fixed strings from the dashboard prompts, are not in the LOV lists, and because the report auto-runs as soon as the dashboard page is brought up, the report complains about invalid parameters. To get around this, I needed to get the strings into the LOVs. Easily done with a UNION clause:

        select 'Fill this first' from SH.Products Products
        UNION
        select Products."Prod Category" as "Prod Category" from SH.Products Products

    Now when BIP wants to validate the prompt value, the LOV query fires and finds the fixed string -> no error. No data, but definitely no errors :0)

    If users do run with the fixed values, you can capture that in the template. If there is no data in the report, either the fixed values were used or the parameters selected resulted in no rows. You can capture this in the template and display something like: 'Either your parameter values resulted in no data or you have not changed the default values'. That's the upside; the downside is that if your users run the report in the BIP UI, they're going to see the fixed strings.
    You could alleviate that by having BIP display the fixed strings at the top of its parameter drop boxes (just set them as the default value for the parameter). But they will not disappear like they do in the dashboard prompts, see below. If the expected autorun behaviour worked, i.e. waiting for the Go button, then we would not have to work around it, but for now it's a pretty good solution. It was an enjoyable hour or so for me; it took me back to my developer daze, when we used to race each other for the most number of bug fixes. I used to run a distant second behind 'Bugmeister Chen Hu' but led the chasing pack by a reasonable distance.
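    For the 'no data' message mentioned above, a hedged sketch of what that check might look like in a BI Publisher RTF template (the ROW element name is an assumption about the data model; substitute the repeating element your data source actually produces):

        <?if:count(//ROW)=0?>
        Either your parameter values resulted in no data or you have not changed the default values.
        <?end if?>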

  • pchart pie chart legend and graph not correlating

    - by madphp
    Hi, I have ten values in the dataset, numbered 1 - 10, with corresponding values. Some of the values are coming back as zero, so they are not added to the chart, BUT the legend still lists 1 - 10. Because there are values missing from the chart, the colour coding is knocked out of alignment. For example: item 1 has a value of zero, so it's passed over in the chart, but its colour in the legend is red; item 2 has a value of 4, the percentage is calculated, and the chart gives it the colour red, which is the colour for item 1 in the legend. Hope that makes sense. How can I print the legend just for the values that are displayed in the chart? --Mark
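    A hedged, library-agnostic sketch of the usual fix: drop the zero entries from both the data and the labels before handing anything to pChart, so the two arrays stay aligned (variable names and values are illustrative):

        <?php
        // Keep only non-zero data points, preserving keys so the labels stay aligned
        $values = array(1 => 0, 2 => 4, 3 => 7, 4 => 0, 5 => 2);
        $labels = array(1 => 'Item 1', 2 => 'Item 2', 3 => 'Item 3', 4 => 'Item 4', 5 => 'Item 5');

        $nonZero = array_filter($values, function ($v) { return $v > 0; });
        $legend  = array_intersect_key($labels, $nonZero);

        // $nonZero and $legend now describe exactly the slices the chart will draw,
        // so pass these (not the originals) to the pChart data set and legend calls.
        ?>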

  • Can a struct become deallocated?

    - by brettr
    I've declared a struct in my header file like this:

        typedef struct {
            NSString *department;
            NSString *departmentId;
        } Department;

        Department currentDepartment;

    This struct is in a fairly simple class. I assign the struct values in viewDidLoad. Just before leaving viewDidLoad, I see the struct values are still there. After the user clicks a segment control, I reassign the struct values. Before assigning values, I see the two struct values are 0x0. I do have NSZombieEnabled, which prints out this when I mouse over the struct while the app is running and one of my breakpoints has been hit:

        MyApp[25722:207] *** -[CFString _cfTypeID]: message sent to deallocated instance 0xfc0e90

    I'm not creating an instance of the struct or deallocating it. How can it be getting deallocated?
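    For context, a hedged sketch of the usual cause and fix under manual reference counting: a plain C struct never retains the Objective-C objects stored in its fields, so an autoreleased NSString assigned into the struct can be deallocated once the autorelease pool drains, leaving a dangling pointer. Taking ownership on assignment keeps the strings alive (the variable names below are illustrative, not the poster's):

        // Take ownership when storing into the struct...
        currentDepartment.department   = [departmentName copy];      // +1 via copy
        currentDepartment.departmentId = [departmentIdString copy];

        // ...and balance it when replacing the values or tearing down:
        [currentDepartment.department release];
        [currentDepartment.departmentId release];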

  • C#, Linq, Dynamic Query: Code to filter a Dynamic query outside of the Repository

    - by Dr. Zim
    If you do something like this in your repository:

        IQueryable<CarClass> GetCars(string condition, params object[] values)
        {
            return db.Cars.Where(condition, values);
        }

    And you set the condition and values outside of the repository:

        string condition = "CarMake == @0";
        object[] values = new object[] { "Ford" };
        var result = myRepo.GetCars(condition, values);

    How would you be able to sort the result outside of the repository with Dynamic Query?

        return View("myView", result.OrderBy("Price"));

    Somehow I am losing the DynamicQuery nature when the data exits from the repository. And yes, I haven't worked out how to return the CarClass type where you would normally do a Select new CarClass { fieldName = m.fieldName, ... }
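    A hedged note on the likely missing piece: the string-based OrderBy is an extension method from the Dynamic LINQ sample library, so it must be in scope wherever the ordering happens, and the repository must return IQueryable (not IEnumerable) for the ordering to compose into the underlying query. A sketch under those assumptions:

        using System.Linq.Dynamic;   // brings the string-based Where/OrderBy extensions into scope

        var result = myRepo.GetCars("CarMake == @0", new object[] { "Ford" });
        return View("myView", result.OrderBy("Price"));   // still composes into the query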

  • Doing a large number of upserts as fast as possible

    - by Jason Swett
    My app (which uses MySQL) is doing a large number of subsequent upserts. Right now my SQL looks like this:

        INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL','123','123','123')
        INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL','123','123','123')
        INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL OR','123','123','123')
        INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('TRACY L WALTER PERSONAL REP FOR','123','123','123')
        INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('TRACY L WALTER PERSONAL REP FOR','123','123','123')

    So far I've found INSERT IGNORE to be the fastest way to achieve upserts. Selecting a record to see if it exists and then either updating it or inserting a new one is too slow. Even this is not as fast as I'd like because I need to do a separate statement for each record. Sometimes I'll have around 50,000 of these statements in a row. Is there a way to take care of all of these in just one statement, without deleting any existing records?
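    One hedged consolidation, since MySQL's INSERT accepts multiple row constructors in a single statement (same table and columns as above; batching a few hundred to a few thousand rows per statement is a common pattern):

        INSERT IGNORE INTO customer (name, customer_number, social_security_number, phone)
        VALUES
            ('VICTOR H KINDELL', '123', '123', '123'),
            ('VICTOR H KINDELL OR', '123', '123', '123'),
            ('TRACY L WALTER PERSONAL REP FOR', '123', '123', '123');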

  • BizTalk Envelopes explained

    - by Robert Kokuti
    Recently I've been trying to get some order into an ESB-BizTalk pub/sub scenario, and decided to wrap the payload into standardized envelopes. I have used envelopes before in a 'light weight' fashion, and I found that they can be quite useful and powerful if used systematically. Here is what I learned.

    The Theory

    In my experience, envelopes are often underutilised in a BizTalk solution, and quite often their full potential is not well understood. Here I try to simplify the theory behind envelopes within BizTalk.

    Envelopes can be used to attach additional data to the 'real' data (payload). This additional data can contain all routing and processing information, and allows treating the business data as a 'black box', possibly compressed and/or encrypted etc. The point here is that the infrastructure does not need to know anything about the business data content, just as a postman does not need to know the letter within the envelope.

    BizTalk has built-in support for envelopes through the XMLDisassembler and XMLAssembler pipeline components (these are part of the XMLReceive and XMLSend default pipelines). These components, among other things, perform the following:

    XMLDisassembler
    - Extracts the payload from the envelope into the Message Body.
    - Copies data from the envelope into the message context, as specified by the property schema(s) associated with the envelope schema.

    Typically, once the envelope is through the XMLDisassembler, the payload is submitted into the Messagebox, and the rest of the envelope data is copied into the context of the submitted message. The XMLDisassembler uses the property schemas, referenced by the envelope schema, to determine the names of the promoted Message Context elements.

    XMLAssembler
    - Wraps the Message Body inside the specified envelope schema.
    - Populates the envelope values from the message context, as specified by the property schema(s) associated with the envelope schema.

    Notice that there is no requirement to use the receiving envelope schema when sending. The sent message can be wrapped within any suitable envelope, regardless of whether the message was originally received within an envelope or not. However, by sharing property schemas between envelopes, it is possible to pass values from the incoming envelope to the outgoing envelope via the Message Context.

    The Practice

    Creating the Envelope

    Add a new Schema to the BizTalk project. Envelopes are defined as schemas, with the <Schema> Envelope property set to Yes, and the root node's Body XPath property pointing to the node which contains the payload. Typically, you'd create an envelope structure with a Header and a Body. Click on the <Schema> node and set the Envelope property to Yes. Then click on the Envelope node and set the Body XPath property pointing to the 'Body' node.

    The 'Body' node is a Child Element, and its Data Structure Type is set to xs:anyType. This allows the Body node to carry any payload data. The XMLReceive pipeline will submit the data found in the Body node, while the XMLSend pipeline will copy the message into the Body node before sending to the destination.

    Promoting Properties

    Once you have defined the envelope, you may want to promote the envelope data (anything other than the Body) as Property Fields, in order to preserve their values in the message context. Anything not promoted will be lost when the XMLDisassembler extracts the payload from the Body. Typically, this means you promote everything in the Header node. Property promotion uses associated property schemas.
    Property schemas are special BizTalk schemas which have a flat field structure. They define the names of the promoted values in the Message Context by combining the property schema's namespace and the individual field names. It is worth being systematic when it comes to naming your schemas, their namespaces and type names; a coherent method will make your life easier when it comes to referencing the schemas during development and managing subscriptions (filters) in BizTalk Administration. I developed a fairly coherent naming convention which I'll probably share in another article. Because the property schema must be flat, I recommend creating one for each level in the envelope header hierarchy.

    Property schemas are very useful in passing data between incoming and outgoing envelopes. As I mentioned earlier, the in and out envelopes do not have to be the same, but you can use the same property schema for promoting the outgoing envelope fields as you used for the incoming schema. As you can reference many property schemas for field promotion, you can pick data from a variety of sources when you define your outgoing envelope. For example, the outgoing envelope can carry some of the incoming envelope's promoted values, plus some values from the standard BizTalk message context, like the AdapterReceiveCompleteTime property from the BizTalk message-tracking properties. The values you promote for the outgoing envelope will be automatically populated from the Message Context by the XMLAssembler pipeline component.

    Using the Envelope

    Receiving

    Enveloped messages are automatically recognized by the XMLReceive pipeline, or any other custom pipeline which includes the XMLDisassembler component. The Body Path node will become the Message Body, while the rest of the envelope values will be added to the Message Context, as defined by the property schemas referenced by the envelope schema.

    Sending

    The Send Port's filter expression can use the promoted properties from the incoming envelope. If you want to enclose the sent message within an envelope, the Send Port XMLAssembler component must be configured with the fully qualified envelope name. One way of obtaining the fully qualified envelope name is to copy it off the envelope schema property page; the full envelope schema name is constructed as <Name>, <Assembly>.

    The outgoing envelope is populated by the XMLAssembler pipeline component. The Message Body is copied to the specified envelope's Body Path node, while the rest of the envelope fields are populated from the Message Context, according to the property schemas associated with the envelope schema.

    That's all for now, happy enveloping!
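    To make the envelope structure concrete, here is a minimal sketch of the schema shape described above (the XSD itself is mine, not from the post; in practice you would set the Envelope and Body XPath properties through the BizTalk schema editor rather than by hand, and the Header fields shown are illustrative):

        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="http://example.org/envelopes/standard"
                   xmlns="http://example.org/envelopes/standard"
                   elementFormDefault="qualified">
          <xs:element name="Envelope">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="Header">
                  <xs:complexType>
                    <xs:sequence>
                      <!-- promote these via property schemas to keep them in the context -->
                      <xs:element name="MessageType" type="xs:string" />
                      <xs:element name="CorrelationId" type="xs:string" />
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
                <!-- the Body XPath property points here; xs:anyType lets it carry any payload -->
                <xs:element name="Body" type="xs:anyType" />
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>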

  • SOA Suite 11g Dynamic Payload Testing with soapUI Free Edition

    - by Greg Mally
    Overview

    Many web service developers use soapUI for various tests, like smoke tests, unit tests, and load testing, because you can get a free edition that is fairly robust. However, if you need to venture into more complex testing that requires a dynamic payload, the free edition doesn't necessarily make it easy. This feature does exist in soapUI but, for obvious reasons, it is in the Pro version. In this blog I will show you how to use soapUI free edition for dynamic payloads in a simplified example. Hopefully this will open the doors for you to expand into more complex scenarios. The following assumes that you have a working knowledge of soapUI and will not go into concepts like setting up a project etc. For the basics, please review the documentation for soapUI: http://www.soapui.org/Getting-Started/. Additionally, we will be using asynchronous web services; you can review the setup for this in my blog: SOA Suite 11g Asynchronous Testing with soapUI.

    Features in soapUI Free Edition Relating to this Topic

    The soapUI test tool provides a very feature-rich environment that can do many things, provided you are willing to go beyond point and click. For this example, we will be leveraging just a couple of features: Test Case Properties and scripting with Groovy. Basically, we will be using a property as a global variable and we will manipulate that property using a Groovy script.

    Setting Up Our Property

    Properties are available throughout soapUI, and here is a snippet from the soapUI website defining the scopes:

    - Projects: for handling Project scope values, for example a subscription ID
    - TestSuite: for handling TestSuite scoped values, can be seen as "arguments" to a TestSuite
    - TestCases: for handling TestCase scoped values, can be seen as "arguments" to a TestCase
    - Properties TestStep: for providing local values/state within a TestCase
    - Local TestStep properties: several TestStep types maintain their own list of properties specific to their functionality: DataSource, DataSink, Run TestCase
    - MockServices: for handling MockService scoped values/arguments
    - MockResponses: for handling MockResponse scoped values
    - Global Properties: for handling global properties, optionally from an external source

    For our example, we will be defining a custom property in a TestCase called SimpleAsyncPayload. The property can be created either in the Custom Properties tab located at the bottom of the Navigator panel when the TestCase is selected in the Navigator, or via the Properties label in the TestCase editor:

    Navigator Panel
    TestCase Editor

    You will notice that I set a value of "0" for the custom property. For this simplified example, we will need to retrieve that value and manipulate it prior to making the web service request invocation. In order to accomplish this, we will need to get Groovy ;)

    Let's Get Groovy

    We will now add a new Groovy Script step to the TestCase called Manipulate Payload (TestCase Editor > Append Step > Groovy Script). Once we have added the Groovy Script step to our TestCase, we can open the Groovy Script editor to add the code to:

    1. Get the current value of the property we created called SimpleAsyncPayload.
    2. Convert the value of the property to an integer.
    3. Increment the value.
    4. Store the incremented value back into the TestCase property called SimpleAsyncPayload.
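    A hedged sketch of such a script, using soapUI's stock testRunner Groovy bindings (the property name comes from this example):

        // Read the TestCase property, increment it, and write it back
        def current = testRunner.testCase.getPropertyValue("SimpleAsyncPayload")
        def next = current.toInteger() + 1
        testRunner.testCase.setPropertyValue("SimpleAsyncPayload", next.toString())
        log.info "SimpleAsyncPayload is now ${next}"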
    In the soapUI editor the finished script appears as in the screenshot: Groovy Script Editor – Manipulate Payload. At this point we can test the script to see if it is working by simply running the TestCase (left-click on the green triangle in the upper left-hand corner of the TestCase editor). To verify that it ran correctly, we can look at the value of the SimpleAsyncPayload property, which should now be 1: TestCase Editor – Run Results. All that is left to complete the TestCase is to append another step of type Test Request. The information required to append the request is a name and an operation to invoke. In this example we will use the default name and select the SimpleAsyncBPELProcessBinding -> process as the operation (for any other information requested, simply use the defaults; if you are calling an asynchronous operation, do not add any assertions). We are now on familiar ground with the Test Request editor. Depending upon the type of operation you are invoking (synchronous or asynchronous), please update the request with the necessary information (e.g., callback information for asynchronous operations). We will now tweak the Test Request payload to retrieve the value of the SimpleAsyncPayload property. The soapUI editor makes this very simple: right-click in the payload and navigate to the property (e.g., right-click > Get Data.. > TestCase: [Groovy TestCase] > Property [SimpleAsyncPayload]): Test Request Editor – Insert Property Value. Your payload should now look something like the following: Test Request Editor – Inserted Property Value. (What the editor actually inserts is a soapUI property-expansion token, ${#TestCase#SimpleAsyncPayload}, in place of the element value.) Just like before, we are now ready to run the TestCase. If everything goes as expected, we should see a response like the following: Message Viewer – Results of TestCase Run. We are now set up to run a stress test where the payload will change for each request. This simple example can be expanded to include multiple payload values, complex calculations in the scripts, or whatever can be done via soapUI scripting. Hopefully you have found this useful and happy testing to you :)

    Read the article

  • MySQL Fast Insert dependent on flag from a separate table

    - by Stuart P
    Hi all. For work I'm dealing with a large database (160 million+ rows a year, 10 years of data) and have a quandary: a large percentage of the data we upload is null data, and I'd like to stop it from being uploaded. The data in question is spatial in nature, so I have one table like so:
        idLocations (Auto-increment int, PK), X (float), Y (float), Alwaysignore (Bool)
    which is used as a reference in a second table like so:
        idLocations (Int, PK, "FK"), idDates (Int, PK, "FK"), DATA1 (float), DATA2 (float), ..., DATA7 (float)
    So, ideally I'd like to find a method where I can do something like this (pseudocode):
        INSERT INTO tblData(idLocations, idDates, DATA1, ..., DATA7) VALUES (...), ..., (...) WHERE VALUES(idLocations) NOT LIKE (SELECT FROM tblLocation WHERE alwaysignore=TRUE) ON DUPLICATE KEY UPDATE DATA1=VALUES(DATA1)
    So, for my large batch of input data (250 values in a block), ignore the inserts where the idLocations matches an idLocations value flagged with alwaysignore. Anyone have any suggestions? Cheers. -Stuart Other details: Running MySQL on a semi-dedicated machine, MyISAM engine for the tables.
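    One way to express that intent in real MySQL (a sketch, not a confirmed solution; tmp_block is a hypothetical staging table for the 250-row batch, and only DATA1/DATA2 are written out, with DATA3 through DATA7 following the same pattern):

        -- Hypothetical staging table holding one 250-row upload block.
        CREATE TEMPORARY TABLE tmp_block LIKE tblData;
        -- (bulk-load the block into tmp_block here)

        -- Insert only rows whose location is not flagged, updating on duplicate keys.
        INSERT INTO tblData (idLocations, idDates, DATA1, DATA2)
        SELECT b.idLocations, b.idDates, b.DATA1, b.DATA2
        FROM tmp_block AS b
        JOIN tblLocation AS l ON l.idLocations = b.idLocations
        WHERE l.Alwaysignore = FALSE
        ON DUPLICATE KEY UPDATE DATA1 = VALUES(DATA1), DATA2 = VALUES(DATA2);

    Because the flag filter happens in the SELECT rather than in the multi-row VALUES list, the upload block itself never needs a WHERE clause.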

    Read the article

  • How to convert a date string into a string (yyyy-MM-dd)? While doing so, I am getting null values

    - by Madan Mohan
    Hi, I have the data as customerFromDate " 01 Apr 2010 " and customerToDate " 30 Apr 2010 ", which are strings. I want to convert that format into yyyy-MM-dd (the result should also be a string), but while doing so I get null values. Please see the following code, which I have tried: printf("\n customerFromDate %s",[customerStatementObj.customerFromDate UTF8String]); printf("\n customerToDate %s",[customerStatementObj.customerToDate UTF8String]); /* prints as the following customerFromDate 01 Apr 2010 customerToDate 30 Apr 2010 */ NSDateFormatter *dateFormatter = [[NSDateFormatter alloc]init]; [dateFormatter setDateFormat:@"yyyy-MM-dd"]; NSDate *fromDate=[[NSDate alloc]init]; fromDate = [dateFormatter dateFromString:customerStatementObj.customerFromDate]; printf("\n fromDate: %s",[fromDate.description UTF8String]); NSString *fromDateString=[dateFormatter stringFromDate:fromDate]; printf("\n fromDateString: %s",[fromDateString UTF8String]); [dateFormatter release]; NSDateFormatter *dateFormatter1 = [[NSDateFormatter alloc]init]; [dateFormatter1 setDateFormat:@"yyyy-MM-dd"]; NSDate *toDate=[[NSDate alloc]init]; toDate = [dateFormatter1 dateFromString:customerStatementObj.customerToDate]; printf("\n toDate: %s",[toDate.description UTF8String]); NSString *toDateString=[dateFormatter1 stringFromDate:toDate]; printf("\n toDateString: %s",[toDateString UTF8String]); [dateFormatter1 release]; Thank you, Madan Mohan.
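    The null comes from a format mismatch: a formatter configured with @"yyyy-MM-dd" cannot parse "01 Apr 2010", so dateFromString: returns nil. A minimal sketch of one fix, reusing the names from the question (the whitespace trimming and the en_US_POSIX locale are assumptions, added because the quoted values carry stray spaces and "Apr" is an English month name):

        // Trim the stray spaces around " 01 Apr 2010 " before parsing.
        NSString *raw = [customerStatementObj.customerFromDate
            stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceCharacterSet]];

        // One formatter that matches the INPUT format...
        NSDateFormatter *inFormatter = [[NSDateFormatter alloc] init];
        [inFormatter setLocale:[[[NSLocale alloc] initWithLocaleIdentifier:@"en_US_POSIX"] autorelease]];
        [inFormatter setDateFormat:@"dd MMM yyyy"];      // matches "01 Apr 2010"

        // ...and a second formatter for the desired OUTPUT format.
        NSDateFormatter *outFormatter = [[NSDateFormatter alloc] init];
        [outFormatter setDateFormat:@"yyyy-MM-dd"];

        NSDate *fromDate = [inFormatter dateFromString:raw];
        NSString *fromDateString = [outFormatter stringFromDate:fromDate]; // "2010-04-01"

        [inFormatter release];
        [outFormatter release];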

    Read the article

  • NullPointerException when linking to Service that uses ContentProvider

    - by Danny Chia
    Hi everyone, this is my first post here! Anyways, I'm trying to write a "todo list" application. It stores the data in a ContentProvider, which is accessed via a Service. However, my app crashes at launch. My code is below: Manifest file: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.examples.todolist" android:versionCode="1" android:versionName="1.0"> <application android:icon="@drawable/icon" android:label="@string/app_name" android:debuggable="True"> <activity android:name=".ToDoList" android:label="@string/app_name" android:theme="@style/ToDoTheme"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <service android:name="TodoService"/> <provider android:name="TodoProvider" android:authorities="com.examples.provider.todolist" /> </application> <uses-sdk android:minSdkVersion="7" /> </manifest> ToDoList.java: package com.examples.todolist; import com.examples.todolist.TodoService.LocalBinder; import java.util.ArrayList; import java.util.Date; import android.app.Activity; import android.content.SharedPreferences; import android.database.Cursor; import android.os.AsyncTask; import android.os.Bundle; import android.view.ContextMenu; import android.content.ComponentName; import android.content.Context; import android.content.Intent; import android.content.ServiceConnection; import android.os.IBinder; import android.view.KeyEvent; import android.view.Menu; import android.view.MenuItem; import android.view.View; import android.view.View.OnKeyListener; import android.widget.AdapterView; import android.widget.EditText; import android.widget.ListView; import android.widget.Toast; public class ToDoList extends Activity { static final private int ADD_NEW_TODO = Menu.FIRST; static final private int REMOVE_TODO = Menu.FIRST + 1; private static final String TEXT_ENTRY_KEY = "TEXT_ENTRY_KEY"; private static final String ADDING_ITEM_KEY = "ADDING_ITEM_KEY"; private static final String SELECTED_INDEX_KEY = "SELECTED_INDEX_KEY"; private boolean addingNew = false; private ArrayList<ToDoItem> todoItems; private ListView myListView; private EditText myEditText; private ToDoItemAdapter aa; int entries = 0; int notifs = 0; //ToDoDBAdapter toDoDBAdapter; Cursor toDoListCursor; TodoService mService; boolean mBound = false; /** Called when the activity is first created. */ public void onCreate(Bundle icicle) { super.onCreate(icicle); setContentView(R.layout.main); myListView = (ListView)findViewById(R.id.myListView); myEditText = (EditText)findViewById(R.id.myEditText); todoItems = new ArrayList<ToDoItem>(); int resID = R.layout.todolist_item; aa = new ToDoItemAdapter(this, resID, todoItems); myListView.setAdapter(aa); myEditText.setOnKeyListener(new OnKeyListener() { public boolean onKey(View v, int keyCode, KeyEvent event) { if (event.getAction() == KeyEvent.ACTION_DOWN) if (keyCode == KeyEvent.KEYCODE_DPAD_CENTER) { ToDoItem newItem = new ToDoItem(myEditText.getText().toString(), 0); mService.insertTask(newItem); updateArray(); myEditText.setText(""); entries++; Toast.makeText(ToDoList.this, "Entry added", Toast.LENGTH_SHORT).show(); aa.notifyDataSetChanged(); cancelAdd(); return true; } return false; } }); registerForContextMenu(myListView); restoreUIState(); populateTodoList(); } private void populateTodoList() { // Get all the todo list items from the database. toDoListCursor = mService. 
getAllToDoItemsCursor(); startManagingCursor(toDoListCursor); // Update the array. updateArray(); Toast.makeText(this, "Todo list retrieved", Toast.LENGTH_SHORT).show(); } private void updateArray() { toDoListCursor.requery(); todoItems.clear(); if (toDoListCursor.moveToFirst()) do { String task = toDoListCursor.getString(toDoListCursor.getColumnIndex(ToDoDBAdapter.KEY_TASK)); long created = toDoListCursor.getLong(toDoListCursor.getColumnIndex(ToDoDBAdapter.KEY_CREATION_DATE)); int taskid = toDoListCursor.getInt(toDoListCursor.getColumnIndex(ToDoDBAdapter.KEY_ID)); ToDoItem newItem = new ToDoItem(task, new Date(created), taskid); todoItems.add(0, newItem); } while(toDoListCursor.moveToNext()); aa.notifyDataSetChanged(); } private void restoreUIState() { // Get the activity preferences object. SharedPreferences settings = getPreferences(0); // Read the UI state values, specifying default values. String text = settings.getString(TEXT_ENTRY_KEY, ""); Boolean adding = settings.getBoolean(ADDING_ITEM_KEY, false); // Restore the UI to the previous state. if (adding) { addNewItem(); myEditText.setText(text); } } @Override public void onSaveInstanceState(Bundle outState) { outState.putInt(SELECTED_INDEX_KEY, myListView.getSelectedItemPosition()); super.onSaveInstanceState(outState); } @Override public void onRestoreInstanceState(Bundle savedInstanceState) { int pos = -1; if (savedInstanceState != null) if (savedInstanceState.containsKey(SELECTED_INDEX_KEY)) pos = savedInstanceState.getInt(SELECTED_INDEX_KEY, -1); myListView.setSelection(pos); } @Override protected void onPause() { super.onPause(); // Get the activity preferences object. SharedPreferences uiState = getPreferences(0); // Get the preferences editor. SharedPreferences.Editor editor = uiState.edit(); // Add the UI state preference values. editor.putString(TEXT_ENTRY_KEY, myEditText.getText().toString()); editor.putBoolean(ADDING_ITEM_KEY, addingNew); // Commit the preferences. editor.commit(); } @Override public boolean onCreateOptionsMenu(Menu menu) { super.onCreateOptionsMenu(menu); // Create and add new menu items. MenuItem itemAdd = menu.add(0, ADD_NEW_TODO, Menu.NONE, R.string.add_new); MenuItem itemRem = menu.add(0, REMOVE_TODO, Menu.NONE, R.string.remove); // Assign icons itemAdd.setIcon(R.drawable.add_new_item); itemRem.setIcon(R.drawable.remove_item); // Allocate shortcuts to each of them. itemAdd.setShortcut('0', 'a'); itemRem.setShortcut('1', 'r'); return true; } @Override public boolean onPrepareOptionsMenu(Menu menu) { super.onPrepareOptionsMenu(menu); int idx = myListView.getSelectedItemPosition(); String removeTitle = getString(addingNew ? 
R.string.cancel : R.string.remove); MenuItem removeItem = menu.findItem(REMOVE_TODO); removeItem.setTitle(removeTitle); removeItem.setVisible(addingNew || idx > -1); return true; } @Override public void onCreateContextMenu(ContextMenu menu, View v, ContextMenu.ContextMenuInfo menuInfo) { super.onCreateContextMenu(menu, v, menuInfo); menu.setHeaderTitle("Selected To Do Item"); menu.add(0, REMOVE_TODO, Menu.NONE, R.string.remove); } @Override public boolean onOptionsItemSelected(MenuItem item) { super.onOptionsItemSelected(item); int index = myListView.getSelectedItemPosition(); switch (item.getItemId()) { case (REMOVE_TODO): { if (addingNew) { cancelAdd(); } else { removeItem(index); } return true; } case (ADD_NEW_TODO): { addNewItem(); return true; } } return false; } @Override public boolean onContextItemSelected(MenuItem item) { super.onContextItemSelected(item); switch (item.getItemId()) { case (REMOVE_TODO): { AdapterView.AdapterContextMenuInfo menuInfo; menuInfo =(AdapterView.AdapterContextMenuInfo)item.getMenuInfo(); int index = menuInfo.position; removeItem(index); return true; } } return false; } @Override public void onDestroy() { super.onDestroy(); } private void cancelAdd() { addingNew = false; myEditText.setVisibility(View.GONE); } private void addNewItem() { addingNew = true; myEditText.setVisibility(View.VISIBLE); myEditText.requestFocus(); } private void removeItem(int _index) { // Items are added to the listview in reverse order, so invert the index. //toDoDBAdapter.removeTask(todoItems.size()-_index); ToDoItem item = todoItems.get(_index); final long selectedId = item.getTaskId(); mService.removeTask(selectedId); entries--; Toast.makeText(this, "Entry deleted", Toast.LENGTH_SHORT).show(); updateArray(); } @Override protected void onStart() { super.onStart(); Intent intent = new Intent(this, TodoService.class); bindService(intent, mConnection, Context.BIND_AUTO_CREATE); } @Override protected void onStop() { super.onStop(); // Unbind from the service if (mBound) { unbindService(mConnection); mBound = false; } } private ServiceConnection mConnection = new ServiceConnection() { public void onServiceConnected(ComponentName className, IBinder service) { LocalBinder binder = (LocalBinder) service; mService = binder.getService(); mBound = true; } public void onServiceDisconnected(ComponentName arg0) { mBound = false; } }; public class TimedToast extends AsyncTask<Long, Integer, Integer> { @Override protected Integer doInBackground(Long... 
arg0) { if (notifs < 15) { try { Toast.makeText(ToDoList.this, entries + " entries left", Toast.LENGTH_SHORT).show(); notifs++; Thread.sleep(20000); } catch (InterruptedException e) { } } return 0; } } } TodoService.java: package com.examples.todolist; import android.app.Service; import android.content.ContentResolver; import android.content.ContentValues; import android.content.Intent; import android.database.Cursor; import android.os.Binder; import android.os.IBinder; public class TodoService extends Service { private final IBinder mBinder = new LocalBinder(); @Override public IBinder onBind(Intent arg0) { return mBinder; } public class LocalBinder extends Binder { TodoService getService() { return TodoService.this; } } public void insertTask(ToDoItem _task) { ContentResolver cr = getContentResolver(); ContentValues values = new ContentValues(); values.put(TodoProvider.KEY_CREATION_DATE, _task.getCreated().getTime()); values.put(TodoProvider.KEY_TASK, _task.getTask()); cr.insert(TodoProvider.CONTENT_URI, values); } public void updateTask(ToDoItem _task) { long tid = _task.getTaskId(); ContentResolver cr = getContentResolver(); ContentValues values = new ContentValues(); values.put(TodoProvider.KEY_TASK, _task.getTask()); cr.update(TodoProvider.CONTENT_URI, values, TodoProvider.KEY_ID + "=" + tid, null); } public void removeTask(long tid) { ContentResolver cr = getContentResolver(); cr.delete(TodoProvider.CONTENT_URI, TodoProvider.KEY_ID + "=" + tid, null); } public Cursor getAllToDoItemsCursor() { ContentResolver cr = getContentResolver(); return cr.query(TodoProvider.CONTENT_URI, null, null, null, null); } } TodoProvider.java: package com.examples.todolist; import android.content.*; import android.database.Cursor; import android.database.SQLException; import android.database.sqlite.SQLiteOpenHelper; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteQueryBuilder; import android.database.sqlite.SQLiteDatabase.CursorFactory; import android.net.Uri; import android.text.TextUtils; import android.util.Log; public class TodoProvider extends ContentProvider { public static final Uri CONTENT_URI = Uri.parse("content://com.examples.provider.todolist/todo"); @Override public boolean onCreate() { Context context = getContext(); todoHelper dbHelper = new todoHelper(context, DATABASE_NAME, null, DATABASE_VERSION); todoDB = dbHelper.getWritableDatabase(); return (todoDB == null) ? false : true; } @Override public Cursor query(Uri uri, String[] projection, String selection, String[] selectionArgs, String sort) { SQLiteQueryBuilder tb = new SQLiteQueryBuilder(); tb.setTables(TODO_TABLE); // If this is a row query, limit the result set to the passed in row. switch (uriMatcher.match(uri)) { case TASK_ID: tb.appendWhere(KEY_ID + "=" + uri.getPathSegments().get(1)); break; default: break; } // If no sort order is specified sort by date / time String orderBy; if (TextUtils.isEmpty(sort)) { orderBy = KEY_ID; } else { orderBy = sort; } // Apply the query to the underlying database. Cursor c = tb.query(todoDB, projection, selection, selectionArgs, null, null, orderBy); // Register the contexts ContentResolver to be notified if // the cursor result set changes. c.setNotificationUri(getContext().getContentResolver(), uri); // Return a cursor to the query result. return c; } @Override public Uri insert(Uri _uri, ContentValues _initialValues) { // Insert the new row, will return the row number if // successful. 
long rowID = todoDB.insert(TODO_TABLE, "task", _initialValues); // Return a URI to the newly inserted row on success. if (rowID > 0) { Uri uri = ContentUris.withAppendedId(CONTENT_URI, rowID); getContext().getContentResolver().notifyChange(uri, null); return uri; } throw new SQLException("Failed to insert row into " + _uri); } @Override public int delete(Uri uri, String where, String[] whereArgs) { int count; switch (uriMatcher.match(uri)) { case TASKS: count = todoDB.delete(TODO_TABLE, where, whereArgs); break; case TASK_ID: String segment = uri.getPathSegments().get(1); count = todoDB.delete(TODO_TABLE, KEY_ID + "=" + segment + (!TextUtils.isEmpty(where) ? " AND (" + where + ')' : ""), whereArgs); break; default: throw new IllegalArgumentException("Unsupported URI: " + uri); } getContext().getContentResolver().notifyChange(uri, null); return count; } @Override public int update(Uri uri, ContentValues values, String where, String[] whereArgs) { int count; switch (uriMatcher.match(uri)) { case TASKS: count = todoDB.update(TODO_TABLE, values, where, whereArgs); break; case TASK_ID: String segment = uri.getPathSegments().get(1); count = todoDB.update(TODO_TABLE, values, KEY_ID + "=" + segment + (!TextUtils.isEmpty(where) ? " AND (" + where + ')' : ""), whereArgs); break; default: throw new IllegalArgumentException("Unknown URI " + uri); } getContext().getContentResolver().notifyChange(uri, null); return count; } @Override public String getType(Uri uri) { switch (uriMatcher.match(uri)) { case TASKS: return "vnd.android.cursor.dir/vnd.examples.task"; case TASK_ID: return "vnd.android.cursor.item/vnd.examples.task"; default: throw new IllegalArgumentException("Unsupported URI: " + uri); } } // Create the constants used to differentiate between the different URI // requests. private static final int TASKS = 1; private static final int TASK_ID = 2; private static final UriMatcher uriMatcher; // Allocate the UriMatcher object, where a URI ending in 'tasks' will // correspond to a request for all tasks, and 'tasks' with a // trailing '/[rowID]' will represent a single task row. static { uriMatcher = new UriMatcher(UriMatcher.NO_MATCH); uriMatcher.addURI("com.examples.provider.Todolist", "tasks", TASKS); uriMatcher.addURI("com.examples.provider.Todolist", "tasks/#", TASK_ID); } //The underlying database private SQLiteDatabase todoDB; private static final String TAG = "TodoProvider"; private static final String DATABASE_NAME = "todolist.db"; private static final int DATABASE_VERSION = 1; private static final String TODO_TABLE = "todolist"; // Column Names public static final String KEY_ID = "_id"; public static final String KEY_TASK = "task"; public static final String KEY_CREATION_DATE = "date"; public long insertTask(ToDoItem _task) { // Create a new row of values to insert. ContentValues newTaskValues = new ContentValues(); // Assign values for each row. newTaskValues.put(KEY_TASK, _task.getTask()); newTaskValues.put(KEY_CREATION_DATE, _task.getCreated().getTime()); // Insert the row. 
    return todoDB.insert(TODO_TABLE, null, newTaskValues); } public boolean updateTask(long _rowIndex, String _task) { ContentValues newValue = new ContentValues(); newValue.put(KEY_TASK, _task); return todoDB.update(TODO_TABLE, newValue, KEY_ID + "=" + _rowIndex, null) > 0; } public boolean removeTask(long _rowIndex) { return todoDB.delete(TODO_TABLE, KEY_ID + "=" + _rowIndex, null) > 0; } // Helper class for opening, creating, and managing database version control private static class todoHelper extends SQLiteOpenHelper { private static final String DATABASE_CREATE = "create table " + TODO_TABLE + " (" + KEY_ID + " integer primary key autoincrement, " + KEY_TASK + " TEXT, " + KEY_CREATION_DATE + " INTEGER);"; public todoHelper(Context cn, String name, CursorFactory cf, int ver) { super(cn, name, cf, ver); } @Override public void onCreate(SQLiteDatabase db) { db.execSQL(DATABASE_CREATE); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { Log.w(TAG, "Upgrading database from version " + oldVersion + " to " + newVersion + ", which will destroy all old data"); db.execSQL("DROP TABLE IF EXISTS " + TODO_TABLE); onCreate(db); } } } I've omitted the other files as I'm sure they are correct. When I run the program, LogCat shows that the NullPointerException occurs in populateTodoList(), at toDoListCursor = mService.getAllToDoItemsCursor(). mService is the TodoService reference obtained through the LocalBinder. I've added the service to the Manifest file, but I still cannot find out why it's causing an exception. Thanks in advance.
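    One likely cause, judging only from the code shown (a sketch, not a confirmed fix): bindService() is asynchronous, so onCreate() calls populateTodoList() before onServiceConnected() has fired, and mService is still null at that point. Deferring the first query until the connection callback avoids the NullPointerException:

        // In onCreate(), drop the direct call to populateTodoList() and let the
        // ServiceConnection trigger it once the binder has actually arrived.
        private ServiceConnection mConnection = new ServiceConnection() {
            public void onServiceConnected(ComponentName className, IBinder service) {
                LocalBinder binder = (LocalBinder) service;
                mService = binder.getService();
                mBound = true;
                populateTodoList();   // safe here: mService is now non-null
            }

            public void onServiceDisconnected(ComponentName arg0) {
                mBound = false;
            }
        };

    Two other mismatches in the posted code are worth checking once the crash is gone: the UriMatcher registers the authority "com.examples.provider.Todolist" with the path "tasks", while CONTENT_URI uses the lowercase authority "com.examples.provider.todolist" and the path "todo", so delete() and update() would fall through to their "Unsupported URI" branches.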

    Read the article

  • IRM Item Codes – what are they for?

    - by martin.abrahams
    A number of colleagues have been asking about IRM item codes recently – what are they for, when are they useful, how can you control them to meet some customer requirements? This is quite a big topic, but this article provides a few answers. An item code is part of the metadata of every sealed document – unless you define a custom metadata model. The item code is defined when a file is sealed, and usually defaults to a timestamp/filename combination. This time/name combo tends to make item codes unique for each new document, but actually item codes are not necessarily unique, as will become clear shortly. In most scenarios, item codes are not relevant to the evaluation of a user’s rights - the context name is the critical piece of metadata, as a user typically has a role that grants access to an entire classification of information regardless of item code. This is key to the simplicity and manageability of the Oracle IRM solution. Item codes are occasionally exposed to users in the UI, but most users probably never notice and never care. Nevertheless, here is one example of where you can see an item code – when you hover the mouse pointer over a sealed file. As you see, the item code for this freshly created file combines a timestamp with the file name. But what are item codes for? The first benefit of item codes is that they enable you to manage exceptions to the policy defined for a context. Thus, I might have access to all oracle-internal files – except for 2011_03_11 13:33:29 Board Minutes.sdocx. This simple mechanism enables Oracle IRM to provide file-by-file control where appropriate, whilst offering the scalability and manageability of classification-based control for the majority of users and content. You really don’t want to be managing each file individually, but never say never. Item codes can also be used for the opposite effect – to include a file in a user’s rights when their role would ordinarily deny access. So, you can assign a role that allows access only to specified item codes. For example, my role might say that I have access to precisely one file – the one shown above. So how are item codes set? In the vast majority of scenarios, item codes are set automatically as part of the sealing process. The sealing API uses the timestamp and filename as shown, and the user need not even realise that this has happened. This automatically creates item codes that are for all practical purposes unique - and that are also intelligible to users who might want to refer to them when viewing or assigning rights in the management UI. It is also possible for suitably authorised users and applications to set the item code manually or programmatically if required. Setting the item code manually using the IRM Desktop: The manual process is a simple extension of the sealing task. An authorised user can select the Advanced… sealing option, and will see a dialog that offers the option to specify the item code. To see this option, the user’s role needs the Set Item Code right – you don’t want most users to give any thought at all to item codes, so by default the option is hidden. Setting the item code programmatically: A more common scenario is that an application controls the item code programmatically. For example, a document management system that seals documents as part of a workflow might set the item code to match the document’s unique identifier in its repository. 
    This offers the option to tie IRM rights evaluation directly to the security model defined in the document management system. Again, the sealing application needs to be authorised to Set Item Code. The Payslip Scenario: To give a concrete example of how item codes might be used in a real-world scenario, consider a Human Resources workflow such as payslips. The goal might be to allow the HR team to have access to all payslips, but each employee to have access only to their own payslips. To enable this, you might have an IRM classification called Payslips. The HR team have a role in the normal way that allows access to all payslips. However, each employee would have an Item Reader role that only allows them to access files that have a particular item code – and that item code might match the employee’s payroll number. So, employee number 123123123 would have access to items with that code. This shows why item codes are not necessarily unique – you can deliberately set the same code on many files for ease of administration. The employees might have the right to unseal or print their payslip, so the solution acts as a secure delivery mechanism that allows payslips to be distributed via corporate email without any fear that they might be accessed by IT administrators, or forwarded accidentally to anyone other than the intended recipient. All that remains is to ensure that as each user’s payslip is sealed, it is assigned the correct item code – something that is easily managed by a simple IRM sealing application. Each month, an employee’s payslip is sealed with the same item code, so you do not need to keep amending the list of items that the user has access to – they have access to all documents that carry their employee code.

    Read the article

  • Comma-separated value insertion in SQL Server 2005

    - by Asim Sajjad
    How can I insert values from a comma-separated input parameter with a stored procedure? For example:
        exec StoredProcedureName 17,'127,204,110,198',7,'162,170,163,170'
    You can see that I have two comma-separated value lists in the parameter list. Both will have the same number of values: if the first has 5 comma-separated values, then the second one also has 5 comma-separated values. The lists are paired by position:
    - 127 and 162 are related
    - 204 and 170 are related
    - ...and the same for the others.
    How can I insert these paired values? Inserting from one comma-separated list works, but how do I insert from two in parallel?
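    One approach that works on SQL Server 2005 (a sketch with hypothetical table and parameter names, since the procedure body is not shown): split each list with a recursive CTE that tracks the token position, then join the two splits on position.

        -- Inside the procedure; @ids and @vals stand in for the two list parameters.
        DECLARE @ids varchar(max), @vals varchar(max);
        SET @ids  = '127,204,110,198';
        SET @vals = '162,170,163,170';

        WITH SplitIds AS (
            -- Anchor: first token plus the remainder of the string.
            SELECT 1 AS pos,
                   CAST(LEFT(@ids, CHARINDEX(',', @ids + ',') - 1) AS int) AS val,
                   STUFF(@ids, 1, CHARINDEX(',', @ids + ','), '') AS rest
            UNION ALL
            -- Recurse: peel off one token per step until the remainder is empty.
            SELECT pos + 1,
                   CAST(LEFT(rest, CHARINDEX(',', rest + ',') - 1) AS int),
                   STUFF(rest, 1, CHARINDEX(',', rest + ','), '')
            FROM SplitIds
            WHERE rest <> ''
        ),
        SplitVals AS (
            SELECT 1 AS pos,
                   CAST(LEFT(@vals, CHARINDEX(',', @vals + ',') - 1) AS int) AS val,
                   STUFF(@vals, 1, CHARINDEX(',', @vals + ','), '') AS rest
            UNION ALL
            SELECT pos + 1,
                   CAST(LEFT(rest, CHARINDEX(',', rest + ',') - 1) AS int),
                   STUFF(rest, 1, CHARINDEX(',', rest + ','), '')
            FROM SplitVals
            WHERE rest <> ''
        )
        INSERT INTO dbo.PairedValues (FirstValue, SecondValue)  -- hypothetical target table
        SELECT i.val, v.val
        FROM SplitIds i
        JOIN SplitVals v ON v.pos = i.pos;

    Joining on pos is what keeps 127 with 162, 204 with 170, and so on.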

    Read the article

  • iPhone sqlite3_column_text issue

    - by Ruchir Shah
    I have a column in an sqlite3 table with values like 1 ½", 1 ¾", 2 ½", etc. The column has the VARCHAR datatype. I am using this code: pref_HoseDiameter = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 2)]; Now, when I fetch these values from the database, I get pref_HoseDiameter string values like this: 1 1/2", 1 3/4", 2 1/2". How can I fetch those values as they are stored in the database, or convert them so they look like the database values? Help would be appreciated. Thanks in advance.
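    sqlite3_column_text always returns UTF-8 bytes, so a sketch like the following (reusing compiledStatement and column index 2 from the question; the NSLog diagnostic is an addition) can show whether the fractions were ever stored correctly:

        const unsigned char *raw = sqlite3_column_text(compiledStatement, 2);
        if (raw != NULL) {
            pref_HoseDiameter = [NSString stringWithUTF8String:(const char *)raw];
            // ½ is the two-byte UTF-8 sequence 0xC2 0xBD; if the byte count shows
            // plain ASCII ("1/2"), the data was converted before it reached SQLite.
            NSLog(@"bytes=%d text=%@",
                  sqlite3_column_bytes(compiledStatement, 2), pref_HoseDiameter);
        }

    If the log shows plain ASCII, the fix is upstream: re-import the seed data into the .db file as UTF-8 rather than trying to convert it in the app.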

    Read the article

  • Mixed Table Type with other types as parameters to a Stored Procedure (C#)

    - by amemak
    Hi, I am asking how I can pass multiple parameters to a stored procedure when one of them is a user-defined table type. When I try the following, it shows an error: INSERT INTO BD (ID, VALUE, BID) values( (SELECT t1.ID, t1.Value FROM @Table AS t1),someintvalue) Here @Table is the user-defined table parameter. The errors are: Msg 116, Level 16, State 1, Procedure UpdateBD, Line 12 Only one expression can be specified in the select list when the subquery is not introduced with EXISTS. Msg 109, Level 15, State 1, Procedure UpdateBD, Line 11 There are more columns in the INSERT statement than values specified in the VALUES clause. The number of values in the VALUES clause must match the number of columns specified in the INSERT statement. Thank you
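    Both errors disappear if the statement is written as INSERT ... SELECT instead of INSERT ... VALUES, because the SELECT can supply two columns from the table parameter while the scalar parameter fills the third. A sketch, with @SomeIntValue standing in for the scalar parameter name:

        INSERT INTO BD (ID, VALUE, BID)
        SELECT t1.ID, t1.Value, @SomeIntValue
        FROM @Table AS t1;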

    Read the article

  • PostgreSQL - best way to return an array of key-value pairs

    - by Matt W
    I'm trying to select a number of fields, one of which needs to be an array with each element of the array containing two values. Each array item needs to contain a name (character varying) and an ID (numeric). I know how to return an array of single values (using the ARRAY keyword) but I'm unsure of how to return an array of an object which in itself contains two values. The query is something like SELECT t.field1, t.field2, ARRAY(--with each element containing two values i.e. {'TheName', 1 }) FROM MyTable t I read that one way to do this is by selecting the values into a type and then creating an array of that type. Problem is, the rest of the function is already returning a type (which means I would then have nested types - is that OK? If so, how would you read this data back in application code - i.e. with a .Net data provider like NPGSQL?) Any help is much appreciated.
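    One way to get a two-field element without nesting the query's own return type (a sketch; name_id, related, and the column names here are hypothetical): declare a composite type and build an array of it inside the ARRAY(subquery).

        -- Composite type for one (name, id) pair.
        CREATE TYPE name_id AS (name varchar, id numeric);

        SELECT t.field1,
               t.field2,
               ARRAY(SELECT ROW(r.name, r.id)::name_id
                     FROM related r
                     WHERE r.parent_id = t.id) AS pairs
        FROM MyTable t;

    On the client side, older Npgsql versions typically surface such a column as its text representation (e.g. {"(TheName,1)"}), so some parsing in application code is still needed unless the composite type is mapped explicitly.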

    Read the article

  • IS NULL doesn't work as expected in SQL Server 2000 with no Service Pack on it

    - by user306825
    The following batch executed on different instances of SQL Server 2000 illustrates the problem. select @@version create table a (a int) create table b (b int) insert into a(a) values (1) insert into a(a) values (2) insert into a(a) values (3) insert into b(b) values (1) insert into b(b) values (2) select * from a left outer join (select 1 as test, b from b) as j on j.b = a.a where j.test IS NULL drop table a drop table b Output 1: Microsoft SQL Server 2000 - 8.00.194 (Intel X86) Aug 6 2000 00:57:48 Copyright (c) 1988-2000 Microsoft Corporation Developer Edition on Windows NT 6.1 (Build 7600: ) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) a test b ----------- ----------- ----------- (0 row(s) affected) Output 2: Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38 Copyright (c) 1988-2003 Microsoft Corporation Developer Edition on Windows NT 5.2 (Build 3790: Service Pack 2) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) (1 row(s) affected) a test b ----------- ----------- ----------- 3 NULL NULL (1 row(s) affected) If someone encounters the same problem - make sure you have the SP installed!

    Read the article

  • C++ assignment question [closed]

    - by Adam Joof
    (Bubble Sort) In the bubble sort algorithm, smaller values gradually "bubble" their way upward to the top of the array like air bubbles rising in water, while the larger values sink to the bottom. The bubble sort makes several passes through the array. On each pass, successive pairs of elements are compared. If a pair is in increasing order (or the values are identical), we leave the values as they are. If a pair is in decreasing order, their values are swapped in the array. Write a program that sorts an array of 10 integers using bubble sort.
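    A minimal C++ sketch of the described algorithm (the ten sample values are arbitrary): on each pass, successive pairs are compared and any pair in decreasing order is swapped, which sinks the largest remaining value to the bottom of the array.

        #include <iostream>

        int main() {
            int a[10] = {34, 7, 23, 32, 5, 62, 78, 4, 97, 41};
            const int n = 10;

            // After pass k, the last k elements are already in their final place.
            for (int pass = 0; pass < n - 1; ++pass) {
                for (int i = 0; i < n - 1 - pass; ++i) {
                    if (a[i] > a[i + 1]) {   // decreasing order: swap the pair
                        int tmp = a[i];
                        a[i] = a[i + 1];
                        a[i + 1] = tmp;
                    }
                }
            }

            for (int i = 0; i < n; ++i)
                std::cout << a[i] << ' ';
            std::cout << '\n';
            return 0;
        }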

    Read the article

< Previous Page | 142 143 144 145 146 147 148 149 150 151 152 153  | Next Page >