Search Results

Search found 1863 results on 75 pages for 'populate'.


  • Creating A SharePoint Parent/Child List Relationship – SharePoint 2010 Edition

    - by Mark Rackley
    Hey blog readers… It has been almost 2 years since I posted my most read blog on creating a Parent/Child list relationship in SharePoint 2007: Creating a SharePoint List Parent / Child Relationship - Out of the Box And then a year ago I improved on my method and redid the blog post… still for SharePoint 2007: Creating a SharePoint List Parent/Child Relationship – VIDEO REMIX Since then many of you have been asking me how to get this to work in SharePoint 2010, and frankly I have just not had time to look into it. I wish I could have jumped into this sooner, but have just recently begun to look at it. Well… after all this time I have actually come up with two solutions that work. Neither of them is as clean as I’d like it to be, but I wanted to get something in your hands that you can start using today. Hopefully in the coming weeks and months I’ll be able to improve upon this further and give you guys some better options. For the most part, the process is identical to the 2007 process, but you have probably found out that the list view web parts in 2010 behave differently, and getting the Parent ID to your new child form can be a pain in the rear (at least that’s what I’ve discovered). Anyway, like I said, I have found a couple of solutions that work. If you know of a better one, please let us know, as it bugs me that this is not as elegant as my 2007 implementation. Getting on the same page First thing I’d recommend is recreating this blog post: Creating a SharePoint List Parent/Child Relationship – VIDEO REMIX in SharePoint 2010… There are some slight differences, but it’s basically the same… Here’s a quick video of me doing this in SP 2010: Creating Lists necessary for this blog post Now that you have the lists created, let’s set up the New Time form to use a QueryString variable to populate the Parent ID field: Creating parameters in Child’s new item form to set parent ID Did I talk fast enough through both of those videos? Hopefully by now that stuff is old hat to you, but I wanted to make sure everyone could get on the same page. Okay… let’s get started. Solution 1 – XSLTListView with JavaScript This solution is the more elegant of the two; however, it does require the use of a little JavaScript. The other solution does not use JavaScript, but it also doesn’t use the pretty new SP 2010 pop-ups. I’ll let you decide which you like better. The basic steps of this solution are: Insert a Related Item View Insert a ContentEditorWebPart Insert script in the ContentEditorWebPart that pulls the ID from the query string and calls the method to insert a new item on the child entry form Hide the toolbar from the data view to remove the “add new item” link. Again, you don’t HAVE to use a CEWP, you could just put the JavaScript directly in the page using SPD.
Anyway, here is how I did it: Using Related Item View / JavaScript Here’s the JavaScript I used in my Content Editor Web Part: <script type="text/javascript"> function NewTime() { // Get the Query String values and split them out into the vals array var vals = new Object(); var qs = location.search.substring(1, location.search.length); var args = qs.split("&"); for (var i=0; i < args.length; i++) { var nameVal = args[i].split("="); var temp = unescape(nameVal[1]).split('+'); nameVal[1] = temp.join(' '); vals[nameVal[0]] = nameVal[1]; } var issueID = vals["ID"]; //use this to bring up the pretty pop up NewItem2(event,"http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID=" + issueID); //use this to open a new window //window.location="http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID=" + issueID; } </script> Solution 2 – DataFormWebPart and exact same 2007 Process This solution is a little more of a hack, but it also MUCH more close to the process we did in SP 2007. So, if you don’t mind not having the pretty pop-up and prefer the comforts of what you are used to, you can give this one a try.  The basics steps are: Insert a DataFormWebPart instead of the List Data View Create a Parameter on DataFormWebPart to store “ID” Query String Variable Filter DataFormWebPart using Parameter Insert a link at bottom of DataForm Web part that points to the Child’s new item form and passes in the Parent Id using the Parameter. See.. like I told you, exact same process as in 2007 (except using the DataFormWeb Part). The DataFormWebPart also requires a lot more work to make it look “pretty” but it’s just table rows and cells, and can be configured pretty painlessly.  Here is that video: Using DataForm Web Part One quick update… if you change the link in this solution from: <tr> <td><a href="http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID={$IssueIDParam}">Click here to create new item...</a> </td> </tr> to: <tr> <td> <a href="javascript:NewItem2(event,'http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID={$IssueIDParam}');">Click here to create new item...</a> </td> </tr> It will open up in the pretty pop up and act the same as solution one… So… both Solutions will now behave the same to the end user. Just depends on which you want to implement. That’s all for now… Remember in both solutions when you have them working, you can make the “IssueID” invisible to users by using the “ms-hidden” class (it’s my previous blog post on the subject up there). That’s basically all there is to it! No pithy or witty closing this time… I am sorry it took me so long to dive into this and I hope your questions are answered. As I become more polished myself I will try to come up with a cleaner solution that will make everyone happy… As always, thanks for taking the time to stop by.

    Read the article

  • Announcing Windows Azure Mobile Services

    - by ScottGu
    I’m excited to announce a new capability we are adding to Windows Azure today: Windows Azure Mobile Services Windows Azure Mobile Services makes it incredibly easy to connect a scalable cloud backend to your client and mobile applications.  It allows you to easily store structured data in the cloud that can span both devices and users, integrate it with user authentication, as well as send out updates to clients via push notifications. Today’s release enables you to add these capabilities to any Windows 8 app in literally minutes, and provides a super productive way for you to quickly build out your app ideas.  We’ll also be adding support to enable these same scenarios for Windows Phone, iOS, and Android devices soon. Read this getting started tutorial to walkthrough how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app that is cloud enabled using Windows Azure Mobile Services.  Or watch this video of me showing how to do it step by step. Getting Started If you don’t already have a Windows Azure account, you can sign up for a no-obligation Free Trial.  Once you are signed-up, click the “preview features” section under the “account” tab of the www.windowsazure.com website and enable your account to support the “Mobile Services” preview.   Instructions on how to enable this can be found here. Once you have the mobile services preview enabled, log into the Windows Azure Portal, click the “New” button and choose the new “Mobile Services” icon to create your first mobile backend.  Once created, you’ll see a quick-start page like below with instructions on how to connect your mobile service to an existing Windows 8 client app you have already started working on, or how to create and connect a brand-new Windows 8 client app with it: Read this getting started tutorial to walkthrough how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app  that stores data in Windows Azure. Storing Data in the Cloud Storing data in the cloud with Windows Azure Mobile Services is incredibly easy.  When you create a Windows Azure Mobile Service, we automatically associate it with a SQL Database inside Windows Azure.  The Windows Azure Mobile Service backend then provides built-in support for enabling remote apps to securely store and retrieve data from it (using secure REST end-points utilizing a JSON-based ODATA format) – without you having to write or deploy any custom server code.  Built-in management support is provided within the Windows Azure portal for creating new tables, browsing data, setting indexes, and controlling access permissions. This makes it incredibly easy to connect client applications to the cloud, and enables client developers who don’t have a server-code background to be productive from the very beginning.  They can instead focus on building the client app experience, and leverage Windows Azure Mobile Services to provide the cloud backend services they require.  Below is an example of client-side Windows 8 C#/XAML code that could be used to query data from a Windows Azure Mobile Service.  Client-side C# developers can write queries like this using LINQ and strongly typed POCO objects, which are then translated into HTTP REST queries that run against a Windows Azure Mobile Service.   
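    The client-side code itself is shown as a screenshot in the original post; as a rough stand-in (the class, property and service names below are illustrative, not ScottGu’s exact sample, and assume the Mobile Services client SDK is referenced), such a query might look like this:

        using System.Collections.Generic;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.MobileServices;

        public class TodoItem
        {
            public int Id { get; set; }
            public string Text { get; set; }
            public bool Complete { get; set; }
        }

        public class TodoService
        {
            // URL and application key are placeholders; the real values come from the
            // quick-start page of your Mobile Service in the Windows Azure portal.
            private readonly MobileServiceClient client =
                new MobileServiceClient("https://your-service.azure-mobile.net/", "your-application-key");

            public async Task<IEnumerable<TodoItem>> GetIncompleteItemsAsync()
            {
                // The LINQ predicate is translated by the client SDK into an HTTP REST query
                // (roughly $filter=complete eq false) against the TodoItem table in the Mobile Service.
                var items = await client.GetTable<TodoItem>()
                                        .Where(item => item.Complete == false)
                                        .ToListAsync();
                return items;
            }
        }

    The returned list can then be bound to a ListView’s ItemsSource (or any other control) to asynchronously populate the client UI.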
Developers don’t have to write or deploy any custom server-side code in order to enable client-side code below to execute and asynchronously populate their client UI: Because Mobile Services is part of Windows Azure, developers can later choose to augment or extend their initial solution and add custom server functionality and more advanced logic if they want.  This provides maximum flexibility, and enables developers to grow and extend their solutions to meet any needs. User Authentication and Push Notifications Windows Azure Mobile Services also make it incredibly easy to integrate user authentication/authorization and push notifications within your applications.  You can use these capabilities to enable authentication and fine grain access control permissions to the data you store in the cloud, as well as to trigger push notifications to users/devices when the data changes.  Windows Azure Mobile Services supports the concept of “server scripts” (small chunks of server-side script that executes in response to actions) that make it really easy to enable these scenarios. Below are some tutorials that walkthrough common authentication/authorization/push scenarios you can do with Windows Azure Mobile Services and Windows 8 apps: Enabling User Authentication Authorizing Users  Get Started with Push Notifications Push Notifications to multiple Users Manage and Monitor your Mobile Service Just like with every other service in Windows Azure, you can monitor usage and metrics of your mobile service backend using the “Dashboard” tab within the Windows Azure Portal. The dashboard tab provides a built-in monitoring view of the API calls, Bandwidth, and server CPU cycles of your Windows Azure Mobile Service.   You can also use the “Logs” tab within the portal to review error messages.  This makes it easy to monitor and track how your application is doing. Scale Up as Your Business Grows Windows Azure Mobile Services now allows every Windows Azure customer to create and run up to 10 Mobile Services in a free, shared/multi-tenant hosting environment (where your mobile backend will be one of multiple apps running on a shared set of server resources).  This provides an easy way to get started on projects at no cost beyond the database you connect your Windows Azure Mobile Service to (note: each Windows Azure free trial account also includes a 1GB SQL Database that you can use with any number of apps or Windows Azure Mobile Services). If your client application becomes popular, you can click the “Scale” tab of your Mobile Service and switch from “Shared” to “Reserved” mode.  Doing so allows you to isolate your apps so that you are the only customer within a virtual machine.  This allows you to elastically scale the amount of resources your apps use – allowing you to scale-up (or scale-down) your capacity as your traffic grows: With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need.  This enables a super flexible model that is ideal for new mobile app scenarios, as well as startups who are just getting going.  Summary I’ve only scratched the surface of what you can do with Windows Azure Mobile Services – there are a lot more features to explore.  With Windows Azure Mobile Services you’ll be able to build mobile app experiences faster than ever, and enable even better user experiences – by connecting your client apps to the cloud. 
Visit the Windows Azure Mobile Services development center to learn more, and build your first Windows 8 app connected with Windows Azure today.  And read this getting started tutorial to walkthrough how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app that is cloud enabled using Windows Azure Mobile Services. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Using Table-Valued Parameters in SQL Server

    - by Jesse
    I work with stored procedures in SQL Server pretty frequently and have often found myself with a need to pass in a list of values at run-time. Quite often this list contains a set of ids on which the stored procedure needs to operate, the size and contents of which are not known at design time. In the past I’ve taken the collection of ids (which are usually integers), converted them to a string representation where each value is separated by a comma and passed that string into a VARCHAR parameter of a stored procedure. The body of the stored procedure would then need to parse that string into a table variable which could be easily consumed with set-based logic within the rest of the stored procedure. This approach works pretty well but the VARCHAR variable has always felt like an unwanted “middle man” in this scenario. Of course, I could use a BULK INSERT operation to load the list of ids into a temporary table that the stored procedure could use, but that approach seems heavy-handed in situations where the list of values is usually going to contain only a few dozen values. Fortunately SQL Server 2008 introduced the concept of table-valued parameters, which effectively eliminates the need for the clumsy middle man VARCHAR parameter. Example: Customer Transaction Summary Report Let’s say we have a report that can summarize the transactions that we’ve conducted with customers over a period of time. The report returns a pretty simple dataset containing one row per customer with some key metrics about how much business that customer has conducted over the date range for which the report is being run. Sometimes the report is run for a single customer, sometimes it’s run for all customers, and sometimes it’s run for a handful of customers (e.g. a salesman runs it for the customers that fall into his sales territory). This report can be invoked from a website on-demand, or it can be scheduled for periodic delivery to certain users via SQL Server Reporting Services. Because the report can be created from different places and the query to generate the report is complex, it’s been packed into a stored procedure that accepts three parameters: @startDate – The beginning of the date range for which the report should be run. @endDate – The end of the date range for which the report should be run. @customerIds – The customer Ids for which the report should be run. Obviously, the @startDate and @endDate parameters are DATETIME variables. The @customerIds parameter, however, needs to contain a list of the identity values (primary key) from the Customers table representing the customers that were selected for this particular run of the report. In prior versions of SQL Server we might have made this parameter a VARCHAR variable, but with SQL Server 2008 we can make it into a table-valued parameter. Defining And Using The Table Type In order to use a table-valued parameter, we first need to tell SQL Server what the table will look like. We do this by creating a user defined type. For the purposes of this stored procedure we need a very simple type to model a table variable with a single integer column. We can create a generic type called ‘IntegerListTableType’ like this: CREATE TYPE IntegerListTableType AS TABLE (Value INT NOT NULL) Once defined, we can use this new type to define the @customerIds parameter in the signature of our stored procedure.
    The parameter list for the stored procedure definition might look like: CREATE PROCEDURE dbo.rpt_CustomerTransactionSummary @startDate datetime, @endDate datetime, @customerIds IntegerListTableType READONLY Note the ‘READONLY’ keyword following the declaration of the @customerIds parameter. SQL Server requires that any table-valued parameter be marked as ‘READONLY’, and no DML (INSERT/UPDATE/DELETE) statements can be performed on a table-valued parameter within the routine in which it’s used. Aside from the DML restriction, however, you can do pretty much anything with a table-valued parameter that you could with a normal TABLE variable. With the user defined type and stored procedure defined as above, we could invoke it like this: DECLARE @customerIdList IntegerListTableType INSERT @customerIdList VALUES (1) INSERT @customerIdList VALUES (2) INSERT @customerIdList VALUES (3) EXEC dbo.rpt_CustomerTransactionSummary @startDate = '2012-05-01', @endDate = '2012-06-01', @customerIds = @customerIdList Note that we can simply declare a variable of type ‘IntegerListTableType’ just like any other normal variable and insert values into it just like a TABLE variable. We could also populate the variable with a SELECT … INTO or INSERT … SELECT statement if desired. Using The Table-Valued Parameter With ADO .NET Invoking a stored procedure with a table-valued parameter from ADO .NET is as simple as building a DataTable and passing it in as the Value of a SqlParameter. Here’s some example code for how we would construct the SqlParameter for the @customerIds parameter in our stored procedure: var customerIdsParameter = new SqlParameter(); customerIdsParameter.Direction = ParameterDirection.Input; customerIdsParameter.TypeName = "IntegerListTableType"; customerIdsParameter.Value = selectedCustomerIds.ToIntegerListDataTable("Value"); All we’re doing here is new’ing up an instance of SqlParameter, setting the parameter’s direction, specifying the name of the User Defined Type that this parameter uses, and setting its value. We’re assuming here that we have an IEnumerable<int> variable called ‘selectedCustomerIds’ containing all of the customer Ids for which the report should be run. The ‘ToIntegerListDataTable’ method is an extension method of the IEnumerable<int> type that looks like this: public static DataTable ToIntegerListDataTable(this IEnumerable<int> intValues, string columnName) { var integerListDataTable = new DataTable(); integerListDataTable.Columns.Add(columnName); foreach(var intValue in intValues) { var nextRow = integerListDataTable.NewRow(); nextRow[columnName] = intValue; integerListDataTable.Rows.Add(nextRow); } return integerListDataTable; } Since the ‘IntegerListTableType’ has a single int column called ‘Value’, we pass that in for the ‘columnName’ parameter to the extension method. The method creates a new single-columned DataTable using the provided column name, then iterates over the items in the IEnumerable<int> instance adding one row for each value. We can then use this SqlParameter instance when invoking the stored procedure just like we would use any other parameter. Advanced Functionality Passing a list of integers into a stored procedure is a very simple usage scenario for the table-valued parameters feature, but I’ve found that it covers the majority of situations where I’ve needed to pass a collection of data for use in a query at run-time.
    I should note that the BULK INSERT feature still makes sense for passing large amounts of data to SQL Server for processing. MSDN seems to suggest that 1000 rows of data is the tipping point where the overhead of a BULK INSERT operation can pay dividends. I should also note here that table-valued parameters can be used to deal with more complex data structures than single-columned tables of integers. A User Defined Type that backs a table-valued parameter can use things like identities and computed columns. That said, using some of these more advanced features might require the use of the SqlDataRecord and SqlMetaData classes instead of a simple DataTable. Erland Sommarskog has a great article on his website that describes when and how to use these classes for table-valued parameters. What About Reporting Services? Earlier in the post I referenced the fact that our example stored procedure would be called from both a web application and a SQL Server Reporting Services report. Unfortunately, using table-valued parameters from SSRS reports can be a bit tricky and warrants its own blog post, which I’ll be putting together and posting sometime in the near future.
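    One detail worth spelling out from Jesse’s ADO .NET example above: the SqlParameter also has to be attached to a SqlCommand, and a table-valued parameter is sent with SqlDbType.Structured. Here is a hedged sketch of that wiring (the wrapper class, method name and connection string are mine, not from the post; it assumes the ToIntegerListDataTable extension method shown above is in scope):

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        public static class CustomerTransactionSummaryReport
        {
            public static DataTable Run(
                string connectionString,
                DateTime startDate,
                DateTime endDate,
                IEnumerable<int> selectedCustomerIds)
            {
                var results = new DataTable();

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("dbo.rpt_CustomerTransactionSummary", connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    command.Parameters.AddWithValue("@startDate", startDate);
                    command.Parameters.AddWithValue("@endDate", endDate);

                    // A table-valued parameter is sent as SqlDbType.Structured and names
                    // the user defined type created in SQL Server.
                    SqlParameter customerIdsParameter = command.Parameters.Add("@customerIds", SqlDbType.Structured);
                    customerIdsParameter.TypeName = "dbo.IntegerListTableType";
                    customerIdsParameter.Value = selectedCustomerIds.ToIntegerListDataTable("Value");

                    using (var adapter = new SqlDataAdapter(command))
                    {
                        adapter.Fill(results); // opens and closes the connection as needed
                    }
                }

                return results;
            }
        }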

    Read the article

  • Combination of Operating Mode and Commit Strategy

    - by Kevin Yang
    If you want to populate multiple targets from a single source, you may also want to ensure that every row from the source affects all targets uniformly (or separately). Let’s consider the Example Mapping below. Suppose a row from SOURCE causes different changes in multiple targets (TARGET_1, TARGET_2 and TARGET_3): for example, it can be successfully inserted into TARGET_1 and TARGET_3 but fails to be inserted into TARGET_2, and the current Mapping Property TLO (target load order) is “TARGET_1 -> TARGET_2 -> TARGET_3”. What should Oracle Warehouse Builder do in order to commit the appropriate data to all affected targets at the same time? If it doesn’t behave as you intended, the data could become inaccurate and possibly unusable. (Figure: Example Mapping) In OWB, we can use Mapping Configuration Commit Strategies and Operating Modes together to achieve this kind of requirement. Below we will explore the combination of these two features and how they affect the results in the target tables. Before going to the example, let’s review some of the terms we will be using (details can be found in the white paper Oracle® Warehouse Builder Data Modeling, ETL, and Data Quality Guide 11g Release 2): Operating Modes: Set-Based Mode: Warehouse Builder generates a single SQL statement that processes all data and performs all operations. Row-Based Mode: Warehouse Builder generates statements that process data row by row. The select statement is in a SQL cursor. All subsequent statements are PL/SQL. Row-Based (Target Only) Mode: Warehouse Builder generates a cursor select statement and attempts to include as many operations as possible in the cursor. For each target, Warehouse Builder inserts each row into the target separately. Commit Strategies: Automatic: Warehouse Builder loads and then automatically commits data based on the mapping design. If the mapping has multiple targets, Warehouse Builder commits and rolls back each target separately and independently of other targets. Use the automatic commit when the consequences of multiple targets being loaded unequally are not great or are irrelevant. Automatic correlated: A specialized type of automatic commit that applies to PL/SQL mappings with multiple targets only. Warehouse Builder considers all targets collectively and commits or rolls back data uniformly across all targets. Use the correlated commit when it is important to ensure that every row in the source affects all affected targets uniformly. Manual: Select manual commit control for PL/SQL mappings when you want to interject complex business logic, perform validations, or run other mappings before committing data. Combination of the commit strategy and operating mode To understand the effects of each combination of operating mode and commit strategy, I’ll illustrate using the following example Mapping. First we insert 100 rows into the SOURCE table and make sure that the 99th row and 100th row have the same ID value. Then we create a unique key constraint on the ID column of the TARGET_2 table. So while running the example mapping, OWB tries to load all 100 rows into each of the targets. But the mapping should fail to load the 100th row into TARGET_2, because it will violate the unique key constraint of table TARGET_2.
    With different combinations of Commit Strategy and Operating Mode, here are the results. • Set-based / Correlated Commit: Configuration of Example mapping: (screenshot) Result: (screenshot) What’s happening: A single error anywhere in the mapping triggers the rollback of all data. When OWB encounters the error inserting into Target_2, it reports an error for the table and does not load the row. OWB rolls back all the rows inserted into Target_1 and does not attempt to load rows to Target_3. No rows are added to any of the target tables. • Row-based / Correlated Commit: Configuration of Example mapping: (screenshot) Result: (screenshot) What’s happening: OWB evaluates each row separately and loads it to all three targets. Loading continues in this way until OWB encounters an error loading row 100 to Target_2. OWB reports the error and does not load the row. It rolls back row 100 previously inserted into Target_1 and does not attempt to load row 100 to Target_3. Then, if there are remaining rows, OWB will continue loading them, resuming with loading rows to Target_1. The mapping completes with 99 rows inserted into each target. • Set-based / Automatic Commit: Configuration of Example mapping: (screenshot) Result: (screenshot) What’s happening: When OWB encounters the error inserting into Target_2, it does not load any rows and reports an error for the table. It does, however, continue to insert rows into Target_3 and does not roll back the rows previously inserted into Target_1. The mapping completes with one error message for Target_2, no rows inserted into Target_2, and 100 rows inserted into Target_1 and Target_3 separately. • Row-based / Automatic Commit: Configuration of Example mapping: (screenshot) Result: (screenshot) What’s happening: OWB evaluates each row separately for loading into the targets. Loading continues in this way until OWB encounters an error loading row 100 to Target_2 and reports the error. OWB does not roll back row 100 from Target_1, and does insert it into Target_3. If there are remaining rows, it will continue to load them. The mapping completes with 99 rows inserted into Target_2 and 100 rows inserted into each of the other targets. Note: Automatic Correlated commit is not applicable for row-based (target only). If you design a mapping with the row-based (target only) and correlated commit combination, OWB runs the mapping but does not perform the correlated commit. In set-based mode, correlated commit may impact the size of your rollback segments. Space for rollback segments may be a concern when you merge data (insert/update or update/insert). Correlated commit operates transparently with PL/SQL bulk processing code. The correlated commit strategy is not available for mappings run in any mode that are configured for Partition Exchange Loading or that include a Queue, Match Merge, or Table Function operator. If you want to practice in your own environment, you can follow these steps: 1. Import the MDL file: commit_operating_mode.mdl 2. Fix the location for oracle module ORCL and deploy all tables under it. 3. Insert sample records into the SOURCE table, using the PL/SQL code below: begin for i in 1..99 loop insert into source values(i, 'col_'||i); end loop; insert into source values(99, 'col_99'); end; 4. Configure MAPPING_1 to any combination of operating mode and commit strategy you want to test, and make sure the TLO (target load order) feature of the mapping is enabled. 5.
Deploy Mapping “MAPPING_1”. 6. Run the mapping and check the result.

    Read the article

  • Java Cloud Service Integration using Web Service Data Control

    - by Jani Rautiainen
    Java Cloud Service (JCS) provides a platform to develop and deploy business applications in the cloud. In Fusion Applications Cloud deployments customers do not have the option to deploy custom applications developed with JDeveloper, to ensure the integrity and supportability of the hosted application service. Instead the custom applications can be deployed to the JCS and integrated to the Fusion Applications Cloud instance. This series of articles will go through the features of JCS, provide end-to-end examples on how to develop and deploy applications on JCS and how to integrate them with the Fusion Applications instance. In this article a custom application integrating with Fusion Applications using a Web Service Data Control will be implemented. Pre-requisites Access to Cloud instance In order to deploy the application, access to a JCS instance is needed; a free trial JCS instance can be obtained from the Oracle Cloud site. To register you will need a credit card even if the credit card will not be charged. To register simply click "Try it" and choose the "Java" option. The confirmation email will contain the connection details. See this video for an example of the registration. Once the request is processed you will be assigned 2 service instances: Java and Database. Applications deployed to the JCS must use Oracle Database Cloud Service as their underlying database, so when a JCS instance is created a database instance is associated with it using a JDBC data source. The cloud services can be monitored and managed through the web UI. For details refer to Getting Started with Oracle Cloud. JDeveloper JDeveloper contains Cloud specific features related to e.g. connection and deployment. To use these features download JDeveloper from the JDeveloper download site by clicking the "Download JDeveloper 11.1.1.7.1 for ADF deployment on Oracle Cloud" link; this version of JDeveloper will have the JCS integration features that will be used in this article. For versions that do not include the Cloud integration features the Oracle Java Cloud Service SDK or the JCS Java Console can be used for deployment. For details on installing and configuring JDeveloper refer to the installation guide. For details on the SDK refer to Using the Command-Line Interface to Monitor Oracle Java Cloud Service and Using the Command-Line Interface to Manage Oracle Java Cloud Service. Create Application In this example the "JcsWsDemo" application created in the "Java Cloud Service Integration using Web Service Proxy" article is used as the base. Create Web Service Data Control In this example we will use a Web Service Data Control to integrate with the Credit Rule Service in Fusion Applications. The data control will be used to query data from Fusion Applications using a web service call and present the data in a table.
    To generate the data control choose the "Model" project and navigate to "New -> All Technologies -> Business Tier -> Data Controls -> Web Service Data Control" and enter the following: Name: CreditRuleServiceDC URL: https://ic-[POD].oracleoutsourcing.com/icCnSetupCreditRulesPublicService/CreditRuleService?WSDL Service: {http://xmlns.oracle.com/apps/incentiveCompensation/cn/creditSetup/creditRule/creditRuleService/}CreditRuleService On step 2 select the "findRule" operation: Skip step 3 and on step 4 define the credentials to access the service. Do note that in this example these credentials are only used when testing locally; for JCS deployment the credentials need to be manually updated on the EAR file: Click "Finish" and the data control generation is done. Creating UI In order to use the data control we will need to populate the complex objects FindCriteria and FindControl. For simplicity, in this example we will create logic in a managed bean that populates the objects. Open "JcsWsDemoBean.java" and add the following logic: Map findCriteria; Map findControl; public void setFindCriteria(Map findCriteria) { this.findCriteria = findCriteria; } public Map getFindCriteria() { findCriteria = new HashMap(); findCriteria.put("fetchSize",10); findCriteria.put("fetchStart",0); return findCriteria; } public void setFindControl(Map findControl) { this.findControl = findControl; } public Map getFindControl() { findControl = new HashMap(); return findControl; } Open "JcsWsDemo.jspx", navigate to "Data Controls -> CreditRuleServiceDC -> findRule(Object, Object) -> result" and drag and drop the "result" node into the "af:form" element in the page: On the "Edit Table Columns" dialog remove all columns except "RuleId" and "Name": On the "Edit Action Binding" window displayed, enter a reference to the Java class created above by selecting "#{JcsWsDemoBean.findCriteria}": Also define the value for the "findControl" by selecting "#{JcsWsDemoBean.findControl}". Deploy to JCS For a Web Service Data Control the authentication details need to be updated in the connection details before deploying. Open "connections.xml" by navigating to "Application Resources -> Descriptors -> ADF META-INF -> connections.xml": Change the user name and password entry from: <soap username="transportUserName" password="transportPassword" to match the access details for the target environment. Follow the same steps as documented in the previous article "Java Cloud Service ADF Web Application". Once deployed the application can be accessed with the URL: https://java-[identity domain].java.[data center].oraclecloudapps.com/JcsWsDemo-ViewController-context-root/faces/JcsWsDemo.jspx When accessed, the first 10 rules in the system are displayed: Summary In this article we learned how to integrate with Fusion Applications using a Web Service Data Control in JCS. In future articles various other integration techniques will be covered.

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I’m yet to find someone who doesn’t have a bit of an epiphany when I describe this. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don’t consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It’s wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I’m not suggesting that people don’t tune their queries – there’s plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it’s not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination. Let’s look at a very simple query as an example, using the AdventureWorks database: We’re picking the different Weight values out of the Product table, and it’s doing this by scanning the table and doing a Sort. It’s a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later. Before I explain the problem here, let me jump back into the SSIS world... If you’ve investigated how to tune an SSIS flow, then you’ll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I’m using a larger data set for this, because my small list of Weights won’t demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation. Back in the land of T-SQL, we consider our Distinct Sort operator. It’s also blocking. It won’t let data through until it’s seen all of it. If you weren’t okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it’s being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I’m taking some liberties here, because the two queries aren’t the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this. OPTION (FAST 10000) This hint means that it will choose a query which will optimise for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant. 
First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking. The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through: This 1 row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint. ...and now with OPTION (FAST 10000): A very different plan, I’m sure you’ll agree. In case you’re curious, those arrows in the top one are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000. The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops, across 780,000 rows! That’s not generally recommended at all. That’s “Go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute. But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds. When debugging SSIS, you run the package, and sit there waiting to see the Debug information start appearing. 
You look for the numbers on the data flow, and seeing operators going Yellow and Green. Without the hint, I’d sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query, to make it run several times faster. It wasn’t the case at all – but it felt like it to SSIS.
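    As an aside that is not part of Rob’s post: if you want to observe the “time to first row” effect outside of SSIS, a rough way is to time a plain SqlDataReader over the same query with and without the hint (the connection string and queries below are placeholders):

        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;

        static class TimeToFirstRow
        {
            static void Measure(string connectionString, string query)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(query, connection))
                {
                    connection.Open();
                    var stopwatch = Stopwatch.StartNew();

                    using (var reader = command.ExecuteReader())
                    {
                        var rowCount = 0;
                        while (reader.Read())
                        {
                            if (rowCount == 0)
                            {
                                // This is the number SSIS cares about: how quickly the first buffer can start filling.
                                Console.WriteLine("First row after {0} ms", stopwatch.ElapsedMilliseconds);
                            }
                            rowCount++;
                        }
                        Console.WriteLine("{0} rows, finished after {1} ms", rowCount, stopwatch.ElapsedMilliseconds);
                    }
                }
            }

            // Example usage: compare the same query with and without the hint.
            // Measure(cs, "SELECT DISTINCT Weight FROM Production.Product");
            // Measure(cs, "SELECT DISTINCT Weight FROM Production.Product OPTION (FAST 10000)");
        }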

    Read the article

  • ASP.NET WebAPI Security 3: Extensible Authentication Framework

    - by Your DisplayName here!
    In my last post, I described the identity architecture of ASP.NET Web API. The short version was, that Web API (beta 1) does not really have an authentication system on its own, but inherits the client security context from its host. This is fine in many situations (e.g. AJAX style callbacks with an already established logon session). But there are many cases where you don’t use the containing web application for authentication, but need to do it yourself. Examples of that would be token based authentication and clients that don’t run in the context of the web application (e.g. desktop clients / mobile). Since Web API provides a nice extensibility model, it is easy to implement whatever security framework you want on top of it. My design goals were: Easy to use. Extensible. Claims-based. ..and of course, this should always behave the same, regardless of the hosting environment. In the rest of the post I am outlining some of the bits and pieces, So you know what you are dealing with, in case you want to try the code. At the very heart… is a so called message handler. This is a Web API extensibility point that gets to see (and modify if needed) all incoming and outgoing requests. Handlers run after the conversion from host to Web API, which means that handler code deals with HttpRequestMessage and HttpResponseMessage. See Pedro’s post for more information on the processing pipeline. This handler requires a configuration object for initialization. Currently this is very simple, it contains: Settings for the various authentication and credential types Settings for claims transformation Ability to block identity inheritance from host The most important part here is the credential type support, but I will come back to that later. The logic of the message handler is simple: Look at the incoming request. If the request contains an authorization header, try to authenticate the client. If this is successful, create a claims principal and populate the usual places. If not, return a 401 status code and set the Www-Authenticate header. Look at outgoing response, if the status code is 401, set the Www-Authenticate header. Credential type support Under the covers I use the WIF security token handler infrastructure to validate credentials and to turn security tokens into claims. The idea is simple: an authorization header consists of two pieces: the schema and the actual “token”. My configuration object allows to associate a security token handler with a scheme. This way you only need to implement support for a specific credential type, and map that to the incoming scheme value. The current version supports HTTP Basic Authentication as well as SAML and SWT tokens. (I needed to do some surgery on the standard security token handlers, since WIF does not directly support string-ified tokens. The next version of .NET will fix that, and the code should become simpler then). You can e.g. use this code to hook up a username/password handler to the Basic scheme (the default scheme name for Basic Authentication). config.Handler.AddBasicAuthenticationHandler( (username, password) => username == password); You simply have to provide a password validation function which could of course point back to your existing password library or e.g. membership. The following code maps a token handler for Simple Web Tokens (SWT) to the Bearer scheme (the currently favoured scheme name for OAuth2). 
You simply have to specify the issuer name, realm and shared signature key: config.Handler.AddSimpleWebTokenHandler(     "Bearer",     http://identity.thinktecture.com/trust,     Constants.Realm,     "Dc9Mpi3jaaaUpBQpa/4R7XtUsa3D/ALSjTVvK8IUZbg="); For certain integration scenarios it is very useful if your Web API can consume SAML tokens. This is also easily accomplishable. The following code uses the standard WIF API to configure the usual SAMLisms like issuer, audience, service certificate and certificate validation. Both SAML 1.1 and 2.0 are supported. var registry = new ConfigurationBasedIssuerNameRegistry(); registry.AddTrustedIssuer( "d1 c5 b1 25 97 d0 36 94 65 1c e2 64 fe 48 06 01 35 f7 bd db", "ADFS"); var adfsConfig = new SecurityTokenHandlerConfiguration(); adfsConfig.AudienceRestriction.AllowedAudienceUris.Add( new Uri(Constants.Realm)); adfsConfig.IssuerNameRegistry = registry; adfsConfig.CertificateValidator = X509CertificateValidator.None; // token decryption (read from configuration section) adfsConfig.ServiceTokenResolver = FederatedAuthentication.ServiceConfiguration.CreateAggregateTokenResolver(); config.Handler.AddSaml11SecurityTokenHandler("SAML", adfsConfig); Claims Transformation After successful authentication, if configured, the standard WIF ClaimsAuthenticationManager is called to run claims transformation and validation logic. This stage is used to transform the “technical” claims from the security token into application claims. You can either have a separate transformation logic, or share on e.g. with the containing web application. That’s just a matter of configuration. Adding the authentication handler to a Web API application In the spirit of Web API this is done in code, e.g. global.asax for web hosting: protected void Application_Start() {     AreaRegistration.RegisterAllAreas();     ConfigureApis(GlobalConfiguration.Configuration);     RegisterGlobalFilters(GlobalFilters.Filters);     RegisterRoutes(RouteTable.Routes);     BundleTable.Bundles.RegisterTemplateBundles(); } private void ConfigureApis(HttpConfiguration configuration) {     configuration.MessageHandlers.Add( new AuthenticationHandler(ConfigureAuthentication())); } private AuthenticationConfiguration ConfigureAuthentication() {     var config = new AuthenticationConfiguration     {         // sample claims transformation for consultants sample, comment out to see raw claims         ClaimsAuthenticationManager = new ApiClaimsTransformer(),         // value of the www-authenticate header, // if not set, the first scheme added to the handler collection is used         DefaultAuthenticationScheme = "Basic"     };     // add token handlers - see above     return config; } You can find the full source code and some samples here. In the next post I will describe some of the samples in the download, and then move on to authorization. HTH
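    As a footnote (this is not the code from the download, just a rough sketch of the handler flow described above, with the actual authentication stubbed out), a Web API message handler following those steps might look something like this:

        using System.Net;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Threading;
        using System.Threading.Tasks;

        public class SketchAuthenticationHandler : DelegatingHandler
        {
            protected override Task<HttpResponseMessage> SendAsync(
                HttpRequestMessage request, CancellationToken cancellationToken)
            {
                AuthenticationHeaderValue authorization = request.Headers.Authorization;

                // No credentials, or validation failed: return 401 and advertise the default scheme.
                if (authorization == null || !TryAuthenticate(authorization))
                {
                    var response = new HttpResponseMessage(HttpStatusCode.Unauthorized);
                    response.Headers.WwwAuthenticate.Add(new AuthenticationHeaderValue("Basic"));

                    var tcs = new TaskCompletionSource<HttpResponseMessage>();
                    tcs.SetResult(response);
                    return tcs.Task;
                }

                return base.SendAsync(request, cancellationToken);
            }

            private bool TryAuthenticate(AuthenticationHeaderValue authorization)
            {
                // Placeholder: map authorization.Scheme to the configured token handler, validate the
                // credential, build a ClaimsPrincipal and populate the usual places
                // (Thread.CurrentPrincipal and, for web hosting, HttpContext.User).
                return false;
            }
        }

    A client would then simply send an Authorization header such as "Basic" plus the base64-encoded "username:password" pair (or "Bearer" plus a token) with each request; the real handler in the download adds the scheme-to-token-handler mapping and claims transformation described above.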

    Read the article

  • ASP.NET MVC 3 Hosting :: Error Handling and CustomErrors in ASP.NET MVC 3 Framework

    - by C. Miller
    So, what else is new in MVC 3? MVC 3 now has a GlobalFilterCollection that is automatically populated with a HandleErrorAttribute. This default FilterAttribute brings with it a new way of handling errors in your web applications. In short, you can now handle errors inside of the MVC pipeline. What does that mean? This gives you direct programmatic control over handling your 500 errors in the same way that ASP.NET and CustomErrors give you configurable control of handling your HTTP error codes. How does that work out? Think of it as a routing table specifically for your Exceptions, it's pretty sweet! Global Filters The new Global.asax file now has a RegisterGlobalFilters method that is used to add filters to the new GlobalFilterCollection, statically located at System.Web.Mvc.GlobalFilter.Filters. By default this method adds one filter, the HandleErrorAttribute. public class MvcApplication : System.Web.HttpApplication {     public static void RegisterGlobalFilters(GlobalFilterCollection filters)     {         filters.Add(new HandleErrorAttribute());     } HandleErrorAttributes The HandleErrorAttribute is pretty simple in concept: MVC has already adjusted us to using Filter attributes for our AcceptVerbs and RequiresAuthorization, now we are going to use them for (as the name implies) error handling, and we are going to do so on a (also as the name implies) global scale. The HandleErrorAttribute has properties for ExceptionType, View, and Master. The ExceptionType allows you to specify what exception that attribute should handle. The View allows you to specify which error view (page) you want it to redirect to. Last but not least, the Master allows you to control which master page (or as Razor refers to them, Layout) you want to render with, even if that means overriding the default layout specified in the view itself. public class MvcApplication : System.Web.HttpApplication {     public static void RegisterGlobalFilters(GlobalFilterCollection filters)     {         filters.Add(new HandleErrorAttribute         {             ExceptionType = typeof(DbException),             // DbError.cshtml is a view in the Shared folder.             View = "DbError",             Order = 2         });         filters.Add(new HandleErrorAttribute());     }Error Views All of your views still work like they did in the previous version of MVC (except of course that they can now use the Razor engine). However, a view that is used to render an error can not have a specified model! This is because they already have a model, and that is System.Web.Mvc.HandleErrorInfo @model System.Web.Mvc.HandleErrorInfo           @{     ViewBag.Title = "DbError"; } <h2>A Database Error Has Occurred</h2> @if (Model != null) {     <p>@Model.Exception.GetType().Name<br />     thrown in @Model.ControllerName @Model.ActionName</p> }Errors Outside of the MVC Pipeline The HandleErrorAttribute will only handle errors that happen inside of the MVC pipeline, better known as 500 errors. Errors outside of the MVC pipeline are still handled the way they have always been with ASP.NET. You turn on custom errors, specify error codes and paths to error pages, etc. It is important to remember that these will happen for anything and everything outside of what the HandleErrorAttribute handles. Also, these will happen whenever an error is not handled with the HandleErrorAttribute from inside of the pipeline. 
    <system.web>  <customErrors mode="On" defaultRedirect="~/error">     <error statusCode="404" redirect="~/error/notfound"></error>  </customErrors> Sample Controllers public class ExampleController : Controller {     public ActionResult Exception()     {         throw new ArgumentNullException();     }     public ActionResult Db()     {         // Inherits from DbException         throw new MyDbException();     } } public class ErrorController : Controller {     public ActionResult Index()     {         return View();     }     public ActionResult NotFound()     {         return View();     } } Putting It All Together If we have all the code above included in our MVC 3 project, here is how the following scenarios will play out: 1. A controller action throws an Exception. You will remain on the current page and the global HandleErrorAttributes will render the Error view. 2. A controller action throws any type of DbException. You will remain on the current page and the global HandleErrorAttributes will render the DbError view. 3. Go to a non-existent page. You will be redirected to the Error controller's NotFound action by the CustomErrors configuration for HTTP StatusCode 404. But don't take my word for it; download the sample project and try it yourself. Three Important Lessons Learned For the most part this is all pretty straightforward, but there are a few gotchas that you should remember to watch out for: 1) Error views have models, but they must be of type HandleErrorInfo. It is confusing at first to think that you can't control the M in an MVC page, but it's for a good reason. Errors can come from any action in any controller, and no redirect is taking place, so the view engine is just going to render an error view with the only data it has: the HandleErrorInfo model. Do not try to set the model on your error page or pass in a different object through a controller action; it will just blow up and cause a second exception after your first exception! 2) When the HandleErrorAttribute renders a page, it does not pass through a controller or an action. The standard web.config CustomErrors literally redirect a failed request to a new page. The HandleErrorAttribute is just rendering a view, so it is not going to pass through a controller action. But that's ok! Remember, a controller's job is to get the model for a view, but an error already has a model ready to give to the view, thus there is no need to pass through a controller. That being said, the normal ASP.NET custom errors still need to route through controllers. So if you want to share an error page between the HandleErrorAttribute and your web.config redirects, you will need to create a controller action and route for it. But then when you render that error view from your action, you can only use the HandleErrorInfo model or ViewData dictionary to populate your page. 3) The HandleErrorAttribute obeys if CustomErrors are on or off, but does not use their redirects. If you turn CustomErrors off in your web.config, the HandleErrorAttributes will stop handling errors. However, that is the only configuration these two mechanisms share. The HandleErrorAttribute will not use your defaultRedirect property, or any other errors registered with custom errors. In Summary The HandleErrorAttribute is for displaying 500 errors that were caused by exceptions inside of the MVC pipeline. The custom errors are for redirecting from error pages caused by other HTTP codes.
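    To make lesson 2 concrete, here is one way (a sketch of mine, not part of the original sample) that the ErrorController's Index action could build the HandleErrorInfo model itself, so a web.config redirect target can reuse the same shared Error view that the HandleErrorAttribute renders:

        using System;
        using System.Web.Mvc;

        public class ErrorController : Controller
        {
            // Target of the web.config defaultRedirect ("~/error").
            public ActionResult Index()
            {
                // No exception object reaches us through a redirect, so construct the
                // HandleErrorInfo model that the shared Error view expects.
                var model = new HandleErrorInfo(
                    new Exception("An unhandled error occurred outside of the MVC pipeline."),
                    "Error",
                    "Index");

                return View("Error", model);
            }

            public ActionResult NotFound()
            {
                return View();
            }
        }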

    Read the article

  • SQL Server Table Polling by Multiple Subscribers

    - by Daniel Hester
    Background Designing Stored Procedures that are safe for multiple subscribers (to call simultaneously) can be challenging.  For example let’s say that you want multiple worker processes to poll a shared work queue that’s encapsulated as a SQL Table. This is a common scenario and through experience you’ll find that you want to use Table Hints to prevent unwanted locking when performing simultaneous queries on the same table. There are three table hints to consider: NOLOCK, READPAST and UPDLOCK. Both NOLOCK and READPAST table hints allow you to SELECT from a table without placing a LOCK on that table. However, SELECTs with the READPAST hint will ignore any records that are locked due to being updated/inserted (or otherwise “dirty”), whereas a SELECT with NOLOCK ignores all locks including dirty reads. For the initial update of the flag (that marks the record as available for subscription) I don’t use the NOLOCK Table Hint because I want to be sensitive to the “active” records in the table and I want to exclude them.  I use an Update Lock (UPDLOCK) in conjunction with a WHERE clause that uses a sub-select with a READPAST Table Hint in order to explicitly lock the records I’m updating (UPDLOCK) but not place a lock on the table when selecting the records that I’m going to update (READPAST). UPDATES should be allowed to lock the rows affected because we’re probably changing a flag on a record so that it is not included in a SELECT from another subscriber. On the UPDATE statement we should explicitly use the UPDLOCK to guard against lock escalation. A SELECT to check for the next record(s) to process can result in a shared read lock being held by more than one subscriber polling the shared work queue (SQL table). It is expected that more than one worker process (or server) might try to process the same new record(s) at the same time. When each process then tries to obtain the update lock, none of them can because another process has a shared read lock in place. Thus without the UPDLOCK hint the result would be a lock escalation deadlock; however with the UPDLOCK hint this condition is mitigated against. Note that using the READPAST table hint requires that you also set the ISOLATION LEVEL of the transaction to be READ COMMITTED (rather than the default of SERIALIZABLE). Guidance In the Stored Procedure that returns records to the multiple subscribers: Perform the UPDATE first. Change the flag that makes the record available to subscribers.  Additionally, you may want to update a LastUpdated datetime field in order to be able to check for records that “got stuck” in an intermediate state or for other auditing purposes. In the UPDATE statement use the (UPDLOCK) Table Hint on the UPDATE statement to prevent lock escalation. In the UPDATE statement also use a WHERE Clause that uses a sub-select with a (READPAST) Table Hint to select the records that you’re going to update. In the UPDATE statement use the OUTPUT clause in conjunction with a Temporary Table to isolate the record(s) that you’ve just updated and intend to return to the subscriber. This is the fastest way to update the record(s) and to get the records’ identifiers within the same operation. Finally do a set-based SELECT on the main Table (using the Temporary Table to identify the records in the set) with either a READPAST or NOLOCK table hint.  
Use NOLOCK if there are other processes (besides the multiple subscribers) that might be changing the data that you want to return to the multiple subscribers; or use READPAST if you're sure there are no other processes (besides the multiple subscribers) that might be updating column data in the table for other purposes (e.g. changes to a person’s last name).  NOLOCK is generally the better fit in this part of the scenario. See the following as an example: CREATE PROCEDURE [dbo].[usp_NewCustomersSelect] AS BEGIN -- OVERRIDE THE DEFAULT ISOLATION LEVEL SET TRANSACTION ISOLATION LEVEL READ COMMITTED -- SET NOCOUNT ON SET NOCOUNT ON -- DECLARE TEMP TABLE -- Note that this example uses CustomerId as an identifier; -- you could just use the Identity column Id if that’s all you need. DECLARE @CustomersTempTable TABLE ( CustomerId NVARCHAR(255) ) -- PERFORM UPDATE FIRST -- [Customers] is the name of the table -- [Id] is the Identity Column on the table -- [CustomerId] is the business document key used to identify the -- record globally, i.e. in other systems or across SQL tables -- [Status] is INT or BIT field (if the status is a binary state) -- [LastUpdated] is a datetime field used to record the time of the -- last update UPDATE [Customers] WITH (UPDLOCK) SET [Status] = 1, [LastUpdated] = GETDATE() OUTPUT [INSERTED].[CustomerId] INTO @CustomersTempTable WHERE ([Id] = (SELECT TOP 100 [Id] FROM [Customers] WITH (READPAST) WHERE ([Status] = 0) ORDER BY [Id] ASC)) -- PERFORM SELECT FROM ENTITY TABLE SELECT [C].[CustomerId], [C].[FirstName], [C].[LastName], [C].[Address1], [C].[Address2], [C].[City], [C].[State], [C].[Zip], [C].[ShippingMethod], [C].[Id] FROM [Customers] AS [C] WITH (NOLOCK), @CustomersTempTable AS [TEMP] WHERE ([C].[CustomerId] = [TEMP].[CustomerId]) END In a system that has been designed to have multiple status values for records that need to be processed in the Work Queue it is necessary to have a “Watch Dog” process by which “stale” records in intermediate states (such as “In Progress”) are detected, i.e. a [Status] of 0 = New or Unprocessed; a [Status] of 1 = In Progress; a [Status] of 2 = Processed; etc.. Thus, if you have a business rule that states that the application should only process new records if all of the old records have been processed successfully (or marked as an error), then it will be necessary to build a monitoring process to detect stalled or stale records in the Work Queue, hence the use of the LastUpdated column in the example above. The Status field along with the LastUpdated field can be used as the criteria to detect stalled / stale records. It is possible to put this watchdog logic into the stored procedure above, but I would recommend making it a separate monitoring function. In writing the stored procedure that checks for stale records I would recommend using the same kind of lock semantics as suggested above. 
The example below looks for records that have been in the “In Progress” state ([Status] = 1) for greater than 60 seconds: CREATE PROCEDURE [dbo].[usp_NewCustomersWatchDog] AS BEGIN -- TO OVERRIDE THE DEFAULT ISOLATION LEVEL SET TRANSACTION ISOLATION LEVEL READ COMMITTED -- SET NOCOUNT ON SET NOCOUNT ON DECLARE @MaxWait int; SET @MaxWait = 60 IF EXISTS (SELECT 1 FROM [dbo].[Customers] WITH (READPAST) WHERE ([Status] = 1) AND (DATEDIFF(s, [LastUpdated], GETDATE()) > @MaxWait)) BEGIN SELECT 1 AS [IsWatchDogError] END ELSE BEGIN SELECT 0 AS [IsWatchDogError] END END Downloads The zip file below contains two SQL scripts: one to create a sample database with the above stored procedures and one to populate the sample database with 10,000 sample records.  I am very grateful to Red-Gate software for their excellent SQL Data Generator tool which enabled me to create these sample records in no time at all. References http://msdn.microsoft.com/en-us/library/ms187373.aspx http://www.techrepublic.com/article/using-nolock-and-readpast-table-hints-in-sql-server/6185492 http://geekswithblogs.net/gwiele/archive/2004/11/25/15974.aspx http://grounding.co.za/blogs/romiko/archive/2009/03/09/biztalk-sql-receive-location-deadlocks-dirty-reads-and-isolation-levels.aspx
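
The guidance above is all T-SQL; for completeness, here is a minimal sketch of what one of the polling subscribers might look like from .NET, simply calling usp_NewCustomersSelect on an interval and processing whatever rows it claims. The connection string, poll interval and worker-loop shape are my assumptions and not part of the original article:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

class QueueSubscriber
{
    static void Main()
    {
        // Hypothetical connection string; point it at the database that holds the work queue.
        const string connectionString = "Server=.;Database=WorkQueue;Integrated Security=true";

        while (true)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("dbo.usp_NewCustomersSelect", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                connection.Open();

                // The proc both claims the batch (UPDATE with UPDLOCK/READPAST) and returns it,
                // so simply reading the result set is enough to take ownership of the rows.
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        string customerId = reader.GetString(reader.GetOrdinal("CustomerId"));
                        Console.WriteLine("Processing customer {0}", customerId);
                    }
                }
            }

            Thread.Sleep(TimeSpan.FromSeconds(5)); // crude poll interval
        }
    }
}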

    Read the article

  • (Unity) Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map is taking the data structure of the map and rotating it by 180 degrees. I noticed this problem when i was creating my rivers. I would set the River's height to flat, and noticed that the actual tiles that were flat in the graphical representation were flipped and mirrored. Here's 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped (graphically) the river by 180 degrees on the z axis, it would fit where the terrain is flattened. I have a suspicion it is being caused by a misunderstanding on my part of how vertices work. Alas, here is a snippet of the code that is used: This code here creates a new array of Tile objects, which hold the information for each tile, including its type, coordinate, height, and it's 4 vertices public DTileMap (int size_x, int size_y) { this.size_x = size_x; this.size_y = size_y; //Initialize Map_Data Array of Tile Objects map_data = new Tile[size_x, size_y]; for (int j = 0; j < size_y; j++) { for (int i = 0; i < size_x; i++) { map_data [i, j] = new Tile (); map_data[i,j].coordinate.x = (int)i; map_data[i,j].coordinate.y = (int)j; map_data[i,j].vertices[0] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -j * GTileMap.TileMap.tileSize); map_data[i,j].vertices[1] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[2] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[3] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); } } This code sets the river tiles to height 0 foreach (Tile t in map_data) { if (t.realType == "Water") { t.vertices[0].y = 0f; t.vertices[1].y = 0f; t.vertices[2].y = 0f; t.vertices[3].y = 0f; } } And below is the code to generate the actual graphics from the data: public void BuildMesh () { DTileMap.DTileMap map = new DTileMap.DTileMap (size_x, size_z); int numTiles = size_x * size_z; int numTris = numTiles * 2; int vsize_x = size_x + 1; int vsize_z = size_z + 1; int numVerts = vsize_x * vsize_z; // Generate the mesh data Vector3[] vertices = new Vector3[ numVerts ]; Vector3[] normals = new Vector3[numVerts]; Vector2[] uv = new Vector2[numVerts]; int[] triangles = new int[ numTris * 3 ]; int x, z; for (z=0; z < vsize_z; z++) { for (x=0; x < vsize_x; x++) { normals [z * vsize_x + x] = Vector3.up; uv [z * vsize_x + x] = new Vector2 ((float)x / size_x, 1f - (float)z / size_z); } } for (z=0; z < vsize_z; z+=1) { for (x=0; x < vsize_x; x+=1) { if (x == vsize_x - 1 && z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z - 1].vertices [3]; } else if (z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z - 1].vertices [2]; } else if (x == vsize_x - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z].vertices [1]; } else { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [0]; vertices [z * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [1]; vertices [(z+1) * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [2]; vertices [(z+1) * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [3]; } } } } for (z=0; 
z < size_z; z++) { for (x=0; x < size_x; x++) { int squareIndex = z * size_x + x; int triOffset = squareIndex * 6; triangles [triOffset + 0] = z * vsize_x + x + 0; triangles [triOffset + 2] = z * vsize_x + x + vsize_x + 0; triangles [triOffset + 1] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 3] = z * vsize_x + x + 0; triangles [triOffset + 5] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 4] = z * vsize_x + x + 1; } } // Create a new Mesh and populate with the data Mesh mesh = new Mesh (); mesh.vertices = vertices; mesh.triangles = triangles; mesh.normals = normals; mesh.uv = uv; // Assign our mesh to our filter/renderer/collider MeshFilter mesh_filter = GetComponent<MeshFilter> (); MeshCollider mesh_collider = GetComponent<MeshCollider> (); mesh_filter.mesh = mesh; mesh_collider.sharedMesh = mesh; calculateMeshTangents (mesh); BuildTexture (map); } If this looks familiar to you, it's because I got most of it from Quill18. I've been slowly adapting it for my uses. And please include any suggestions you have for my code. I'm still in the very early prototyping stage.
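
A guess rather than a confirmed answer: the per-tile corners are generated toward negative z (-j and -(j-1)) while BuildMesh fills its vertex grid in order of increasing z, and that kind of mismatch between the data grid and the mesh grid could show up as a mirrored or rotated map. Below is a minimal sketch of generating the corners toward positive z instead, assuming the same tileSize and the same corner ordering as the code above; the helper class is purely illustrative and not from the original project:

using UnityEngine;

public static class TileCorners
{
    // Builds the four corner vertices of tile (i, j) so that z grows with j,
    // matching a mesh builder that walks its rows in increasing z.
    // Corner order: 0 = (i, j), 1 = (i+1, j), 2 = (i, j+1), 3 = (i+1, j+1).
    public static Vector3[] Build(int i, int j, float height, float tileSize)
    {
        return new Vector3[]
        {
            new Vector3(i * tileSize, height, j * tileSize),
            new Vector3((i + 1) * tileSize, height, j * tileSize),
            new Vector3(i * tileSize, height, (j + 1) * tileSize),
            new Vector3((i + 1) * tileSize, height, (j + 1) * tileSize)
        };
    }
}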

    Read the article

  • How to Control Screen Layouts in LightSwitch

    - by ChrisD
    Visual Studio LightSwitch has a bunch of screen templates that you can use to quickly generate screens. They give you good starting points that you can customize further. When you add a new screen to your project you see a set of screen templates that you can choose from. These templates lay out all the related data you choose to put on a screen automatically for you. And don’t under estimate them; they do a great job of laying out controls in a smart way. For instance, a tab control will be used when you select more than one related set of data to display on a screen. However, you’re not limited to taking the layout as is. In fact, the screen designer is pretty flexible and allows you to create stacks of controls in a variety of configurations. You just need to visualize your screen as a series of containers that you can lay out in rows and columns. You then place controls or stacks of controls into these areas to align the screen exactly how you want. If you’re new in Visual Studio LightSwitch, you can see this tutorial. OK, Let’s start with a simple example. I have already designed my data entities for a simple order tracking system similar to the Northwind database. I also have added a Search Data  Screen to search my Products already. Now I will add a new Details Screen for my Products and make it the default screen via the “Add New Screen” dialog: The screen designer picks a simple layout for me based on the single entity I chose, in this case Product. Hit F5 to run the application, select a Product on the search screen to open the Product Details Screen. Notice that it’s pretty simple because my entity is simple. Click the “Customize” button in the top right of the screen so we can start tweaking it. The left side of the screen shows the containership of controls and data bindings (called the content tree) and the right side shows the live preview with data. Notice that we have a simple layout of two rows but only one row is populated (with a vertical stack of controls in this case). The bottom row is empty. You can envision the screen like this: Each container will display a group of data that you select. For instance in the above screen, the top row is set to a vertical stack control and the group of data to display is coming from Product. So when laying out screens you need to think in terms of containers of controls bound to groups of data. To change the data to which a container is bound, select the data item next to the container: You can select the “New Group” item in order to create more containers (or controls) within the current container. For instance to totally control the layout, select the Product in the top row and hit the delete key. This will delete the vertical stack and therefore all the controls on the screen. The content tree will still have two rows, but the rows are now both empty. If you want a layout of four containers (two rows and two columns) then select “New Group” for the data item and then change the vertical stack control to “Two Columns” for both of the rows as shown here: You can keep going on and on by selecting new groups and choosing between rows or columns. 
Here’s a layout with 8 containers, 4 rows and 2 columns: And here is a layout with 7 content areas; one row across the top of the screen and three rows with two columns below that: When you select Choose Content and select a data item like Product it will populate all the controls within the container (row or column in a vertical stack) however you have complete control on what to display within each group. You can delete fields you don’t want to display and/or change their controls. You can also change the size of controls and how they display by changing the settings in the properties window. If you are in the Screen Designer (and not the customization mode like we are here) you can also drag-drop data items from the left-hand side of the screen to the content tree. Note, however, that not all areas of the tree will allow you to drop a data item if there is a binding already set to a different set of data. For instance you can’t drop a Customer ID into the same group as a Product if they originate from different entities. To get around this, all you need to do is create a new group and content area as shown above. Let’s take a more complex example that deals with more than just product. I want to design a complex screen that displays Products and their Category, as well as all the OrderDetails for which that product is selected. This time I will create a new screen and select List and Details, select the Products screen data, and include the related OrderDetails. However I’m going to totally change the layout so that a Product grid is at the top left and below that is the selected Product detail. Below that will be the Category text fields and image in two columns below. On the right side I want the OrderDetails grid to take up the whole right side of the screen. All this can be done in customization mode while you’re debugging the application. To do this, I first deleted all the content items in the tree and then re-created the content tree as shown in the image below. I also set the image to be larger and the description textbox to be 5 rows using the property window below the live preview. I added the green lines to indicate the containers and show how it maps to the content tree (click to enlarge): I hope this demystifies the screen designer a little bit. Remember that screen templates are excellent starting points – you can take them as-is or customize them further. It takes a little fooling around with customizing screens to get them to do exactly what you want but there are a ton of possibilities once you get the hang of it. Stay tuned for more information on how to create your own screen templates that show up in the “Add New Screen” dialog. Enjoy! The tutorial that might be interested: Adding Custom Control In LightSwitch

    Read the article


  • How do I use C# and ADO.NET to query an Oracle table with a spatial column of type SDO_GEOMETRY?

    - by John Donahue
    My development machine is running Windows 7 Enterprise, 64-bit version. I am using Visual Studio 2010 Release Candidate. I am connecting to an Oracle 11g Enterprise server version 11.1.0.7.0. I had a difficult time locating Oracle client software that is made for 64-bit Windows systems and eventually landed here to download what I assume is the proper client connectivity software. I added a reference to "Oracle.DataAccess" which is version 2.111.6.0 (Runtime Version is v2.0.50727). I am targeting .NET CLR version 4.0 since all properties of my VS Solution are defaults and this is 2010 RC. I was then able to write a console application in C# that established connectivity, executed a SELECT statement, and properly returned data when the table in question does NOT contain a spatial column. My problem is that this no longer works when the table I query has a column of type SDO_GEOMETRY in it. Below is the simple console application I am trying to run that reproduces the problem. When the code gets to the line with the "ExecuteReader" command, an exception is raised and the message is "Unsupported column datatype". using System; using System.Data; using Oracle.DataAccess.Client; namespace ConsoleTestOracle { class Program { static void Main(string[] args) { string oradb = string.Format("Data Source={0};User Id={1};Password={2};", "hostname/servicename", "login", "password"); try { using (OracleConnection conn = new OracleConnection(oradb)) { conn.Open(); OracleCommand cmd = new OracleCommand(); cmd.Connection = conn; cmd.CommandText = "select * from SDO_8307_2D_POINTS"; cmd.CommandType = CommandType.Text; OracleDataReader dr = cmd.ExecuteReader(); } } catch (Exception e) { string error = e.Message; } } } } The fact that this code works when used against a table that does not contain a spatial column of type SDO_GEOMETRY makes me think I have my windows 7 machine properly configured so I am surprised that I get this exception when the table contains different kinds of columns. I don't know if there is some configuration on my machine or the Oracle machine that needs to be done, or if the Oracle client software I have installed is wrong, or old and needs to be updated. Here is the SQL I used to create the table, populate it with some rows containing points in the spatial column, etc. if you want to try to reproduce this exactly. SQL Create Commands: create table SDO_8307_2D_Points (ObjectID number(38) not null unique, TestID number, shape SDO_GEOMETRY); Insert into SDO_8307_2D_Points values (1, 1, SDO_GEOMETRY(2001, 8307, null, SDO_ELEM_INFO_ARRAY(1, 1, 1), SDO_ORDINATE_ARRAY(10.0, 10.0))); Insert into SDO_8307_2D_Points values (2, 2, SDO_GEOMETRY(2001, 8307, null, SDO_ELEM_INFO_ARRAY(1, 1, 1), SDO_ORDINATE_ARRAY(10.0, 20.0))); insert into user_sdo_geom_metadata values ('SDO_8307_2D_Points', 'SHAPE', SDO_DIM_ARRAY(SDO_DIM_ELEMENT('Lat', -180, 180, 0.05), SDO_DIM_ELEMENT('Long', -90, 90, 0.05)), 8307); create index SDO_8307_2D_Point_indx on SDO_8307_2D_Points(shape) indextype is mdsys.spatial_index PARAMETERS ('sdo_indx_dims=2' ); Any advice or insights would be greatly appreciated. Thank you.
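
One possible workaround, offered as an assumption rather than something verified against this exact setup: since the reader chokes on the raw SDO_GEOMETRY column, have Oracle project the geometry into plain scalar values instead, for example a WKT string via SDO_UTIL.TO_WKTGEOMETRY. A sketch using the same Oracle.DataAccess client as the question:

using System;
using System.Data;
using Oracle.DataAccess.Client;

class SpatialQuerySketch
{
    static void Main()
    {
        string oradb = string.Format("Data Source={0};User Id={1};Password={2};", "hostname/servicename", "login", "password");

        using (OracleConnection conn = new OracleConnection(oradb))
        {
            conn.Open();

            OracleCommand cmd = conn.CreateCommand();
            // Return the geometry as WKT text so the reader only sees column types it supports.
            cmd.CommandText = "select ObjectID, TestID, SDO_UTIL.TO_WKTGEOMETRY(shape) as SHAPE_WKT from SDO_8307_2D_POINTS";
            cmd.CommandType = CommandType.Text;

            using (OracleDataReader dr = cmd.ExecuteReader())
            {
                while (dr.Read())
                {
                    // SHAPE_WKT comes back as text, e.g. "POINT (10.0 10.0)"
                    Console.WriteLine("{0}: {1}", dr.GetValue(0), dr.GetString(2));
                }
            }
        }
    }
}

You lose the structured geometry object this way, so treat it as a stopgap; the fuller solution would be ODP.NET's user-defined type (UDT) support with a custom type mapping for SDO_GEOMETRY, which requires considerably more setup.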

    Read the article

  • How can I dynamically change auto complete entries in a C# combobox or textbox?

    - by Sam Hopkins
    I have a combobox in C# and I want to use auto complete suggestions with it, however I want to be able to change the auto complete entries as the user types, because the possible valid entries are far too numerous to populate the AutoCompleteStringCollection at startup. As an example, suppose I'm letting the user type in a name. I have a list of possible first names ("Joe", "John") and a list of surnames ("Bloggs", "Smith"), but if I have a thousand of each, then that would be a million possible strings - too many to put in the auto complete entries. So initially I want to have just the first names as suggestions ("Joe", "John") , and then once the user has typed the first name, ("Joe"), I want to remove the existing auto complete entries and replace them with a new set consisting of the chosen first name followed by the possible surnames ("Joe Bloggs", "Joe Smith"). In order to do this, I tried the following code: void InitializeComboBox() { ComboName.AutoCompleteMode = AutoCompleteMode.SuggestAppend; ComboName.AutoCompleteSource = AutoCompleteSource.CustomSource; ComboName.AutoCompleteCustomSource = new AutoCompleteStringCollection(); ComboName.TextChanged += new EventHandler( ComboName_TextChanged ); } void ComboName_TextChanged( object sender, EventArgs e ) { string text = this.ComboName.Text; string[] suggestions = GetNameSuggestions( text ); this.ComboQuery.AutoCompleteCustomSource.Clear(); this.ComboQuery.AutoCompleteCustomSource.AddRange( suggestions ); } However, this does not work properly. It seems that the call to Clear() causes the auto complete mechanism to "turn off" until the next character appears in the combo box, but of course when the next character appears the above code calls Clear() again, so the user never actually sees the auto complete functionality. It also causes the entire contents of the combo box to become selected, so between every keypress you have to deselect the existing text, which makes it unusable. If I remove the call to Clear() then the auto complete works, but it seems that then the AddRange() call has no effect, because the new suggestions that I add do not appear in the auto complete dropdown. I have been searching for a solution to this, and seen various things suggested, but I cannot get any of them to work - either the auto complete functionality appears disabled, or new strings do not appear. Here is a list of things I have tried: Calling BeginUpdate() before changing the strings and EndUpdate() afterwards. Calling Remove() on all the existing strings instead of Clear(). Clearing the text from the combobox while I update the strings, and adding it back afterwards. Setting the AutoCompleteMode to "None" while I change the strings, and setting it back to "SuggestAppend" afterwards. Hooking the TextUpdate or KeyPress event instead of TextChanged. Replacing the existing AutoCompleteCustomSource with a new AutoCompleteStringCollection each time. None of these helped, even in various combinations. Spence suggested that I try overriding the ComboBox function that gets the list of strings to use in auto complete. Using a reflector I found a couple of methods in the ComboBox class that look promising - GetStringsForAutoComplete() and SetAutoComplete(), but they are both private so I can't access them from a derived class. I couldn't take that any further. I tried replacing the ComboBox with a TextBox, because the auto complete interface is the same, and I found that the behaviour is slightly different. 
With the TextBox it appears to work better, in that the Append part of the auto complete works properly, but the Suggest part doesn't - the suggestion box briefly flashes to life but then immediately disappears. So I thought "Okay, I'll
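
For what it's worth, one more idea, untested against this exact scenario and therefore only an assumption: instead of mutating the collection from inside TextChanged, build a fresh AutoCompleteStringCollection and swap it in after the current event has unwound, using BeginInvoke, so the autocomplete machinery is not torn down mid-keystroke. A sketch:

using System;
using System.Windows.Forms;

public static class AutoCompleteHelper
{
    // Replaces the combo box's custom source once the current UI event has finished,
    // rather than clearing and refilling it inside the TextChanged handler.
    public static void ReplaceSuggestions(ComboBox combo, string[] suggestions)
    {
        combo.BeginInvoke(new Action(() =>
        {
            var source = new AutoCompleteStringCollection();
            source.AddRange(suggestions);
            combo.AutoCompleteCustomSource = source;
        }));
    }
}

The TextChanged handler above would then just call AutoCompleteHelper.ReplaceSuggestions(ComboName, GetNameSuggestions(text)) and return.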

    Read the article

  • Android - Custom Adapter Problem

    - by Ryan
    Hello, I seem to be having a problem with my Custom Adapter view. When I display the list, it only displays a white screen. Here is how it works: 1.) I send a JSON request 2.) populate the ArrayList with the returned results 3.) create a custom adapter 4.) then bind the adapter. Here is steps 2-4 private void updateUI() { ListView myList = (ListView) findViewById(android.R.id.list); itemList = new ArrayList(); Iterator it = data.entrySet().iterator(); while (it.hasNext()) { //Get the key name and value for it Map.Entry pair = (Map.Entry)it.next(); String keyName = (String) pair.getKey(); String value = pair.getValue().toString(); if (value != null) { ListItem li = new ListItem(keyName, value, false); itemList.add(li); } } CustomAdapter mAdapter = new CustomAdapter( mContext, itemList); myList.setAdapter(mAdapter); //Bind the adapter to the list //Tell the dialog it's cool now. dismissDialog(0); //Show next screen flipper.setInAnimation(inFromRightAnimation()); flipper.setOutAnimation(outToRightAnimation()); flipper.showNext(); } And here is my CustomAdapter class: import java.util.List; import android.R.color; import android.content.Context; import android.view.View; import android.view.ViewGroup; import android.widget.BaseAdapter; import android.widget.ImageView; import android.widget.RelativeLayout; import android.widget.TextView; class MyAdapterView extends RelativeLayout { public MyAdapterView(Context c, ListItem li) { super( c ); RelativeLayout rL = new RelativeLayout(c); RelativeLayout.LayoutParams containerParams = new RelativeLayout.LayoutParams( ViewGroup.LayoutParams.FILL_PARENT, ViewGroup.LayoutParams.FILL_PARENT); rL.setLayoutParams(containerParams); rL.setBackgroundColor(color.white); ImageView img = new ImageView (c); img.setImageResource(li.getImage()); img.setPadding(5, 5, 10, 5); rL.addView(img, 48, 48); TextView top = new TextView(c); top.setText(li.getTopText()); top.setTextColor(color.black); top.setTextSize(20); top.setPadding(0, 20, 0, 0); rL.addView(top,ViewGroup.LayoutParams.FILL_PARENT, ViewGroup.LayoutParams.WRAP_CONTENT); TextView bot = new TextView( c ); bot.setText(li.getBottomText()); bot.setTextColor(color.black); bot.setTextSize(12); bot.setPadding(0, 0, 0, 10); bot.setAutoLinkMask(1); rL.addView(bot,ViewGroup.LayoutParams.FILL_PARENT, ViewGroup.LayoutParams.WRAP_CONTENT); } } public class CustomAdapter extends BaseAdapter { private Context context; private List itemList; public CustomAdapter(Context c, List itemL ) { this.context = c; this.itemList = itemL; } public int getCount() { return itemList.size(); } public Object getItem(int position) { return itemList.get(position); } public long getItemId(int position) { return position; } @Override public View getView(int position, View convertView, ViewGroup parent) { ListItem li = itemList.get(position); return new MyAdapterView(this.context, li); } } Does anyone have any idea why this displays a white screen upon completion?? Thanks in advance!

    Read the article

  • ASP.NET MVC 2 - Saving child entities on form submit

    - by Justin
    Hey, I'm using ASP.NET MVC 2 and am struggling with saving child entities. I have an existing Invoice entity (which I create on a separate form) and then I have a LogHours view that I'd like to use to save InvoiceLog's, which are child entities of Invoice. Here's the view: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<TothSolutions.Data.Invoice>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Log Hours </asp:Content> <asp:Content ID="Content3" ContentPlaceHolderID="HeadContent" runat="server"> <script type="text/javascript"> $(document).ready(function () { $("#InvoiceLogs_0__Description").focus(); }); </script> </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2>Log Hours</h2> <% using (Html.BeginForm("SaveHours", "Invoices")) {%> <%: Html.ValidationSummary(true) %> <fieldset> <legend>Fields</legend> <table> <tr> <th>Date</th> <th>Description</th> <th>Hours</th> </tr> <% int index = 0; foreach (var log in Model.InvoiceLogs) { %> <tr> <td><%: log.LogDate.ToShortDateString() %></td> <td><%: Html.TextBox("InvoiceLogs[" + index + "].Description")%></td> <td><%: Html.TextBox("InvoiceLogs[" + index + "].Hours")%></td> <td>Hours</td> </tr> <% index++; } %> </table> <p> <%: Html.Hidden("InvoiceID") %> <%: Html.Hidden("CreateDate") %> <input type="submit" value="Save" /> </p> </fieldset> <% } %> <div> <%: Html.ActionLink("Back to List", "Index") %> </div> </asp:Content> And here's the controller code: //GET: /Secure/Invoices/LogHours/ public ActionResult LogHours(int id) { var invoice = DataContext.InvoiceData.Get(id); if (invoice == null) { throw new Exception("Invoice not found with id: " + id); } return View(invoice); } //POST: /Secure/Invoices/SaveHours/ [AcceptVerbs(HttpVerbs.Post)] public ActionResult SaveHours([Bind(Exclude = "InvoiceLogs")]Invoice invoice) { TryUpdateModel(invoice.InvoiceLogs, "InvoiceLogs"); invoice.UpdateDate = DateTime.Now; invoice.DeveloperID = DeveloperID; //attaching existing invoice. DataContext.InvoiceData.Attach(invoice); //save changes. DataContext.SaveChanges(); //redirect to invoice list. return RedirectToAction("Index"); } And the data access code: public static void Attach(Invoice invoice) { var i = new Invoice { InvoiceID = invoice.InvoiceID }; db.Invoices.Attach(i); db.Invoices.ApplyCurrentValues(invoice); } In the SaveHours action, it properly sets the values of the InvoiceLog entities after I call TryUpdateModel but when it does SaveChanges it doesn't update the database with the new values. Also, if you manually update the values of the InvoiceLog entries in the database and then go to this page it doesn't populate the textboxes so it's clearly not binding correctly. Thanks, Justin
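
One thing that stands out: the Attach helper shown above only attaches a stub for the parent and applies the Invoice's values, so the modified InvoiceLog children are never attached or applied, which would explain why SaveChanges writes nothing for them. A hedged sketch of extending the same stub-and-ApplyCurrentValues pattern to the children; the entity set name (db.InvoiceLogs) and the key property (InvoiceLogID) are guesses about a model I cannot see:

public static void Attach(Invoice invoice)
{
    // Attach a stub for the parent and copy its scalar values across (as before).
    var stub = new Invoice { InvoiceID = invoice.InvoiceID };
    db.Invoices.Attach(stub);
    db.Invoices.ApplyCurrentValues(invoice);

    // Do the same for each child log; otherwise their edited values are never tracked.
    foreach (var log in invoice.InvoiceLogs)
    {
        var logStub = new InvoiceLog { InvoiceLogID = log.InvoiceLogID };
        db.InvoiceLogs.Attach(logStub);
        db.InvoiceLogs.ApplyCurrentValues(log);
    }
}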

    Read the article

  • FMOD.net streaming, callback and exinfo parameters

    - by Tesserex
    I posted a question on gamedev about how to play nsf files (NES console music) in FMOD. It didn't get any results, but since then I made some progress. I decided that the easiest method was just to compile an existing player into a dll and then call it from C# to populate my buffer. The problem now is getting it to sound right, and making sure all my paremeters are correct. Here are the facts so far: The nsf dll is dealing with shorts, so the data is PCM16. The sample nsf I'm using has a playback rate of 60 Hz. Just for playing around now, I'm using a frequency of 48000. Based on 2 and 3, the dll calculates a necessary buffer size of 48000 / 60hz = 800. This means it will render 800 shorts worth of buffer for every simulated NES frame. I've so far got my C# code to play the nsf, at the correct pitch and tempo, but it's very grainy / fuzzy, which I'm attributing to the fact that the FMOD read callback is giving a data length of 1600, whereas I should be expecting 800. I've tried playing around with all the numbers and it either crashes, or the music changes pitch, tempo, or both. Here's some of my C# code: uint channels = 1, frequency = 48000; FMOD.MODE mode = (FMOD.MODE.DEFAULT | FMOD.MODE.OPENUSER | FMOD.MODE.LOOP_NORMAL); FMOD.Sound sound = new FMOD.Sound(); FMOD.CREATESOUNDEXINFO ex = new FMOD.CREATESOUNDEXINFO(); ex.cbsize = Marshal.SizeOf(ex); ex.fileoffset = 0; ex.format = FMOD.SOUND_FORMAT.PCM16; // does this even matter? It doesn't change my results as long as it's long enough for one update ex.length = frequency; ex.numchannels = (int)channels; ex.defaultfrequency = (int)frequency; ex.pcmreadcallback = pcmreadcallback; ex.dlsname = null; // eventually I will calculate this with frequency / nsf hz, but I'm just testing for now ex.decodebuffersize = 800; // from the dll load_nsf_file("file.nsf", 8, (int)frequency); // 8 is the track number to play var result = system.createSound( (string)null, (mode | FMOD.MODE.CREATESTREAM), ref ex, ref sound); channel = new FMOD.Channel(); result = system.playSound(FMOD.CHANNELINDEX.FREE, sound, false, ref channel); private FMOD.RESULT PCMREADCALLBACK(IntPtr soundraw, IntPtr data, uint datalen) { // from the dll process_buffer(data, (int)800); // if I use datalen, it usually crashes (I can't get datalen to = 800 safely) return FMOD.RESULT.OK; } So here are some of my questions: What is the relationship between exinfo.decodebuffersize, frequency, and the datalen parameter of the read callback? With this code sample, it's coming in as 3200. I don't know where that factor of 4 between it and the decodebuffersize comes from. Is datalen in the callback referring to number of bytes, or shorts? The process_buffer function takes a short array and its length. I would expect fmod is talking about shorts as well because I told it PCM16. Maybe my playback quality is bad for some totally different reason. If so I have no idea where to begin solving that. Any ideas there?
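
One thing worth checking, given the 800-versus-1600 confusion above: FMOD's PCM read callback reports datalen in bytes, so for 16-bit mono data the number of shorts to render is datalen / 2 rather than a hard-coded 800. A sketch of the callback under that assumption; the DLL name and the P/Invoke declaration for process_buffer are hypothetical stand-ins for the poster's own NSF player export:

using System;
using System.Runtime.InteropServices;

class NsfCallbacks
{
    // Hypothetical P/Invoke signature for the compiled NSF player DLL described in the post.
    [DllImport("nsfplayer.dll")]
    static extern void process_buffer(IntPtr buffer, int sampleCount);

    public static FMOD.RESULT PcmReadCallback(IntPtr soundraw, IntPtr data, uint datalen)
    {
        // datalen is a byte count; PCM16 mono means two bytes per sample,
        // so hand the DLL the number of shorts FMOD is actually asking for.
        int samples = (int)(datalen / 2);
        process_buffer(data, samples);
        return FMOD.RESULT.OK;
    }
}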

    Read the article

  • Autocomplete jQuery on User Controller within Repeater .NET

    - by TheDPQ
    I have a Multiview search feature on a Web User Controller that is called within a Repeater, OHMY!! I have some training sessions being listed out on a page, each calling an employeeSearch Web User Controller so people can search for employees to add to the training session. I have the Employee Names and Employee IDs listed out in JS on the page and using the jQuery autocomplete i have them search for the employee and populate a hidden field in the User controller. Once the process is done they have the option of adding yet another employee. So i had Autocompelte 'work' in all the employee search boxes, but one i do the initial search (postback) autocomplete won't work again. Then i updated $().ready(function() to pageLoad() so it works correctly on multiple searches but only in the LAST item of the repeater (jQuery is loaded on the User Controller) FYI: I have the JS string set as EMPLOYEENAME|ID and jQuery displays the Employee Name and if they select it throws the ID in a ASP:HIDDEN FIELD <script type="text/javascript"> format_item = function(item, position, length) { var str = item.toString().split("|", 2); return str[0]; } function pageLoad() { $("#<%=tb_EmployeeName.ClientID %>").autocomplete(EmployeeList, { minChars: 0, width: 500, matchContains: true, autoFill: false, scrollHeight: 300, scroll: true, formatItem: format_item, formatMatch: format_item, formatResult: format_item }); $("#<%=tb_EmployeeName.ClientID %>").result(function(event, data, formatted) { var str = data.toString().split("|", 2); $("#<%=hf_EmployeeID.ClientID %>").val(str[1]); }); }; </script> I can already guess that by repeating pageLoad within the User Controll i override the previous pageLoad. THE QUESTION: Is there a way around this, a way to have all the jQuery appear in a single pageLoad or to somehow have a single jquery call to handle all my search boxes? I can't move the jQuery into the page calling all the controllers because i have no way of referencing the specific *tb_EmployeeName* textbox AND *hf_EmployeeID* hidden field. Thank you so much for any help or insight you can give me into this problem. This is the Multiview that on the User Controller <asp:MultiView ID="mv_EmployeeArea" runat="server" ActiveViewIndex="0"> <asp:View ID="vw_Search" runat="server"> <asp:Panel ID="eSearch" runat="server"> <b>Signup Employee Search</b> (<i>Last Name, First Name</i>)<br /> <asp:TextBox ID="tb_EmployeeName" class="EmployeeSearch" runat="server"></asp:TextBox> <asp:HiddenField ID="hf_EmployeeID" runat="server" /> <asp:Button ID="btn_Search" runat="server" Text="Search" /> </asp:Panel> </asp:View> <asp:View ID="vw_Confirm" runat="server"> <b>Signup Confirmation</b> <asp:FormView ID="fv_EmployeeInfo" runat="server"> <ItemTemplate> <%#(Eval("LastName"))%>, <%#(Eval("FirstName"))%><br /> </ItemTemplate> </asp:FormView> <asp:Button ID="btn_Confirm" runat="server" Text="Signup this employee" /> &nbsp; <asp:Button ID="btn_Reset3" runat="server" Text="Reset" /> </asp:View> <asp:View ID="vw_ThankYou" runat="server"> <b>Thank You</b><br /> The employee has been signed up and an email confirmation has been sent out.<br /><br /> <asp:Button ID="btn_Reset" runat="server" Text="Reset" /> </asp:View> </asp:MultiView> UPDATE: I never did find an answer but i had to do a demo so i hacked together something that 'works', but feels sort of cheesy. I am still very much needed of a better question or better understanding.

    Read the article

  • EXC_BAD_ACCESS in CFAttributedStringSetAttribute and NSNumber?

    - by RichardR
    Hi all, I am getting an infuriating EXC_BAD_ACCESS error in an objective c app I am working on. Any help you could offer would be much appreciated. I have tried the normal debug methods for this error (turning on NSZombieEnabled, checking retain/release/autorelease to make sure I'm not trying to access a deallocated object, etc.) and it hasn't seemed to help. Basically, the error always occurs in this function: ` void op_TJ(CGPDFScannerRef scanner, void *info) { PDFPage *self = info; CGPDFArrayRef array; NSMutableString *tempString = [NSMutableString stringWithCapacity:1]; NSMutableArray *kernArray = [[NSMutableArray alloc] initWithCapacity:1]; if(!CGPDFScannerPopArray(scanner, &array)) { [kernArray release]; return; } for(size_t n = 0; n < CGPDFArrayGetCount(array); n += 2) { if(n >= CGPDFArrayGetCount(array)) continue; CGPDFStringRef pdfString; // if we get a PDF string if (CGPDFArrayGetString(array, n, &pdfString)) { //get the actual string const unsigned char *charstring = CGPDFStringGetBytePtr(pdfString); //add this string to our temp string [tempString appendString:[NSString stringWithCString:(const char*)charstring encoding:[self pageEncoding]]]; //NSLog(@"string: %@", tempString); //get the space after this string CGPDFReal r = 0; if (n+1 < CGPDFArrayGetCount(array)) { CGPDFArrayGetNumber(array, n+1, &r); // multiply by the font size CGFloat k = r; k = -k/1000 * self.tmatrix.a * self.fontSize; CGFloat kKern = self.kern * self.tmatrix.a; k = k + kKern; // add the location and kern to the array NSNumber *tempKern = [NSNumber numberWithFloat:k]; NSLog(@"tempKern address: %p", tempKern); [kernArray addObject:[NSArray arrayWithObjects:[NSNumber numberWithInt:[tempString length] - 1], tempKern, nil]]; } } } // create an attribute string CFMutableAttributedStringRef attString = CFAttributedStringCreateMutable(kCFAllocatorDefault, 10); CFAttributedStringReplaceString(attString, CFRangeMake(0, 0), (CFStringRef)tempString); //apply overall kerning NSNumber *tkern = [NSNumber numberWithFloat:self.kern * self.tmatrix.a * self.fontSize]; CFAttributedStringSetAttribute(attString, CFRangeMake(0, CFAttributedStringGetLength(attString)), kCTKernAttributeName, (CFNumberRef)tkern); //apply individual kern attributes for (NSArray *kernLoc in kernArray) { NSLog(@"kern location: %i, %i", [[kernLoc objectAtIndex:0] intValue],[[kernLoc objectAtIndex:1] floatValue]); CFAttributedStringSetAttribute(attString, CFRangeMake([[kernLoc objectAtIndex:0] intValue], 1), kCTKernAttributeName, (CFNumberRef)[kernLoc objectAtIndex:1]); } CFAttributedStringReplaceAttributedString([self cfAttString], CFRangeMake(CFAttributedStringGetLength([self cfAttString]), 0), attString); //release CFRelease(attString); [kernArray release]; } ` The program always crashes because of line CFAttributedStringSetAttribute(attString, CFRangeMake([[kernLoc objectAtIndex:0] intValue], 1), kCTKernAttributeName, (CFNumberRef)[kernLoc objectAtIndex:1]) And it seems to depend on a few things: if [kernLoc objectAtIndex:1] refers to an [NSNumber numberWithFloat:k] where k = 0 (in other words, if k = 0 above where I populate kernArray) then the program crashes almost immediately If I comment out the line k = k + kKern, it takes longer for the program to crash, but does eventually (why would the crash depend on this value?) If I change the length of CFRangeMake from 1 to 0, it takes a lot longer for the program to crash, but still eventually does. (I don't think I am trying to access beyond the bounds of attString, but am I missing something?) 
When it crashes, I get something similar to: #0 0x942c7ed7 in objc_msgSend () #1 0x00000013 in ?? () #2 0x0285b827 in CFAttributedStringSetAttribute () #3 0x0000568f in op_TJ (scanner=0x472a590, info=0x4a32320) at /Users/Richard/Desktop/AppTest/PDFHighlight 2/PDFScannerOperators.m:251 Any ideas? It seems like somewhere along the way I am overwriting memory or trying to access memory that has been changed, but I have no idea. If there's anymore information I can provide, please let me know. Thanks, Richard

    Read the article

  • UI android question/problem with Listview

    - by user309554
Hi, I'm trying to recreate the UI screen called 'My Places' that is used in the Weather Channel app. I'd attach a screenshot of the screen, but I can't seem to do it here. It seems they're using two listviews one on top of the other, but I'm not sure for certain. Could anybody confirm this for me? If they are doing this, how is this done? I've tried to implement this, but without full success. My top listview 'Add a place' comes up correctly, but the bottom listview will not appear/populate for me? I shall attach my code so far...... Any help would be greatly appreciated. Thanks Simon header_row.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="wrap_content"> <ImageView android:id="@+id/icon" android:layout_width="wrap_content" android:layout_height="fill_parent" android:layout_marginRight="6dip" android:src="@drawable/ic_menu_add" /> <LinearLayout android:orientation="vertical" android:layout_width="0dip" android:layout_weight="1" android:layout_height="fill_parent"> <TextView android:id="@+id/caption" android:layout_width="fill_parent" android:layout_height="0dip" android:layout_weight="1" android:gravity="center_vertical" android:text="Add a place"/> </LinearLayout> </LinearLayout> main.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="?android:attr/listPreferredItemHeight" android:padding="6dip"> <ListView android:id="@+id/header" android:layout_width="wrap_content" android:layout_height="wrap_content"/> <LinearLayout android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="wrap_content"> <ListView android:id="@+id/list" android:layout_width="fill_parent" android:layout_height="wrap_content"/> </LinearLayout> </LinearLayout> public class ListViewTest extends Activity { private static String[] items={"lorem", "ipsum", "dolor", "sit", "amet", "consectetuer", "adipiscing", "elit", "morbi", "vel", "ligula", "vitae", "arcu", "aliquet", "mollis", "etiam", "vel", "erat", "placerat", "ante", "porttitor", "sodales", "pellentesque", "augue", "purus"}; private ListView Header; private ListView List; private ArrayList<Caption> caption = null; private CaptionAdapter adapter; private ArrayAdapter<String> listAdapter; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); caption = new ArrayList<Caption>(); Caption cap = new Caption(); cap.setCaption("Add a place"); caption.add(cap); this.adapter = new CaptionAdapter(this, R.layout.header_row, caption); Header = (ListView) findViewById(R.id.header); Header.setAdapter(adapter); //Log.d("ListViewTest", "caption size is:" + caption.size()); adapter.notifyDataSetChanged(); List = (ListView) findViewById(R.id.list); listAdapter = new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, items); List.setAdapter(listAdapter); listAdapter.notifyDataSetChanged(); //setListAdapter(new ArrayAdapter<String>(this, //android.R.layout.simple_list_item_1, //items)); } private class CaptionAdapter extends ArrayAdapter<Caption> { private ArrayList<Caption> caption; public CaptionAdapter(Context context, int textViewResourceId, ArrayList<Caption> caption) { super(context, textViewResourceId, caption); this.caption = caption; } @Override public View getView(int position, View convertView, ViewGroup parent) { View v = convertView; if (v == null) { LayoutInflater vi = (LayoutInflater)getSystemService(Context.LAYOUT_INFLATER_SERVICE); v = vi.inflate(R.layout.header_row, null); } Caption c = caption.get(position); if (c != null) { TextView caption = (TextView) v.findViewById(R.id.caption); if (caption != null) { caption.setText(c.getCaption()); } } return v; } } }

    Read the article

  • TableView Cells Use Whole Screen Height

    - by Kyle
    I read through this tutorial Appcelerator: Using JSON to Build a Twitter Client and attempted to create my own simple application to interact with a Jetty server I setup running some Spring code. I basically call a get http request that gives me a bunch of contacts in JSON format. I then populate several rows with my JSON data and try to build a TableView. All of that works, however, my tableView rows take up the whole screen. Each row is one screen. I can scroll up and down and see all my data, but I'm trying to figure out what's wrong in my styling that's making the cells use the whole screen. My CSS is not great, so any help is appreciated. Thanks! Here's my js file that's loading the tableView: // create variable "win" to refer to current window var win = Titanium.UI.currentWindow; // Function loadContacts() function loadContacts() { // empty array "rowData" for table view cells var rowData = []; // create http client var loader = Titanium.Network.createHTTPClient(); // set http request method and url loader.setRequestHeader("Accept", "application/json"); loader.open("GET", "http://localhost:8080/contactsample/contacts"); // run the function when the data is ready for us to process loader.onload = function(){ Ti.API.debug("JSON Data: " + this.responseText); // evaluate json var contacts = JSON.parse(this.responseText); for(var i=0; i < contacts.length; i++) { var id = contacts[i].id; Ti.API.info("JSON Data, Row[" + i + "], ID: " + contacts[i].id); var name = contacts[i].name; Ti.API.info("JSON Data, Row[" + i + "], Name: " + contacts[i].name); var phone = contacts[i].phone; Ti.API.info("JSON Data, Row[" + i + "], Phone: " + contacts[i].phone); var address = contacts[i].address; Ti.API.info("JSON Data, Row[" + i + "], Address: " + contacts[i].address); // create row var row = Titanium.UI.createTableViewRow({ height:'auto' }); // create row's view var contactView = Titanium.UI.createView({ height:'auto', layout:'vertical', top:5, right:5, bottom:5, left:5 }); var nameLbl = Titanium.UI.createLabel({ text:name, left:5, height:24, width:236, textAlign:'left', color:'#444444', font:{ fontFamily:'Trebuchet MS', fontSize:16, fontWeight:'bold' } }); var phoneLbl = Titanium.UI.createLabel({ text: phone, top:0, bottom:2, height:'auto', width:236, textAlign:'right', font:{ fontSize:14} }); var addressLbl = Titanium.UI.createLabel({ text: address, top:0, bottom:2, height:'auto', width:236, textAlign:'right', font:{ fontSize:14} }); contactView.add(nameLbl); contactView.add(phoneLbl); contactView.add(addressLbl); row.add(contactView); row.className = "item" + i; rowData.push(row); } Ti.API.info("RowData: " + rowData); // create table view var tableView = Titanium.UI.createTableView( { data: rowData }); win.add(tableView); }; // send request loader.send(); } // get contacts loadContacts(); And here are some screens showing my problem. I tried playing with the top, bottom, right, left pixels a bit and didn't seem to be getting anywhere. All help is greatly appreciated. Thanks!

    Read the article

  • A little bit of Ajax goes a long way

    - by Holland
    ..except when you're having problems. My problem is this: I have a hierarchical list of categories stored in a database which I wish to output in a dropdown list. The hierarchy comes into place when the subcategories are to be displayed, which are dependent on a parent id (which equals out to the first seven or so main categories listed). My thoughts are relatively simple: when the user clicks the dynamically allocated list of main categories, they are clicking on an option tag. For each option tag, an id (i.e., the parent) is listed in the value attribute, as well as an argument which is sent to a Javascript function which then uses AJAX to get the data via PHP and sends it to my 'javascript.php' file. The file then does magic, and populates the subcategory list, which is dependent on the main category selected. I believe I have the idea down, it's just that I'm implementing the solution improperly, for some reason. Here's what I have so far: from javascript.php <script type="text/javascript" src=<?=JPATH_BASE.DS.'includes'.DS.'jquery.js';?>> var ajax = { ajax.sendAjaxData = function(method, url, dataTitle, ajaxData) { $.ajax({ type: 'post', url: "C:/wamp/www/patention/components/com_adsmanagar/views/edit/tmpl/javascript.php", data: { 'data' : ajaxData }, success: function(result){ // we have the response alert("Your request was successful." + result); }, error: function(e){ alert('Error: ' + e); } }); } ajax.printSubCategoriesOnClick = function(parent) { alert("hello world!"); ajax.sendAjaxData('post', 'javascript.php', 'data' parent); <?php $categories = $this->formData->getCategories($_POST['data']); ?> ajax.printSubCategories(<?=$categories;?>); } ajax.printSubCategories = function(categories) { var select = document.getElementById("subcategories"); for (var i = 0; i < categories.length; i++) { var opt = document.createElement("option"); opt.text = categories['name']; opt.value = categories['id']; } } } </script> the function used to populate the form data function populateCategories($parent, FormData $formData) { $categories = $formData->getCategories($parent); echo "<pre>"; print_r($categories); echo "</pre>"; foreach($categories as $section => $row){ ?> <option value=<?=$row['id'];?> onclick="ajax.printSubCategoriesOnClick(<?=$row['id']?>)"> <? echo $row['name']; ?> </option> <?php } } The problem is that when I try to do a print_r on my $_POST variable, nothing shows up. I also receive an "undefined index" error (which refers to the key I set for the data type). I'm thinking that for some reason, the data is being sent to my form.php file, or my default.php file which includes both the form.php and javascript.php files via a function. Is there something specific that I'm missing here? Just looking up basic AJAX syntax (via jQuery) hasn't helped out, unfortunately.

    Read the article

  • Stored proc running 30% slower through Java versus running directly on database

    - by James B
    Hi All, I'm using Java 1.6, JTDS 1.2.2 (I also just tried 1.2.4, to no avail) and SQL Server 2005 to create a CallableStatement that runs a stored procedure (with no parameters). I am seeing the Java wrapper run the same stored procedure 30% slower than SQL Server Management Studio does. I've run the MS SQL profiler and there is little difference in I/O between the two processes, so I don't think it's related to query plan caching.

    The stored proc takes no arguments and returns no data. It uses a server-side cursor to calculate the values that are needed to populate a table. I can't see how calling a stored proc from Java should add a 30% overhead; surely it's just a pipe to the database that the SQL is sent down, and then the database executes it... Could the database be giving the Java app a different query plan?

    I've posted to both the MSDN forums and the SourceForge JTDS forums (topic: "stored proc slower in JTDS than direct in DB"). I was wondering if anyone has any suggestions as to why this might be happening? Thanks in advance, -James (N.B. Fear not, I will collate any answers I get in other forums together here once I find the solution.)

    Java code snippet:

        sLogger.info("Preparing call...");
        stmt = mCon.prepareCall("SP_WB200_POPULATE_TABLE_limited_rows");
        sLogger.info("Call prepared. Executing procedure...");
        stmt.executeQuery();
        sLogger.info("Procedure complete.");

    I have run the SQL profiler and found the following (both runs with DBCC DROPCLEANBUFFERS executed beforehand, and both producing the correct number of rows):

        Java app: CPU 466,514   Reads 142,478,387   Writes 284,078   Duration 983,796
        SSMS:     CPU 466,973   Reads 142,440,401   Writes 280,244   Duration 769,851

    So my conclusion is that they both execute the same reads and writes; it's just that the way they are doing it is different. What do you guys think?

    It turns out that the query plans are significantly different for the two clients (the Java client is updating an index during an insert that isn't in the faster SQL client; also, the way it executes joins is different (nested loops vs. gather streams, nested loops vs. index scans, argh!)). Quite why this is, I don't know yet (I'll re-post when I do get to the bottom of it).

    Epilogue: I couldn't get this to work properly. I tried homogenising the connection properties (arithabort, ansi_nulls, etc.) between the Java and Management Studio clients. In the end the two clients had very similar query/execution plans (but still with different actual plan_ids). I posted a summary of what I found to the MSDN SQL Server forums, as I found differing performance not just between a JDBC client and Management Studio, but also between Microsoft's own command-line client, SQLCMD. I also checked some more radical things, like network traffic, and tried wrapping the stored proc inside another stored proc, just for grins.

    I have a feeling the problem lies somewhere in the way the cursor was being executed, and that it was somehow giving rise to the Java process being suspended; but why a different client should give rise to this different locking/waiting behaviour when nothing else is running and the same execution plan is in operation is a little beyond my skills (I'm no DBA!). As a result, I have decided that 4 days is enough of anyone's time to waste on something like this, so I will grudgingly code around it (if I'm honest, the stored procedure needed re-coding to be more incremental instead of re-calculating all data each week anyway), and chalk this one down to experience.

    I'll leave the question open. Big thanks to everyone who put their hat in the ring; it was all useful, and if anyone comes up with anything further, I'd love to hear some more options. And if anyone finds this post as a result of seeing this behaviour in their own environment, then hopefully there are some pointers here that you can try yourself, and hopefully see further than we did. I'm ready for my weekend now! -James
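    On the connection-settings angle mentioned in the epilogue, here is a minimal sketch of how the SET options could be homogenised from the Java side before preparing the call. The ProcRunner wrapper and method name are purely illustrative; mCon and the procedure name come from the snippet above, and ARITHABORT is just the usual suspect, since SSMS enables it by default while driver connections often do not, which can lead to a different cached plan for the same procedure:

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class ProcRunner {
            // Align session-level SET options with the SSMS defaults, then run the proc.
            public static void runPopulateProc(Connection mCon) throws SQLException {
                Statement setOpts = mCon.createStatement();
                try {
                    // SSMS runs with ARITHABORT ON by default; many driver connections do not,
                    // and differing SET options can produce a different cached query plan.
                    setOpts.execute("SET ARITHABORT ON");
                } finally {
                    setOpts.close();
                }
                CallableStatement stmt = mCon.prepareCall("{call SP_WB200_POPULATE_TABLE_limited_rows}");
                try {
                    stmt.execute();
                } finally {
                    stmt.close();
                }
            }
        }

    Whether this actually closes the gap depends on which setting differs in a given environment; the point of the sketch is only that the options can be issued on the same connection the CallableStatement uses.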

    Read the article

  • SQLAlchemy session management in long-running process

    - by codeape
    Scenario: A .NET-based application server (Wonderware IAS/System Platform) hosts automation objects that communicate with various equipment on the factory floor. CPython is hosted inside this application server (using Python for .NET). The automation objects have scripting functionality built in (using a custom, .NET-based language), and these scripts call Python functions.

    The Python functions are part of a system to track work-in-progress on the factory floor. The purpose of the system is to track the produced widgets along the process, ensure that the widgets go through the process in the correct order, and check that certain conditions are met along the way. The widget production history and widget state are stored in a relational database; this is where SQLAlchemy plays its part.

    For example, when a widget passes a scanner, the automation software triggers the following script (written in the application server's custom scripting language):

        ' widget_id and scanner_id provided by automation object
        ' ExecFunction() takes care of calling a CPython function
        retval = ExecFunction("WidgetScanned", widget_id, scanner_id);
        ' if the python function raises an Exception, ErrorOccured will be true
        ' in this case, any errors should cause the production line to stop.
        if (retval.ErrorOccured) then
            ProductionLine.Running = False;
            InformationBoard.DisplayText = "ERROR: " + retval.Exception.Message;
            InformationBoard.SoundAlarm = True
        end if;

    The script calls the WidgetScanned Python function:

        # pywip/functions.py
        from pywip.database import session
        from pywip.model import Widget, WidgetHistoryItem
        from pywip import validation, StatusMessage
        from datetime import datetime

        def WidgetScanned(widget_id, scanner_id):
            widget = session.query(Widget).get(widget_id)
            validation.validate_widget_passed_scanner(widget, scanner_id)  # raises an exception on error
            widget.history.append(WidgetHistoryItem(timestamp=datetime.now(), action=u"SCANNED", scanner_id=scanner_id))
            widget.last_scanner = scanner_id
            widget.last_update = datetime.now()
            return StatusMessage("OK")

        # ... there are a dozen similar functions

    My question is: how do I best manage SQLAlchemy sessions in this scenario? The application server is a long-running process, typically running for months between restarts, and it is single-threaded.

    Currently, I do it the following way: I apply a decorator to the functions I make available to the application server:

        # pywip/iasfunctions.py
        from pywip import functions
        from pywip.database import session

        def ias_session_handling(func):
            def _ias_session_handling(*args, **kwargs):
                try:
                    retval = func(*args, **kwargs)
                    session.commit()
                    return retval
                except:
                    session.rollback()
                    raise
            return _ias_session_handling

        # ... actually I populate this module with decorated versions of all
        # the functions in pywip.functions dynamically
        WidgetScanned = ias_session_handling(functions.WidgetScanned)

    Question: Is the decorator above suitable for handling sessions in a long-running process? Should I call session.remove()?

    The SQLAlchemy session object is a scoped session:

        # pywip/database.py
        from sqlalchemy.orm import scoped_session, sessionmaker
        session = scoped_session(sessionmaker())

    I want to keep the session management out of the basic functions, for two reasons:

    1. There is another family of functions, sequence functions. The sequence functions call several of the basic functions, and one sequence function should equal one database transaction.
    2. I need to be able to use the library from other environments: (a) from a TurboGears web application, in which case session management is done by TurboGears; and (b) from an IPython shell, in which case commit/rollback will be explicit.

    (I am truly sorry for the long question, but I felt I needed to explain the scenario. Perhaps it's not necessary?)
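    For comparison, here is a minimal sketch (not from the original post, and not necessarily the right answer) of the same decorator with scoped-session cleanup added in a finally block, assuming the decorator only wraps the entry points the application server calls directly rather than the basic functions that sequence functions compose:

        # pywip/iasfunctions.py -- sketch only
        from pywip import functions
        from pywip.database import session

        def ias_session_handling(func):
            def _ias_session_handling(*args, **kwargs):
                try:
                    retval = func(*args, **kwargs)
                    session.commit()
                    return retval
                except:
                    session.rollback()
                    raise
                finally:
                    # scoped_session.remove() closes the current Session and discards it,
                    # so the next call starts with a fresh one instead of reusing a session
                    # (and its identity map) for months at a time.
                    session.remove()
            return _ias_session_handling

        WidgetScanned = ias_session_handling(functions.WidgetScanned)

    The trade-off is that each call starts with a fresh session, so nothing loaded in one call is reused by the next; whether that matters depends on how much the functions benefit from the identity map between calls.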

    Read the article

  • How to optimize this JSON/JQuery/Javascript function in IE7/IE8?

    - by melaos
    Hi guys, I'm using the functions below to parse this JSON data, but I find them really slow in IE7 and slightly slow in IE8. Basically, the first listbox is populated with the main product list, and a selection in the main list populates the second list.

    This is my data:

        [{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":15913,"ProductName":"Creative Xmod","ProductServiceLifeId":1},
         {"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":15913,"ProductName":"Creative Xmod","ProductServiceLifeId":1},
         {"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":18094,"ProductName":"Sound Blaster Wireless Receiver","ProductServiceLifeId":1},
         {"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":16185,"ProductName":"Xdock Wireless","ProductServiceLifeId":1},
         {"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":16186,"ProductName":"Xmod Wireless","ProductServiceLifeId":1}]

    And these are the functions I'm using:

        //Three Product Panes function
        function populateMainPane() {
            $.getJSON('/Home/ThreePaneProductData/', function(data) {
                products = data;
                alert(JSON.stringify(products));
                var prodCategory = {};
                for (i = 0; i < products.length; i++) {
                    prodCategory[products[i].ProductCategoryId] = products[i].ProductCategoryName;
                } //end for

                //take only unique product categories to be used
                var id = 0;
                for (id in prodCategory) {
                    if (prodCategory.hasOwnProperty(id)) {
                        $(".LBox1").append("<option value='" + id + "'>" + prodCategory[id] + "</option>");
                        //alert(prodCategory[id]);
                    }
                }

                var url = document.location.href;
                var parms = url.substring(url.indexOf("?") + 1).split("&");
                for (var i = 0; i < parms.length; i++) {
                    var parm = parms[i].split("=");
                    if (parm[0].toLowerCase() == "pid") {
                        $(".PanelProductReg").show();
                        var nProductIds = parm[1].split(",");
                        for (var k = 0; k < nProductIds.length; k++) {
                            var nProductId = parseInt(nProductIds[k], 10);
                            for (var j = 0; j < products.length; j++) {
                                if (nProductId == parseInt(products[j].ProductId, 10)) {
                                    addProductRow(nProductId, products[j].ProductName);
                                    j = products.length;
                                }
                            } //end for
                        }
                    }
                }
            });
        } //end function

        function populateSubCategoryPane() {
            var subCategory = {};
            for (var i = 0; i < products.length; i++) {
                if (products[i].ProductCategoryId == $('.LBox1').val())
                    subCategory[products[i].ProductSubCategoryId] = products[i].ProductSubCategoryName;
            } //end for

            //clear off the list box first
            $(".LBox2").html("");
            var id = 0;
            for (id in subCategory) {
                if (subCategory.hasOwnProperty(id)) {
                    $(".LBox2").append("<option value='" + id + "'>" + subCategory[id] + "</option>");
                    //alert(prodCategory[id]);
                }
            }
        } //end function

    Is there anything I can do to optimize this, or is this a known browser issue?
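    One way to cut down the DOM work in older IE, sketched here against populateSubCategoryPane from the question (same products array and .LBox2 markup assumed): read the selected category once, build the option markup in an array, and write it to the list box with a single call instead of appending one option at a time.

        // Sketch: one DOM write instead of one append per <option>.
        function populateSubCategoryPane() {
            var selectedCategoryId = $(".LBox1").val();   // read once, not on every iteration
            var subCategory = {};
            for (var i = 0; i < products.length; i++) {
                if (products[i].ProductCategoryId == selectedCategoryId) {
                    subCategory[products[i].ProductSubCategoryId] = products[i].ProductSubCategoryName;
                }
            }
            var options = [];
            for (var id in subCategory) {
                if (subCategory.hasOwnProperty(id)) {
                    options.push("<option value='" + id + "'>" + subCategory[id] + "</option>");
                }
            }
            // join() plus a single .html() call avoids the repeated reflows that make
            // per-option .append() slow in IE7/IE8
            $(".LBox2").html(options.join(""));
        }

    The same single-write idea applies to the category loop in populateMainPane.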

    Read the article
