Search Results

Search found 15731 results on 630 pages for 'showplan xml'.


  • Extending Blend for Visual Studio 2013

    - by Chris Skardon
    Originally posted on: http://geekswithblogs.net/cskardon/archive/2013/11/01/extending-blend-for-visual-studio-2013.aspxSo, I got a comment yesterday on my post about Extending Blend 4 and Blend for Visual Studio 2012 asking if I knew how to get it working for Blend for Visual Studio 2013.. My initial thoughts were, just change the location to get the blend dlls from Visual Studio 11.0 to 12.0 and you’re all set, so I went to do that, only to discover that the dlls I normally reference, well – they don’t exist. So… I’ve made a presumption that the actual process of using MEF etc is still the same. I was wrong. So, the route to discovery – required DotPeek and opening a few of blends dlls.. Browsing through the Blend install directory (./Microsoft Visual Studio 12.0/Blend/) I notice the .addin files: So I decide to peek into the SketchFlow dll, then promptly remember SketchFlow is quite a big thing, and hunting through there is not ideal, luckily there is another dll using an .addin file, ‘Microsoft.Expression.Importers.Host’, so we’ll go for that instead. We can see it’s still using the ‘IPackage’ formula, but where is that sucker? Well, we just press F12 on the ‘IPackage’ bit and DotPeek takes us there, with a very handy comment at the top: // Type: Microsoft.Expression.Framework.IPackage // Assembly: Microsoft.Expression.Framework, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a // MVID: E092EA54-4941-463C-BD74-283FD36478E2 // Assembly location: C:\Program Files (x86)\Microsoft Visual Studio 12.0\Blend\Microsoft.Expression.Framework.dll Now we know where the IPackage interface is defined, so let’s just try writing a control. Last time I did a separate dll for the control, this time I’m not, but it still works if you want to do it that way. Let’s build a control! STEP 1 Create a new WPF application Naming doesn’t matter any more! I have gone with ‘Hello2013’ (see what I did there?) STEP 2 Delete: App.Config App.xaml MainWindow.xaml We won’t be needing them STEP 3 Change your application to be a Class Library instead. (You might also want to delete the ‘vshost’ stuff in your output directory now, as they only exist for hosting the WPF app, and just cause clutter) STEP 4 Add a reference to the ‘Microsoft.Expression.Framework.dll’ (which you can find in ‘C:\Program Files\Microsoft Visual Studio 12.0\Blend’ – that’s Program Files (x86) if you’re on an x64 machine!). 
STEP 5 Add a User Control, I’m going with ‘Hello2013Control’, and following from last time, it’s just a TextBlock in a Grid: <UserControl x:Class="Hello2013.Hello2013Control" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="300"> <Grid> <TextBlock>Hello Blend for VS 2013</TextBlock> </Grid> </UserControl> STEP 6 Add a class to load the package – I’ve called it – yes you guessed – Hello2013Package, which will look like this: namespace Hello2013 { using Microsoft.Expression.Framework; using Microsoft.Expression.Framework.UserInterface; public class Hello2013Package : IPackage { private Hello2013Control _hello2013Control; private IWindowService _windowService; public void Load(IServices services) { _windowService = services.GetService<IWindowService>(); Initialize(); } private void Initialize() { _hello2013Control = new Hello2013Control(); if (_windowService.PaletteRegistry["HelloPanel"] == null) _windowService.RegisterPalette("HelloPanel", _hello2013Control, "Hello Window"); } public void Unload(){} } } You might note that compared to the 2012 version we’re no longer [Exporting(typeof(IPackage))]. The file you create in STEP 7 covers this for us. STEP 7 Add a new file called: ‘<PROJECT_OUTPUT_NAME>.addin’ – in reality you can call it anything and it’ll still read it in just fine, it’s just nicer if it all matches up, so I have ‘Hello2013.addin’. Content wise, we need to have: <?xml version="1.0" encoding="utf-8"?> <AddIn AssemblyFile="Hello2013.dll" /> obviously, replacing ‘Hello2013.dll’ with whatever your dll is called. STEP 8 We set the ‘addin’ file to be copied to the output directory: STEP 9 Build! STEP 10 Go to your output directory (./bin/debug) and copy the 3 files (Hello2013.dll, Hello2013.pdb, Hello2013.addin) and then paste into the ‘Addins’ folder in your Blend directory (C:\Program Files\Microsoft Visual Studio 12.0\Blend\Addins) STEP 11 Start Blend for Visual Studio 2013 STEP 12 Go to the ‘Window’ menu and select ‘Hello Window’ STEP 13 Marvel at your new control! Feel free to email me / comment with any problems!
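For STEP 8 the original post relied on a screenshot; if you prefer editing the project file directly, the equivalent is a small entry in the .csproj (a sketch only – the file name Hello2013.addin is the one used in this walkthrough, adjust it to yours):

      <!-- Hypothetical fragment of Hello2013.csproj: include the .addin file
           and copy it to the output directory on every build. -->
      <ItemGroup>
        <None Include="Hello2013.addin">
          <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
        </None>
      </ItemGroup>

The same effect can be achieved from the Properties window by setting “Copy to Output Directory” on the .addin file.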

    Read the article

  • First Foray – About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site and high time that something was posted.  I have a list of topics that I will be working through and posting.  Some I am sure will have been posted by others, but I will be sticking to the technical problems and challenges that I’ve recently faced, and the solutions that worked for me.  My motto when learning something new has always been “My kingdom for an example!”, and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.   A bit of background about me… My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis Missouri, Covenant Technology Partners, LLC and focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions.  I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.   Enough about me… On to a real problem… SSIS Connection Time outs versus Command Time outs Last week, I was working on automating the processing for a large Analysis Services cube.  I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new and dropping old partitions to/from an Analysis Services cube.  I had the package working great, tested, and ready for deployment.  It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions that are present that are no longer needed in a rolling 60 month window. My client uses Tivoli for running all their production jobs, and not SQL Agent, so I had to build a command line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation using an XML configuration file to pass in server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job.  On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager  timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure. After a number more failed attempts, and getting the Teradata DBA involved to monitor and see if we were connecting and failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds.  This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then.  At this point one of the DBA’s found a post on the Teradata forum that had the clues to the puzzle: There is a separate “CommandTimeout” custom property on the Data source object that may needed to be adjusted for longer running queries.  I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. from the context menu, I selected “Show Advanced Editor” and found the property. Sure enough, it was set to 30 seconds. 
The CommandTimeout property can also be edited in the SSIS Properties sheet. In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds.  I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn’t. The Teradata DBA’s adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute.  Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high-demand on the production server. With this change in place, the job finally completed successfully.  The lesson learned for me was two-fold: Always compare query execution times between development and production environments, and don’t assume that production will always be faster.  With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly. SSIS Connection time out settings do not affect command time outs.  Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command time outs control how long a task will wait for results to start being returned before deciding that the server is not responding. Both lessons seem pretty straight forward, and I felt pretty sheepish once I finally figured out what the issue was.  To be fair though, In the 5+ years that I have been working with SSIS, I could only recall one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.
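To make the two command-line variants concrete, they looked roughly like this (the package, configuration file and connection manager names here are invented for illustration, not the actual ones from the project):

      rem Variant 1 - pass server-specific values via an XML configuration file
      dtexec /File "ProcessCubePartitions.dtsx" /ConfigFile "UAT.dtsConfig"

      rem Variant 2 - push a property value directly into the package with /SET
      dtexec /File "ProcessCubePartitions.dtsx" /SET "\Package.Connections[Teradata].Properties[ConnectionString]";"Data Source=uat-server;..."

Note that neither option changes the CommandTimeout discussed above – that property lives on the data flow source component inside the package itself.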

    Read the article

  • Silverlight 4 Twitter Client – Part 7

    - by Max
    Download this article as a PDF Welcome back :) This week we are going to look at something more exciting and a much-required feature for any Twitter client – auto refresh, so as to show new status updates. We are going to achieve this using Silverlight 4 timers and refresh our datagrid every minute to show new updates. We will do this so that we make only minimal requests to the Twitter API, so that Twitter does not block us – there is a limit of 150 requests an hour. Let us get started now. We will also get the profile user id hyperlinked, so that whenever the user clicks on it, we take them to their Twitter page. Also, it was a pain to always run this application by pressing F5: it would open in a browser, and you would have to right-click, uninstall and install it again to see any changes. All this and yet we were not able to debug it :( Now there is a solution for this – run the Silverlight application directly out of browser and still have the debug feature. Super cool, here is how. Right-click on the Silverlight project, go to Debug, select the Out-Of-Browser application option and choose the *.Web project. Then just right-click on the SL project and set it as Startup Project. There you go – now every time you press F5, it will automatically run out of browser and still have the debug options. I got to know about this after some Binging. Now let us jump to the core straight away. 1) To get the user id hyperlinked, we need to have a DataGridTemplateColumn and within that have a HyperlinkButton. The code for this will be <data:DataGridTemplateColumn> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <HyperlinkButton Click="HyperlinkButton_Click" Content="{Binding UserName}" TargetName="_blank" ></HyperlinkButton> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> </data:DataGridTemplateColumn> 2) Now let us look at how we are getting this done by looking into the HyperlinkButton_Click event handler. There we will dynamically set the NavigateUri to the Twitter page. I tried to do this using some binding, eval-like stuff as in ASP.NET, but no luck! private void HyperlinkButton_Click(object sender, RoutedEventArgs e) { HyperlinkButton hb = (HyperlinkButton)e.OriginalSource; hb.NavigateUri = new Uri("http://twitter.com/" + hb.Content.ToString(), UriKind.Absolute); } 3) Now we need to switch on our timer right in the OnNavigatedTo event on our SL page. So we need to modify our OnNavigatedTo event to something like below: protected override void OnNavigatedTo(NavigationEventArgs e) { image1.Source = new BitmapImage(new Uri(GlobalVariable.profileImage, UriKind.Absolute)); this.Title = GlobalVariable.getUserName() + " - Home"; if (!GlobalVariable.isLoggedin()) this.NavigationService.Navigate(new Uri("/Login", UriKind.Relative)); else { currentGrid = "Timeline-Grid"; TwitterCredentialsSubmit(); myDispatcherTimer.Interval = new TimeSpan(0, 0, 0, 60, 0); myDispatcherTimer.Tick += new EventHandler(Each_Tick); myDispatcherTimer.Start(); } } I use a global string – here it is the currentGrid variable – to indicate what is bound in the datagrid, so that after every timer tick I can rebind the latest data to it. For example, I will only rebind the friends timeline again if the data grid currently holds it, and I'll only rebind the respective list statuses again if a list status is already bound to the data grid. In the above timer code, it's set to trigger the Each_Tick event handler every 1 minute (60 seconds). 
TimeSpan takes in (days, hours, minutes, seconds, milliseconds). 4) Now we need to set the list name in the currentGrid variable when a list button is clicked. So add the code line below to the list button event handler currentGrid = currentList = b.Content.ToString(); 5) Now let us see how Each_Tick event handler is implemented. public void Each_Tick(object o, EventArgs sender) { if (!currentGrid.Equals("Timeline-Grid")) getListStatuses(currentGrid); else { WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp); WebClient myService = new WebClient(); myService.AllowReadStreamBuffering = true; myService.UseDefaultCredentials = false; myService.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword()); myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted); myService.DownloadStringAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml")); } } If the data grid hold friends timeline, I just use the same bit of code we had already to bind the friends timeline to the data grid. Copy Paste. But if it is some list timeline that is bound in the datagrid, I then call the getListStatus method with the currentGrid string which will actually be holding the list name. 6) I wanted to make the hyperlinks inside the status message as hyperlinks and when the user clicks on it, we can then open that link. I tried using a convertor and using a regex to recognize a url and wrap it up with a href, but that is not gonna work in silverlight textblock :( Anyways that convertor code is in the zip file. 7) You can get the complete project files from here. 8) Please comment below for your doubts, suggestions, improvements. I will try to reply as early as possible. Thanks for all your support. Technorati Tags: Silverlight 4,Datagrid,Twitter API,Silverlight Timer

    Read the article

  • Creating the Business Card Request InfoPath Form

    - by JKenderdine
    Business Card Request Demo Files Back in January I spoke at SharePoint Saturday Virginia Beach about InfoPath forms and Web Part deployment.  Below is some of the information and details regarding the form I created for the session.  There are many blogs and Microsoft articles on how to create a basic form so I won’t repeat that information here.   This blog will just explain a few of the options I chose when creating the solutions for SPS Virginia Beach.  The above link contains the zipped package files of the two InfoPath forms(no code solution and coded solution), the list template for the Location list I used, and the PowerPoint deck.  If you plan to use these templates, you will need to update the forms to work within your own environments (change data connections, code links, etc.).  Also, you must have the SharePoint Enterprise version, with InfoPath Services configured in order to use the Web Browser enabled forms. So what are the requirements for this template? Business Card Request Form Template Design Plan: Gather user information and requirements for card Pull in as much user information as possible. Use data from the user profile web services as a data source Show and hide fields as necessary for requirements Create multiple views – one for those submitting the form and Another view for the executive assistants placing the orders. Browser based form integrated into SharePoint team site Submitted directly to form library The base form was created using the blank template.  The table and rows were added using Insert tab and selecting Custom Table.  The use of tables is a great way to make sure everything lines up.  You do have to split the tables from time to time.  If you’ve ever split cells and then tried to re-align one to find that you impacted the others, you know why.  Here is what the base form looks like in InfoPath.   Show and hide fields as necessary for requirements You will notice I also used Sections within the form.  These show or hide depending on options selected or whether or not fields are blank.  This is a great way to prevent your users from feeling overwhelmed with a large form (this one wouldn’t apply).  Although not used in this one, you can also use various views with a tab interface.  I’ll show that in another post. Gather user information and requirements for card Pull in as much user information as possible. Use data from the user profile web services as a data source Utilizing rules you can load data when the form initiates (Data tab, Form Load).  Anything you can automate is always appreciated by the user as that is data they don’t have to enter.  For example, loading their user id or other user information on load: Always keep in mind though how much data you load and the method for loading that data (through rules, code, etc.).  They have an impact on form performance.  The form will take longer to load if you bring in a ton of data from external sources.  Laura Rogers has a great blog post on using the User Information List to load user information.   If the user has logged into SharePoint, then this can be used quite effectively and without a huge performance hit.   What I have found is that using the User Profile service via code behind or the Web Service “GetUserProfileByName” (as above) can take more time to load the user data.  Just food for thought. You must add the data connection in order for the above rules to work.  
You can connect to the data connection through the Data tab, Data Connections or select Manage Data Connections link which appears under the main data source.  The data connections can be SharePoint lists or libraries, SQL data tables, XML files, etc.  Create multiple views – one for those submitting the form and Another view for the executive assistants placing the orders. You can also create multiple views for the users to enhance their experience.  Once they’ve entered the information and submitted their request for business cards, they don’t really need to see the main data input screen any more.  They just need to view what they entered. From the Page Design tab, select New View and give the view a name.  To review the existing views, click the down arrow under View: The ReviewView shows just what the user needs and nothing more: Once you have everything configured, the form should be tested within a Test SharePoint environment before final deployment to production.  This validates you don’t have any rules or code that could impact the server negatively. Submitted directly to form library   You will need to know the form library that you will be submitting to when publishing the template.  Configure the Submit data connection to connect to this library.  There is already one configured in the sample,  but it will need to be updated to your environment prior to publishing. The Design template is different from the Published template.  While both have the .XSN extension, the published template contains all the “package” information for the form.  The published form is what is loaded into Central Admin, not the design template. Browser based form integrated into SharePoint team site In Central Admin, under General Settings, select Manage Form Templates.  Upload the published form template and Activate it to a site collection. Now it is available as a content type to select in the form library.  Some documentation on publishing form templates:  Technet – Manage administrator approved form templates And that’s all our base requirements.  Hope this helps to give a good start.

    Read the article

  • WIF-less claim extraction from ACS: SWT

    - by Elton Stoneman
    WIF with SAML is solid and flexible, but unless you need the power, it can be overkill for simple claim assertion, and in the REST world WIF doesn’t have support for the latest token formats.  Simple Web Token (SWT) may not be around forever, but while it's here it's a nice easy format which you can manipulate in .NET without having to go down the WIF route. Assuming you have set up a Relying Party in ACS, specifying SWT as the token format: When ACS redirects to your login page, it will POST the SWT in the first form variable. It comes through in the BinarySecurityToken element of a RequestSecurityTokenResponse XML payload , the SWT type is specified with a TokenType of http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0 : <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">   <t:Lifetime>     <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:31:18.655Z</wsu:Created>     <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:11:18.655Z</wsu:Expires>   </t:Lifetime>   <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">     <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">       <Address>http://localhost/x.y.z</Address>     </EndpointReference>   </wsp:AppliesTo>   <t:RequestedSecurityToken>     <wsse:BinarySecurityToken wsu:Id="uuid:fc8d3332-d501-4bb0-84ba-d31aa95a1a6c" ValueType="http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> [ base64string ] </wsse:BinarySecurityToken>   </t:RequestedSecurityToken>   <t:TokenType>http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0</t:TokenType>   <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>   <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType> </t:RequestSecurityTokenResponse> Reading the SWT is as simple as base-64 decoding, then URL-decoding the element value:     var wrappedToken = XDocument.Parse(HttpContext.Current.Request.Form[1]);     var binaryToken = wrappedToken.Root.Descendants("{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}BinarySecurityToken").First();     var tokenBytes = Convert.FromBase64String(binaryToken.Value);     var token = Encoding.UTF8.GetString(tokenBytes);     var tokenType = wrappedToken.Root.Descendants("{http://schemas.xmlsoap.org/ws/2005/02/trust}TokenType").First().Value; The decoded token contains the claims as key/value pairs, along with the issuer, audience (ACS realm), expiry date and an HMAC hash, which are in query string format. 
Separate them on the ampersand, and you can write out the claim values in your logged-in page:     var decoded = HttpUtility.UrlDecode(token);     foreach (var part in decoded.Split('&'))     {         Response.Write("<pre>" + part + "</pre><br/>");     } - which will produce something like this: http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant=2012-08-31T06:57:01.855Z http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod=http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname=XYZ http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected] http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected] http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider=http://fs.svc.xyz.com/adfs/services/trust Audience=http://localhost/x.y.z ExpiresOn=1346402225 Issuer=https://x-y-z.accesscontrol.windows.net/ HMACSHA256=oDCeEDDAWEC8x+yBnTaCLnzp4L6jI0Z/xNK95PdZTts= The HMAC hash lets you validate the token to ensure it hasn’t been tampered with. You'll need the token signing key from ACS, then you can re-sign the token and compare hashes. There's a full implementation of an SWT parser and validator here: How To Request SWT Token From ACS And How To Validate It At The REST WCF Service Hosted In Windows Azure, and a cut-down claim inspector on my github code gallery: ACS Claim Inspector. Interestingly, ACS lets you have a value for your logged-in page which has no relation to the realm for authentication, so you can put this code into a generic claim inspector page, and set that to be your logged-in page for any relying party where you want to check what's being sent through. Particularly handy with ADFS, when you're modifying the claims provided, and want to quickly see the results.
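As a rough sketch of that validation step (not the full implementation from the linked article – the key parameter below is the Base64 token signing key you take from the ACS portal, and the method name is made up):

      // Validate the HMACSHA256 signature of a raw SWT token string.
      // Requires System, System.Security.Cryptography, System.Text and System.Web.
      private static bool IsSwtSignatureValid(string rawToken, string signingKeyBase64)
      {
          const string hmacPair = "&HMACSHA256=";
          int index = rawToken.LastIndexOf(hmacPair, StringComparison.Ordinal);
          if (index < 0)
              return false;

          // Everything before the HMAC pair is the signed content.
          string signedContent = rawToken.Substring(0, index);
          string receivedHmac = HttpUtility.UrlDecode(rawToken.Substring(index + hmacPair.Length));

          using (var hmac = new HMACSHA256(Convert.FromBase64String(signingKeyBase64)))
          {
              byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(signedContent));
              return Convert.ToBase64String(hash) == receivedHmac;
          }
      }

You would also check the ExpiresOn value (seconds since the Unix epoch) and the Issuer and Audience values before trusting the claims.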

    Read the article

  • jqGrid - dynamically load different drop down values for different rows depending on another column value

    - by Renso
    Goal: As we all know, the jqGrid examples in the demo and the Wiki always refer to static values for drop down boxes. This of course is a personal preference, but in a dynamic design these values should be populated from the database/XML file, etc., ideally JSON formatted. Can you do this in jqGrid? Yes, but with some custom coding, which we will briefly show below (refer to some of my other blog entries for a more detailed discussion on this topic). What you CANNOT do in jqGrid – referring here to versions up to and including 3.8.x – is load different drop down values for different rows in the jqGrid. Well, not without some trickery, which is what this discussion is about. Issue: Of course the issue is that jqGrid has been designed for high performance, and thus I have no issue with them loading a reference to a single drop down values list for every column. This way, whether you have 500 rows or one, each row only refers to a single list for that particular column. Nice! So how easy would it be to simply traverse the grid once loaded, on gridComplete or loadComplete, and simply load the select tag's options from scratch – via ajax, from a memory variable, hard coded, etc.? Impossible! Since there is no embedded SELECT tag within each cell containing the drop down values (remember, it only has a reference to that list in memory), all you will see when you inspect the cell prior to clicking on it, or even before and on beforeEditCell, is an empty <TD></TD>. Trying to load that list via a click event on that cell will temporarily load the list, but jqGrid's last internal callback event will remove it and replace it with the old one, and you are back to square one. Solution: Yes, after spending a few hours on this I found a solution to the problem that does not require any updates to the jqGrid source code, thank GOD! Before we get into the coding details: the solution here can of course be customized to suit your specific needs; this one loads the entire drop down list that would be needed across all rows once into a global variable. I then parse this object, which contains all the properties I need to filter the rows depending on which ones I want the user to see, based off of another cell value in that row. This only happens when clicking the cell, so no performance penalty. You may of course load it via ajax when the user clicks the cell, but I found it more efficient to load the entire list as part of jqGrid's normal editoptions: { multiple: false, value: listingStatus } colModel options, which again keeps only a reference to the single list, no duplication. Let's get into the meat and potatoes of it.
var acctId = $('#Id').val();         var data = $.ajax({ url: $('#ajaxGetAllMaterialsTrackingLookupDataUrl').val(), data: { accountId: acctId }, dataType: 'json', async: false, success: function(data, result) { if (!result) alert('Failure to retrieve the Alert related lookup data.'); } }).responseText;         var lookupData = eval('(' + data + ')');         var listingCategory = lookupData.ListingCategory;         var listingStatus = lookupData.ListingStatus;         var catList = '{';         $(lookupData.ListingCategory).each(function() {             catList += this.Id + ':"' + this.Name + '",';         });         catList += '}';         var lastsel;         var ignoreAlert = true;         $(item)         .jqGrid({             url: listURL,             postData: '',             datatype: "local",             colNames: ['Id', 'Name', 'Commission<br />Rep', 'Business<br />Group', 'Order<br />Date', 'Edit', 'TBD', 'Month', 'Year', 'Week', 'Product', 'Product<br />Type', 'Online/<br />Magazine', 'Materials', 'Special<br />Placement', 'Logo', 'Image', 'Text', 'Contact<br />Info', 'Everthing<br />In', 'Category', 'Status'],             colModel: [                 { name: 'Id', index: 'Id', hidden: true, hidedlg: true },                 { name: 'AccountName', index: 'AccountName', align: "left", resizable: true, search: true, width: 100 },                 { name: 'OnlineName', index: 'OnlineName', align: 'left', sortable: false, width: 80 },                 { name: 'ListingCategoryName', index: 'ListingCategoryName', width: 85, editable: true, hidden: false, edittype: "select", editoptions: { multiple: false, value: eval('(' + catList + ')') }, editrules: { required: false }, formatoptions: { disabled: false} }             ],             jsonReader: {                 root: "List",                 page: "CurrentPage",                 total: "TotalPages",                 records: "TotalRecords",                 userdata: "Errors",                 repeatitems: false,                 id: "0"             },             rowNum: $rows,             rowList: [10, 20, 50, 200, 500, 1000, 2000],             imgpath: jQueryImageRoot,             pager: $(item + 'Pager'),             shrinkToFit: true,             width: 1455,             recordtext: 'Traffic lines',             sortname: 'OrderDate',             viewrecords: true,             sortorder: "asc",             altRows: true,             cellEdit: true,             cellsubmit: "remote",             cellurl: editURL + '?rows=' + $rows + '&page=1',             loadComplete: function() {               },             gridComplete: function() {             },             loadError: function(xhr, st, err) {             },             afterEditCell: function(rowid, cellname, value, iRow, iCol) {                 var select = $(item).find('td.edit-cell select');                 $(item).find('td.edit-cell select option').each(function() {                     var option = $(this);                     var optionId = $(this).val();                     $(lookupData.ListingCategory).each(function() {                         if (this.Id == optionId) {                                                       if (this.OnlineName != $(item).getCell(rowid, 'OnlineName')) {                                 option.remove();                                 return false;                             }                         }                     });                 });             },             search: true,             searchdata: {},             caption: "List of all 
Traffic lines",             editurl: editURL + '?rows=' + $rows + '&page=1',             hiddengrid: hideGrid   Here is the JSON data returned via the ajax call during the jqGrid function call above (NOTE it must be { async: false}: {"ListingCategory":[{"Id":29,"Name":"Document Imaging & Management","OnlineName":"RF Globalnet"} ,{"Id":1,"Name":"Ancillary Department Hardware","OnlineName":"Healthcare Technology Online"} ,{"Id":2,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"} ,{"Id":3,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"} ,{"Id":4,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"} ,{"Id":5,"Name":"Document Imaging & Management","OnlineName":"Healthcare Technology Online"} ,{"Id":6,"Name":"Document Imaging & Management","OnlineName":"Healthcare Technology Online"} ,{"Id":7,"Name":"EMR/EHR Software","OnlineName":"Healthcare Technology Online"}]} I only need the Id and Name for the drop down list, but the third column in the JSON object is important, it is the only that I match up with the OnlineName in the jqGrid column, and then in the loop during afterEditCell simply remove the ones I don't want the user to see. That's it!

    Read the article

  • Cookbook: SES and UCM setup

    - by George Maggessy
    The purpose of this post is to guide you through setting up the integration between UCM and SES. In my next post I’ll show different approaches to integrating WebCenter Portal, UCM and SES based on some common scenarios. Let’s get started.
    WebCenter Content Configuration
    WebCenter Content has a component that adds functionality to the content server to allow it to be searched via Oracle SES. To enable the component installation, go to Administration -> Admin Server and select SESCrawlerExport. Click the update button and restart the UCM_server1 managed server. Once the managed server is back, we’ll configure the component. In the menu, under Administration you should see SESCrawlerExport. Click on the link. You’ll see the window below. Click on Configure SESCrawlerExport. Configure the values below:
    Hostname: SES hostname.
    Feed Location: Directory where data feeds will be saved.
    Metadata List: List of metadata that will be searchable by SES.
    After updating the values click on the Update button. Come back to the SESCrawlerExport Administration UI and click on the Take Snapshot button. It will create the data feeds in the specified Feed Location. To check if the correct configuration was done, please access the following URL: http://<ucm_server>:<port>/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONFIG&source=default. It should download a config file in the format below:
    <?xml version="1.0" encoding="UTF-8"?>
    <rsscrawler xmlns="http://xmlns.oracle.com/search/rsscrawlerconfig">
      <feedLocation><![CDATA[http://adc6160699.us.oracle.com:16200/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONTROL&source=default]]></feedLocation>
      <errorFileLocation><![CDATA[http://adc6160699.us.oracle.com:16200/cs/idcplg?IdcService=SES_CRAWLER_STATUS&IsJava=1&source=default&StatusFeed=]]></errorFileLocation>
      <feedType>controlFeed</feedType>
      <sourceName>default</sourceName>
      <securityType>attributeBased</securityType>
      <securityAttribute name="Account" grant="true"/>
      <securityAttribute name="DocSecurityGroup" grant="true"/>
      <securityAttribute name="Collab" grant="true"/>
    </rsscrawler>
    Make sure the Account and DocSecurityGroup values are true.
    SES Configuration
    Let’s start by configuring the Identity Plug-ins in SES. Go to Global Settings -> System -> Identity Management Setup. Select Oracle Content Server and click the Activate button. We’ll populate the following values:
    HTTP endpoint for authentication: URL to WebCenter Content. Notice that /cs/idcplg was added at the end of the URL.
    Admin User: UCM Admin user. This user must have access to all CPOE content.
    Password: Password of the Admin user.
    Authentication Type: NATIVE.
    Go back to the Home tab and click on Sources on the top left. Select Oracle Content Server on the right and click the Create button.
    Configuration URL: URL that points to the configuration file. Example: http://<ucm_hostname>:<port>/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONFIG&source=default.
    User ID: UCM Admin user.
    Password: Password of the Admin user.
    Click on the Authorization tab and add the appropriate values to the fields below. Make sure you see the ACCOUNT and DOCSECURITYGROUP security attributes at the end of the page.
    HTTP endpoint for authorization: http://<ucm_hostname>:<port>/cs/idcplg.
    Display URL prefix: http://<ucm_hostname>:<port>/cs.
    Administrator user: UCM Admin user.
    Administrator password.
    On the Document Types tab, add the documents that should be indexed by SES. As our last step, we’ll configure the Federation Trusted Entities under Global Settings.
    Entity Name: The user must be present in both the identity management server configured for your WebCenter application and the identity management server configured for Oracle SES. For instance, I used weblogic in my sample.
    Password: Entity user password.
    Now you are ready to test the integration on the SES UI: http://<ses hostname>:<port>/search/query/.

    Read the article

  • Namespaces are obsolete

    - by Bertrand Le Roy
    To those of us who have been around for a while, namespaces have been part of the landscape. One could even say that they have been defining the large-scale features of the landscape in question. However, something happened fairly recently that I think makes this venerable structure obsolete. Before I explain this development and why it’s a superior concept to namespaces, let me recapitulate what namespaces are and why they’ve been so good to us over the years… Namespaces are used for a few different things: Scope: a namespace delimits the portion of code where a name (for a class, sub-namespace, etc.) has the specified meaning. Namespaces are usually the highest-level scoping structures in a software package. Collision prevention: name collisions are a universal problem. Some systems, such as jQuery, wave it away, but the problem remains. Namespaces provide a reasonable approach to global uniqueness (and in some implementations such as XML, enforce it). In .NET, there are ways to relocate a namespace to avoid those rare collision cases. Hierarchy: programmers like neat little boxes, and especially boxes within boxes within boxes. For some reason. Regular human beings on the other hand, tend to think linearly, which is why the Windows explorer for example has tried in a few different ways to flatten the file system hierarchy for the user. 1 is clearly useful because we need to protect our code from bleeding effects from the rest of the application (and vice versa). A language with only global constructs may be what some of us started programming on, but it’s not desirable in any way today. 2 may not be always reasonably worth the trouble (jQuery is doing fine with its global plug-in namespace), but we still need it in many cases. One should note however that globally unique names are not the only possible implementation. In fact, they are a rather extreme solution. What we really care about is collision prevention within our application. What happens outside is irrelevant. 3 is, more than anything, an aesthetical choice. A common convention has been to encode the whole pedigree of the code into the namespace. Come to think about it, we never think we need to import “Microsoft.SqlServer.Management.Smo.Agent” and that would be very hard to remember. What we want to do is bring nHibernate into our app. And this is precisely what you’ll do with modern package managers and module loaders. I want to take the specific example of RequireJS, which is commonly used with Node. Here is how you import a module with RequireJS: var http = require("http"); This is of course importing a HTTP stack module into the code. There is no noise here. Let’s break this down. Scope (1) is provided by the one scoping mechanism in JavaScript: the closure surrounding the module’s code. Whatever scoping mechanism is provided by the language would be fine here. 
Collision prevention (2) is very elegantly handled. Whereas relocating is an afterthought, and an exceptional measure with namespaces, it is here on the frontline. You always relocate, using an extremely familiar pattern: variable assignment. We are very much used to managing our local variable names and any possible collision will get solved very easily by picking a different name. Wait a minute, I hear some of you say. This is only taking care of collisions on the client-side, on the left of that assignment. What if I have two libraries with the name “http”? Well, You can better qualify the path to the module, which is what the require parameter really is. As for hierarchical organization, you don’t really want that, do you? RequireJS’ module pattern does elegantly cover the bases that namespaces used to cover, but it also promotes additional good practices. First, it promotes usage of self-contained, single responsibility units of code through the closure-based, stricter scoping mechanism. Namespaces are somewhat more porous, as using/import statements can be used bi-directionally, which leads us to my second point… Sane dependency graphs are easier to achieve and sustain with such a structure. With namespaces, it is easy to construct dependency cycles (that’s bad, mmkay?). With this pattern, the equivalent would be to build mega-components, which are an easier problem to spot than a decay into inter-dependent namespaces, for which you need specialized tools. I really like this pattern very much, and I would like to see more environments implement it. One could argue that dependency injection has some commonalities with this for example. What do you think? This is the half-baked result of some morning shower reflections, and I’d love to read your thoughts about it. What am I missing?
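To make the other side of that import concrete, here is a minimal sketch, in the Node-flavoured module style, of defining and consuming such a module (file names invented for illustration):

      // logger.js - nothing escapes this file except what is explicitly exported
      function log(message) {
          console.log('[app] ' + message);
      }
      module.exports = { log: log };

      // consumer.js - the importer chooses the local name, so a clash with another
      // "logger" is resolved by ordinary variable naming, not by global uniqueness
      var myLogger = require('./logger');
      myLogger.log('namespaces not required');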

    Read the article

  • Application Composer Series: Where and When to use Groovy

    - by Richard Bingham
    This brief post is really intended as more of a reference than an article. The table below highlights two things, firstly where you can add you own custom logic via groovy code (end column), and secondly (middle column) when you might use each particular feature. Obviously this applies only where Application Composer exists, namely Fusion CRM and Oracle Sales Cloud, and is based on current (release 8) functionality. Feature Most Common Use Case Groovy Field Triggers React to run-time data changes. Only fired when the field is changed and upon submit. Y Object Triggers To extend the standard processing logic for an object, based on record creation, updates and deletes. There is a split between these firing events, with some related to UI/ADF actions and others originating in the database. UI Trigger Points: After Create - fires when a new object record is created. Commonly used to set default values for fields. Before Modify - Fires when the end-user tries to modify a field value. Could be used for generic warnings or extra security logic. Before Invalidate - Fires on the parent object when one of its child object records is created, updated, or deleted. For building in relationship logic. Before Remove - Fires when an attempt is made to delete an object record. Can be used to create conditions that prevent deletes. Database Trigger Points: Before Insert in Database - Fires before a new object is inserted into the database. Can be used to ensure a dependent record exists or check for duplicates. After Insert in Database - Fires after a new object is inserted into the database. Could be used to create a complementary record. Before Update in Database -Fires before an existing object is modified in the database. Could be used to check dependent record values. After Update in Database - Fires after an existing object is modified in the database. Could be used to update a complementary record. Before Delete in Database - Fires before an existing object is deleted from the database. Could be used to check dependent record values. After Delete in Database - Fires after an existing object is deleted from the database. Could be used to remove dependent records. After Commit in Database - Fires after the change pending for the current object (insert, update, delete) is made permanent in the current transaction. Could be used when committed data that has passed all validation is required. After Changes Posted to Database - Fires after all changes have been posted to the database, but before they are permanently committed. Could be used to make additional changes that will be saved as part of the current transaction. Y Field Validation Displays a user entered error message based groovy logic validating the field value. The message is shown only when the validation logic returns false, and the logic is triggered only when tabbing out of the field on the user interface. Y Object Validation Commonly used where validation is needed across multiple related fields on the object. Triggered on the submit UI action. Y Object Workflows All Object Workflows are fired upon either record creation or update, along with the option of adding a custom groovy firing condition. Y Field Updates - change another field when a specified one changes. Intended as an easy way to set different run-time values (e.g. pick values for LOV's) plus the value field permits groovy logic entry. Y E-Mail Notification - sends an email notification to specified users/roles. Templates support using run-time value tokens and rich text. 
N Task Creation - for adding standard tasks for use in the worklist functionality. N Outbound Message - will create and send an XML payload of the related object SDO to a specified endpoint. N Business Process Flow - intended for approval using the seeded process, however can also trigger custom BPMN flows. N Global Functions Utility functions that can be called from any groovy code in Application Composer (across applications). Y Object Functions Utility functions that are local to the parent object. Usually triggered from within 'Buttons and Actions' definitions in Application Composer, although can be called from other code for that object (e.g. from a trigger). Y Add Custom Fields When adding custom fields there are a few places you can include groovy logic. Y Default Value - to add logic within setting the default value when new records are entered. Y Conditionally Updateable - to add logic to set the field to read-only or not. Y Conditionally Required - to add logic to set the field to required or not. Y Formula Field - Used to provide a new aggregate field that is entirely based on groovy logic and other field values. Y Simplified UI Layouts - Advanced Expressions Used for creating dynamic layouts for simplified UI pages where fields and regions show/hide based on run-time context values and logic. Also includes support for the depends-on feature as a trigger. Y Related References This Blog: Application Composer Series Extending Sales Guide: Using Groovy Scripts Groovy Scripting Reference Guide
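As a small illustration of the kind of script that goes into one of the trigger points above (a sketch only – the field names ending in _c are invented, and the full API is covered in the Groovy Scripting Reference Guide listed in the references):

      // Object trigger sketch (e.g. 'Before Insert in Database') on a custom object:
      // default a status field and derive a display value from two other fields.
      if (Status_c == null) {
        setAttribute('Status_c', 'NEW')
      }
      def first = FirstName_c ?: ''
      def last = LastName_c ?: ''
      setAttribute('DisplayName_c', (first + ' ' + last).trim())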

    Read the article

  • Clone a VirtualBox Machine

    I just installed VirtualBox, which I want to try out based on recommendations from peers for running a server from within my Windows 7 x64 OS.  I've never used VirtualBox, so I'm certainly no expert at it, but I did want to share my experience with it thus far.  Specifically, my intention is to create a couple of virtual machines.  One I intend to use as a build server, for which a virtual machine makes sense because I can easily move it around as needed if there are hardware issues (it's worth noting my need for setting up a build server at the moment is a result of a disk failure on the old build server).  The other VM I want to set up will act as a proxy server for the issue tracking system we're using at Code Project, Axosoft OnTime.  They have a Remote Server application for this purpose, and since the OnTime install is 300 miles away from my location, the Remote Server should speed up my use of the OnTime client by limiting the chattiness with the database (at least, that's the hope). So, I need two VMs, and I'm lazy.  I don't want to have to install the OS and such twice.  No problem, it should be simple to clone a VirtualBox machine, or clone a VirtualBox hard drive, right?  Well, unfortunately, if you look at the UI for VirtualBox, there's no such command.  You're left wondering "How do I clone a VirtualBox machine?" or the slightly related "How do I clone a VirtualBox hard drive?" If you've used VirtualPC, then you know that it's actually pretty easy to copy and move around those VMs.  Not quite so easy with VirtualBox.  Finding the files is easy: they're located in your user folder within the .VirtualBox folder (possibly within a HardDisks folder).  The disks have a .vdi extension and will be pretty large if you've installed anything.  The one shown here has just Windows Server 2008 R2 installed on it – nothing else. If you copy the .vdi file and rename it, you can use the Virtual Media Manager to view it and you can create a new machine and choose the new drive to attach to.  Unfortunately, if you simply make a copy of the drive, this won't work and you'll get an error that says something to the effect of: Cannot register the hard disk PATH with UUID {id goes here} because a hard disk PATH2 with UUID {same id goes here} already exists in the media registry (PATH to XML file). There are command line tools you can use to do this in a way that avoids this error.  Specifically, the c:\Program Files\Sun\VirtualBox\VBoxManage.exe program is used for all command line access to VirtualBox, and to copy a virtual disk (.vdi file) you would call something like this: VBoxManage clonehd Disk1.vdi Disk1_Copy.vdi However, in my case this didn't work.  I got basically the same error I showed above, along with some debug information for line 628 of VBoxManageDisk.cpp.  As my main task was not to debug the C++ code used to write VirtualBox, I continued looking for a simple way to clone a virtual drive.  I found it in this blog post.
    The Secret setvdiuuid Command
    VBoxManage has a whole bunch of commands you can use with it – just pass it /? to see the list.  However, it also has a special command called internalcommands that opens up access to even more commands.  The one that's interesting for us here is the setvdiuuid command.  By calling this command and passing in the file path to your vdi file, it will reset the UUID to a new (random, apparently) UUID.  This then allows the Virtual Media Manager to cope with the file, and lets you set up new machines that reference the newly UUID'd virtual drive.
    The full command line would be: VBoxManage internalcommands setvdiuuid MyCopy.vdi The following screenshot shows the error when trying clonehd as well as the successful use of setvdiuuid.
    Summary
    Now that I can clone machines easily, it's a simple matter to set up base builds of any OS I might need, and then fork from there as needed.  Hopefully the GUI for VirtualBox will be improved to include better support for copying machines/disks, as this is I'm sure a very common scenario.
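Putting the pieces together, the whole clone-by-copy workflow is only a few commands (paths and file names are just examples – adjust them to your install and user folder):

      rem 1. Copy the existing virtual disk while the VM that owns it is shut down.
      copy "%USERPROFILE%\.VirtualBox\HardDisks\Disk1.vdi" "%USERPROFILE%\.VirtualBox\HardDisks\Disk1_Copy.vdi"

      rem 2. Give the copy a fresh UUID so the Virtual Media Manager will accept it.
      "C:\Program Files\Sun\VirtualBox\VBoxManage.exe" internalcommands setvdiuuid "%USERPROFILE%\.VirtualBox\HardDisks\Disk1_Copy.vdi"

      rem 3. In the VirtualBox UI, create a new machine and attach Disk1_Copy.vdi as its hard disk.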

    Read the article

  • Defining scope for Record Count functoid:

    - by ArunManick
    Defining scope for Record Count functoid: Problem: One of the most common scenarios in BizTalk is calculating the record count of repeating structure. BizTalk has come up with an advanced functoid called Record Count functoid which will give the record count for the repeating structure however you cannot define the scope for a Record Count functoid. Because Record Count functoid accepts exactly one parameter which can be repeating record or field element.   If somebody don’t know what “scope” means I will explain with a simple example. Consider that we have a source schema having a structure Country -> State -> City. Country will have various states and each state will have different cities. Now you want to calculate no. of cities present in each state. Here scope is defined at the parent node “State”. Traditional Record Count functoid will give the total no. of cities present in the source message and not the State level city count.   Source Schema:   Destination Schema:   Soultion #1: As the title indicates we are not going to add one more parameter to the record count functoid. Instead of that, we are going to achieve the solution with the help of Scripting functoid with Inline XSLT script. XSLT is basically the transformation language used in the mapping.     “No.OfCities” indicates the destination field name to which we are going to send the value. In count(City), “count” refers to built in XPath function used in XSLT and “City” refers to source schema record name. Here you can find the list of built-in functions available in XSLT.   The mapping will look like as follows:   The 2 Record Count functoids used in this map will give the total number of states and total number of cities as that of input message.   Soultion #2:  If someone doesn’t like XSLT code and they wish to achieve the solution using functoids alone, then here is another solution.   Use logical Existence functoid to check whether “City” exist or not Connect the output of Logical Existence functoid to the Value Mapping functoid with second parameter as constant “1”. Hence if the first parameter is TRUE it will give the output as “1”. Connect the output of Value Mapping functoid to the Cumulative Sum functoid with scope as “1”   This will calculate the City count at the state level. The mapping will look like as follows:     Let us see the sample input and the map output.   Input: <?xml version="1.0" encoding="utf-8"?> <ns0:Country xmlns:ns0="http://RecordCount.Source">   <State>     <StateName>Tamilnadu</StateName>     <City>       <CityName>Pollachi</CityName>     </City>     <City>       <CityName>Coimbatore</CityName>     </City>     <City>       <CityName>Chennai</CityName>     </City>   </State>   <State>     <StateName>Kerala</StateName>     <City>       <CityName>Palakad</CityName>     </City>   </State>   <State>     <StateName>Karnataka</StateName>     <City>       <CityName>Bangalore</CityName>     </City>     <City>       <CityName>Mangalore</CityName>     </City>   </State> </ns0:Country>     Output: <ns0:Country xmlns:ns0="http://RecordCount.Destination">           <No.OfStates>3</No.OfStates>           <No.OfCities>6</No.OfCities>           <States>                    <No.OfCities>3</No.OfCities>           </States>           <States>                    <No.OfCities>1</No.OfCities>           </States>           <States>                    <No.OfCities>2</No.OfCities>           </States> </ns0:Country>   Conclusion: This is my first post and I hope you enjoyed it.   -Arun
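For reference, the inline XSLT inside the Scripting functoid in the first solution boils down to something like this (a sketch – the element names follow the schemas shown above, with City counted relative to the State record the map is currently looping over):

      <No.OfCities>
        <xsl:value-of select="count(City)" />
      </No.OfCities>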

    Read the article

  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation LINQ (Language Integrated Query) has been part of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL databases, XML and so on. Like SQL, LINQ provides a declarative notation for problem solving – i.e. you don't describe in detail how a task should be solved, you describe what should be solved. This frees the developer from error-prone iterator constructs. Ideally, of course, you would access features the same way. Then this construct becomes conceivable: var largeFeatures = from feature in features where (feature.GetValue("SHAPE_Area").ToDouble() > 3000) select feature; or its equivalent as a lambda expression: var largeFeatures = features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000)); This requires an appropriate provider which manages the corresponding iterator logic. That is easier than you might think at first sight: you only have to deliver the desired entities as IEnumerable<IFeature>. LINQ automatically builds a state machine in the background whose execution is delayed (deferred execution): only when you actually request entities (foreach, Count(), ToList(), ...) does the processing take place, even though the query was created at a completely different place. Especially when iterating over the entities several times, during the first debugging sessions you rub your eyes when the execution pointer magically jumps back into the iterator logic. Realization A very concise way of constructing an IEnumerable<IFeature> is to run through an IFeatureCursor and return each feature via yield. For easier usage I have put the logic into an extension method GetFeatures() for IFeatureClass: public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy) { IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy); IFeature feature; while (null != (feature = featureCursor.NextFeature())) { yield return feature; } //this is skipped in unit tests with cursor-mock if (Marshal.IsComObject(featureCursor)) { Marshal.ReleaseComObject(featureCursor); } } So you can now easily generate the IEnumerable<IFeature>: IEnumerable<IFeature> features = _featureClass.GetFeatures(null, RecyclingPolicy.DoNotRecycle); You have to be careful with a recycling cursor. After a deferred execution in the same context it is not a good idea to iterate over the features again: in that case only the content of the last (recycled) feature is provided and all the features look the same in the second pass. Therefore this expression would be critical: largeFeatures.ToList(). ForEach(feature => Debug.WriteLine(feature.OID)); because ToList() iterates once through the list, so the cursor has already been moved through the features, and the extension method ForEach() then always delivers the same feature. In such situations you must not use a recycling cursor. Repeated executions of ForEach() are not a problem, because each time the state machine is re-instantiated and the cursor runs again – that is the magic already mentioned above. Perspective You can also go one step further and write your own implementation of the interface IEnumerable<IFeature>. Only the method and property that expose the enumerator have to be programmed, and in the enumerator itself the Reset() method re-executes the search.
This can be achieved with an appropriate delegate in the constructor: new FeatureEnumerator<IFeatureclass>(_featureClass, featureClass => featureClass.Search(_filter, isRecyclingCursor)); which is called in Reset(): public void Reset() { _featureCursor = _resetCursor(_t); } In this manner, enumerators for completely different scenarios can be implemented and used on the client side exactly as described above. Thus cursors, selection sets, etc. merge into a single concept and the reusability of the code increases immensely. On top of that, in automated unit tests an IEnumerable can be mocked very easily – a major step towards better software quality. Conclusion Nevertheless, caution should be exercised with these constructs in performance-critical queries. Because a state machine is managed in the background, a fair amount of overhead is created; the processing costs additional time, roughly 20 to 100 percent. In addition, working without a recycling cursor can quickly become a performance issue. However, declarative LINQ code is much more elegant, less error-prone and easier to maintain than manually iterating, comparing and building up a result list. In my experience the code size shrinks by 75 to 90 percent on average! So I am happy to wait a few milliseconds longer. As so often, maintainability has to be balanced against performance, and for me maintainability is gaining in priority. In the age of multi-core processors, the processing time of most business processes is in any case dominated not by code execution but by waiting for user input. Demo source code The source code for this prototype, including several unit tests, can be downloaded here: https://github.com/esride-apf/Linq2ArcObjects.
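    Putting the pieces together, client code based on the extension method above might look roughly like this (a sketch: GetValue()/ToDouble() are the little helper extensions assumed in the article's own query example, and null is passed as the query filter to fetch all features):

    // Non-recycling cursor, because largeFeatures is enumerated again below.
    IEnumerable<IFeature> features = featureClass.GetFeatures(null, RecyclingPolicy.DoNotRecycle);

    // Deferred execution: the cursor is not touched until the foreach runs.
    IEnumerable<IFeature> largeFeatures =
        features.Where(feature => feature.GetValue("SHAPE_Area").ToDouble() > 3000);

    foreach (IFeature feature in largeFeatures)
    {
        Debug.WriteLine(feature.OID);
    }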

    Read the article

  • Restoring OutlineView Changes

    - by Geertjan
    Spent the last afternoons working with Ruben Hinojo, who I met recently at the Tinkerforge/NetBeans integration course in Germany. He's Spanish, lives in Scotland, and joined the course by flying from Edinburgh to Amsterdam and then driving from there to the course in Germany. Since then he spent some days in Amsterdam and we've been working a bit in a cafe in Amsterdam. He's working freelance on a freight management system on the NetBeans Platform and here's a pic of him and his application: I showed him a few things to improve the initial appearance of the application, such as removing the unneeded tab in the editor position and displaying data at startup so that the main window isn't empty initially. He, in turn, told me about something I didn't know about, where "freightViewer" below is an OutlineView: void writeProperties(java.util.Properties p) {     // better to version settings since initial version as advocated at     // http://wiki.apidesign.org/wiki/PropertyFiles     p.setProperty("version", "1.0");     freightViewer.writeSettings(p, "FreightViewer"); } void readProperties(java.util.Properties p) {     String version = p.getProperty("version");     freightViewer.readSettings(p, "FreightViewer"); } The "OutlineView.read/writeSettings" enables you to save/restore changes to an OutlineView, e.g., column width, column position, and which columns are displayed/hidden. In the user dir, within the .settings file of the TopComponent (in config/Windows2Local/Components), you'll then find content like this, where the "FreightViewer" argument above is now the prefix of the name of each property element: <?xml version="1.0" encoding="UTF-8" ?> <!DOCTYPE properties PUBLIC "-//org.ruben.viewer//RubenViewer//EN" "http://www.netbeans.org/dtds/properties-1_0.dtd"> <properties>     <property name="FreightViewerOutlineViewOutlineColumn-1-shortDescription" value="Type"/>     <property name="FreightViewerETableColumn-1-HeaderValue" value="Type"/>     <property name="FreightViewerColumnsNumber" value="3"/>     <property name="FreightViewerETableColumn-0-PreferredWidth" value="75"/>     <property name="FreightViewerETableColumn-2-HeaderValue" value="Description"/>     <property name="version" value="1.0"/>     <property name="FreightViewerETableColumn-2-SortRank" value="0"/>     <property name="FreightViewerETableColumn-2-Width" value="122"/>     <property name="FreightViewerETableColumn-0-ModelIndex" value="0"/>     <property name="FreightViewerETableColumn-1-Width" value="123"/>     <property name="FreightViewerHiddenColumnsNumber" value="0"/>     <property name="FreightViewerETableColumn-0-SortRank" value="0"/>     <property name="FreightViewerETableColumn-1-ModelIndex" value="1"/>     <property name="FreightViewerETableColumn-1-PreferredWidth" value="75"/>     <property name="FreightViewerETableColumn-0-Ascending" value="true"/>     <property name="FreightViewerETableColumn-2-ModelIndex" value="2"/>     <property name="FreightViewerETableColumn-1-Ascending" value="true"/>     <property name="FreightViewerETableColumn-2-PreferredWidth" value="75"/>     <property name="FreightViewerETableColumn-1-SortRank" value="0"/>     <property name="FreightViewerETableColumn-0-HeaderValue" value="Nodes"/>     <property name="FreightViewerETableColumn-2-Ascending" value="true"/>     <property name="FreightViewerETableColumn-0-Width" value="122"/>     <property name="FreightViewerOutlineViewOutlineColumn-2-shortDescription" value="Description"/> </properties> NB: However, note as described in this issue, 
that is, since 7.2 the hidden/visible state of columns isn't persisted and in fact causes problems. I replaced org-openide-explorer.jar with an earlier one from 7.1.1 and the problem went away, though of course the OutlineView enhancements made since 7.2 are then lost as well. So, looking forward to seeing this problem fixed.

    Read the article

  • Viewing the NetBeans Central Registry (Part 2)

    - by Geertjan
    Jens Hofschröer, who has one of the very best NetBeans Platform blogs (if you more or less understand German), and who wrote, sometime ago, the initial version of the Import Statement Organizer, as well as being the main developer of a great gear design & manufacturing tool on the NetBeans Platform in Aachen, commented on my recent blog entry "Viewing the NetBeans Central Registry", where the root Node of the Central Registry is shown in a BeanTreeView, with the words: "I wrapped that Node in a FilterNode to provide the 'position' attribute and the 'file extension'. All Children are wrapped too. Then I used an OutlineView to show these two properties. Great tool to find wrong layer entries." I asked him for the code he describes above and he sent it to me. He discussed it here in his blog, while all the code involved can be read below. The result is as follows, where you can see that the OutlineView shows information that my simple implementation (via a BeanTreeView) kept hidden: And so here is the definition of the Node. class LayerPropertiesNode extends FilterNode { public LayerPropertiesNode(Node node) { super(node, isFolder(node) ? Children.create(new LayerPropertiesFactory(node), true) : Children.LEAF); } private static boolean isFolder(Node node) { return null != node.getLookup().lookup(DataFolder.class); } @Override public String getDisplayName() { return getLookup().lookup(FileObject.class).getName(); } @Override public Image getIcon(int type) { FileObject fo = getLookup().lookup(FileObject.class); try { DataObject data = DataObject.find(fo); return data.getNodeDelegate().getIcon(type); } catch (DataObjectNotFoundException ex) { Exceptions.printStackTrace(ex); } return super.getIcon(type); } @Override public Image getOpenedIcon(int type) { return getIcon(type); } @Override public PropertySet[] getPropertySets() { Set set = Sheet.createPropertiesSet(); set.put(new PropertySupport.ReadOnly<Integer>( "position", Integer.class, "Position", null) { @Override public Integer getValue() throws IllegalAccessException, InvocationTargetException { FileObject fileEntry = getLookup().lookup(FileObject.class); Integer posValue = (Integer) fileEntry.getAttribute("position"); return posValue != null ? posValue : Integer.valueOf(0); } }); set.put(new PropertySupport.ReadOnly<String>( "ext", String.class, "Extension", null) { @Override public String getValue() throws IllegalAccessException, InvocationTargetException { FileObject fileEntry = getLookup().lookup(FileObject.class); return fileEntry.getExt(); } }); PropertySet[] original = super.getPropertySets(); PropertySet[] withLayer = new PropertySet[original.length + 1]; System.arraycopy(original, 0, withLayer, 0, original.length); withLayer[withLayer.length - 1] = set; return withLayer; } private static class LayerPropertiesFactory extends ChildFactory<FileObject> { private final Node context; public LayerPropertiesFactory(Node context) { this.context = context; } @Override protected boolean createKeys(List<FileObject> list) { FileObject folder = context.getLookup().lookup(FileObject.class); FileObject[] children = folder.getChildren(); List<FileObject> ordered = FileUtil.getOrder(Arrays.asList(children), false); list.addAll(ordered); return true; } @Override protected Node createNodeForKey(FileObject key) { AbstractNode node = new AbstractNode(org.openide.nodes.Children.LEAF, key.isFolder() ? 
Lookups.fixed(key, DataFolder.findFolder(key)) : Lookups.singleton(key)); return new LayerPropertiesNode(node); } } } Then here is the definition of the Action, which pops up a JPanel, displaying an OutlineView: @ActionID(category = "Tools", id = "de.nigjo.nb.layerview.LayerViewAction") @ActionRegistration(displayName = "#CTL_LayerViewAction") @ActionReferences({ @ActionReference(path = "Menu/Tools", position = 1450, separatorBefore = 1425) }) @Messages("CTL_LayerViewAction=Display XML Layer") public final class LayerViewAction implements ActionListener { @Override public void actionPerformed(ActionEvent e) { try { Node node = DataObject.find(FileUtil.getConfigRoot()).getNodeDelegate(); node = new LayerPropertiesNode(node); node = new FilterNode(node) { @Override public Component getCustomizer() { LayerView view = new LayerView(); view.getExplorerManager().setRootContext(this); return view; } @Override public boolean hasCustomizer() { return true; } }; NodeOperation.getDefault().customize(node); } catch (DataObjectNotFoundException ex) { Exceptions.printStackTrace(ex); } } private static class LayerView extends JPanel implements ExplorerManager.Provider { private final ExplorerManager em; public LayerView() { super(new BorderLayout()); em = new ExplorerManager(); OutlineView view = new OutlineView("entry"); view.addPropertyColumn("position", "Position"); view.addPropertyColumn("ext", "Extension"); add(view); } @Override public ExplorerManager getExplorerManager() { return em; } } }

    Read the article

  • YouTube SEO: Video Optimization

    - by Mike Stiles
    SEO optimization is still regarded as one of the primary tools in the digital marketing kit. However and wherever a potential customer is conducting a search, brands want their content to surface in the top results. Makes sense. But without a regular flow of good, relevant content, your SEO opportunities run shallow. We know from several studies video is one of the most engaging forms of content, so why not make sure that in addition to being cool, your videos are helping you win the SEO game? Keywords:-Decide what search phrases make the most sense for your video. Don’t dare use phrases that have nothing to do with the content. You’ll make people mad.-Research those keywords to see how competitive they are. Adjust them so there are still lots of people searching for it, but there are not as many links showing up for it.-Search your potential keywords and phrases to see what comes up. It’s amazing how many people forget to do that. Video Title: -Try to start and/or end with your keyword.-When you search on YouTube, visual action words tend to come up as suggested searches. So try to use action words. Video Description: -Lead with a link to your site (include http://). -Don’t stuff this with your keyword. It leads to bad writing and it won’t work anyway. This is where you convince people to watch, so write for humans. Use some showmanship. -At the end, do a call to action (subscribe, see the whole playlist, visit our social channels, etc.) Video Tags:-Don’t over-tag. 5-10 tags per video is plenty. -If you’re compelled to have more than 10, that means you should probably make more videos specifically targeting all those keywords. Find Linking Pals:-45% of videos are discovered on video sites. But 44% are found through links on blogs and sites.-Write a blog about your video’s content, then link to the video in it. -A good site for finding places to guest blog is myblogguest.com-Once you find good linking partners, they’ll link to your future videos (as long as they’re good and you’re returning the favor). Tap the Power of Similar Videos:-Use Video Reply to associate your video with other topic-related videos. That’s when you make a video responding to or referencing a video made by someone else. Content:-Again, build up a portfolio of videos, not just one that goes after 30 keywords.-Create shorter, sequential videos that pull them deeper into the content and closer to a desired final action.-Organize your video topics separately using Playlists. Playlists show up as a whole in search results like individual videos, so optimize playlists the same as you would for a video. Meta Data:-Too much importance is placed on it. It accounts for only 15% of search success.-YouTube reads Captions or Transcripts to determine what a video is about. If you’re not using them, you’re missing out.-You get the SEO benefit of captions and transcripts whether the viewers has them toggled on or not. Promotion:-This accounts for 25% of search success.-Promote the daylights out of your videos using your social channels and digital assets. Don’t assume it’s going to magically get discovered. -You can pay to promote your video. This could surface it on the YouTube home page, YouTube search results, YouTube related videos, and across the Google content network. Community:-Accounts for 10% of search success.-Make sure your YouTube home page is a fun place to spend time. Carefully pick your featured video, and make sure your Playlists are featured. -Participate in discussions so users will see you’re present. 
The volume of ratings/comments is as important as the number of views when it comes to where you surface on search. Video Sitemaps:-As with a web site, a video sitemap helps Google quickly index your video.-Google wants to know title, description, play page URL, the URL of the thumbnail image you want, and raw video file location.-Sitemaps are xml files you host or dynamically generate on your site. Once you’ve made your sitemap, sign in and submit it using Google webmaster tools. Just as with the broadcast and cable TV channels, putting a video out there is only step one. You also have to make sure everybody knows it’s there so the largest audience possible can see it. Here’s hoping you get great ratings. @mikestiles
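    To make the sitemap point concrete, a minimal video sitemap entry covering the fields listed above might look roughly like this (the URLs and text are placeholders; check Google's current video sitemap documentation for the authoritative schema):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
      <url>
        <!-- Play page URL -->
        <loc>http://www.example.com/videos/my-video-play-page.html</loc>
        <video:video>
          <video:title>My Video Title</video:title>
          <video:description>Short, human-readable description of the video.</video:description>
          <video:thumbnail_loc>http://www.example.com/thumbs/my-video.jpg</video:thumbnail_loc>
          <!-- Raw video file location -->
          <video:content_loc>http://www.example.com/videos/my-video.mp4</video:content_loc>
        </video:video>
      </url>
    </urlset>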

    Read the article

  • GI Startup Sequence

    - by Allen Gao
    This post describes the startup sequence of 11gR2 Grid Infrastructure (GI) and what to check when it fails to come up. The GI startup can be divided into three levels: the ohasd level, the cluster-forming level, and the resource level.

First, the ohasd level.
1. The following entry in /etc/inittab starts init.ohasd: h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null. After that, a process like this should be visible: root 4865 1 0 Dec02 ? 00:01:01 /bin/sh /etc/init.d/init.ohasd run. If it is not, check: + that the init.ohasd entry is present in /etc/inittab + that the OS run level is correct + that the S* ohasd init scripts are in place, e.g. S96ohasd + that GI auto-start is enabled (crsctl enable crs). Next, ohasd.bin starts. It needs the OLR in order to start, so if ohasd.bin does not come up, check that the OLR exists and is accessible. The OLR is located at $GRID_HOME/cdata/${HOSTNAME}.olr.
2. ohasd.bin then spawns its agents (orarootagent, oraagent, cssdagent and cssdmonitor). If one of these agents fails to start, verify that the corresponding executables under $GRID_HOME/bin exist, have the correct ownership and permissions, and are not corrupted.

Next, the cluster-forming level.
1. mdnsd uses multicast to discover the other nodes of the cluster.
2. gpnpd starts, completes its bootstrap, and keeps the gpnp profile synchronized between the cluster nodes; it relies on mdnsd to locate the other nodes. The gpnp profile is <gi_home>/gpnp/profiles/peer/profile.xml.
3. gipcd starts and obtains the cluster interconnect information from the gpnp profile, so it in turn depends on gpnpd.
4. ocssd.bin reads the gpnp profile to locate the voting disks and then joins the cluster. If ocssd.bin fails to start, check: + that the gpnp profile exists and is readable + that gpnpd is running normally + that the ASM disks holding the voting files are accessible + that the cluster interconnect is working.
5. After that the remaining lower-stack resources are started: ora.ctssd, ora.asm, ora.cluster_interconnect.haip, ora.crf, ora.crsd, etc.
Note: ocssd.bin, gpnpd.bin and gipcd.bin cooperate closely during this phase; if gpnpd.bin does not start properly, ocssd.bin and gipcd.bin fall back to reading the local copy of the gpnp profile.

Finally, the resource level, where the resources are managed by crsd.
1. crsd needs to read the OCR. If the OCR is stored in ASM, the ASM instance must be up and the diskgroup containing the OCR mounted; if the OCR cannot be accessed, crsd cannot start.
2. crsd spawns its own agents (orarootagent, oraagent_<rdbms_owner>, oraagent_<gi_owner>). If an agent fails to start, verify that the corresponding files under $GRID_HOME/bin exist, have the correct permissions and are not corrupted.
3. crsd then starts the resources: ora.net1.network: the network resource; scanvip, vip and listener resources depend on it, so if it goes offline, the vip, scanvip and listener resources go offline as well. ora.<scan_name>.vip: the SCAN VIP resources (at most 3 of them). ora.<node_name>.vip: the VIP resource of each node. ora.<listener_name>.lsnr: the listener resource; from 11gR2 on, listener.ora is generated automatically and does not need to be edited by hand. ora.LISTENER_SCAN<n>.lsnr: the SCAN listeners. ora.<diskgroup_name>.dg: the ASM diskgroup resource; online means the diskgroup is mounted, offline means it is dismounted. ora.<db_name>.db: the database resource; in 11gR2 a single resource represents the whole RAC database, with the instances identified by the attribute "USR_ORA_INST_NAME@SERVERNAME(<node name>)". If the database is stored in ASM, its dependency on the diskgroup resources is maintained automatically; when datafiles are moved to other diskgroups the dependency should be adjusted accordingly (crsctl modify res ...). ora.<service_name>.svc: the service resource; since 11gR2 a single resource represents a service, whereas in 10gR2 a service consisted of separate srv and cs resources. ora.cvu: introduced in 11.2.0.2, it runs cluvfy periodically to check the health of the cluster. ora.ons: the ONS (Oracle Notification Service) resource.

Finally, these are the log files that are most useful when troubleshooting GI startup problems:
$GRID_HOME/log/<node_name>/ocssd <== ocssd.bin log
$GRID_HOME/log/<node_name>/gpnpd <== gpnpd.bin log
$GRID_HOME/log/<node_name>/gipcd <== gipcd.bin log
$GRID_HOME/log/<node_name>/agent/crsd <== logs of the crsd agents
$GRID_HOME/log/<node_name>/agent/ohasd <== logs of the ohasd agents
$GRID_HOME/log/<node_name>/mdnsd <== mdnsd.bin log
$GRID_HOME/log/<node_name>/client <== logs of the GI client tools (ocrdump, crsctl, ocrcheck, gpnptool, etc.)
$GRID_HOME/log/<node_name>/ctssd <== ctssd.bin log
$GRID_HOME/log/<node_name>/crsd <== crsd.bin log
$GRID_HOME/log/<node_name>/cvu <== cluvfy / ora.cvu logs
$GRID_HOME/bin/diagcollection.sh <== script that collects all of the logs above
In addition, the socket files under /var/tmp/.oracle and /tmp/.oracle are used by the clusterware processes for IPC. Do not delete or modify these files; if they are removed or damaged, GI may fail to start properly, and this is a fairly common cause of GI startup problems.
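    If you need to see where the startup is stuck, the standard clusterware commands can be used to list the state of the resources described above (shown as generic examples; paths depend on your installation):

    # Lower-stack (ohasd-managed) resources: ora.cssd, ora.ctssd, ora.asm, ora.crsd, ...
    $GRID_HOME/bin/crsctl stat res -t -init

    # crsd-managed resources: VIPs, listeners, diskgroups, databases, services, ...
    $GRID_HOME/bin/crsctl stat res -t

    # Overall stack status on the local node
    $GRID_HOME/bin/crsctl check crs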

    Read the article

  • .NET Winform AJAX Login Services

    - by AdamSane
    I am working on a Windows Form that connects to a ASP.NET membership database and I am trying to use the AJAX Login Service. No matter what I do I keep on getting 404 errors on the Authentication_JSON_AppService.axd call. Web Config Below <?xml version="1.0"?> <!-- Note: As an alternative to hand editing this file you can use the web admin tool to configure settings for your application. Use the Website->Asp.Net Configuration option in Visual Studio. A full list of settings and comments can be found in machine.config.comments usually located in \Windows\Microsoft.Net\Framework\v2.x\Config --> <configuration> <configSections> <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere"/> <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> </sectionGroup> </sectionGroup> </sectionGroup> </configSections> <connectionStrings <!-- Removed --> /> <appSettings/> <system.web> <!-- Set compilation debug="true" to insert debugging symbols into the compiled page. Because this affects performance, set this value to true only during development. 
--> <compilation debug="true"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Data.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> </assemblies> </compilation> <membership defaultProvider="dbSqlMembershipProvider"> <providers> <add name="dbSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="Fire.Common.Properties.Settings.dbFireConnectionString" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="true" applicationName="/" requiresUniqueEmail="false" passwordFormat="Hashed" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="7" minRequiredNonalphanumericCharacters="1" passwordAttemptWindow="10" passwordStrengthRegularExpression=""/> </providers> </membership> <roleManager enabled="true" defaultProvider="dbSqlRoleProvider"> <providers> <add connectionStringName="Fire.Common.Properties.Settings.dbFireConnectionString" applicationName="/" name="dbSqlRoleProvider" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/> </providers> </roleManager> <authentication mode="Forms"> <forms loginUrl="Login.aspx" cookieless="UseCookies" protection="All" timeout="30" requireSSL="false" slidingExpiration="true" defaultUrl="default.aspx" enableCrossAppRedirects="false"/> </authentication> <authorization> <allow users="*"/> <allow users="?"/> </authorization> <customErrors mode="Off"> </customErrors> <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="GET,HEAD" path="ScriptResource.axd" validate="false" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </httpModules> </system.web> <location path="~/Admin"> <system.web> <authorization> <allow roles="Admin"/> <allow roles="System"/> <deny users="*"/> </authorization> </system.web> </location> <location path="~/Admin/System"> <system.web> <authorization> <allow roles="System"/> <deny users="*"/> </authorization> </system.web> </location> <location path="~/Export"> <system.web> <authorization> <allow 
roles="Export"/> <deny users="*"/> </authorization> </system.web> </location> <location path="~/Field"> <system.web> <authorization> <allow roles="Field"/> <deny users="*"/> </authorization> </system.web> </location> <location path="~/Default.aspx"> <system.web> <authorization> <allow roles="Admin"/> <allow roles="System"/> <allow roles="Export"/> <allow roles="Field"/> <deny users="?"/> </authorization> </system.web> </location> <location path="~/Login.aspx"> <system.web> <authorization> <allow users="*"/> <allow users="?"/> </authorization> </system.web> </location> <location path="~/App_Themes"> <system.web> <authorization> <allow users="*"/> <allow users="?"/> </authorization> </system.web> </location> <location path="~/Includes"> <system.web> <authorization> <allow users="*"/> <allow users="?"/> </authorization> </system.web> </location> <location path="~/WebServices"> <system.web> <authorization> <allow users="*"/> <allow users="?"/> </authorization> </system.web> </location> <location path="~/Authentication_JSON_AppService.axd"> <system.web> <authorization> <allow users="*"/> <allow users="?"/> </authorization> </system.web> </location> <system.codedom> <compilers> <compiler language="c#;cs;csharp" extension=".cs" type="Microsoft.CSharp.CSharpCodeProvider,System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" warningLevel="4"> <providerOption name="CompilerVersion" value="v3.5"/> <providerOption name="WarnAsError" value="false"/> </compiler> <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb" type="Microsoft.VisualBasic.VBCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" warningLevel="4"> <providerOption name="CompilerVersion" value="v3.5"/> <providerOption name="OptionInfer" value="true"/> <providerOption name="WarnAsError" value="false"/> </compiler> </compilers> </system.codedom> <!-- The system.webServer section is required for running ASP.NET AJAX under Internet Information Services 7.0. It is not necessary for previous version of IIS. 
--> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules> <remove name="ScriptModule"/> <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </modules> <handlers> <remove name="WebServiceHandlerFactory-Integrated"/> <remove name="ScriptHandlerFactory"/> <remove name="ScriptHandlerFactoryAppServices"/> <remove name="ScriptResource"/> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptResource" verb="GET,HEAD" path="ScriptResource.axd" preCondition="integratedMode" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </handlers> </system.webServer> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> </assemblyBinding> </runtime> </configuration>
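    One thing worth checking, since it is easy to miss in a config this size: nothing above actually switches the ASP.NET AJAX authentication service on, and Authentication_JSON_AppService.axd generally won't respond until it is enabled. In a 3.5 web.config that is normally done with a scripting/webServices element along these lines (requireSSL shown as false purely for illustration):

    <system.web.extensions>
      <scripting>
        <webServices>
          <!-- Exposes /Authentication_JSON_AppService.axd for forms authentication from client script -->
          <authenticationService enabled="true" requireSSL="false"/>
        </webServices>
      </scripting>
    </system.web.extensions>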

    Read the article

  • xutility file???

    - by user574290
    Hi all. I'm trying to use c code with opencv in face detection and counting, but I cannot build the source. I am trying to compile my project and I am having a lot of problems with a line in the xutility file. the error message show that it error with xutility file. Please help me, how to solve this problem? this is my code // Include header files #include "stdafx.h" #include "cv.h" #include "highgui.h" #include <stdio.h> #include <stdlib.h> #include <string.h> #include <assert.h> #include <math.h> #include <float.h> #include <limits.h> #include <time.h> #include <ctype.h> #include <iostream> #include <fstream> #include <vector> using namespace std; #ifdef _EiC #define WIN32 #endif int countfaces=0; int numFaces = 0; int k=0 ; int list=0; char filelist[512][512]; int timeCount = 0; static CvMemStorage* storage = 0; static CvHaarClassifierCascade* cascade = 0; void detect_and_draw( IplImage* image ); void WriteInDB(); int found_face(IplImage* img,CvPoint pt1,CvPoint pt2); int load_DB(char * filename); const char* cascade_name = "C:\\Program Files\\OpenCV\\OpenCV2.1\\data\\haarcascades\\haarcascade_frontalface_alt_tree.xml"; // BEGIN NEW CODE #define WRITEVIDEO char* outputVideo = "c:\\face_counting1_tracked.avi"; //int faceCount = 0; int posBuffer = 100; int persistDuration = 10; //faces can drop out for 10 frames int timestamp = 0; float sameFaceDistThreshold = 30; //pixel distance CvPoint facePositions[100]; int facePositionsTimestamp[100]; float distance( CvPoint a, CvPoint b ) { float dist = sqrt(float ( (a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) ) ); return dist; } void expirePositions() { for (int i = 0; i < posBuffer; i++) { if (facePositionsTimestamp[i] <= (timestamp - persistDuration)) //if a tracked pos is older than three frames { facePositions[i] = cvPoint(999,999); } } } void updateCounter(CvPoint center) { bool newFace = true; for(int i = 0; i < posBuffer; i++) { if (distance(center, facePositions[i]) < sameFaceDistThreshold) { facePositions[i] = center; facePositionsTimestamp[i] = timestamp; newFace = false; break; } } if(newFace) { //push out oldest tracker for(int i = 1; i < posBuffer; i++) { facePositions[i] = facePositions[i - 1]; } //put new tracked position on top of stack facePositions[0] = center; facePositionsTimestamp[0] = timestamp; countfaces++; } } void drawCounter(IplImage* image) { // Create Font char buffer[5]; CvFont font; cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, .5, .5, 0, 1); cvPutText(image, "Faces:", cvPoint(20, 20), &font, CV_RGB(0,255,0)); cvPutText(image, itoa(countfaces, buffer, 10), cvPoint(80, 20), &font, CV_RGB(0,255,0)); } #ifdef WRITEVIDEO CvVideoWriter* videoWriter = cvCreateVideoWriter(outputVideo, -1, 30, cvSize(240, 180)); #endif //END NEW CODE int main( int argc, char** argv ) { CvCapture* capture = 0; IplImage *frame, *frame_copy = 0; int optlen = strlen("--cascade="); const char* input_name; if( argc > 1 && strncmp( argv[1], "--cascade=", optlen ) == 0 ) { cascade_name = argv[1] + optlen; input_name = argc > 2 ? argv[2] : 0; } else { cascade_name = "C:\\Program Files\\OpenCV\\OpenCV2.1\\data\\haarcascades\\haarcascade_frontalface_alt_tree.xml"; input_name = argc > 1 ? 
argv[1] : 0; } cascade = (CvHaarClassifierCascade*)cvLoad( cascade_name, 0, 0, 0 ); if( !cascade ) { fprintf( stderr, "ERROR: Could not load classifier cascade\n" ); fprintf( stderr, "Usage: facedetect --cascade=\"<cascade_path>\" [filename|camera_index]\n" ); return -1; } storage = cvCreateMemStorage(0); //if( !input_name || (isdigit(input_name[0]) && input_name[1] == '\0') ) // capture = cvCaptureFromCAM( !input_name ? 0 : input_name[0] - '0' ); //else capture = cvCaptureFromAVI( "c:\\face_counting1.avi" ); cvNamedWindow( "result", 1 ); if( capture ) { for(;;) { if( !cvGrabFrame( capture )) break; frame = cvRetrieveFrame( capture ); if( !frame ) break; if( !frame_copy ) frame_copy = cvCreateImage( cvSize(frame->width,frame->height), IPL_DEPTH_8U, frame->nChannels ); if( frame->origin == IPL_ORIGIN_TL ) cvCopy( frame, frame_copy, 0 ); else cvFlip( frame, frame_copy, 0 ); detect_and_draw( frame_copy ); if( cvWaitKey( 30 ) >= 0 ) break; } cvReleaseImage( &frame_copy ); cvReleaseCapture( &capture ); } else { if( !input_name || (isdigit(input_name[0]) && input_name[1] == '\0')) cvNamedWindow( "result", 1 ); const char* filename = input_name ? input_name : (char*)"lena.jpg"; IplImage* image = cvLoadImage( filename, 1 ); if( image ) { detect_and_draw( image ); cvWaitKey(0); cvReleaseImage( &image ); } else { /* assume it is a text file containing the list of the image filenames to be processed - one per line */ FILE* f = fopen( filename, "rt" ); if( f ) { char buf[1000+1]; while( fgets( buf, 1000, f ) ) { int len = (int)strlen(buf); while( len > 0 && isspace(buf[len-1]) ) len--; buf[len] = '\0'; image = cvLoadImage( buf, 1 ); if( image ) { detect_and_draw( image ); cvWaitKey(0); cvReleaseImage( &image ); } } fclose(f); } } } cvDestroyWindow("result"); #ifdef WRITEVIDEO cvReleaseVideoWriter(&videoWriter); #endif return 0; } void detect_and_draw( IplImage* img ) { static CvScalar colors[] = { {{0,0,255}}, {{0,128,255}}, {{0,255,255}}, {{0,255,0}}, {{255,128,0}}, {{255,255,0}}, {{255,0,0}}, {{255,0,255}} }; double scale = 1.3; IplImage* gray = cvCreateImage( cvSize(img->width,img->height), 8, 1 ); IplImage* small_img = cvCreateImage( cvSize( cvRound (img->width/scale), cvRound (img->height/scale)), 8, 1 ); CvPoint pt1, pt2; int i; cvCvtColor( img, gray, CV_BGR2GRAY ); cvResize( gray, small_img, CV_INTER_LINEAR ); cvEqualizeHist( small_img, small_img ); cvClearMemStorage( storage ); if( cascade ) { double t = (double)cvGetTickCount(); CvSeq* faces = cvHaarDetectObjects( small_img, cascade, storage, 1.1, 2, 0/*CV_HAAR_DO_CANNY_PRUNING*/, cvSize(30, 30) ); t = (double)cvGetTickCount() - t; printf( "detection time = %gms\n", t/((double)cvGetTickFrequency()*1000.) ); if (faces) { //To save the detected faces into separate images, here's a quick and dirty code: char filename[6]; for( i = 0; i < (faces ? 
faces->total : 0); i++ ) { /* CvRect* r = (CvRect*)cvGetSeqElem( faces, i ); CvPoint center; int radius; center.x = cvRound((r->x + r->width*0.5)*scale); center.y = cvRound((r->y + r->height*0.5)*scale); radius = cvRound((r->width + r->height)*0.25*scale); cvCircle( img, center, radius, colors[i%8], 3, 8, 0 );*/ // Create a new rectangle for drawing the face CvRect* r = (CvRect*)cvGetSeqElem( faces, i ); // Find the dimensions of the face,and scale it if necessary pt1.x = r->x*scale; pt2.x = (r->x+r->width)*scale; pt1.y = r->y*scale; pt2.y = (r->y+r->height)*scale; // Draw the rectangle in the input image cvRectangle( img, pt1, pt2, CV_RGB(255,0,0), 3, 8, 0 ); CvPoint center; int radius; center.x = cvRound((r->x + r->width*0.5)*scale); center.y = cvRound((r->y + r->height*0.5)*scale); radius = cvRound((r->width + r->height)*0.25*scale); cvCircle( img, center, radius, CV_RGB(255,0,0), 3, 8, 0 ); //update counter updateCounter(center); int y=found_face(img,pt1,pt2); if(y==0) countfaces++; }//end for printf("Number of detected faces: %d\t",countfaces); }//end if //delete old track positions from facePositions array expirePositions(); timestamp++; //draw counter drawCounter(img); #ifdef WRITEVIDEO cvWriteFrame(videoWriter, img); #endif cvShowImage( "result", img ); cvDestroyWindow("Result"); cvReleaseImage( &gray ); cvReleaseImage( &small_img ); }//end if } //end void int found_face(IplImage* img,CvPoint pt1,CvPoint pt2) { /*if (faces) {*/ CvSeq* faces = cvHaarDetectObjects( img, cascade, storage, 1.1, 2, CV_HAAR_DO_CANNY_PRUNING, cvSize(40, 40) ); int i=0; char filename[512]; for( i = 0; i < (faces ? faces->total : 0); i++ ) {//int scale = 1, i=0; //i=iface; //char filename[512]; /* extract the rectanlges only */ // CvRect face_rect = *(CvRect*)cvGetSeqElem( faces, i); CvRect face_rect = *(CvRect*)cvGetSeqElem( faces, i); //IplImage* gray_img = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 ); IplImage* clone = cvCreateImage (cvSize(img->width, img->height), IPL_DEPTH_8U, img->nChannels ); IplImage* gray = cvCreateImage (cvSize(img->width, img->height), IPL_DEPTH_8U, 1 ); cvCopy (img, clone, 0); cvNamedWindow ("ROI", CV_WINDOW_AUTOSIZE); cvCvtColor( clone, gray, CV_RGB2GRAY ); face_rect.x = pt1.x; face_rect.y = pt1.y; face_rect.width = abs(pt1.x - pt2.x); face_rect.height = abs(pt1.y - pt2.y); cvSetImageROI ( gray, face_rect); //// * rectangle = cvGetImageROI ( clone ); face_rect = cvGetImageROI ( gray ); cvShowImage ("ROI", gray); k++; char *name=0; name=(char*) calloc(512, 1); sprintf(name, "Image%d.pgm", k); cvSaveImage(name, gray); //////////////// for(int j=0;j<512;j++) filelist[list][j]=name[j]; list++; WriteInDB(); //int found=SIFT("result.txt",name); cvResetImageROI( gray ); //return found; return 0; // }//end if }//end for }//end void void WriteInDB() { ofstream myfile; myfile.open ("result.txt"); for(int i=0;i<512;i++) { if(strcmp(filelist[i],"")!=0) myfile << filelist[i]<<"\n"; } myfile.close(); } Error 3 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 8 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 13 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 18 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 23 error C4430: missing type specifier - int assumed. 
Note: C++ does not support default-int c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 10 error C2868: 'std::iterator_traits<_Iter>::value_type' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 25 error C2868: 'std::iterator_traits<_Iter>::reference' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 20 error C2868: 'std::iterator_traits<_Iter>::pointer' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 5 error C2868: 'std::iterator_traits<_Iter>::iterator_category' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 764 Error 15 error C2868: 'std::iterator_traits<_Iter>::difference_type' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 9 error C2602: 'std::iterator_traits<_Iter>::value_type' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 24 error C2602: 'std::iterator_traits<_Iter>::reference' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 19 error C2602: 'std::iterator_traits<_Iter>::pointer' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 4 error C2602: 'std::iterator_traits<_Iter>::iterator_category' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 764 Error 14 error C2602: 'std::iterator_traits<_Iter>::difference_type' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 7 error C2146: syntax error : missing ';' before identifier 'value_type' c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 22 error C2146: syntax error : missing ';' before identifier 'reference' c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 17 error C2146: syntax error : missing ';' before identifier 'pointer' c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 2 error C2146: syntax error : missing ';' before identifier 'iterator_category' c:\program files\microsoft visual studio 9.0\vc\include\xutility 764 Error 12 error C2146: syntax error : missing ';' before identifier 'difference_type' c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 6 error C2039: 'value_type' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 21 error C2039: 'reference' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 16 error C2039: 'pointer' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 1 error C2039: 'iterator_category' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 764 Error 11 error C2039: 'difference_type' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 766
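    For what it's worth, errors from xutility complaining that iterator_category, value_type, difference_type and friends are not members of CvPoint usually mean std::iterator_traits is being instantiated with CvPoint. In the code above the likely trigger is the global helper float distance(CvPoint, CvPoint) combined with using namespace std;, which makes calls such as distance(center, facePositions[i]) collide with std::distance. A minimal sketch of the usual fix is to rename the helper (or fully qualify the calls as ::distance):

    // Renamed so it can no longer be confused with std::distance
    // once "using namespace std;" is in effect.
    float pointDistance(CvPoint a, CvPoint b)
    {
        float dx = (float)(a.x - b.x);
        float dy = (float)(a.y - b.y);
        return sqrt(dx * dx + dy * dy);
    }

    // ...and in updateCounter():
    // if (pointDistance(center, facePositions[i]) < sameFaceDistThreshold) { ... }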

    Read the article

  • Please verify my layout: bottom button keeps coming up over keyboard

    - by steff
    Hi everyone, I have a layout which does almost what I want. There's just one bug regarding the button at the bottom. I should stay at the bottom at all times. But whenever I bring up the soft-keyboard the button will be displayed above the keyboard. This is not what I want but it should become covered by the keyboard. Moreover, I'd be happy if you could comment on how the layout's built. Thanks, steff <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="vertical"> <LinearLayout android:id="@+id/l_layout_tags" android:layout_width="fill_parent" android:layout_height="wrap_content" android:orientation="horizontal"> <TextView android:text="TAGS:" android:layout_width="wrap_content" android:layout_height="wrap_content" /> <AutoCompleteTextView android:id="@+id/actv_tags" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_weight="1" android:imeOptions="actionDone" /> <ImageButton android:id="@+id/btn_add_tag" android:layout_width="wrap_content" android:layout_height="wrap_content" android:src="@android:drawable/ic_input_add" android:onClick="addTag"/> </LinearLayout> <ScrollView android:id="@+id/sv_scroll_contents" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_below="@id/l_layout_tags" android:scrollbarFadeDuration="2000" > <TableLayout android:id="@+id/t_layout_contents" android:layout_width="fill_parent" android:layout_height="wrap_content" android:stretchColumns="1" android:paddingRight="5dip"> <TableRow android:id="@+id/tr_template"> <ImageView android:id="@+id/iv_blank" android:src="@android:color/transparent" /> <EditText android:id="@+id/et_content1" android:gravity="top" android:maxWidth="200dp" android:imeOptions="actionDone" /> </TableRow> </TableLayout> </ScrollView> <LinearLayout android:id="@+id/l_layout_media_btns" android:layout_width="fill_parent" android:layout_height="wrap_content" android:orientation="horizontal" android:layout_centerHorizontal="true" android:layout_below="@id/sv_scroll_contents" > <ImageButton android:id="@+id/btn_camera" android:layout_width="wrap_content" android:layout_height="wrap_content" android:src="@android:drawable/ic_menu_camera" android:onClick="takePicture" /> <ImageButton android:id="@+id/btn_video" android:layout_width="wrap_content" android:layout_height="wrap_content" android:src="@android:drawable/ic_menu_camera" /> <ImageButton android:id="@+id/btn_audio" android:layout_width="wrap_content" android:layout_height="wrap_content" android:src="@android:drawable/ic_btn_speak_now" /> <ImageButton android:id="@+id/btn_sketch" android:layout_width="wrap_content" android:layout_height="wrap_content" android:src="@android:drawable/ic_menu_edit" /> </LinearLayout> <ImageButton android:id="@+id/btn_save_note" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_alignParentBottom="true" android:src="@android:drawable/ic_menu_upload" /> </RelativeLayout>
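    A hedged suggestion, since the relevant setting lives outside this layout file: whether the bottom button rides up above the keyboard or gets covered by it is controlled by the activity's soft-input mode, not by the XML above. Declaring adjustPan for the activity in AndroidManifest.xml (the activity name below is just a placeholder) normally makes the keyboard overlap the bottom of the window instead of resizing it:

    <!-- AndroidManifest.xml -->
    <activity
        android:name=".NoteEditActivity"
        android:windowSoftInputMode="adjustPan" />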

    Read the article

  • Android RelativeLayout fill_parent unexpected behavior in a ListView with varying row heights

    - by Jameel Al-Aziz
    I'm currently working on a small update to a project and I'm having an issue with Relative_Layout and fill_parent in a list view. I'm trying to insert a divider between two sections in each row, much like the divider in the call log of the default dialer. I checked out the Android source code to see how they did it, but I encountered a problem when replicating their solution. To start, here is my row item layout: <?xml version="1.0" encoding="utf-8"?> <RelativeLayout android:id="@+id/RelativeLayout01" android:layout_width="fill_parent" xmlns:android="http://schemas.android.com/apk/res/android" android:padding="10dip" android:layout_height="fill_parent" android:maxHeight="64dip" android:minHeight="?android:attr/listPreferredItemHeight"> <ImageView android:id="@+id/infoimage" android:layout_width="wrap_content" android:layout_height="wrap_content" android:clickable="true" android:src="@drawable/info_icon_big" android:layout_alignParentRight="true" android:layout_centerVertical="true"/> <View android:id="@+id/divider" android:background="@drawable/divider_vertical_dark" android:layout_marginLeft="11dip" android:layout_toLeftOf="@+id/infoimage" android:layout_width="1px" android:layout_height="fill_parent" android:layout_marginTop="5dip" android:layout_marginBottom="5dip" android:layout_marginRight="4dip"/> <TextView android:id="@+id/TextView01" android:textAppearance="?android:attr/textAppearanceLarge" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_centerVertical="true" android:layout_toRightOf="@+id/ImageView01" android:layout_toLeftOf="@+id/divider" android:gravity="left|center_vertical" android:layout_marginLeft="4dip" android:layout_marginRight="4dip"/> <ImageView android:id="@+id/ImageView01" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentLeft="true" android:background="@drawable/bborder" android:layout_centerVertical="true"/> </RelativeLayout> The issue I'm facing is that each row has a thumbnail of varying height (ImageView01). If I set the RelativeLayout's layout_height property to fill_parent, the divider does not scale vertically to fill the row (it just remains a 1px dot). If I set layout_height to "?android:attr/listPreferredItemHeight", the divider fills the row, but the thumbnails shrink. I've done some debugging in the getView() method of the adapter, and it seems that the divider's height is not being set properly once the row has it's proper height. Here is a portion of the getView() method: public View getView(int position, View view, ViewGroup parent) { if (view == null) { view = inflater.inflate(R.layout.tag_list_item, parent, false); } The rest of the method simply sets the appropriate text and images for the row. Also, I create the inflater object in the adapter's constructor with: inflater = LayoutInflater.from(context); Am I missing something essential? Or does fill_parent just not work with dynamic heights?
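    One approach that often works in this situation, offered as a sketch rather than a guaranteed fix: instead of giving the divider fill_parent, pin its top and bottom edges to the tallest sibling in the row (here the thumbnail ImageView01), so it stretches to whatever height that view ends up with:

    <View
        android:id="@+id/divider"
        android:background="@drawable/divider_vertical_dark"
        android:layout_width="1px"
        android:layout_height="wrap_content"
        android:layout_alignTop="@+id/ImageView01"
        android:layout_alignBottom="@+id/ImageView01"
        android:layout_toLeftOf="@+id/infoimage"
        android:layout_marginLeft="11dip"
        android:layout_marginRight="4dip" />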

    Read the article

  • Error deploying web application on Weblogic 10.3 using maven 2: "Can't find wsdl /wsdls/wsat.wsdl"

    - by Marcos Carceles
    Hi, I'm using maven for deploying a web application in my Weblogic 10.3 server remotely. I created my pom file based on the indication on this previous question: Using maven as build tool for Weblogic 10.3 My pom.xml file is: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.balfourbeatty.horizon.maven.test</groupId> <artifactId>maven-test-webapp</artifactId> <packaging>war</packaging> <version>1.0-SNAPSHOT</version> <name>maven-test-webapp Maven Webapp</name> <url>http://maven.apache.org</url> <properties> <weblogic.version>10.3</weblogic.version> </properties> <build> <plugins> <plugin> <groupId>org.apache.myfaces.trinidadbuild</groupId> <artifactId>maven-jdev-plugin</artifactId> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>weblogic-maven-plugin</artifactId> <version>2.9.1</version> <configuration> <name>maven-test-webapp</name> <adminServerHostName>******************</adminServerHostName> <adminServerPort>****</adminServerPort> <adminServerProtocol>t3</adminServerProtocol> <userId>******</userId> <password>*****</password> <upload>true</upload> <remote>true</remote> <verbose>true</verbose> <debug>true</debug> <targetNames>WLS_Spaces</targetNames> <noExit>true</noExit> <projectPackaging>war</projectPackaging> </configuration> <dependencies> <dependency> <groupId>com.sun</groupId> <artifactId>tools</artifactId> <version>1.6</version> <scope>system</scope> <systemPath>${java.home}/../lib/tools.jar</systemPath> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>weblogic</artifactId> <version>${weblogic.version}</version> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>webservices</artifactId> <version>${weblogic.version}</version> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.utils.full</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.i18n</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.rmi.client</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>javax.enterprise.deploy</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>webserviceclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security.wls</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security.identity</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wlclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.transaction</artifactId> 
<version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.utils.classloaders</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wljmsclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.management.core</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wls-api</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.descriptor</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.logging</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.socket.api</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security.digest</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.workmanager</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.lifecycle</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.utils.wrapper</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wlsafclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.management.jmx</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.descriptor.wl</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>javax.mail</artifactId> <version>10.3</version> </dependency> </dependencies> </plugin> </plugins> <finalName>maven-test-webapp</finalName> </build> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.8.1</version> <scope>test</scope> </dependency> <dependency> <groupId>org.codehaus.mojo</groupId> <artifactId>weblogic-maven-plugin</artifactId> <version>2.9.1</version> </dependency> </dependencies> <distributionManagement> <!-- use the following if you're not using a snapshot version. --> <repository> <id>internal</id> <name>Archiva Managed Internal Repository</name> <url>http://localhost:8180/archiva/repository/internal</url> </repository> <!-- use the following if you ARE using a snapshot version. --> <snapshotRepository> <id>snapshots</id> <name>Archiva Managed Snapshot Repository</name> <url>http://localhost:8180/archiva/repository/snapshots</url> </snapshotRepository> </distributionManagement> </project> All the dependencies are already resolved properly, as they are in the local archiva repository. 
The application does not contain any web service; it is just a "hello world" application consisting of:

/index.jsp
/WEB-INF/web.xml

The error I get is:

[BasicOperation.execute():423] : Initiating deploy operation for app, maven-test-webapp, on targets:
[BasicOperation.execute():425] : WLS_Spaces
Task 14 initiated: [Deployer:149026]deploy application maven-test-webapp on WLS_Spaces.
dumping Exception stack
Task 14 failed: [Deployer:149026]deploy application maven-test-webapp on WLS_Spaces.
Target state: deploy failed on Server WLS_Spaces

weblogic.wsee.ws.WsException: When processing WebService module 'maven-test-webapp.war'. Can't find wsdl /wsdls/wsat.wsdl
    at weblogic.wsee.deploy.WSEEWebModule.loadWsdlDefinitions(WSEEWebModule.java:159)
    at weblogic.wsee.deploy.WSEEModule.loadWsdl(WSEEModule.java:334)
    at weblogic.wsee.deploy.WSEEAnnotationProcessor.isWsdlHasPolicy(WSEEAnnotationProcessor.java:312)
    at weblogic.wsee.deploy.WSEEAnnotationProcessor.process(WSEEAnnotationProcessor.java:91)
    at weblogic.wsee.deploy.WSEEAnnotationProcessor.process(WSEEAnnotationProcessor.java:51)
    at weblogic.wsee.deploy.WSEEModule.prepare(WSEEModule.java:102)
    at weblogic.wsee.deploy.ServletDeployListener.contextPrepared(ServletDeployListener.java:26)
    at weblogic.servlet.internal.EventsManager$FireContextPreparedAction.run(EventsManager.java:503)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.EventsManager.notifyContextPreparedEvent(EventsManager.java:162)
    at weblogic.servlet.internal.WebAppServletContext.initContextListeners(WebAppServletContext.java:1782)
    at weblogic.servlet.internal.WebAppServletContext.prepare(WebAppServletContext.java:1136)
    at weblogic.servlet.internal.HttpServer.doPostContextInit(HttpServer.java:449)
    at weblogic.servlet.internal.HttpServer.loadWebApp(HttpServer.java:424)
    at weblogic.servlet.internal.WebAppModule.registerWebApp(WebAppModule.java:924)
    at weblogic.servlet.internal.WebAppModule.prepare(WebAppModule.java:356)
    at weblogic.application.internal.flow.ScopedModuleDriver.prepare(ScopedModuleDriver.java:176)
    at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:199)
    at weblogic.application.internal.flow.DeploymentCallbackFlow$1.next(DeploymentCallbackFlow.java:391)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:83)
    at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:59)
    at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:43)
    at weblogic.application.internal.BaseDeployment$1.next(BaseDeployment.java:1221)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:83)
    at weblogic.application.internal.BaseDeployment.prepare(BaseDeployment.java:367)
    at weblogic.application.internal.SingleModuleDeployment.prepare(SingleModuleDeployment.java:39)
    at weblogic.application.internal.DeploymentStateChecker.prepare(DeploymentStateChecker.java:154)
    at weblogic.deploy.internal.targetserver.AppContainerInvoker.prepare(AppContainerInvoker.java:60)
    at weblogic.deploy.internal.targetserver.operations.ActivateOperation.createAndPrepareContainer(ActivateOperation.java:207)
    at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doPrepare(ActivateOperation.java:98)
    at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepare(AbstractOperation.java:217)
    at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentPrepare(DeploymentManager.java:747)
    at weblogic.deploy.internal.targetserver.DeploymentManager.prepareDeploymentList(DeploymentManager.java:1216)
    at weblogic.deploy.internal.targetserver.DeploymentManager.handlePrepare(DeploymentManager.java:250)
    at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.prepare(DeploymentServiceDispatcher.java:159)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doPrepareCallback(DeploymentReceiverCallbackDeliverer.java:157)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$000(DeploymentReceiverCallbackDeliverer.java:12)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$1.run(DeploymentReceiverCallbackDeliverer.java:45)
    at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

Does anyone have any idea what the problem could be? Many thanks!
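For what it's worth, this is a guess rather than something the stack trace proves: WebLogic's WSEE annotation processing only kicks in when it finds web-service related classes inside the WAR, so a frequent culprit for "Can't find wsdl /wsdls/wsat.wsdl" on a plain JSP application is WebLogic or deployer jars ending up in WEB-INF/lib. In the pom above, weblogic-maven-plugin is also declared as an ordinary project <dependency> with the default compile scope, so it (and its transitive dependencies) gets packaged into the war. Marking it provided, or dropping it from the project dependencies entirely since it is already configured under <plugins>, keeps those jars out of the archive. A sketch of the adjusted project-level dependencies:

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <!-- provided: keep the deployer jars out of WEB-INF/lib so WebLogic does not
         treat the war as a web-service module -->
    <dependency>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>weblogic-maven-plugin</artifactId>
      <version>2.9.1</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>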

  • Objective-C NSMutableArray Count Causes EXC_BAD_ACCESS

    - by JoshEH
    I've been stuck on this for days, and each time I come back to it I make my code more and more confusing to myself. Here's what I'm trying to do: I have a table listing charges; tapping one brings up a modal view with the charge details. When the modal view is presented, an object is created that fetches an XML list of users, parses it, and returns an NSMutableArray via a custom delegate. I then have a button that presents a picker popover; when the popover view is created, the user array is passed to it via initWithArray:. I know the data in the array is right, but when [pickerUsers count] is called I get an EXC_BAD_ACCESS. I assume it's a memory/ownership issue, but nothing seems to help. Any help would be appreciated. Relevant code snippets:

    Charge popover (charge details modal view):

        @interface ChargePopoverViewController .....
            NSMutableArray *pickerUserList;
        @property (nonatomic, retain) NSMutableArray *pickerUserList;

        @implementation ChargePopoverViewController
        @synthesize whoOwesPickerButton, pickerUserList;

        - (void)viewDidLoad {
            JEHWebAPIPickerUsers *fetcher = [[JEHWebAPIPickerUsers alloc] init];
            fetcher.delegate = self;
            [fetcher fetchUsers];
        }

        - (void)JEHWebAPIFetchedUsers:(NSMutableArray *)theData {
            [pickerUserList release];
            pickerUserList = theData;
        }

        - (void)pickWhoPaid:(id)sender {
            UserPickerViewController *content = [[UserPickerViewController alloc] initWithArray:pickerUserList];
            UIPopoverController *popover = [[UIPopoverController alloc] initWithContentViewController:content];
            [popover presentPopoverFromRect:whoPaidPickerButton.frame inView:self.view permittedArrowDirections:UIPopoverArrowDirectionAny animated:YES];
            content.delegate = self;
        }

    User picker view controller:

        @interface UserPickerViewController .....
            NSMutableArray *pickerUsers;
        @property (nonatomic, retain) NSMutableArray *pickerUsers;

        @implementation UserPickerViewController
        @synthesize pickerUsers;

        - (UserPickerViewController *)initWithArray:(NSMutableArray *)theUsers {
            self = [super init];
            if (self) {
                self.pickerUsers = theUsers;
            }
            return self;
        }

        - (NSInteger)pickerView:(UIPickerView *)thePickerView numberOfRowsInComponent:(NSInteger)component {
            // Dies here with EXC_BAD_ACCESS, although NSLog(@"The content of array is %@", pickerUsers); shows the correct array data
            return [pickerUsers count];
        }

    I can provide additional code if it might help. Thanks in advance.
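    A hedged guess at the cause, judging only from the snippets above: under manual reference counting, JEHWebAPIFetchedUsers: assigns the incoming (most likely autoreleased) array straight to the ivar without retaining it, so by the time the picker asks for [pickerUsers count] the array may already have been deallocated; NSLog can still appear to print valid contents from freed memory. A minimal sketch of the kind of change that usually fixes this, using the retaining property setter already declared above:

        // In ChargePopoverViewController: take ownership of the fetched array by
        // going through the retaining property setter instead of the bare ivar.
        - (void)JEHWebAPIFetchedUsers:(NSMutableArray *)theData {
            self.pickerUserList = theData;   // retains theData and releases any previous value
        }

        // Balance the retain when the controller goes away.
        - (void)dealloc {
            [pickerUserList release];
            [super dealloc];
        }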

  • How to automatically resize an EditText widget (with some attributes) in a TableLayout

    - by steff
    Hi everyone, I have a layout issue. What I do is this: I create a TableLayout in XML with zero children:

        <TableLayout android:id="@+id/t_layout_contents"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:layout_below="@id/l_layout_tags"
            android:stretchColumns="1"
            android:paddingLeft="5dip"
            android:paddingRight="5dip" />

    Then I insert the first row programmatically in onCreate():

        tLayoutContents = (TableLayout) findViewById(R.id.t_layout_contents);
        NoteElement nr_1 = new NoteElement(this);
        tLayoutContents.addView(nr_1);

    The class "NoteElement" extends TableRow. The first row just consists of a blank ImageView as a placeholder and an EditText to enter text. NoteElement's constructor looks like this:

        public NoteElement(Context c) {
            super(c);
            this.context = c;
            defaultText = c.getResources().getString(R.string.create_note_help_text);
            imageView = new ImageView(context);
            imageView.setImageResource(android.R.color.transparent);
            LayoutParams params = new LayoutParams(0);
            imageView.setLayoutParams(params);
            addView(imageView);
            addView(addTextField());
        }

    The method addTextField() specifies the attributes of the EditText widget:

        private EditText addTextField() {
            editText = new EditText(context);
            editText.setImeOptions(EditorInfo.IME_ACTION_DONE);
            editText.setMinLines(4);
            editText.setRawInputType(InputType.TYPE_TEXT_FLAG_MULTI_LINE);
            editText.setHint(R.string.create_note_et_blank_text);
            editText.setAutoLinkMask(Linkify.ALL);
            editText.setPadding(5, 0, 0, 0);
            editText.setGravity(Gravity.TOP);
            editText.setVerticalScrollBarEnabled(true);
            LayoutParams params = new LayoutParams(1);
            editText.setLayoutParams(params);
            return editText;
        }

    So far, so good. But my problem occurs as soon as the available horizontal space for the characters is used up: the EditText does not resize itself but switches to a single-line EditText. I am desperately looking for a way to make the EditText grow in height dynamically, depending on the length of the inserted text. Does anyone have a hint on this? Thanks & regards, steff
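    Two things worth trying; this is a sketch based on the code above, not a verified fix. First, a column that is only stretchable can keep growing wider instead of forcing the text to wrap, so make column 1 shrinkable as well. Second, setRawInputType() is called with only the multi-line flag and no text class, which can leave the widget behaving like a single-line field:

        // In onCreate(), after looking up the table, let column 1 shrink so long
        // text wraps inside the row instead of widening it.
        tLayoutContents = (TableLayout) findViewById(R.id.t_layout_contents);
        tLayoutContents.setColumnShrinkable(1, true);

        // In addTextField(): combine the text class with the multi-line flag and
        // disable horizontal scrolling so the EditText grows in height instead.
        editText.setRawInputType(InputType.TYPE_CLASS_TEXT | InputType.TYPE_TEXT_FLAG_MULTI_LINE);
        editText.setHorizontallyScrolling(false);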

  • Wildcards in jnlp template file

    - by Andy
    Since the latest security changes in Java 7u40, the JNLP file has to be signed. This can be done either by adding the final JNLP as JNLP-INF/APPLICATION.JNLP, or by providing a template JNLP as JNLP-INF/APPLICATION_TEMPLATE.JNLP in the signed main jar. The first way works well, but we would like to allow passing a previously unknown number of runtime arguments to our application. Therefore, our APPLICATION_TEMPLATE.JNLP looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <jnlp codebase="*">
            <information>
                <title>...</title>
                <vendor>...</vendor>
                <description>...</description>
                <offline-allowed />
            </information>
            <security>
                <all-permissions/>
            </security>
            <resources>
                <java version="1.7+" href="http://java.sun.com/products/autodl/j2se" />
                <jar href="launcher/launcher.jar" main="true"/>
                <property name="jnlp...." value="*" />
                <property name="jnlp..." value="*" />
            </resources>
            <application-desc main-class="...">
                *
            </application-desc>
        </jnlp>

    The problem is the * inside the application-desc tag. It is possible to wildcard a fixed number of arguments using multiple argument tags (see the code below), but then it is not possible to provide more or fewer arguments to the application (Java Web Start will not start with an empty argument tag):

        <application-desc main-class="...">
            <argument>*</argument>
            <argument>*</argument>
            <argument>*</argument>
        </application-desc>

    Can someone confirm this problem and/or suggest a solution for passing a previously undefined number of runtime arguments to the Java application? Thanks a lot!
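    One workaround worth mentioning, offered as an assumption about your launcher rather than something the JNLP template mechanism promises: keep exactly one wildcarded <argument>*</argument> in the template, have the server pack all runtime arguments into that single argument with a delimiter when it generates the real JNLP, and split them again in the application's main method. A sketch, where LauncherMain and the ";;" delimiter are made-up names:

        public class LauncherMain {
            public static void main(String[] args) {
                // The generated JNLP carries e.g. <argument>foo;;bar;;baz</argument>.
                String[] realArgs = (args.length == 0)
                        ? new String[0]
                        : args[0].split(";;");
                // ... hand realArgs to the existing startup code
            }
        }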

  • NHibernate.NHibernateException: Unable to locate row for retrieval of generated properties: [MyNames

    - by Brad Heller
    It looks like all of my mappings are compiling correctly, and I'm able to get a valid session from the session factory. However, when I call ISession.SaveOrUpdate(obj); I get the exception in the title. Can anyone please help point me in the right direction?

        private Configuration configuration;
        protected Configuration Configuration
        {
            get
            {
                configuration = configuration ?? GetNewConfiguration();
                return configuration;
            }
        }

        protected ISession GetNewSession()
        {
            var sessionFactory = Configuration.BuildSessionFactory();
            var session = sessionFactory.OpenSession();
            return session;
        }

        [TestMethod]
        public void TestSessionSave()
        {
            var company = new Company();
            company.Name = "Test Company 1";
            DateTime savedAt = DateTime.Now;
            using (var session = GetNewSession())
            {
                try
                {
                    session.SaveOrUpdate(company);
                }
                catch (Exception e)
                {
                    throw e;
                }
            }
            Assert.IsTrue((company.CreationDate != null && company.CreationDate > savedAt), "Company was saved, creation date was set.");
        }

    For those who might be interested, here is my mapping for this class:

        <!-- Company -->
        <class name="MyNamespace.Company,MyLibrary" table="Companies">
            <id name="Id" column="Id">
                <generator class="native" />
            </id>
            <property name="ExternalId" column="GUID" generated="insert" />
            <property name="Name" column="Name" type="string" />
            <property name="CreationDate" column="CreationDate" generated="insert" />
            <property name="UpdatedDate" column="UpdatedDate" generated="always" />
        </class>
        <!-- End Company -->

    Finally, here is my config -- I'm just connecting to a SQL Server CE instance for testing purposes:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
            <session-factory name="">
                <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
                <property name="connection.driver_class">NHibernate.Driver.SqlServerCeDriver</property>
                <property name="connection.connection_string">Data Source="D:\Build\MyProject\Source\MyProject.UnitTests\MyProject.TestDatabase.sdf"</property>
                <property name="dialect">NHibernate.Dialect.MsSqlCeDialect</property>
                <property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property>
            </session-factory>
        </hibernate-configuration>

    Thanks!
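    A hedged observation rather than a definitive answer: generated="insert"/"always" properties make NHibernate issue a SELECT immediately after the INSERT to read back the database-generated values, and "Unable to locate row" means that follow-up SELECT found nothing. Two things worth checking: that the Companies table really does populate GUID, CreationDate and UpdatedDate itself (via defaults or triggers), and that the save runs inside an explicit, committed transaction so the insert is actually flushed before the test asserts. A minimal sketch of the test rewritten with a transaction, keeping the fixture above unchanged:

        [TestMethod]
        public void TestSessionSave()
        {
            var company = new Company();
            company.Name = "Test Company 1";
            DateTime savedAt = DateTime.Now;

            using (var session = GetNewSession())
            using (var tx = session.BeginTransaction())
            {
                session.SaveOrUpdate(company);
                tx.Commit();   // flushes the INSERT and the follow-up SELECT for the generated columns
            }

            Assert.IsTrue(company.CreationDate != null && company.CreationDate > savedAt,
                "Company was saved, creation date was set.");
        }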
