Search Results

Search found 10597 results on 424 pages for 'dynamic attributes'.


  • REST to Objects in C#

    RESTful interfaces for web services are all the rage for many Web 2.0 sites. If you want to consume these in a very simple fashion, LINQ to XML can do the job pretty easily in C#. If you go searching for help on this, you'll find a lot of incomplete solutions and fairly large toolkits and frameworks (guess how I know this); this quick article is meant to be a no-fluff, just-stuff approach to making this work.

    POCO Objects

    Let's assume you have a Model that you want to suck data into from a RESTful web service. Ideally this is a Plain Old CLR Object, meaning it isn't infected with any persistence or serialization goop. It might look something like this:

        public class Entry
        {
            public int Id;
            public int UserId;
            public DateTime Date;
            public float Hours;
            public string Notes;
            public bool Billable;

            public override string ToString()
            {
                return String.Format("[{0}] User: {1} Date: {2} Hours: {3} Notes: {4} Billable {5}",
                    Id, UserId, Date, Hours, Notes, Billable);
            }
        }

    Note that this is a completely trivial object. Let's look at the API for the service.

    RESTful HTTP Service

    In this case, it's TickSpot's API, with the following sample output:

        <?xml version="1.0" encoding="UTF-8"?>
        <entries type="array">
          <entry>
            <id type="integer">24</id>
            <task_id type="integer">14</task_id>
            <user_id type="integer">3</user_id>
            <date type="date">2008-03-08</date>
            <hours type="float">1.00</hours>
            <notes>Had trouble with tribbles.</notes>
            <billable>true</billable>   <!-- Billable is an attribute inherited from the task -->
            <billed>true</billed>       <!-- Billed is an attribute to track whether the entry has been invoiced -->
            <created_at type="datetime">Tue, 07 Oct 2008 14:46:16 -0400</created_at>
            <updated_at type="datetime">Tue, 07 Oct 2008 14:46:16 -0400</updated_at>
            <!-- The following attributes are derived and provided for informational purposes: -->
            <user_email>[email protected]</user_email>
            <task_name>Remove converter assembly</task_name>
            <sum_hours type="float">2.00</sum_hours>
            <budget type="float">10.00</budget>
            <project_name>Realign dilithium crystals</project_name>
            <client_name>Starfleet Command</client_name>
          </entry>
        </entries>

    I'm assuming in this case that I don't necessarily care about all of the data fields the service is returning - I just need some of them for my application's purposes. Thus, you can see there are more elements in the <entry> XML than I have in my Entry class.

    Get the XML with C#

    The next step is to get the XML. The following snippet does the heavy lifting once you pass it the appropriate URL:

        protected XElement GetResponse(string uri)
        {
            var request = WebRequest.Create(uri) as HttpWebRequest;
            request.UserAgent = ".NET Sample";
            request.KeepAlive = false;
            request.Timeout = 15 * 1000;

            var response = request.GetResponse() as HttpWebResponse;

            if (request.HaveResponse == true && response != null)
            {
                var reader = new StreamReader(response.GetResponseStream());
                return XElement.Parse(reader.ReadToEnd());
            }
            throw new Exception("Error fetching data.");
        }

    This is adapted from the Yahoo Developer article on Web Service REST calls. Once you have the XML, the last step is to get the data back as your POCO.
    Use LINQ to XML to Deserialize POCOs from XML

    This is done via the following code:

        public IEnumerable<Entry> List(DateTime startDate, DateTime endDate)
        {
            string additionalParameters = String.Format("start_date={0}&end_date={1}",
                startDate.ToShortDateString(), endDate.ToShortDateString());
            string uri = BuildUrl("entries", additionalParameters);

            XElement elements = GetResponse(uri);

            var entries = from e in elements.Elements()
                          where e.Name.LocalName == "entry"
                          select new Entry
                          {
                              Id = int.Parse(e.Element("id").Value),
                              UserId = int.Parse(e.Element("user_id").Value),
                              Date = DateTime.Parse(e.Element("date").Value),
                              Hours = float.Parse(e.Element("hours").Value),
                              Notes = e.Element("notes").Value,
                              Billable = bool.Parse(e.Element("billable").Value)
                          };
            return entries;
        }

    For completeness, here's the BuildUrl method for my TickSpot API wrapper:

        // Change these to your settings
        protected const string projectDomain = "DOMAIN.tickspot.com";
        private const string authParams = "[email protected]&password=MyTickSpotPassword";

        protected string BuildUrl(string apiMethod, string additionalParams)
        {
            if (projectDomain.Contains("DOMAIN"))
            {
                throw new ApplicationException("You must update your domain in ProjectRepository.cs.");
            }
            if (authParams.Contains("MyTickSpotPassword"))
            {
                throw new ApplicationException("You must update your email and password in ProjectRepository.cs.");
            }
            return string.Format("https://{0}/api/{1}?{2}&{3}",
                projectDomain, apiMethod, authParams, additionalParams);
        }

    That's it! Now go forth and consume XML and map it to classes you actually want to work with. Have fun!
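    For illustration, here is a minimal sketch of how the wrapper above might be consumed. TickSpotRepository is an assumed name for the class holding List(), BuildUrl() and GetResponse() (the article never names it), and this assumes List() is publicly callable:

        using System;

        class Program
        {
            static void Main()
            {
                // Hypothetical wrapper class holding the methods from the article.
                var repository = new TickSpotRepository();

                // Pull the last week of entries and print them via Entry.ToString().
                foreach (var entry in repository.List(DateTime.Today.AddDays(-7), DateTime.Today))
                {
                    Console.WriteLine(entry);
                }
            }
        }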

    Read the article

  • Resolving data redundancy up front

    - by okeofs
    Introduction

    As all of us do when confronted with a problem, the resource of choice is to "Google it". This is where the plot thickens. Recently I was asked to stage data from numerous databases which were to be loaded into a data warehouse. To make a long story short, I was looking for a manner in which to obtain the table names from each database, to ascertain potential overlap. As the source data comes from a SQL database created from dumps of a third-party product, one could say that there were +/- 95 tables for each database.

    Yes, I know that the first instinct is to use the system stored procedure "exec sp_msforeachdb 'select "?" AS db, * from [?].sys.tables'". However, if one stops to think about this, it would be nice to have all the results in a temporary or disk-based table, which in itself implies additional labour. This said, I decided to 're-invent' the wheel. The full code sample may be found at the bottom of this article.

    Define a few temporary tables and variables

        declare @SQL varchar(max);
        declare @databasename varchar(75)
        /*
        drop table ##rawdata3
        drop table #rawdata1
        drop table #rawdata11
        */

        -- A temp table to hold the names of my databases
        CREATE TABLE #rawdata1
        (
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )

        -- A temp table with the same database names as above, HOWEVER using an
        -- identity number (recNo) as a loop variable.
        -- You will note below that I loop through until I reach 25, as at
        -- that point the system databases, the reporting server database etc. begin.
        -- 1-24 are user databases. These are really what I was looking for.
        -- Whilst NOT the best solution, it works, and the code was meant as a
        -- quick and dirty.
        CREATE TABLE #rawdata11
        (
           recNo int identity(1,1),
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )

        -- My output table showing the database name and table name
        CREATE TABLE ##rawdata3
        (
           database_name varchar(75),
           table_name varchar(75)
        )

    Insert the database names into a temporary table

    I pull the database names using the system stored procedure sp_databases:

        INSERT INTO #rawdata1
        EXEC sp_databases
        GO

    Insert the results from #rawdata1 into a table containing a record number (#rawdata11) so that I can LOOP through the extract:

        INSERT INTO #rawdata11
        SELECT * FROM #rawdata1

    We now declare three more variables: @kounter is used to keep track of our position within the loop. @databasename is used to keep track of the 'current' database name being used in the current pass of the loop, as in order to obtain the tables for that database we need to issue a 'USE' statement, an insert command and other related code parts. This is the challenging part. @sql is a varchar(max) variable used to contain the 'USE' statement PLUS the 'insert' code statements. We now initialize @kounter to 1.

        declare @kounter int;
        declare @databasename varchar(75);
        declare @sql varchar(max);
        set @kounter = 1

    The Loop

    The astute reader will remember that the temporary table #rawdata11 contains our database names and each 'database row' has a record number (recNo). I am only interested in record numbers under 25. I now set the value of the temporary variable @databasename (see below). Note that I used the row number as a part of the predicate. Now, knowing the database name, I can create dynamic T-SQL to be executed using the sp_sqlexec stored procedure (see the code below).
    Finally, after all the tables for that given database have been placed in temporary table ##rawdata3, I increment the counter and continue on. Note that I used a global temporary table to ensure that the result set persists after the termination of the run. At some stage, I plan to redo this part of the code, as global temporary tables are not really an ideal solution.

        WHILE (@kounter < 25)
        BEGIN
           select @databasename = database_name from #rawdata11 where recNo = @kounter
           set @SQL = 'Use ' + @databasename + ' Insert into ##rawdata3 '
                    + ' SELECT table_catalog, table_name FROM information_schema.tables'
           exec sp_sqlexec @SQL
           SET @kounter = @kounter + 1
        END

    The full code extract

    Here is the full code sample:

        declare @SQL varchar(max);
        declare @databasename varchar(75)
        /*
        drop table ##rawdata3
        drop table #rawdata1
        drop table #rawdata11
        */
        CREATE TABLE #rawdata1
        (
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )
        CREATE TABLE #rawdata11
        (
           recNo int identity(1,1),
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )
        CREATE TABLE ##rawdata3
        (
           database_name varchar(75),
           table_name varchar(75)
        )
        INSERT INTO #rawdata1
        EXEC sp_databases
        GO
        INSERT INTO #rawdata11
        SELECT * FROM #rawdata1
        declare @kounter int;
        declare @databasename varchar(75);
        declare @sql varchar(max);
        set @kounter = 1
        WHILE (@kounter < 25)
        BEGIN
           select @databasename = database_name from #rawdata11 where recNo = @kounter
           set @SQL = 'Use ' + @databasename + ' Insert into ##rawdata3 '
                    + ' SELECT table_catalog, table_name FROM information_schema.tables'
           exec sp_sqlexec @SQL
           SET @kounter = @kounter + 1
        END

        select * from ##rawdata3 where table_name like '%SalesOrderHeader%'
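    As an aside, if the staging driver lives in managed code anyway, the same inventory can be gathered without temporary tables at all. The following C# sketch (the connection string, the database_id > 4 filter and all naming are my assumptions, not part of the original script) mirrors what the loop above collects in ##rawdata3:

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        static class TableInventory
        {
            // Returns (database, table) pairs for every user database on the server.
            public static List<Tuple<string, string>> ListAllTables(string connectionString)
            {
                var results = new List<Tuple<string, string>>();
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();

                    // database_id > 4 skips master, tempdb, model and msdb.
                    var databases = new List<string>();
                    using (var cmd = new SqlCommand(
                        "SELECT name FROM sys.databases WHERE database_id > 4", conn))
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            databases.Add(reader.GetString(0));

                    foreach (var db in databases)
                    {
                        using (var cmd = new SqlCommand(
                            "SELECT table_catalog, table_name FROM [" + db + "].information_schema.tables", conn))
                        using (var reader = cmd.ExecuteReader())
                            while (reader.Read())
                                results.Add(Tuple.Create(reader.GetString(0), reader.GetString(1)));
                    }
                }
                return results;
            }
        }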

    Read the article

  • Notes on implementing Visual Studio 2010 Navigate To

    - by cyberycon
    One of the many neat functions added to Visual Studio in VS 2010 was the Navigate To feature. You can find it by clicking Edit, Navigate To, or by using the keyboard shortcut Ctrl+, (yes, that's Control plus the comma key). This pops up the Navigate To dialog. As you type, Navigate To starts searching through a number of different search providers for your term. The entries in the list change as you type, with most providers doing some kind of fuzzy or at least substring matching. If you have C#, C++ or Visual Basic projects in your solution, all symbols defined in those projects are searched. There's also a file search provider, which displays all matching filenames from projects in the current solution as well. And, if you have a Visual Studio package of your own, you can implement a provider too. Micro Focus (where I work) provides the Visual COBOL language inside Visual Studio (http://visualstudiogallery.msdn.microsoft.com/ef9bc810-c133-4581-9429-b01420a9ea40), and we wanted to provide this functionality too. This post provides some notes on the things I discovered, mainly through trial and error, but also with some kind help from devs inside Microsoft.

    The expectation of Navigate To is that it searches across the whole solution, not just the current project. So in our case, we wanted to search for all COBOL symbols inside all of our Visual COBOL projects inside the solution.

    First of all, here's the Microsoft documentation on Navigate To: http://msdn.microsoft.com/en-us/library/ee844862.aspx. It's the reference information on the Microsoft.VisualStudio.Language.NavigateTo.Interfaces namespace, and it lists all the interfaces you will need to implement to create your own Navigate To provider.

    Navigate To uses Visual Studio's latest mechanism for integrating external functionality and services, the Managed Extensibility Framework (MEF). MEF components don't require any registration with COM or any other registry entries to be found by Visual Studio. Visual Studio looks in several well-known locations for manifest files (extension.vsixmanifest). It then uses reflection to scan for MEF attributes on classes in the assembly to determine which functionality the assembly provides. MEF itself is actually part of the .NET Framework, and you can learn more about it here: http://mef.codeplex.com/. To get started with Visual Studio and MEF you could do worse than look at some of the editor examples on the VSX page: http://archive.msdn.microsoft.com/vsx. I've also written a small application to help with switching between development and production MEF assemblies, which you can find on CodeProject: http://www.codeproject.com/KB/miscctrl/MEF_Switch.aspx.

    The Navigate To interfaces

    Back to Navigate To, and summarizing the MSDN reference documentation, you need to implement the following interfaces:

    INavigateToItemProviderFactory
    This is Visual Studio's entry point to your Navigate To implementation, and you must decorate your implementation with the following MEF export attribute: [Export(typeof(INavigateToItemProviderFactory))]

    INavigateToItemProvider
    Your INavigateToItemProviderFactory needs to return your implementation of INavigateToItemProvider. This class implements StartSearch() and StopSearch(). StartSearch() is the guts of your provider, and we'll come back to it in a minute. This object also needs to implement IDisposable.

    INavigateToItemDisplayFactory
    Your INavigateToItemProvider hands back NavigateToItems to the Navigate To framework. But to give you good control over what appears in the Navigate To dialog box, these items will be handed back to your INavigateToItemDisplayFactory, which must create objects implementing INavigateToItemDisplay.

    INavigateToItemDisplay
    Each of these objects represents one result in the Navigate To dialog box. As well as providing the description and name of the item, this object also has a NavigateTo() method that should be capable of displaying the item in an editor when invoked.

    Carrying out the search

    The lifecycle of your INavigateToItemProvider is the same as that of the Navigate To dialog. This dialog is modal, which makes your implementation a little easier because you know that the user can't be changing things in editors and the IDE while this dialog is up. But the Navigate To dialog DOES NOT run on the main UI thread of the IDE - so you need to be aware of that if you want to interact with editors or other parts of the IDE UI.

    When the user invokes the Navigate To dialog, your INavigateToItemProviderFactory gets sent a TryCreateNavigateToItemProvider() message. Instantiate your INavigateToItemProvider and hand this back. Your INavigateToItemProvider will then get called with StartSearch(), and passed an INavigateToCallback. StartSearch() is an asynchronous request - you must return from this method as soon as possible, and conduct your search on a separate thread. For each match to the search term, instantiate a NavigateToItem object and send it to INavigateToCallback.AddItem().

    But as the user types in the Search Terms field, Navigate To will invoke your StartSearch() method repeatedly with the changing search term. When you receive the next StartSearch() message, you have to abandon your current search and start a new one. You can't rely on receiving a StopSearch() message every time. Finally, when the Navigate To dialog box is closed by the user, you will get a Dispose() message - that's your cue to abandon any uncompleted searches and dispose of any resources you might be using as part of your search.

    While you conduct your search, invoke INavigateToCallback.ReportProgress() occasionally to provide feedback about how close you are to completing the search. There does not appear to be any particular requirement for how often you invoke ReportProgress(), and you report your progress as the ratio of two integers. In my implementation I report progress in terms of the number of symbols I've searched over the total number of symbols in my dictionary, and send a progress report every 16 symbols.

    Displaying the results

    The Navigate To framework invokes INavigateToItemDisplayFactory.CreateItemDisplay() once for each result you passed to the INavigateToCallback. CreateItemDisplay() is passed the NavigateToItem you handed to the callback, and must return an INavigateToItemDisplay object. NavigateToItem is a sealed class which has a few properties, including the name of the symbol. It also has a Tag property, of type object. This enables you to stash away all the information you will need to create your INavigateToItemDisplay, which must implement a NavigateTo() method to display a symbol in an editor when the user double-clicks an entry in the Navigate To dialog box. Since the tag is of type object, it is up to you, the implementor, to decide what kind of object you store in here, and how it enables the retrieval of other information which is not included in the NavigateToItem properties.
    Some of the INavigateToItemDisplay properties are self-explanatory, but a couple of them are less obvious:

    Additional information
    The string you return here is displayed inside brackets on the same line as the Name property. In English locales, Visual Studio includes the preposition "of". For example, in the Navigate To dialog, Book_WebRole.Default is the additional information for textBookAuthor, and is the namespace-qualified type name the symbol appears in. For procedural COBOL code we display the Program Id as the additional information.

    DescriptionItems
    You can use this property to return any textual description you want about the item currently selected. You return a collection of DescriptionItem objects, each of which has a category and a description collection of DescriptionRun objects. A DescriptionRun enables you to specify some text, and optional formatting, so you have some control over the appearance of the displayed text. The DescriptionItems property is displayed at the bottom of the Navigate To dialog box, with the categories on the left and the descriptions on the right. The Visual COBOL implementation uses it to display more information about the location of an item, making it easier for the user to disambiguate duplicate names (something there can be a lot of in large COBOL applications).

    Summary

    I hope this article is useful for anyone implementing Navigate To. It is a fantastic navigation feature that Microsoft have added to Visual Studio, but at the moment there still don't seem to be any examples of how to implement it, and the reference information on MSDN is a little brief for anyone attempting an implementation.
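    In that spirit, here is a heavily abridged C# skeleton of a provider. Treat it as a sketch only: the MEF export and the StartSearch()/StopSearch() lifecycle follow the interfaces described above, the symbol matching is a stand-in for whatever your language service provides, and the callback.Done() call is my assumption about how completion is signalled.

        using System;
        using System.ComponentModel.Composition;
        using System.Threading;
        using Microsoft.VisualStudio.Language.NavigateTo.Interfaces;

        [Export(typeof(INavigateToItemProviderFactory))]
        public class CobolNavigateToItemProviderFactory : INavigateToItemProviderFactory
        {
            public bool TryCreateNavigateToItemProvider(IServiceProvider serviceProvider,
                out INavigateToItemProvider provider)
            {
                provider = new CobolNavigateToItemProvider();
                return true;
            }
        }

        public class CobolNavigateToItemProvider : INavigateToItemProvider
        {
            private CancellationTokenSource _cancellation = new CancellationTokenSource();

            public void StartSearch(INavigateToCallback callback, string searchValue)
            {
                // A new search term invalidates any search still in flight.
                StopSearch();
                var token = _cancellation.Token;

                // Return immediately; do the real work on a background thread.
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    // Stand-in: walk your symbol dictionary here. For each match,
                    // build a NavigateToItem (stashing location details in its Tag)
                    // and pass it to callback.AddItem(), calling
                    // callback.ReportProgress(done, total) every so often.
                    if (!token.IsCancellationRequested)
                        callback.Done();
                });
            }

            public void StopSearch()
            {
                _cancellation.Cancel();
                _cancellation = new CancellationTokenSource();
            }

            public void Dispose()
            {
                // Called when the dialog closes - abandon any uncompleted search.
                StopSearch();
            }
        }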

    Read the article

  • Azure Task Scheduling Options

    - by charlie.mott
    Currently, the Azure PaaS does not offer a distributed/resilient task scheduling service. If you do want to host a task scheduling product/solution off-premise (and ideally use Azure), what are your options?

    PaaS

    Option 1: Worker Roles

    Use a worker role to schedule and execute actions at specific time periods. There are a few frameworks available to assist with this:

    - http://azuretoolkit.codeplex.com
    - https://github.com/Lokad/lokad-cloud/wiki/TaskScheduler
    - http://blog.smarx.com/posts/building-a-task-scheduler-in-windows-azure - This addresses a slightly different set of requirements. It's a more dynamic approach for queuing up tasks, but not repeatable tasks (e.g. daily).

    I found the Azure Toolkit option the simplest to implement.

    Step 1: Create a domain entity implementing IJob for each job to schedule. In this sample, I asynchronously call a WCF service method.

        namespace Acme.WorkerRole.Jobs
        {
            using AzureToolkit;
            using ScheduledTasksService;

            public class UploadEmployeesJob : IJob
            {
                public void Run()
                {
                    // Call Tasks Service
                    var client = new ScheduledTasksServiceClient("BasicHttpBinding_IScheduledTasksService");
                    client.UploadEmployees();
                    client.Close();
                }
            }
        }

    Step 2: In the worker role Run method, add the jobs to the toolkit engine.

        namespace Acme.WorkerRole
        {
            using AzureToolkit.Engine;
            using Jobs;

            public class WorkerRole : WorkerRoleEntryPoint
            {
                public override void Run()
                {
                    var engine = new CloudEngine();

                    // Add scheduled jobs (using cron syntax -
                    // see http://www.adminschoice.com/crontab-quick-reference).

                    // 1. Upload Employees job - 8.00 PM every weekday (Mon-Fri)
                    engine.WithJobScheduler().ScheduleJob<UploadEmployeesJob>(c => { c.CronSchedule = "0 20 * * 1-5"; });
                    // 2. Purge Data job - 10 AM every Saturday
                    engine.WithJobScheduler().ScheduleJob<PurgeDataJob>(c => { c.CronSchedule = "0 10 * * 6"; });
                    // 3. Process Exceptions job - every 5 minutes
                    engine.WithJobScheduler().ScheduleJob<ProcessExceptionsJob>(c => { c.CronSchedule = "*/5 * * * *"; });

                    engine.Run();
                    base.Run();
                }
            }
        }

    Pros: The Azure Toolkit option is simple to implement.

    Cons: For the Azure Toolkit option, you are limited to a single worker role instance; otherwise, the jobs will be executed multiple times, once for each worker role instance. You are also paying for a continuously running worker role, even if it just processes a single job once a week. If you only have a few scheduled tasks to run, calling asynchronous services hosted in different web roles, an extra small worker role is likely to be sufficient; however, even an extra small worker role still costs $14.40/month (03/09/2012).

    Option 2: Use a Scheduled Task on an Azure Web Role calling a console app

    Set up a Windows Scheduled Task on the Azure Web Role. This calls a console application that calls the WCF service methods that run the task actions. This design is described here:

    - http://www.ronaldwidha.net/2011/02/23/cron-job-on-azure-using-scheduled-task-on-a-web-role-to-replace-azure-worker-role-for-background-job/
    - http://www.voiceoftech.com/swhitley/index.php/2011/07/windows-azure-task-scheduler/
    - http://devlicio.us/blogs/vinull/archive/2011/10/23/moving-to-azure-worker-roles-for-nothing-and-tasks-for-free.aspx

    Pros: Fairly easy to implement.

    Cons: Supportability - I RDC'ed onto the Azure server and stopped the scheduled task. I then rebooted the machine and the task was re-started. I also tried deleting the task and rebooting, and the same thing occurred. The only way to permanently guarantee that a task is disabled is to do a fresh deployment. I think this is a major supportability concern. Scalability - multiple instances would trigger multiple tasks: you can only have one instance for the scheduled task web role. The guidance implements setup of the scheduled task as part of a web role instance, but if you have more than one instance in a web role, the task will be triggered multiple times for each scheduled action (once per machine). Workaround: if we wanted to use scheduled tasks for another client with a scalable WCF service, then we could include the console app and task scripts in a separate web role (e.g. an empty WCF service with no real purpose to it).

    SaaS

    Option 3: Azure Marketplace

    I thought that someone might be offering this type of service via the Azure Marketplace (https://datamarket.azure.com/). At the point of writing this blog post, I did not find anyone doing so.

    Pros: none identified.

    Cons: Nobody currently offers this on the Azure Marketplace.

    Option 4: Online Job Scheduling Service Provider

    There are plenty of online providers that offer this type of service on a pay-as-you-go approach. Some of these are free for small usage. Many of these providers are listed here: http://en.wikipedia.org/wiki/Webcron

    Pros: No bespoke development for the scheduler.

    Cons: Reliance on a third party.

    IaaS

    Option 5: Set up Scheduling Software on Azure IaaS VMs

    One of the job scheduling software offerings could be installed and configured on Azure VMs. A list of software options is given here: http://en.wikipedia.org/wiki/List_of_job_scheduler_software

    Pros: Enterprise distributed/resilient task scheduling service.

    Cons: VM setup and maintenance; software licence costs.

    Option 6: VM Gallery

    At the time of writing this blog post, I did not spot a VM in the gallery that included pre-installation of any of the above software options.

    Pros: none identified.

    Cons: No current VM template.

    Summary

    For my current project, which had a small handful of tasks to schedule with a limited project budget, I chose Option 1 (a worker role using the Azure Toolkit to schedule tasks). If I was building an enterprise-scale solution for the future, Options 4 and 5 are currently worthy of consideration. Hopefully, Microsoft will include task scheduling in the future as part of their PaaS offerings.
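    As a footnote to Option 1's single-instance limitation: one mitigation sometimes used (sketched below; this is not from the original guidance) is to let only one instance in the role run the scheduler, so that additional instances added for other workloads simply skip the jobs. The sketch assumes the Microsoft.WindowsAzure.ServiceRuntime API and that first-instance election is acceptable for your failover needs.

        using System.Linq;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public static class SchedulerElection
        {
            // Sketch: treat the first instance id in this role (sorted) as the
            // "leader" that runs scheduled jobs; every other instance skips them.
            public static bool ShouldRunScheduler()
            {
                var current = RoleEnvironment.CurrentRoleInstance;
                var first = current.Role.Instances.OrderBy(i => i.Id).First();
                return current.Id == first.Id;
            }
        }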

    Read the article

  • Clouds Everywhere But not a Drop of Rain – Part 3

    - by sxkumar
    I was sharing with you how a broad-based transformation such as cloud will increase the agility and efficiency of an organization if process re-engineering is part of the plan. I have also stressed the key enterprise requirements such as "broad and deep solutions", "running your mission-critical applications" and "an automated and integrated set of capabilities". Let me walk you through some key cloud attributes such as "elasticity" and "self-service" and what they mean for an enterprise-class cloud. I will also talk about how we at Oracle have taken a very enterprise-centric view to developing cloud solutions and how our products have been specifically engineered to address enterprise cloud needs.

    Cloud Elasticity and Enterprise Applications Requirements

    Easy and quick scalability for a short period of time is the signature of cloud-based solutions. It is this elasticity that allows you to dynamically redistribute your resources according to business priorities, helps increase your overall resource utilization, and reduces operational costs by allowing you to get the most out of your existing investment. Most public clouds offer an instant provisioning mechanism for compute power (CPU, RAM, disk); customers pay for the instance-hours (and bandwidth) they use, adding computing resources at peak times and removing them when they are no longer needed. This type of "just-in-time" serving of compute resources is well known for mid-tier "stateless" servers such as web application servers and web servers that just need another machine to start and run on, but what does it really mean for an enterprise application and its underlying data? Most enterprise applications are not quite as "stateless", and justifiably so. As such, how do you take advantage of cloud elasticity and make it relevant for your enterprise apps? This is where cloud meets grid computing.

    At Oracle, we have invested an enormous amount of time, energy and resources in creating enterprise grid solutions. All our technology products offer built-in elasticity via clustering and dynamic scaling. With products like Real Application Clusters (RAC), Automatic Storage Management, WebLogic Clustering, and Coherence In-Memory Grid, we allow all your enterprise applications to benefit from cloud elasticity - both vertically and horizontally - without requiring any application changes.

    A number of technology vendors take the rather simplistic route of starting up additional VMs, or removing unneeded ones, as the "cloud scale-out" solution. This may work for stateless mid-tier servers, where load balancers can handle the addition and removal of instances transparently, but following a similar approach for the database tier - often called "database sharding" - requires significant application modification and typically does not work with off-the-shelf packaged applications. Technologies like Oracle Database Real Application Clusters and Automatic Storage Management, on the other hand, bring the benefits of incremental scalability and on-demand elasticity to ANY application by providing a simplified abstraction layer where the application does not need to deal with data spread over multiple database instances. Rather, it just talks to a single database, and the database software takes care of aggregating resources across multiple hardware components. It is technologies like these that truly make a cloud solution relevant for enterprises.
    For customers who are looking for a next-generation hardware consolidation platform, our engineered systems (e.g. Exadata, Exalogic) not only provide an incredible amount of performance and capacity, they also reduce data center complexity and simplify operations.

    Assemble, Deploy and Manage Enterprise Applications for Cloud

    Products like Oracle Virtual Assembly Builder (OVAB) resolve the complex problem of bringing cloud speed to complex multi-tier applications. With assemblies, you can not only provision all components of a multi-tier application and wire them together at the push of a button; other aspects of the application lifecycle, such as real-time application testing, scale-up/scale-down, and performance and availability monitoring, are also automated using Oracle Enterprise Manager.

    An essential criterion for an enterprise cloud to succeed is the ability to ensure business service levels, especially when business users have full visibility of usage costs through a "showback" or a "chargeback". With Oracle Enterprise Manager 12c, we have created the most comprehensive cloud management solution in the industry, capable of managing business service levels "applications-to-disk" in an enterprise private cloud - all from a single console. It is the only cloud management platform in the industry that allows you to deliver infrastructure, platform and application cloud services out of the box. Moreover, it offers integrated and complete lifecycle management of the cloud - including planning and set up, service delivery, operations management, metering and chargeback, etc. Sounds unbelievable? Well, just watch this space for more details on how Oracle Enterprise Manager 12c is the nerve center of Oracle Cloud!

    Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private and hybrid clouds across infrastructure, platform and applications. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And to a large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies.

    Summary

    But the intent of this blog post isn't to dwell on how great our solutions are (these are just some examples to illustrate how we at Oracle have approached this problem space). Rather, it is to help you ask the right questions before you embark on your cloud journey. So to summarize, here are the key takeaways:

    - It is critical that you are clear on why you are building the cloud. Successful organizations keep business benefits as the first and foremost cloud objective. On the other hand, those who approach this purely as a technology project are more likely to fail.

    - Think about where you want to be in 3-5 years before you get started. Your long-term objectives should determine what your first step ought to be. As obvious as it may seem, more people than not make the first move without knowing where they are headed.

    - Don't make the mistake of equating cloud to virtualization and Infrastructure-as-a-Service (IaaS). Spinning up a VM on demand will give some short-term relief to your IT staff but is unlikely to solve your larger business problems. As such, even if IaaS is your first step towards a more comprehensive cloud, plan the roadmap around those higher-level services before you begin. And ask your vendors how they are going to be your partners in this journey.

    - Capabilities like self-service access and chargeback/showback are absolutely critical if you really expect your cloud to be transformational. Your business won't see the full benefits of the cloud until it empowers them with the same kind of control and transparency that they are used to while using a public cloud service.

    - Evaluate the benefits of integration, as opposed to blindly following a best-of-breed strategy. Integration is a huge challenge, and more so in a cloud environment. There are enormous costs associated with stitching a solution out of disparate components, and even more in maintaining it.

    Hope you found these ideas helpful. Looking forward to hearing your thoughts and experiences.

    Read the article

  • SOA Implementation Challenges

    Why do companies think that if they put up a web service they are doing Service-Oriented Architecture (SOA)? Unfortunately, the IT and business world love to run on the latest hype or buzzwords, of which very few even understand the meaning. One of the largest issues companies have today as they consider going down the path of SOA is the lack of knowledge regarding the architectural style and the overuse of the term SOA. So how do we solve this issue?

    I am sure most of you are thinking by now that you know what SOA is because you developed a few web services. Isn't that SOA? No, that is not SOA, but instead Just Another Web Service (JAWS). For us to better understand what SOA is, let's look at a few definitions.

    Douglas K. Barry defines service-oriented architecture as a collection of services. These services are enabled to communicate with each other in order to pass data or coordinate some activity with other services.

    If you look at this definition closely, you will notice that Barry states that services communicate with each other. Let us compare this statement with my first statement regarding companies that claim to be doing SOA when they have just a collection of web services. In order for these web services to form an SOA application, they need to be interdependent on one another, forming some sort of architectural hierarchy. Just because a company has a few web services does not mean that they are all interconnected.

    SearchSOA from TechTarget.com states that SOA defines how two computing entities work collectively to enable one entity to perform a unit of work on behalf of another. Once again, just because a company has a few web services does not guarantee that they are even working together, let alone performing work for each other. SearchSOA also points out that service interactions should be self-contained and loosely coupled, so that all interactions operate independently of each other.

    Of all the definitions regarding SOA, Thomas Erl's seems to shed the most light on this concept. He states that "SOA establishes an architectural model that aims to enhance the efficiency, agility, and productivity of an enterprise by positioning services as the primary means through which solution logic is represented in support of the realization of the strategic goals associated with service-oriented computing." (Erl, 2011) Once again this definition proves that a collection of web services does not mean that a company is doing SOA. It only means that the company has a collection of web services, and that is it.

    In order for a company to start down the path of SOA, they must take a hard look at their existing business processes while abstracting away any technology, so that they can define what it is they really want to accomplish. Once a company has done this, they can begin to factor out common sub-processes - like credit card processing, user authentication or system notifications - into small components that can be built independently of each other and then reassembled to form new and dynamic services that are loosely coupled and agile, in that they can change as the business grows.

    Another key pitfall of companies doing SOA is the fact that they let vendors drive their architecture. Why do companies do this? Vendors do not hold your company's success as their top priority; in fact, they hold their own success as their top priority, by selling you as much stuff as you are willing to buy.
    In my experience, companies tend to strive for the maximum amount of benefit with a minimal amount of cost. Does anyone else see a conflict between this and the driving force behind vendors?

    Mike Kavis recommends, in an article on CIO.com, that companies figure out what they need before they talk to a vendor, or at least have some idea of what they need. It is important to thoroughly evaluate each vendor and watch them perform a live demo of their system, so that you as the company fully understand what kind of product or service the vendor is actually offering. In addition, do research on each vendor that you are considering: check out blog posts, online reviews, and any information you can find on the vendor through various search engines.

    Finally, he recommends that companies verify any recommendations supplied by a vendor. From personal experience, this is very important. I can remember when the company I worked for purchased a $200,000 add-on to their phone system that never actually worked as it was intended. In fact, just after my departure, the company started the process of attempting to get their money back from the vendor. This potentially could have been avoided if the company had done the research before selecting this vendor, to ensure that the product and vendor would live up to their claims.

    I know that some SOA vendors offer free training regarding SOA because they know that there are a lot of misconceptions about the topic. Superficially, this is a great thing for companies to take part in, especially if the company is starting to implement SOA architecture and is still unsure about some topics or is looking for some guidance. However, beware that some vendors will focus the training solely on their own product line. As an example, InfoWorld.com claims that some companies provide sales seminars disguised as training, focusing more on ESBs and SOA governance technology, and less on how to approach and solve the architectural issues of the attendees.

    In short, it is important to remember that we as software professionals, responsible for guiding a business's technology decisions, should be well informed and fully understand any new concepts that may be considered for implementation. As I have already demonstrated, a company that has a few web services is not necessarily doing SOA. Additionally, we must not let the new buzzword of the day drive our technology; instead, our technology decisions should be driven by research and proven experience. Finally, it is fine to rely on vendors when necessary; however, always take what they say with a grain of salt, cross-checking any claims that they may make, because we have to live with the aftermath of a system after the vendors are gone.

    References:

    Barry, D. K. (2011). Service-oriented architecture (SOA) definition. Retrieved 12 12, 2011, from Service-Architecture.com: http://www.service-architecture.com/web-services/articles/service-oriented_architecture_soa_definition.html

    Connell, B. (2003, 9). Service-oriented architecture (SOA). Retrieved 12 12, 2011, from SearchSOA: http://searchsoa.techtarget.com/definition/service-oriented-architecture

    Erl, T. (2011, 12 12). Service-Oriented Architecture. Retrieved 12 12, 2011, from WhatIsSOA: http://www.whatissoa.com/p10.php

    InfoWorld. (2008, 6 1). Should you get your SOA knowledge from SOA vendors? Retrieved 12 12, 2011, from InfoWorld.com: http://www.infoworld.com/d/architecture/should-you-get-your-soa-knowledge-soa-vendors-453

    Kavis, M. (2008, 6 18). Top 10 Reasons Why People are Making SOA Fail. Retrieved 12 13, 2011, from CIO.com: http://www.cio.com/article/438413/Top_10_Reasons_Why_People_are_Making_SOA_Fail?page=5&taxonomyId=3016

    Read the article

  • Why JSF Matters (to You)

    - by reza_rahman
          "Those who have knowledge, don’t predict. Those who predict, don’t have knowledge."                                                                                                    – Lao Tzu You may have noticed Thoughtworks recently crowned the likes AngularJS, etc imminent successors to server-side web frameworks. They apparently also deemed it necessary to single out JSF for righteous scorn. I have to say as I was reading the analysis I couldn't help but remember they also promptly jumped on the Ruby, Rails, Clojure, etc bandwagon a good few years ago seemingly similarly crowing these dynamic languages imminent successors to Java. I remember thinking then as I do now whether the folks at Thoughtworks are really that much smarter than me or if they are simply more prone to the Hipster buzz of the day. I'll let you make the final call on that one. I also noticed mention of "J2EE" in the context of JSF and had to wonder how up-to-date or knowledgeable the person writing the analysis actually was given that the term was basically retired almost a decade ago. There's one thing that I am absolutely sure about though - as a long time pretty happy user of JSF, I had no choice but to speak up on what I believe JSF offers. If you feel the same way, I would encourage you to support the team behind JSF whose hard work you may have benefited from over the years. True to his outspoken character PrimeFaces lead Cagatay Civici certainly did not mince words making the case for the JSF ecosystem - his excellent write-up is well worth a read. He specifically pointed out the practical problems in going whole hog with bare metal JavaScript, CSS, HTML for many development teams. I'll admit I had to smile when I read his closing sentence as well as the rather cheerful comments to the post from actual current JSF/PrimeFaces users that are apparently supposed to be on a gloomy death march. In a similar vein, OmniFaces developer Arjan Tijms did a great job pointing out the fact that despite the extremely competitive server-side Java Web UI space, JSF seems to manage to always consistently come out in either the number one or number two spot over many years and many data sources - do give his well-written message in the JAX-RS user forum a careful read. I don't think it's really reasonable to expect this to be the case for so many years if JSF was not at least a capable if not outstanding technology. If fact if you've ever wondered, Oracle itself is one of the largest JSF users on the planet. As Oracle's Shay Shmeltzer explains in a recent JSF Central interview, many of Oracle's strategic products such as ADF, ADF Mobile and Fusion Applications itself is built on JSF. There are well over 3,000 active developers working on these codebases. I don't think anyone can think of a more compelling reason to make sure that a technology is as effective as possible for practical development under real world conditions. Standing on the shoulders of the above giants, I feel like I can be pretty brief in making my own case for JSF: JSF is a powerful abstraction that brings the original Smalltalk MVC pattern to web development. This means cutting down boilerplate code to the bare minimum such that you really can think of just writing your view markup and then simply wire up some properties and event handlers on a POJO. The best way to see what this really means is to compare JSF code for a pretty small case to other approaches. 
You should then multiply the additional work for the typical enterprise project to try to understand what the productivity trade-offs are. This is reason alone for me to personally never take any other approach seriously as my primary web UI solution unless it can match the sheer productivity of JSF. Thanks to JSF's focus on components from the ground-up JSF has an extremely strong ecosystem that includes projects like PrimeFaces, RichFaces, OmniFaces, ICEFaces and of course ADF Faces/Mobile. These component libraries taken together constitute perhaps the largest widget set ever developed and optimized for a single web UI technology. To begin to grasp what this really means, just briefly browse the excellent PrimeFaces showcase and think about the fact that you can readily use the widgets on that showcase by just using some simple markup and knowing near to nothing about AJAX, JavaScript or CSS. JSF has the fair and legitimate advantage of being an open vendor neutral standard. This means that no single company, individual or insular clique controls JSF - openness, transparency, accountability, plurality, collaboration and inclusiveness is virtually guaranteed by the standards process itself. You have the option to choose between compatible implementations, escape any form of lock-in or even create your own compatible implementation! As you might gather from the quote at the top of the post, I am not a fan of crystal ball gazing and certainly don't want to engage in it myself. Who knows? However far-fetched it may seem maybe AngularJS is the only future we all have after all. If that is the case, so be it. Unlike what you might have been told, Java EE is about choice at heart and it can certainly work extremely well as a back-end for AngularJS. Likewise, you are also most certainly not limited to just JSF for working with Java EE - you have a rich set of choices like Struts 2, Vaadin, Errai, VRaptor 4, Wicket or perhaps even the new action-oriented web framework being considered for Java EE 8 based on the work in Jersey MVC... Please note that any views expressed here are my own only and certainly does not reflect the position of Oracle as a company.

    Read the article

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
    During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation; they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as "Database as a Service". In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches, and as often as I get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don't deny that vanilla automation could be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully.

    For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder:

    1. Does it make life easier for your end users?

    Database-as-a-Service can have several types of end users - junior DBAs, QA engineers, developers - each having their own skill set. The objective of DBaaS is to make their life simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a developer using Oracle Application Express (APEX), you want to deal with schemas, objects and PL/SQL code, and not with datafiles or listener configuration. If you are a QA engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with, a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system? Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor.

    2. Does it make life easier for your administrators?

    Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end-user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, the cloud allowed each end user to create databases of their liking, resulting in hundreds of versions and patch levels and thousands of individual databases? The right question to ask, therefore, is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and it could result in a management nightmare. That leads us to our next question.

    3. Does it satisfy your Security Officers and Compliance Auditors?

    Compliance auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom in tampering with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with VMs as they keep getting reconfigured and moved around. This leads to the proverbial needle-in-the-haystack problem, and all it takes is one needle to cause a serious compliance issue in the enterprise. The bottom line is, flexibility and agility should not come at the expense of compliance, and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a 'one size fits all' approach, i.e. OS-level isolation, can we think smartly about database isolation or schema-based isolation? This is where the appropriate resource modeling needs to be applied. The usual systems management vendors out there, with their heterogeneous common-denominator approach, have compromised on these semantics. If you follow Enterprise Manager's DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database should be governed by the needs of the applications, and not by putting compliance considerations on the back burner.

    4. Does it satisfy your CIO?

    Finally, does it satisfy your higher-ups? As the sponsor of the cloud initiative, the CIO is expected to lead an IT transformation project, not merely run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation flattens out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, "the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service-level assurance. Last but not least, a lot of CIOs get miffed if we ask them to throw away their existing hardware investments for implementing DBaaS. In Oracle, we always emphasize freedom in choosing a platform; hence Enterprise Manager's DBaaS solution is platform-neutral. It can work on any operating system (that the agent is certified on), on Oracle's hardware as well as third-party hardware.

    As a parting note, I urge you to remember these four questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

  • JPA 2.1 Schema Generation (TOTD #187)

    - by arungupta
    This blog explained some of the key features of JPA 2.1 earlier. Since then, Schema Generation has been added to JPA 2.1. This Tip Of The Day (TOTD) will provide more details about this new feature in JPA 2.1.

    Schema Generation refers to the generation of database artifacts like tables, indexes, and constraints in a database schema. It may or may not involve generation of a proper database schema, depending upon the credentials and authorization of the user. This helps in prototyping your application, where the required artifacts are generated either prior to application deployment or as part of EntityManagerFactory creation. This is also useful in environments that require provisioning a database on demand, e.g. in a cloud. This feature will allow your JPA domain object model to be directly generated in a database. The generated schema may need to be tuned for the actual production environment. This use case is supported by allowing the schema generation to occur into DDL scripts, which can then be further tuned by a DBA.

    The following set of properties in persistence.xml, or specified during EntityManagerFactory creation, controls the behaviour of schema generation:

    javax.persistence.schema-generation-action
        Purpose: Controls the action to be taken by the persistence provider.
        Values: "none", "create", "drop-and-create", "drop"

    javax.persistence.schema-generation-target
        Purpose: Controls whether the schema is to be created in the database, whether DDL scripts are to be created, or both.
        Values: "database", "scripts", "database-and-scripts"

    javax.persistence.ddl-create-script-target, javax.persistence.ddl-drop-script-target
        Purpose: Controls target locations for the writing of scripts. Writers are pre-configured for the persistence provider. Need to be specified only if scripts are to be generated.
        Values: java.io.Writer (e.g. MyWriter.class) or URL strings

    javax.persistence.ddl-create-script-source, javax.persistence.ddl-drop-script-source
        Purpose: Specifies locations from which DDL scripts are to be read. Readers are pre-configured for the persistence provider.
        Values: java.io.Reader (e.g. MyReader.class) or URL strings

    javax.persistence.sql-load-script-source
        Purpose: Specifies the location of a SQL bulk load script.
        Values: java.io.Reader (e.g. MyReader.class) or URL string

    javax.persistence.schema-generation-connection
        Purpose: JDBC connection to be used for schema generation.

    javax.persistence.database-product-name, javax.persistence.database-major-version, javax.persistence.database-minor-version
        Purpose: Needed if scripts are to be generated and there is no connection to the target database.
        Values: Those obtained from JDBC DatabaseMetaData.

    javax.persistence.create-database-schemas
        Purpose: Whether the persistence provider needs to create a schema in addition to creating database objects such as tables, sequences, constraints, etc.
        Values: "true", "false"

    Section 11.2 in the JPA 2.1 specification defines the annotations used for the schema generation process. For example, @Table, @Column, @CollectionTable, @JoinTable, and @JoinColumn are used to define the generated schema. Several layers of defaulting may be involved. For example, the table name is defaulted from the entity name, and the entity name (which can be specified explicitly as well) is defaulted from the class name. However, annotations may be used to override or customize the values. The following entity class:

        @Entity
        public class Employee {
            @Id private int id;
            private String name;
            . . .
            @ManyToOne
            private Department dept;
        }

    is generated in the database with the following attributes:

    - Maps to the EMPLOYEE table in the default schema.
    - The "id" field is mapped to the ID column as the primary key.
    - "name" is mapped to the NAME column with a default VARCHAR(255). The length of this field can be easily tuned using @Column.
    - @ManyToOne is mapped to the DEPT_ID foreign key column. Can be customized using @JoinColumn.

    In addition to these properties, a couple of new annotations are added in JPA 2.1:

    @Index - An index for the primary key is generated by default in a database. This new annotation will allow you to define additional indexes, over a single column or multiple columns, for better performance. It is specified as part of @Table, @SecondaryTable, @CollectionTable, @JoinTable, and @TableGenerator. For example:

        @Table(indexes = {@Index(columnList="NAME"), @Index(columnList="DEPT_ID DESC")})
        @Entity
        public class Employee {
            . . .
        }

    The generated table will have a default index on the primary key. In addition, two new indexes are defined on the NAME column (default ascending) and the foreign key that maps to the department, in descending order.

    @ForeignKey - It is used to define a foreign key constraint, or to otherwise override or disable the persistence provider's default foreign key definition. It can be specified as part of @JoinColumn(s), @MapKeyJoinColumn(s), and @PrimaryKeyJoinColumn(s). For example:

        @Entity
        public class Employee {
            @Id private int id;
            private String name;
            @ManyToOne
            @JoinColumn(foreignKey=@ForeignKey(foreignKeyDefinition="FOREIGN KEY (MANAGER_ID) REFERENCES MANAGER"))
            private Manager manager;
            . . .
        }

    In this entity, the employee's manager is mapped by the MANAGER_ID column that references the MANAGER table. The value of foreignKeyDefinition would be a database-specific string.

    A complete replay of Linda's talk at JavaOne 2012 can be seen here (click on CON4212_mp4_4212_001 in Media). These features will be available in GlassFish 4 promoted builds in the near future. JPA 2.1 will be delivered as part of Java EE 7. The different components in the Java EE 7 platform are tracked here. The JPA 2.1 Expert Group has released Early Draft 2 of the specification; Sections 9.4 and 11.2 provide all the details about Schema Generation. The latest javadocs can be obtained from here. And the JPA EG would appreciate feedback.

    Read the article

  • Using xsl:variable in an xsl:for-each select statement

    - by Nefariousity
    I'm trying to iterate through an XML document using xsl:for-each, but I need the select=" " to be dynamic, so I'm using a variable as the source. Here's what I've tried: ... <xsl:template name="SetDataPath"> <xsl:param name="Type" /> <xsl:variable name="Path_1">/Rating/Path1/*</xsl:variable> <xsl:variable name="Path_2">/Rating/Path2/*</xsl:variable> <xsl:if test="$Type='1'"> <xsl:value-of select="$Path_1"/> </xsl:if> <xsl:if test="$Type='2'"> <xsl:value-of select="$Path_2"/> </xsl:if> </xsl:template> ... <!-- Set Data Path according to Type --> <xsl:variable name="DataPath"> <xsl:call-template name="SetDataPath"> <xsl:with-param name="Type" select="/Rating/Type" /> </xsl:call-template> </xsl:variable> ... <xsl:for-each select="$DataPath"> ... The for-each threw an error stating: "XslTransformException - To use a result tree fragment in a path expression, first convert it to a node-set using the msxsl:node-set() function." When I use the msxsl:node-set() function, though, my results are blank. I'm aware that I'm setting $DataPath to a string, but shouldn't the node-set() function be creating a node set from it? Am I missing something? When I don't use a variable: <xsl:for-each select="/Rating/Path1/*"> I get the proper results. Here's the XML data file I'm using: <Rating> <Type>1</Type> <Path1> <sarah> <dob>1-3-86</dob> <user>Sarah</user> </sarah> <joe> <dob>11-12-85</dob> <user>Joe</user> </joe> </Path1> <Path2> <jeff> <dob>11-3-84</dob> <user>Jeff</user> </jeff> <shawn> <dob>3-5-81</dob> <user>Shawn</user> </shawn> </Path2> </Rating> My question is simple: how do you run a for-each over two different paths?
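    One way to sidestep the result-tree-fragment limitation entirely (a sketch, not tested against this exact stylesheet) is to drop the string variable and let the select expression compute the path as a real node-set, keying off /Rating/Type directly:

    <!-- children of Path1 or Path2, chosen by the value of /Rating/Type -->
    <xsl:for-each select="/Rating/*[local-name() = concat('Path', /Rating/Type)]/*">
      ...
    </xsl:for-each>

    Because this select returns genuine nodes rather than a string built up in a variable, no msxsl:node-set() conversion is needed, and the same for-each serves both paths.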

    Read the article

  • WPF Styles and Tooltips Question

    - by A.R.
    I have a style that I am using to make dynamic tooltips on certain text boxes like so. <Style TargetType="{x:Type TextBox}"> <Setter Property="MinWidth" Value="100"/> <Style.Triggers> <Trigger Property="Validation.HasError" Value="True"> <!-- item of interest --> <Setter Property="ToolTip"> <Setter.Value> <MultiBinding Converter="{StaticResource ErrorMessageConverter}"> <Binding RelativeSource="{RelativeSource Self}" Path="Tag"/> </MultiBinding> </Setter.Value> </Setter> </Trigger> </Style.Triggers> </Style> This works very well, but if I want to use a more complex tooltip I can't figure out how to bind to 'Tag' anymore for the converter value. For example; ... <Setter Property="ToolTip"> <Setter.Value> <StackPanel> <TextBlock> <TextBlock.Text> <MultiBinding Converter="{StaticResource ErrorMessageConverter}"> <!-- item of interest --> <Binding RelativeSource=" what goes here?? "/> </MultiBinding> </TextBlock.Text> </TextBlock> <Image/> </StackPanel> </Setter.Value> </Setter> ... I have tried several flavors of 'FindAncestor' and what not for the relative source, but I can't get anything to work. Any ideas?? UPDATE: 12-29-2010 : Here is the correct code, answer provided by our friend Goblin below. Works perfectly! ... <Setter Property="ToolTip"> <Setter.Value> <!-- Item of interest --> <ToolTip DataContext="{Binding Path=PlacementTarget, RelativeSource={x:Static RelativeSource.Self}}"> <StackPanel> <Image/> <TextBlock> <TextBlock.Text> <MultiBinding Converter="{StaticResource ErrorMessageConverter}"> <Binding Path="Tag"/> </MultiBinding> </TextBlock.Text> </TextBlock> </StackPanel> </ToolTip> </Setter.Value> </Setter> ...

    Read the article

  • protobuf-net: incorrect wire-type exception deserializing Guid properties

    - by Paul Smith
    I'm having issues deserializing certain Guid properties of ORM-generated entities using protobuf-net. Here's a simplified example of the code (it reproduces most elements of the scenario, but doesn't reproduce the behavior; I can't expose our internal entities, so I'm looking for clues to account for the exception). Say I have a class, Account, with a read-only Guid AccountID and a read-write string AccountName. Serializing and immediately deserializing a clone throws an Incorrect wire-type deserializing Guid exception during deserialization. Here's an example usage... Account acct = new Account() { AccountName = "Bob's Checking" }; Debug.WriteLine(acct.AccountID.ToString()); using (MemoryStream ms = new MemoryStream()) { ProtoBuf.Serializer.Serialize<Account>(ms, acct); Debug.WriteLine(Encoding.UTF8.GetString(ms.GetBuffer())); ms.Position = 0; Account clone = ProtoBuf.Serializer.Deserialize<Account>(ms); Debug.WriteLine(clone.AccountID.ToString()); } And here's an example ORM'd class (simplified, but it demonstrates the relevant semantics I can think of). It uses a shell game to deserialize read-only properties by exposing the backing field ("can't write" essentially becomes "shouldn't write," but we can scan code for instances of assigning to these fields, so the hack works for our purposes). Again, this does not reproduce the exception behavior; I'm looking for clues as to what could: [DataContract()] [Serializable()] public partial class Account { public Account() { _accountID = Guid.NewGuid(); } [XmlAttribute("AccountID")] [DataMember(Name = "AccountID", Order = 1)] public Guid _accountID; /// <summary> /// A read-only property; XML, JSON and DataContract serializers all seem /// to correctly recognize the public backing field when deserializing: /// </summary> [IgnoreDataMember] [XmlIgnore] public Guid AccountID { get { return this._accountID; } } [IgnoreDataMember] protected string _accountName; [DataMember(Name = "AccountName", Order = 2)] [XmlAttribute] public string AccountName { get { return this._accountName; } set { this._accountName = value; } } } XML, JSON and DataContract serializers all seem to serialize / deserialize these object graphs just fine, so the attribute arrangement basically works. I've tried protobuf-net with lists vs. single instances, different prefix styles, etc., but still always get the 'incorrect wire-type ... Guid' exception when deserializing. So the specific question is: is there any known explanation / workaround for this? I'm at a loss trying to trace what circumstances (in the real code but not the example) could be causing it. We hope not to have to create a protobuf dependency directly in the entity layer; if that's the case, we'll probably create proxy DTO entities with all public properties having protobuf attributes. (This is a subjective issue I have with all declarative serialization models; it's a ubiquitous pattern & I understand why it arose, but IMO, if we can put a man on the moon, then "normal" should be to have objects and serialization contracts decoupled. ;-) ) Thanks!
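    For what it's worth, one workaround sketch (assuming the clash really does involve protobuf-net's native Guid handling; AccountDto here is a hypothetical transfer type, not part of the entity layer) is to ship the Guid as a string on the proxy DTO already being considered:

    [ProtoContract]
    public class AccountDto
    {
        // Guid.ToString()/Guid.Parse round-trip cleanly and avoid the BCL Guid wire format
        [ProtoMember(1)]
        public string AccountId { get; set; }

        [ProtoMember(2)]
        public string AccountName { get; set; }
    }

    The payload is a few bytes larger than protobuf-net's 16-byte Guid encoding, but the mapping to and from the entity is a two-line copy and the wire format is unambiguous.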

    Read the article

  • ASP.NET MVC 2 Model encapsulated within ViewModel Validation

    - by Program.X
    I am trying to get validation to work in ASP.NET MVC 2, but without much success. I have a complex class containing a large number of fields. (Don't ask - this is one of those real-world situations best practices can't touch.) This would normally be my Model and is a LINQ-to-SQL generated class. Because this is generated code, I have created a MetaData class as per http://davidhayden.com/blog/dave/archive/2009/08/10/AspNetMvc20BuddyClassesMetadataType.aspx. public class ConsultantRegistrationMetadata { [DisplayName("Title")] [Required(ErrorMessage = "Title is required")] [StringLength(10, ErrorMessage = "Title cannot contain more than 10 characters")] string Title { get; set; } [Required(ErrorMessage = "Forename(s) is required")] [StringLength(128, ErrorMessage = "Forename(s) cannot contain more than 128 characters")] [DisplayName("Forename(s)")] string Forenames { get; set; } // ... I've attached this to the partial class of my generated class: [MetadataType(typeof(ConsultantRegistrationMetadata))] public partial class ConsultantRegistration { // ... Because my form is complex, it has a number of dependencies, such as SelectLists, etc., which I have encapsulated in a ViewModel pattern - and included the ConsultantRegistration model as a property: public class ConsultantRegistrationFormViewModel { public Data.ConsultantRegistration ConsultantRegistration { get; private set; } public SelectList Titles { get; private set; } public SelectList Countries { get; private set; } // ... So it is essentially ViewModel=Model. My View then has: <p> <%: Html.LabelFor(model => model.ConsultantRegistration.Title) %> <%: Html.DropDownListFor(model => model.ConsultantRegistration.Title, Model.Titles,"(select a Title)") %> <%: Html.ValidationMessage("Title","*") %> </p> <p> <%: Html.LabelFor(model => model.ConsultantRegistration.Forenames) %> <%: Html.TextBoxFor(model => model.ConsultantRegistration.Forenames) %> <%: Html.ValidationMessageFor(model=>model.ConsultantRegistration.Forenames) %> </p> The problem is, the validation attributes on the metadata class are having no effect. I tried doing it via an Interface, but that also had no effect. I'm beginning to think the reason is that I am encapsulating my model within a ViewModel. My Controller (Create Action) is as follows: [HttpPost] public ActionResult Create(Data.ConsultantRegistration consultantRegistration) { if (ModelState.IsValid) // this is always true - which is wrong!! { try { consultantRegistration = ConsultantRegistrationRepository.SaveConsultantRegistration(consultantRegistration); return RedirectToAction("Edit", new { id = consultantRegistration.ID, sectionIndex = 2 }); } catch (Exception ex) { ModelState.AddModelError("CreateException",ex); } } return View(new ConsultantRegistrationFormViewModel(consultantRegistration)); } As outlined in the comment, the ModelState.IsValid property always returns true, despite fields with the Validation annotations not being valid (Forenames being a key example). Am I missing something obvious - considering I am an MVC newbie? I'm after the mechanism demoed by Jon Galloway at http://www.asp.net/learn/mvc-videos/video-10082.aspx. (I am aware it is similar to http://stackoverflow.com/questions/1260562/asp-net-mvc-model-viewmodel-validation but that post seems to talk about xVal. I have no idea what that is and suspect it is for MVC 1.)
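    One detail worth checking (a guess from the view markup, not a confirmed diagnosis): the inputs post back with names like ConsultantRegistration.Forenames, while the action binds a parameter with no prefix, so the binder can hand back an untouched model whose state is trivially "valid". A hedged sketch using the built-in Bind attribute to align the prefix:

    [HttpPost]
    public ActionResult Create([Bind(Prefix = "ConsultantRegistration")] Data.ConsultantRegistration consultantRegistration)
    {
        // ModelState should now be populated from the posted ConsultantRegistration.* fields,
        // giving the DataAnnotations on the buddy class a chance to fail validation
        if (!ModelState.IsValid)
            return View(new ConsultantRegistrationFormViewModel(consultantRegistration));
        // ... save and redirect as before
        return RedirectToAction("Edit");
    }

    If that is the cause, ModelState.IsValid should start returning false whenever Forenames and friends are missing.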

    Read the article

  • NHibernate which cache to use for WinForms application

    - by chiccodoro
    I have a C# WinForms application with a database backend (Oracle) and use NHibernate for O/R mapping. I would like to reduce communication with the database as much as possible, since the network here is quite slow, so I read about second-level caching. I found this quite good introduction, which lists the following available cache implementations. I'm wondering which implementation I should use for my application. The caching should be simple; it should not significantly slow down the first occurrence of a query, and it should not take much memory to load the implementing assemblies. (With NHibernate and Castle, the application already takes up to 80 MB of RAM!)

    - Velocity: uses Microsoft Velocity, which is a highly scalable in-memory application cache for all kinds of data.
    - Prevalence: uses Bamboo.Prevalence as the cache provider. Bamboo.Prevalence is a .NET implementation of the object prevalence concept brought to life by Klaus Wuestefeld in Prevayler. Bamboo.Prevalence provides transparent object persistence to deterministic systems targeting the CLR. It offers persistent caching for smart client applications.
    - SysCache: uses System.Web.Caching.Cache as the cache provider. This means that you can rely on the ASP.NET caching feature to understand how it works.
    - SysCache2: similar to NHibernate.Caches.SysCache, uses the ASP.NET cache. This provider also supports SQL dependency-based expiration, meaning that it is possible to configure certain cache regions to automatically expire when the relevant data in the database changes.
    - MemCache: uses memcached; memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Basically a distributed hash table.
    - SharedCache: high-performance, distributed and replicated memory object caching system. See here and here for more info.

    My considerations so far:

    - Velocity seems quite heavyweight and overkill (the files alone take 467 KB of disk space; I haven't measured the RAM it takes so far because I didn't manage to make it run, see below).
    - Prevalence, at least in my first attempt, slowed down my query from ~0.5 secs to ~5 secs, and caching didn't work (see below).
    - SysCache seems to be for ASP.NET, not for WinForms.
    - MemCache and SharedCache seem to be for distributed scenarios.

    Which one would you suggest I use? There would also be a built-in implementation, which of course is very lightweight, but the referenced article tells me that I "(...) should never use this cache provider for production code but only for testing." Besides the question of which fits my situation best, I also faced problems applying them:

    - Velocity complained that the "dcacheClient" tag was "not specified in the application configuration file. Specify valid tag in configuration file," although I created an app.config file for the assembly and pasted the example from this article.
    - Prevalence, as mentioned above, heavily slowed down my first query, and the next time the exact same query was executed, another select was sent to the database.

    Maybe I should "externalize" this topic into another post. I will do that if someone tells me it is absolutely unusual that a query is slowed down so much and further details are needed to help me.
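    For a single-machine WinForms client, SysCache may be less ASP.NET-bound than it sounds: System.Web.Caching.Cache is usable outside a web context. A sketch of the hibernate configuration to switch it on (property keys per NHibernate's cache documentation; verify the assembly name against the NHibernate.Caches release you ship):

    <property name="cache.provider_class">
      NHibernate.Caches.SysCache.SysCacheProvider, NHibernate.Caches.SysCache
    </property>
    <property name="cache.use_second_level_cache">true</property>
    <property name="cache.use_query_cache">true</property>

    Note that nothing is cached until the individual class and collection mappings opt in (e.g. a <cache usage="read-write"/> element), which also keeps the memory cost under your control.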

    Read the article

  • VS2010 Assembly Load Error

    - by Nate
    I am getting the following error when I try to build an ASP.NET 4 project in Visual Studio 2010: "Could not load file or assembly 'file:///C:\Dev\project\trunk\bin\Elmah.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515)". I have verified that the dll does, in fact, exist, and is getting copied to the bin folder correctly. I have also tried removing and then re-adding the reference to the project. The build only fails when I switch the Solution Configuration to "Release". It does not fail when the Solution Configuration is set to "Debug". The only difference between the two configurations (that I know of) is shown in the following Web.config transform, Web.Release.config: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="SqlServer" connectionString="" providerName="System.Data.SqlClient" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> <system.web> <compilation xdt:Transform="RemoveAttributes(debug)" /> <customErrors mode="On" xdt:Transform="Replace"> <error statusCode="404" redirect="lost.htm" /> <error statusCode="500" redirect="uhoh.htm" /> </customErrors> </system.web> </configuration> I have tried using Fusion Log Viewer to track down the assembly binding issue, but it looks like it is finding and loading the assembly correctly. Here is the log: *** Assembly Binder Log Entry (6/8/2010 @ 10:01:54 AM) *** The operation was successful. Bind result: hr = 0x0. The operation completed successfully. Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\v4.0.30319\clr.dll Running under executable c:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin\NETFX 4.0 Tools\sgen.exe --- A detailed error log follows. === Pre-bind state information === LOG: User = User LOG: Where-ref bind. Location = C:\Dev\project\trunk\bin\Elmah.dll LOG: Appbase = file:///c:/Program Files (x86)/Microsoft SDKs/Windows/v7.0A/bin/NETFX 4.0 Tools/ LOG: Initial PrivatePath = NULL LOG: Dynamic Base = NULL LOG: Cache Base = NULL LOG: AppName = sgen.exe Calling assembly : (Unknown). === LOG: This bind starts in LoadFrom load context. WRN: Native image will not be probed in LoadFrom context. Native image will only be probed in default load context, like with Assembly.Load(). LOG: No application configuration file found. LOG: Using host configuration file: LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\v4.0.30319\config\machine.config. LOG: Attempting download of new URL file:///C:/Dev/project/trunk/bin/Elmah.dll. LOG: Assembly download was successful. Attempting setup of file: C:\Dev\project\trunk\bin\Elmah.dll LOG: Entering run-from-source setup phase. LOG: Assembly Name is: Elmah, Version=1.1.11517.0, Culture=neutral, PublicKeyToken=null LOG: Re-apply policy for where-ref bind. LOG: Where-ref bind Codebase does not match what is found in default context. Keep the result in LoadFrom context. LOG: Binding succeeds. Returns assembly from C:\Dev\project\trunk\bin\Elmah.dll. LOG: Assembly is loaded in LoadFrom load context. I feel like there is a fundamental lack of understanding on my part as to what exactly is going on here. Any explanation/help is much appreciated!
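    The fusion log offers a clue: the bind is happening inside sgen.exe, the XML serializer generator, which by default runs only for Release builds; that would explain why Debug succeeds. A quick experiment (an assumption to verify, not a confirmed diagnosis) is to disable serialization assembly generation for Release in the .csproj:

    <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
      <GenerateSerializationAssemblies>Off</GenerateSerializationAssemblies>
    </PropertyGroup>

    Separately, HRESULT 0x80131515 with "Operation is not supported" is the signature of .NET 4 refusing to load an assembly that carries the Internet zone marker (e.g. a dll downloaded in a browser). If that's the case here, right-clicking Elmah.dll > Properties > Unblock, or enabling loadFromRemoteSources, is the usual cure.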

    Read the article

  • Facebook require_login() in iFrame App

    - by LapKom
    Hi, I have a serious problem with an iframe application. I need to use many external JS libraries and other dynamic stuff, so an FBML application can't be done. When I call require_login() I get the application install dialog when the app is not already installed, which is OK. But then, after authorization, the application enters an endless redirect loop with parameters like auth_token, installed, and so on. Yesterday I managed to fix this, but today it's broken again... What the heck is happening with FB? It's driving me crazy trying to find a solution; none of the ones found on the net seems to work. So far I tried: http://abhirama.wordpress.com/2010/03/07/facebook-iframe-xfbml-app/ (7th March 2010!) http://forum.developers.facebook.com/viewtopic.php?pid=156092 http://www.keywordintellect.com/facebook-development/how-to-set-up-a-facebook-iframe-application-in-php-in-5-minutes/ http://www.markdeepwell.com/2010/02/validating-a-facebook-session-within-an-iframe/ http://forum.developers.facebook.com/viewtopic.php?pid=210449 http://www.ajaxlines.com/ajax/stuff/article/facebook_fbml_rendering_in_iframe_application.php http://www.aratide.com/php/solving-the-break-out-issue-in-iframe-facebook-applications/ None of the above worked... According to those and some FB docs: http://wiki.developers.facebook.com/index.php/FB_RequireFeatures http://wiki.developers.facebook.com/index.php/Cross_Domain_Communication_Channel My example test files look as follows: <?php //Link in library. require_once '../application/vendor/Facebook/facebook.php'; //Authentication Keys $appapikey = 'XXXX'; $appsecret = 'XXXX'; //Construct the class $facebook = new Facebook($appapikey, $appsecret); //Require login $user_id = $facebook->require_login(); ?> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml"> <head> <title></title> </head> <body> <script src="http://static.ak.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script> This is you: <fb:name uid="<?php echo $user_id?>"></fb:name> <?php var_dump($facebook->$this->facebook->api_client->friends_get())?> <script type="text/javascript"> FB_RequireFeatures(["XFBML"], function(){ FB.Facebook.init("<?=$appapikey?>", "xd_receiver.html"); }); </script> </body> </html> And the cross-domain file xd_receiver.html is: <!doctype html public "-//w3c//dtd xhtml 1.0 strict//en" "http://www.w3.org/tr/xhtml1/dtd/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head> <title>cross-domain receiver page</title> </head> <body> <script src="http://static.ak.facebook.com/js/api_lib/v0.4/XdCommReceiver.js" type="text/javascript"></script> </body> </html> How do I get it working? I'm using the Kohana framework to do this and have already replaced header('Location') with url::redirect() in the Facebook PHP library.

    Read the article

  • Jetty 7 hightide distribution, JSP and JSTL support

    - by Lior Cohen
    Hello guys. I've been struggling with Jetty 7 and its support for JSP and JSTL. My JSP file: <%@ page language="java" contentType="text/html; charset=utf-8" pageEncoding="utf-8" %> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %> <head> <title>blah</title> </head> <body> <table id="data"> <tr class="columns"> <td>Hour</td> <c:forEach var="campaign" items="${campaigns}"> <td>${campaign}</td> </c:forEach> </tr> <c:forEach var="hour" items="${results}"> <tr> <td class="hour">${hour.key}</td> <c:forEach var="campaign" items="${campaigns}"> <td>${hour[campaign]}</td> </c:forEach> </tr> </c:forEach> </table> </body> </html> The JSP portions above work as expected. JSTL, however, does not. The campaigns and results variables are request attributes set by a servlet. I get the following errors: WARN: ... compiler.TagLibraryInfoImpl: Unknown element (deferred-value) in attribute WARN: ... compiler.TagLibraryInfoImpl: Unknown element (deferred-value) in attribute WARN: ... compiler.TagLibraryInfoImpl: Unknown element (deferred-value) in attribute ERROR: ... javax.servlet.ServletException: java.lang.AbstractMethodError: javax.servlet.jsp.PageContext.getELContext()Ljavax/el/ELContext; I am not bundling any jar files into my .war file deployed to jetty. The version of jetty I'm using is: jetty-hightide-7.0.1.v20091125 The classpath: /usr/local/jetty/lib/jetty-xml-7.0.1.v20091125.jar:/usr/local/jetty/lib/servlet-api-2.5.jar:/usr/local/jetty/lib/jetty-http-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-continuation-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-server-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-security-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-servlet-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-webapp-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-deploy-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-servlets-7.0.1.v20091125.jar:/usr/local/jetty/lib/jsp/ant-1.6.5.jar:/usr/local/jetty/lib/jsp/core-3.1.1.jar:/usr/local/jetty/lib/jsp/jetty-jsp-2.1-7.0.1.v20091125.jar:/usr/local/jetty/lib/jsp/jsp-2.1-glassfish-9.1.1.B60.25.p2.jar:/usr/local/jetty/lib/jsp/jsp-api-2.1-glassfish-9.1.1.B60.25.p2.jar:/usr/local/jetty/resources:/usr/local/jetty/lib/jetty-util-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-io-7.0.1.v20091125.jar Any help would be greatly appreciated. Thanks in advance, Lior.
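    "Unknown element (deferred-value)" is the classic symptom of a JSTL 1.2 TLD being read by a container that treats the webapp as pre-JSP 2.1, since deferred expressions were introduced in JSP 2.1, and the AbstractMethodError on PageContext.getELContext() points the same way. Worth verifying (an assumption, since the web.xml isn't shown) that the deployment descriptor declares Servlet 2.5, and that no stray jstl/servlet-api/jsp-api jars are bundled in WEB-INF/lib to shadow the container's classes:

    <web-app xmlns="http://java.sun.com/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                                 http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
             version="2.5">
      <!-- with an older 2.3/2.4 declaration, the JSTL 1.2 taglib is parsed in legacy mode -->
    </web-app>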

    Read the article

  • Core Data - JSON (TouchJSON) on iPhone

    - by Urizen
    I have the following code, which seems to go on indefinitely until the app crashes. It seems to happen with the recursion in the dataStructureFromManagedObject method. I suspect that this method: 1) looks at the first managed object and follows any relationship property recursively; 2) examines the object at the other end of the relationship found at point 1 and repeats the process. Is it possible that, if managed object A has a to-many relationship with object B and that relationship is two-way (i.e. an inverse to-one relationship to A from B - e.g. one department has many employees but each employee has only one department), the following code gets stuck in infinite recursion as it follows the to-one relationship from object B back to object A, and so on? If so, can anyone provide a fix for this so that I can get my whole object graph of managed objects converted to JSON? #import "JSONUtils.h" @implementation JSONUtils - (NSDictionary*)dataStructureFromManagedObject:(NSManagedObject *)managedObject { NSDictionary *attributesByName = [[managedObject entity] attributesByName]; NSDictionary *relationshipsByName = [[managedObject entity] relationshipsByName]; //getting the values corresponding to the attributes collected in attributesByName NSMutableDictionary *valuesDictionary = [[managedObject dictionaryWithValuesForKeys:[attributesByName allKeys]] mutableCopy]; //sets the name for the entity being encoded to JSON [valuesDictionary setObject:[[managedObject entity] name] forKey:@"ManagedObjectName"]; NSLog(@"+++++++++++++++++> before the for loop"); //looks at each relationship for the given managed object for (NSString *relationshipName in [relationshipsByName allKeys]) { NSLog(@"The relationship name = %@",relationshipName); NSRelationshipDescription *description = [relationshipsByName objectForKey:relationshipName]; if (![description isToMany]) { NSLog(@"The relationship is NOT TO MANY!"); [valuesDictionary setObject:[self dataStructureFromManagedObject:[managedObject valueForKey:relationshipName]] forKey:relationshipName]; continue; } NSSet *relationshipObjects = [managedObject valueForKey:relationshipName]; NSMutableArray *relationshipArray = [[NSMutableArray alloc] init]; for (NSManagedObject *relationshipObject in relationshipObjects) { [relationshipArray addObject:[self dataStructureFromManagedObject:relationshipObject]]; } [valuesDictionary setObject:relationshipArray forKey:relationshipName]; } return [valuesDictionary autorelease]; } - (NSArray*)dataStructuresFromManagedObjects:(NSArray*)managedObjects { NSMutableArray *dataArray = [[NSMutableArray alloc] init]; for (NSManagedObject *managedObject in managedObjects) { [dataArray addObject:[self dataStructureFromManagedObject:managedObject]]; } return [dataArray autorelease]; } //method to call for obtaining JSON structure - i.e. public interface to this class - (NSString*)jsonStructureFromManagedObjects:(NSArray*)managedObjects { NSLog(@"-------------> just before running the recursive method"); NSArray *objectsArray = [self dataStructuresFromManagedObjects:managedObjects]; NSLog(@"-------------> just before running the serialiser"); NSString *jsonString = [[CJSONSerializer serializer] serializeArray:objectsArray]; return jsonString; } - (NSManagedObject*)managedObjectFromStructure:(NSDictionary*)structureDictionary withManagedObjectContext:(NSManagedObjectContext*)moc { NSString *objectName = [structureDictionary objectForKey:@"ManagedObjectName"]; NSManagedObject *managedObject = [NSEntityDescription insertNewObjectForEntityForName:objectName inManagedObjectContext:moc]; [managedObject setValuesForKeysWithDictionary:structureDictionary]; for (NSString *relationshipName in [[[managedObject entity] relationshipsByName] allKeys]) { NSRelationshipDescription *description = [[[managedObject entity]relationshipsByName] objectForKey:relationshipName]; if (![description isToMany]) { NSDictionary *childStructureDictionary = [structureDictionary objectForKey:relationshipName]; NSManagedObject *childObject = [self managedObjectFromStructure:childStructureDictionary withManagedObjectContext:moc]; [managedObject setValue:childObject forKey:relationshipName]; continue; } NSMutableSet *relationshipSet = [managedObject mutableSetValueForKey:relationshipName]; NSArray *relationshipArray = [structureDictionary objectForKey:relationshipName]; for (NSDictionary *childStructureDictionary in relationshipArray) { NSManagedObject *childObject = [self managedObjectFromStructure:childStructureDictionary withManagedObjectContext:moc]; [relationshipSet addObject:childObject]; } } return managedObject; } //method to call for obtaining managed objects from JSON structure - i.e. public interface to this class - (NSArray*)managedObjectsFromJSONStructure:(NSString *)json withManagedObjectContext:(NSManagedObjectContext*)moc { NSError *error = nil; NSArray *structureArray = [[CJSONDeserializer deserializer] deserializeAsArray:[json dataUsingEncoding:NSUTF32BigEndianStringEncoding] error:&error]; NSAssert2(error == nil, @"Failed to deserialize\n%@\n%@", [error localizedDescription], json); NSMutableArray *objectArray = [[NSMutableArray alloc] init]; for (NSDictionary *structureDictionary in structureArray) { [objectArray addObject:[self managedObjectFromStructure:structureDictionary withManagedObjectContext:moc]]; } return [objectArray autorelease]; } @end
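    If the inverse relationship is indeed the culprit, one conventional fix (a sketch with a hypothetical visited parameter; untested against the real entities) is to carry a set of already-visited objects through the recursion and stop when an object comes around again:

    - (NSDictionary *)dataStructureFromManagedObject:(NSManagedObject *)managedObject
                                             visited:(NSMutableSet *)visited
    {
        // NSManagedObjectID identifies the object without retaining the whole graph
        if ([visited containsObject:[managedObject objectID]]) {
            return nil; // cycle: this object is already being serialized further up the stack
        }
        [visited addObject:[managedObject objectID]];
        // ... build valuesDictionary as before, passing 'visited' into every
        // recursive dataStructureFromManagedObject:visited: call and skipping
        // any relationship for which that call returns nil
    }

    An alternative with the same effect is to walk each relationship in only one direction, e.g. skip a to-one relationship whose inverseRelationship is a to-many that has already been followed.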

    Read the article

  • How to write a test for an accounts controller using forms authentication

    - by Anil Ali
    Trying to figure out how to adequately test my accounts controller. I am having a problem testing the successful logon scenario. Issue 1) Am I missing any other tests? (I am testing the model validation attributes separately.) Issue 2) The Put_ReturnsOverviewRedirectToRouteResultIfLogonSuccessAndNoReturnUrlGiven() and Put_ReturnsRedirectResultIfLogonSuccessAndReturnUrlGiven() tests are not passing. I have narrowed it down to the line where I am calling _membershipService.ValidateUser(). Even though during my mock setup of the service I state that I want true returned whenever ValidateUser is called, the method call returns false. Here is what I have gotten so far: AccountController.cs [HandleError] public class AccountController : Controller { private IMembershipService _membershipService; public AccountController() : this(null) { } public AccountController(IMembershipService membershipService) { _membershipService = membershipService ?? new AccountMembershipService(); } [HttpGet] public ActionResult LogOn() { return View(); } [HttpPost] public ActionResult LogOn(LogOnModel model, string returnUrl) { if (ModelState.IsValid) { if (_membershipService.ValidateUser(model.UserName,model.Password)) { if (!String.IsNullOrEmpty(returnUrl)) { return Redirect(returnUrl); } return RedirectToAction("Index", "Overview"); } ModelState.AddModelError("*", "The user name or password provided is incorrect."); } return View(model); } } AccountServices.cs public interface IMembershipService { bool ValidateUser(string userName, string password); } public class AccountMembershipService : IMembershipService { public bool ValidateUser(string userName, string password) { throw new System.NotImplementedException(); } } AccountControllerFacts.cs public class AccountControllerFacts { public static AccountController GetAccountControllerForLogonSuccess() { var membershipServiceStub = MockRepository.GenerateStub<IMembershipService>(); var controller = new AccountController(membershipServiceStub); membershipServiceStub .Stub(x => x.ValidateUser("someuser", "somepass")) .Return(true); return controller; } public static AccountController GetAccountControllerForLogonFailure() { var membershipServiceStub = MockRepository.GenerateStub<IMembershipService>(); var controller = new AccountController(membershipServiceStub); membershipServiceStub .Stub(x => x.ValidateUser("someuser", "somepass")) .Return(false); return controller; } public class LogOn { [Fact] public void Get_ReturnsViewResultWithDefaultViewName() { // Arrange var controller = GetAccountControllerForLogonSuccess(); // Act var result = controller.LogOn(); // Assert Assert.IsType<ViewResult>(result); Assert.Empty(((ViewResult)result).ViewName); } [Fact] public void Put_ReturnsOverviewRedirectToRouteResultIfLogonSuccessAndNoReturnUrlGiven() { // Arrange var controller = GetAccountControllerForLogonSuccess(); var user = new LogOnModel(); // Act var result = controller.LogOn(user, null); var redirectresult = (RedirectToRouteResult) result; // Assert Assert.IsType<RedirectToRouteResult>(result); Assert.Equal("Overview", redirectresult.RouteValues["controller"]); Assert.Equal("Index", redirectresult.RouteValues["action"]); } [Fact] public void Put_ReturnsRedirectResultIfLogonSuccessAndReturnUrlGiven() { // Arrange var controller = GetAccountControllerForLogonSuccess(); var user = new LogOnModel(); // Act var result = controller.LogOn(user, "someurl"); var redirectResult = (RedirectResult) result; // Assert Assert.IsType<RedirectResult>(result); Assert.Equal("someurl", redirectResult.Url); } [Fact] public void Put_ReturnsViewIfInvalidModelState() { // Arrange var controller = GetAccountControllerForLogonFailure(); var user = new LogOnModel(); controller.ModelState.AddModelError("*","Invalid model state."); // Act var result = controller.LogOn(user, "someurl"); var viewResult = (ViewResult) result; // Assert Assert.IsType<ViewResult>(result); Assert.Empty(viewResult.ViewName); Assert.Same(user,viewResult.ViewData.Model); } [Fact] public void Put_ReturnsViewIfLogonFailed() { // Arrange var controller = GetAccountControllerForLogonFailure(); var user = new LogOnModel(); // Act var result = controller.LogOn(user, "someurl"); var viewResult = (ViewResult) result; // Assert Assert.IsType<ViewResult>(result); Assert.Empty(viewResult.ViewName); Assert.Same(user,viewResult.ViewData.Model); Assert.Equal(false,viewResult.ViewData.ModelState.IsValid); } } }
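    Note that the stubs are constrained to the exact pair ("someuser", "somepass"), while the tests pass new LogOnModel() whose UserName and Password are null; with no matching expectation, Rhino Mocks returns the default false, which is exactly the symptom described. A minimal fix sketch (assuming Rhino Mocks, as the stub syntax suggests):

    // loosen the constraint so any credentials satisfy the stub...
    membershipServiceStub
        .Stub(x => x.ValidateUser(Arg<string>.Is.Anything, Arg<string>.Is.Anything))
        .Return(true);

    // ...or keep the exact-match stub and supply matching credentials in the test:
    var user = new LogOnModel { UserName = "someuser", Password = "somepass" };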

    Read the article

  • Why does my sharepoint web part event handler lose the sender value on postback?

    - by vishal shah
    I have a web part which is going to be part of a pair of connected web parts. For simplicity, I am just describing the consumer web part. This web part has 10 link buttons on it, and they are rendered in the Render method instead of CreateChildControls, as this web part will be receiving values based on input from the provider web part. Each Link Button has text which is decided dynamically based on the input from the provider web part. When I click on any of the Link Buttons, the event handler is triggered, but the text on the Link Button shows up as the one set in CreateChildControls. When I trace the code, I see that CreateChildControls gets called before the event handler (and I think that resets my Link Buttons). How do I get the event handler to show me the dynamic text instead? Here is the code... public class consWebPart : Microsoft.SharePoint.WebPartPages.WebPart { private bool _error = false; private LinkButton[] lkDocument = null; public consWebPart() { this.ExportMode = WebPartExportMode.All; } protected override void CreateChildControls() { if (!_error) { try { base.CreateChildControls(); lkDocument = new LinkButton[101]; for (int i = 0; i < 10; i++) { lkDocument[i] = new LinkButton(); lkDocument[i].ID = "lkDocument" + i; lkDocument[i].Text = "Initial Text"; lkDocument[i].Style.Add("margin", "10 10 10 10px"); this.Controls.Add(lkDocument[i]); lkDocument[i].Click += new EventHandler(lkDocument_Click); } } catch (Exception ex) { HandleException(ex); } } } protected override void Render(HtmlTextWriter writer) { writer.Write("<table><tr>"); for (int i = 0; i < 10; i++) { writer.Write("<tr>"); lkDocument[i].Text = "LinkButton" + i; writer.Write("<td>"); lkDocument[i].RenderControl(writer); writer.Write("</td>"); writer.Write("</tr>"); } writer.Write("</table>"); } protected void lkDocument_Click(object sender, EventArgs e) { string strsender = sender.ToString(); LinkButton lk = (LinkButton)sender; } protected override void OnLoad(EventArgs e) { if (!_error) { try { base.OnLoad(e); this.EnsureChildControls(); } catch (Exception ex) { HandleException(ex); } } } private void HandleException(Exception ex) { this._error = true; this.Controls.Clear(); this.Controls.Add(new LiteralControl(ex.Message)); } }
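    Since Render runs after the postback events and after view state has been saved, text assigned there never survives the round trip. One approach (a sketch against the code above; the real text would come from the provider connection rather than the literal) is to move the assignment to OnPreRender, which runs after the connection data arrives but before view state is persisted:

    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        EnsureChildControls();
        for (int i = 0; i < 10; i++)
        {
            lkDocument[i].Text = "LinkButton" + i; // placeholder for the provider-driven text
        }
    }

    Render can then simply call RenderControl without touching Text, and the sender inside lkDocument_Click will carry the dynamic value on postback.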

    Read the article

  • pdfptable format problem?

    - by raj
    Hi, I am using the code below to convert an HTML string to PDF. I notice that at times it does print the tables in the PDF, but with no styles (no border, color, etc.). Below is the HTML string; can anyone suggest where I am missing something? Hi user The Message ID 56456 has been assigned to you for Edition. AUTO VERIFY FOR CONGROUP cccccccc FAILED SSID ssss message, RPTD BY rrrr SSID ssss RLD ll message, RPTD BY rrrr SSID ssss DEV ddd message, RPTD BY rrrr Table 12 EMC9998W message format <table style="border: medium solid #00FF00; width:100%; table-layout: auto; visibility: visible;" title="EMC9998W message format"> <tr> <td> <b>Exception code</b></td> <td> <b>Meaning</b></td> <td> <b>Message format</b></td> </tr> <tr> <td > 1460 </td> <td> DYNAMIC SPARING INVOKED</td> <td> 1</td> </tr> <tr> <td > 147D REMOTE </td> <td > LINK DIRECTOR PROBLEM/FAILURE</td> <td> 2</td> </tr> </table> Where MSG FORMAT 1 EMC9998W SSID ssss message, RPTD BY rrrr MSG FORMAT 2 EMC9998W SSID ssss message, RPTD BY rrrr MSG FORMAT 3 EMC9998W SSID ssss message, RPTD BY rrrr private void HtmltoPdf(string s, Paragraph p,Document doc) { string strLine = s; byte[] byteArray = Encoding.ASCII.GetBytes(strLine); MemoryStream stream = new MemoryStream(byteArray); StreamReader reader2 = new StreamReader(stream); StringReader sr2 = new StringReader(reader2.ReadToEnd()); iTextSharp.text.html.simpleparser.HTMLWorker worker = new HTMLWorker(doc); ArrayList elementlist = HTMLWorker.ParseToList(sr2, null); Phrase ph = new Phrase(); for (int k = 0; k < elementlist.Count; ++k) { IElement ielement = (IElement)elementlist[k]; ArrayList chunks = ielement.Chunks; if (ielement.Type == Element.PTABLE) { PdfPTable pt = (PdfPTable)ielement; pt.DefaultCell.Border = 2; PdfPTable t = new PdfPTable(1);//(new float[] {1f,1f,1f}); ph.Add(t); PdfPCell pcell = new PdfPCell(new Paragraph(ph)); t.AddCell(pcell); p.Add(t); foreach (PdfPRow row in t.Rows) { } } else if (ielement.Type == Element.LIST) { } else { ph.Clear(); ph.Add((IElement)elementlist[k]); p.Add(new Paragraph(ph)); } } sr2.Close(); reader2.Close(); stream.Close(); }
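    For what it's worth, HTMLWorker ignores most CSS, including inline style attributes on the table, but it does honor an iTextSharp StyleSheet passed into the parse. A hedged starting point (HTMLWorker's tag and attribute support is limited, so expect to experiment):

    // hypothetical: give the parser explicit tag styles instead of relying on CSS
    var styles = new StyleSheet();
    styles.LoadTagStyle("table", "border", "1");
    styles.LoadTagStyle("td", "size", "10");
    ArrayList elementlist = HTMLWorker.ParseToList(sr2, styles);

    If precise borders and colors matter, it is usually simpler to build the PdfPTable yourself and set each PdfPCell's BorderWidth and BorderColor directly, rather than round-tripping through HTML.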

    Read the article

  • RSpec test failing looking for a new set of eyes

    - by TheDelChop
    Guys, here's my issue: I've got two models: class User < ActiveRecord::Base # Setup accessible (or protected) attributes for your model attr_accessible :email, :username has_many :tasks end class Task < ActiveRecord::Base belongs_to :user end with this simple routes.rb file TestProj::Application.routes.draw do |map| resources :users do resources :tasks end end this schema: ActiveRecord::Schema.define(:version => 20100525021007) do create_table "tasks", :force => true do |t| t.string "name" t.integer "estimated_time" t.datetime "created_at" t.datetime "updated_at" t.integer "user_id" end create_table "users", :force => true do |t| t.string "email" t.string "password" t.string "password_confirmation" t.datetime "created_at" t.datetime "updated_at" t.string "username" end add_index "users", ["email"], :name => "index_users_on_email", :unique => true add_index "users", ["username"], :name => "index_users_on_username", :unique => true end and this controller for my tasks: class TasksController < ApplicationController before_filter :load_user def new @task = @user.tasks.new end private def load_user @user = User.find(params[:user_id]) end end Finally here is my test: require 'spec_helper' describe TasksController do before(:each) do @user = Factory(:user) @task = Factory(:task) end #GET New describe "GET New" do before(:each) do User.stub!(:find).with(@user.id.to_s).and_return(@user) @user.stub_chain(:tasks, :new).and_return(@task) end it "should return a new Task" do @user.tasks.should_receive(:new).and_return(@task) get :new, :user_id => @user.id end end end This test fails with the following output: 1) TasksController GET New should return a new Task Failure/Error: get :new, :user_id => @user.id undefined method `abstract_class?' for Object:Class # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:1234:in `class_of_active_record_descendant' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:900:in `base_class' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:655:in `reset_table_name' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:647:in `table_name' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:932:in `arel_table' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:927:in `unscoped' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/named_scope.rb:30:in `scoped' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:405:in `find' # ./app/controllers/tasks_controller.rb:15:in `load_user' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/callbacks.rb:431:in `_run__1954900289__process_action__943997142__callbacks' # 
/home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/callbacks.rb:405:in `send' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/callbacks.rb:405:in `_run_process_action_callbacks' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/callbacks.rb:88:in `send' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/callbacks.rb:88:in `run_callbacks' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/abstract_controller/callbacks.rb:17:in `process_action' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/action_controller/metal/rescue.rb:8:in `process_action' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/abstract_controller/base.rb:113:in `process' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/abstract_controller/rendering.rb:39:in `sass_old_process' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/gems/haml-3.0.0.beta.3/lib/sass/plugin/rails.rb:26:in `process' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/action_controller/metal/testing.rb:12:in `process_with_new_base_test' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/action_controller/test_case.rb:390:in `process' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/action_controller/test_case.rb:328:in `get' # ./spec/controllers/tasks_controller_spec.rb:20 # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/dependencies.rb:209:in `inject' Can anybody help me understand what's going on here? It seems to be an RSpec problem since the controller action actually works, but I could be wrong. Thanks, Joe
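    One quick diagnostic, offered as an assumption rather than a fix: the stack trace shows the real ActiveRecord find executing inside load_user, which suggests the argument-constrained stub never matched. Loosening it isolates the matcher:

    # hypothetical: drop the .with constraint to see whether matching is the problem
    User.stub!(:find).and_return(@user)

    If the example then passes, inspect what the controller actually receives (params[:user_id].inspect) against @user.id.to_s and tighten the stub back up from there.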

    Read the article

  • ASP.NET XML as Datasource error

    - by nekko
    Hello, I am trying to use an XML file as a data source in ASP.NET and then display it in a GridView. The XML has the following format: <?xml version="1.0" encoding="UTF-8"?> <people type="array"> <person> <id type="integer"></id> <first_name></first_name> <last_name></last_name> <title></title> <company></company> <tags> </tags> <locations> <location primary="false" label="work"> <email></email> <website></website> <phone></phone> <cell></cell> <fax></fax> <street_1/> <street_2/> <city/> <state/> <postal_code/> <country/> </location> </locations> <notes></notes> <created_at></created_at> <updated_at></updated_at> </person> </people> When I try to run this simple page I receive the following error: Server Error in '/' Application. The data source for GridView with id 'GridView1' did not have any properties or attributes from which to generate columns. Ensure that your data source has content. Here is my page code: <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="Default.aspx.vb" Inherits="shout._Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <div> <asp:XmlDataSource ID="XmlDataSource1" runat="server" DataFile="~/App_Data/people.xml" XPath="people/person"></asp:XmlDataSource> <asp:GridView ID="GridView1" runat="server" AllowPaging="True" DataSourceID="XmlDataSource1"> </asp:GridView> </div> </form> </body> </html> Please help. Thanks in advance.
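    The error is fairly literal: GridView's auto-generated columns are built from XML attributes, and <person> carries its values in child elements, which XmlDataSource does not surface as fields. Two hedged options are to XSL-transform the document so the values become attributes, or to bind the elements explicitly with XPath templates, e.g.:

    <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
                  DataSourceID="XmlDataSource1" AllowPaging="True">
      <Columns>
        <asp:TemplateField HeaderText="First Name">
          <ItemTemplate><%# XPath("first_name") %></ItemTemplate>
        </asp:TemplateField>
        <asp:TemplateField HeaderText="Last Name">
          <ItemTemplate><%# XPath("last_name") %></ItemTemplate>
        </asp:TemplateField>
      </Columns>
    </asp:GridView>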

    Read the article

  • Cisco PIX 8.0.4, static address mapping not working?

    - by Bill
    Upgrading a working PIX running 5.3.1 to 8.0.4. The memory/IOS upgrade went fine, but the 8.0.4 configuration is not quite working 100%. The 5.3.1 config on which it was based is working fine. Basically, I have three networks (inside, outside, dmz) with some addresses on the dmz statically mapped to outside addresses. The problem seems to be that those addresses can't send or receive traffic from the outside (Internet). Stuff on the DMZ that does not have a static mapping seems to work fine. So, basically: Inside - outside: works Inside - DMZ: works DMZ - inside: works, where the rules allow it DMZ (non-static) - outside: works But: DMZ (static) - outside: fails Outside - DMZ: fails (So, say, udp 1194 traffic to .102, http to .104.) I suspect there's something I'm missing with the nat/global section of the config, but can't for the life of me figure out what. Help, anyone? The complete configuration is below. Thanks for any thoughts! ! PIX Version 8.0(4) ! hostname firewall domain-name asasdkpaskdspakdpoak.com enable password xxxxxxxx encrypted passwd xxxxxxxx encrypted names ! interface Ethernet0 nameif outside security-level 0 ip address XX.XX.XX.100 255.255.255.224 ! interface Ethernet1 nameif inside security-level 100 ip address 192.168.68.1 255.255.255.0 ! interface Ethernet2 nameif dmz security-level 10 ip address 192.168.69.1 255.255.255.0 ! boot system flash:/image.bin ftp mode passive dns server-group DefaultDNS domain-name asasdkpaskdspakdpoak.com access-list acl_out extended permit udp any host XX.XX.XX.102 eq 1194 access-list acl_out extended permit tcp any host XX.XX.XX.104 eq www access-list acl_dmz extended permit tcp host 192.168.69.10 host 192.168.68.17 eq ssh access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 192.168.68.0 255.255.255.0 eq ssh access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 192.168.68.0 255.255.255.0 eq 5901 access-list acl_dmz extended permit udp host 192.168.69.103 any eq ntp access-list acl_dmz extended permit udp host 192.168.69.103 any eq domain access-list acl_dmz extended permit tcp host 192.168.69.103 any eq www access-list acl_dmz extended permit tcp host 192.168.69.100 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.100 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.101 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.101 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.104 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.104 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.69.104 eq 8080 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.69.104 eq 8099 access-list acl_dmz extended permit tcp host 192.168.69.105 any eq www access-list acl_dmz extended permit tcp host 192.168.69.103 any eq smtp access-list acl_dmz extended permit tcp host 192.168.69.105 host 192.168.68.103 eq ssh access-list acl_dmz extended permit tcp host 192.168.69.104 any eq www access-list acl_dmz extended permit tcp host 192.168.69.100 any eq www access-list acl_dmz extended permit tcp host 192.168.69.100 any eq https pager lines 24 mtu outside 1500 mtu inside 1500 mtu dmz 1500 icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 1 0.0.0.0 0.0.0.0 nat (dmz) 1 0.0.0.0 0.0.0.0 static (dmz,outside) XX.XX.XX.103 192.168.69.11 netmask 255.255.255.255 static (inside,dmz) 192.168.68.17 192.168.68.17 netmask 255.255.255.255 static (inside,dmz) 192.168.68.100 192.168.68.100 netmask 255.255.255.255 static (inside,dmz) 192.168.68.101 192.168.68.101 netmask 255.255.255.255 static (inside,dmz) 192.168.68.102 192.168.68.102 netmask 255.255.255.255 static (inside,dmz) 192.168.68.103 192.168.68.103 netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.104 192.168.69.100 netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.105 192.168.69.105 netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.102 192.168.69.10 netmask 255.255.255.255 access-group acl_out in interface outside access-group acl_dmz in interface dmz route outside 0.0.0.0 0.0.0.0 XX.XX.XX.97 1 route dmz 10.71.83.0 255.255.255.0 192.168.69.10 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute dynamic-access-policy-record DfltAccessPolicy no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 telnet 192.168.68.17 255.255.255.255 inside telnet timeout 5 ssh timeout 5 console timeout 0 threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect netbios inspect rsh inspect rtsp inspect skinny inspect esmtp inspect sqlnet inspect sunrpc inspect tftp inspect sip inspect xdmcp ! service-policy global_policy global prompt hostname context Cryptochecksum:2d1bb2dee2d7a3e45db63a489102d7de
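    Before reworking the NAT, it may help to let the box narrate the drop: 8.0 includes packet-tracer, which runs a synthetic packet through the ACL, NAT and routing stages and reports where it dies (the source address below is a placeholder):

    packet-tracer input outside tcp 198.51.100.7 1025 XX.XX.XX.104 80
    show xlate | include 192.168.69
    show access-list acl_out

    One observation from the posted config: acl_out only permits udp to XX.XX.XX.102 eq 1194 and tcp to XX.XX.XX.104 eq www, so inbound traffic to the .103 and .105 statics has no permit rule and is expected to fail at the ACL stage; the trace should confirm whether the two permitted flows are instead dying in translation.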

    Read the article

  • Question about how to use strongly typed datasets in an N-tier .NET application

    - by sb
    Hello All, I need some expert advice on the strongly typed datasets in ADO.NET that are generated by Visual Studio. Here are the details. Thank you in advance. I want to write an N-tier application where the Presentation layer is in C#/Windows Forms, the Business Layer is a web service, and the Data Access Layer is a SQL database. So, I used Visual Studio 2005 for this and created 3 projects in a solution. Project 1 is the data access layer. In it I used the Visual Studio dataset generator to create a strongly typed dataset and table adapter (to test, I created this on the Customers table in Northwind). The dataset is called NorthWindDataSet and the table inside is CustomersTable. Project 2 has the web service, which exposes only one method, GetCustomersDataSet. This uses the project 1 library's table adapter to fill the dataset and return it to the caller. To be able to use the NorthWindDataSet and table adapter, I added a reference to project 1. Project 3 is a WinForms app; it uses the web service as a reference and calls that service to get the dataset. In the process of building this application, in the PL, I added a reference to the DataSet generated above in project 1, and in the form's load I call the web service and assign the received DataSet from the web service to this dataset. But I get the error: "Cannot implicitly convert type 'PL.WebServiceLayerReference.NorthwindDataSet' to 'BL.NorthwindDataSet' e:\My Documents\Visual Studio 2008\Projects\DataSetWebServiceExample\PL\Form1.cs". Both datasets are the same, but because I added references from different locations, I am getting the above error, I think. So, what I did was add a reference to project 1 (which defines the dataset) to project 3 (the UI) and used the web service to get the DataSet and assign it to the right type. Now, when project 3 (which has the web form) runs, I get the below runtime exception. "System.InvalidOperationException: There is an error in XML document (1, 5058). --- System.Xml.Schema.XmlSchemaException: Multiple definition of element 'http://tempuri.org/NorthwindDataSet.xsd:Customers' causes the content model to become ambiguous. A content model must be formed such that during validation of an element information item sequence, the particle contained directly, indirectly or implicitly therein with which to attempt to validate each item in the sequence in turn can be uniquely determined without examining the content or attributes of that item, and without any information about the items in the remainder of the sequence." I think this might be because of some cross-referencing errors. My question is: is there a way to use the Visual Studio generated DataSets in such a way that I can use the same DataSet in all layers (for reuse) but separate the Table Adapter logic into the Data Access Layer, so that the front end is abstracted from all this by the web service? If I have to hand-write the code, I lose the goodness the dataset generator gives, and if columns are added later I need to add them by hand, etc., so I want to use the Visual Studio wizard as much as possible. Thanks for any help on this. sb
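    Until the proxy reuses the shared type, one stopgap sketch (relying only on the two generated types sharing the same XSD) is to bridge through DataSet.Merge, which copies rows by schema rather than by CLR type:

    // hypothetical client-side bridge between the proxy's dataset type and
    // the shared NorthwindDataSet from project 1
    var shared = new NorthwindDataSet();
    shared.Merge(serviceProxy.GetCustomersDataSet()); // same schema, different CLR type

    The longer-term fix is to make the generated proxy emit the shared NorthwindDataSet instead of its own copy: with ASMX web references that means editing Reference.cs (or using a schema importer extension), while WCF's "Reuse types in referenced assemblies" option does the equivalent automatically, which may also remove the duplicate schema definition behind the "Multiple definition of element" exception.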

    Read the article

< Previous Page | 396 397 398 399 400 401 402 403 404 405 406 407  | Next Page >