Search Results

Search found 4274 results on 171 pages for 'references'.

Page 37/171 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • Would this be a good web application architecture?

    - by Gustav Bertram
    My problem: Our MVC-based framework does not allow us to cache only part of our output. Ideally we want to cache static and semi-static bits, and run dynamic bits. In addition, we need to consider data caching that reacts to database changes.

    My idea: The concept I came up with was to represent a page as a tree of XML fragment objects. (I say XML, but I mean XHTML.) Some of the fragments are dynamic, and can pull their data directly from models or other sources, but most of the fragments are static scaffolding. If a subtree of fragments is completely static, then I imagine that it could unfold into pure XML that would then be cached as the text representation of its parent element. This process would ideally continue until we are left with a root element that contains all of the static XML, plus a couple of dynamic XML fragments that are resolved and attached to the relevant nodes of the XML tree just before the page is displayed.

    In addition to separating content into dynamic and static fragments, some fragments could be dynamic and cached. A simple expiry time that propagates up through the XML fragment tree would indicate that a specific fragment should periodically be refreshed. A newspaper section or front page does not need to be updated each second; minutes or sometimes even longer is sufficient. Other fragments would be dynamic and uncached. Typically too many articles are viewed for them to be cached - the cache would overflow. Some individual articles may be cached if they are extremely popular.

    Functional notes: The folding mechanism would need to be smart enough to judge when it would be more profitable to fold a dynamic cached fragment and propagate the expiry date to the parent fragment, or to keep it separate and simply attach it to the XML tree when resolving the page. If some dynamic cached fragments are associated with database objects through mechanisms like a globally unique content ID, then changes to the database could trigger changes to the output cache. If fragments store the identifiers of their parent fragments, then they could trigger a refolding process that would include the updated data.

    A set of pure XML with an ordered array of fragment objects (each storing the identifying information of the node to which it should be attached) can be resolved in a fairly simple way by walking the XML tree and merging in the data from the fragments. Because it is not necessary to parse and construct the entire tree in memory before attaching nodes, processing should be fairly fast. The identifier of each fragment would be a combination of relevant identity data and the type of fragment object. Cached parent fragments would contain references to these identifiers, in order to either pull them from the fragment cache or run their code. The controller's responsibility is reduced to making changes to the database and telling the root XML fragment object to render itself.

    The question: My question has two parts. (1) Is this a good design? Are there any obvious flaws I'm missing? (2) Has somebody else thought of this before? References? Is there an existing alternative that I should consider? A cool templating engine, maybe?
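
    A minimal C# sketch of the folding idea described above (all class and member names are hypothetical, not from any existing framework): fully static subtrees collapse into a single cacheable markup fragment, while anything dynamic stays separate for render-time resolution.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        // Hypothetical fragment-tree sketch; names and structure are assumptions.
        abstract class Fragment
        {
            public List<Fragment> Children = new List<Fragment>();
            public abstract bool IsStatic { get; }
            public abstract string Render();            // produce (X)HTML for this node
        }

        class StaticFragment : Fragment
        {
            private readonly string _markup;
            public StaticFragment(string markup) { _markup = markup; }
            public override bool IsStatic { get { return true; } }
            public override string Render() { return _markup; }
        }

        class DynamicFragment : Fragment
        {
            private readonly Func<string> _resolve;     // pulls data from a model, etc.
            public DynamicFragment(Func<string> resolve) { _resolve = resolve; }
            public override bool IsStatic { get { return false; } }
            public override string Render() { return _resolve(); }
        }

        static class FragmentTree
        {
            // "Fold" a subtree: if a node and all of its descendants are static,
            // collapse them into one StaticFragment whose text can be cached.
            // (Real code would splice each child's markup into the parent element
            // at its attachment point; plain concatenation keeps the sketch short.)
            public static Fragment Fold(Fragment node)
            {
                List<Fragment> folded = node.Children.Select(Fold).ToList();
                node.Children = folded;

                if (node.IsStatic && folded.All(c => c is StaticFragment))
                {
                    StringBuilder sb = new StringBuilder(node.Render());
                    foreach (Fragment child in folded)
                        sb.Append(child.Render());
                    return new StaticFragment(sb.ToString());   // cacheable as text
                }
                return node;   // still has dynamic parts; resolve them at render time
            }
        }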

    Read the article

  • Introduction to WebCenter Personalization: “The Conductor”

    - by Steve Pepper
    There are some new faces in the town of WebCenter with the latest 11g PS3 release. A new component has introduced itself as "Oracle WebCenter Personalization", a.k.a. WCP, to simplify delivery of a personalized experience and content to end users. This posting reviews one of the primary components within WCP: "The Conductor".

    The Conductor: This ain't just an ordinary cloud...

    One of the founding principles behind WebCenter Personalization was to provide an open client-side API that remains independent of the technology invoking it, in addition to independence from the architecture running it. The Conductor delivers this, and much, much more. The Conductor is the engine behind WebCenter Personalization that allows flow-based documents, called "Scenarios", to be managed and executed on the server side through a well-published RESTful API. The Conductor also supports an extensible model for custom provider integration that can be easily invoked within a Scenario to promote seamless integration with existing business assets.

    Introducing the Scenario Conductor

    Scenarios are declarative, offline-authored documents created using the custom Personalization JDeveloper bundle included with WebCenter. A Scenario contains one (or more) statements that can:

    - Create variables that are scoped to the current execution context
    - Iterate over collections, or loop until a specific condition is met
    - Execute one or more statements when a condition is met
    - Invoke other scenarios that exist within the same namespace
    - Invoke a data provider that integrates with custom applications

    Once a variable is assigned within the Scenario's execution context, it can be referenced anywhere within the same Scenario using the common Expression Language syntax used in J2EE web containers. Scenarios are then published and tested to the Integrated WebLogic Server domain, or published remotely to other domains running WebCenter Personalization.

    Various client-side models

    The Conductor server API is built upon RESTful services that support a wide variety of clients able to communicate over HTTP. The Conductor supports the following client-side models:

    - REST: Popular browser-based languages can be used to manage and execute Conductor Scenarios. There are other public methods to retrieve configured provider metadata that can be used by custom applications. The Conductor currently supports XML and JSON for its API syntax.
    - Java: WebCenter Personalization delivers a robust and lightweight Java client with the popular Jersey framework as its foundation. It has never been easier to write a remote Java client to manage remote RESTful services.
    - Expression Language (EL): Allows the results of Scenario execution to control your user interface or embed personalized content using the session-scoped managed bean. The EL client can also be used in straight JSP pages with minimal configuration.

    Extensible provider framework

    The Conductor supports a pluggable provider framework for integrating custom code with Scenario execution. There are two types of providers supported by the Conductor:

    - Function Provider: Function Providers are simple annotated Java classes with static methods that are meant to be served as utilities. Some common uses would include object creation or instantiation, data transformation, and the like. Function Providers can be invoked using the common EL syntax from variable assignments, conditions, and loops. For example: ${myUtilityClass:doStuff(arg1,arg2)}. If you are familiar with EL functions, Function Providers are based on the same concept.
    - Data Provider: Like Function Providers, Data Providers are annotated Java classes, but they must adhere to a much stricter object model. Data Providers have access to a wealth of Conductor services, such as a namespace-scoped configuration API that can be managed by Oracle Enterprise Manager, the Scenario execution context for expression resolution, and more. Oracle ships with three out-of-the-box data providers that support integration with: standardized content servers (CMIS), federated profile properties through the Properties Service, and WebCenter Activity Graph.

    Useful references

    If you are looking to immediately get started writing your own application using WebCenter Personalization Services, you will find the following references helpful in getting you on your way:

    - Personalizing WebCenter Applications
    - Authoring Personalized Scenarios in JDeveloper
    - Using Personalization APIs Externally
    - Implementing and Calling Function Providers
    - Implementing and Calling Data Providers

    Read the article

  • using ILMerge with .NET 4 libraries

    - by Sarah Vessels
    I'm having trouble using ILMerge in my post-build after upgrading from .NET 3.5/Visual Studio 2008 to .NET 4/Visual Studio 2010. I have a Solution with several projects whose target framework is set to ".NET Framework 4". I use the following ILMerge command to merge the individual project DLLs into a single DLL:

        if not $(ConfigurationName) == Debug if exist "C:\Program Files (x86)\Microsoft\ILMerge\ILMerge.exe" "C:\Program Files (x86)\Microsoft\ILMerge\ILMerge.exe" /lib:"C:\Windows\Microsoft.NET\Framework64\v4.0.30319" /lib:"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies" /keyfile:"$(SolutionDir)$(SolutionName).snk" /targetplatform:v4 /out:"$(SolutionDir)bin\development\$(SolutionName).dll" "$(SolutionDir)Connection\$(OutDir)Connection.dll" ...other project DLLs... /xmldocs

    If I leave off specifying the location of the .NET 4 framework directory, I get an "Unresolved assembly reference not allowed: System" error from ILMerge. If I leave off specifying the location of the MSTest directory, I get an "Unresolved assembly reference not allowed: Microsoft.VisualStudio.QualityTools.UnitTestFramework" error.

    The ILMerge command above works and produces a DLL. When I reference that DLL in another .NET 4 C# project, however, and try to use code within it, I get the following warning:

        The primary reference "MyILMergedDLL" could not be resolved because it has an indirect dependency on the .NET Framework assembly "mscorlib, Version=4.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" which has a higher version "4.0.65535.65535" than the version "4.0.0.0" in the current target framework.

    If I then remove the /targetplatform:v4 flag and try to use MyILMergedDLL.dll, I get the following error:

        The type 'System.Xml.Serialization.IXmlSerializable' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Xml, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.

    It doesn't seem like I should have to do that. Whoever uses my MyILMergedDLL.dll API should not have to add references to whatever libraries it references. How can I get around this?
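
    One variation worth noting (an assumption based on ILMerge's documented /targetplatform:version,platformDirectory syntax, not a verified fix for this particular warning): pass the .NET 4 directory as part of /targetplatform instead of as a separate /lib path, so ILMerge resolves mscorlib from one explicit location. Reusing the same paths as above, the post-build line would look roughly like:

        if not $(ConfigurationName) == Debug if exist "C:\Program Files (x86)\Microsoft\ILMerge\ILMerge.exe" "C:\Program Files (x86)\Microsoft\ILMerge\ILMerge.exe" /lib:"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies" /keyfile:"$(SolutionDir)$(SolutionName).snk" /targetplatform:"v4,C:\Windows\Microsoft.NET\Framework64\v4.0.30319" /out:"$(SolutionDir)bin\development\$(SolutionName).dll" "$(SolutionDir)Connection\$(OutDir)Connection.dll" ...other project DLLs... /xmldocs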

    Read the article

  • Silverlight WCF serialization DataContract(IsReference=true) problem

    - by Ciaran
    Hi, I have a Silverlight 3 UI that accesses WCF services, which in turn access repositories that use NHibernate. To overcome some NHibernate lazy-loading issues with WCF, I'm using my own DataContract surrogate as described here: http://timvasil.com/blog14/post/2008/02/WCF-serialization-with-NHibernate.aspx. In it I'm setting preserveObjectReferences = true. My model contains cycles (i.e. Customer with IList[Order]).

    When I retrieve an object from my service it works fine; however, when I try to send that same object back to the WCF service I get the error:

        System.ServiceModel.CommunicationException was unhandled by user code
        Message=There was an error while trying to serialize parameter http://tempuri.org/:searchCriteria. The InnerException message was 'Object graph ...' contains cycles and cannot be serialized if references are not tracked. Consider using the DataContractAttribute with the IsReference property set to true.'

    So cyclical references are now a problem in Silverlight. I tried changing my DataContract to be [DataContract(IsReference=true)], but now when I try to retrieve an object from my service I get the following exception:

        System.ExecutionEngineException was unhandled
        Message=Exception of type 'System.ExecutionEngineException' was thrown.
        InnerException:

    Any ideas?
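
    For context, a minimal sketch of the kind of cyclic contract being described (the type and member names are illustrative assumptions, not the poster's actual model); IsReference=true makes the DataContractSerializer track object references instead of failing on the Customer/Order cycle:

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        // IsReference=true tells the DataContractSerializer to emit object ids and
        // refs, so the Customer <-> Order cycle serializes instead of throwing.
        [DataContract(IsReference = true)]
        public class Customer
        {
            [DataMember]
            public string Name { get; set; }

            [DataMember]
            public IList<Order> Orders { get; set; }
        }

        [DataContract(IsReference = true)]
        public class Order
        {
            [DataMember]
            public int Number { get; set; }

            [DataMember]
            public Customer Customer { get; set; }   // back-reference creating the cycle
        }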

    Read the article

  • Silverlight WCF serialization [DataContract(IsReference=true)] problem

    - by Ciaran
    Hi, I have a Silverlight 3 UI that accesses WCF services, which in turn access repositories that use NHibernate. To overcome some NHibernate lazy-loading issues with WCF, I'm using my own DataContract surrogate as described here: http://timvasil.com/blog14/post/2008/02/WCF-serialization-with-NHibernate.aspx. In it I'm setting preserveObjectReferences = true. My model contains cycles (i.e. Customer with Collection).

    When I retrieve an object from my service it works fine; however, when I try to send that same object back to the WCF service I get the error:

        System.ServiceModel.CommunicationException was unhandled by user code
        Message=There was an error while trying to serialize parameter http://tempuri.org/:searchCriteria. The InnerException message was 'Object graph ...' contains cycles and cannot be serialized if references are not tracked. Consider using the DataContractAttribute with the IsReference property set to true.'

    So cyclical references are now a problem in Silverlight. I tried changing my DataContract to be [DataContract(IsReference=true)], but now when I try to retrieve an object from my service I get the following exception:

        System.ServiceModel.CommunicationException was unhandled by user code
        Message=The remote server returned an error: NotFound.

    It shouldn't be this hard to do something so trivial...

    Read the article

  • Subsonic : Can’t decide which property to consider the Key? foreign key issue.

    - by AJ
    Hi, I am trying to select a count of rows from a table which has foreign keys to two other tables. The C# code threw the error mentioned below, so I added a primary key column to the table (schema as follows) and regenerated the code. But the same error still comes up.

    Error: Can't decide which property to consider the Key - you can create one called 'ID' or mark one with SubSonicPrimaryKey attribute

    SQLite table schema:

        CREATE TABLE "AlbumDocuments" (
            "Id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
            "AlbumId" INTEGER NOT NULL CONSTRAINT fk_AlbumId REFERENCES Albums(Id),
            "DocumentId" INTEGER NOT NULL CONSTRAINT fk_DocumentId REFERENCES Documents(Id))

    C# code:

        int selectAlbumDocumentsCount = new SubSonic.Query.Select()
            .From<DocSafeDB.DataLayer.AlbumDocumentsTable>()
            .Where(DocSafeDB.DataLayer.AlbumDocumentsTable.AlbumIdColumn).In(request.AlbumId)
            .Execute();

    I'm not sure what I should be doing next, as I can't use a WHERE against the primary key because I don't have that info. So my questions are:

    - How do I select a count of rows against a foreign key column?
    - Is a primary key required in this scenario?

    I have tried several things but am not sure why it's not working. To me it looks like a very normal use case. Please advise. Thanks, AJ
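
    The error message itself offers two ways out: a property named 'ID' or the SubSonicPrimaryKey attribute. If a plain class is involved (for example SimpleRepository-style usage), those two options look roughly like the sketch below. This is an assumption about the setup, and the attribute's namespace can differ between SubSonic 3 versions.

        using SubSonic.SqlGeneration.Schema;   // namespace may differ across SubSonic 3 versions

        // Option 1: a property named "Id"/"ID" that SubSonic picks up by convention.
        // Option 2: mark the key explicitly, as the error message suggests.
        public class AlbumDocument
        {
            [SubSonicPrimaryKey]
            public int Id { get; set; }

            public int AlbumId { get; set; }      // FK to Albums(Id)
            public int DocumentId { get; set; }   // FK to Documents(Id)
        }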

    Read the article

  • MVVM Light Toolkit - RelayCommands, DelegateCommands, and ObservableObjects

    - by DanM
    I just started experimenting with Laurent Bugnion's MVVM Light Toolkit. I think I'm going to really like it, but I have a couple of questions. Before I get to them, I need to explain where I'm coming from.

    I currently use a combination of Josh Smith's MVVM Foundation and another project on CodePlex called MVVM Toolkit. I use ObservableObject and Messenger from MVVM Foundation, and DelegateCommand and CommandReference from MVVM Toolkit. The only real overlap between MVVM Foundation and MVVM Toolkit is that they both have an implementation of ICommand: MVVM Foundation has a RelayCommand and MVVM Toolkit has a DelegateCommand. Of these two, DelegateCommand appears to be more sophisticated: it employs a CommandManagerHelper that uses weak references to avoid memory leaks.

    With that said, a couple of questions:

    1. Why does MVVM Light Toolkit use RelayCommand rather than DelegateCommand? Is the use of weak references in an ICommand unnecessary or not recommended for some reason?
    2. Why is there no ObservableObject in MVVM Light? ObservableObject is basically just the part of ViewModelBase that implements INotifyPropertyChanged, but it's very convenient to have as a separate class because view-models are not the only objects that need to implement INotifyPropertyChanged. For example, let's say you have a DataGrid that binds to a list of Person objects. If any of the properties in Person can change while the user is viewing the DataGrid, Person would need to implement INotifyPropertyChanged. (I realize that if Person is auto-generated using something like LINQ to SQL, it will probably already implement INotifyPropertyChanged, but there are cases where I need to make a view-specific version of entity model objects, say, because I need to include a command to support a button column in a DataGrid.)

    Thanks.
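
    For reference, a minimal sketch of the kind of ObservableObject base class being described, roughly the INotifyPropertyChanged half of a typical ViewModelBase (this is a generic illustration, not the MVVM Foundation or MVVM Light source):

        using System.ComponentModel;

        // A minimal INotifyPropertyChanged base usable by view-models and by
        // view-specific entity wrappers (e.g. the Person rows bound to a DataGrid).
        public abstract class ObservableObject : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            protected void RaisePropertyChanged(string propertyName)
            {
                PropertyChangedEventHandler handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        public class Person : ObservableObject
        {
            private string _name;

            public string Name
            {
                get { return _name; }
                set
                {
                    if (_name == value) return;
                    _name = value;
                    RaisePropertyChanged("Name");   // the bound DataGrid cell updates
                }
            }
        }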

    Read the article

  • How do I use Fluent NHibernate with .NET 4.0?

    - by Tomas Lycken
    I want to learn to use Fluent NHibernate, and I'm working in VS2010 Beta 2, compiling against .NET 4, but I'm experiencing some problems.

    Summary: My main problem (at the moment) is that the namespace FluentNHibernate isn't available, even though I've imported all the .dll assemblies mentioned in this guide.

    This is what I've done:

    1. I downloaded the Fluent NHibernate source from here, extracted the .zip and opened the solution in VS. A dialog asked me if I wanted to convert the solution to a VS2010 solution, so I did.
    2. I then went into each project's properties, configured all of them to compile for .NET 4, and built the entire solution.
    3. I copied all the .dll files from /bin/Debug/ of the FluentNHibernate project to a new folder on my local hard drive.
    4. In my example project, I referenced FluentNHibernate.dll and NHibernate.dll from the new folder.

    This is my problem: If I right-click on FluentNHibernate in the References list and select "View in Object Browser...", it shows up correctly. But when I try to create a mapping class, I can't import FluentNHibernate. This code:

        using FluentNHibernate.Mapping;

        namespace FluentNHExample.Mappings
        {
        }

    generates an error on the using statement, saying "The type or namespace 'FluentNHibernate' could not be found (are you missing a using directive or an assembly reference?)". The FluentNHibernate assembly is still in the list of References of my project, but if I try to browse the assembly in Object Browser again, it can't be found.

    What is causing this?
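
    For what it's worth, the sort of mapping class that should compile once the reference resolves looks roughly like this (the entity and its properties are made up for illustration; the ClassMap<T> base and the Id/Map calls are standard Fluent NHibernate):

        using FluentNHibernate.Mapping;

        namespace FluentNHExample.Mappings
        {
            // Hypothetical entity, for illustration only.
            public class Person
            {
                public virtual int Id { get; set; }
                public virtual string Name { get; set; }
            }

            // A minimal Fluent NHibernate class map for the entity above.
            public class PersonMap : ClassMap<Person>
            {
                public PersonMap()
                {
                    Id(x => x.Id);        // primary key
                    Map(x => x.Name);     // simple column mapping
                }
            }
        }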

    Read the article

  • WSDLs generated by Axis2 services can't be read by SoapUI or WSDL2Java

    - by RJCantrell
    I'm converting some services from Apache SOAP to Axis2, so the Java service classes already exist. I created a new project in Eclipse, imported the source, and made sure that the Axis2 project facets were installed and the Axis2 emitter properties are correct. Then, in Eclipse, I selected the service class and chose "Create Web Service," choosing the Axis2 runtime. The service is up and running on my PC, and when I append "?wsdl" to the service's path, I do indeed get a WSDL that I save locally.

    Attempting to import this into SoapUI to build a client gives the error:

        ERROR:org.apache.xmlbeans.XmlException: C:\projects\soapUI\Axis2\DALService.wsdl:0: error: src-resolve: type 'SOAPException@http://www.w3.org/2001/XMLSchema' not found.

    The type it's referring to (SOAPException) is a holdover from the Apache SOAP services. In the service code (not the WSDL), I changed all import references from org.apache.soap.SOAPException (the old Apache SOAP package name) to javax.xml.soap.SOAPException (the Axis2 location). The code compiles and works, once I can access it, but I can't access it without generating a client.

    Any thoughts as to why changing the namespace of an object would keep the generated WSDLs from having the proper namespace references?

    Read the article

  • Schliemann's method of programming language learning

    - by DVK
    Background: The 19th-century German archeologist Heinrich Schliemann was of course famous for his successful quest to find and excavate the city of Troy (an actual archeological site for the Troy of Homer's Iliad). However, he is just as famous for being an astonishing learner of languages - within the space of two years, he taught himself fluent Dutch, English, French, Spanish, Italian and Portuguese, and later went on to learn seven more, including both modern and ancient Greek. One of the methods he famously used was comparison of a known text: take a book in a language you are fluent in, take a good translation of that book in a language you wish to learn, and go over them in parallel. (Various sources cite the book used by Schliemann as the Bible, or, as the link above states, a novel.)

    Now, for the actual question. Has anyone used (or heard of) an equivalent of Schliemann's method for learning a new programming language? That is, instead of basing the learning on references and tutorials, take a somewhat comprehensive set of programs known to have high-quality code in both languages, implementing similar/identical algorithms, and learn by comparing them. I'm curious about personal experiences of applying such an approach, references to something published, or the existence of codebases which could be used for such an approach.

    What got me thinking about the idea was Project Euler and some code snippets I saw on SO, in C++, Perl and Lisp.

    Read the article

  • Rhino Mocks, Dependency Injection, and Separation of Concerns

    - by whatispunk
    I am new to mocking and dependency injection and need some guidance. My application is using a typical n-tier architecture where the BLL references the DAL, and the UI references the BLL but not the DAL. Pretty straightforward.

    Let's say, for example, I have the following classes, each in a separate assembly:

        class MyDataAccess : IMyDataAccess {}
        class MyBusinessLogic {}

    I want to mock MyDataAccess in the tests for MyBusinessLogic, so I added a constructor to the MyBusinessLogic class that takes an IMyDataAccess parameter for the dependency injection. But now when I try to create an instance of MyBusinessLogic in the UI layer, it requires a reference to the DAL. I thought I could define a default constructor on MyBusinessLogic to set a default IMyDataAccess implementation, but not only does this seem like a code smell, it didn't actually solve the problem: I'd still have a public constructor with IMyDataAccess in the signature, so the UI layer still requires a reference to the DAL in order to compile.

    One possible solution I am toying with is to create an internal constructor for MyBusinessLogic with the IMyDataAccess parameter. Then I can use an accessor from the test project to call the constructor. But there's still that smell.

    What is the common solution here? I must just be doing something wrong. How could I improve the architecture?
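
    A common shape for this, sketched below with the question's own type names (it assumes IMyDataAccess lives in an assembly the UI is allowed to reference, such as the BLL or a shared contracts assembly, which is not stated in the post), is plain constructor injection with the concrete DAL type chosen at a composition root rather than inside the BLL:

        // Contracts (could live in the BLL or a shared assembly the UI may reference).
        public interface IMyDataAccess
        {
            string LoadSomething();   // illustrative member, not from the original post
        }

        // DAL assembly: concrete implementation.
        public class MyDataAccess : IMyDataAccess
        {
            public string LoadSomething() { return "data"; }
        }

        // BLL assembly: depends only on the interface, never on the DAL type.
        public class MyBusinessLogic
        {
            private readonly IMyDataAccess _dataAccess;

            public MyBusinessLogic(IMyDataAccess dataAccess)
            {
                _dataAccess = dataAccess;
            }

            public string DoWork()
            {
                return _dataAccess.LoadSomething().ToUpper();
            }
        }

        // Composition root (app startup / IoC container): the only place that knows
        // about both the BLL and the DAL. Tests pass a Rhino Mocks stub instead.
        public static class CompositionRoot
        {
            public static MyBusinessLogic CreateBusinessLogic()
            {
                return new MyBusinessLogic(new MyDataAccess());
            }
        }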

    Read the article

  • Could not load type in Custom Profile provider

    - by Cragly
    I am writing a small console application in C# that references a custom assembly implementing a custom .NET Profile provider. I have added the following sections to my app.config file, which reference the custom class and assembly:

        <system.web>
          <profile defaultProvider="MyCompanyProfileProvider" inherits="MyCompany.Web.User.GenericProfile" automaticSaveEnabled="false">
            <providers>
              <clear/>
              <add name="MyCompanyProfileProvider" connectionStringName="defaultDatabase" applicationName="/myApplication" type="MyCompany.Web.ProfileProvider, MyCompany.Web"/>
            </providers>
            <properties>
              <add name="JobRoleId" type="System.Int32"/>
              <add name="LastCompetencyId" type="System.Int32" defaultValue="0"/>
              <add name="MixSettings" type="System.Xml.XmlDocument"/>
            </properties>
          </profile>
        </system.web>

    However, when I run the app in debug mode I get the following error, as if it is looking in the System.Web assembly rather than the one specified in the app.config file:

        Could not load type 'MyCompany.Web.User.GenericProfile' from assembly 'System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'.

    I have a local web app that also uses the assembly and custom Profile provider, and that works without any problems. I have checked that the referenced assembly is being copied to the output directory. Any ideas?

    Read the article

  • Django authentication in django nonrel on GAE

    - by tooba
    I'm using the Django nonrel project on a Google App Engine project running locally in development. I've created my own models, and these are fine when they are saved and retrieved in the datastore. I'm hoping to use django.contrib.auth to provide the user functionality.

    I can use the shell to create users, and these get assigned an ID. When I create one of my own models which references User, I have to pass in a user ID, as it quite rightly fails otherwise. However, checking via the GAE admin interface, I can't see the User model in the datastore for the users I've created via the shell. Nor can I retrieve the user details from one of my models which references them: calls to mymodel.user.username return nothing. Nor can I log into admin using the username and password I've set up. I can see saved versions of the models I've made in the GAE admin app.

    I get the impression that users are being created somewhere other than the datastore. Is there something else I need to do to use the standard contrib.auth users with django-nonrel and GAE?

    Read the article

  • MSBuild fails, but building inside Visual Studio works fine

    - by Matt
    C#, .NET 2.0. I have an ASP.NET website in a solution, with two other projects (used as library references). When I build (debug or release) in Visual Studio, everything works fine. However, building with MSBuild fails. This build had been working (it's actually invoked via an NAnt task). The only thing that has changed is that I have a new user control whose type I am referencing in my code-behind.

    The offending code is in my ASPX code-behind. MessageAlert is the user control:

        MessageAlert userControl = this.LoadControl("~/UserControls/MessageAlert.ascx") as MessageAlert;
        userControl.UserMessage = message;
        this.UserMessages.Controls.Add(userControl);

    In order to get Visual Studio to recognize the type MessageAlert, I had to:

    1. Set ClassName="MessageAlert" in the @Control markup at the top of the user control (because using the auto-generated UserControls_MessageAlert wasn't working either)
    2. Register the user control in the markup of my ASPX, using an @Register
    3. Add a "using ASP" to the top of my code-behind

    After those steps, I could successfully reference the MessageAlert type in my code-behind from Visual Studio. But from MSBuild I get "The type or namespace name 'MessageAlert' could not be found (are you missing a using directive or an assembly reference?)".

    The MSBuild execution is very simple - it points to the very same solution file and sets the configuration property to release. It seems, based on the number of steps I had to go through to get type references to MessageAlert in Visual Studio, that there is something missing in the MSBuild process. But what? Doesn't Visual Studio in fact invoke MSBuild behind the scenes? Is there a better way to reference a UserControl type in the code-behind of an ASPX?

    Edit: To clarify, the MessageAlert user control is not in the other referenced assemblies/projects. I mentioned them because, together with the website, they compose the solution file, which is the same .sln file being built by MSBuild.

    Read the article

  • dumping the source code for an anonymous function

    - by intuited
    I'm working with a lot of anonymous functions, i.e. functions declared as part of a dictionary, a.k.a. "methods". It's getting pretty painful to debug, because I can't tell which function the errors are happening in. Vim's backtraces look like this:

        Error detected while processing function NamedFunction..2111..2105:
        line 1:
        E730: using List as a String

    This trace shows that the error occurred three levels down the stack, on the first line of anonymous function #2105. That is, NamedFunction called anonymous function #2111, which called anonymous function #2105. NamedFunction is one declared through the normal "function NamedFunction() ... endfunction" syntax; the others were declared using code like "function dict.func() ... endfunction".

    So obviously I'd like to find out which function has number 2105. Assuming that it's still in scope, it's possible to find out which dictionary entry references it by dumping all of the dictionary variables that might contain that reference. This is sort of awkward, and it's difficult to be systematic about it, though I guess I could code up a function to search through all of the loaded dictionaries for a reference to that function, watching out for circular references. Although to be really thorough, it would have to search not only script-local and global dictionaries, but buffer-local dictionaries as well; is there a way to access another buffer's local variables?

    Anyway, I'm wondering if it's possible to dump the source code for the anonymous function instead. That would be a lot easier and probably more reliable.

    Read the article

  • Hidden features of Perl?

    - by Adam Bellaire
    What are some really useful but esoteric language features in Perl that you've actually been able to employ to do useful work?

    Guidelines:
    - Try to limit answers to the Perl core and not CPAN
    - Please give an example and a short description

    Hidden features also found in other languages' hidden features (these are all from Corion's answer):
    - C#: Duff's Device; Portability and Standardness; Quotes for whitespace delimited lists and strings; Aliasable namespaces
    - Java: Static Initializers
    - JavaScript: Functions are First Class citizens; Block scope and closure; Calling methods and accessors indirectly through a variable
    - Ruby: Defining methods through code
    - PHP: Pervasive online documentation; Magic methods; Symbolic references
    - Python: One line value swapping; Ability to replace even core functions with your own functionality

    Other hidden features:
    - Operators: The bool quasi-operator; The flip-flop operator (also used for list construction); The ++ and unary - operators work on strings; The repetition operator; The spaceship operator; The || operator (and // operator) to select from a set of choices; The diamond operator; Special cases of the m// operator; The tilde-tilde "operator"
    - Quoting constructs: The qw operator; Letters can be used as quote delimiters in q{}-like constructs; Quoting mechanisms
    - Syntax and Names: There can be a space after a sigil; You can give subs numeric names with symbolic references; Legal trailing commas; Grouped Integer Literals; hash slices; Populating keys of a hash from an array
    - Modules, Pragmas, and command-line options: use strict and use warnings; Taint checking; Esoteric use of -n and -p; CPAN; overload::constant; IO::Handle module; Safe compartments; Attributes
    - Variables: Autovivification; The $[ variable; tie; Dynamic Scoping; Variable swapping with a single statement
    - Loops and flow control: Magic goto; for on a single variable; continue clause; Desperation mode
    - Regular expressions: The \G anchor; (?{}) and (??{}) in regexes
    - Other features: The debugger; Special code blocks such as BEGIN, CHECK, and END; The DATA block; New Block Operations; Source Filters; Signal Hooks; map (twice); Wrapping built-in functions; The eof function; The dbmopen function; Turning warnings into errors
    - Other tricks, and meta-answers: cat files, decompressing gzips if needed; Perl Tips

    See also: Hidden features of C, Hidden features of C#, Hidden features of C++, Hidden features of Java, Hidden features of JavaScript, Hidden features of Ruby, Hidden features of PHP, Hidden features of Python

    Read the article

  • How do you build a Windows Workflow Project with NAnt 0.90?

    - by LockeCJ
    I'm trying to build a Windows Workflow (WF) project using NAnt, but it doesn't seem to be able to build the ".xoml" and ".rules" files. Here is the csc task that I'm using:

        <csc debug="${build.Debug}" warninglevel="${build.WarningLevel}" target="library"
             output="${path::combine(build.OutputDir,assembly.Name+'.dll')}"
             verbose="${build.Verbose}"
             doc="${path::combine(build.OutputDir,assembly.Name+'.xml')}">
          <sources basedir="${assembly.BaseDir}">
            <include name="**/*.cs" />
            <include name="**/*.xoml" />
            <include name="**/*.rules" />
          </sources>
          <resources basedir="${assembly.BaseDir}">
            <include name="**/*.xsd" />
            <include name="**/*.resx" />
          </resources>
          <references>
            ...
          </references>
        </csc>

    Here's the output:

        Compiling 21 files to 'c:\Output\MyWorkFlowProject.dll'.
        [csc] c:\Projects\MyWorkFlowProject\AProcessFlow.xoml(1,1): error CS0116: A namespace does not directly contain members such as fields or methods
        [csc] c:\Projects\MyWorkFlowProject\BProcessFlow.xoml(1,1): error CS0116: A namespace does not directly contain members such as fields or methods
        [csc] c:\Projects\MyWorkFlowProject\CProcessFlow.rules(1,1): error CS0116: A namespace does not directly contain members such as fields or methods
        [csc] c:\Projects\MyWorkFlowProject\CProcessFlow.xoml(1,1): error CS0116: A namespace does not directly contain members such as fields or methods

    Read the article

  • Cross join (pivot) with n-n table containing values

    - by Styx31
    I have 3 tables:

        TABLE MyColumn (
            ColumnId INT NOT NULL,
            Label VARCHAR(80) NOT NULL,
            PRIMARY KEY (ColumnId)
        )

        TABLE MyPeriod (
            PeriodId CHAR(6) NOT NULL, -- format yyyyMM
            Label VARCHAR(80) NOT NULL,
            PRIMARY KEY (PeriodId)
        )

        TABLE MyValue (
            ColumnId INT NOT NULL,
            PeriodId CHAR(6) NOT NULL,
            Amount DECIMAL(8, 4) NOT NULL,
            PRIMARY KEY (ColumnId, PeriodId),
            FOREIGN KEY (ColumnId) REFERENCES MyColumn(ColumnId),
            FOREIGN KEY (PeriodId) REFERENCES MyPeriod(PeriodId)
        )

    MyValue rows are only created when a real value is provided. I want my results in a tabular form, such as:

        Column   | Month 1 | Month 2 | Month 4 | Month 5 |
        Potatoes |   25.00 |    5.00 |    1.60 |    NULL |
        Apples   |    2.00 |    1.50 |    NULL |    NULL |

    I have successfully created a cross join:

        SELECT MyColumn.Label AS [Column],
               MyPeriod.Label AS [Period],
               ISNULL(MyValue.Amount, 0) AS [Value]
        FROM MyColumn
        CROSS JOIN MyPeriod
        LEFT OUTER JOIN MyValue
            ON (MyValue.ColumnId = MyColumn.ColumnId AND MyValue.PeriodId = MyPeriod.PeriodId)

    Or, in LINQ:

        from p in MyPeriods
        from c in MyColumns
        join v in MyValues on new { c.ColumnId, p.PeriodId } equals new { v.ColumnId, v.PeriodId } into values
        from nv in values.DefaultIfEmpty()
        select new { Column = c.Label, Period = p.Label, Value = nv.Amount }

    And I have seen how to create a pivot in LINQ (here or here). Assuming MyDatas is a view with the result of the previous query:

        from c in MyDatas
        group c by c.Column into line
        select new
        {
            Column = line.Key,
            Month1 = line.Where(l => l.Period == "Month 1").Sum(l => l.Value),
            Month2 = line.Where(l => l.Period == "Month 2").Sum(l => l.Value),
            Month3 = line.Where(l => l.Period == "Month 3").Sum(l => l.Value),
            Month4 = line.Where(l => l.Period == "Month 4").Sum(l => l.Value)
        }

    But I want to find a way to create a result set where, if possible, the Month1, ... properties are dynamic.

    Note: a solution which results in an n+1 query:

        from c in MyDatas
        group c by c.Column into line
        select new
        {
            Column = line.Key,
            Months = from l in line
                     group l by l.Period into period
                     select new { Period = period.Key, Amount = period.Sum(l => l.Value) }
        }

    Read the article

  • User Control SiteMap Provider Rendering Error

    - by Serexx
    I have created a custom server control that renders the SiteMap as a pure UL for CSS styling. At run time it renders properly, but in the VS2008 Design view VS shows this error:

        Error Rendering Control - menuMain
        An unhandled exception has occurred. The provider 'AspNetXmlSiteMapProvider' specified for the defaultProvider does not exist in the providers collection.

    I have 'AspNetXmlSiteMapProvider' specified in web.config as per here: link text

    While I am happy that the code runs properly, the Designer error is bothersome, and if the underlying issue might cause the code to break in some circumstances then I need to understand what is going on...

    The code explicitly references the sitemap in the Render method with:

        int level = 1;
        string ul = string.Format("<div class='{0}' id='{1}'>{2}</div>",
            CssClassName, this.ID.ToString(), EnumerateNodesRecursive(SiteMap.RootNode, level));
        output.Write(ul);

    and the recursive method called references SiteMap.CurrentNode. Otherwise there are no explicit sitemap references in the code. Does anyone have any ideas why the Designer is complaining?

    Read the article

  • Fluent NHibernate - Cascade delete a child object when no explicit parent->child relationship exists

    - by John Price
    I've got an application that keeps track of (for the sake of an example) what drinks are available at a given restaurant. My domain model looks like:

        class Restaurant
        {
            public IEnumerable<RestaurantDrink> GetRestaurantDrinks() { ... }
            //other various properties
        }

        class RestaurantDrink
        {
            public Restaurant Restaurant { get; set; }
            public Drink Drink { get; set; }
            public string DrinkVariety { get; set; }   //"fountain drink", "bottled", etc.
            //other various properties
        }

        class Drink
        {
            public string Name { get; set; }
            public string Manufacturer { get; set; }
            //other various properties
        }

    My db schema is (I hope) about what you'd expect: "RestaurantDrinks" is essentially a mapping table between Restaurants and Drinks with some extra properties (like "DrinkVariety") tacked on. Using Fluent NHibernate to set up the mappings, I've set up a "HasMany" relationship from Restaurants to RestaurantDrinks that causes the latter to be deleted when their parent Restaurant is deleted.

    My question is: given that "Drink" does not have any property on it that explicitly references RestaurantDrinks (the relationship only exists in the underlying database), can I set up a mapping that will cause RestaurantDrinks to be deleted if their associated Drink is deleted?

    Update: I've been trying to set up the mapping from the "RestaurantDrink" end of things using

        References(x => x.Drink)
            .Column("DrinkId")
            .Cascade.All();

    but this doesn't seem to work (I still get an FK violation when deleting a Drink).
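
    For reference, the Restaurant-side "HasMany" cascade that the question describes would look roughly like the sketch below. This is only a sketch: it assumes a Restaurant.Id property and a RestaurantDrinks collection property (the post only shows a GetRestaurantDrinks() method), so the names are assumptions rather than the poster's actual mapping.

        using FluentNHibernate.Mapping;

        // Assumed mapping sketch, not the poster's actual code.
        public class RestaurantMap : ClassMap<Restaurant>
        {
            public RestaurantMap()
            {
                Id(x => x.Id);                       // assumes an Id property exists
                HasMany(x => x.RestaurantDrinks)     // assumes a collection property
                    .Inverse()
                    .Cascade.AllDeleteOrphan();      // deleting a Restaurant deletes its RestaurantDrinks
            }
        }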

    Read the article

  • When writing a game, should you make objects/enemies/etc. have unique ID numbers?

    - by SLC
    I have recently encountered some issues with merely passing references to objects/enemies in a game I am making, and am wondering if I am using the wrong approach.

    The main issue I have is disposing of enemies and objects when other enemies or players may still have links to them. For example, if you have a Rabbit and a Wolf, the Wolf may have selected the Rabbit to be its target. What I am doing is giving the wolf a "GameObject Target = null;" field, and when it decides it is hungry, the Target becomes the Rabbit. If the Rabbit then dies, such as another wolf killing it, it cannot be removed from the game properly because this wolf still has a reference to it.

    In addition, if you are using a decoupled approach, the rabbit could be hit by lightning, reducing its health to below zero. When it next updates itself, it realises it has died and is removed from the game... but there is no way to update everything that is interested in it.

    If you gave every enemy a unique ID, you could simply use references to that instead, and use a central lookup class that handled it. If the monster died, the lookup class could remove it from its own index, and subsequently anything trying to access it would be informed that it's dead and could act accordingly.

    Any thoughts on this?
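
    A minimal C# sketch of the central lookup idea described in the question (class and member names are assumptions for illustration): entities are referred to by ID, and a lookup that no longer resolves tells the caller that the target is gone.

        using System.Collections.Generic;

        // Hypothetical sketch of an ID-based registry for game entities.
        public class GameObject
        {
            public int Id { get; internal set; }
        }

        public static class EntityRegistry
        {
            private static readonly Dictionary<int, GameObject> _entities =
                new Dictionary<int, GameObject>();
            private static int _nextId = 1;

            // Assigns a unique ID and starts tracking the entity.
            public static int Register(GameObject obj)
            {
                obj.Id = _nextId++;
                _entities[obj.Id] = obj;
                return obj.Id;
            }

            // Returns null once the entity has been removed (e.g. the Rabbit died),
            // so holders of stale IDs can react instead of touching a dead object.
            public static GameObject Find(int id)
            {
                GameObject obj;
                return _entities.TryGetValue(id, out obj) ? obj : null;
            }

            public static void Remove(int id)
            {
                _entities.Remove(id);
            }
        }

        // Usage sketch: the Wolf stores its target's ID, not an object reference.
        public class Wolf : GameObject
        {
            public int? TargetId;   // null = no current target

            public void Update()
            {
                if (TargetId.HasValue && EntityRegistry.Find(TargetId.Value) == null)
                    TargetId = null;   // target is gone; pick a new one later
            }
        }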

    Read the article

  • XML: How to reprepresent objects with multiple occurences?

    - by savras
    Hi, I need to save objects that can occur multiple times. Each object is marked with a unique identifier. When it is serialized the first time, all its properties are written; after that, only references are used:

        <actionHistory>
          <add>
            <figure id="1" xsi:type="point">
              <position x="1" y="2" />
            </figure>
          </add>
          <change>
            <target ref="1" />
            <property>x</property>
            <value>3</value>
          </change>
        </actionHistory>

    The 'target' element only references a point saved before, but it could contain the definition of a new figure as well. There is also a figure class hierarchy involved. Is there any way to express this using XML Schema? Any suggestions on how to improve the code above will also be appreciated.

    Read the article

  • Cascading Deletes in SQL Server 2008 not working.

    - by Vaccano
    I have the following table setup:

        Bag
         +-> BagID (Guid)
         +-> BagNumber (Int)

        BagCommentRelation
         +-> BagID (Int)
         +-> CommentID (Guid)

        BagComment
         +-> CommentID (Guid)
         +-> Text (varchar(200))

    BagCommentRelation has foreign keys to Bag and BagComment, so I turned on cascading deletes for both of those foreign keys. But when I delete a bag, it does not delete the comment row. Do I need to break out a trigger for this? Or am I missing something? (I am using SQL Server 2008.)

    Note: Posting the requested SQL. This is the definition of the BagCommentRelation table. (I had the type of the BagId wrong: I thought it was a guid, but it is an int.)

        CREATE TABLE [dbo].[Bag_CommentRelation](
            [Id] [int] IDENTITY(1,1) NOT NULL,
            [BagId] [int] NOT NULL,
            [Sequence] [int] NOT NULL,
            [CommentId] [int] NOT NULL,
            CONSTRAINT [PK_Bag_CommentRelation] PRIMARY KEY CLUSTERED
            (
                [BagId] ASC,
                [Sequence] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

        ALTER TABLE [dbo].[Bag_CommentRelation] WITH CHECK
            ADD CONSTRAINT [FK_Bag_CommentRelation_Bag] FOREIGN KEY([BagId])
            REFERENCES [dbo].[Bag] ([Id])
            ON DELETE CASCADE
        GO

        ALTER TABLE [dbo].[Bag_CommentRelation] CHECK CONSTRAINT [FK_Bag_CommentRelation_Bag]
        GO

        ALTER TABLE [dbo].[Bag_CommentRelation] WITH CHECK
            ADD CONSTRAINT [FK_Bag_CommentRelation_Comment] FOREIGN KEY([CommentId])
            REFERENCES [dbo].[Comment] ([CommentId])
            ON DELETE CASCADE
        GO

        ALTER TABLE [dbo].[Bag_CommentRelation] CHECK CONSTRAINT [FK_Bag_CommentRelation_Comment]
        GO

    The row in this table deletes, but the row in the comment table does not.

    Read the article

  • Data in two databases, eager spool resulting in query

    - by Valkyrie
    I have two databases in SQL2k5: one that holds a large amount of static data (SQL Database 1) (never updated but frequently inserted into) and one that holds relational data (SQL Database 2) related to the static data. They're separated mainly because of corporate guidelines and business requirements: assume for the following problem that combining them is not practical.

    There are places in SQLDB2 where PKs in SQLDB1 are referenced; triggers control the referential integrity, since cross-database relationships are troublesome in SQL Server. BUT, because of the large amount of data in SQLDB1, I'm getting eager spools on queries that join from the Id in SQLDB2 that references the data in SQLDB1. (With me so far? Maybe an example will help:)

        SELECT t.Id, t.Name, t2.Company
        FROM SQLDB1.table t
        INNER JOIN SQLDB2.table t2
            ON t.Id = t2.FKId

    This query results in an eager spool that's 84% of the load of the query; the table in SQLDB1 has 35M rows, so it's completely choking this query. I can't create a view on the table in SQLDB1 and use that as my FK/index; it doesn't want me to create a constraint based on a view.

    Anyone have any idea how I can fix this huge bottleneck? (Short of putting the static data in the first db: believe me, I've argued that one until I'm blue in the face, to no avail.) Thanks! valkyrie

    Edit: I also can't create an indexed view, because you can't put schemabinding on a view that references a table outside the database where the view resides. Dang it.

    Read the article

  • NAnt not running NUnit tests

    - by ctford
    I'm using NUnit 2.5 and NAnt 0.85 to compile a .NET 3.5 library. Because NAnt 0.85 doesn't support .NET 3.5 out of the box, I've added an entry for the 3.5 framework to NAnt.exe.config. 'MyLibrary' builds, but when I hit the "test" target to execute the NUnit tests, none of them seem to run:

        [nunit2] Tests run: 0, Failures: 0, Not run: 0, Time: 0.012 seconds

    Here are the entries in my NAnt.build file for building and running the tests:

        <target name="build_tests" depends="build_core">
          <mkdir dir="Target" />
          <csc target="library" output="Target\Test.dll" debug="true">
            <references>
              <include name="Target\MyLibrary.dll"/>
              <include name="Libraries\nunit.framework.dll"/>
            </references>
            <sources>
              <include name="Test\**\*.cs" />
            </sources>
          </csc>
        </target>

        <target name="test" depends="build_tests">
          <nunit2>
            <formatter type="Plain" />
            <test assemblyname="Target\Test.dll" />
          </nunit2>
        </target>

    Is there some versioning issue I need to be aware of? Test.dll runs fine in the NUnit GUI. The testing DLL is definitely being found, because if I move it I get the following error:

        Failure executing test(s). If you assembly is not build using NUnit 2.2.8.0... Could not load file or assembly 'Test' or one of its dependencies...

    I would be grateful if anyone could point me in the right direction or describe a similar situation they have encountered.

    Edit: I have since tried it with NAnt 0.86 Beta 1, and the same problem occurs.

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >