Search Results

Search found 35433 results on 1418 pages for 'document based'.


  • Database operations through SQL: Database Restore ...

    - by zc0000
    If you use SQL to perform a database restore, write the SQL carefully; any trivial mistake may prevent successful execution. The cases listed here are based on simple experiments with the option: MOVE 'logical_file_name_in_backup' TO 'operating_system_file_name'. If the logical file name is not set correctly, the following error is returned: Logical file 'FILE_NAME' is not part of database 'DATABASE_NAME'. Use RESTORE FILELISTONLY to list the logical file names. RESTORE DATABASE is terminating abnormally. To be continued...
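    A minimal C# sketch of the full sequence described above (server, database name, and backup/file paths are placeholder assumptions, not from the original post): run RESTORE FILELISTONLY first to discover the logical names, then run RESTORE DATABASE ... WITH MOVE using exactly those names.

    // Hypothetical sketch: discover logical file names, then restore with MOVE.
    // Connection string, database name, and paths are placeholders.
    using System;
    using System.Data.SqlClient;

    class RestoreSketch
    {
        static void Main()
        {
            const string connStr = "Server=.;Database=master;Integrated Security=true";
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();

                // Step 1: list the logical file names contained in the backup.
                using (var list = new SqlCommand(
                    @"RESTORE FILELISTONLY FROM DISK = N'C:\backups\mydb.bak'", conn))
                using (var reader = list.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader["LogicalName"]);
                }

                // Step 2: restore, mapping each logical file to a physical path.
                // The names 'MyDb' and 'MyDb_log' must match step 1 exactly,
                // otherwise SQL Server raises the "Logical file ... is not part
                // of database ..." error quoted above.
                using (var restore = new SqlCommand(
                    @"RESTORE DATABASE MyDb FROM DISK = N'C:\backups\mydb.bak'
                      WITH MOVE N'MyDb' TO N'C:\data\MyDb.mdf',
                           MOVE N'MyDb_log' TO N'C:\data\MyDb_log.ldf',
                           REPLACE", conn))
                {
                    restore.CommandTimeout = 0; // restores can run long
                    restore.ExecuteNonQuery();
                }
            }
        }
    }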

    Read the article

  • how to implement a nice scanline effect using libgdx

    - by Alexandre GUIDET
    I am working on an old-school platformer based on libgdx. My first attempt is to make a little texture and fill a rect above the whole screen, but it seems that I am messing around with the orthographic camera (I use two cameras, one for the tilemap and one to project the scanline filter). Sometimes the texture is stuck on the tilemap and sometimes it is too large and covers the whole screen in black. Is my approach of using two cameras correct? Does someone have a solution to achieve this retro effect using libgdx (see Maldita Castilla)? Thanks

    Read the article

  • Creating Descriptive Flex Field (DFF) Bean in OAF

    - by Manoj Madhusoodanan
    In this blog I will explain how to add a custom DFF in a custom OAF page. I am using the XXCUST_DFF_DEMO table to store the DFF values, and a custom DFF named XXCUST_PERSON_DFF. The following steps need to be performed to create this solution: 1) Register the custom table in Oracle Application 2) Register the DFF 3) Define the segments of the DFF 4) Create BC4J components for OAF and an OA Page which holds the DFF. I will explain the steps in detail below. Register the custom table in Oracle Application: Since I am using a custom DFF here, I have to register the custom table in which I am going to capture the values. Please click here to see the table script. I am using the AD_DD package to register the custom table. Please click here to see the table registration script. Please verify that the table has registered successfully. Navigation: Application Developer > Application > Database > Table. The table has registered successfully. Register the DFF: The next step is to register the DFF. Navigate to Application Developer > Flex Field > Descriptive > Register. Give details as below. Click on Reference Fields and set the Reference Field as ATTRIBUTE_CATEGORY. Click on the Columns button to verify that the columns ATTRIBUTE_CATEGORY, ATTRIBUTE1 ... ATTRIBUTE30 are enabled. The DFF has registered successfully. Define the segments of the DFF: Here I am going to define the segments of the DFF. Navigate to Application Developer > Flex Field > Descriptive > Segments and query for "XXCUST - Person DFF". Uncheck "Freeze Flexfield Definition". For the reference field of my DFF I want to display a value set which has the values "Permanent" and "Contractor", so I define a value set XXCUST_EMPLOYMENT_TYPE. Navigation: Application Developer > Flex Field > Descriptive > Validation > Sets. After that, assign the values to the value set created above. Navigation: Application Developer > Flex Field > Descriptive > Validation > Values. Assign XXCUST_EMPLOYMENT_TYPE to the Context Field Valueset. Set up the Context Field Values based on the following table of context codes and segments: Global Data Elements - Phone Number, Email, Fax; Contractor - Manager Extension Number, CSP Name; Permanent - Extension Number, Access Card Number. Phone Number, Email and Fax always display. When the user chooses the Context Value "Contractor", Manager Extension Number and CSP Name will show; in the case of "Permanent", Extension Number and Access Card Number will show. Assign the value sets as well, as follows. For Global Data Elements the segments are as follows. For "Contractor" the segments are as follows. For "Permanent" the segments are as follows. Check the "Freeze Flexfield Definition" check box and save. The standard concurrent program "Flexfield View Generator" will generate the XXCUST_DFF_DEMO_DFV view which we mentioned in the DFF registration step. Now the DFF has been created successfully and is ready to use. Create BC4J components for OAF and an OA Page which holds the DFF: Create the BC4J components (EO, VO and AM) appropriately, and create the page based on the created VO. For the DFF, create an item of type "flex" with the following properties. Note: You cannot create a flex item directly under a messageComponentLayout region, but you can create a messageLayout region under the messageComponentLayout region and add the flex item under the messageLayout region. In the Segment List property give the segment names which you want to display. The syntax is Global Data Elements|SEGMENT 1|...|SEGMENT N||[Context Code1]|SEGMENT 1|...|SEGMENT N||[Context Code2]|SEGMENT 1|...|SEGMENT N||...
Eg: Global Data Elements|Phone Number|Email|Fax||Contractor|Manager Extension Number|CSP Name||Permanent|Extension Number|Access Card Number. When you change the Context Value, the corresponding segments display automatically via PPR in the page. You can attach a partial action to the DFF bean programmatically so that you can identify the action related to the DFF; pageContext.getParameter(EVENT_PARAM) will return "FLEX_CONTEXT_CHANGEDPersonDFF" when you change the DFF Context. The page is ready and you can test it. When you choose "Contractor" you can see the following output. When you choose "Permanent" you can see the following output. Give proper values and press Apply. You can see the values populated in the table.

    Read the article

  • Parallelism in .NET – Part 13, Introducing the Task class

    - by Reed
    Once we’ve used a task-based decomposition to decompose a problem, we need a clean abstraction usable to implement the resulting decomposition. Given that task decomposition is founded upon defining discrete tasks, .NET 4 has introduced a new API for dealing with task related issues, the aptly named Task class. The Task class is a wrapper for a delegate representing a single, discrete task within your decomposition. We will go into various methods of construction for tasks later, but, when reduced to its fundamentals, an instance of a Task is nothing more than a wrapper around a delegate with some utility functionality added. In order to fully understand the Task class within the new Task Parallel Library, it is important to realize that a task really is just a delegate – nothing more. In particular, note that I never mentioned threading or parallelism in my description of a Task. Although the Task class exists in the new System.Threading.Tasks namespace: Tasks are not directly related to threads or multithreading. Of course, Task instances will typically be used in our implementation of concurrency within an application, but the Task class itself does not provide the concurrency used. The Task API supports using Tasks in an entirely single threaded, synchronous manner. Tasks are very much like standard delegates. You can execute a task synchronously via Task.RunSynchronously(), or you can use Task.Start() to schedule a task to run, typically asynchronously. This is very similar to using delegate.Invoke to execute a delegate synchronously, or using delegate.BeginInvoke to execute it asynchronously. The Task class adds some nice functionality on top of a standard delegate which improves usability in both synchronous and multithreaded environments. The first addition provided by Task is a means of handling cancellation via the new unified cancellation mechanism of .NET 4. If the wrapped delegate within a Task raises an OperationCanceledException during its operation, which is typically generated via calling ThrowIfCancellationRequested on a CancellationToken, or if the CancellationToken used to construct a Task instance is flagged as canceled, the Task’s IsCanceled property will be set to true automatically. This provides a clean way to determine whether a Task has been canceled, often without requiring specific exception handling. Tasks also provide a clean API which can be used for waiting on a task. Although the Task class explicitly implements IAsyncResult, Tasks provide a nicer usage model than the traditional .NET Asynchronous Programming Model. Instead of needing to track an IAsyncResult handle, you can just directly call Task.Wait() to block until a Task has completed. Overloads exist for providing a timeout, a CancellationToken, or both to prevent waiting indefinitely. In addition, the Task class provides static methods for waiting on multiple tasks – Task.WaitAll and Task.WaitAny, again with overloads providing timeout options. This provides a very simple, clean API for waiting on single or multiple tasks. Finally, Tasks provide a much nicer model for Exception handling. If the delegate wrapped within a Task raises an exception, the exception will automatically get wrapped into an AggregateException and exposed via the Task.Exception property. This exception is stored with the Task directly, and does not tear down the application.
Later, when Task.Wait() (or Task.WaitAll or Task.WaitAny) is called on this task, an AggregateException will be raised at that point if any of the tasks raised an exception. For example, suppose we have the following code: Task taskOne = new Task( () => { throw new ApplicationException("Random Exception!"); }); Task taskTwo = new Task( () => { throw new ArgumentException("Different exception here"); }); // Start the tasks taskOne.Start(); taskTwo.Start(); try { Task.WaitAll(new[] { taskOne, taskTwo }); } catch (AggregateException e) { Console.WriteLine(e.InnerExceptions.Count); foreach (var inner in e.InnerExceptions) Console.WriteLine(inner.Message); } Here, our routine will print: 2 Different exception here Random Exception! Note that we had two separate tasks, each of which raised a distinctly different type of exception. We can handle this cleanly, with very little code, in a much nicer manner than the Asynchronous Programming API. We no longer need to handle TargetInvocationException or worry about implementing the Event-based Asynchronous Pattern properly by setting the AsyncCompletedEventArgs.Error property. Instead, we just raise our exception as normal, and handle AggregateException in a single location in our calling code.
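    As a companion to the example above, here is a minimal sketch of the cancellation behavior described earlier (the names and the Sleep are purely illustrative): the delegate calls ThrowIfCancellationRequested, and because the same CancellationToken was passed to the Task constructor, the Task transitions to a canceled state and IsCanceled becomes true.

    // Minimal cancellation sketch: the token raises OperationCanceledException
    // inside the delegate, and the Task's IsCanceled property is set for us.
    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class CancellationSketch
    {
        static void Main()
        {
            var cts = new CancellationTokenSource();
            Task worker = new Task(() =>
            {
                while (true)
                {
                    cts.Token.ThrowIfCancellationRequested(); // cooperative check
                    Thread.Sleep(50); // simulated work
                }
            }, cts.Token); // same token passed to the Task constructor

            worker.Start();
            cts.Cancel();

            try
            {
                worker.Wait(); // surfaces the cancellation as an AggregateException
            }
            catch (AggregateException) { /* expected on cancellation */ }

            Console.WriteLine(worker.IsCanceled); // True
        }
    }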

    Read the article

  • Two python distributions, sudo picking the wrong one

    - by DHK
    I'm back to Linux after an over-10-year abstinence (fool me thinks), and a little rusty in the sysadmin department. I'm faced with an issue with my Python distribution. I'm using Python 2.7, based on the Anaconda flavour. I followed the standard guidance, but recently I discovered an issue that I'm not sure how to fix. Under sudo, the standard Python that comes with Ubuntu is used; under my user account, python points to the Anaconda version: dhk@localhost:~/home/$ which python /opt/anaconda/bin/python dhk@localhost:~/home/$ sudo which python /usr/bin/python This is an issue, as sudo pip [anything] usually acts on the wrong directory, yet I cannot use pip without sudo.

    Read the article

  • Importing Multiple Schemas to a Model in Oracle SQL Developer Data Modeler

    - by thatjeffsmith
    Your physical data model might stretch across multiple Oracle schemas. Or maybe you just want a single diagram containing tables, views, etc. spanning more than a single user in the database. The process for importing a data dictionary is the same, regardless of whether you want to suck in objects from one schema or many schemas. Let’s take a quick look at how to get started with a data dictionary import. I’m using Oracle SQL Developer in this example. The process is nearly identical in Oracle SQL Developer Data Modeler – the only difference being you’ll use the ‘File’ menu to get started versus the ‘File – Data Modeler’ menu in SQL Developer. Remember, the functionality is exactly the same whether you use SQL Developer or SQL Developer Data Modeler when it comes to the data modeling features – you’ll just have a cleaner user interface in SQL Developer Data Modeler. Importing a Data Dictionary to a Model You’ll want to open or create your model first. You can import objects to an existing or new model. The easiest way to get started is to simply open the ‘Browser’ under the View menu. The Browser allows you to navigate your open designs/models. You’ll see an ‘Untitled_1’ model by default. I’ve renamed mine to ‘hr_sh_scott_demo.’ Now go back to the File menu, expand the ‘Data Modeler’ section, and select ‘Import – Data Dictionary.’ This is a fancy way of saying, ‘suck objects out of the database into my model’. Connect! If you haven’t already defined a connection to the database you want to reverse engineer, you’ll need to do that now. I’m going to assume you already have that connection – so select it, and hit the ‘Next’ button. Select the Schema(s) to be imported: select one or more schemas you want to import. The schemas selected on this page of the wizard will dictate the lists of tables, views, synonyms, and everything else you can choose from in the next wizard step to import. For brevity, I have selected ALL tables, views, and synonyms from 3 different schemas: HR, SCOTT, and SH. Once I hit the ‘Finish’ button in the wizard, SQL Developer will interrogate the database and add the objects to our model. The Big Model and the 3 Little Models I can now see ALL of the objects I just imported in the ‘hr_sh_scott_demo’ relational model in my design tree, and in my relational diagram. Quick Tip: Oracle SQL Developer calls what most folks think of as a ‘Physical Model’ the ‘Relational Model.’ Same difference, mostly. In SQL Developer, a Physical model allows you to define partitioning schemes, advanced storage parameters, and add your PL/SQL code. You can have multiple physical models per relational model. For example, I might have a 4 Node RAC in Production that uses partitioning, but in test/dev, only have a single instance with no partitioning. I can have models for both of those physical implementations. The list of tables in my relational model. Wouldn’t it be nice if I could segregate the objects based on their schema? Good news, you can! And it’s done by default. Several of you might already know where I’m going with this – SUBVIEWS. You can easily create a ‘SubView’ by selecting one or more objects in your model or diagram and adding them to a new SubView. SubViews are just mini-models. They contain a subset of objects from the main model. This is very handy when you want to break your model into smaller, more digestible parts. 
The model information is identical across the model and subviews, so you don’t have to worry about making a change in one place and not having it propagate across your design. SubViews can be used as filters when you create reports and exports as well. So instead of generating a PDF for everything, just show me what’s in my ‘ABC’ subview. But, I don’t want to do any work! Remember, I’m really lazy. More good news – it’s already done by default! The schemas are automatically used to create default SubViews. Auto-Navigate to the Object in the Diagram In the subview tree node, right-click on the object you want to navigate to. You can ask to be taken to the main model view or to the SubView location. If you haven’t already opened the SubView in the diagram, it will be automatically opened for you. The SubView diagram only contains the objects from that SubView. Your SubView might still be pretty big, many dozens of objects, so don’t forget about the ‘Navigator’ either! In summary, use the ‘Import’ feature to add existing database objects to your model. If you import from multiple schemas, take advantage of the default schema-based SubViews to help you manage your models! Sometimes less is more!

    Read the article

  • SQLAuthority News – Storage and SQL Server Capacity Planning and Configuration – SharePoint Server 2010

    - by pinaldave
    Just a day ago, I was asked how to plan SQL Server storage capacity. Here is an excellent article published by Microsoft regarding SQL Server capacity planning for SharePoint 2010. This article touches all the vital areas of the subject. Here are its bullet points: gather storage and SQL Server space and I/O requirements; choose SQL Server version and edition; design storage architecture based on capacity and I/O requirements; determine memory requirements; understand network topology requirements; configure SQL Server; validate storage performance and reliability. Read the original article published by Microsoft here: Storage and SQL Server Capacity Planning and Configuration – SharePoint Server 2010. A question to all the SharePoint developers and administrators: do you use the whitepapers and articles to decide the capacity, or do you just start with the application and plan the storage as you progress? Please let me know your opinion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology Tagged: SharePoint

    Read the article

  • Identifying Data Model Changes Between EBS 12.1.3 and Prior EBS Releases

    - by Steven Chan
    The EBS 12.1.3 Release Content Document (RCD, Note 561580.1) summarizes the latest functional and technology stack-related updates in a specific release. The E-Business Suite Electronic Technical Reference Manual (eTRM) summarizes the database objects in a specific EBS release. Those are useful references, but sometimes you need to find out which database objects have changed between one EBS release and another. This kind of information about the differences or deltas between two releases is useful if you have customized or extended your EBS instance and plan to upgrade to EBS 12.1.3. Where can you find that information? Answering that question has just gotten a lot easier. You can now use a new EBS Data Model Comparison Report tool: EBS Data Model Comparison Report Overview (Note 1290886.1). This new tool lists the database object definition changes between the following source and target EBS releases: EBS 11.5.10.2 and EBS 12.1.3; EBS 12.0.4 and EBS 12.1.3; EBS 12.1.1 and EBS 12.1.3; EBS 12.1.2 and EBS 12.1.3. For example, here's part of the report comparing Bill of Materials changes between 11.5.10.2 and 12.1.3:

    Read the article

  • Consolidate Data in Private Clouds, But Consider Security and Regulatory Issues

    - by Troy Kitch
    The January 13 webcast Security and Compliance for Private Cloud Consolidation will provide attendees with an overview of private cloud computing based on Oracle's Maximum Availability Architecture and how security and regulatory compliance affect implementations. Many organizations are taking advantage of Oracle's Maximum Availability Architecture to drive down the cost of IT by deploying private cloud computing environments that can support downtime and utilization spikes without idle redundancy. With two-thirds of sensitive and regulated data residing in organizations' databases, private cloud database consolidation means organizations must be more concerned than ever about protecting their information and addressing new regulatory challenges. Join us for this webcast to learn about the greater risks and increased threats to private cloud data and how Oracle Database Security Solutions can assist in securely consolidating data and meeting compliance requirements. Register Now.

    Read the article

  • SQL SERVER – SQL Server 2008 with Service Pack 2

    - by pinaldave
    SQL Server 2008 Service Pack 2 was released earlier. I suggest that all of you who are running SQL Server 2008 update to SQL Server 2008 Service Pack 2. Download SQL Server 2008 – Service Pack 2 from here. Please note, this is not SQL Server 2008 R2; it is SQL Server 2008 – Service Pack 2. A Test Lab Guide for SQL Server 2008 with Service Pack 2 has also been released by Microsoft. This document contains an introduction to SQL Server 2008 with Service Pack 2 and step-by-step instructions for extending the Base Configuration test lab to include a SQL Server 2008 with Service Pack 2 server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • What a Performance! MySQL 5.5 and InnoDB 1.1 running on Oracle Linux

    - by zeynep.koch(at)oracle.com
    The MySQL performance team in Oracle has recently completed a series of benchmarks comparing Read/Write and Read-Only performance of MySQL 5.5 with the InnoDB and MyISAM storage engines. Compared to MyISAM, InnoDB delivered 35x higher throughput on the Read/Write test and 5x higher throughput on the Read-Only test, with 90% scalability across 36 CPU cores. A full analysis of results and MySQL configuration parameters is documented in a new whitepaper. In addition to the benchmark, the whitepaper also includes: a discussion of the use-cases for each storage engine; best practices for users considering the migration of existing applications from MyISAM to InnoDB; and a summary of the performance and scalability enhancements introduced with MySQL 5.5 and InnoDB 1.1. The benchmark itself was based on Sysbench, running on AMD Opteron "Magny-Cours" processors, and Oracle Linux with the Unbreakable Enterprise Kernel. You can learn more about MySQL 5.5 and InnoDB 1.1 from here and download it from here to test whether you witness performance gains in your real-world applications. By Mat Keep

    Read the article

  • Documentation utility for OpenEdge ABL

    - by glowcoder
    I have a large system in OpenEdge ABL that could use some documentation-love. Currently a team member is working on a utility that can find methods and functions and make some "Javadoc-esque" HTML pages out of them. It's pretty rough around the edges. Okay, it's like sawblades around the edges. I'm trying to find something like Javadoc or Doxygen that is capable of parsing OpenEdge ABL to generate some kind of API documentation. I know the market for OpenEdge isn't the best, but there is a lot of stuff that's passed along by word of mouth. It's difficult to search for because the product used to be called "Progress", which throws off your search queries with non-relevant information. I'm also open to a system that lets you define the regexes to look for, so you can define your own syntax; it would then parse and give you an output based on that. Thanks!
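    Absent a ready-made tool, a regex-driven extractor along the lines described is straightforward to sketch. A hypothetical C# example (the patterns are illustrative and far from a full ABL grammar; file extensions and output format are assumptions):

    // Hypothetical sketch: scan ABL source files for PROCEDURE/FUNCTION
    // declarations with user-defined regexes and emit a crude HTML index.
    using System;
    using System.IO;
    using System.Text;
    using System.Text.RegularExpressions;

    class AblDocSketch
    {
        // User-definable patterns; extend these for your own syntax.
        static readonly Regex[] Patterns =
        {
            new Regex(@"^\s*PROCEDURE\s+(?<name>[\w-]+)",
                RegexOptions.IgnoreCase | RegexOptions.Multiline),
            new Regex(@"^\s*FUNCTION\s+(?<name>[\w-]+)\s+RETURNS",
                RegexOptions.IgnoreCase | RegexOptions.Multiline)
        };

        static void Main(string[] args)
        {
            var html = new StringBuilder("<html><body><ul>\n");
            // args[0] is the source root; add "*.cls" similarly if needed.
            foreach (string file in Directory.GetFiles(args[0], "*.p",
                SearchOption.AllDirectories))
            {
                string source = File.ReadAllText(file);
                foreach (Regex pattern in Patterns)
                    foreach (Match m in pattern.Matches(source))
                        html.AppendFormat("<li>{0}: {1}</li>\n",
                            Path.GetFileName(file), m.Groups["name"].Value);
            }
            html.Append("</ul></body></html>");
            File.WriteAllText("apidoc.html", html.ToString());
        }
    }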

    Read the article

  • Creating STA COM compatible ASP.NET Applications

    - by Rick Strahl
    When building ASP.NET applications that interface with old school COM objects like those created with VB6 or Visual FoxPro (MTDLL), it's extremely important that the threads that are serving requests use Single Threaded Apartment Threading. STA is a COM built-in technology that allows essentially single threaded components to operate reliably in a multi-threaded environment. STAs guarantee that COM objects instantiated on a specific thread stay on that specific thread and any access to a COM object from another thread automatically marshals that thread to the STA thread. The end effect is that you can have multiple threads, but a COM object instance lives on a fixed never changing thread. ASP.NET by default uses MTA (multi-threaded apartment) threads which are truly free spinning threads that pay no heed to COM object marshaling. This is vastly more efficient than STA threading which has a bit of overhead in determining whether it's OK to run code on a given thread or whether some sort of thread/COM marshaling needs to occur. MTA COM components can be very efficient, but STA COM components in a multi-threaded environment always tend to have a fair amount of overhead. It's amazing how much COM Interop I still see today so while it seems really old school to be talking about this topic, it's actually quite apropos for me as I have many customers using legacy COM systems that need to interface with other .NET applications. In this post I'm consolidating some of the hacks I've used to integrate with various ASP.NET technologies when using STA COM Components. STA in ASP.NET Support for STA threading in the ASP.NET framework is fairly limited. Specifically only the original ASP.NET WebForms technology supports STA threading directly via its STA Page Handler implementation or what you might know as ASPCOMPAT mode. For WebForms running STA components is as easy as specifying the ASPCOMPAT attribute in the @Page tag: <%@ Page Language="C#" AspCompat="true" %> which runs the page in STA mode. Removing it runs in MTA mode. Simple. Unfortunately all other ASP.NET technologies built on top of the core ASP.NET engine do not support STA natively. So if you want to use STA COM components in MVC or with classic ASMX Web Services, there's no automatic way like the ASPCOMPAT keyword available. So what happens when you run an STA COM component in an MTA application? In low volume environments - nothing much will happen. The COM objects will appear to work just fine as there are no simultaneous thread interactions and the COM component will happily run on a single thread or multiple single threads one at a time. So for testing, running components in MTA environments may appear to work just fine. However as load increases and threads get re-used by ASP.NET, COM objects will end up getting created on multiple different threads. This can result in crashes or hangs, or data corruption in the STA components which store their state in thread local storage on the STA thread. If threads overlap this global store can easily get corrupted which in turn causes problems. STA ensures that any COM object instance loaded always stays on the same thread it was instantiated on. What about COM+? COM+ is supposed to address the problem of STA in MTA applications by providing an abstraction with its own thread pool manager for COM objects. It steps into the COM instantiation pipeline and hands out COM instances from its own internally maintained STA Thread pool. 
This guarantees that the COM instantiation threads are STA threads if using STA components. COM+ works, but in my experience the technology is very, very slow for STA components. It adds a ton of overhead and reduces COM performance noticeably in load tests in IIS. COM+ can make sense in some situations but for Web apps with STA components it falls short. In addition there's also the need to ensure that COM+ is set up and configured on the target machine and the fact that components have to be registered in COM+. COM+ also keeps components up at all times, so if a component needs to be replaced the COM+ package needs to be unloaded (same is true for IIS hosted components but it's more common to manage that). COM+ is an option for well established components, but native STA support tends to provide better performance and more consistent usability, IMHO. STA for non supporting ASP.NET Technologies As mentioned above only WebForms supports STA natively. However, by utilizing the WebForms ASP.NET Page handler internally it's actually possible to trick various other ASP.NET technologies and let them work with STA components. This is ugly but I've used each of these in various applications and I've had minimal problems making them work with FoxPro STA COM components which is about as difficult as it gets for COM Interop in .NET. In this post I summarize several STA workarounds that enable you to use STA threading with these ASP.NET technologies: ASMX Web Services, ASP.NET MVC, WCF Web Services, and ASP.NET Web API. ASMX Web Services I start with classic ASP.NET ASMX Web Services because it's the easiest mechanism that allows for STA modification. It also clearly demonstrates how the WebForms STA Page Handler is the key technology to enable the various other solutions to create STA components. Essentially the way this works is to override the WebForms Page class and hijack its init functionality for processing requests. Here's what this looks like for Web Services: namespace FoxProAspNet { public class WebServiceStaHandler : System.Web.UI.Page, IHttpAsyncHandler { protected override void OnInit(EventArgs e) { IHttpHandler handler = new WebServiceHandlerFactory().GetHandler( this.Context, this.Context.Request.HttpMethod, this.Context.Request.FilePath, this.Context.Request.PhysicalPath); handler.ProcessRequest(this.Context); this.Context.ApplicationInstance.CompleteRequest(); } public IAsyncResult BeginProcessRequest( HttpContext context, AsyncCallback cb, object extraData) { return this.AspCompatBeginProcessRequest(context, cb, extraData); } public void EndProcessRequest(IAsyncResult result) { this.AspCompatEndProcessRequest(result); } } public class AspCompatWebServiceStaHandlerWithSessionState : WebServiceStaHandler, IRequiresSessionState { } } This class overrides the ASP.NET WebForms Page class, which has little known AspCompatBeginProcessRequest() and AspCompatEndProcessRequest() methods that are responsible for providing the WebForms ASPCOMPAT functionality. These methods handle routing requests to STA threads. Note there are two classes - one that includes session state and one that does not. If you plan on using ASP.NET Session state use the latter class, otherwise stick to the former. This maps to the EnableSessionState page setting in WebForms. This class simply hooks into this functionality by overriding the BeginProcessRequest and EndProcessRequest methods and always forcing it into the AspCompat methods. 
The way this works is that BeginProcessRequest() fires first to set up the threads and starts initializing the handler. As part of that process the OnInit() method is fired, which is now already running on an STA thread. The code then creates an instance of the actual WebService handler factory and calls its ProcessRequest method to start executing, which generates the Web Service result. Immediately after ProcessRequest the request is stopped with Application.CompleteRequest(), which ensures that the rest of the Page handler logic doesn't fire. This means that even though the fairly heavy Page class is overridden here, it doesn't end up executing any of its internal processing which makes this code fairly efficient. In a nutshell, we're hijacking the Page HttpHandler and forcing it to process the WebService process handler in the context of the AspCompat handler behavior. Hooking up the Handler Because the above is an HttpHandler implementation you need to hook up the custom handler and replace the standard ASMX handler. To do this you need to modify the web.config file (here for IIS 7 and IIS Express): <configuration> <system.webServer> <handlers> <remove name="WebServiceHandlerFactory-Integrated-4.0" /> <add name="Asmx STA Web Service Handler" path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" precondition="integrated"/> </handlers> </system.webServer> </configuration> (Note: The name for the WebServiceHandlerFactory-Integrated-4.0 might be slightly different depending on your server version. Check the IIS Handler configuration in the IIS Management Console for the exact name, or simply remove the handler from the list there which will propagate to your web.config). For IIS 5 & 6 (Windows XP/2003) or the Visual Studio Web Server use: <configuration> <system.web> <httpHandlers> <remove path="*.asmx" verb="*" /> <add path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" /> </httpHandlers> </system.web></configuration> To test, create a new ASMX Web Service and create a method like this: [WebService(Namespace = "http://foxaspnet.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] public class FoxWebService : System.Web.Services.WebService { [WebMethod] public string HelloWorld() { return "Hello World. Threading mode is: " + System.Threading.Thread.CurrentThread.GetApartmentState(); } } Run this before you put in the web.config configuration changes and you should get: Hello World. Threading mode is: MTA Then put the handler mapping into Web.config and you should see: Hello World. Threading mode is: STA And you're on your way to using STA COM components. It's a hack but it works well! I've used this with several high volume Web Service installations with various customers and it's been fast and reliable. ASP.NET MVC ASP.NET MVC has quickly become the most popular ASP.NET technology, replacing WebForms for creating HTML output. MVC is more complex to get started with, but once you understand the basic structure of how requests flow through the MVC pipeline it's easy to use and amazingly flexible in manipulating HTML requests. In addition, MVC has great support for non-HTML output sources like JSON and XML, making it an excellent choice for AJAX requests without any additional tools. Unlike WebForms, ASP.NET MVC doesn't support STA threads natively and so some trickery is needed to make it work with STA threads as well. MVC gets its handler implementation through custom route handlers using ASP.NET's built in routing semantics. 
To work in an STA handler requires working in the Page Handler as part of the Route Handler implementation. As with the Web Service handler the first step is to create a custom HttpHandler that can instantiate an MVC request pipeline properly:public class MvcStaThreadHttpAsyncHandler : Page, IHttpAsyncHandler, IRequiresSessionState { private RequestContext _requestContext; public MvcStaThreadHttpAsyncHandler(RequestContext requestContext) { if (requestContext == null) throw new ArgumentNullException("requestContext"); _requestContext = requestContext; } public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData) { return this.AspCompatBeginProcessRequest(context, cb, extraData); } protected override void OnInit(EventArgs e) { var controllerName = _requestContext.RouteData.GetRequiredString("controller"); var controllerFactory = ControllerBuilder.Current.GetControllerFactory(); var controller = controllerFactory.CreateController(_requestContext, controllerName); if (controller == null) throw new InvalidOperationException("Could not find controller: " + controllerName); try { controller.Execute(_requestContext); } finally { controllerFactory.ReleaseController(controller); } this.Context.ApplicationInstance.CompleteRequest(); } public void EndProcessRequest(IAsyncResult result) { this.AspCompatEndProcessRequest(result); } public override void ProcessRequest(HttpContext httpContext) { throw new NotSupportedException("STAThreadRouteHandler does not support ProcessRequest called (only BeginProcessRequest)"); } } This handler code figures out which controller to load and then executes the controller. MVC internally provides the information needed to route to the appropriate method and pass the right parameters. Like the Web Service handler the logic occurs in the OnInit() and performs all the processing in that part of the request. Next, we need a RouteHandler that can actually pick up this handler. Unlike the Web Service handler where we simply registered the handler, MVC requires a RouteHandler to pick up the handler. RouteHandlers look at the URL's path and based on that decide on what handler to invoke. The route handler is pretty simple - all it does is load our custom handler: public class MvcStaThreadRouteHandler : IRouteHandler { public IHttpHandler GetHttpHandler(RequestContext requestContext) { if (requestContext == null) throw new ArgumentNullException("requestContext"); return new MvcStaThreadHttpAsyncHandler(requestContext); } } At this point you can instantiate this route handler and force STA requests to MVC by specifying a route. 
The following sets up the ASP.NET Default Route: Route mvcRoute = new Route("{controller}/{action}/{id}", new RouteValueDictionary( new { controller = "Home", action = "Index", id = UrlParameter.Optional }), new MvcStaThreadRouteHandler()); RouteTable.Routes.Add(mvcRoute); To make this code a little easier to work with and mimic the behavior of the routes.MapRoute() extension method that MVC provides, here is an extension method for MapMvcStaRoute(): public static class RouteCollectionExtensions { public static void MapMvcStaRoute(this RouteCollection routeTable, string name, string url, object defaults = null) { Route mvcRoute = new Route(url, new RouteValueDictionary(defaults), new MvcStaThreadRouteHandler()); RouteTable.Routes.Add(mvcRoute); } } With this, the syntax to add a route becomes a little easier and matches the MapRoute() method: RouteTable.Routes.MapMvcStaRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); The nice thing about this route handler, STA Handler and extension method is that it's fully self contained. You can put all three into a single class file and stick it into your Web app, and then simply call MapMvcStaRoute() and it just works. Easy! To see whether this works create an MVC controller like this: public class ThreadTestController : Controller { public string ThreadingMode() { return Thread.CurrentThread.GetApartmentState().ToString(); } } Try this test first with only the MapRoute() hookup in the route configuration, in which case you should get MTA as the value. Then change the MapRoute() call to MapMvcStaRoute(), leaving all the parameters the same, and re-run the request. You now should see STA as the result. You're on your way to using STA COM components reliably in ASP.NET MVC. WCF Web Services running through IIS WCF Web Services provide a more robust and wider range of services for Web Services. You can use WCF over HTTP, TCP, and Pipes, and WCF services support WS* secure services. There are many features in WCF that go way beyond what ASMX can do. But it's also a bit more complex than ASMX. As a basic rule, if you need to serve straight SOAP Services over HTTP I'd recommend sticking with the simpler ASMX services, especially if COM is involved. If you need WS* support or want to serve data over non-HTTP protocols then WCF makes more sense. WCF is not my forte but I found a solution from Scott Seely on his blog that describes the process and that seems to work well. I'm copying his code below so this STA information is all in one place, and will quickly explain it. Scott's code basically works by creating a custom OperationBehavior which can be specified via an [STAOperation] attribute on every method. Using his attribute you end up with a class (or interface, if you separate the contract and class) that looks like this: [ServiceContract] public class WcfService { [OperationContract] public string HelloWorldMta() { return Thread.CurrentThread.GetApartmentState().ToString(); } // Make sure you use this custom STAOperationBehavior // attribute to force STA operation of service methods [STAOperationBehavior] [OperationContract] public string HelloWorldSta() { return Thread.CurrentThread.GetApartmentState().ToString(); } } Pretty straightforward. The latter method returns STA while the former returns MTA. To make STA work, every method needs to be marked up. The implementation consists of the attribute and an OperationInvoker implementation. 
Here are the two classes required to make this work from Scott's post: public class STAOperationBehaviorAttribute : Attribute, IOperationBehavior { public void AddBindingParameters(OperationDescription operationDescription, System.ServiceModel.Channels.BindingParameterCollection bindingParameters) { } public void ApplyClientBehavior(OperationDescription operationDescription, System.ServiceModel.Dispatcher.ClientOperation clientOperation) { // If this is applied on the client, well, it just doesn’t make sense. // Don’t throw in case this attribute was applied on the contract // instead of the implementation. } public void ApplyDispatchBehavior(OperationDescription operationDescription, System.ServiceModel.Dispatcher.DispatchOperation dispatchOperation) { // Change the IOperationInvoker for this operation. dispatchOperation.Invoker = new STAOperationInvoker(dispatchOperation.Invoker); } public void Validate(OperationDescription operationDescription) { if (operationDescription.SyncMethod == null) { throw new InvalidOperationException("The STAOperationBehaviorAttribute " + "only works for synchronous method invocations."); } } } public class STAOperationInvoker : IOperationInvoker { IOperationInvoker _innerInvoker; public STAOperationInvoker(IOperationInvoker invoker) { _innerInvoker = invoker; } public object[] AllocateInputs() { return _innerInvoker.AllocateInputs(); } public object Invoke(object instance, object[] inputs, out object[] outputs) { // Create a new, STA thread object[] staOutputs = null; object retval = null; Thread thread = new Thread( delegate() { retval = _innerInvoker.Invoke(instance, inputs, out staOutputs); }); thread.SetApartmentState(ApartmentState.STA); thread.Start(); thread.Join(); outputs = staOutputs; return retval; } public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state) { // We don’t handle async… throw new NotImplementedException(); } public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result) { // We don’t handle async… throw new NotImplementedException(); } public bool IsSynchronous { get { return true; } } } The key in this setup is the Invoker and the Invoke method, which creates a new thread and then fires the request on this new thread. Because this approach creates a new thread for every request it's not super efficient. There's a bunch of overhead involved in creating the thread and throwing it away after each request, but it'll work for low volume requests and ensure each thread runs in STA mode. If better performance is required it would be useful to create a custom thread manager that can pool a number of STA threads and hand off threads as needed rather than creating new threads on every request. If your Web Service needs are simple and you need only to serve standard SOAP 1.x requests, I would recommend sticking with ASMX services. It's easier to set up and work with, and for STA component use it'll be significantly better performing since ASP.NET manages the STA thread pool for you rather than firing new threads for each request. One nice thing about Scott's code though is that it works in any WCF environment including self hosting. It has no dependency on ASP.NET or WebForms for that matter. STA - If you must STA components are a pain in the ass and thankfully there isn't too much stuff out there anymore that requires them. But when you need it and you need to access STA functionality from .NET, at least there are a few options available to make it happen. 
Each of these solutions is a bit hacky, but they work - I've used all of them in production with good results with FoxPro components. I hope compiling all of these in one place here makes STA consumption a little bit easier. I feel your pain :-) Resources: Download STA Handler Code Examples; Scott Seely's original STA WCF OperationBehavior Article. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in FoxPro, ASP.NET, .NET, COM

    Read the article

  • Podcast: Advanced MVVM with Josh Smith

    - by craigshoemaker
    Author, Microsoft MVP and accomplished pianist Josh Smith, Sr. UX Developer at IdentityMine, joins the show to discuss some of Model View ViewModel’s more advanced scenarios. Full Speed: download Fast Version: download Josh shares his experience using MVVM and gives some real-world advice on: using modal dialogs; the evils and virtues of code-behind in views; use of attached behaviors; undo/redo strategies; working with animations; building a task-based architecture for managing communication between View and ViewModel; and frameworks in the MVVM space. The Book: Get first-hand experience implementing the solutions to the challenges discussed in the show by reading Josh’s new book ‘Advanced MVVM’. Resources: The following resources are mentioned in the show: Laurent Bugnion’s MIX talk ‘Understanding the Model-View-ViewModel Pattern’, Josh Smith’s MVVM Foundation, Laurent Bugnion’s MVVM Light framework, and Rob Eisenberg’s Caliburn.

    Read the article

  • ARM TechCon 2013: Oracle, ARM expand collaboration on servers, Internet of Things

    - by terrencebarr
    Over the years, Oracle has been making big investments in Java for ARM-based devices. This week, Oracle and ARM announced further expanding their collaboration on a number of fronts, from additional hardware platforms, porting layers, and optimized communication protocols, to 64-bit ARMv8 support and IoT architectures. Henrik Stahl, VP of Product Management in the Java Platform Group at Oracle, just posted an excellent summary: “ARM TechCon 2013: Oracle, ARM expand collaboration on servers, Internet of Things”. Highly recommended reading. Cheers, – Terrence. Filed under: Embedded Tagged: 6LoWPAN, ARM, CoAP, Freescale, Gemalto, iot, Java Embedded, Java ME Embedded, Java SE Embedded, Lego Mindstorms, OpenJDK, Qualcomm, Raspberry Pi, TechCon

    Read the article

  • Do I lose anything by coding in c# and using free online vb.net code convertors?

    - by Gullu
    The company I work for uses vb.net, since there are many programmers who moved up from vb6 to vb.net; basically, there are more vb.net resources in the company for support/maintenance vs c#. I am a c# coder and was wondering if I could just continue coding in c# and use the many free online c# to vb.net code convertors. That way, I will be more productive and also more marketable, since there are more c# jobs compared to vb.net jobs. I did vb6 many years ago and I am comfortable debugging vb.net code; it's just that the primary coding language I am more comfortable in is c#. Will I lose anything if I use this approach (code conversion)? Based on what I read online, the future of vb.net is really "Dim". Please advise. Thank you.

    Read the article

  • Calculating estimated data loss with Always On

    - by blakmk
    Ever wondered how to calculate estimated data loss (time) for Always On? The Always On dashboard shows the metric quite nicely, but there does seem to be a lack of documentation about where the metrics come from. Here's a script that calculates the data loss (lag), so you can set up alerts based on your DR SLAs:

    WITH DR_CTE (replica_server_name, database_name, last_commit_time) AS
    (
        SELECT ar.replica_server_name, database_name, rs.last_commit_time
        FROM master.sys.dm_hadr_database_replica_states rs
        INNER JOIN master.sys.availability_replicas ar ON rs.replica_id = ar.replica_id
        INNER JOIN sys.dm_hadr_database_replica_cluster_states dcs
            ON dcs.group_database_id = rs.group_database_id AND rs.replica_id = dcs.replica_id
        WHERE replica_server_name != @@servername
    )
    SELECT ar.replica_server_name, dcs.database_name, rs.last_commit_time,
           DR_CTE.last_commit_time AS 'DR_commit_time',
           DATEDIFF(ss, DR_CTE.last_commit_time, rs.last_commit_time) AS 'lag_in_seconds'
    FROM master.sys.dm_hadr_database_replica_states rs
    INNER JOIN master.sys.availability_replicas ar ON rs.replica_id = ar.replica_id
    INNER JOIN sys.dm_hadr_database_replica_cluster_states dcs
        ON dcs.group_database_id = rs.group_database_id AND rs.replica_id = dcs.replica_id
    INNER JOIN DR_CTE ON DR_CTE.database_name = dcs.database_name
    WHERE ar.replica_server_name = @@servername
    ORDER BY lag_in_seconds DESC
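    To turn the script into an alert, one option is a small monitor that runs the query and flags any database whose lag exceeds the SLA. A minimal C# sketch (the 300-second SLA, the connection string, and console output as the "alert" are placeholder assumptions; the query is the one above, trimmed to two columns):

    // Hypothetical monitor: poll the lag query and flag SLA breaches.
    using System;
    using System.Data.SqlClient;

    class DrLagMonitor
    {
        const int SlaSeconds = 300; // placeholder DR SLA

        const string LagQuery = @"
    WITH DR_CTE (replica_server_name, database_name, last_commit_time) AS
    (
        SELECT ar.replica_server_name, database_name, rs.last_commit_time
        FROM master.sys.dm_hadr_database_replica_states rs
        INNER JOIN master.sys.availability_replicas ar ON rs.replica_id = ar.replica_id
        INNER JOIN sys.dm_hadr_database_replica_cluster_states dcs
            ON dcs.group_database_id = rs.group_database_id AND rs.replica_id = dcs.replica_id
        WHERE replica_server_name != @@servername
    )
    SELECT dcs.database_name,
           DATEDIFF(ss, DR_CTE.last_commit_time, rs.last_commit_time) AS lag_in_seconds
    FROM master.sys.dm_hadr_database_replica_states rs
    INNER JOIN master.sys.availability_replicas ar ON rs.replica_id = ar.replica_id
    INNER JOIN sys.dm_hadr_database_replica_cluster_states dcs
        ON dcs.group_database_id = rs.group_database_id AND rs.replica_id = dcs.replica_id
    INNER JOIN DR_CTE ON DR_CTE.database_name = dcs.database_name
    WHERE ar.replica_server_name = @@servername";

        static void Main()
        {
            // Placeholder connection string; point it at the primary replica.
            using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
            using (var cmd = new SqlCommand(LagQuery, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        if (reader.IsDBNull(1)) continue;
                        int lag = reader.GetInt32(1);
                        if (lag > SlaSeconds)
                            Console.WriteLine("ALERT: {0} is {1}s behind (SLA {2}s)",
                                reader.GetString(0), lag, SlaSeconds);
                    }
                }
            }
        }
    }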

    Read the article

  • How to Modify Data Security in Fusion Applications

    - by Elie Wazen
    The reference implementation in Fusion Applications is designed with built-in data security on business objects that implement the most common business practices. For example, the “Sales Representative” job has the following two data security rules implemented on an “Opportunity” to restrict the list of Opportunities that are visible to a Sales Representative: they can view all the Opportunities where they are a member of the Opportunity Team, and they can view all the Opportunities where they are a resource of a territory in the Opportunity territory team. While the above conditions may represent the most common access requirements for an Opportunity, some customers may have additional access constraints. This blog post explains: how to discover the data security implemented in Fusion Applications; how to customize data security; and an illustrative example. a.) How to discover seeded data security definitions. The Security Reference Manuals explain the Function and Data Security implemented on each job role. Security Reference Manuals are available on Oracle Enterprise Repository for Oracle Fusion Applications. The following is a snapshot of the security documented for the “Sales Representative” job. The two data security policies define the list of Opportunities a Sales Representative can view. Here is a sample of the data security policies on an Opportunity, both on the business object Opportunity: Policy 1 - Description: a Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team; Policy Store Implementation - Role: Opportunity Territory Resource Duty, Privilege: View Opportunity (Data), Resource: Opportunity. Policy 2 - Description: a Sales Representative can view an opportunity where they are an opportunity sales team member with view, edit, or full access; Policy Store Implementation - Role: Opportunity Sales Representative Duty, Privilege: View Opportunity (Data), Resource: Opportunity. Description of columns: the Policy Description explains the data filters that are implemented as a SQL where clause in a Data Security Grant; the Policy Store Implementation provides the implementation details of the Data Security Grant for this policy. In this example the Opportunities listed for a “Sales Representative” job role are derived from a combination of two grants defined on two separate duty roles that are inherited by the Sales Representative job role. b.) How to customize data security. Requirement 1: Opportunities should be viewed only by members of the opportunity team and not by all the members of all the territories on the opportunity. Solution: Remove the role “Opportunity Territory Resource Duty” from the hierarchy of the “Sales Representative” job role. Best Practice: Do not modify the seeded role hierarchy. Create a custom “Sales Representative” job role and build the role hierarchy with the seeded duty roles. Requirement 2: Opportunities must be more restrictive based on a custom attribute that identifies whether an Opportunity is confidential or not. Confidential Opportunities must be visible only to the owner of the Opportunity. Solution: Modify the second data security policy in the above example as follows: a Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team and the opportunity is not confidential. Implementation of this policy is more invasive. The seeded SQL where clause of the data security grant on “Opportunity Territory Resource Duty” has to be modified and the condition that checks the confidential flag must be added. 
Best Practice: Do not modify the seeded grant. Create a new grant with the modified condition and end date the seeded grant. c.) Illustrative Example (Implementing Requirement 2). A data security policy contains the following components: Role, Object, Instance Set, and Action. Of these four components, the Role and Instance Set are the only components that are customizable; Objects and their Actions are seed data and cannot be modified. To customize the seeded policy, “A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team”: 1. Find the seeded policy. 2. Identify the Role, Object, Instance Set and Action components of the policy. 3. Create a new custom instance set based on the seeded instance set. 4. End date the seeded policies. 5. Create a new data security policy with the custom instance set. c-1: Find the seeded policy. Step 1: Find the Role, open it, and find its Policies. Step 2: Click on the Data Security tab, sort by “Resource Name”, and find all the policies with the Condition “where they are a territory resource in the opportunity territory team”. In this example, we can see there are 5 policies for “Opportunity Territory Resource Duty” on the Opportunity object. Step 3: Now that we know the policy details, we need to create a new instance set with the custom condition. All instance sets are linked to the object. Find the object using the global search option, open it, and click on the “Condition” tab. Sort by Display Name and find the instance set. Edit the instance set and copy the “SQL Predicate” to a notepad. Create a new instance set with the modified SQL Predicate from above by clicking on the icon as shown below. Step 4: End date the seeded data security policies on the duty role and create new policies with your custom instance set. 1. Repeat the navigation in Step 2. 2. Edit each of the 5 policies and end date them. 3. Create new custom policies with the same information as the seeded policies in the “General Information”, “Roles” and “Action” tabs. 4. In the “Rules” tab, pick the new instance set that was created in Step 3.

    Read the article

  • DirectX11 Swap Chain RGBA vs BGRA Format

    - by Nathan
    I was wondering if anyone could elaborate any further on something that's been bugging me. In DirectX9 the main supported back buffer formats were D3DFMT_X8R8G8B8 and D3DFMT_A8R8G8B8 (both being BGRA in layout). http://msdn.microsoft.com/en-us/library/windows/desktop/bb174314(v=vs.85).aspx With the initial version of DirectX10 there was no support for BGRA, and all the textbooks and online tutorials recommend DXGI_FORMAT_R8G8B8A8_UNORM (RGBA in layout). Now with DirectX11 BGRA is supported again, and it seems as if Microsoft recommends using a BGRA format as the back buffer format. http://msdn.microsoft.com/en-us/library/windows/apps/hh465096.aspx Are there any suggestions, or are there performance implications of using one or the other? (I assume not, as obviously by specifying the format of the underlying resource the runtime will handle what bits you're passing through and then infer how to utilise them based on the format.)
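    For reference, here is a hypothetical sketch (using the third-party SharpDX bindings, not code from the question) of creating a DirectX 11 device and swap chain with the BGRA back buffer format; window size, refresh rate, and the BgraSupport flag are assumptions:

    // Hypothetical SharpDX sketch: a swap chain with a BGRA back buffer.
    using SharpDX.Direct3D;
    using SharpDX.Direct3D11;
    using SharpDX.DXGI;
    using System.Windows.Forms;

    class SwapChainSketch
    {
        static void Main()
        {
            var form = new Form();
            var desc = new SwapChainDescription
            {
                BufferCount = 1,
                // DXGI_FORMAT_B8G8R8A8_UNORM: the BGRA layout DirectX 11 re-introduced
                ModeDescription = new ModeDescription(800, 600,
                    new Rational(60, 1), Format.B8G8R8A8_UNorm),
                IsWindowed = true,
                OutputHandle = form.Handle,
                SampleDescription = new SampleDescription(1, 0),
                SwapEffect = SwapEffect.Discard,
                Usage = Usage.RenderTargetOutput
            };

            SharpDX.Direct3D11.Device device;
            SwapChain swapChain;
            // BgraSupport is only strictly required for Direct2D interop;
            // the swap chain format alone does not demand it.
            SharpDX.Direct3D11.Device.CreateWithSwapChain(
                DriverType.Hardware, DeviceCreationFlags.BgraSupport,
                desc, out device, out swapChain);
        }
    }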

    Read the article


  • Form, function and complexity in rule processing

    - by Charles Young
    Tim Bass posted on ‘Orwellian Event Processing’. I was involved in a heated exchange in the comments, and he has more recently published a post entitled ‘Disadvantages of Rule-Based Systems (Part 1)’. Whatever the rights and wrongs of our exchange, it clearly failed to generate any agreement or understanding of our different positions. I don't particularly want to promote further argument of that kind, but I do want to take the opportunity of offering a different perspective on rule-processing and an explanation of my comments. For me, the ‘red rag’ lay in Tim’s claim that “...rules alone are highly inefficient for most classes of (not simple) problems” and a later paragraph that appears to equate the simplicity of form (‘IF-THEN-ELSE’) with simplicity of function. It is not the first time Tim has expressed these views, and not the first time I have responded to his assertions. Indeed, Tim has a long history of commenting on the subject of complex event processing (CEP) and, less often, rule processing in ‘robust’ terms, often asserting that very many other people’s opinions on this subject are mistaken. In turn, I am of the opinion that, certainly in terms of rule processing, which is an area in which I have a specific interest and knowledge, he is often mistaken.

    There is no simple answer to the fundamental question ‘what is a rule?’ We use the word in a very fluid fashion in English. Likewise, the term ‘rule processing’, as used widely in IT, is equally difficult to define simplistically. The best way to envisage the term is as a ‘centre of gravity’ within a wider domain. That domain contains many other ‘centres of gravity’, including CEP, statistical analytics, neural networks, natural language processing and so much more. Whole communities tend to gravitate towards and build themselves around some of these centres. The term ‘rule processing’ is associated with many different technology types, various software products, different architectural patterns, the functional capability of many applications and services, etc. There is considerable variation amongst these different technologies, techniques and products. Very broadly, a common theme is their ability to manage certain types of processing and problem solving through declarative, or semi-declarative, statements of propositional logic bound to action-based consequences. It is generally important to be able to decouple these statements from other parts of an overall system or architecture so that they can be managed and deployed independently.

    As a centre of gravity, ‘rule processing’ is no island. It exists in the context of a domain of discourse that is, itself, highly interconnected and continuous. Rule processing does not, for example, exist in splendid isolation from natural language processing. On the contrary, an ongoing theme of rule processing is to find better ways to express rules in natural language and map these to executable forms. Rule processing does not exist in splendid isolation from CEP. On the contrary, an event processing agent can reasonably be considered a rule engine (a theme in ‘The Power of Events’ by David Luckham). Rule processing does not live in splendid isolation from statistical approaches such as Bayesian analytics. On the contrary, rule processing and statistical analytics are highly synergistic. Rule processing does not even live in splendid isolation from neural networks. For example, significant research has centred on finding ways to translate trained nets into explicit rule sets in order to support forms of validation and facilitate insight into the knowledge stored in those nets.

    What about simplicity of form? Many rule processing technologies do indeed use a very simple form (‘If...Then’, ‘When...Do’, etc.). However, it is a fundamental mistake to equate simplicity of form with simplicity of function. It is absolutely mistaken to suggest that simplicity of form is a barrier to the efficient handling of complexity. There are countless real-world examples which serve to disprove that notion. Indeed, simplicity of form is often the key to handling complexity.

    Does rule processing offer a ‘one size fits all’ solution? No, of course not. No serious commentator suggests it does. Does the design and management of large knowledge bases, expressed as rules, become difficult? Yes, it can do, but that is true of any large knowledge base, regardless of the form in which knowledge is expressed. The measure of complexity is not a function of rule set size or rule form. It tends to be correlated more strongly with the size of the ‘problem space’ (‘search space’), which is something quite different. Analysis of the problem space and the algorithms we use to search through that space are, of course, the very things we use to derive objective measures of the complexity of a given problem. This is basic computer science and common practice.

    Sailing a Dreadnought through the sea of information technology and lobbing shells at some of the islands we encounter along the way does no one any good. Building bridges and causeways between islands so that the inhabitants can collaborate in open discourse offers hope of real progress.
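    To make the form-versus-function point concrete, here is a minimal sketch of a forward-chaining rule engine, written in TypeScript purely for illustration (it is not any particular product's API, and all names are invented): each rule is a trivially simple IF-THEN pair, yet firing the rule set to a fixed point derives conclusions that no individual rule states.

    ```typescript
    // A minimal forward-chaining sketch: simple IF-THEN rules that,
    // chained together, infer facts no single rule mentions.
    type Fact = string;
    type Rule = {
      when: (facts: Set<Fact>) => boolean; // IF part
      then: Fact;                          // THEN part: fact to assert
    };

    function run(rules: Rule[], facts: Set<Fact>): Set<Fact> {
      let changed = true;
      while (changed) {                    // fire until a fixed point is reached
        changed = false;
        for (const rule of rules) {
          if (rule.when(facts) && !facts.has(rule.then)) {
            facts.add(rule.then);          // assert the consequent
            changed = true;
          }
        }
      }
      return facts;
    }

    // Three trivially simple rules...
    const rules: Rule[] = [
      { when: f => f.has("bird"), then: "has-wings" },
      { when: f => f.has("has-wings") && !f.has("penguin"), then: "can-fly" },
      { when: f => f.has("can-fly"), then: "can-migrate" },
    ];

    // ...that together derive "can-migrate" from the single fact "bird".
    console.log(run(rules, new Set(["bird"])));
    ```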

    Read the article

  • Advice for creating fog of war on tilemap game XNA

    - by Sigh-AniDe
    I have created a 2D tile-based game. The game has a single path that the sprite can move along. I want to make the entire screen black except for where the sprite is. The sprite has commands such as left, where he keeps looking left; right, where he keeps looking right; and move forward, where he moves in the direction he is looking. I want to make it so that when he looks left, the fog on the tile to the left of the sprite is removed, and so on for all the sides. What is the best way to do this? Are there any tutorials you guys know of that I can take a look at? Thanks
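    One common approach is to keep a parallel 2D boolean fog grid and clear the entries for the sprite's own tile plus the tile it is facing; the draw pass then renders a black overlay wherever the grid still reads true. The sketch below is in TypeScript rather than XNA/C# and uses hypothetical names, since the technique itself is language-neutral:

    ```typescript
    // Hypothetical sketch: a fog grid alongside the tile map.
    // In XNA, the draw pass would render a black tile wherever
    // fog[row][col] is still true.
    type Facing = "left" | "right" | "up" | "down";

    const DIRS: Record<Facing, [number, number]> = {
      left: [-1, 0], right: [1, 0], up: [0, -1], down: [0, 1],
    };

    function makeFog(cols: number, rows: number): boolean[][] {
      // true = still hidden; everything starts fogged
      return Array.from({ length: rows }, () => Array(cols).fill(true));
    }

    function reveal(fog: boolean[][], x: number, y: number, facing: Facing): void {
      fog[y][x] = false;                  // the sprite's own tile
      const [dx, dy] = DIRS[facing];
      const nx = x + dx, ny = y + dy;     // the tile being looked at
      if (ny >= 0 && ny < fog.length && nx >= 0 && nx < fog[0].length) {
        fog[ny][nx] = false;
      }
    }

    // Usage: call reveal() whenever the sprite turns or moves, then
    // draw a black overlay wherever fog[row][col] is still true.
    const fog = makeFog(10, 10);
    reveal(fog, 4, 4, "left"); // clears (4,4) and (3,4)
    ```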

    Read the article

  • Download Free PowerShell Quick Reference Guides from Microsoft

    - by Akemi Iwaya
    Are you just getting started with learning PowerShell or tired of looking up less frequently used commands? Then this terrific set of PowerShell quick reference guides from Microsoft is just what you need! The first guide focuses on commonly used Windows PowerShell commands and is available in a single .doc format document. The other guides are available as a set (six files) in .pdf format and focus on: tips, shortcuts, and common operations in Windows PowerShell 3.0, Windows PowerShell Workflow, Windows PowerShell ISE, Windows PowerShell Web Access, Server Manager for Windows Server 2012, WinRM, WMI, and WS-Man. Keep in mind that for the PowerShell 3.0 set you can download all of the guides or just the ones you need. Windows PowerShell Quick Reference [Microsoft] Windows PowerShell 3.0 and Server Manager Quick Reference Guides [Microsoft] [via The Windows Club here and here]

    Read the article

  • The 2010 JavaOne Java EE 6 Panel: Where We Are and Where We're Going

    - by janice.heiss(at)oracle.com
    An informative article, based on a 2010 JavaOne (San Francisco, California) panel session, surveys a variety of expert perspectives on Java EE 6. The panel, moderated by Oracle's Alexis Moussine-Pouchkine, consisted of:
    * Adam Bien, Consultant, Author/Speaker, adam-bien.com
    * Emmanuel Bernard, Principal Software Engineer, JBoss by Red Hat
    * David Blevins, Senior Software Engineer, co-founder of the OpenEJB project and a founder of Apache Geronimo
    * Roberto Chinnici, Consulting Member of Technical Staff, Oracle
    * Jim Knutson, Java EE Architect, IBM
    * Reza Rahman, Lead Engineer, Caucho Technology, Inc.
    * Krasimir Semerdzhiev, Development Architect, SAP Labs Bulgaria
    The panel addressed such topics as Platform and API Adoption, Contexts and Dependency Injection (CDI), Java EE vs. Spring, the impact of Java EE 6 on tooling and testing, and Java EE.next, along with a variety of audience questions. Read the entire article for the whole picture.

    Read the article

  • What should be tested in Javascript?

    - by Nathan Hoad
    At work, we've just started on a heavily Javascript-based application (actually using Coffeescript, but still), for which I've been implementing an automated test system using JsTestDriver and fabric. We've never written something with this much Javascript, so up until now we've never done any Javascript testing. I'm unsure exactly what we should be testing in our unit tests. We've written jQuery plugins for various things, so it's quite obvious that they should be verified for correctness as much as possible with JsTestDriver, but everyone else on my team seems to think that we should be testing the page-level Javascript as well. I don't think we should be testing page-level Javascript with unit tests, but instead using a system like Selenium to verify everything works as expected. My main reasoning is that, at the moment, page-level Javascript tests are guaranteed to fail under JsTestDriver, because they try to access DOM elements that can't possibly exist. So, what should be unit tested in Javascript?
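    One way to reconcile the two positions is to keep page logic in pure, DOM-free functions that JsTestDriver can unit test, leaving the thin DOM wiring to Selenium-style end-to-end tests. A minimal sketch, with hypothetical names rather than the poster's actual code:

    ```typescript
    // Pure and unit-testable: no DOM access at all.
    export function formatPrice(cents: number): string {
      return `$${(cents / 100).toFixed(2)}`;
    }

    // Thin DOM wiring layer: exercised by end-to-end tests instead,
    // since the "price" element exists only on the real page.
    export function renderPrice(cents: number): void {
      const el = document.getElementById("price");
      if (el) el.textContent = formatPrice(cents);
    }

    // The pure function can be asserted against without any DOM:
    console.assert(formatPrice(1999) === "$19.99");
    ```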

    Read the article

< Previous Page | 520 521 522 523 524 525 526 527 528 529 530 531  | Next Page >