Search Results

Search found 4999 results on 200 pages for 'derived instances'.


  • How to enable SSIS as a data source type on SQL Server Reporting Services (SSRS) 2008 R2

    When you create a data source in SSRS 2008 R2 (Nov CTP), you won't find SSIS listed as a data source type. Applications that already use it as a data source, or that require it as one, therefore get stuck. Let's learn how to enable SSIS and get it listed again as a data source in SSRS 2008 R2.
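
    The usual approach, sketched from memory rather than taken from the article (verify the exact type string, Version, and PublicKeyToken against the assemblies on your own server), is to register the SSIS data extension in the <Data> section of rsreportserver.config, and in RSReportDesigner.config for design-time support:

        <!-- Assumed entry; confirm against your installation before relying on it. -->
        <Extension Name="SSIS"
                   Type="Microsoft.SqlServer.Dts.DtsClient.DtsConnection, Microsoft.SqlServer.Dts.DtsClient, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" />

    Keep in mind that SSIS is not a supported SSRS data source, so re-enabling it this way is an unsupported configuration.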

    Read the article

  • Using Validation Groups Inside ASP.NET User Controls

    - by bipinjoshi
    Validation groups allow you to validate data entry controls in groups. Server controls such as validation controls, Button, and TextBox have a ValidationGroup property that takes a string value. All the server controls sharing the same ValidationGroup value act as one validation group. Validation groups come in handy when you wish to validate only a small set of controls out of the many controls housed on a Web Form. Using validation groups is quite easy and straightforward. However, if you have a validation group inside a user control and there is more than one instance of that user control on a Web Form, you face a problem. http://www.binaryintellect.net/articles/13427d3d-1f98-4dc0-849b-72e95b8b66a2.aspx
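
    A common workaround, sketched minimally (control names and the group-naming scheme are hypothetical, and this is not necessarily the article's own fix): have the user control assign its own ValidationGroup derived from its ID, so each instance on the page validates independently.

        using System;
        using System.Web.UI;

        // Code-behind of a hypothetical user control containing txtName,
        // reqName (a RequiredFieldValidator) and btnSave (designer-generated fields).
        public partial class AddressControl : UserControl
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Derive a group name unique to this instance, so two copies of
                // the control on one Web Form no longer share a validation group.
                string group = this.ID + "_Group";
                txtName.ValidationGroup = group;
                reqName.ValidationGroup = group;
                btnSave.ValidationGroup = group;
            }
        }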

    Read the article

  • What would you do if your client required you not to use object-oriented programming?

    - by gunbuster363
    Would you try to persuade your client that using object-oriented programming is much cleaner? Or would you try to follow what he required and give him crappy code? I am currently writing a program to simulate the activity of ants in a grid. The ants can move around, pick things up, and drop things. The problem is that while the actions and positions of each ant can easily be tracked by class attributes (and we can easily create many instances of such ants), my client said that since he has a background in functional programming, he would like the simulation to be made using functional programming. What would you do?

    Read the article

  • Worst practices in C++, common mistakes ...

    - by Felix Dombek
    After reading this famous rant by Linus Torvalds, I wondered what actually are all the bad things programmers might do in C++. I'm explicitly not referring to typographic errors or bad program flow as treated in this question and its answers, but to more high-level errors which are not detected by the compiler and do not result in obvious bugs at first run: complete design errors, things which are improbable in C but are likely to be done by newcomers who don't understand the full implications of their code. I also welcome answers pointing out a huge performance decrease where it would not usually be expected. An example of what one of my professors once told me: "You have used somewhat too many instances of unneeded inheritance and virtuality. Inheritance makes a design much more complicated (and inefficient because of the RTTI (run-time type information) subsystem), and it should therefore only be used where it makes sense, e.g. for the actions in the parse table." [I wrote an LR(1) parser generator.] "Because you make intensive use of templates, you practically don't need inheritance."

    Read the article

  • OUM is Flexible and Scalable

    - by user535886
    Traditionally, projects have been focused on satisfying the contents of a requirements document or rigorously conforming to an existing set of work products. Often, especially where iterative and incremental techniques have not been employed, these requirements may be inaccurate, the previous deliverables may be flawed, or the business needs may have changed since the start of the project. Fitness for business purpose, derived from the Dynamic Systems Development Method (DSDM) framework, refers to the focus on delivering necessary functionality within a required timebox. The solution can be more rigorously engineered later, if such an approach is acceptable. Our collective experience shows that applying fit-for-purpose criteria, rather than tight adherence to requirements specifications, results in an information system that more closely meets the needs of the business.

    In OUM, this principle is extended to the execution of the method processes themselves. Project managers and practitioners are encouraged to scale OUM to be fit-for-purpose for a given situation. It is rarely appropriate to execute every activity within OUM. OUM provides guidance for determining the core set of activities to be executed, the level of detail targeted in those activities and their associated tasks, and the frequency and type of end-user deliverables. The project workplan should be developed from this core. The plan should then be scaled up, rather than tailored down, to the level of discipline appropriate to the identified risks and requirements.

    Even at the task level, models and work products should be completed only to the level of detail required for them to be fit-for-purpose within the current iteration or, at the project level, to suit the business needs of the enterprise and to meet the contractual obligations that govern the project. OUM provides well-defined templates for many of its tasks. Use of these templates is optional, as determined by the context of the project. Work products can easily be a model in a repository, a prototype, a checklist, a set of application code, or, in situations where a high degree of agility is warranted, simply the tacit knowledge contained in the brain of an analyst or practitioner. For further reading on agility, see Balancing Agility and Discipline: A Guide for the Perplexed.

    Read the article

  • Terminal as desktop background instead of wallpaper

    - by Janis Erdmanis
    I have come to the conclusion that all my needs from Nautilus are satisfied by the terminal and the Last file manager. This also removes the need for multiple Nautilus instances, which make a mess when I forget how I meant to use different workspaces. The next step in my simplification would be to get rid of any possibility of opening Nautilus. I also think that my interaction with the computer is file-centred, so it makes sense to leave a file manager in the background behind other applications. Are there any ways to make a terminal act as the desktop background that I could interact with?

    Read the article

  • How do I generate a level randomly?

    - by Charlton Santana
    I am currently hard-coding 10 different instances like the code below, but I'd like to create many more. Instead of having the same layout for each new level, I was wondering if there is any way to generate a random X value for each block (this will be how far into the level it is). A level 100,000 pixels wide would be good enough, but if anyone knows a system to make the level go on and on, I'd like to know that too. This is basically how I define a block now (with irrelevant code removed):

        block = new Block(R.drawable.block, 400, platformheight);
        block2 = new Block(R.drawable.block, 600, platformheight);
        block3 = new Block(R.drawable.block, 750, platformheight);

    The 400 is the X position, which I'd like to place randomly throughout the level; the platformheight variable defines the Y position, which I don't want to change.
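
    One way to do it, as a sketch (in C#, the language used elsewhere on this page; the Block type is stubbed because the question's constructor is Android-specific): walk left to right, adding a random gap before each new block. That caps the count naturally at any level width, and extends to an endless level if you generate the next chunk as the player approaches the frontier.

        using System;
        using System.Collections.Generic;

        // Stand-in for the question's Block; the real one also takes a sprite id.
        record Block(int X, int Y);

        static class LevelGenerator
        {
            // Returns blocks at random, strictly increasing X positions.
            public static List<Block> Generate(int platformHeight, int levelWidth, Random random)
            {
                var blocks = new List<Block>();
                int x = 400;                      // first block, as in the question
                while (x < levelWidth)
                {
                    blocks.Add(new Block(x, platformHeight));
                    x += random.Next(150, 400);   // random gap; tune so jumps stay possible
                }
                return blocks;
            }
        }

    Seeding the Random with a level number gives reproducible levels; calling Generate again with a shifted starting X gives the "go on and on" behaviour.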

    Read the article

  • Has any language become greatly popular for something other than its intended purpose?

    - by Jon Purdy
    Take this scenario: A programmer creates a language to solve some problem. He then releases this language to help others solve problems like it. Another programmer discovers it's actually much better for some different category of problems. By virtue of this new application, the language then becomes popular for that application primarily. Are there any instances of this actually occurring? Put another way, does the intended purpose of a language have any bearing on how it's actually used, or whether it becomes popular? Is it even important that a language have an advertised purpose?

    Read the article

  • SOA Starting Point: Methods for Service Identification and Definition

    As more and more companies start to incorporate a Service-Oriented Architecture (SOA) design approach into their existing enterprise systems, it creates the need for a standardized integration technology. One common technology used by companies is an Enterprise Service Bus (ESB). An ESB, as defined by Progress Software, connects and mediates all communications and interactions between services. In essence, an ESB is a form of middleware that allows services to communicate with one another regardless of framework, environment, or location.

    With the emergence of the ESB, a new emphasis is being placed on approaches that can be used to determine which Web services should be built, and in what order. In May 2011, SOA Magazine published an article that identified ten common methods for identifying and defining services:

    1. Business Process Decomposition
    2. Business Functions
    3. Business Entity Objects
    4. Ownership and Responsibility
    5. Goal-Driven
    6. Component-Based
    7. Existing Supply (Bottom-Up)
    8. Front-Office Application Usage Analysis
    9. Infrastructure
    10. Non-Functional Requirements

    Each of these methods has its own pros and cons with regard to its use within the design process. I personally feel that multiple methodologies should be used during a design process in order to accurately define a design for a system or enterprise system. I like to create a custom cocktail derived from combining these methodologies, to ensure that my design fits the project's and the business's needs while still following development standards and guidelines. Of these ten methods, I am particularly fond of Business Process Decomposition, Business Functions, Goal-Driven, and Component-Based, and I routinely use them in my designs.

    Works Cited
    Hubbers, J.-W., Ligthart, A., & Terlouw, L. (2007, December 10). Ten Ways to Identify Services. SOA Magazine. http://www.soamag.com/I13/1207-1.php
    Progress.com. (2011, October 30). ESB Architecture and Lifecycle Definition. http://web.progress.com/en/esb-architecture-lifecycle-definition.html

    Read the article

  • 3+ monitors, nvidia + intel graphics

    - by gozzilli
    I have seen lots of different posts on this issue, but I cannot figure out what to do. I hope we can collect the available information in one place so that it can be useful for others. Is it possible to have multiple (3+) monitors running on two different graphics cards? I have 1x nVidia GeForce GTX 550 with 2 DVI ports and 1x Intel integrated graphics with 2 DVI ports. I understand that they would be running on different instances of the X server. Is that correct? Could someone point me in the right direction to start? On Windows it's simple: there is nothing to do beyond going into display preferences and activating all 3+ monitors. They can even be laid out alternating one monitor from one graphics card with another monitor from the other card.

    Read the article

  • Dependency injection and IOC containers in a closed project

    - by Puckl
    Does it make sense to assemble my project with dependency injection containers if I am the only one who will use the code of that project? The question came up when I read this IoC article: http://martinfowler.com/articles/injection.html The justification for using dependency injection in this article is that friends can reuse a class, replacing the classes it depends on with their own, because those dependencies are injected rather than instantiated inside the class. I would only use it to inject objects where they are needed, instead of passing them through layers to their target. (Which is not so bad, as I learned here: Is it bad practice to pass instances through several layers?) (Maybe I will reuse parts of the project, who knows, but I don't know if that is a good justification.)
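
    To make the "inject where needed, instead of passing through layers" point concrete, a minimal constructor-injection sketch (all names hypothetical): the class that uses the logger declares it in its constructor, and the intermediate layer never mentions it.

        using System;

        public interface ILogger { void Log(string message); }

        public class ConsoleLogger : ILogger
        {
            public void Log(string message) => Console.WriteLine(message);
        }

        public class ReportScreen
        {
            private readonly ILogger _logger;
            public ReportScreen(ILogger logger) { _logger = logger; }  // injected where needed
            public void Render() => _logger.Log("rendering report");
        }

        public class MiddleLayer
        {
            private readonly ReportScreen _screen;
            public MiddleLayer(ReportScreen screen) { _screen = screen; }  // no ILogger parameter
            public void Show() => _screen.Render();
        }

    Without a container, MiddleLayer would have to accept an ILogger purely to forward it down; with one, each class declares only what it itself uses.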

    Read the article

  • Understanding the maximum hit-rate supported by a web-server

    - by SNag
    I would like to crawl a publicly available site (one that's legal to crawl) for a personal project. From a brief trial of the crawler, I gathered that my program hits the server with a new HTTPRequest 8 times per second. At this rate, by my estimate, obtaining the full data set needs about 60 full days of crawling. (For scale, 8 requests per second sustained for 60 days is roughly 8 × 86,400 × 60 ≈ 41.5 million requests.) While the site is legal to crawl, I understand it can still be unethical to crawl at a rate that inconveniences the site's regular traffic. What I'd like to understand here is: how high is 8 hits per second for the server I'm crawling? Could I possibly do 4 times that (by running 4 instances of my crawler in parallel) to bring the total effort down to just 15 days instead of 60? How do you find the maximum hit-rate a web server supports? What would be the theoretical (and ethical) upper limit for the crawl rate, so as not to adversely affect the server's routine traffic?

    Read the article

  • How to connect to a WCF service using IP of the host machine where the service is hosted?

    - by Kumar
    I have a secured WCF service (https://<MachineName>:sslport/services) self-hosted on a machine. Different instances of the same service are deployed on different machines. From a client app, I am able to connect to these services through code, i.e. using ChannelFactory(), with the machine-name endpoint address. But if I try to access the service using the endpoint address https://<ipAddress>:sslport/services, replacing the machine name with the machine's IP address, I get an error stating "could not establish trust relationship". I know this error is caused by the SSL certificate: a trust relationship could not be established. Are there any settings or other possibilities to make this work?
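
    The durable fix is a certificate whose subject (or subject alternative name) covers the IP address the client types, or simply connecting by the name the certificate was issued to. For development or testing, the client can instead relax the hostname check; a sketch using the standard ServicePointManager hook (treat this as test-only code, not something to ship):

        using System.Net;
        using System.Net.Security;

        static class TestOnlyCertificatePolicy
        {
            public static void AllowNameMismatch()
            {
                // Accept certificates whose only problem is that the request URI
                // used an IP address while the certificate names the machine.
                ServicePointManager.ServerCertificateValidationCallback =
                    (sender, certificate, chain, sslPolicyErrors) =>
                        sslPolicyErrors == SslPolicyErrors.None ||
                        sslPolicyErrors == SslPolicyErrors.RemoteCertificateNameMismatch;
            }
        }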

    Read the article

  • Recommended design pattern to handle multiple compression algorithms for a class hierarchy

    - by sgorozco
    For all you OOD experts: what would be the recommended way to model the following scenario? I have a class hierarchy similar to this one:

        class Base { ... }
        class Derived1 : Base { ... }
        class Derived2 : Base { ... }
        ...

    Next, I would like to implement different compression/decompression engines for this hierarchy. (I already have code for several strategies that best handle different cases, like file compression, network stream compression, legacy system compression, etc.) I would like the compression strategy to be pluggable and chosen at runtime; however, I'm not sure how to handle the class hierarchy. Currently I have a tightly coupled design that looks like this:

        interface ICompressor
        {
            byte[] Compress(Base instance);
        }

        class Strategy1Compressor : ICompressor
        {
            byte[] Compress(Base instance)
            {
                // Common compression guts for Base class
                // ...
                if( instance is Derived1 )
                {
                    // Compression guts for Derived1 class
                }
                if( instance is Derived2 )
                {
                    // Compression guts for Derived2 class
                }
                // Additional compression logic to handle other class derivations
                // ...
            }
        }

    As it is, whenever I add a new derived class inheriting from Base, I have to modify all compression strategies to take the new class into account. Is there a design pattern that allows me to decouple this, and to easily introduce more classes to the Base hierarchy and/or additional compression strategies?
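
    One classic answer, offered as a sketch rather than the only option: double dispatch via the Visitor pattern. Each derived class routes itself to the right overload, so the runtime "is" checks disappear and each strategy keeps its per-class logic in one place.

        // Visitor-style double dispatch; bodies elided to match the question's style.
        abstract class Base
        {
            public abstract byte[] Accept(ICompressor compressor);
        }

        class Derived1 : Base
        {
            public override byte[] Accept(ICompressor compressor) => compressor.Compress(this);
        }

        class Derived2 : Base
        {
            public override byte[] Accept(ICompressor compressor) => compressor.Compress(this);
        }

        interface ICompressor
        {
            byte[] Compress(Derived1 instance);
            byte[] Compress(Derived2 instance);
        }

        class Strategy1Compressor : ICompressor
        {
            public byte[] Compress(Derived1 instance) { /* Derived1-specific guts */ return new byte[0]; }
            public byte[] Compress(Derived2 instance) { /* Derived2-specific guts */ return new byte[0]; }
        }

    The trade-off is explicit: adding a new Base subclass now means adding one overload to ICompressor (a compile error rather than a silently missed "is" branch), while adding a new strategy means implementing one interface.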

    Read the article

  • Existing Instance, Shiny New Disks

    - by merrillaldrich
    Migrating an Instance of SQL Server to New Disks. I get to do something pretty entertaining this week – migrate SQL instances on a 2008 cluster from one disk array to another! Zut alors! I am so excited I can hardly contain myself, so let's get started. (Only a DBA could love this stuff, am I right? I know.) Anyway, here's one method of many to migrate your data. Assumption: this is a host-based migration, which just means I'm using the Windows file system to push the data from one set of SAN disks...(read more)

    Read the article

  • Is my understanding of open source licenses correct?

    - by tester
    I would like to check whether my understanding of open source licenses is correct, since, as you know, misunderstanding the terms may lead to a serious lawsuit. Thank you. The main difference among open source licenses is whether the license is copyleft. A copyleft license allows others to reproduce, modify, and distribute the product, but the released product is bound by the same licensing restrictions: they have to use the same license for the modified version. A copyleft license also requires all released modified versions to be free software. On the other hand, if others create a derived work incorporating non-copyleft-licensed code, they can choose any license for that code.

    The several kinds of licenses, compared:

    GPL is a restrictive license. Software must be released under the GPL if it integrates, or is modified from, other GPL-licensed software. The libraries used in developing GPL-licensed software are also restricted to the GPL and LGPL; proprietary software is not allowed to be employed in (or compiled into) any part of a GPL application.

    LGPL is similar to the GPL, but is more permissive in allowing the use of other, non-GPL software.

    BSD is a relatively simple license; it allows developers to do anything with the original source code. The license holders do not hold any legal responsibility for their released product.

    The Apache license evolved from the BSD license. The legal terms are improved and written by legal professionals in a more modern way. It covers intellectual property ownership and liability issues comprehensively.

    Also, are there any popular licenses besides these? Thank you.

    Read the article

  • Few questions about Thunderbird in Ubuntu 11.10

    - by Darwell
    I am happy with Ubuntu's choice to replace Evolution with Thunderbird. However, I am having some issues with it.

    1. What do you use to make Thunderbird minimize (and disappear from the Launcher) when closing it? It should still be running (and give me notifications of new emails) but not bother me in the Launcher or in the list of windows, because I need the free space. I tried using the FireTray extension; it works quite okay, but it's buggy: sometimes there are two instances of Thunderbird open when using it, and Thunderbird's global menu also disappears after some time of using the extension.

    2. Is there a plan to integrate Thunderbird's calendars into the date indicator in Ubuntu? I'd like to see my events when I click on the time.

    Thanks.

    Read the article

  • Take Two: Comparing JVMs on ARM/Linux

    - by user12608080
    Although the intent of the previous article, entitled Comparing JVMs on ARM/Linux, was to introduce and highlight the availability of the HotSpot server compiler (referred to as c2) for Java SE-Embedded ARM v7, it seems, based on feedback, that everyone was more interested in the OpenJDK comparisons to Java SE-E. In fact there were two main concerns:

    1. The fact that the previous article compared Java SE-E 7 against OpenJDK 6 might be construed as an unlevel playing field, because version 7 is newer and therefore potentially more optimized.
    2. That the generic compiler settings chosen to build the OpenJDK implementations did not put those versions in a particularly favorable light.

    With those considerations in mind, we'll institute the following changes to this round of benchmarking:

    - To help alleviate an additional concern that there is some sort of benchmark bias, we'll use a different suite, called DaCapo. Funded and supported by many prestigious organizations, DaCapo's aim is to benchmark real-world applications. Further information about DaCapo can be found at http://dacapobench.org.
    - At the suggestion of Xerxes Ranby, who has been a great help through this entire exercise, a newer Linux distribution will be used to assure that the OpenJDK implementations were built with more optimal compiler settings. The Linux distribution in this instance is Ubuntu 11.10 Oneiric Ocelot.
    - Having experienced difficulties getting Ubuntu 11.10 to run on the original D2Plug ARMv7 platform, for these benchmarks we'll switch to an embedded system that has a supported Ubuntu 11.10 release. That platform is the Freescale i.MX53 Quick Start Board. It has an ARMv7 Cortex-A8 processor running at 1GHz with 1GB RAM.
    - We'll limit comparisons to 4 JVM implementations:
      - Java SE-E 7 Update 2, c1 compiler (default)
      - Java SE-E 6 Update 30 (c1 compiler is the only option)
      - OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2, CACAO build 1.1.0pre2
      - OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2, JamVM build-1.6.0-devel

    Certain OpenJDK implementations were eliminated from this round of testing for the simple reason that their performance was not competitive. The Java SE 7u2 c2 compiler was also removed because, although quite respectable, it did not perform as well as the c1 compilers. Recall that c2 works optimally in long-lived situations; many of these benchmarks complete in a relatively short period of time. To get a feel for where c2 shines, take a look at the first chart in this blog.

    The first chart that follows includes performance of all benchmark runs on all platforms. Later on we'll look more at individual tests. In all runs, smaller means faster. The DaCapo aficionado may notice that only 10 of the 14 DaCapo tests for this version were executed. The reason is that these 10 tests represent the only ones successfully completed by all 4 JVMs. Only Java SE-E 6u30 could successfully run all of the tests; both OpenJDK instances not only failed to complete certain tests, but experienced VM aborts too.

    One of the first observations that can be made between Java SE-E 6 and 7 is that, for all intents and purposes, they are on par with regard to performance. While it is a fact that successive Java SE releases add additional optimizations, it is also true that Java SE 7 introduces additional complexity to the Java platform, thus balancing out any potential performance gains at this point. We are still early into Java SE 7; we would expect further performance enhancements for Java SE-E 7 in future updates.

    In comparing Java SE-E to OpenJDK performance, among the two OpenJDK VMs, CACAO's results are respectable in 4 of the 10 tests. The charts that follow show the individual results of those four tests. Both Java SE-E versions win every test, outperforming CACAO in the range of 9% to 55%. For the remaining 6 tests, Java SE-E significantly outperforms CACAO, in the range of 114% to 311%.

    So it looks like OpenJDK results are mixed for this round of benchmarks. In some cases, performance looks to have improved. But in a majority of instances, OpenJDK still lags behind Java SE-Embedded considerably.

    Time to put on my asbestos suit. Let the flames begin...

    Read the article

  • Rebuilding CoasterBuzz, Part IV: Dependency injection, it's what's for breakfast

    - by Jeff
    (Repost from my personal blog.) This is another post in a series about rebuilding one of my Web sites, which has been around for 12 years. I hope to relaunch soon. More: Part I: Evolution, and death to WCF; Part II: Hot data objects; Part III: The architecture using the "Web stack of love".

    If anything generally good for the craft has come out of the rise of ASP.NET MVC, it's that people are more likely to use dependency injection, and loosely couple the pieces of their applications. A lot of the emphasis on coding this way has been to facilitate unit testing, and that's awesome. Unit testing makes me feel a lot less like a hack, and a lot more confident in what I'm doing.

    Dependency injection is pretty straightforward. It says, "Given an instance of this class, I need instances of other classes, defined not by their concrete implementations, but by their interfaces." Probably the first place a developer exercises this is when having a class talk to some kind of data repository. For a very simple example, pretend the FooService has to get some Foo. It looks like this:

        public class FooService
        {
            public FooService(IFooRepository fooRepo)
            {
                _fooRepo = fooRepo;
            }

            private readonly IFooRepository _fooRepo;

            public Foo GetMeFoo()
            {
                return _fooRepo.FooFromDatabase();
            }
        }

    When we need the FooService, we ask the dependency container to get it for us. It says, "You'll need an IFooRepository in that, so let me see what that's mapped to, and put it in there for you." Why is this good for you? It's good because your FooService doesn't know or care about how you get some Foo. You can stub out what the methods and properties on a fake IFooRepository might return, and test just the FooService. I don't want to get too far into unit testing, but it's the most commonly cited reason to use DI containers in MVC.

    What I wanted to mention is how there's another benefit in a project like mine, where I have to glue together a bunch of stuff. For example, when I have someone sign up for a new account on CoasterBuzz, I'm actually using POP Forums' new account mailer, which composes a bunch of text that includes a link to verify your account. The thing is, I want to use custom text and some other logic that's specific to CoasterBuzz. To accomplish this, I make a new class that inherits from the forum's NewAccountMailer, and override some stuff. Easy enough. Then I use Ninject, the DI container I'm using, to unbind the forum's implementation and substitute my own. Ninject uses something called a NinjectModule to bind interfaces to concrete implementations. The forum has its own module, and then the CoasterBuzz module is loaded second. The CB module has two lines of code to swap out the mailer implementation:

        Unbind<PopForums.Email.INewAccountMailer>();
        Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>();

    Piece of cake! Now, when code asks the DI container for an INewAccountMailer, it gets my custom implementation instead. This is a lot easier to deal with than some of the alternatives. I could do some copy-paste, but then I'm not using well-tested code from the forum. I could write stuff from scratch, but then I'm throwing away a bunch of logic I've already written (in this case, stuff around e-mail, e-mail settings, and mail delivery failures).

    There are other places where the DI container comes in handy. For example, CoasterBuzz does a number of custom things with user profiles, and special content for paid members. It uses the forum as the core piece for managing users, so I can ask the container to get me instances of classes that do user lookups, for example, and have zero care about how the forum handles database calls, configuration, etc. What a great world to live in, compared to ten years ago. Sure, the primary interest in DI is around the "separation of concerns" and facilitating unit testing, but as your library grows and you use more open source, it starts to be the glue that pulls everything together.
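
    For context, a minimal sketch of where the Unbind/Bind pair above lives (the class name is invented; NinjectModule and its Load override are Ninject's actual extension points):

        using Ninject.Modules;

        public class CoasterBuzzModule : NinjectModule
        {
            public override void Load()
            {
                // Swap the forum's default mailer for the site-specific one.
                Unbind<PopForums.Email.INewAccountMailer>();
                Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>();
            }
        }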

    Read the article

  • Framework licensing question [closed]

    - by nosarious
    I have a framework I have been developing, but I find myself unable to work on it over the next year. I would like to make it open source in the interim, to get others to use it and improve how it works. I would like a licensing scheme that allows multiple instances of the software for individual users (i.e., a newspaper, magazine, or zine hosting the code on their own). I would like to prevent it from becoming the basis of a larger hosting service right now, because it is intended to be part of a much larger hosting ecosystem which allows people to create and share their work. Right now there is no license associated with it, which is why I am not posting a link here. Any help or suggestions on how to handle licensing this code for contributions and use would be appreciated, and if anyone would like to see examples or the GitHub repository I would be happy to send it.

    Read the article

  • Business Intelligence (BI) Defined

    CIO.com defines Business Intelligence (BI) as a generic reference to a collection of applications that are used to analyze raw organizational data. Typical BI activities include data mining, online analytical processing, querying, and reporting. They further explain that the primary reason a company would utilize BI is to make more of its data accessible. The more accessible data is to the users, the faster they can identify ways to reduce business costs, discover new business opportunities, and react quickly to adjust prices based on current supply and demand.

    One area in which a hospital system could use BI derived from a data warehouse is the Emergency Room (ER), in regards to the number of doctors and nurses working during a full moon at each ER location. To determine this, BI needs to find a trend in the number of patients seen on a full moon; furthermore, it also needs to determine the optimal number of staff members working during a full moon, by determining the employee-to-patient ratio needed to meet standard patient wait times while also being the most cost-effective for the hospital. This will allow the hospital system to estimate the number of potential patients on the next full moon and adjust staff schedules accordingly, ensuring that patient care is not affected by the influx (or lack of influx) of patients, while also working only the minimum number of employees needed to remain profitable.

    Another area where a hospital system could use BI is in the orders placed with drug and medical supply companies. BI could identify trends in prescriptions given to patients; this information could be used for ordering new supplies and forecasting the amount of medicine each hospital needs to keep on site at a given time. For example, a hospital might want to stock up on the materials needed to set bones in a cast prior to the summer, because its BI indicates that a majority of broken bones occur during the summer, when children are out of school and have more free time.

    Read the article

  • "Walking" along a rotating surface in LimeJS

    - by Dave Lancea
    I'm trying to have a character walk along a plank (a long, thin rectangle) that works like a seesaw, rotated around a central point by Box2D physics (falling objects). I want the left and right arrow keys to move the player up and down the plank, regardless of its slope, and I don't want to use real physics for the player movement. My idea for achieving this was to compute the coordinate based on the rotation of the plank and the current location "up" or "down" the board. My math is derived from here: http://math.stackexchange.com/questions/143932/calculate-point-given-x-y-angle-and-distance Here's the code I have so far:

        movement = 0;
        if (keys[37]) { // Left
            movement = -3;
        }
        if (keys[39]) { // Right
            movement = 3;
        }

        // this.plank is a LimeJS sprite.
        // getRotation() should return an angle in degrees.
        var rotation = this.plank.getRotation();

        // this.current_plank_location is initialized as 0.
        this.current_plank_location += movement;

        var x_difference = this.current_plank_location * Math.cos(rotation);
        var y_difference = this.current_plank_location * Math.sin(rotation);

        this.setPosition(seesaw.PLANK_CENTER_X + x_difference,
                         seesaw.PLANK_CENTER_Y + y_difference);

    This code causes the player to swing around in a circle when they are off the center of the plank, given a slight change in the rotation of the plank. Any ideas on how I can get the player position to follow the board position?
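
    One detail worth checking (an observation about the posted code, not a confirmed fix): Math.cos and Math.sin take radians, while the comment says getRotation() returns degrees, which would make the offsets spin far faster than the plank tilts. A sketch of the same calculation with the conversion applied, in C# as used elsewhere on this page (names mirror the question's variables; the pivot coordinates are assumptions):

        using System;

        class PlankWalker
        {
            // Assumed pivot point of the seesaw.
            const double PlankCenterX = 400;
            const double PlankCenterY = 300;

            double currentPlankLocation = 0;  // signed distance from the pivot

            public (double X, double Y) Step(double rotationDegrees, double movement)
            {
                currentPlankLocation += movement;
                double radians = rotationDegrees * Math.PI / 180.0;  // degrees -> radians
                double x = PlankCenterX + currentPlankLocation * Math.Cos(radians);
                double y = PlankCenterY + currentPlankLocation * Math.Sin(radians);
                return (x, y);
            }
        }

    In JavaScript the equivalent one-line change is multiplying the rotation by Math.PI / 180 before passing it to Math.cos and Math.sin.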

    Read the article

  • Why isn't there a typeclass for functions?

    - by Steve314
    I already tried this on Reddit, but there's no sign of a response - maybe it's the wrong place, maybe I'm too impatient. Anyway... In a learning problem I've been messing around with, I realised I needed a typeclass for functions, with operations for applying, composing, etc. Reasons:

    1. It can be convenient to treat a representation of a function as if it were the function itself, so that applying the function implicitly uses an interpreter, and composing functions derives a new description.
    2. Once you have a typeclass for functions, you can have derived typeclasses for special kinds of functions - in my case, I want invertible functions.

    For example, functions that apply integer offsets could be represented by an ADT containing an integer. Applying such a function just means adding the integer. Composition is implemented by adding the wrapped integers. The inverse function has the integer negated. The identity function wraps zero. The constant function cannot be provided because there's no suitable representation for it. Of course it doesn't need to spell things as if the values were genuine Haskell functions, but once I had the idea, I thought a library like that must already exist, maybe even using the standard spellings. But I can't find such a typeclass in the Haskell library. I found the Data.Function module, but there's no typeclass - just some common functions that are also available from the Prelude.

    So - why isn't there a typeclass for functions? Is it "just because there isn't" or "because it's not so useful as you think"? Or maybe there's a fundamental problem with the idea? The biggest possible problem I've thought of so far is that function application on actual functions would probably have to be special-cased by the compiler to avoid a looping problem: in order to apply this function I need to apply the function application function, and to do that I need to call the function application function, and to do that...
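
    To make the offset example concrete, here it is sketched as an interface in C# (the language of this page's other examples; the names are invented, and a real Haskell version would be a typeclass with an instance for the offset ADT):

        using System;

        // A "function as data": applying interprets it; composing and inverting
        // build new descriptions without ever constructing a real closure.
        interface IInvertibleFunc<T>
        {
            T Apply(T x);
            IInvertibleFunc<T> After(IInvertibleFunc<T> other);  // composition
            IInvertibleFunc<T> Inverse();
        }

        sealed class Offset : IInvertibleFunc<int>
        {
            public static readonly Offset Identity = new Offset(0);  // wraps zero

            readonly int n;
            public Offset(int n) { this.n = n; }

            public int Apply(int x) => x + n;  // applying just adds the integer

            public IInvertibleFunc<int> After(IInvertibleFunc<int> other) =>
                other is Offset o
                    ? new Offset(n + o.n)      // compose by adding the wrapped integers
                    : throw new ArgumentException("Offset only composes with Offset");

            public IInvertibleFunc<int> Inverse() => new Offset(-n);  // negate
        }

    Note how the constant function has no Offset representation, exactly as described above.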

    Read the article

  • Removing an SSRS instance from a scale-out deployment

    - by Alex Bransky
    If you're like me, you have at one time connected one of your Reporting Services instances to a report server database that was already in use by another instance. This allows the instance to show up in the Scale-out Deployment section of the Reporting Services Configuration Manager. My problem was that the server joined to the original server was no longer available, as it had been repurposed, and when I clicked Remove Server to remove it from my scale-out, the operation would fail because it couldn't contact the server. After searching for a solution for quite some time, I decided to look around in the report server database tables, and voila! All I had to do was remove the old server from the Keys table. I can't guarantee there won't be any side effects to this method, but it worked like a charm for me.
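
    For anyone in the same spot, the shape of that change as a sketch (the column names are from memory of a default install, so inspect the table and back up the ReportServer catalog before deleting anything):

        -- Look at which instances the scale-out deployment thinks it has.
        USE ReportServer;
        SELECT MachineName, InstanceName FROM dbo.Keys;

        -- Remove only the row(s) for the decommissioned server.
        DELETE FROM dbo.Keys WHERE MachineName = 'OLDSERVER';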

    Read the article

  • BizTalk 2009 - Messages: Last 100 Sent

    - by StuartBrierley
    Having previously talked about the lack of the traditional HAT in BizTalk 2009, the question then becomes how you replicate some of the functionality that was previously relied upon. I have already covered the Last 100 Messages Received query, so what about sent messages? In BizTalk 2004 we had a query in HAT to return the messages sent in the last day. While not a direct replacement, the following query replicates some of the usefulness of that query in a BizTalk 2009 HAT-less environment. Basically, we are creating a query to search for the last one hundred tracked messages that were sent by BizTalk.

    Coming up: Messages - last 50 suspended; Service instances - last 100.

    Read the article
