Search Results

Search found 4417 results on 177 pages for 'purpose'.


  • What is the purpose of finalization in Java?

    - by Karthik
    Different websites give different opinions. My understanding is this: to clean up or reclaim the memory that an object occupies, the garbage collector comes into action (is it invoked automatically???). The garbage collector then dereferences the object. Sometimes there is no way for the garbage collector to access the object. Then finalize is invoked to do final clean-up processing, after which the garbage collector can be invoked. Is this right?
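
    For concreteness, a minimal Java sketch of the sequence as the language defines it: finalize() does not free memory itself, it is a last-chance hook the collector runs before reclaiming an unreachable object, and the JVM makes no guarantee it ever runs at all (System.gc() is only a request).

        public class FinalizeDemo {
            @Override
            protected void finalize() throws Throwable {
                try {
                    // last-chance cleanup of non-memory resources (files, sockets)
                    System.out.println("finalize() called by the GC");
                } finally {
                    super.finalize();
                }
            }

            public static void main(String[] args) throws InterruptedException {
                new FinalizeDemo();        // the instance is immediately unreachable
                System.gc();               // a request, not a command
                System.runFinalization();  // likewise only a hint
                Thread.sleep(100);         // give the finalizer thread a chance to run
            }
        }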

    Read the article

  • What is the purpose of unit testing an interface repository

    - by ahsteele
    I am unit testing an ICustomerRepository interface used for retrieving objects of type Customer. As a unit test, what value am I gaining by testing the ICustomerRepository in this manner? Under what conditions would the below test fail? For tests of this nature, is it advisable to do tests that I know should fail? i.e. look for id 4 when I know I've only placed 5 in the repository. I am probably missing something obvious, but it seems the integration tests of the class that implements ICustomerRepository will be of more value.

        [TestClass]
        public class CustomerTests : TestClassBase
        {
            private Customer SetUpCustomerForRepository()
            {
                return new Customer()
                {
                    CustId = 5,
                    DifId = "55",
                    CustLookupName = "The Dude",
                    LoginList = new[]
                    {
                        new Login { LoginCustId = 5, LoginName = "tdude" },
                        new Login { LoginCustId = 5, LoginName = "tdude2" }
                    }
                };
            }

            [TestMethod]
            public void CanGetCustomerById()
            {
                // arrange
                var customer = SetUpCustomerForRepository();
                var repository = Stub<ICustomerRepository>();

                // act
                repository.Stub(rep => rep.GetById(5)).Return(customer);

                // assert
                Assert.AreEqual(customer, repository.GetById(5));
            }
        }

    Test base class:

        public class TestClassBase
        {
            protected T Stub<T>() where T : class
            {
                return MockRepository.GenerateStub<T>();
            }
        }

    ICustomerRepository and IRepository:

        public interface ICustomerRepository : IRepository<Customer>
        {
            IList<Customer> FindCustomers(string q);
            Customer GetCustomerByDifID(string difId);
            Customer GetCustomerByLogin(string loginName);
        }

        public interface IRepository<T>
        {
            void Save(T entity);
            void Save(List<T> entity);
            bool Save(T entity, out string message);
            void Delete(T entity);
            T GetById(int id);
            ICollection<T> FindAll();
        }

    Read the article

  • What is the purpose of OCaml's Lazy.lazy_from_val?

    - by Ricardo
    The doc of Lazy.lazy_from_val states that this function is for special cases:

        val lazy_from_val : 'a -> 'a t

        lazy_from_val v returns an already-forced suspension of v.
        This is for special purposes only and should not be confused with lazy (v).

    Which cases are they talking about? If I create a pair of suspended computations from a value like:

        let l1 = lazy 123
        let l2 = Lazy.lazy_from_val 123

    what is the difference between these two? Because Lazy.lazy_is_val l1 and Lazy.lazy_is_val l2 both return true, saying that the value is already forced!
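
    A sketch of where the two differ: 123 is already a value, and the compiler treats lazy over a plain constant as an already-forced suspension, which is why both calls report true here. The difference becomes observable once the expression still has work to do:

        (* l1 stays suspended: nothing prints until it is forced *)
        let l1 = lazy (print_endline "evaluating l1"; 123)

        (* "evaluating l2" prints immediately: the argument of an ordinary
           function call is evaluated before the call, so lazy_from_val
           receives the finished value 123 *)
        let l2 = Lazy.lazy_from_val (print_endline "evaluating l2"; 123)

        let () =
          Printf.printf "forcing l1: %d\n" (Lazy.force l1);  (* prints "evaluating l1" first *)
          Printf.printf "forcing l2: %d\n" (Lazy.force l2)   (* nothing left to evaluate *)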

    Read the article

  • Cross-platform general purpose C++ RPC library

    - by iUm
    Here's the task: imagine we have an application and a plug-in for it (a dynamic library). The interface between the application and the plug-in is completely defined. Now I need to run the application and the plug-in on different computers. I wrote a stub for the plug-in on the computer where the real application is running; the application loads it and calls its functions as if it were a native plug-in. On the other computer there's a stub instead of the real application, which loads the native plug-in. Now I need to organize RPCs between my stubs over the network, regardless of the underlying transport. Usually this is not difficult, but there are some restrictions: application/plug-in interaction can be re-entrant (e.g. the application calls f1() from the plug-in; in f1() the plug-in calls g1() from the application; in g1() the application calls f2() from the plug-in, and so on...), and any such re-entrant sequence should be executed by exactly the same thread that started it. Where can I find a cross-platform C++ RPC library with these features?

    Read the article

  • Is there a general-purpose printf-ish routine defined in any C standard

    - by supercat
    In many C libraries, there is a printf-style routine which is something like the following:

        int __vgprintf(void *info,
                       void (*print_function)(void *, char),
                       const char *format,
                       va_list params);

    which will format the supplied string and call print_function with the passed-in info value and each character in sequence. A function like fprintf will pass __vgprintf the passed-in file parameter and a pointer to a function which casts its void* to a FILE* and outputs the passed-in character to that file. A function like snprintf will create a struct holding a char* and a length, and pass the address of that struct to a function which outputs each character in sequence, space permitting. Is there any standard for such a function, which could be used if e.g. one wanted a function to output an arbitrary format to a TCP port? A common approach is to allocate a buffer one hopes is big enough, use snprintf to put the data there, and then output the data from the buffer. It would seem cleaner, though, if there were a standard way to specify that the print formatter should call a user-supplied routine with each character.
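
    For what it's worth, no C standard defines such a hook (glibc's fopencookie and BSD's funopen are non-standard extensions of this idea). A portable approximation, sketched here with made-up names, is to size the output with vsnprintf (C99) and then feed each character to the user-supplied sink:

        #include <stdarg.h>
        #include <stdio.h>
        #include <stdlib.h>

        typedef void (*putc_fn)(void *sink, char c);

        static int sink_printf(void *sink, putc_fn put, const char *fmt, ...)
        {
            va_list ap;

            /* measure: vsnprintf with a zero-length buffer returns the
               number of characters that would have been written (C99) */
            va_start(ap, fmt);
            int n = vsnprintf(NULL, 0, fmt, ap);
            va_end(ap);
            if (n < 0)
                return n;

            char *buf = malloc((size_t)n + 1);
            if (buf == NULL)
                return -1;

            va_start(ap, fmt);
            vsnprintf(buf, (size_t)n + 1, fmt, ap);
            va_end(ap);

            for (int i = 0; i < n; i++)   /* feed the sink one char at a time */
                put(sink, buf[i]);

            free(buf);
            return n;
        }

        static void to_stream(void *sink, char c) { fputc(c, (FILE *)sink); }

        int main(void)
        {
            sink_printf(stderr, to_stream, "pi is roughly %.2f\n", 3.14159);
            return 0;
        }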

    Read the article

  • Purpose of singletons in programming

    - by thecoshman
    This is admittedly a rather loose question. My current understanding of singletons is that they are a class set up in such a way that only one instance is ever created. This sounds a lot like a static class to me. The main difference being that with a static class you don't / can't instantiate it, you just use it, such as Math.pi(). With a singleton class, you would still need to do something like:

        Singleton mySingleton = new Singleton();
        mySingleton.set_name("foo");
        Singleton otherSingleton = new Singleton();
        // correct me if I am wrong, but mySingleton == otherSingleton right now, yes?
        // thus the following should happen:
        otherSingleton.set_name("bar");
        mySingleton.report_name(); // will output "bar", won't it?

    Please note, I am asking this language-independently, more about the concept. So I am not so worried about how to actually code such a class, but more about why you would want to and what things you would need to consider.
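
    For reference, the usual singleton mechanics make the snippet above impossible to write rather than answering it: the constructor is private, so new is not available outside the class, and every caller shares the one instance through an accessor. A minimal sketch in Java (the names are made up):

        public final class Singleton {
            private static Singleton instance;
            private String name;

            private Singleton() { }   // nobody outside can call new Singleton()

            public static synchronized Singleton getInstance() {
                if (instance == null)
                    instance = new Singleton();   // created at most once
                return instance;
            }

            public void setName(String name) { this.name = name; }
            public String getName()          { return name; }
        }

    With this shape, Singleton.getInstance() == Singleton.getInstance() is always true, which matches the intuition in the question: a name set through one reference is visible through any other.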

    Read the article

  • What is the purpose/difference in using an event-type constructor?

    - by phq
    In all examples I can find, as well as in the automatically generated code in Visual Studio, events are set using the following code:

        button1.Click += new System.EventHandler(this.button1_Click);

    But I can also write it in a visually cleaner way by omitting the constructor wrapper:

        button1.Click += this.button1_Click;

    which also compiles fine. What is the difference between these two? And why is the first one mostly used/preferred?
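
    As far as I know the two forms are exactly equivalent: since C# 2.0 a method group converts implicitly to a compatible delegate type, and the compiler inserts the constructor call for you, so both lines produce the same IL. A self-contained sketch showing both registrations side by side:

        using System;

        class Demo
        {
            static event EventHandler Click;

            static void OnClick(object sender, EventArgs e)
            {
                Console.WriteLine("clicked");
            }

            static void Main()
            {
                Click += new EventHandler(OnClick); // explicit delegate creation
                Click += OnClick;                   // method group conversion, same IL

                if (Click != null)
                    Click(null, EventArgs.Empty);   // prints "clicked" twice
            }
        }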

    Read the article

  • What is the purpose of this delegate usage?

    - by Kev
    Whilst poking around some code in .NET Reflector, in an app I don't have the source code for, I found this:

        if (DeleteDisks)
        {
            using (List<XenRef<VDI>>.Enumerator enumerator3 = list.GetEnumerator())
            {
                MethodInvoker invoker2 = null;
                XenRef<VDI> vdiRef;
                while (enumerator3.MoveNext())
                {
                    vdiRef = enumerator3.Current;
                    if (invoker2 == null)
                    {
                        //
                        // Why do this?
                        //
                        invoker2 = delegate
                        {
                            VDI.destroy(session, vdiRef.opaque_ref);
                        };
                    }
                    bestEffort(ref caught, invoker2);
                }
            }
        }
        if (caught != null)
        {
            throw caught;
        }

        private static void bestEffort(ref Exception caught, MethodInvoker func)
        {
            try
            {
                func();
            }
            catch (Exception exception)
            {
                log.Error(exception, exception);
                if (caught == null)
                {
                    caught = exception;
                }
            }
        }

    Why not call VDI.destroy() directly? Is this just a way of wrapping the same pattern of try { do something } catch { log error } if it's used a lot?
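
    One detail worth noting when reading the decompiled code: an anonymous method captures the variable vdiRef, not its value at creation time, which is why building the delegate once and reusing it each iteration still destroys the current VDI. A small sketch of that capture rule:

        using System;

        class CaptureDemo
        {
            delegate void Printer();

            static void Main()
            {
                int i = 0;
                Printer print = delegate { Console.WriteLine(i); }; // created once

                for (i = 1; i <= 3; i++)
                    print();   // prints 1, 2, 3: the delegate sees i change
            }
        }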

    Read the article

  • Committing broken code to the repository for the purpose of backing it up

    - by Tim Merrifield
    I was just talking to another developer (more senior than I am), trying to convince him that we should implement continuous integration via CruiseControl. He told me that this will not work, because he commits code that does not compile to the repository all the time for the purpose of backing it up, and automated builds notifying us of failures would be just noise. Committing garbage to the repo sounds bad to me, but I was at a loss for words and didn't know what to say. What is the alternative? What's the best practice for backing up your code on another machine without adding a bunch of pointless revisions? BTW, our version control system is SVN and that probably won't change any time soon.
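
    A common alternative, sketched below with made-up URLs (svn merge --reintegrate needs SVN 1.5 or later), is to give work-in-progress a private branch: the server then backs it up on every commit while trunk, and the CI build, stay green:

        # one-time: copy trunk to a private work branch on the server
        svn copy http://svn.example.com/repo/trunk \
                 http://svn.example.com/repo/branches/tim-wip \
                 -m "Create private WIP branch"
        svn switch http://svn.example.com/repo/branches/tim-wip

        # now commit freely, even code that does not compile
        svn commit -m "WIP: half-finished refactor, does not build"

        # fold back into trunk only when it builds, from a trunk working copy
        svn merge --reintegrate http://svn.example.com/repo/branches/tim-wip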

    Read the article

  • What is the purpose of AnyVal?

    - by DaoWen
    I can't think of any situation where the type AnyVal would be useful, especially with the addition of the Numeric type class for abstracting over Int, Long, etc. Are there any actual use cases for AnyVal, or is it just an artifact that makes the type hierarchy a bit prettier? Just to clarify, I know what AnyVal is; I just can't think of any time I would actually need it in Scala. When would I ever need a type that encompasses Int, Char and Double? It seems like it's just there to make the type hierarchy prettier (i.e. it looks nicer to have AnyVal and AnyRef as siblings rather than having Int, Char, etc. inherit directly from Any).
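
    A contrived Scala sketch of the one kind of signature that seems to want it: AnyVal as a parameter type admits every value type while rejecting all reference types:

        object AnyValDemo {
          // accepts Int, Double, Char, Boolean, ... but no AnyRef
          def describe(v: AnyVal): String = v match {
            case i: Int    => "an Int: " + i
            case d: Double => "a Double: " + d
            case c: Char   => "a Char: " + c
            case other     => "some other value type: " + other
          }

          def main(args: Array[String]): Unit = {
            println(describe(42))     // an Int: 42
            println(describe(3.14))   // a Double: 3.14
            println(describe('x'))    // a Char: x
            // describe("hi")         // does not compile: String is an AnyRef
          }
        }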

    Read the article

  • Has any language become greatly popular for something other than its intended purpose?

    - by Jon Purdy
    Take this scenario: A programmer creates a language to solve some problem. He then releases this language to help others solve problems like it. Another programmer discovers it's actually much better for some different category of problems. By virtue of this new application, the language then becomes popular for that application primarily. Are there any instances of this actually occurring? Put another way, does the intended purpose of a language have any bearing on how it's actually used, or whether it becomes popular? Is it even important that a language have an advertised purpose?

    Read the article

  • What's the difference or purpose of a file format like ELF when flat binaries take up less space and can do the same thing?

    - by Sinister Clock
    I will give a better description now. In Linux driver development you need to follow a specification, using the ELF file format for the finalized executable; that right there is not flat, it has headers and entry fields, and is basically carrying more weight than just a flat binary with opcodes. What is the purpose, or in-depth difference, of a Linux ELF file for a driver that interacts with the video hardware, versus, say, a bare flat x86 16-bit binary I write that makes use of an emulated graphics mode on a graphics card and writes to memory (besides the fact that the Linux driver is probably specific to making full use of the hardware and not just the emulated, backwards-compatible memory-access scheme)? To sum it up: what is the difference, or purpose, of a binary like ELF, with its headers and settings, versus just a flat binary with the necessary opcodes/instructions/data to do the same thing, only without any specific format? Example: Windows uses PE, Mac uses Mach-O/PEF, Linux uses ELF/FATELF, Unix uses COFF. What do any of them really mean or designate if you can just go flat, especially with a device driver, which is system software?
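
    A sketch of what those headers buy in practice (Linux-specific, using glibc's elf.h): the first bytes of an ELF file tell the loader what the code is, what machine it targets, and where to start executing; a flat binary has to convey all of that out-of-band.

        #include <elf.h>
        #include <stdio.h>

        /* dump the key loader metadata of a 64-bit ELF file passed as argv[1] */
        int main(int argc, char **argv)
        {
            if (argc < 2)
                return 1;
            FILE *f = fopen(argv[1], "rb");
            if (f == NULL)
                return 1;

            Elf64_Ehdr eh;
            if (fread(&eh, sizeof eh, 1, f) != 1)
                return 1;

            printf("magic   : 0x7f %c%c%c\n",
                   eh.e_ident[1], eh.e_ident[2], eh.e_ident[3]);  /* "ELF" */
            printf("type    : %d (relocatable/executable/shared/core)\n",
                   (int)eh.e_type);
            printf("machine : %d (target ISA)\n", (int)eh.e_machine);
            printf("entry   : %#llx (where execution starts)\n",
                   (unsigned long long)eh.e_entry);
            fclose(f);
            return 0;
        }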

    Read the article

  • What is the purpose of the bit depths of the framebuffer components in GLFW3's glfwWindowHint function?

    - by Rui d'Orey
    I would like to know what the following "framebuffer-related hints" of the GLFW3 function glfwWindowHint do:

        GLFW_RED_BITS
        GLFW_GREEN_BITS
        GLFW_BLUE_BITS
        GLFW_ALPHA_BITS
        GLFW_DEPTH_BITS
        GLFW_STENCIL_BITS

    What is their purpose? Are their default values usually enough? Where are those bits stored, in a buffer on the GPU? What do they affect, and in what way? Thank you in advance!
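
    For illustration, a sketch of where these hints sit in a GLFW3 program; the values shown are, as far as I know, GLFW3's own defaults, so explicit hints are only needed when you want something different:

        #include <GLFW/glfw3.h>

        int main(void)
        {
            if (!glfwInit())
                return -1;

            glfwWindowHint(GLFW_RED_BITS,     8);  /* bits per color channel of   */
            glfwWindowHint(GLFW_GREEN_BITS,   8);  /* the window's color buffer   */
            glfwWindowHint(GLFW_BLUE_BITS,    8);
            glfwWindowHint(GLFW_ALPHA_BITS,   8);
            glfwWindowHint(GLFW_DEPTH_BITS,  24);  /* depth-test precision        */
            glfwWindowHint(GLFW_STENCIL_BITS, 8);  /* stencil-test precision      */

            GLFWwindow *win = glfwCreateWindow(640, 480, "demo", NULL, NULL);
            if (win == NULL) {
                glfwTerminate();
                return -1;
            }

            glfwMakeContextCurrent(win);
            /* ... render loop ... */
            glfwDestroyWindow(win);
            glfwTerminate();
            return 0;
        }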

    Read the article

  • What is the purpose of the canonical view volume?

    - by breadjesus
    I'm currently learning OpenGL and haven't been able to find an answer to this question. After the projection matrix is applied to the view space, the view space is "normalized" so that all the points lie within the range [-1, 1]. This is generally referred to as the "canonical view volume" or "normalized device coordinates". While I've found plenty of resources telling me about how this happens, I haven't seen anything about why it happens. What is the purpose of this step?
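
    For reference, the normalization step the question describes is mechanically tiny; a C sketch of the perspective divide that maps clip coordinates into the canonical volume:

        #include <stdio.h>

        typedef struct { float x, y, z, w; } vec4;

        /* after the projection matrix: divide by w to get normalized
           device coordinates in the [-1, 1] cube */
        static vec4 clip_to_ndc(vec4 clip)
        {
            vec4 ndc = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f };
            return ndc;
        }

        int main(void)
        {
            vec4 clip = { 2.0f, -2.0f, 4.0f, 4.0f };   /* a point inside the frustum */
            vec4 ndc = clip_to_ndc(clip);
            printf("ndc = (%g, %g, %g)\n", ndc.x, ndc.y, ndc.z);
            /* prints (0.5, -0.5, 1): z = 1 means exactly on the far plane */
            return 0;
        }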

    Read the article

  • I need to develop a parser. Can I use Lex and Yacc for the purpose?

    - by Scrooge
    I need to extract very particular data from log files (of different types and formats). Since I am a recent college graduate, my mind ran to using Lex and Yacc for the purpose. Now I have the following questions:

    1. Will it be legal to do so? (The product I am working on belongs to one of the biggest tech companies in the world.)
    2. Also, am I being too afraid to write my own parser?
    3. How can I use Lex and Yacc if my product is Windows-based?

    Please tell me if you need any clarification or extra information.
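
    On question 2: for pulling fields out of log lines, a scanner alone is often enough, with no yacc grammar at all; and on question 3, flex and bison both have Windows ports. A made-up flex sketch (the patterns are illustrative only):

        %option noyywrap
        %%
        [0-9]{4}-[0-9]{2}-[0-9]{2}    { printf("date : %s\n", yytext); }
        "ERROR".*                     { printf("error: %s\n", yytext); }
        .|\n                          { /* skip everything else */ }
        %%
        int main(void)
        {
            return yylex();   /* scans stdin until EOF */
        }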

    Read the article

  • Is there a purpose for using pull requests on my own repo if I am the only developer?

    - by marco-fiset
    So I got started with a real project of mine on GitHub, and things are going pretty well; ideas are flowing a lot faster than I initially thought. In order to keep things organized, I set up some branches so I can develop different features separately. Now when I push my branch to GitHub, I have that section where I have two buttons: Pull Request and Compare, next to the name of the branch I recently pushed. I understand the purpose of the Compare button, but I don't get why I would want to create a pull request on my own repo. Can someone explain to me why I would do that? Is it useful to make pull requests on my own repo if I am the only developer?

    Read the article

  • Developing a feature whose sole purpose is to be taken out?

    - by adib
    What is the name of the pattern in which individual contributors (programmers/designers) develop an artifact whose sole purpose is to serve as a diversion, so that management can remove that feature from the final product? This is folklore I heard from an ex-colleague who used to work at a large game development company. At that company it is well known that middle management is pressured to "give input" and "make changes" to the product, otherwise they risk being seen as not contributing to the project. This situation has delayed many projects because of these superfluous "management inputs". In one project at the above company, the artists and developers created a supernumerary animated character that appeared in every cutscene and stuck out like a sore thumb. They designed it in such a way that it could easily be removed before the game shipped (this was when games were still sold on physical media and not as downloadable products). Naturally, management then voted to remove the animation. On the positive side, management didn't introduce any unnecessary changes that would have delayed the project, because they had shown that they provided constructive input to the product. This process pattern has a name among game programmers who work in corporate settings, but I forgot the actual name. I believe it's duck-something. Can anybody help point out the name, and perhaps some reasonably credible reference to how the pattern develops?

    Read the article

  • Is Apple getting out of the general purpose development platform business?

    - by Charles E. Grant
    I've been doing general ANSI C / console C++ / Java / web development on Mac hardware for about ten years. I make no claims of objective superiority over other platforms; it just satisfies my personal tastes. With the success of the iPhone and the related App Store, there was some speculation that Apple would get out of the general-purpose computer market and become a closed software ecosystem, focusing on consumer appliances. I pooh-poohed the speculation at the time, but this week Apple announced that a) they were opening an App Store for the Mac, b) Java applications would not be eligible for the Mac App Store, and c) the Apple JVM was being deprecated and might not be available for future releases of OS X. I'm not a Java developer per se, but I work in a research lab that occasionally writes Java applications and also depends on tools written in Java. This has the potential to be a huge pain in the butt for us. As of now, there is no other JVM for OS X that we can point our end users to. SoyLatte and OpenJDK might be appropriate for developers, but the complexity of the installation makes them inappropriate for end users. Eventually I expect Oracle/Sun will produce a replacement JVM for OS X. More worrisome to me is that Apple used to specifically advertise that it was an excellent platform for scientific development because it supported all major language platforms. Is the deprecation of their JVM a sign that this market no longer interests them?

    Read the article

  • What's the point of the PImpl pattern when we can use an interface for the same purpose in C++?

    - by ZijingWu
    I see a lot of source code which uses the PIMPL idiom in C++. I assume its purpose is to hide the private data/types/implementation, so it can remove dependencies and then reduce compile times and header-include issues. But an interface class in C++ also has this capability: it can likewise hide data/types and implementation, and to let the caller see only the interface when creating an object, we can declare a factory method in the interface header. The comparison: Cost: the interface way costs less, because you don't even need to repeat the public wrapper function implementation (void Bar::doWork() { return m_impl->doWork(); }); you just define the signature in the interface. Understanding: the interface technique is better understood by every C++ developer. Performance: the interface way's performance is not worse than the PIMPL idiom's; both cost an extra memory access. I assume the performance is the same. Following is pseudocode to illustrate my question:

        // Forward declaration can help you avoid including the BarImpl header,
        // and everything that header itself includes.
        class BarImpl;

        class Bar
        {
        public:
            // public functions
            void doWork();

        private:
            // You don't need to recompile Bar.cpp after changing
            // the implementation in BarImpl.cpp.
            BarImpl* m_impl;
        };

    The same purpose can be achieved using an interface:

        // Bar.h
        class IBar
        {
        public:
            virtual ~IBar() {}
            // public functions
            virtual void doWork() = 0;
        };

        // expose only the interface, instead of the class name, to the caller
        IBar* createObject();

    So what's the point of PIMPL?

    Read the article

  • Will using the title attribute on a div help for SEO purposes? [duplicate]

    - by Niko Jojo
    This question is an exact duplicate of: "Should I set the title attribute for content DIVs to explain what they contain?"

    Nowadays many images are displayed using CSS, like below:

        <div title="My Logo" class="all_logo mt15">&nbsp;</div>

    The div above will show a logo image, but since it uses CSS for the logo instead of an <img> tag, it does not get the alt-tag benefit from an SEO point of view. My question is: does the title attribute of a <div> help with SEO?

    Read the article

  • Are today's general-purpose languages at the right level of abstraction?

    - by KeesDijk
    Today Uncle Bob Martin, a genuine hero, showed this video. In this video Bob Martin claims that our programming languages are at the right level for our problems at this time. One of the points I take from this video is that Bob Martin sees us as detail managers, and that our problems are at the detail level. This is the first time I have to disagree with Bob Martin, and I was wondering what the people at Programmers think about this. First, there is a difference between MDA and MDE. MDA in itself hasn't worked, and I blame way too much formalisation at a level where you can't formalize these kinds of problems. MDE and MDD are still trying to prove themselves, and in my mind show great promise; e.g. look at MetaEdit. The detail still needs to be managed in my mind, but you do so in one place (framework or generators) instead of at multiple places. Right for our kind of problems? I think it depends on what problems you look at. Do the current programming languages keep up with the current demands on time to market? Are they good at bridging the business/IT communication gap? So what do you think?

    Read the article
