Search Results

Search found 409 results on 17 pages for 'curiosity'.

Page 14/17 | < Previous Page | 10 11 12 13 14 15 16 17  | Next Page >

  • What does flushing thread local memory to global memory mean?

    - by Jack Griffith
    Hi, I am aware that the purpose of volatile variables in Java is that writes to such variables are immediately visible to other threads. I am also aware that one of the effects of a synchronized block is to flush thread-local memory to global memory. I have never fully understood the references to 'thread-local' memory in this context. I understand that data which only exists on the stack is thread-local, but when talking about objects on the heap my understanding becomes hazy. I was hoping to get comments on the following points: When executing on a machine with multiple processors, does flushing thread-local memory simply refer to the flushing of the CPU cache into RAM? When executing on a uniprocessor machine, does this mean anything at all? If it is possible for the heap to have the same variable at two different memory locations (each accessed by a different thread), under what circumstances would this arise? What implications does this have for garbage collection? How aggressively do VMs do this kind of thing? Overall, I think I am trying to understand whether thread-local means memory that is physically accessible by only one CPU, or whether there is logical thread-local heap partitioning done by the VM. Any links to presentations or documentation would be immensely helpful. I have spent time researching this, and although I have found lots of nice literature, I haven't been able to satisfy my curiosity regarding the different situations & definitions of thread-local memory. Thanks very much.

  • Oracle - Getting Select Count(*) from ... as an output parameter in System.Data.OracleClient

    - by cbeuker
    Greetings all, I have a question. I am trying to build a parametrized query to get me the number of rows from a table in Oracle. Rather simple. However, I am an Oracle newbie. I know in SQL Server you can do something like: Select @outputVariable = count(*) from sometable where name = @SomeOtherVariable and then you can set up an output parameter in System.Data.SqlClient to get the @outputVariable. Thinking that one should be able to do this in Oracle as well, I have the following query: Select count(*) into :theCount from sometable where name = :SomeValue I set up my Oracle parameters (using System.Data.OracleClient - yes I know it will be deprecated in .Net 4 - but that's what I am working with for now) as follows: IDbCommand command = new OracleCommand(); command.CommandText = "Select count(*) into :theCount from sometable where name = :SomeValue"; command.CommandType = CommandType.Text; OracleParameter parameterTheCount = new OracleParameter(":theCount", OracleType.Number); parameterTheCount.Direction = ParameterDirection.Output; command.Parameters.Add(parameterTheCount); OracleParameter parameterSomeValue = new OracleParameter(":SomeValue", OracleType.VarChar, 40); parameterSomeValue.Direction = ParameterDirection.Input; parameterSomeValue.Value = "TheValueToLookFor"; command.Parameters.Add(parameterSomeValue); command.Connection = myconnectionObject; command.ExecuteNonQuery(); int theCount = (int)parameterTheCount.Value; At which point I was hoping the count would be in the parameter parameterTheCount so that I could readily access it. I keep getting the error ORA-01036, which http://ora-01036.ora-code.com tells me means I should check the bindings in my SQL statement. Am I messing something up in the SQL statement? Am I missing something simple elsewhere? I could just use command.ExecuteScalar() as I am only getting one item, and I am probably going to end up using that, but at this point curiosity has got the better of me. What if I had two parameters I wanted back from my query (i.e. select max(ColA), min(ColB) into :max, :min ...)? Thanks.

  • Why doesn't gcc remove this check of a non-volatile variable?

    - by Thomas
    This question is mostly academic. I ask out of curiosity, not because this poses an actual problem for me. Consider the following incorrect C program. #include <signal.h> #include <stdio.h> static int running = 1; void handler(int u) { running = 0; } int main() { signal(SIGTERM, handler); while (running) ; printf("Bye!\n"); return 0; } This program is incorrect because the handler interrupts the program flow, so running can be modified at any time and should therefore be declared volatile. But let's say the programmer forgot that. gcc 4.3.3, with the -O3 flag, compiles the loop body (after one initial check of the running flag) down to the infinite loop .L7: jmp .L7 which was to be expected. Now we put something trivial inside the while loop, like: while (running) putchar('.'); And suddenly, gcc does not optimize the loop condition anymore! The loop body's assembly now looks like this (again at -O3): .L7: movq stdout(%rip), %rsi movl $46, %edi call _IO_putc movl running(%rip), %eax testl %eax, %eax jne .L7 We see that running is re-loaded from memory each time through the loop; it is not even cached in a register. Apparently gcc now thinks that the value of running could have changed. So why does gcc suddenly decide that it needs to re-check the value of running in this case?
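
    For reference, a minimal sketch of the conventional fix, assuming POSIX signal handling: declaring the flag volatile sig_atomic_t forces a re-load on every iteration in both versions of the loop. The behaviour observed with putchar is usually explained by gcc conservatively assuming that a call into code it cannot see (here _IO_putc) might modify the file-scope flag, so it re-reads it even without volatile.

    ```cpp
    #include <signal.h>
    #include <stdio.h>

    /* volatile: the value may change outside normal control flow, so the
       compiler must re-read it; sig_atomic_t: safe to write from a handler. */
    static volatile sig_atomic_t running = 1;

    static void handler(int signum)
    {
        (void)signum;  /* unused */
        running = 0;
    }

    int main(void)
    {
        signal(SIGTERM, handler);
        while (running)
            putchar('.');
        printf("Bye!\n");
        return 0;
    }
    ```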

  • Any high-level languages that can use C libraries?

    - by Isaiah
    I know this question could be in vain, but it's just out of curiosity, and I'm still very much a newb^^ Anyway, I've been loving Python for some time while learning it. My problem is obviously speed issues. I'd like to get into indie game creation, and for the near future, 2D and Pygame will work. But I'd eventually like to branch into the 3D area, and Python is really too slow to make anything 3D and professional. So I'm wondering if there has ever been work to create a high-level language able to import and use C libraries? I've looked at Genie and it seems to be able to use certain libraries, but I'm not sure to what extent. Will I be able to use it for OpenGL programming, or in a C game engine? I do know some Lisp and enjoy it a lot, but there aren't a great many libraries out there for it. Which leads to the problem: I can't stand C syntax, but C has libraries galore that I could need! And game engines like Irrlicht. Is there any language that can be used in place of C while still working with C libraries? Thanks so much, guys

  • Accessing functions in an SWF file through JavaScript [chatroulette.com]

    - by RadiantHeart
    Lately I have been interested in the code behind chatroulette.com. As you probably know, it is a peer-to-peer webcam chat service written, as I understand it, in ActionScript. What I have been wondering about is whether it's possible to extract the IP address of whomever you are currently communicating with. I have seen services that do that, but they require that you install a program that runs alongside on your computer, sniffing UDP packets. I was wondering if there was a simpler method. What I do know is that the JavaScript on the page communicates with the application via "ExternalInterface". In this area I am pretty much a novice, but according to my limited understanding you can't get information from the Flash application unless you have configured a listener for a call from JavaScript and then attached a callback to that event. Is this correct, or can you access public functions and variables directly through JavaScript? There is for example a public function like this: public function get outgoingAddress():String{ return (this.__info.outgoingAddress); } Can it be accessed directly through JavaScript? If it can't be done so easily, is it possible to decompile the .swf file, change it (add some functions), recompile it and run that instead? I am hoping someone can satisfy my curiosity here. Here are two links to a decompiled version of the SWF file, one with line numbering and one without. With line numbering: ≈ 3.5 MB. Without line numbering: ≈ 2.1 MB.

  • Double script tags in Google Analytics tracking code

    - by Tom
    This is more a curiosity question than anything else... Google instructs you to add the analytics tracking code as follows: <script type="text/javascript"> var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www."); document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> try{ var pageTracker = _gat._getTracker("UA-xxxxxx-x"); pageTracker._trackPageview(); } catch(err) {} </script> I'm wondering if some JS guru here could tell me why they're separating it into two script tags instead of sticking it all inside one. I know that the top part could be put in the header and the bottom part just before the closing body tag to ensure the page has loaded before it's tracked, but I'm wondering if there's something more to it. Anyone who'd know that would likely know how to separate the code into two tags anyway. I'm only asking as this is coming from the Goog and is being used by millions of sites... Thanks

  • ASP.Net MVC 2 - How to set up a Cancel button with client side navigation

    - by arame3333
    Thanks to a previous question I found a useful link on multiple buttons. http://weblogs.asp.net/dfindley/archive/2009/05/31/asp-net-mvc-multiple-buttons-in-the-same-form.aspx What I want to do is have a cancel button on my page, similar to this: <button name="button" type="button" onclick="document.location.href=$('#cancelUrl').attr('href')">Cancel</button> <a id="cancelUrl" href="<%: Url.Action("Index", "Home") %>" style="display:none;"></a> However, although this code works, I really want to go back to the previous page. For Web Forms I could use the JavaScript Back() or Go(-1) functions, but they relied on postbacks. I could of course hard-code the previous page and controller as I have done above. However, I am struggling to find links that explain how Url.Action works, because if I do this I also need to include an index parameter, and I am not clear how the syntax for that works. It seems odd how much coding it takes to do this. Out of curiosity, I am also wondering how you TDD client-side code like this.

  • How to access hidden template in unnamed namespace?

    - by Johannes Schaub - litb
    Here is a tricky situation, and I wonder what ways there are to solve it: namespace { template <class T> struct Template { /* ... */ }; } typedef Template<int> Template; Sadly, the Template typedef interferes with the Template template in the unnamed namespace. When you try to do Template<float> in the global scope, the compiler raises an ambiguity error between the template name and the typedef name. You don't have control over either the template name or the typedef name. Now I want to know whether it is possible to: Create an object of the typedefed type Template (i.e. Template<int>) in the global namespace. Create an object of the type Template<float> in the global namespace. You are not allowed to add anything to the unnamed namespace. Everything should be done in the global namespace. This is out of curiosity, because I was wondering what tricks there are for solving such an ambiguity. It's not a practical problem I hit during daily programming.

  • Would anybody recommend learning J/K/APL?

    - by ozan
    I came across J/K/APL a few months ago while working my way through some Project Euler problems, and was intrigued, to say the least. For every elegant-looking 20-line Python solution I produced, there'd be a gobsmacking 20-character J solution that ran in a tenth of the time. I've been keen to learn some basic J, and have made a few attempts at picking up the vocabulary, but have found the learning curve to be quite steep. To those who are familiar with these languages, would you recommend investing some time to learn one (I'm thinking J in particular)? I would do so more for the purpose of satisfying my curiosity than for career advancement or some such thing. Some personal circumstances to consider, if you care to: I love mathematics, and use it daily in my work (as a mathematician for a startup), but to be honest I don't really feel limited by the tools that I use (like Python + NumPy), so I can't use that excuse. I have no particular desire to work in the finance industry, which seems to be the main port of call for K users at least. Plus I should really learn C# as a next language, as it's the primary language where I work. So practically speaking, J almost definitely shouldn't be the next language I learn. I'm reasonably familiar with MATLAB, so using an array-based programming language wouldn't constitute a tremendous paradigm shift. Any advice from those familiar with these languages would be much appreciated.

  • Is private member hacking defined behaviour?

    - by ereOn
    Hi, let's say I have the following class: class BritneySpears { public: int getValue() { return m_value; }; private: int m_value; }; It comes from an external library (that I can't change). I obviously can't change the value of m_value, only read it. Even subclassing BritneySpears won't work. What if I define the following class: class AshtonKutcher { public: int getValue() { return m_value; }; public: int m_value; }; And then do: BritneySpears b; // Here comes the ugly hack AshtonKutcher* a = reinterpret_cast<AshtonKutcher*>(&b); a->m_value = 17; // Print out the value std::cout << b.getValue() << std::endl; I know this is bad practice. But just out of curiosity: is this guaranteed to work? Is it defined behaviour? Bonus question: Have you ever had to use such an ugly hack? Thanks!
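
    As an aside, the reinterpret_cast between two unrelated classes above is not guaranteed by the standard - it is undefined behaviour, even though it typically "works" on layout-compatible types. For the curious, below is a sketch of a well-known standards-based alternative that needs no cast at all: explicit template instantiation is exempt from access checking, so a pointer to the private member can be handed out legally. The Rob/tag names are invented for the example.

    ```cpp
    #include <iostream>

    class BritneySpears {
    public:
        int getValue() { return m_value; }
    private:
        int m_value;
    };

    // Tag type carrying the member-pointer type; get() is later found via ADL.
    struct BritneySpears_m_value {
        typedef int BritneySpears::*type;
        friend type get(BritneySpears_m_value);
    };

    template <typename Tag, typename Tag::type Member>
    struct Rob {
        friend typename Tag::type get(Tag) { return Member; }
    };

    // Explicit instantiation may name private members (access checks do not
    // apply here), which is what makes the trick legal.
    template struct Rob<BritneySpears_m_value, &BritneySpears::m_value>;

    int main() {
        BritneySpears b;
        b.*get(BritneySpears_m_value()) = 17;     // write the private member
        std::cout << b.getValue() << std::endl;   // prints 17
    }
    ```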

  • Automated Legal Processing

    - by Chris S
    Will it ever be possible to make legal systems quantifiable enough to process with computer algorithms? What technologies would have to be in place before this is possible? Are there any existing technologies that are already trying to accomplish this? Out of curiosity, I downloaded the text for laws in my local municipality, and tried applying some simple NLP tricks to extract rules from sentences. I had mixed results. Some sentences were very explicit (e.g. "Cars may not be left in the park overnight"), but other sentences seemed hopelessly vague (e.g. "The council's purpose is to ensure the well-being of the community"). I apologize if this is too open-ended a topic, but I've often wondered what society would look like if legal systems were based on less ambiguous language. Lawyers, and the legal process in general, are so expensive because they have to manually process a complex set of rules codified in ambiguous legal texts. If this system could be represented in software, this huge expense could potentially be eliminated, making the legal system more accessible for everyone.

  • Unit test project doesn't recognize the classes it was generated from

    - by DougLeary
    I have a fairly simple file-system website consisting of one aspx page and several classes in separate .cs files. Everything is on my own HD. The web app itself builds and runs fine. Out of curiosity I decided to try out Visual Studio's nifty, easy-to-use unit test feature. So I opened each class file and clicked Create Unit Tests. VS generated a test project containing a set of test classes and some other files. Easy! But when I try to build or run the test project it throws a series of build errors, one for every class: The type or namespace name 'class-name' could not be found (are you missing a using directive or an assembly reference?). Somebody asked if my test project has a reference to the original project. Well no, because the original project is a file-system website. It has no bin folder and no DLL, so there's nothing to reference as far as I can tell. I would think that since VS generated these unit tests it would generate whatever references it needs, but apparently not. Is generating unit tests for file-system web apps an undocumented no-no, or is there a magic trick to getting it to work?

  • Who likes #regions in Visual Studio?

    - by Nicholas
    Personally I can't stand region tags, but clearly they have widespread appeal for organizing code, so I want to test the temperature of the water for other MS developers' take on this idea. My personal feeling is that any sort of silly trick to simplify code only acts to encourage terrible coding behavior, like lack of cohesion, unclear intention and poor or incomplete coding standards. One programmer told me that code regions helped encourage coding standards by making it clear where another programmer should put his or her contributions. But, to be blunt, this sounds like a load of horse manure to me. If you have a standard, it is the programmer's job to understand what that standard is... you shouldn't need to define it in every single class file. And nothing is more annoying than having all of your code collapsed when you open a file. I know that Ctrl+M, L will open everything up, but then you have the hideous "hash region definition" opening and closing lines to read. They're just irritating. My most steadfast coding philosophy is that all programmers should strive to create clear, concise and cohesive code. Region tags just serve to create noise and redundant intentions. Region tags would be moot in a well-thought-out and well-intentioned class. The only place they seem to make sense to me is in automatically generated code, because you should never have to read that outside of personal curiosity.

  • Is it possible to defer member initialization to the constructor body?

    - by Kjir
    I have a class with an object as a member which doesn't have a default constructor. I'd like to initialize this member in the constructor, but it seems that in C++ I can't do that. Here is the class: #include <boost/asio.hpp> #include <boost/array.hpp> using boost::asio::ip::udp; template<class T> class udp_sock { public: udp_sock(std::string host, unsigned short port); private: boost::asio::io_service _io_service; udp::socket _sock; boost::array<T,256> _buf; }; template<class T> udp_sock<T>::udp_sock(std::string host = "localhost", unsigned short port = 50000) { udp::resolver res(_io_service); udp::resolver::query query(udp::v4(), host, "spec"); udp::endpoint ep = *res.resolve(query); ep.port(port); _sock(_io_service, ep); } The compiler tells me basically that it can't find a default constructor for udp::socket and by my research I understood that C++ implicitly initializes every member before calling the constructor. Is there any way to do it the way I wanted to do it, or is it too "Java-oriented" and not feasible in C++? I worked around the problem by defining my constructor like this: template<class T> udp_sock<T>::udp_sock(std::string host = "localhost", unsigned short port = 50000) : _sock(_io_service) { udp::resolver res(_io_service); udp::resolver::query query(udp::v4(), host, "spec"); udp::endpoint ep = *res.resolve(query); ep.port(port); _sock.bind(ep); } So my question is more out of curiosity and to better understand OOP in C++
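
    The initializer-list version above is the idiomatic answer: members are always constructed before the constructor body runs, so the body can only assign to them or call methods on them. When construction genuinely has to be deferred into the body, a common workaround is to hold the member indirectly. A minimal sketch, with a made-up Socket type standing in for udp::socket and a smart pointer doing the deferring:

    ```cpp
    #include <memory>
    #include <string>

    // Hypothetical stand-in for a type that has no default constructor.
    struct Socket {
        Socket(const std::string& host, unsigned short port)
            : host_(host), port_(port) {}
        std::string host_;
        unsigned short port_;
    };

    class udp_sock_like {
    public:
        udp_sock_like(const std::string& host, unsigned short port)
        {
            // Arbitrary work can happen first; the member is created only here.
            unsigned short resolved = (port != 0) ? port : 50000;
            _sock.reset(new Socket(host, resolved));
        }
    private:
        std::unique_ptr<Socket> _sock;  // empty until reset() in the body
    };

    int main() {
        udp_sock_like s("localhost", 0);
        (void)s;
    }
    ```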

  • What happens when we combine RAII and goto?

    - by Robert Gould
    I'm wondering, for no other purpose than pure curiosity (because no one SHOULD EVER write code like this!), how the behavior of RAII meshes with the use of goto (lovely idea, isn't it?). class Two { public: ~Two() { printf("2,"); } }; class Ghost { public: ~Ghost() { printf(" BOO! "); } }; void foo() { { Two t; printf("1,"); goto JUMP; } Ghost g; JUMP: printf("3"); } int main() { foo(); } When running the following code in VS2005 I get the following output: 1,2,3 BOO! However, I imagined, guessed, hoped that 'BOO!' wouldn't actually appear, as the Ghost should never have been instantiated (IMHO, because I don't know the actual expected behavior of this code). Does any guru out there know what's up? I just realized that if I add an explicit constructor to Ghost, the code doesn't compile... class Ghost { public: Ghost() { printf(" HAHAHA! "); } ~Ghost() { printf(" BOO! "); } }; Ah, the mystery ...

  • Five unique, random numbers from a subset

    - by tau
    I know similar questions come up a lot and there's probably no definitive answer, but I want to generate five unique random numbers from a subset of numbers that is potentially infinite (maybe 0-20, or 0-1,000,000). The only catch is that I don't want to have to run while loops or fill an array. My current method is to simply generate five random numbers from a subset minus the last five numbers. If any of the numbers match each other, then they go to their respective place at the end of the subset. So if the fourth number matches any other number, it will be set to the 4th-from-last number. Does anyone have a method that is "random enough" and doesn't involve costly loops or arrays? Please keep in mind this is a curiosity, not some mission-critical problem. I would appreciate it if everyone didn't post "why are you having this problem?" answers. I am just looking for ideas. Thanks a lot!
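
    One classic way to draw k distinct values from a huge range without retry loops and without materialising the whole range is Robert Floyd's sampling algorithm: exactly k random draws, and duplicates are impossible by construction. A sketch, written in C++ for concreteness since the question doesn't name a language:

    ```cpp
    #include <iostream>
    #include <random>
    #include <set>

    // Robert Floyd's algorithm: k distinct values drawn from [0, n) using
    // exactly k calls to the generator and no rejection/retry loop.
    std::set<long> floyd_sample(long n, int k, std::mt19937& gen)
    {
        std::set<long> chosen;
        for (long j = n - k; j < n; ++j) {
            std::uniform_int_distribution<long> dist(0, j);
            long t = dist(gen);
            // If t was already picked, j itself cannot have been (it exceeds
            // every previous upper bound), so inserting j keeps the sample
            // duplicate-free while preserving uniformity.
            if (!chosen.insert(t).second)
                chosen.insert(j);
        }
        return chosen;
    }

    int main()
    {
        std::mt19937 gen(std::random_device{}());
        for (long v : floyd_sample(1000000, 5, gen))
            std::cout << v << ' ';
        std::cout << '\n';
    }
    ```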

  • Initial capacity of collection types, i.e. Dictionary, List

    - by Neil N
    Certain collection types in .Net have an optional "Initial Capacity" constructor param. i.e. Dictionary<string, string> something = new Dictionary<string,string>(20); List<string> anything = new List<string>(50); I can't seem to find what the default initial capacity is for these objects on MSDN. If I know I will only be storing 12 or so items in a dictionary, doesn't it make sense to set the initial capacity to something like 20? My reasoning is, assuming the capacity grows like it does for a StringBuilder, which doubles each time the capacity is hit, and each re-allocation is costly, why not pre-set the size to something you know will hold your data, with some extra room just in case? If the initial capacity is 100, and I know I will only need a dozen or so, it seems as though the rest of that allocated RAM is allocated for nothing. Please spare me the "premature optimization" spiel for the O(n^n)th time. I know it won't make my apps any faster or save any meaningful amount of memory; this is mostly out of curiosity.

  • Does .Net use Device Dependent or Device Independent Bitmaps?

    - by Brian
    When loading an image into memory, does .Net use DDBs, DIBs, or something else entirely? If possible, please cite your sources. I'm wondering because we currently have a classic ASP application that uses a 3rd-party component to load images and that occasionally produces a “Not enough storage is available to process this command.” error. The error is very inconsistent but tends to happen on larger images (not always, but often). After resetting IIS, processing the same file again typically works just fine. After much research I have found that DDBs tend to have this problem when processing large images because they work out of video memory. Considering that we are running on a web server with an integrated video card and limited shared memory, this could certainly be our problem. We are in the early stages of converting our app to .Net, and I am wondering if using .Net for this might be a viable alternative to our current method, which is why I am asking the question. Any advice is welcome :) but out of curiosity if nothing else, I am really hoping for an answer to the question: does .Net use DDBs or DIBs?

  • Are C functions declared in <c____> headers guaranteed to be in the global namespace as well as std?

    - by Evan Teran
    So this is something that I've always wondered about but was never quite sure of. It is strictly a matter of curiosity, not a real problem. As far as I understand, when you do something like #include <cstdlib>, everything (except macros, of course) is declared in the std:: namespace. Every implementation that I've ever seen does this by doing something like the following: #include <stdlib.h> namespace std { using ::abort; // etc.... } Which of course has the effect of things being in both the global namespace and std. Is this behavior guaranteed? Or is it possible that an implementation could put these things in std but not in the global namespace? The only way I can think of to do that would be to have your libstdc++ implement every C function itself, placing them in std directly instead of just including the existing libc headers (because there is no mechanism to remove something from a namespace). Which is of course a lot of effort for little to no benefit. The essence of my question is: is the following program strictly conforming and guaranteed to work? #include <cstdio> int main() { ::printf("hello world\n"); }

  • Efficient way to maintain a sorted list of access counts in Python

    - by David
    Let's say I have a list of objects. (All together now: "I have a list of objects.") In the web application I'm writing, each time a request comes in, I pick out up to one of these objects according to unspecified criteria and use it to handle the request. Basically like this: def handle_request(req): for h in handlers: if h.handles(req): return h return None Assuming the order of the objects in the list is unimportant, I can cut down on unnecessary iterations by keeping the list sorted such that the most frequently used (or perhaps most recently used) objects are at the front. I know this isn't something to be concerned about - it'll make only a miniscule, undetectable difference in the app's execution time - but debugging the rest of the code is driving me crazy and I need a distraction :) so I'm asking out of curiosity: what is the most efficient way to maintain the list in sorted order, descending, by the number of times each handler is chosen? The obvious solution is to make handlers a list of (count, handler) pairs, and each time a handler is chosen, increment the count and resort the list. def handle_request(req): for h in handlers[:]: if h[1].handles(req): h[0] += 1 handlers.sort(reverse=True) return h[1] return None But since there's only ever going to be at most one element out of order, and I know which one it is, it seems like some sort of optimization should be possible. Is there something in the standard library, perhaps, that is especially well-suited to this task? Or some other data structure? (Even if it's not implemented in Python) Or should/could I be doing something completely different?
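
    Since only the handler that was just chosen can be out of order, a full re-sort is never needed: one pass of neighbour swaps bubbles that single entry back into place, in time proportional to how far it moves. A sketch of the idea, in C++ for concreteness; the same swap loop works on a Python list of [count, handler] pairs:

    ```cpp
    #include <cstddef>
    #include <utility>
    #include <vector>

    // handlers is kept sorted by count, descending.  After bumping the matched
    // entry's count, bubble it toward the front until order is restored;
    // nothing else can be out of place, so no full sort is required.
    template <class Handler, class Matches>
    Handler* pick_handler(std::vector<std::pair<long, Handler> >& handlers,
                          Matches matches)
    {
        for (std::size_t i = 0; i < handlers.size(); ++i) {
            if (matches(handlers[i].second)) {
                ++handlers[i].first;
                while (i > 0 && handlers[i - 1].first < handlers[i].first) {
                    std::swap(handlers[i - 1], handlers[i]);
                    --i;
                }
                return &handlers[i].second;
            }
        }
        return nullptr;  // no handler matched
    }
    ```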

  • Preprocessor "macro function" vs. function pointer - best practice?

    - by Dustin
    I recently started a small personal project (RGB value to BGR value conversion program) in C, and I realised that a function that converts from RGB to BGR can not only perform the conversion but also the inversion. Obviously that means I don't really need two functions rgb2bgr and bgr2rgb. However, does it matter whether I use a function pointer instead of a macro? For example: int rgb2bgr (const int rgb); /* * Should I do this because it allows the compiler to issue * appropriate error messages using the proper function name, * not to mention possible debugging benefits? */ int (*bgr2rgb) (const int bgr) = rgb2bgr; /* * Or should I do this since it is merely a convenience * and they're really the same function anyway? */ #define bgr2rgb(bgr) (rgb2bgr (bgr)) I'm not necessarily looking for a change in execution efficiency as it's more of a subjective question out of curiosity. I am well aware of the fact that type safety is neither lost nor gained using either method. Would the function pointer merely be a convenience or are there more practical benefits to be gained of which I am unaware?
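
    A third option the question doesn't mention is a thin named wrapper function (inline where available): unlike the macro it is a real, type-checked symbol that shows up by name in diagnostics and debuggers, and unlike the function pointer it cannot be reassigned and adds no indirection. A sketch follows; the byte-swapping body is only an assumed example of what rgb2bgr might do (0x00RRGGBB packing):

    ```cpp
    /* Assumed conversion: swap the R and B bytes of a 0x00RRGGBB value. */
    int rgb2bgr(const int rgb)
    {
        return ((rgb & 0xFF) << 16) | (rgb & 0xFF00) | ((rgb >> 16) & 0xFF);
    }

    /* The conversion is its own inverse, so the "other" direction is just a
       named wrapper: type-checked, zero-cost, and visible by name when
       debugging, unlike a macro or a reassignable function pointer. */
    static inline int bgr2rgb(const int bgr)
    {
        return rgb2bgr(bgr);
    }
    ```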

  • C#: Non-constructed generics as properties (eg. List<T>)

    - by Dav
    The Problem It's something I came across a while back and was able to work around somehow. But now it has come back, feeding on my curiosity - and I'd love to have a definite answer. Basically, I have a generic dgv BaseGridView<T> : DataGridView where T : class. Constructed types based on the BaseGridView (such as InvoiceGridView : BaseGridView<Invoice>) are later used in the application to display different business objects using the shared functionality provided by BaseGridView (like virtual mode, buttons, etc.). It now became necessary to create a user control that references those constructed types to control some of the shared functionality (e.g. filtering) from BaseGridView. I was therefore hoping to create a public property on the user control that would enable me to attach it to any BaseGridView in Designer/code: public BaseGridView<T> MyGridView { get; set; }. The trouble is, it doesn't work :-) When compiled, I get the following message: The type or namespace name 'T' could not be found (are you missing a using directive or an assembly reference?) Solutions? I realise I could extract the shared functionality to an interface, mark BaseGridView as implementing that interface, and then refer to the created interface in my user control. But I'm curious if there exists some arcane C# command/syntax that would help me achieve what I want - without polluting my solution with an interface I don't really need :-)

  • My project is no longer used - how should I feel?

    - by flybywire
    For the last two years I have been developing and supporting an important project for a big customer. The project included mining data from the customer's existing systems, processing it, and displaying and updating it on the customer's public home page. The project was defined as crucial by the customer, and I was paid good money and flown at the customer's expense to meet key employees. Some months ago, when the project was finished and in maintenance mode, I informed the customer that I was no longer interested in doing it, as I had a new opportunity that would not be compatible with my existing customer. I was paid to train one of their employees, flown to meet him, and made sure everything worked and that he could be safely left in charge of the project. We finished on good terms after I complied with all my obligations, and they paid me everything they owed me. Some days ago, just out of curiosity, I went to their website to see how the data was continuing to be updated, and much to my dismay I discovered that the day after my contract finished my system was "turned off" and it ceased to feed data to the public website. Let's be clear: there is no issue of money or a broken contract here. They are fully within their rights to do whatever they want with my software. But it is an issue of a broken "programmer's ego". Should I feel bad about it? (I do.) Should I care, and check with my customer whether they need some help? Or is it none of my business?

  • Best way to implement plugin framework - are DLLs the only way (C/C++ project)?

    - by Microkernel
    Introduction: I am currently developing document classifier software in C/C++, and I will be using a naive Bayes model for classification. But I wanted users to be able to use any algorithm they want (or that I want in the future), hence I decided to separate the algorithm part of the architecture into a plugin that is attached to the main app at start-up. Hence any user can write his own algorithm as a plugin and use it with my app. Problem Statement: The way I intend to develop this is to have each of the algorithms the user wants to use built as a DLL file and put into a specific directory. At start-up, my app will search for all the DLLs in that directory and load them. My Questions: (1) What if malicious code is packaged as a DLL (one that exposes the same functions mandated by the plugin framework) and put into my plugins directory? In that case, my app will think it is a plugin, pick it up and call its functions, so the malicious code can easily bring my entire app down (in the worst case it could turn my app into a malicious-code launcher!!!). (2) Is using DLLs the only way available to implement the plugin design pattern? (Not only for fear of a malicious plugin; it's a generic question out of curiosity :) ) (3) I think a lot of software is written with a plugin model for extensibility; if so, how does it defend against such attacks? (4) In general, what do you think about my decision to use a plugin model for extensibility (do you think I should look at any other alternatives?) Thank you -MicroKernel :)
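
    On question (1): loading a DLL or shared object already executes code from that file (its initializers run on load), so no in-process check can make an untrusted plugin safe; the usual mitigations are restricting write access to the plugin directory, code signing, or running plugins out-of-process. On (2): dynamic libraries are the most common mechanism in C/C++, but plugins can also be separate processes talking over a pipe or socket, or scripts run by an embedded interpreter. Below is a minimal POSIX sketch of the DLL route, assuming a hypothetical create_plugin factory symbol that every plugin exports (LoadLibrary/GetProcAddress are the Windows equivalents):

    ```cpp
    #include <dlfcn.h>
    #include <cstdio>

    // Hypothetical interface each classifier plugin is expected to provide.
    struct ClassifierPlugin {
        const char* name;
        int (*classify)(const char* document);
    };

    typedef ClassifierPlugin* (*create_plugin_fn)();

    // Load one plugin and resolve its factory symbol (link with -ldl on Linux).
    // Note: merely opening the library already runs its static initializers,
    // so this offers no protection against a malicious file.
    ClassifierPlugin* load_plugin(const char* path)
    {
        void* handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
        if (!handle) {
            std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return nullptr;
        }
        create_plugin_fn create =
            reinterpret_cast<create_plugin_fn>(dlsym(handle, "create_plugin"));
        if (!create) {
            std::fprintf(stderr, "create_plugin not found: %s\n", dlerror());
            dlclose(handle);
            return nullptr;
        }
        return create();
    }
    ```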
