Search Results

Search found 1466 results on 59 pages for 'reliable'.

Page 50/59 | < Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >

  • How useful is Turing completeness? Are neural nets Turing complete?

    - by Albert
    While reading some papers about the Turing completeness of recurrent neural nets (for example: Turing computability with neural nets, Hava T. Siegelmann and Eduardo D. Sontag, 1991), I got the feeling that the proof given there was not really that practical. For example, the referenced paper needs a neural network whose neuron activity must be of infinite exactness (to reliably represent any rational number). Other proofs need a neural network of infinite size. Clearly, that is not really practical. But I started to wonder whether it makes sense at all to ask for Turing completeness. By the strict definition, no computer system nowadays is Turing complete, because none of them is able to simulate the infinite tape. Interestingly, programming language specifications most often leave open whether the languages are Turing complete or not. It all boils down to the question of whether a program will always be able to allocate more memory and whether the function call stack size is infinite. Most specifications don't really specify this. Of course all available implementations are limited here, so all practical implementations of programming languages are not Turing complete. So, what you can say is that all computer systems are only as powerful as finite state machines, and no more. And that brings me to the question: How useful is the term Turing complete at all? And back to neural nets: for any practical implementation of a neural net (including our own brain), they will not be able to represent an infinite number of states, i.e. by the strict definition of Turing completeness, they are not Turing complete. So does the question of whether neural nets are Turing complete make sense at all? The question of whether they are as powerful as finite state machines was answered much earlier (in 1954 by Minsky; the answer, of course: yes) and also seems easier to answer. I.e., at least in theory, that was already the proof that they are as powerful as any computer.

    Read the article

  • Word characteristics tags

    - by theBlinker
    I want to do a riddle AI chatbot for my AI class. So I figured the input to the chatbot would be something like: "It is blue, and it is up, but it is not the ceiling" Translation: <Object X> <blue> <up> <!ceiling> </Object X> (Answer: sky?) So the input is a set of characteristics (existing / not existing in the object), and the output is a matched, most likely object. The domain will be limited to a number of objects, and I could input all the attributes myself, but I was thinking: How could I programmatically build a database of characteristics for a word? Is there such a database available? How could I tag a word, and how could I programmatically find all its attributes? I was thinking of crawling Wikipedia, or some forum, but I can't see that building a reliable word-tag database. Any ideas on how I could achieve such a thing? Any ideas on some literature on the subject? Thank you

    Read the article

  • Trying to set PC clock programmatically just before Daylight Saving Time ends

    - by Moe Sisko
    To reproduce:

    1) Add the Microsoft.VisualBasic assembly to your project references.

    2) Change the PC timezone to (GMT+10:00) Canberra, Melbourne, Sydney. Ensure the PC is set to automatically adjust the clock for daylight saving time. (For this timezone, daylight saving time ends at 3am on 4 Apr 2010.)

    3) Add the following code:

        public void SetNewDateTime(DateTime dt)
        {
            Microsoft.VisualBasic.DateAndTime.Today = dt;     // ignores time component
            Microsoft.VisualBasic.DateAndTime.TimeOfDay = dt; // ignores date component
        }

        private void button1_Click(object sender, EventArgs e)
        {
            DateTime dt = new DateTime(2010, 4, 5, 5, 0, 0); // XX
            SetNewDateTime(dt);                              // XX
            System.Threading.Thread.Sleep(500);
            DateTime dt2 = new DateTime(2010, 4, 4, 1, 0, 0);
            SetNewDateTime(dt2);
        }

    4) When button1 is clicked, the PC clock eventually shows 2am, whereas 1am was expected. (If the code marked "XX" is removed, the clock sometimes shows the correct time of 1am.) Any idea what is happening? (Or is there a more reliable way of setting the PC clock from C# code?) TIA.
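
    For what it's worth, the alternative I'm considering is setting the date and time atomically through the Win32 SetLocalTime API via P/Invoke, instead of going through Microsoft.VisualBasic.DateAndTime in two steps, so that Windows applies the DST rules for the target moment in a single call. A minimal sketch (the class name is mine, and error handling is deliberately thin):

        using System;
        using System.Runtime.InteropServices;

        static class ClockSetter
        {
            [StructLayout(LayoutKind.Sequential)]
            private struct SYSTEMTIME
            {
                public ushort wYear, wMonth, wDayOfWeek, wDay,
                              wHour, wMinute, wSecond, wMilliseconds;
            }

            [DllImport("kernel32.dll", SetLastError = true)]
            private static extern bool SetLocalTime(ref SYSTEMTIME st);

            // Sets date and time in a single call, so Windows applies the
            // DST rules for the target moment itself.
            public static void Set(DateTime dt)
            {
                var st = new SYSTEMTIME
                {
                    wYear = (ushort)dt.Year,
                    wMonth = (ushort)dt.Month,
                    wDay = (ushort)dt.Day,
                    wHour = (ushort)dt.Hour,
                    wMinute = (ushort)dt.Minute,
                    wSecond = (ushort)dt.Second
                };
                if (!SetLocalTime(ref st))
                    throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());
            }
        }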

    Read the article

  • How to detect a WCF session crash before calling a contract method?

    - by brain_pusher
    I am using session mode for my WCF service. The problem is the following: if the session is broken and no longer exists, the client can't know this before calling a contract. For example, if the service has been restarted, the client's session id is invalid, because that session has been closed on the server side. I check the channel state before calling the contract and its value is CommunicationState.Opened even if the session is already broken. So, when I call the contract after this check, I get a CommunicationException with this message: "The remote endpoint no longer recognizes this sequence. This is most likely due to an abort on the remote endpoint. The value of wsrm:Identifier is not a known Sequence identifier. The reliable session was faulted." Is there any workaround? I need a way to get the actual session state before calling a contract, so that I can restore the session without getting an exception. P.S. The CommunicationException type is general, so I can't detect a session crash by catching this exception. P.P.S. I have asked a similar question here, but in that case I didn't know the reason; now I don't know how to evade it.
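
    The best workaround I have sketched so far is a wrapper that treats any CommunicationException as a dead session, aborts the channel, and retries once on a fresh one. It cannot tell a session crash apart from other communication faults (which is exactly my problem), but it keeps callers working. The class and its names are hypothetical:

        using System;
        using System.ServiceModel;

        class ResilientProxy<TContract>
        {
            private readonly ChannelFactory<TContract> factory;
            private TContract channel;

            public ResilientProxy(ChannelFactory<TContract> factory)
            {
                this.factory = factory;
                this.channel = factory.CreateChannel();
            }

            // Runs a contract call; if the reliable session has faulted,
            // aborts the dead channel, opens a fresh one and retries once.
            public TResult Call<TResult>(Func<TContract, TResult> invocation)
            {
                try
                {
                    return invocation(channel);
                }
                catch (CommunicationException)
                {
                    ((ICommunicationObject)channel).Abort();
                    channel = factory.CreateChannel();
                    return invocation(channel);
                }
            }
        }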

    Read the article

  • Looking for fast, minimal, preferably free disc cloning software [closed]

    - by Dave
    We have to test our application's installation and functionality on many Windows operating system versions and languages (XP, Vista, Win7; English, Spanish, Portuguese, etc.; 32-bit & 64-bit.) While we can do much of this in virtual machines, we have noticed that VMs sometimes hide problems, or raise false bugs. So, we need to do "bare metal" OS installation for much of our testing. I have been using Acronis True Image for the past year, and am not impressed. It often gives random errors which require a reboot, and is really slow. For example, when trying to restore an image, it goes through a "Locking partition" cycle about three times (once after you click OK on each step of the wizard), each of which can take 5 minutes to complete. This all happens BEFORE it actually starts the image copy, which is sometimes quick (3-5 minutes), sometimes long (hours). All of our images are roughly the same size, so that is not the cause. So, anyway, I'm looking to switch to something else. I only need very basic functionality: just creating images of entire discs, and then restoring those images onto the exact same hard drive at a later date. That's it. I'm not opposed to paying for a good piece of software, but if there is something free out there that does the job well, that would be preferred. The OS on which the imaging software would run is Windows Vista, but bootable media (into a Linux flavor) would be fine also, as long as it's quick to use and reliable. Recommendations? (Also, moderators, if this should be a CW, I'll be happy to mark it as such; I'm unclear about the rules there.)

    Read the article

  • Case Insensitive Ternary Search Tree

    - by Yan Cheng CHEOK
    I have been using a Ternary Search Tree for a while, as the data structure behind an auto-complete drop-down combo box. This means that when the user types "fo", the drop-down combo box will display

        foo
        food
        football

    The problem is that my current use of the Ternary Search Tree is case sensitive. My implementation is as follows (see "My Ternary Search Tree code"). It has been used in the real world for around 1+ years, hence I consider it quite reliable. However, I am looking for a case-insensitive Ternary Search Tree, which means that when I type "fo", the drop-down combo box will show me

        foO
        Food
        fooTBall

    Here are some key interfaces for the TST, where I hope the new case-insensitive TST may have a similar interface too:

        /**
         * Stores value in the TernarySearchTree. The value may be retrieved using key.
         * @param key A string that indexes the object to be stored.
         * @param value The object to be stored in the tree.
         */
        public void put(String key, E value) {
            getOrCreateNode(key).data = value;
        }

        /**
         * Retrieve the object indexed by key.
         * @param key A String index.
         * @return Object The object retrieved from the TernarySearchTree.
         */
        public E get(String key) {
            TSTNode<E> node = getNode(key);
            if (node == null) return null;
            return node.data;
        }

    An example of usage is as follows; TSTSearchEngine uses TernarySearchTree as its core backbone (see "Example usage of Ternary Search Tree"):

        // There is a stock named microsoft and one named MICROChip inside the stocks ArrayList.
        TSTSearchEngine<Stock> engine = new TSTSearchEngine<Stock>(stocks);
        // I wish it would return microsoft and MICROChip. Currently, it just returns microsoft.
        List<Stock> results = engine.searchAll("micro");

    Read the article

  • How to do cleanup reliably in Python?

    - by Cheery
    I have some ctypes bindings, and for each body.New I should call body.Free. The library I'm binding doesn't have its allocation routines insulated from the rest of the code (they can be called just about anywhere), and to use a couple of useful features I need to make cyclic references. I think it would be solved if I could find a reliable way to hook a destructor to an object. (Weakrefs would help if they gave me the callback just before the data is dropped.) So obviously this code megafails when I put in velocity_func:

        class Body(object):
            def __init__(self, mass, inertia):
                self._body = body.New(mass, inertia)

            def __del__(self):
                print '__del__ %r' % self
                if body:
                    body.Free(self._body)
            ...
            def set_velocity_func(self, func):
                self._body.contents.velocity_func = ctypes_wrapping(func)

    I also tried to solve it through weakrefs; with those, things seem to get just worse, only far more unpredictable. Even if I don't put in the velocity_func, cycles will appear, at least when I do this:

        class Toy(object):
            def __init__(self, body):
                self.body.owner = self
        ...
        def collision(a, b, contacts):
            whatever(a.body.owner)

    So how do I make sure the Structures get garbage collected, even though they are allocated/freed by the shared library? There's a repository if you are interested in more details: http://bitbucket.org/cheery/ctypes-chipmunk/

    Read the article

  • What techniques can be used to detect so-called "black holes" (a spider trap) when creating a web crawler?

    - by Tom
    When creating a web crawler, you have to design some kind of system that gathers links and adds them to a queue. Some, if not most, of these links will be dynamic: they appear to be different, but do not add any value, as they are specifically created to fool crawlers. An example: We tell our crawler to crawl the domain evil.com by entering an initial lookup URL. Let's assume we let it crawl the front page initially, evil.com/index. The returned HTML will contain several "unique" links: evil.com/somePageOne, evil.com/somePageTwo, evil.com/somePageThree. The crawler will add these to the buffer of uncrawled URLs. When somePageOne is being crawled, the crawler receives more URLs: evil.com/someSubPageOne, evil.com/someSubPageTwo. These appear to be unique, and so they are. They are unique in the sense that the returned content is different from previous pages and that the URL is new to the crawler; however, it appears that this is only because the developer has made a "loop trap" or "black hole". The crawler will add this new sub page, and the sub page will have another sub page, which will also be added. This process can go on infinitely. The content of each page is unique, but totally useless (it is randomly generated text, or text pulled from a random source). Our crawler will keep finding new pages, which we actually are not interested in. These loop traps are very difficult to find, and if your crawler does not have anything in place to prevent them, it will get stuck on a certain domain indefinitely. My question is, what techniques can be used to detect so-called black holes? One of the most common answers I have heard is the introduction of a limit on the number of pages to be crawled. However, I cannot see how this can be a reliable technique when you do not know what kind of site is to be crawled. A legitimate site, like Wikipedia, can have hundreds of thousands of pages. Such a limit could return false positives for these kinds of sites. Any feedback is appreciated. Thanks.
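
    For what it's worth, the combination I have been toying with is a per-host cap on near-duplicate content plus a path-depth limit, so that long chains of structurally near-identical pages stop being enqueued. A rough sketch, with all thresholds and names as placeholders:

        using System;
        using System.Collections.Generic;

        class TrapHeuristics
        {
            private const int MaxPathDepth = 12;  // tune per crawl
            private readonly Dictionary<string, int> duplicatesPerHost = new Dictionary<string, int>();
            private readonly HashSet<ulong> seenFingerprints = new HashSet<ulong>();

            // Returns false when a URL looks like part of a loop trap.
            public bool ShouldEnqueue(Uri url, string pageText)
            {
                if (url.Segments.Length > MaxPathDepth)
                    return false;

                ulong fp = Fingerprint(pageText);
                if (!seenFingerprints.Add(fp))
                {
                    int n;
                    duplicatesPerHost.TryGetValue(url.Host, out n);
                    duplicatesPerHost[url.Host] = ++n;
                    if (n > 100)  // too many near-identical pages: likely a black hole
                        return false;
                }
                return true;
            }

            // Crude FNV-1a hash over whitespace-normalized text; a real crawler
            // would use shingling/simhash to catch near-duplicates, not just exact ones.
            private static ulong Fingerprint(string text)
            {
                ulong h = 14695981039346656037UL;
                foreach (char c in text)
                {
                    if (char.IsWhiteSpace(c)) continue;
                    h = (h ^ c) * 1099511628211UL;
                }
                return h;
            }
        }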

    Read the article

  • Getting a lightweight installation of Java Eclipse

    - by liam
    Having dealt with yet another stupid Eclipse problem, I want to try to get the lightest, most minimal Eclipse installation possible. To be clear, I use Eclipse for two things: editing Java and debugging Java. Everything else I do through emacs/zsh (editing jsp/xml/js, file management, svn check-in, etc). I have not found any aspect of working in Eclipse on those tasks to be efficient or even reliable, so I do not want plug-ins related to them. From the eclipse.org site, this is the lightest install of Eclipse that they have, and I don't want any of those things (bugzilla, mylyn, cvs, xml_ui); I have actually had problems with each of them even though I do not use them. So what is the minimal build I can get that will: 1) ignore svn metadata, 2) include the full-featured editor (intellisense and type-finding), and 3) include the full-featured debugger (standard eclipse/jdk), without any extra plug-ins, platforms, or "integrations" with other platforms? Specifically, I don't want to deal with plug-ins relating to: Maven, JSP validation, JavaScript editing or validation, CVS or SVN, Mylyn, Spring or Hibernate "natures", app servers like a bundled tomcat/glassfish/etc, J2EE tools, or anything of the like. I do primarily spring/hibernate/web-mvc apps, and have never dealt with an Eclipse plug-in that handles any of it gracefully. I can work effectively with my own toolset, but Eclipse extensions do nothing but get in the way. I have worked with plain Eclipse up to Ganymede, MyEclipse (up to 7.5), and the latest version of Spring-SourceTools, and find that they are all saddled with buggy, useless plug-ins (though the combination is always different). Switching to netbeans/intellij is not an option, and my teammates work with svn-controlled .class/.project files, so it pretty much has to be Eclipse. Does anyone have any good advice on how I can save a few grey hairs?

    Read the article

  • Download Current WSJ.com Prime Rate

    - by Registered User
    I need to automatically download the current Wall Street Journal Prime Rate and load the data into my database. What is the best method for downloading this data automatically? I have come up with three possible solutions:

    1) Scrape an HTML web page from WSJ.
    2) Parse an RSS news feed from WSJ.
    3) Use some API that I haven't found from WSJ.

    Regarding solution 1: although I don't like it since it could easily break, it's the only one that I have worked out from end to end. It appears I can scrape the page with a WebRequest / WebResponse and read the text in this markup:

        <tr>
          <td style="text-align:left" class="colhead">&nbsp;</td>
          <td class="colhead">Latest</td>
          <td class="colhead">Wk ago</td>
          <td class="colhead">High</td>
          <td class="colhead">Low</td>
        </tr>
        <tr>
          <td class="text">U.S.</td>
          <td style="font-weight:bold;" class="num">3.25</td>
          <td class="num">3.25</td>
          <td class="num">3.25</td>
          <td class="num" style="border-right:0px">3.25</td>
        </tr>

    Regarding solution 2: although I can implement an RSS reader solution, I don't see a way to reliably anticipate the verbiage used for changes in the Prime Rate. Therefore, I don't think this is as safe or reliable a way to get the data as solution 1. Regarding solution 3: I haven't found any published APIs for checking money rates like the Prime Rate. If anyone knows of a web service or other API for checking money rates, please let me know.
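
    To make solution 1 concrete, here is the end-to-end sketch I currently have: fetch the page and pull the bold "Latest" cell that follows the "U.S." label. The URL is a placeholder, and the regex is tied to the markup quoted above, so it will break whenever WSJ changes the layout:

        using System;
        using System.IO;
        using System.Net;
        using System.Text.RegularExpressions;

        class PrimeRateScraper
        {
            public static decimal FetchUsPrimeRate(string url)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                using (var response = request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    string html = reader.ReadToEnd();
                    // The "U.S." cell, then the next "num" cell (the bold "Latest" value).
                    var match = Regex.Match(html,
                        @"U\.S\.</td>\s*<td[^>]*class=""num""[^>]*>([\d.]+)</td>",
                        RegexOptions.Singleline);
                    if (!match.Success)
                        throw new InvalidOperationException("Prime rate not found; page layout changed?");
                    return decimal.Parse(match.Groups[1].Value);
                }
            }
        }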

    Read the article

  • Is it safe to use random Unicode for complex delimiter sequences in strings?

    - by ccomet
    Question: In terms of program stability and ensuring that the system will actually operate, how safe is it to use chars like ¦, § or ‡ for complex delimiter sequences in strings? Can I reliably assume that I won't run into any issues with a program reading these incorrectly? I am working in a system, using C# code, in which I have to store a fairly complex set of information within a single string. The readability of this string is only necessary on the computer side; end-users should only ever see the information after it has been parsed by the appropriate methods. Because some of the data in these strings will be collections of variable size, I use different delimiters to identify which parts of the string correspond to a certain tier of organization. There are enough cases that the standard set of ;, |, and similar ilk has been exhausted. I considered two-char delimiters, like ;# or ;|, but I felt that would be very inefficient. There probably isn't that large a performance difference in storing one char versus two chars, but when I have the option of picking the smaller option, it just feels wrong to pick the larger one. So finally, I considered using characters like the double dagger and section sign. They only take up one char, and they are definitely not going to show up in the actual text that I'll be storing, so they won't be confused for anything. But character encoding is finicky. While the visibility to the end user is meaningless (since they, in fact, won't see it), I recently became concerned about how the programs in the system will read it. The string is stored in one database, while a separate program is responsible for both encoding and decoding the string into different object types for the rest of the application to work with. And if something is expected to be written one way, but is possibly written another, then maybe the whole system will fail, and I can't really let that happen. So is it safe to use these kinds of chars for background delimiters?
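
    For comparison, a minimal sketch of the other option I keep circling back to: the ASCII separator control characters (U+001C through U+001F) were designed precisely for nested delimiting, are single chars, cannot be typed by end users, and are encoded identically in UTF-8 and UTF-16, which sidesteps the encoding worry entirely:

        using System;

        class DelimiterDemo
        {
            // ASCII group/record/unit separator control characters: single
            // chars, never present in user-entered text, encoding-stable.
            const char Group = '\u001D';
            const char Record = '\u001E';
            const char Unit = '\u001F';

            static void Main()
            {
                string stored = "alpha" + Unit + "beta" + Record + "gamma" + Unit + "delta";
                foreach (string rec in stored.Split(Record))
                    Console.WriteLine(string.Join(" | ", rec.Split(Unit)));
            }
        }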

    Read the article

  • Technical choices in unmarshaling hash-consed data

    - by Pascal Cuoq
    There seems to be quite a bit of folklore knowledge floating about in restricted circles about the pitfalls of hash-consing combined with marshaling-unmarshaling of data. I am looking for citable references to these tidbits. For instance, someone once pointed me to the aterm library and mentioned that the authors had clearly thought about this and that the representation on disk was bottom-up (children of a node come before the node itself in the data stream). This is indeed the right way to do things when you need to re-share each node (with a possibly identical node already in memory). This re-sharing pass needs to be done bottom-up, so the unmarshaling itself might as well be, too, so that it's possible to do everything in a single pass. I am in the process of describing difficulties encountered in our own context, and the solutions we found. I would appreciate any citable reference to the aforementioned kind of folklore knowledge. Some people have obviously encountered the problems before (the aterm library is only one example), but I didn't find anything in writing. Even the little piece of information I have about aterm is hearsay. I am not worried that it's unreliable (you can't make this up), but "personal communication" and "look how it's done in the source code" are considered poor form in citations. I have enough references on hash-consing alone. I am only interested in references where it interferes with other aspects of programming, such as marshaling or distribution.

    Read the article

  • Ditching Django's models for Ajax/Web Services

    - by Igor Ganapolsky
    Recently I came across a problem at work that made me rethink Django's models. The app I am developing resides on a Linux server. It's a simple model/view/controller app that involves user interaction and updating data in the database. The problem is that this data resides in an MS SQL database on a Windows machine. So in order to use Django's models, I would have to leverage an ODBC driver on Linux, and then use a Python add-on like pyodbc. Well, let me tell you, setting up a reliable and functional ODBC connection on Linux is no easy feat! So much so, that I spent several hours maneuvering this on my CentOS box with no luck, and was left with frustration and lots of dumb system errors. In the meantime I have a deadline to meet, and suddenly the very agile and rapid Django application is a roadblock rather than a pleasure to work with. Someone on my team suggested writing this app in .NET. But there are a few problems with that: it won't be deployable on a Linux machine, and I won't be able to work on it since I don't know ASP.NET. Then a much better suggestion was made: keep the app in Django, but instead of using models, make straight-up Ajax/web service calls from the template. And then it dawned on me: what a great idea. Django's models seem like a nuisance and a hindrance in this case, and I can just have someone else write .NET services on their side, which I can call from my template. As a result my app will be leaner and more compact. So, I was wondering if you guys ever came across a similar dilemma and what you decided to do about it.

    Read the article

  • Properly format postal addresses with line breaks [google maps]

    - by munchybunch
    Using V3 of the Google Maps API, is there any reliable way to format addresses with line breaks? By this, I mean that something like "1600 Amphitheatre Parkway Mountain View, CA 94043" should be formatted as

        1600 Amphitheatre Parkway
        Mountain View, CA 94043

    Looking through the response object from geocoding, there is an address_components array that has, for the above address, 8 components (not all of which are used for the address):

        0: { long_name: "1600", short_name: "1600", types: ["street_number"] }
        1: { long_name: "Amphitheatre Pkwy", short_name: "Amphitheatre Pkwy", types: ["route"] }
        2: { long_name: "Mountain View", short_name: "Mountain View", types: ["locality", "political"] }
        3: { long_name: "San Jose", short_name: "San Jose", types: ["administrative_area_level_3", "political"] }
        4: { long_name: "Santa Clara", short_name: "Santa Clara", types: ["administrative_area_level_2", "political"] }
        5: { long_name: "California", short_name: "CA", types: ["administrative_area_level_1", "political"] }
        6: { long_name: "United States", short_name: "US", types: ["country", "political"] }
        7: { long_name: "94043", short_name: "94043", types: ["postal_code"] }

    I was thinking that you could just combine the parts that you want, like

        sprintf("%s %s<br />%s, %s %s", array[0].short_name, array[1].short_name,
                array[2].short_name, array[5].short_name, array[7].short_name)

    [edit] I just realized that sprintf isn't defined by default in JavaScript, so a plain concatenation would do, I guess. [/edit] But that seems awfully unreliable. Does anyone know the details of the structure of address_components, and whether it's reliably similar to this for street addresses in the US? If I wanted to, I guess I could look for the proper types (street_number, route, etc.) as well. I'd love it if anyone had a better way than what I'm doing here...

    Read the article

  • Looking for combinations of server and embedded database engines

    - by codeelegance
    I'm redesigning an application that will be run as both a single-user and a multiuser application. It is a .NET 2.0 application. I'm looking for server and embedded databases that work well together. I want to deploy the embedded database in the single-user setup and, of course, the server in the multiuser setup. Past releases have been based on MSDE, but in the past year we've been having a lot of install issues: new installs hanging and leaving the system in an unknown state, upgrades disconnecting the database, etc. I migrated the application to SQL Server 2005 and the install is more reliable (as long as a user doesn't try to install over a broken MSDE installation). Since next year's release will be a complete redesign, I figured now's the best time to address the database issue as well. The database has been abstracted from the rest of the application, so I just need to choose which database(s) to use and write an implementation for each one. So far I've considered: SQL Server / SQL Server Compact Edition, and Firebird (the same DB engine is available in two different server modes and as an embedded dll). Each has its own merits, but I'm also interested in any other suggestions. This is a fairly simple program and its data requirements are simple as well. I don't expect it to strain whatever database I eventually choose, so easy configuration and deployment hold more weight than performance.

    Read the article

  • Are there any guarantees in the JLS about the order of execution of static initialization blocks?

    - by Roman
    I wonder if it's reliable to use a construction like:

        private static final Map<String, String> engMessages;
        private static final Map<String, String> rusMessages;

        static {
            engMessages = new HashMap<String, String> () {{
                put ("msgname", "value");
            }};
            rusMessages = new HashMap<String, String> () {{
                put ("msgname", "????????");
            }};
        }

        private static Map<String, String> msgSource;

        static {
            msgSource = engMessages;
        }

        public static String msg (String msgName) {
            return msgSource.get (msgName);
        }

    Is there a possibility that I'll get a NullPointerException because the msgSource initialization block is executed before the block which initializes engMessages? (About why I don't do the msgSource initialization at the end of the upper initialization block: just a matter of taste; I'll do it that way if the described construction is unreliable.)

    Read the article

  • Why does my .NET 4 application know .NET 4 is not installed

    - by Tergiver
    The other day I developed an application targeting .NET 4 and XCOPY-installed it to a Windows XP machine. I had told the owner of the machine that he would need to install .NET Framework 4 to run my app, and he told me he had (not a reliable source). When I ran the application, I was presented with a message box saying that this app requires .NET Framework 4 and asking whether I would like to install it. Clicking the Yes button took me to the Microsoft web site, and a few clicks later .NET 4 was installed and the application successfully launched. Now, I normally don't develop applications that target the latest version of .NET; I always target the lowest version I can (what features do I really need?). So this was my first .NET 4 app (and I only targeted 4 because it used a library that did). In the past, XCOPY-installing .NET applications to a machine that didn't have the correct version of .NET installed resulted in the application simply crashing on startup with no useful information presented to the user. Was this built into my app because I targeted .NET X? Was it something already installed on the target machine? I love the feature; I just want to know precisely how to leverage it in the future.

    Read the article

  • Should a Trim method generally go in the Data Access Layer or in the Domain Layer?

    - by jpierson
    I'm dealing with a database that contains data with inconsistencies such as leading and trailing white space. In general I see a lot of developers practice defensive coding by trimming almost all strings that come from the database and may have been entered by a user at some point. In my opinion it is better to do such formatting before data is persisted, so that it is done only once and the data can then be in a consistent and reliable state. Unfortunately that is not the case here, which leads me to the next best solution: using a Trim method. If I trim all data as part of my data access layer, then I don't have to concern myself with defensive trimming within the business objects of my domain layer. If I instead put the trimming responsibility in my business objects, such as in the set accessors of my C# properties, I should get the same net result; however, the trim will be operating on all values assigned to my business objects' properties, not just the ones that come from the inconsistent database. I guess as a somewhat philosophical question that may determine the answer, I could ask: "Should the domain layer be responsible for defensive/coercive formatting of data?" Would it make sense to have a set accessor for a PhoneNumber property on a business object accept an unformatted or formatted string and then attempt to format it as required (as sketched below), or should this responsibility be pushed to the presentation and data access layers, leaving the domain layer more strict in the type of data that it will accept? I think this may be the more fundamental question. Update: Below are a few links that I thought I should share about the topic of data cleansing: Information service patterns, Part 3: Data cleansing pattern; LINQ to SQL - Format a string before saving?; How to trim values using Linq to Sql?
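
    A minimal sketch of the domain-layer option mentioned above, with the class and property names hypothetical; the set accessor coerces input once, at the domain boundary:

        public class Customer
        {
            private string phoneNumber = string.Empty;

            // Trims exactly once, where the value enters the domain, so both
            // the presentation and data access layers can stay oblivious
            // to stray whitespace.
            public string PhoneNumber
            {
                get { return phoneNumber; }
                set { phoneNumber = (value ?? string.Empty).Trim(); }
            }
        }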

    Read the article

  • Checking whether images loaded after page load

    - by johkar
    Determining whether an image has loaded reliably seems to be one of the great JavaScript mysteries. I have tried various scripts/script libraries which check the onload and onerror events, but I have had mixed and unreliable results. Can I reliably just check the complete property (IE 6-8 and Firefox) as I have done in the script below? I simply have a page which lists out servers, and I link to an on.gif on each server. If it doesn't load, I just want to load an off.gif instead. This is just for internal use... I just need it to be reliable in showing the status!

        <script type="text/javascript">
        var allimgs = document.getElementsByTagName('img');

        function checkImages() {
            for (i = 0; i < allimgs.length; i++) {
                var result = Math.random();
                allimgs[i].src = allimgs[i].src + '?' + result;
            }
            serverDown();
            setInterval('serverDown()', 5000);
        }
        window.onload = checkImages;

        function serverDown() {
            for (i = 0; i < allimgs.length; i++) {
                var imgholder = new Image();
                imgholder.src = allimgs[i].src;
                if (!allimgs[i].complete) {
                    allimgs[i].src = 'off.gif';
                }
            }
        }
        </script>

    Read the article

  • Good tools for keeping the content in test/staging/live environments synchronized

    - by David Stratton
    I'm looking for recommendations on automated folder synchronization tools to keep the content in our three environments synchronized automatically. Specifically, we have several applications where a user can upload content (via a File Upload page or a similar mechanism), such as images, pdf files, word documents, etc. In the past, we had the user doing this on our live server, and as a result, our test and staging servers had to be manually synchronized. Going forward, we will have them upload content to the staging server, and we would like some software to automatically copy the files off to the test and live servers, EITHER on a scheduled basis OR as the files get uploaded. I was planning on writing my own component, either setting it up as a scheduled task or using a FileSystemWatcher (see the sketch below), but it occurred to me that this has probably already been done, and I might be better off with some sort of synchronization tool that already exists. On our web site, there are a limited number of folders that we want to keep synchronized. For these folders, it is all or nothing: we want to make sure the folders are EXACT duplicates. This should make it fairly straightforward, and I would think that any software that can synchronize folders would be OK, except that we would also like the software to log changes. (This rules out simple BATCH files.) So I'm curious: if you have a similar environment, how did you solve the challenge of keeping everything synchronized? Are you aware of a tool that is reliable and will meet our needs? If not, do you have a recommendation for something that will come close, or better yet, an open source solution where we can get the code and modify it as needed? (preferably .NET). Added: Also, I DID google this first, but there are so many options; I am interested mostly in knowing what actually works well vs. what they SAY works, which is why I'm asking here.
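
    For reference, the roll-my-own skeleton I had in mind, in case it helps frame what an existing tool would need to replace. The paths are placeholders, and a production version would need to retry files still locked by an in-progress upload:

        using System;
        using System.IO;

        class FolderMirror
        {
            static void Main()
            {
                const string source = @"D:\staging\uploads";       // placeholder paths
                const string target = @"\\liveserver\uploads";

                var watcher = new FileSystemWatcher(source) { IncludeSubdirectories = true };
                watcher.Created += (s, e) => Mirror(e.FullPath, source, target);
                watcher.Changed += (s, e) => Mirror(e.FullPath, source, target);
                watcher.EnableRaisingEvents = true;

                Console.WriteLine("Watching {0}; press Enter to stop.", source);
                Console.ReadLine();
            }

            static void Mirror(string changedPath, string sourceRoot, string targetRoot)
            {
                if (!File.Exists(changedPath)) return;  // skip directories and deleted files
                string relative = changedPath.Substring(sourceRoot.Length).TrimStart('\\');
                string destination = Path.Combine(targetRoot, relative);
                Directory.CreateDirectory(Path.GetDirectoryName(destination));
                File.Copy(changedPath, destination, true);
                Console.WriteLine("{0:u} copied {1}", DateTime.Now, relative);  // change log
            }
        }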

    Read the article

  • How can I write a unit test to determine whether an object can be garbage collected?

    - by driis
    In relation to my previous question, I need to check whether a component that will be instantiated by Castle Windsor can be garbage collected after my code has finished using it. I have tried the suggestions in the answers to the previous question, but they do not seem to work as expected, at least for my code. So I would like to write a unit test that tests whether a specific object instance can be garbage collected after some of my code has run. Is that possible to do in a reliable way? EDIT: I currently have the following test, based on Paul Stovell's answer, which succeeds:

        [TestMethod]
        public void ReleaseTest()
        {
            WindsorContainer container = new WindsorContainer();
            container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy();
            container.AddComponentWithLifestyle<ReleaseTester>(LifestyleType.Transient);

            Assert.AreEqual(0, ReleaseTester.refCount);
            var weakRef = new WeakReference(container.Resolve<ReleaseTester>());
            Assert.AreEqual(1, ReleaseTester.refCount);

            GC.Collect();
            GC.WaitForPendingFinalizers();

            Assert.AreEqual(0, ReleaseTester.refCount, "Component not released");
        }

        private class ReleaseTester
        {
            public static int refCount = 0;

            public ReleaseTester()
            {
                refCount++;
            }

            ~ReleaseTester()
            {
                refCount--;
            }
        }

    Am I right in assuming that, based on the test above, I can conclude that Windsor will not leak memory when using the NoTrackingReleasePolicy?

    Read the article

  • I seek a PDF paginator

    - by Cameron Laird
    More precisely, I need software that will allow me to consume existing PDF instances, decorate them with page numbers, or page-number-like writing, then write them back to the filesystem. I'll happily pay for an application, or program it myself. I almost certainly require that the software run under Linux (Ubuntu, more specifically). iText comes close. iText certainly can read existing PDF instances. While iText is, for this purpose, only a library, and requires me to program a tiny amount to specify where on the page the numbers should appear, I'm happy to do that (a sketch of roughly how little is involved appears below). I hesitate with iText only because the latest iText license is a headache at certain government agencies (in practice, I'd probably request and pay for a special license), and because, over the last few years, I've observed cases where iText doesn't appear to keep up with the standard, that is, it has more trouble than I would expect reading PDFs observed "in the wild". Similarly, every other possibility I know has at least one difficulty: ReportLab would likely require a disproportionate licensing fee for the small value it provides in this situation, and so on. This application requires no particular sophistication with Unicode, fonts, ... I recognize there are plenty of executables and libraries that do some or all of what I require. I welcome any tips on software that is reliable, generally current with PDF practice, flexible/programmable/configurable/..., and "automatable". In the absence of any new insight, I'll likely go with a specific open-source library I don't want to mention now, for which I've already contracted enhancements, or perhaps revisit iText.
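
    For scale, here is roughly how little programming the page-numbering amounts to, sketched against the iTextSharp port of iText (the Java API is parallel); the filenames and the stamp position are placeholders:

        using System.IO;
        using iTextSharp.text;
        using iTextSharp.text.pdf;

        class Paginator
        {
            static void Main()
            {
                PdfReader reader = new PdfReader("input.pdf");       // placeholder names
                using (var output = new FileStream("numbered.pdf", FileMode.Create))
                {
                    PdfStamper stamper = new PdfStamper(reader, output);
                    for (int page = 1; page <= reader.NumberOfPages; page++)
                    {
                        // Stamp "3 / 12" centered near the bottom of each page.
                        ColumnText.ShowTextAligned(
                            stamper.GetOverContent(page), Element.ALIGN_CENTER,
                            new Phrase(page + " / " + reader.NumberOfPages),
                            297.5f, 28f, 0);                         // x, y in points; no rotation
                    }
                    stamper.Close();
                }
            }
        }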

    Read the article

  • Architecture and tools for a remote control application?

    - by slothbear
    I'm working on the design of a remote control application. From my iPhone or a web browser, I'll send a few commands. Soon my home computer will perform the commands and send back results. I know there are remote desktop apps, but I want something programmable, something simpler, and something that I wrote. My current direction is to use Amazon Simple Queue Service (SQS) as the message bus. The iPhone places some messages in a queue. My local Java/JRuby program notices the messages on the queue, performs the work, and sends back status via a different queue (a sketch of that worker loop is below). This will be a very low-volume application. At $1.00 per million requests (plus a handful of data transfer charges), Amazon SQS looks a lot more affordable than having my own server of any type. And it's super reliable; that's important for me too. Are there better or more standard toolkits or architectures for this kind of remote control? Cost is not a big issue, but I also value the ton I learn by doing it myself. I'm moderately concerned about security, but doubt it will be a problem. The list of recognized commands will be very short, and commands will only be recognized in specific contexts. No "erase hard drive" stuff. Update: I'll probably distribute these programs to some other people who want the same function, but who don't have Amazon SQS accounts. For now, they'll use anonymous access to my queues, with random 80-character queue names.
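
    The worker loop I have in mind is tiny. This sketch uses the AWS SDK for .NET for concreteness (my actual program is Java/JRuby, but the calls are parallel); the queue URLs and the recognized-command list are placeholders:

        using System;
        using System.Threading.Tasks;
        using Amazon.SQS;
        using Amazon.SQS.Model;

        class RemoteControlWorker
        {
            static async Task Main()
            {
                var sqs = new AmazonSQSClient();                        // credentials from environment
                const string commandQueue = "https://sqs.example/commands"; // placeholder URLs
                const string statusQueue = "https://sqs.example/status";

                while (true)
                {
                    // Long-poll so an idle worker costs almost nothing.
                    var response = await sqs.ReceiveMessageAsync(new ReceiveMessageRequest
                    {
                        QueueUrl = commandQueue,
                        WaitTimeSeconds = 20
                    });

                    foreach (var message in response.Messages)
                    {
                        string result = Execute(message.Body);          // whitelisted commands only
                        await sqs.SendMessageAsync(statusQueue, result);
                        await sqs.DeleteMessageAsync(commandQueue, message.ReceiptHandle);
                    }
                }
            }

            static string Execute(string command) =>
                command == "ping" ? "pong" : "unknown command";         // tiny recognized-command list
        }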

    Read the article

  • Is it possible to definitively identify whether a DML command was issued from a stored procedure?

    - by Ed Harper
    I have inherited a SQL Server 2008 database to which calling applications have access through stored procedures. Each table in the database has a shadow audit table into which Insert/Update/Delete operations are logged. Performance testing on populating the audit tables showed that inserting the audit records using OUTPUT clauses was 20% or so faster than using triggers, so this has been implemented in the stored procedures. However, because this design cannot track changes made through DML statements issued directly against the tables, triggers have also been implemented which use the value of @@NESTLEVEL to determine whether or not to run the trigger body (the assumption being that all DML run through stored procedures will have @@NESTLEVEL > 1). I.e., the body of the trigger code looks something like:

        IF @@NESTLEVEL = 1 -- implies the call is direct SQL, so generate history from here
        BEGIN
            ... insert into audit table
        END

    This design is flawed because it won't track updates where DML statements are executed in dynamic SQL, or in any other context where @@NESTLEVEL is raised above 1. Can anyone suggest a completely reliable method we can use in the triggers to execute them only if not triggered by a stored procedure? Or is this (as I suspect) not possible?

    Read the article

  • Which MySQL fork/version to pick?

    - by Drew
    As most of you know, Sun acquired MySQL (and later Oracle acquired Sun), and during these acquisitions there was a lot of FUD in the MySQL community, which resulted in the creation of various forks. Today we have MySQL from MySQL, Percona (XtraDB) MySQL, OurDelta MySQL, MariaDB, and Drizzle, to name a few. Which brings us to the source of the problem. We are in the process of upgrading our databases (hardware/software) and I would like to know which one of the forks I should go with. Each has its own set of pros and cons. We are currently using MySQL 5.0.x from MySQL, on Linux, on an 8-core machine. Our new hardware is a monster with 32 cores and 32GB of memory, connecting to fast NetApp storage via FC. I would like to stick with MySQL from MySQL, but I have heard horror stories about how badly MySQL 5.1 performs on many cores. I have also heard that MySQL 5.4 performs better on multi-core machines, but it's still not production ready. In addition, I have heard a lot of good things about the Percona builds. This is what I know so far:

    MySQL 5.1 from MySQL: reliable choice, but doesn't scale well on a big machine.
    Percona: scales well, good backing company; I don't have much experience with it.
    MariaDB: I don't know much about it, besides that it was founded by original MySQL developers (including Monty).
    OurDelta: don't know much.
    Drizzle: mostly optimized for cloud computing.

    I would like to know the general opinion on this problem. Which build/version should I go with? How are you guys picking your builds/versions? Thanks!

    Read the article
