Search Results

Search found 271 results on 11 pages for 'turing completeness'.

Page 5/11

  • SVN Export or Recursively Remove .SVN Folders

    - by Ben Griswold
    I shared this script with a coworker yesterday. It doesn’t do much; it recursively deletes .svn folders from a source tree. It comes in handy if you want to share your codebase or you get in a terrible spot with SVN and you just want to start all over. Just blow away all svn artifacts and use your mulligan. It’s true, you can get nearly the same result using the SVN export command, which copies your source sans the .svn folders to an alternate location. The catch is that an export only includes those files/folders which exist under version control. If you want a clean copy of your source – versioned or not – export just might not do. The contents of the .cmd file include the following:

        for /f "tokens=* delims=" %%i in ('dir /s /b /a:d *.svn') do ( rd /s /q "%%i" )

    Just download and drop the unzipped “SVN Cleanup.cmd” file into the root of the project, execute and away you go. If you search around enough, I know you can find similar scripts and approaches elsewhere, but I’m still uploading my script for completeness and future reference. Download SVN Cleanup
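
    For anyone not on Windows, a rough cross-platform equivalent in Python (an untested sketch, not part of the download) walks the tree and deletes every .svn directory it finds:

        import os
        import shutil

        # Recursively delete every .svn directory under the current folder.
        for root, dirs, files in os.walk(".", topdown=True):
            if ".svn" in dirs:
                shutil.rmtree(os.path.join(root, ".svn"))
                dirs.remove(".svn")  # don't descend into the folder we just removed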

    Read the article

  • What Should You Look for In a CRM Demo?

    - by charles.knapp
    I have helped firms evaluate software demos and delivered demos in diverse industries such as manufacturing, healthcare, life sciences, and travel (to name just a few). Here are a few suggestions. First, which vendor has the best fit for your industry? Make sure that the vendor demo staff tell you clearly throughout the demo (not just in a passing comment) what portion of each business process and screen is standard, what has been configured, what has been custom coded, and what has been provided by a partner. If you don't keep asking, what you buy may be less useful than what you saw. This will lead to added (and unbudgeted) costs and time. Second, what are the roles of the primary users? What are their top-most needs, such as exception-oriented dashboards or rapid data entry? Can you get a demo for each key role, showing how the software fits a typical workday? Have the vendor repeatedly tell you what is standard, configured, custom coded, or provided by a partner. Third, how well does the demo balance ease of use with completeness of business processes? One common approach is to hide needed fields or steps that are of low visual value. Another approach is to focus heavily on a visually appealing capability, while downplaying the fit with your key business processes. Result: despite their business acumen, demo attendees may not focus adequately on gaps in business fit. So, look for complete disclosure and complete CRM. To arrange a demo from Oracle, please visit http://www.oracle.com/crm.

    Read the article

  • So, are we ever getting the technological singularity?

    - by jsoldi
    I'm still waiting for an AI robot that will pass the Turing test. I keep going back to http://www.a-i.com/ and nothing. I don't know much about AI, but did anyone ever try to make a genetic algorithm whose evolution algorithm itself evolves? Or how about one whose algorithm that makes the genetic algorithm evolve, evolves? Or one whose genetic algorithm that makes the genetic algorithm that makes the genetic algorithm evolve, evolves? Or how about an algorithm that abstracts all this into a potentially infinitely deep tree of genetic evolution algorithms? Aren't we just failing as programmers? And I don't think we can blame processor speed. If you make an application that simulates consciousness you will get a Nobel prize no matter how many hours it takes to respond to your questions. But nobody did it. It almost reminds me of Randi's $1,000,000 paranormal challenge. As I keep going back to AI chat bots, they keep getting better at changing the subject in a way that seems natural. But if I tell them something like "if 'x' is 2 then what's two times 'x'?" then they don't have a clue what I'm talking about. And I don't think they need a whole human brain simulation to be able to answer something like that. They don't need feelings or perception. This is just language and logic. I don't think my perception of the color red gives me the ability to understand that if 'x' is 2 then two times 'x' is 4. I'm sure we are just missing some elemental principle we cannot grasp because it's probably stuck behind our eyes. What do you think?
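
    To make the "evolution algorithm that itself evolves" idea concrete, here is a toy sketch of a self-adaptive genetic algorithm, where each individual carries its own mutation rate and that rate mutates along with it (the fitness function and all numbers are made-up stand-ins for illustration):

        import random

        def fitness(x):
            return -abs(x - 42)  # toy goal: evolve a number toward 42

        def evolve(pop, generations=200):
            for _ in range(generations):
                # keep the fitter half, then let each survivor produce one child
                pop.sort(key=lambda ind: fitness(ind[0]), reverse=True)
                survivors = pop[: len(pop) // 2]
                children = []
                for x, rate in survivors:
                    # the mutation rate itself mutates
                    new_rate = max(0.01, rate * random.uniform(0.8, 1.25))
                    children.append((x + random.gauss(0, new_rate), new_rate))
                pop = survivors + children
            return max(pop, key=lambda ind: fitness(ind[0]))

        pop = [(random.uniform(-100, 100), 1.0) for _ in range(20)]
        print(evolve(pop))  # best (value, mutation_rate) pair found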

    Read the article

  • Book Review: Professional WCF 4

    - by Sam Abraham
    My investigation of WCF internals has set the right stage to revisit Professional WCF 4 by Pablo Cibraro, Kurt Claeys, Fabio Cozzolino and Johann Grabner. In this book, the authors dive deep into all aspects of the WCF API in a reading targeted towards intermediate and advanced developers. Book quality, so far as presentation, code completeness, content clarity and organization go, was superb. The authors have taken a hands-on approach to thoroughly covering the WCF 4.0 API, with three chapters totaling 100+ pages completely dedicated to business cases, with downloadable source code readily available. Chapter 1 outlines SOA best-practice considerations. The next three chapters take a top-down approach to the WCF API, covering service and data contracts, bindings, clients, instancing and Workflow Services, followed by another carefully thought-out three chapters covering the security options available via the WCF API. In conclusion, Professional WCF 4 provides thorough coverage of the WCF API and is a recommended read for anybody looking to reinforce their understanding of the various features available in the WCF framework. Many thanks to the Wiley/Wrox User Group Program for their support of our West Palm Beach Developers’ Group. All the best, --Sam

    Read the article

  • Best ways to collect location-based user input

    - by user359650
    I'm working on a website where users will be able to register and provide information about their location. In order to prevent users from inputting incorrect data, we don't want users to provide free-text information but instead choose from predefined values as much as possible. We believe there are two ways of providing those values: use an API to an external service provider, or create your own local database.

    APIs. Some resources:
    - https://developers.facebook.com/docs/reference/ads-api/get-autocomplete-data/
    - http://developer.yahoo.com/geo/geoplanet/
    Pros:
    - accuracy and completeness of data
    - no maintenance related to updates of the data, as this is taken care of by the API provider
    - easier/faster to get started (no need to create a local database, just implement the API)
    Cons:
    - degradation of performance when there are availability issues with the external API
    - outages due to changes to the external API (until your code is updated to reflect those changes)
    - lock-in with the external provider

    Local database. Some resources:
    - http://developer.yahoo.com/geo/geoplanet/data/
    - http://www.maxmind.com/app/geolitecity
    - http://download.geonames.org/export/dump/
    Pros:
    - no external dependency: improved stability and performance
    Cons:
    - more work to get started (you need to create the database and the code to interact with it)
    - risk of inaccurate/incomplete data, either initially or over time
    - more maintenance work to keep the database up to date

    Assuming the depth of information requested from users is as follows:
    - country: interested in the value; also used to narrow down the list of regions
    - region (state in the US, county in the UK...): not interested in the value itself, only used to narrow down the list of cities
    - city: interested in the value (which can be used to work out the related region should we need regional statistics)
    - address: interested in the value, although OPTIONAL

    Which option (API or local database) would you choose? What tips would you give for the implementation? What other resources can you share?
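
    To get a feel for the API option, I drafted this sketch against the GeoNames search endpoint (one of the resources above; parameter names are per the GeoNames docs as I read them and should be verified, and 'demo' needs replacing with a real account):

        import requests

        # Hedged sketch of a city autocomplete helper using GeoNames searchJSON.
        def suggest_cities(prefix, country=None, max_rows=10):
            params = {
                "name_startsWith": prefix,
                "featureClass": "P",   # populated places only
                "maxRows": max_rows,
                "username": "demo",    # replace with your own GeoNames account
            }
            if country:
                params["country"] = country
            resp = requests.get("http://api.geonames.org/searchJSON",
                                params=params, timeout=5)
            resp.raise_for_status()
            return [g["name"] for g in resp.json().get("geonames", [])]

        print(suggest_cities("Lond", country="GB"))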

    Read the article

  • Virtualbox shared folder mount from fstab fails; works once bootup is complete

    - by Ben
    I've got Ubuntu 13.10 installed in Virtualbox 4.3. The host machine is Windows. I have a couple of Virtualbox shared folders being mounted by /etc/fstab. Until recently this setup worked just fine, but after upgrading from Ubuntu 13.04 and Virtualbox 4.2 (at essentially the same time) the fstab mounting stopped working. I get the following error during boot:

        An error occurred while mounting /home/benme/Documents.
        keys:Press S to skip mounting or M for manual recovery

    Pressing M for manual recovery and then trying to mount manually also fails:

        root@benme-vb:~# cd /home/benme
        root@benme-vb:/home/benme# mount Documents
        /sbin/mount.vboxsf: mounting failed with the error: No such device

    But if I instead skip mounting during boot, wait for Unity to start and then mount manually in a shell, everything works fine:

        benme-vb ~ % ls Documents
        benme-vb ~ % sudo mount Documents
        [sudo] password for benme:
        benme-vb ~ % ls Documents
        # actual file list omitted

    Note that when I mount manually I'm letting mount take all the options from /etc/fstab, and it works. This suggests to me that it's some sort of timing issue, where Virtualbox isn't "ready" to provide the shared file mounts at the point /etc/fstab mounts are run during bootup. Here's the fstab line, just for completeness:

        Documents /home/benme/Documents vboxsf uid=benme,gid=benme,dmode=774,fmode=664 0 0

    Is there something I can do about this from the Ubuntu side? Or does anyone happen to know more about this from the Virtualbox angle? I've found an old report on the Virtualbox bug-tracker with identical symptoms, but in that case the user had updated Virtualbox without updating their guest additions and resolving that fixed the problem; this isn't happening here, I've definitely got the 4.3 guest additions installed.
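
    One workaround I've seen suggested for this kind of race (untested on my side, so treat it as a sketch): take the share out of the boot-time mount pass with noauto, then mount it late in boot from /etc/rc.local. But I'd rather understand the root cause than paper over it.

        # /etc/fstab -- add noauto so boot no longer blocks on the share
        Documents /home/benme/Documents vboxsf uid=benme,gid=benme,dmode=774,fmode=664,noauto 0 0

        # /etc/rc.local -- mount once the vboxsf module is available
        # (place this line before the final "exit 0")
        mount /home/benme/Documents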

    Read the article

  • How does a pure functional programming language manage without assignment statements?

    - by Gnijuohz
    When reading the famous SICP, I found the authors seem rather reluctant to introduce the assignment statement to Scheme in Chapter 3. I read the text and kind of understand why they feel that way. As Scheme is the first functional programming language I have ever known anything about, I am kind of surprised that there are some functional programming languages (not Scheme, of course) that can do without assignments. Let's use the example the book offers, the bank account example. If there is no assignment statement, how can this be done? How do you change the balance variable? I ask because I know there are some so-called pure functional languages out there, and since they are Turing complete, this must be possible too. I learned C, Java and Python and use assignments a lot in every program I wrote, so it's really an eye-opening experience. I really hope someone can briefly explain how assignments are avoided in those functional programming languages and what profound impact (if any) this has on these languages. The example mentioned above is here:

        (define (make-withdraw balance)
          (lambda (amount)
            (if (>= balance amount)
                (begin (set! balance (- balance amount))
                       balance)
                "Insufficient funds")))

    This changes the balance with set!. To me it looks a lot like a class method changing the class member balance. As I said, I am not familiar with functional programming languages, so if I said something wrong about them, feel free to point it out.
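
    From what I can tell, the usual answer is to thread the state through the calls instead of mutating it. A minimal sketch of the idea (in Python for familiarity, not from the book): a pure function never overwrites the balance; it takes the old balance as an argument and returns the new one.

        def withdraw(balance, amount):
            # Returns a new balance instead of mutating an existing one.
            if balance >= amount:
                return balance - amount, "ok"
            return balance, "Insufficient funds"

        b0 = 100
        b1, msg = withdraw(b0, 25)    # b1 == 75
        b2, msg = withdraw(b1, 100)   # b2 == 75, msg == "Insufficient funds"

    Monads and similar structures in languages like Haskell are, roughly, machinery for doing this threading automatically.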

    Read the article

  • How do I create my own programming language and a compiler for it

    - by Dave
    I am well versed in programming and have come across languages including BASIC, FORTRAN, COBOL, LISP, LOGO, Java, C++, C, MATLAB, Mathematica, Python, Ruby, Perl, JavaScript, Assembly and so on. I can't understand how people create programming languages and devise compilers for them. I also couldn't understand how people create operating systems like Windows, Mac, UNIX, DOS and so on. The other thing that is mysterious to me is how people create libraries like OpenGL, OpenCL, OpenCV, Cocoa, MFC and so on. The last thing I am unable to figure out is how scientists devise an assembly language and an assembler for a microprocessor. I would really like to learn all of this stuff and I am 15 years old. I always wanted to be a computer scientist, someone like Babbage, Turing, Shannon, or Dennis Ritchie. I have already read Aho's Compiler Design and Tanenbaum's OS concepts book, and they only discuss concepts and code at a high level. They don't go into the details and nuances of how to devise a compiler or operating system. I want a concrete understanding so that I can create one myself, and not just an understanding of what a thread, semaphore, process, or parsing is. I asked my brother about all this. He is an SB student in EECS at MIT and hasn't got a clue how to actually create all this stuff in the real world. All he knows is an understanding of Compiler Design and OS concepts like the ones that you guys have mentioned (i.e. Thread, Synchronization, Concurrency, memory management, Lexical Analysis, Intermediate code generation and so on)
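
    To show what I mean by concrete: even a toy sketch like the following recursive-descent evaluator (my own illustration, not from Aho's book) teaches me more than a chapter of theory. A language is a grammar plus code that reads it:

        import re

        def tokenize(src):
            # numbers and single-character operators; whitespace is skipped
            return re.findall(r"\d+|[+\-*/()]", src)

        def parse_expr(toks):        # expr := term (('+'|'-') term)*
            val = parse_term(toks)
            while toks and toks[0] in "+-":
                op = toks.pop(0)
                rhs = parse_term(toks)
                val = val + rhs if op == "+" else val - rhs
            return val

        def parse_term(toks):        # term := factor (('*'|'/') factor)*
            val = parse_factor(toks)
            while toks and toks[0] in "*/":
                op = toks.pop(0)
                rhs = parse_factor(toks)
                val = val * rhs if op == "*" else val // rhs  # integer division
            return val

        def parse_factor(toks):      # factor := NUMBER | '(' expr ')'
            tok = toks.pop(0)
            if tok == "(":
                val = parse_expr(toks)
                toks.pop(0)          # discard the closing ')'
                return val
            return int(tok)

        print(parse_expr(tokenize("2*(3+4)")))  # prints 14

    A compiler differs mainly in that, instead of computing the value on the spot, each parse function would emit instructions for a target machine.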

    Read the article

  • Ubuntu 12.10 ATI Driver 12.11 fails after compiz and xorg update

    - by Lukasz W.
    I updated my system via the package manager from Unity, and the next restart was just blackness. After following http://linux.hootip.com/amd-catalyst-12-11-beta-fix-and-installation-the-drivers-on-ubuntu-12-11/ I had the Catalyst 12.11 Beta driver installed. I checked my /var/log/apt/history.log and the update I received was of compiz and xorg packages. I tried to get the latest release info, but all I get from their pages is commit info; I can't tell what was in the package update I got served. Does anyone know what was in the latest xorg/compiz release that broke the driver? Which driver should I use now? For completeness, this is how I got the system back to boot (probably lame and not elegant):

    1. Boot with the GRUB selection "More Ubuntu options" (or something like that).
    2. From the secondary screen, select 3.5.0-19 with boot options.
    3. When the system prompts on stuff you'd like to do, select "root" - Drop to root shell.
    4. There:

        # mount -o remount,rw /
        # mv /etc/X11/xorg.conf /etc/X11/xorg.conf.failed
        # /usr/share/ati/fglrx-uninstall.sh
        # reboot

    This got me back on my feet.

    Read the article

  • Which Open Source Licenses can address concerns for an Open Source Game Engine?

    - by Chris
    I am on a team that is looking to open source an engine we are building. It's intended as an engine for Online RPG style games. We're writing it to work on both desktops and android platforms. I've been over to the OSI http://opensource.org/licenses/category to check out the most common licenses. However, this will be my first time going into an open source project and I wanted to know if the community had some insight into which licenses might be best suited. Key licensing concerns:

    - Removing or limiting our liability (most already seem to cover this, but stating for completeness).
    - We want other developers to be able to take part or all of our project and use it in their own projects, with proper accreditation to our project.
    - Licensing should not hinder someone's ability to quickly use the engine. They should be able to download a release and start using it without needing to wait on licensing issues.
    - Game content (gfx, sound, etc.) that is not part of the engine should be allowed to be licensed separately. If someone is using our engine, they can retain full copyright of their content, including engine-generated data.
    - Our primary goal is exposure, which is why we're going open source to start with - both for the project and for the individuals developing it. Are there any licenses that can require accreditation visible to players? While I'd put our primary goal as exposure, for licensing the accreditation is less of a concern.

    From what I've read through (and have been able to understand), it doesn't seem like any of the licenses cover anything that is produced by the licensed software. Are there any that state this specifically, or does simply not mentioning it leave it open for other licensing? Are there any other concerns that we should consider? Has anyone had any issues using any of these licenses?

    Read the article

  • Kids and programming: ScratchKara

    - by Mike Pagel
    Every now and then I kept wondering how to share with my kids the excitement of creating something with your computer. Of course, today this is a bit more difficult, as they have seen 3D animation games and well-edited websites. I guess that's why they weren't all that hyped when I found my first computer model at our local recycling facilities (an 8-bit Laser VZ-200 with rubber keys). When I finally got it up and running with an old analog TV set, they asked whether we could play soccer on it. Needless to say, my showing them how it remembers some BASIC commands and lists and executes them did not make any impression. So the question is for real: how do you get today's kids excited about programming? Just recently I looked again for environments that allow even young kids (mine are 7 and 9 years old now) to do something and have fun. Obviously any real, text-oriented programming language wouldn't work well. To cut it short: something really nice was built by the University of Oldenburg: ScratchKara. It is the perfect mixture of Kara, a simulation of a little ladybug, and Scratch, an authoring environment from MIT. ScratchKara allows kids to initially simply explore how the bug moves and turns by pressing the action buttons, then move towards sequencing commands through drag & drop, and eventually end up building algorithms with procedures and functions. Even though it is built for kids and beginners, the environment comes with debugging and refactoring, which I found more than amazing. My kids love it, and I have to admit I keep thinking about how to solve a bit more advanced problems with this language, which does not allow you to store any state information (other than your call stack). Yes, I am hooked, too... Once the language is understood you can then move to one of the original Kara versions, where you can define the bug's behavior through finite state machines, Turing tables, Java and other textual languages. And from there, anything is possible.

    Read the article

  • Verification vs. validation again: does testing belong to verification? If so, which?

    - by user970696
    I have asked this before and created a lot of controversy, so I tried to collect some data and am asking a similar question again. E.g. V&V where all testing is only validation: http://www.buzzle.com/editorials/4-5-2005-68117.asp

    According to ISO 12207, testing is done in validation:
    • Prepare test requirements, cases and specifications
    • Conduct the tests

    For verification, it mentions:
    • The code implements proper event sequence, consistent interfaces, correct data and control flow, completeness, appropriate allocation timing and sizing budgets, and error definition, isolation, and recovery.
    • The software components and units of each software item have been completely and correctly integrated into the software item.

    Not sure how to verify this without testing, but testing is not listed there as a technique. From IEEE:

    Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]
    Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]

    At the end of the development phase? That would mean UAT. So the question is: which testing (unit, integration, system, UAT) is considered verification, and which validation? I do not understand why some say dynamic verification is testing, while others say testing is only validation. An example: I am testing an application. The system requirements say there are two fields with a max length of 64 characters and a Save button. The use case says: the user will fill in first and last name and save. When checking the fields and the Save button's presence, I would say it's verification. When I follow the use case, it's validation. So it's both together, done on the system as a whole.

    Read the article

  • Location of development solutions on disk - common, or up to the individual?

    - by dreza
    In our team meeting today a senior member brought up the proposal that we should have a common location/structure for our development solutions. A couple of his points were:

    - Making it common means that when talking about projects and emailing stuff, everyone is on the same wavelength and knows where to look.
    - If there is ever the need to hard-code a location path, then it will work across all developers' PCs.

    He had a few more points to back up his suggestion, but I unfortunately got distracted during the discussion and so didn't hear all of them. I have no issue with the idea and can see its merits, but I was wondering if it is common or even recommended that all developers place their code in the same folder structure. Or do developers like the flexibility of locating solutions wherever they want? We currently use SVN for our version control. In this case his recommendation was to place all code in:

        c:\Work\Development\<Customer>\<project>\Code\<solution>\

    I guess the actual path is irrelevant for this question, but I've added it for completeness.

    Read the article

  • How to apply verification and validation to the following example

    - by user970696
    I have been following the verification and validation questions here with my colleagues, yet we are unable to see the slight differences, probably because of a language barrier in technical English. An example:

    Requirement specification: The user wants to control the lights in 4 rooms by remote command sent from the UI, for each room separately.

    Functional specification: The UI will contain 4 checkboxes labelled according to the rooms they control. When a checkbox is checked, a signal is sent to the corresponding light and a green dot appears next to the checkbox. When a checkbox is unchecked, a turn-off signal is sent to the corresponding light and a red dot appears next to the checkbox.

    Let me start with what I learned here: verification, according to many great answers here, ensures that the product reflects the specified requirements. As the functional spec is written by the producer based on the customer's requirements, it will be verified for completeness and correctness. Then the design document will be checked against the functional spec (does it design 4 checkboxes?), and the source code against the design (is there code for 4 checkboxes, functions to send the signals, etc. - is it traceable to the requirements?). Okay, the product is built and we need to test it: validate. Here comes our understanding trouble - validation should ensure the product meets the requirements for its specific intended use, which is basically the business requirement (does it work? can I control the lights from the UI?), but testers will definitely work with the functional spec, making sure the checkboxes are there, working, labelled, etc. They are basically checking whether the requirements in the functional spec were met in the final product - isn't that verification? (It should not be; let's stick to ISO 12207, where only validation is the actual testing.)

    Read the article

  • Why is CS never a topic of conversation for the layman? [closed]

    - by hydroparadise
    Granted, every profession has its technicalities. If you are an MD, you had better know the anatomy of the human body, and if you are an astronomer, you had better know your calculus. Yet you don't have to know these more advanced topics to know that smoking might give you lung cancer because of carcinogens, or that the moon revolves around the earth because of gravity (thank you, Discovery Channel). There's a sort of common knowledge (at least in more developed countries) of these more advanced topics. With that said, why are things like recursive descent parsing, BNF, or Turing machines hardly ever mentioned outside 3000- or 4000-level classes in a university setting, or between colleagues? Even back in my days before college, in my pursuit of knowledge on how computers work, these very important topics (IMHO) never seemed to see the light of day. Many different sources and sites go into "What is a processor?" or "What is RAM?", or "What is an OS?". You might get lucky and discover something about programming languages and how they play a role in how applications are created, but nothing about the tools for creating the language itself. To extend this idea, Dennis Ritchie died shortly after Steve Jobs, yet Ritchie got very little press compared to Jobs. So, the heart of my question: does the public in general not care to hear about the computer science topics that make the technology in their lives work, or does the computer science community not lend itself to the general public to close the knowledge gap? Am I wrong to think the general public has the same thirst for knowledge on how things work as I do? Please consider the question carefully before answering or voting to close.

    Read the article

  • A few questions about how JavaScript works

    - by KayoticSully
    I originally posted on Stack Overflow and was told I might get some better answers here. I have been looking deeply into JavaScript lately to fully understand the language, and have a few nagging questions that I cannot seem to find answers to (specifically dealing with object-oriented programming; I know JavaScript is meant to be used in an OOP manner, I just want to understand it for the sake of completeness). Assuming the following code:

        function TestObject() {
            this.fA = function() {
                // do stuff
            }

            this.fB = testB;

            function testB() {
                // do stuff
            }
        }

        TestObject.prototype = {
            fC : function() {
                // do stuff
            }
        }

    What is the difference between functions fA and fB? Do they behave exactly the same in scope and potential ability? Is it just convention, or is one way technically better or more proper? If there is only ever going to be one instance of an object at any given time, would adding a function to the prototype such as fC even be worthwhile? Is there any benefit to doing so? Is the prototype only really useful when dealing with many instances of an object, or with inheritance? And what is technically the "proper" way to add methods to the prototype: the way I have above, or calling TestObject.prototype.functionName = function(){} every time? I am looking to keep my JavaScript code as clean and readable as possible, but am also very interested in what the proper conventions for objects are in the language. I come from a Java and PHP background and am trying not to make any assumptions about how JavaScript works, since I know it is very different, being prototype based. Also, are there any definitive JavaScript style guides or documentation about how JavaScript operates at a low level? Thanks!

    Read the article

  • How can I be certain that my code is flawless? [duplicate]

    - by David
    This question already has an answer here: Theoretically bug-free programs (5 answers)

    I have just completed an exercise from my textbook which asked me to write a program to check if a number is prime or not. I have tested it and it seems to work fine, but how can I be certain that it will work for every prime number?

        public boolean isPrime(int n) {
            int divisor = 2;
            int limit = n - 1;
            if (n == 2) {
                return true;
            } else {
                int mod = 0;
                while (divisor <= limit) {
                    mod = n % divisor;
                    if (mod == 0) {
                        return false;
                    }
                    divisor++;
                }
                if (mod > 0) {
                    return true;
                }
            }
            return false;
        }

    Note that this question is not a duplicate of Theoretically Bug Free Programs, because that question asks whether one can write bug-free programs in the face of the limitative results such as Turing's proof of the incomputability of halting, Rice's theorem and Gödel's incompleteness theorems. This question asks how a program can be shown to be bug free.
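
    The best I can come up with short of a proof: since int is a finite type, cross-check the method exhaustively against an independent implementation up to some bound. A sketch of the idea (in Python, with a simple trial-division stand-in for my Java method):

        def sieve(limit):
            # Sieve of Eratosthenes: is_p[n] is True iff n is prime.
            is_p = [False, False] + [True] * (limit - 1)
            for i in range(2, int(limit ** 0.5) + 1):
                if is_p[i]:
                    for j in range(i * i, limit + 1, i):
                        is_p[j] = False
            return is_p

        def is_prime(n):
            # Stand-in for the method under test (trial division).
            return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

        LIMIT = 100_000
        table = sieve(LIMIT)
        assert all(is_prime(n) == table[n] for n in range(LIMIT + 1))
        print("implementations agree up to", LIMIT)

    Of course, agreeing up to a bound still isn't a proof for all inputs, which is exactly the gap the linked question is about.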

    Read the article

  • How to define implementation details?

    - by woni
    In our project, an assembly combines logic for the IoC container, the project internals and the communication layer. The current version evolved to have only internal classes in add-in assemblies. My main problem with this approach is that the entry point is only available through the IoC container. It is not possible to use anything other than reflection to initialize the assembly. Everything behind the IoC interface is defined as implementation detail and therefore not intended for usage outside. It is well known that you should not test implementation details (such as private and internal methods), because they should be tested through the public interface. It is also well known that your tests should not use the IoC container to set up the SUTs, because that would result in too many dependencies. So we are using the InternalsVisibleTo attribute to make internals visible to our test assemblies and test the so-called implementation details. I recognize that one problem could be the mix-up between different concerns in that assembly; changing this would make this discussion moot, because the classes would have to be defined public. Ignoring my concerns with this: isn't the need to test a class reason enough to make it public? The usage of InternalsVisibleTo seems unintended, and a little bit "hacky". The approach of testing only against the publicly available IoC container is too costly and would result in integration-style tests. The pros of using internals are that the usages are well known, and they do not have to be implemented the way a public method would (documentation, completeness, versioning, ...). Is there a solution that avoids testing against internals but keeps their advantages over public classes, or do we have to redefine what an implementation detail is?

    Read the article

  • Resume on 30 Days of SharePoint

    Dear readers, as you might have noticed... it was an organisational disaster on my end! Even though I continued my studies and research on Microsoft SharePoint 2013 during the last 30 days, I wasn't able to write an article a day to keep you posted on my progress. Nonetheless, I gathered a good number of additional blogs, mainly SharePoint MVP sites, and online forums which will be helpful in the next couple of weeks while I'm actually going to develop a C#-based client which will enable an existing 'legacy' application to use SharePoint as a document management system (DMS) alongside other existing solutions.

    Finding excuses: Well, no. Not really. I simply didn't block any (or enough) time every day to write down my progress during my own challenge. My log book on learning about SharePoint stands at 41 hours and 15 minutes for this month, which means that I spent an average of more than 1 hour per day on getting into SharePoint. I know that might sound a little low, but also keep in mind that I went for the challenge on top of my daily job and private responsibilities. During the same period there were two priority 0 incidents from clients (external root cause) which took precedence over this leisure project.

    More to come: Anyway, it was a first trial, and despite the low level of reporting on my blog, I'm confident about what I learned during the last 30 days, and I'm ready to implement the client's requirements. At least, I would say that I have a better understanding of the road map, or the path to walk, during the next month. As time and secrecy allow, I'm going to note down some bits and pieces... During the process of development, I'm going to 'cheat' on the challenge summary article and add links to those new entries, just for the sake of completeness.

    Next challenge? Hmm, there have been ideas during the last meetup of the Mauritius Software Craftsmanship Community (MSCC) regarding certifications in IT, and eventually we might organise some kind of study group for specific exams, most probably Microsoft exams towards MCSD Web Developer or Windows Developer.

    Read the article

  • JSR 308 Moves Forward

    - by abuckley
    I am pleased to announce a number of recent milestones for JSR 308, Annotations on Java Types:

    Adoption of JCP 2.8: Thanks to the agreement of the Expert Group, JSR 308 operates under JCP 2.8 from September 2012. There is a publicly archived mailing list for EG members, and a companion list for anyone who wishes to follow EG traffic by email. There is also a "suggestion box" mailing list where anyone can send feedback to the EG directly. Feedback will be discussed on the main EG list. Co-spec lead Prof. Michael Ernst maintains an issue tracker and a document archive.

    Early-Access Builds of the Reference Implementation: Oracle has published binaries for all platforms of JDK 8 with support for type annotations. Builds are generated from OpenJDK's type-annotations/type-annotations forest (notably the langtools repo). The forest is owned by the Type Annotations project.

    Integration with Enhanced Metadata: On the enhanced metadata mailing list, Oracle has proposed support for repeating annotations in the Java language in Java SE 8. For completeness, it must be possible to repeat annotations on types, not only on declarations. The implementation of repeating annotations on declarations is already in the type-annotations/type-annotations forest (and hence in the early-access builds above) and work is underway to extend it to types.

    Read the article

  • How to handle notifications to several partial views of the same model?

    - by Seki
    I am working on refactoring an old simulation of a Turing machine. The application uses a class that contains the state and the logic of program execution, and several panels to display the tape representation and show the state, messages, and the GUI controls (start, stop, program listing, ...). I would like to refactor it using the MVC architecture, which was not used originally: the Frame is the only way to get access to the different panels, and there is also a strong coupling between the "engine" class and the GUI updates, in the form of frame.displayPanel.state.setText("halted"); or frame.outputPanel.messages.append("some thing");. It looks to me like I should put the state-related code into an observable model class and make the different panels observers. My problem is that the Java Observable class only provides a global notification to the observers, while I would prefer not to refresh every observer every time, but only when the part that each one specifically observes has changed. I am thinking of implementing several vectors of listeners myself (for the state/position, for the output messages, ...) but I feel like I'm reinventing the wheel. I thought also about adding some flags that the observers could check, like isNewMessageAvailable(), hasTapeMoved(), etc., but that also sounds like approximate design. BTW, is it OK to keep the fetch/execute loop in the model, or should I move it somewhere else? We can think in a theoretical, ideal way, as I am completely revamping this small application.
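
    I know Observable.notifyObservers(Object arg) can carry an argument that observers filter on, but that still wakes every observer. The closest I've sketched so far is a tiny per-topic dispatcher, which is what I mean by "several vectors of listeners" (sketched in Python for brevity; all names are made up for illustration):

        class EventBus:
            def __init__(self):
                self._listeners = {}          # topic -> list of callbacks

            def subscribe(self, topic, callback):
                self._listeners.setdefault(topic, []).append(callback)

            def publish(self, topic, payload=None):
                # Only the listeners registered for this topic are notified.
                for cb in self._listeners.get(topic, []):
                    cb(payload)

        bus = EventBus()
        bus.subscribe("state", lambda s: print("state:", s))
        bus.subscribe("message", lambda m: print("log:", m))
        bus.publish("state", "halted")        # only the state panel updates

    Each panel would subscribe only to the topic it renders, so a tape move never wakes the message log.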

    Read the article

  • Problem with incomplete type while trying to detect existence of a member function

    - by abir
    I was trying to detect the existence of a member function for a class where the function tries to use an incomplete type. The typedef is:

        struct foo;
        typedef std::allocator<foo> foo_alloc;

    The detection code is:

        struct has_alloc {
            template<typename U, U x>
            struct dummy;

            template<typename U>
            static char check(dummy<void* (U::*)(std::size_t), &U::allocate>*);

            template<typename U>
            static char (&check(...))[2];

            const static bool value = (sizeof(check<foo_alloc>(0)) == 1);
        };

    So far I was using the incomplete type foo with std::allocator without any error on VS2008. However, when I replaced it with a nearly identical implementation:

        template<typename T>
        struct allocator {
            T* allocate(std::size_t n) {
                return (T*)operator new (sizeof(T) * n);
            }
        };

    it gives an error saying that, as T is an incomplete type, it has a problem instantiating allocator<foo>, because allocate uses sizeof. GCC 4.5 with std::allocator also gives the error, so it seems that during the detection process the class needs to be completely instantiated, even when I am not using that function at all. What I was looking for is void* allocate(std::size_t), which is different from T* allocate(std::size_t). My questions are (I have three questions, but as they are correlated, I thought it better not to create three separate questions):

    1. Why doesn't MS std::allocator check for the incomplete type foo while instantiating? Are they following any trick which can be implemented?
    2. Why does the compiler need to instantiate allocator<T> to check the existence of the function, when sizeof is not used as an sfinae mechanism to remove/add allocate in the overload resolution set? It should be noted that, if I remove the generic implementation of allocate, leaving the declaration only, and specialize it for foo afterwards, such as:

        struct foo {};

        template<>
        struct allocator<foo> {
            foo* allocate(std::size_t n) {
                return (foo*)operator new (sizeof(foo) * n);
            }
        };

    after struct has_alloc, it compiles on GCC 4.5 while VS2008 gives an error, as allocator<T> has already been instantiated and an explicit specialization for allocator<foo> is already defined.
    3. Is it legal to use nested types of an std::allocator of incomplete type, such as typedef foo_alloc::pointer foo_pointer;? Though it is practically working for me, I suspect nested types such as pointer may depend on the completeness of the type the allocator takes. It would be good to know if there is any possible way to typedef such types as foo_pointer, where the type pointer depends on the completeness of foo.

    NOTE: As the code is not copy-pasted from an editor, it may have some syntax errors; I will correct them if I find any. Also, the code (such as allocator) is not a complete implementation; I simplified it and typed only the portion I think is useful for this particular problem.

    Read the article

  • MVVM - Master/Detail scenario with Navigation and Blendability

    - by vidalsasoon
    Hi, I'll start off with what I want, so it may be simpler to understand: I have a page (Master.xaml) that has a listbox of PersonViewModel. When the user selects a PersonViewModel from the listbox, I want to navigate to a details page (Details.xaml) for the selected PersonViewModel. The details page does some extra heavy lifting that I only want done once the user navigates to the page. (I don't want too much stuff loaded in each PersonViewModel of the master listbox.) So how do you handle master/detail scenarios with navigation while maintaining "blendability"? I've been turning in circles for the past week; there seems to be no clean solution for something that should be quite common.

    Read the article

  • ASP.NET MVC Html.ActionLink Maintains Route Values

    - by Carl
    Hi, I have a question that has pretty much been asked here: http://stackoverflow.com/questions/780643/asp-net-mvc-html-actionlink-keeping-route-value-i-dont-want However, the final solution is a kludge, pure and simple, and I really would like to understand why this happens, if someone can please explain it to me. For completeness, it is possible to recreate the scenario very easily:

    1. Create a new MVC web app and run it up.
    2. Visit the About tab.
    3. Modify the URL to read /Home/About/Flib - this obviously takes you to the action with an id of 'Flib', which we don't care about.
    4. Notice that the top menu link to About now actually links to /Home/About/Flib - this is wrong as far as I can see, as I now have absolutely no way of using site links to get back to /Home/About.

    I really don't understand why I should be forced to modify all of my Html.ActionLinks to include new { id = string.Empty } for the routevalues and null for the htmlAttribs. This seems especially out of whack because I already specify id = 0 as part of the route itself. Hopefully I'm missing a trick here.

    Read the article
