Search Results

Search found 5819 results on 233 pages for 'compiler theory'.


  • sqrt(int_value + 0.0)? What's the point?

    - by Earlz
    Hello. While doing some homework from my very strange C++ book (which I've been told before to throw away), I came across a very peculiar code segment. I know homework always throws in extra "mystery" to try to confuse you, like indenting two lines after a single-statement for-loop. But this one confuses me because it seems to serve some real purpose. Basically it is like this:

        int counter = 10;
        ...
        if (pow(floor(sqrt(counter + 0.0)), 2) == counter) ...

    I'm interested in this part especially: sqrt(counter + 0.0). Is there some purpose to the +0.0? Is this the poor man's way of doing a static cast to a double? Does this avoid some compiler warning on some compiler I do not use? The entire program printed the exact same thing and compiled without warnings on g++ when I left out the +0.0 part. Maybe I'm not using a weird enough compiler?
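
    A likely explanation, for the record: before C++11, <cmath> declared sqrt only for float, double, and long double, so calling sqrt with a plain int could be ambiguous on stricter compilers and libraries; the +0.0 simply promotes the int to double. A minimal sketch of the two equivalent spellings, assuming nothing beyond the standard headers:

        #include <cmath>

        int main() {
            int counter = 10;
            // The book's spelling: int + 0.0 yields a double.
            double a = std::sqrt(counter + 0.0);
            // The clearer, equivalent spelling:
            double b = std::sqrt(static_cast<double>(counter));
            return (a == b) ? 0 : 1;   // same value either way
        }

    On g++, sqrt(counter) tends to resolve quietly, which would explain why dropping the +0.0 changed nothing there.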

    Read the article

  • Assign bit-field member to char

    - by RedX
    I have some code here that uses bit-fields to store many one-bit values in a char. Basically:

        struct BITS_8 {
            char _1:1;
            (...)
            char _8:1;
        };

    Now I was trying to pass one of these bits as a parameter into a function:

        void func(char bit){
            if(bit){
                // do something
            }else{
                // do something else
            }
        }

        // and the call was
        struct BITS_8 bits; // all bits were set to 0 before
        bits._7 = 1;
        bits._8 = 1;
        func(bits._8);

    The solution was to single the bit out when calling the function: func(bits._8 & 0x128). Before that, I kept going into // do something because other bits were set. I was wondering if this is the correct behaviour or if my compiler is broken. The compiler is an embedded compiler that produces code for Freescale ASICs.
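
    For reference, a minimal sketch of what conforming bit-field behaviour looks like; the names Bits8/b1/b7/b8 are stand-ins, and unsigned char sidesteps the separate surprise that a signed one-bit field can only hold 0 or -1. Reading a one-bit field must yield just that bit, so no mask should be needed, and note that & 0x128 is not a single-bit mask anyway:

        #include <cstdio>

        struct Bits8 {
            unsigned char b1 : 1;   // one-bit fields
            unsigned char b7 : 1;
            unsigned char b8 : 1;
        };

        void func(bool bit) {
            std::printf(bit ? "do something\n" : "do something else\n");
        }

        int main() {
            Bits8 bits = {};   // zero-initialize every field
            bits.b7 = 1;
            bits.b8 = 1;
            // A conforming compiler extracts just the field here; if the
            // function receives the whole byte instead, the compiler is buggy.
            func(bits.b8);     // prints "do something"
            return 0;
        }

    If the embedded compiler really does pass the containing byte, copying the field into a local unsigned char before the call is a common workaround.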

    Read the article

  • Template expressions and Visual Studio 2005 C++

    - by chris
    I'd like to build the olb3d library with my Visual Studio 2005 compiler, but this fails due to template errors. To be more specific, the following expression seems to be a problem:

        void function(T u[Lattice::d])

    The project's website states that my compiler is probably not capable of such complicated template expressions, and that one should use gcc 3.4.1. My question is whether there is a way to upgrade my VS C++ compiler so it can handle template expressions at the level of gcc 3.4.1. Maybe it would help to get a newer version of Visual Studio? Cheers, C.
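
    For context, a minimal reproduction of the construct, with a stand-in descriptor D3 (the real olb3d types differ). The array extent in a parameter declaration is only documentation, since the parameter decays to T*, but the compiler still has to parse the dependent expression Lattice::d, which is presumably what VS2005 chokes on:

        struct D3 { static const int d = 3; };

        template <typename T, typename Lattice>
        void function(T u[Lattice::d]) {   // extent is a dependent constant
            u[0] = T();                    // u has type T* here regardless
        }

        int main() {
            double u[D3::d] = {};
            function<double, D3>(u);
            return 0;
        }

    If upgrading is not an option, rewriting such signatures as T* u should sidestep the parse without changing behaviour, though that means patching the library.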

    Read the article

  • Eclipse bug? Switching on a null with only default case

    - by polygenelubricants
    I was experimenting with enum, and I found that the following compiles and runs fine on Eclipse (Build id: 20090920-1017, not sure of the exact compiler version):

        public class SwitchingOnAnull {
            enum X { ,; }
            public static void main(String[] args) {
                X x = null;
                switch(x) {
                    default: System.out.println("Hello world!");
                }
            }
        }

    When compiled and run with Eclipse, this prints "Hello world!" and exits normally. With the javac compiler, it throws a NullPointerException as expected. So is there a bug in the Eclipse Java compiler?

    Read the article

  • How to disambiguate subdirs with the same name in the Projects list?

    - by jlstrecker
    My Qt project has 2 subdirs/subprojects with the same name. Their directories are myproject/node and myproject/compiler/test/node. The problem (or annoyance) is that, in Qt Creator's Projects list, both subdirs are listed as "node", so you have to open them up to figure out which is which. myproject.pro is like this:

        TEMPLATE = subdirs
        QMAKE_CLEAN = Makefile
        SUBDIRS += \
            compiler_test_node \
            node \
            ...
        compiler_test_node.subdir = compiler/test/node
        node.depends = compiler_vuo_compile
        ...

    Without renaming the myproject/compiler/test/node directory, is there a way to make it show up with a different name in the Projects list?

    Read the article

  • ANSI C++: Differences between delete and delete[]

    - by Sunscreen
    I was looking at a snippet of code:

        int* ip;
        ip = new int[100];
        delete ip;

    The example above states: "This code will work with many compilers, but it should instead read:"

        int* ip;
        ip = new int[100];
        delete [] ip;

    Is this indeed the case? I use the compiler "Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 11.00.7022 for 80x86" and it does not complain about the first example while compiling. At runtime the pointer is set to NULL. Do other compilers behave differently? Can a compiler not complain, yet issues appear at runtime? Thanks, Sun
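
    For what it's worth, the mismatch is undefined behaviour rather than a diagnosable error: the static type int* does not record which form of new produced the pointer, so a compiler generally cannot warn about it, and delete never sets the pointer to NULL (seeing NULL afterwards is likely a debug-heap artifact). A small sketch of the rule, assuming nothing beyond standard C++:

        int main() {
            int* ip = new int[100];   // array new: may store a hidden count
                                      // and, for class types, run destructors
            // delete ip;             // undefined behaviour: scalar delete
            delete[] ip;              // correct: new[] pairs with delete[]
            ip = 0;                   // only an explicit assignment nulls it
            return 0;
        }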

    Read the article

  • Converting a macro to an inline function

    - by Rob
    I am using some Qt code that adds a VERIFY macro that looks something like this:

        #define VERIFY(cond) \
        { \
            bool ok = cond; \
            Q_ASSERT(ok); \
        }

    The code can then use it whilst being certain the condition is actually evaluated, e.g.:

        Q_ASSERT(callSomeFunction()); // callSomeFunction not evaluated in release builds!
        VERIFY(callSomeFunction());   // callSomeFunction is always evaluated

    Disliking macros, I would instead like to turn this into an inline function:

        inline void VERIFY(bool condition) { Q_ASSERT(condition); }

    However, in release builds I am worried that the compiler would optimise out all calls to this function (as Q_ASSERT wouldn't actually do anything). Am I worrying unnecessarily, or is this likely, depending on the optimisation flags/compiler/etc.? I guess I could change it to:

        inline void VERIFY(bool condition) { condition; Q_ASSERT(condition); }

    But, again, the compiler may be clever enough to ignore the call. Is this inline alternative safe for both debug and release builds?
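
    One reassuring detail: function arguments are evaluated at the call site before the body runs, so VERIFY(callSomeFunction()) executes callSomeFunction() in every build, whatever the optimiser later does to the empty body. A minimal sketch, using assert as a stand-in for Q_ASSERT and a stub for the real call:

        #include <cassert>

        inline void verify(bool condition)
        {
            assert(condition);   // compiles away when NDEBUG is defined
            (void)condition;     // silences unused-parameter warnings in release
        }

        bool callSomeFunction() { return true; }   // stand-in side-effecting call

        int main() {
            verify(callSomeFunction());   // the call happens in debug and release
            return 0;
        }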

    Read the article

  • Annotation retention policy: what real benefit is there in declaring `SOURCE` or `CLASS`?

    - by watery
    I know there are three retention policies for Java annotations:

    - CLASS: Annotations are to be recorded in the class file by the compiler but need not be retained by the VM at run time.
    - RUNTIME: Annotations are to be recorded in the class file by the compiler and retained by the VM at run time, so they may be read reflectively.
    - SOURCE: Annotations are to be discarded by the compiler.

    And although I understand their usage scenarios, I don't get why specifying the retention policy is so important that retention policies exist at all. I mean, why aren't all annotations just kept at runtime? Do they generate so much bytecode / occupy so much memory that stripping those not declared as RUNTIME makes that much of a difference?

    Read the article

  • Upgrade Intel Xeon Prestonia to a 64-bit processor

    - by IDisposable
    In theory, could I upgrade an mPGA604-socket motherboard with a Prestonia processor to a 64-bit Intel Xeon processor? I've got a Dell PowerEdge 1750 with dual 2.8GHz Xeon processors running my Windows Home Server machine. I want to upgrade to the upcoming Vail release, but it is 64-bit only. The processors are Prestonia-core, which is pre-64-bit, but I was wondering if it is possible to swap in a pin-compatible later-generation processor. According to Wikipedia, the mPGA604 socket continued to be used for several later generations that do have the same pinout. So, IN THEORY, could I swap in a 64-bit part, like a Nocona-core?

    Read the article

  • Looking for actual experience of a two-drive RAID 5 failure?

    - by Brian
    I'm wondering if anyone has any personal experience of a two-drive RAID 5 failure with large drives? As I understand it, the theory is that with large 1-2TB drives, if one drive fails in the RAID set, the rebuild has to read everything and thus hits all the other drives very hard, so the chance of another failure goes up, especially if the drives were from the same manufacturing batch. And if you lose another drive, you lose all the data. This is usually explained after the statement "RAID is not backup", which I agree with. The theory makes sense, and I understand it, but does it really happen?
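
    For the curious, a hedged back-of-envelope behind that theory, assuming the common spec-sheet unrecoverable-read-error (URE) rate of 1e-14 per bit for consumer drives and a 4-disk RAID 5 of 2 TB drives, where a rebuild must read the 3 surviving disks in full:

        #include <cmath>
        #include <cstdio>

        int main() {
            const double ure_per_bit = 1e-14;            // spec-sheet figure
            const double bits_read   = 3.0 * 2e12 * 8;   // ~4.8e13 bits
            const double p = 1.0 - std::pow(1.0 - ure_per_bit, bits_read);
            std::printf("P(URE during rebuild) = %.0f%%\n", p * 100.0);   // ~38%
            return 0;
        }

    That models a read error during the rebuild rather than a second outright drive failure, and real-world URE rates are disputed, but it illustrates why the scenario gets taken seriously with large drives.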

    Read the article

  • Deleting is slow in Emacs under X11

    - by Malvolio
    I'm running GNU Emacs 21.4.1 on a remote Linux (CentOS) box, using my MacBook as the X server. It works fine, unless I try to delete a word, line, or region. Then it locks up for 30 seconds or so. It sounds like a minor thing, but you realize how often you delete something when you have to stop for 30 seconds every time. My theory is that Emacs is trying to put the text in the X server cut-and-paste buffer, which is trying to put it in the OS X cut-and-paste buffer, and somewhere along the way the process blocks until it times out. (My only evidence for this theory is (a) copy-region behaves the same way and (b) deleted text doesn't show up in the buffer.) Any suggestions appreciated. Edit: (setq interprogram-cut-function nil) fixed me right up. Which makes perfect sense. Thanks, Trey.

    Read the article

  • How much faster is a 64-bit CPU than a 32-bit CPU? [closed]

    - by W.N.
    I just need the result in theory. I'm not an expert in computer architecture, just a software developer. Most of my friends think a 64-bit CPU is 2 times faster than a 32-bit CPU. But I think a 64-bit CPU is 2^2 = 4 times faster than a 32-bit CPU (in theory). Which is the right answer to this question? And if there were a 128-bit CPU, how many times faster would it be than a 32-bit CPU? PS: I searched with Google and found a link that referred to benchmarks supporting my answer: "I've seen a benchmark where 64-bit was 4x faster, but that was with a particular encryption where the extra registers available made the difference."

    Read the article

  • MooTools slideshow not working with jQuery. Need help!

    - by Shantanu Gupta
    I am working on a site, http://tapasya.co.in, where I just implemented the MooTools slideshow. But I noticed that the menu bar I was using stopped working; it was supposed to drop down horizontally, but it is not being displayed now. I have used jQuery for it. Please see the source of the web page. What can be the problem? MooTools conflicting with the other JavaScript, or something else? If I try to use $.noConflict() it throws me an error in the script:

        Uncaught TypeError: Object function (B,C){if(B&&B.$family&&B.uid){return B}var A=$type(B);return($[A])?$[A](B,C,this.document):null} has no method 'noConflict'

    I tried the given solution below, but it is not working.

        <script type="text/javascript" src="<%= ResolveUrl("~/Js/jquery-1.3.2.min.js") %>"></script>
        <script type="text/javascript" src="<%= ResolveUrl("~/Scripts/SlideShow/js/mootools.js") %>"></script>
        <script type="text/javascript" src="<%= ResolveUrl("~/Scripts/SlideShow/js/slideshow.js") %>"></script>
        <script type="text/javascript" src="<%= ResolveUrl("~/Scripts/SlideShow/js/lightbox.js") %>"></script>

        <script type="text/javascript">
        // <![CDATA[
        $.noConflict();
        var timeout = 500;
        var closetimer = 0;
        var ddmenuitem = 0;

        function ddmenu_open(){
            ddmenu_canceltimer();
            ddmenu_close();
            ddmenuitem = $(this).find('ul').css('visibility', 'visible');
        }

        function ddmenu_close(){
            if(ddmenuitem) ddmenuitem.css('visibility', 'hidden');
        }

        function ddmenu_timer(){
            closetimer = window.setTimeout(ddmenu_close, timeout);
        }

        function ddmenu_canceltimer(){
            if(closetimer){
                window.clearTimeout(closetimer);
                closetimer = null;
            }
        }

        $(document).ready(function(){
            $('#ddmenu > li').bind('mouseover', ddmenu_open)
            $('#ddmenu > li').bind('mouseout', ddmenu_timer)
        });

        document.onclick = ddmenu_close;
        // ]]>
        </script>

        <script type="text/javascript">
        //<![CDATA[
        window.addEvent('domready', function(){
            var data = {
                '1.jpg': { caption: 'Acoustic Guitar,electric,bass,keyboard, indian vocal traning and Music theory.' },
                '2.jpg': { caption: 'Acoustic Guitar,electric,bass,keyboard, indian vocal traning and Music theory.' },
                '3.jpg': { caption: 'Acoustic Guitar,electric,bass,keyboard, indian vocal traning and Music theory.' },
                '4.jpg': { caption: 'Acoustic Guitar,electric,bass,keyboard, indian vocal traning and Music theory.' }
            };
            // Note the use of "linked: true" which tells Slideshow to auto-link all slides to the full-size image.
            // http://code.google.com/p/slideshow/wiki/Slideshow#Options:
            var mootoolsSlideshow = new Slideshow('divbanner', data, {
                loader: true, captions: true, delay: 5000, controller: false,
                height: 370, linked: false,
                hu: '<%= ResolveUrl("~/Scripts/SlideShow/Images/") %>',
                thumbnails: true, width: 1002
            });
            // Here we create the Lightbox instance.
            // In this case we will use the "close" and "open" callbacks to pause our show while the modal window is visible.
            var box = new Lightbox({
                'onClose': function(){ this.pause(false); }.bind(mootoolsSlideshow),
                'onOpen':  function(){ this.pause(true); }.bind(mootoolsSlideshow)
            });
        });
        //]]>
        </script>

    Read the article

  • Programming concepts taken from the arts and humanities

    - by Joey Adams
    After reading Paul Graham's essay Hackers and Painters and Joel Spolsky's Advice for Computer Science College Students, I think I've finally gotten it through my thick skull that I should not be loath to work hard in academic courses that aren't "programming" or "computer science" courses. To quote the former: "I've found that the best sources of ideas are not the other fields that have the word 'computer' in their names, but the other fields inhabited by makers. Painting has been a much richer source of ideas than the theory of computation." (Paul Graham, "Hackers and Painters") There are certainly other, much stronger reasons to work hard in the "boring" classes. However, it'd also be neat to know that these classes may someday inspire me in programming. My question is: what are some specific examples where ideas from literature, art, humanities, philosophy, and other fields made their way into programming? In particular, ideas that weren't obviously applied the way they were meant to (like most math and domain-specific knowledge), but instead gave utterance or inspiration to a program's design and choice of names. Good examples:

    - The term endian comes from Gulliver's Travels by Jonathan Swift (see here), where it refers to the trivial matter of which end people crack open their eggs.
    - The terms journal and transaction refer to nearly identical concepts in both filesystem design and double-entry bookkeeping (financial accounting). mkfs.ext2 even says: "Writing superblocks and filesystem accounting information: done"

    Off-topic:

    - Learning to write English well is important, as it enables a programmer to document and evangelize his/her software, as well as appear competent to other programmers online.
    - Trigonometry is used in 2D and 3D games to implement rotation and direction aspects.
    - Knowing finance will come in handy if you want to write an accounting package.
    - Knowing XYZ will come in handy if you want to write an XYZ package.

    Arguably on-topic:

    - The Monad class in Haskell is based on a concept by the same name from category theory. Actually, Monads in Haskell are monads in the category of Haskell types and functions. Whatever that means...

    Read the article

  • Unit-level testing, agile, and refactoring

    - by dsollen
    I'm working on a very agile development system: a small team, with me doing the vast majority of the programming myself. I've gotten to the testing phase and find myself writing mostly functional-level tests, which in theory I should be leaving to our tester (in practice I don't entirely... trust our tester to detect and identify defects well enough to leave him the sole writer of functional tests). In theory what I should be writing is unit-level tests. However, I'm not sure it's worth the expense. Unit testing takes more time than functional testing, since I have to set up mocks and plug into smaller units that weren't designed to run in isolation. More importantly, I refactor and redesign heavily. Part of this is due to my inheriting code that needed heavy redesign and is still being cleaned up, but even once I've finished removing the parts that need work, I'm sure that in the act of expanding the code I'll still do a decent amount of refactoring and redesign. It feels as if I will break my unit tests, forcing wasted time refactoring them as well, since unit tests, by definition, have to be coupled so closely to the code structure. So, is it worth all the wasted time when functional tests, which will never break when I refactor/redesign, should find most defects? Do unit tests really provide that much extra defect detection over thorough functional ones? And how does one create good unit tests that work with very quick and agile code that is modified rapidly? PS: I would be fine/happy with links to anything one considers an excellent resource for how to 'do' unit testing in a highly changing environment. Edit: to clarify, I am doing a bit of very unofficial TDD; I just seem to be writing tests at what would be considered a functional level rather than a unit level. I think part of this is because I own nearly all of the project, so I don't feel I need to limit the scope as much; and part of it is that it's daunting to think of trying to go back and retroactively add the unit tests needed to cover enough code that I could feel comfortable testing only a unit without the full functionality, and trust that the unit still works with the rest of the units.

    Read the article

  • Is it more valuable to double major in Computer Science/Software Engineering or get an undergraduate CS degree with a Masters in SE?

    - by Austin Hyde
    A friend and I (both in college) are currently debating which is better, in terms of employment opportunities, experience, and education: a Bachelor's degree in both Computer Science and Software Engineering, or a Bachelor's in Computer Science with a Master's in Software Engineering. My point of view is that I would rather go to school for 4-4.5 years to learn both sides of the field, and be out working on real projects gaining real experience, by going the double-major route. His point of view is that it would look better to potential employers if he had a Bachelor's in CS and a Master's in SE. That way, when he's finally done after 4 years of CS and 2-4 of SE (depending on where he goes), he can pretty much choose what he wants to do. We both agree on the distinction between the two degrees: CS is "traditional" and about the theory of algorithms, data structures, and programming, whereas SE is the study of the design of software and the implementation of CS theory. So, what's your stance on this debate? Have you gone one route or the other? And most importantly, why?

    Read the article

  • Is there any place to find real-world usage-style tutorials for programming languages?

    - by OleDid
    Let's face it: when you want to learn something completely new, be it mathematics or foreign languages, it's easiest to learn when you have real-world scenarios in front of you, with the theory applied. For example, trigonometry can be extremely interesting when applied to the creation of 2D platform games. Norwegian can be really interesting to learn if you live in Norway. When I look at a new programming language, I always find these steps the hardest:

    1. What tools do I need to compile, and how do I do it?
    2. The introduction step: why is this programming language so cool? Where and how is it used? (This is the step I am looking for: real-world scenarios.)

    The rest, deep diving into the language, pure theory and such, is often much easier if you have completed steps 1 and 2, because now you know what it's all about and can just read the specification when you need to. What I'm asking is: do you have any recommendations for places where I can find such material for programming languages? Be it websites or companies selling books in this style, I'm interested. Also, I am interested in all languages. (If I found a "real-world usage" explanation for even INTERCAL, I would be interested.) In some other thread here, I found a book called "Seven Languages in Seven Weeks". This is kind of what I am looking for, but I believe there must be "more like this".

    Read the article

  • How Can I Effectively Interview an Oracle Candidate?

    - by Tim Medora
    First, I browsed through SO for matching questions and didn't find one, but please point me in the right direction if this exact question has already been asked. I work with and around programmers of various skill levels on various platforms. I would consider my skills to be strong in terms of relational database design, query development, and basic performance tuning and administration. I'm mid-level when it comes to database theory. My team is looking to me to ensure that we have the best talent on staff, in this case, an engineer experienced in Oracle administration. To me, a well-rounded database administrator, regardless of platform, should also be competent in developing against the database so that is also a requirement. However my database skills are centralized around SQL Server 200x with experience in a few other products like SAP MaxDB, Access, and FoxPro. How can I thoroughly assess the skills of an Oracle engineer? I can ask high-level database theory questions and talk about routine tasks that are common across platforms, but I want to dig deep enough that I can be confident in the people I hire. Normally, I would alternate very specific questions that have a right/wrong answer with architectural questions that might have several valid answers. Does anyone have an interview template, specific questions, or any other knowledge that they can share? Even knowing the meaningful Oracle-related certifications would be a help. Thank you. EDIT: All the answers have been very helpful so far and I have given upvotes to everyone. I'm surprised that there are already 3 close votes on this question as "off topic". To be clear, I am specifically asking how a MS SQL Server engineer (like myself) can effectively interview a person with different but symbiotic skills. The question has already received specific, technical answers which have improved my own database design and programming skills. If this is more appropriate as a community wiki, please convert it.

    Read the article

  • How to implement "bullet time" in a multiplayer game?

    - by Tom
    I have never seen such a feature before, but it should provide an interesting gameplay opportunity. So yes, in a multiplayer/real-time environment (imagine an FPS), how could I implement a slow-motion/bullet-time effect? Something like an illusion for the player that's currently slo-mo'ed: everybody else sees him in real time, but he sees everything slowed down. Update: a side note: keep in mind that an FPS game has to be balanced in order for it to be fun. So yes, this bullet-time feature has to be solid, giving a small advantage to the "player" while not taking away from other players. Plus, there is a possibility that two players could activate their bullet time at the same time. Furthermore: I'm going to implement this in the future no matter what it takes, and the idea is to build a whole new game engine for all this. If that gives new options, I'm more than interested in hearing the ideas. Meanwhile, my team and I are thinking about this too; when our theory is crafted, I'm going to share it here. Is this even possible? The question of "is this even possible" has been answered; now it's time to find the best solution. I'm keeping the "answer" open until something exceptionally good comes up, like a prototype theory with something close to working pseudocode.
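
    One possible starting point, strictly a sketch under assumed names (Actor, Projectile, and nearBulletTimeActor are all hypothetical): keep the server authoritative and apply a local time scale to objects near the activating player, so the slowdown is spatial rather than global and everyone else keeps running in real time:

        struct Actor {
            double x = 0;
            bool bulletTime = false;   // set while the ability is active
        };

        struct Projectile {
            double x = 0, vx = 0;
        };

        // Hypothetical proximity test: is the projectile inside an actor's
        // bullet-time "bubble"?
        bool nearBulletTimeActor(const Projectile& p, const Actor& a) {
            return a.bulletTime && (p.x - a.x) * (p.x - a.x) < 100.0;
        }

        void tick(Projectile& p, const Actor& a, double frameDt) {
            // Objects in the bubble advance on a slowed clock; everything
            // else uses the real frame delta.
            const double scale = nearBulletTimeActor(p, a) ? 0.3 : 1.0;
            p.x += p.vx * frameDt * scale;
        }

        int main() {
            Actor a;      a.bulletTime = true;
            Projectile p; p.vx = 300.0;
            tick(p, a, 0.016);           // one 60 Hz frame inside the bubble
            return p.x > 0.0 ? 0 : 1;
        }

    Balancing then becomes tuning the bubble radius and the scale factor, and picking a rule (say, the minimum scale) when two bubbles overlap.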

    Read the article

  • Ubuntu 11.10 loads from live USB fine, but boots to a black screen from the hard drive. Why?

    - by Estel
    A few days ago I had a hard drive failure; the drive had been running Windows XP (32-bit) just fine. The second hard drive in my computer held a few unimportant files, so I formatted it in the Ubuntu setup and installed 11.10 without a hitch. I had been using it for about a week, but decided to install Windows 7 (64-bit) in order to use networking with my home server (running Windows Server 2000). My system is 64-bit based, and I had no problems installing other than a basic RAM error that required me to remove my RAM down to a single stick. I played with the settings in Windows 7 for around an hour before I shut down. After reinstalling the RAM, Windows 7 would not boot. I then assumed that something about my system was rejecting Win7, and I reinstalled Ubuntu. However, now Ubuntu (11.10) boots into a black screen. I've already tried activating the GRUB menu with the Shift key and following the steps listed here: https://wiki.ubuntu.com/X/Troubleshooting/BlankScreen but nothing seems to work. I've reinstalled twice now, with the same result each time. Now, the very odd part about this whole scenario is that the USB I installed from has no problem booting as a live USB. This puzzles me greatly, because the hard drive boots straight to a black screen while the live USB loads normally. At this point, my only theory is that the boot sector of the hard disk was somehow corrupted by Win7, and that Ubuntu was unable to completely write through. I used Darik's Boot and Nuke to wipe the drive, but was met with an error; this also puzzles me because the hard disk has no problems reading or writing. Any suggestions/comments are appreciated. If you have a theory, I will be more than happy to oblige.

    Read the article

  • Emulate historical figures (e.g. Einstein): is this possible using linguistic logic for my http://www.ustimeline.com education system?

    - by Johnnylight
    After hearing about the success of IBM's Watson, I started thinking: perhaps emulating human language is now possible? My goal is to create virtual historical characters, such as Einstein or Crazy Horse, to represent the main characters in my Adventur-Cation The Great American Adventure program. The goal is to build an intelligent system capable of indexing the internet and storing the data using a schema built on modern linguistic theory (phonemes, morphemes, syntax), so that the system can return a semantically sound response very similar to the response the same person would give if still alive today. The goal would be to use the same engine/system for all characters. Each character would have their own digital representation and voice, and would organize data differently based on the tags/keywords stored about the individual. Imagine a Max Headroom Einstein. Based on the success of Watson, I believe something like this may now be possible. It would be an interesting way to study history and a vehicle for entertainment as well. Can anyone confirm whether this has already been attempted? Is anyone interested in exploring this using cognitive science, psychology, artificial intelligence, historical data captured on the internet, and linguistic theory?

    Read the article

  • Ubuntu install can't find hard drives

    - by Casey Hungler
    I recently got a Dell Inspiron Special Edition 7720 computer. I am trying to install Ubuntu alongside Windows. When I use the WUBI installer, the installation of Ubuntu works as long as I do not boot into Windows; if I do boot into Windows, then when I go back into Ubuntu I am given a variety of error messages claiming a corrupt or missing kernel, root directory, etc. I have been working on this problem for about a week and have reinstalled Ubuntu MANY times. So far, I have eliminated all of the following problems: a corrupt WUBI installation (downloaded multiple times, used on other systems); bad install media (I have tried using a CD and a flash drive, both of which work on other computers); and a program within Ubuntu creating the problem. I also know that others have successfully installed Ubuntu on a computer with my operating system (Windows 7 SP1). This is a much shortened version of the original question, which has been up for about 5 days and included a more detailed description of the problem, but left everyone clueless as to the source. When I spoke with the Dell service technician who came over today to replace my keyboard, he suggested that the driver for my HDD was so new that it was not compatible with the current version of Ubuntu. His reasoning is as follows: 1) during an install from a flash drive or CD, where I am supposed to get the option to wipe my system or create a dual boot, I get a window that asks me to select a hard drive partition, but none are listed; 2) this model of computer was made public in June of this year, while Ubuntu was released in April. Adopting this theory, it would seem that the WUBI install fails after booting into Windows because Ubuntu can no longer find the files it needs to load. Does this theory seem at all plausible to anyone? I just want to install Ubuntu and have it stay on my computer. I don't care how I put it there, I just need it to work, so I would TRULY appreciate any advice or suggestions anyone could give. Thanks so much for your time and support!!!

    Read the article

  • When and how does one become a good programmer these days? [closed]

    - by YoungMoney
    I mean, good enough to make software people want and get paid for it. Maybe even good enough to launch a company or something. I'm also concerned that I'm not applying the finer points of my algorithms/data structures/software design knowledge. Background: I'm 20 and have been struggling with programming for about two years now, trying to become a software engineer. I started with a few university courses that I did quite poorly in. I learned how to make websites with HTML/JavaScript and PHP/MySQL, but feel like I know very little relevant theory for making good databases: how does something like Facebook serve hundreds of millions of people? What would be smart ways to store data? I don't know. Now I'm doing some Android application development, but again I have no idea about good Java design theory (I use static variables like they're going out of fashion) and feel more like I'm gluing stuff together and letting Eclipse slowly autocomplete my project. In short, I'm not sure if I'm becoming a legitimate software developer or just "doing what's cool". At least I've taken some data structures and algorithms courses and plan to take more in the next few years. But I'm having a really tough time applying this stuff to the fun little apps that I'm building. Every language higher-level than C++ seems to have its own quicksort function already built in, for example. Similarly, I can't remember ever needing to implement a linked list, heap, or binary tree, or worry about pointers and memory management. But maybe this is a good thing, so that I focus on other things? I'm not too sure what those other things are, though. Hopefully something more than building another photo-sharing app. Anyway, that's it for me; I look forward to your responses!

    Read the article

  • If you had to go back and re-learn your skill set, how would you do it?

    - by vorbb
    My younger brother is looking to start programming. He's 14 and technically inclined, but has no real programming experience. He's looking to me for guidance, and I don't feel as if my experience is enough, so I figured I'd ask here. He's more interested in web programming, but also has an interest in desktop/mobile/server applications. What would be a good learning path for him to take? I'm going to buy him a bunch of books for Christmas to get him started; the question is, what should he learn, and in which order? The way I see it, he needs to learn theory and code. I'd like to start him off with Python, Ruby, or PHP. If he wants to get into web, he's also going to need to learn HTML, CSS, JavaScript, etc. Out of those three domains (languages, theory, markup/etc.), what do you think is the best order to learn them in? Also, am I missing anything? Thanks!

    Read the article
