Search Results

Search found 25744 results on 1030 pages for 'google closure compiler'.


  • please help me understand libraries/includes

    - by fiftyeight
    I'm trying to understand how libraries work. For example, I downloaded a tarball and extracted it. Now I run "./configure", and from what I understand it searches pre-defined directories for certain library files. What does it do then? It creates a makefile, and the makefile contains the paths to these libraries? Then I run "make", and it compiles the source code and hard-codes the locations of the libraries? Am I correct? I don't really understand whether libraries are files with pre-defined paths or whether the OS somehow gives access to the libraries through system calls.

    Another example: I compiled something on my computer, then moved it to a remote server. The executable needs the MySQL libraries to work. The server has MySQL, but for some reason when I execute the file it tells me "can't find libmysqlclient.so.16". Is there a solution for this? Is there a way to know where it tries to locate this file, or to give it another path? I can't compile it on the server since it has no compiler, and I don't have root access to install packages.

    Last question: in the sequence "./configure", "make", "make install", is "make install" the only command that actually puts files outside the directory in which these files reside? If, for example, the software will be installed in /usr/local/, is "make install" the only command that will require "sudo" before it? Let me see if I got it correctly: "./configure" creates the Makefile according to the location of various files on your system, "make" compiles the source code according to this makefile, and "make install" moves the files to their appropriate location. I know this has been very long; I thank anyone who had the patience to read my question :)
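    (A minimal C++ sketch of the run-time side of this, assuming glibc/Linux: shared libraries are not baked into the binary by "make"; the dynamic loader resolves them at start-up by searching the binary's rpath, LD_LIBRARY_PATH, and the ldconfig cache. The program below performs the same lookup the failing executable does and prints the loader's own error message; the file name is hypothetical, compile with "g++ find_lib.cpp -ldl".)

      #include <dlfcn.h>   // dlopen/dlerror/dlclose, link with -ldl
      #include <cstdio>

      int main() {
          // Ask the dynamic loader for the library by the same name the
          // failing executable uses; this follows rpath, LD_LIBRARY_PATH,
          // and the ldconfig cache.
          void* handle = dlopen("libmysqlclient.so.16", RTLD_LAZY);
          if (!handle) {
              std::printf("loader says: %s\n", dlerror());
              return 1;
          }
          std::printf("found it, so the search path is fine\n");
          dlclose(handle);
          return 0;
      }

    (In practice "ldd your_binary" shows where each library resolves, and pointing LD_LIBRARY_PATH at a directory containing libmysqlclient.so.16 fixes the lookup without root access.)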


  • Determinism in multiplayer simulation with Box2D, and single computer

    - by Jake
    I wrote a small test car-driving multiplayer game with Box2D using TCP server-client communications. I ran 1 instance of server.exe and 2 instances of client.exe on the same machine where I code and compile the executables. I type inputs (WASD for simple car movement) into one of the 2 clients, and both clients update the simulation. There are 2 cars in the simulation. As long as the cars do not collide, I get identical output on both client.exe; I can run the car(s) around for as long as I like, and they still update the same. However, if I start to collide the cars, they very quickly go out of sync.

    My tools: Windows 7, C++, MSVS 2010, Box2D, freeGlut.

    My pseudocode:

      // client.exe
      void timer(int value) {
          tcpServer.send(my_inputs);
          foreach(i = player including myself)
              inputs[i] = tcpServer.receive();
          foreach(i = player including myself)
              players[i].process(inputs[i]);
          myb2World.step(33, 8, 6); // Box2D world step simulation
          foreach(i = player including myself)
              renderer.render(player[i]);
          glutTimerFunc(33, timer, 0);
      }

      // server.exe
      void serviceloop {
          while(all clients alive) {
              foreach(c = clients)
                  tcpClients[c].receive(&inputs[c]);
              // send input of each client to all clients
              foreach(source = clients) {
                  foreach(dest = clients) {
                      tcpClients[dest].send(inputs[source]);
                  }
              }
          }
      }

    I have read all over the internet and SE the following claims (paraphrased): Box2D is deterministic as long as the floating-point architecture/implementation is the same; and (for any deterministic engine) determinism is guaranteed if playback of recorded inputs is on the same machine, with an exe compiled using the same compiler and machine. Additionally, my server.exe and client.exe game loops are single-threaded with blocking socket calls and a fixed time step.

    Question: Can anyone explain what I did wrong to get different Box2D output?
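    (A minimal lockstep sketch of the usual discipline, with Player and Input as hypothetical stand-ins for the game's own types: every client must apply the same inputs in the same canonical order and call b2World::Step the same number of times with identical arguments. Note that Box2D's time step is in seconds, so a 33 ms tick is roughly 1/30th, not 33; and glutTimerFunc only guarantees "at least" the requested delay, so clients stepping once per timer callback can silently disagree on how many ticks have elapsed.)

      #include <Box2D/Box2D.h>
      #include <vector>

      struct Input  { bool w, a, s, d; };   // hypothetical per-tick input
      struct Player { void process(const Input&) { /* steer/accelerate the car body */ } };

      void lockstepTick(b2World& world,
                        std::vector<Player>& players,
                        const std::vector<Input>& inputs)  // index i = player i, on every client
      {
          // Canonical order: player 0 first, player N-1 last, identical everywhere.
          for (std::size_t i = 0; i < players.size(); ++i)
              players[i].process(inputs[i]);

          // Fixed step in seconds, fixed iteration counts; never pass a measured
          // frame time here, and never skip or double a tick on one client.
          world.Step(1.0f / 30.0f, 8, 6);
      }

    (The usual way to enforce agreement is to tag each input packet with a tick number and only step tick N once inputs for tick N have arrived from every player, rather than stepping from a timer.)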


  • History of Mobile Technology

    - by David Dorf
    Over the last ten years, mobile phones have gone through several incremental technology leaps that have added capabilities that impact the retail industry. I've listed the six major ones below, along with their long-lasting impact.

    1. Location
    In the US, the FCC required mobile phones to implement E911 (emergency calls) by 2006, requiring the caller to be located to within 300 meters. Back in 2000, GPS was opened up for civilian use, and by 2004 Qualcomm had figured out how to use GPS in mobile phones. So mobile operators moved from cell tower triangulation to GPS, principally for E911. But then lots of other uses became apparent, especially navigation. The earliest mobile apps from retailers made it easy to find nearby stores, and companies are looking at ways to use WiFi triangulation inside stores.

    2. Computer Vision
    In 1997 Philippe Kahn shared a photo of his newborn using a mobile phone, thus launching the popularity of instant visual communications. Over the years the quality of the cameras got better, reaching the point where barcodes could be read around 2008. That's when Occipital came on the scene with their RedLaser application, which was eventually acquired by eBay. This opened up the ability for consumers to easily price-compare inside stores. Other interesting apps included Tesco's Wine Finder and Amazon's Price Checker, both allowing products to be identified by picture.

    3. Augmented Reality
    Once the mobile phone had GPS, a video camera, and compass functionality, it was suddenly possible to overlay digital information on the screen in real time. Yelp, which was using GPS to find nearby merchants, created a backdoor called Monocle on the iPhone that showed nearby merchants overlaid on the video camera view. Today AR apps are mostly used by retailers for marketing, like Moosejaw's app that undresses models in their catalog.

    4. Geo-Fencing
    So if we're able to track the location of a mobile phone, why not use that context to offer timely information? My first experience with geo-fencing came courtesy of North Face, the outdoor enthusiast store. When a mobile phone enters a predetermined area, like near a store, a text message is sent to the phone with an offer or useful information. Of course retailers can geo-fence their competitors as well and find out which customers aren't so loyal.

    5. Digital Wallet
    Mobile payments leverage different technologies such as NFC, QR codes, Bluetooth, and SMS to facilitate communication between the consumer's phone and the retailer's point-of-sale. The key here is the potential to consolidate loyalty cards, coupons, and bank cards into the mobile phone and enable faster checkout. Nobody does this better than Starbucks today, but McDonald's and Dunkin' Donuts aren't far behind. Google, Isis, PayPal, Square, and MCX are all vying for leadership in this area. If NFC does finally take off, it will be leveraged by retailers in more places than just the POS.

    6. Voice Response
    Mobile phones have had the ability to interpret simple voice commands for a while, but Google and Amazon were the first to use voice to allow searches for products. Allowing searches by text, barcode, and voice makes it easy to comparison shop in the aisles. Walmart even uses voice to build shopping lists, and if the Siri API is ever opened we could see lots more innovation in this area.


  • What follows after lexical analysis?

    - by madflame991
    I'm working on a toy compiler (for some simple language like PL/0) and I have my lexer up and running. At this point I should start working on building the parse tree, but before I start I was wondering: how much information can one gather from just the string of tokens? Here's what I've gathered so far:

    - One can already do syntax highlighting with only the list of tokens. Numbers and operators get colored accordingly, and keywords also.

    - Autoformatting (indenting) should also be possible. How? Specify for each token type how many white spaces or newline characters should follow it. Also, while printing tokens, maintain an alignment variable: when the code printer reads "{" it increments the alignment variable by 1, and decrements it by 1 for "}". Whenever it starts printing on a new line, the code printer indents according to this alignment variable. (A sketch of this scheme follows below.)

    - In languages without nested subroutines, one can get a complete list of subroutines and their signatures. How? Just read what follows the "procedure" or "function" keyword until you hit the first ")" (this should work fine in a Pascal-like language with no nested subroutines).

    - In languages like Pascal you can even determine local variables and their types, as they are declared in a special place (OK, you can't handle initialization as well, but you can parse sequences like "var a, b, c: integer").

    - Detection of recursive functions may also be possible, or even a graph representation of which subroutine calls whom. If one can identify the body of a function, then one can also search it for mentions of other functions' names.

    - Gathering statistics about the code, like number of lines, instructions, subroutines.

    EDIT: I clarified why I think some processes are possible. As I read comments and responses I realise that the answer depends very much on the language that I'm parsing.
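    (A minimal C++ sketch of the alignment-variable indenter described above, assuming a trivially simplified token type; real tokens would carry a kind and source position as well.)

      #include <cstdio>
      #include <string>
      #include <vector>

      // Hypothetical minimal token: its spelling plus whether a newline follows it.
      struct Token { std::string text; bool newlineAfter; };

      // Re-print a token stream, indenting on '{' and dedenting on '}'.
      void prettyPrint(const std::vector<Token>& tokens) {
          int indent = 0;           // the "alignment variable"
          bool atLineStart = true;
          for (const Token& t : tokens) {
              if (t.text == "}" && indent > 0) --indent;   // dedent before printing '}'
              if (atLineStart) std::printf("%*s", 4 * indent, "");
              std::printf("%s", t.text.c_str());
              if (t.text == "{") ++indent;
              atLineStart = t.newlineAfter;
              std::printf(t.newlineAfter ? "\n" : " ");
          }
      }

      int main() {
          prettyPrint({{"int", false}, {"f()", false}, {"{", true},
                       {"return", false}, {"0;", true}, {"}", true}});
      }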


  • the OpenJDK group at Oracle is growing

    - by john.rose
    The OpenJDK software development team at Oracle is hiring. To get an idea of what we're looking for, go to the Oracle recruitment portal and enter the Keywords "Java Platform Group" and the Location Keywords "Santa Clara". (We are a global engineering group based in Santa Clara.) It's pretty obvious what we are working on; just dive into a public OpenJDK repository or OpenJDK mailing list.

    Here is a typical job description from the current crop of requisitions:

    The Java Platform group is looking for an experienced, passionate and highly-motivated Software Engineer to join our world class development effort. Our team is responsible for delivering the Java Virtual Machine that is used by millions of developers. We are looking for a development engineer with a strong technical background, a thorough understanding of the Java Virtual Machine, Java execution runtime, classloading, garbage collection, JIT compiler, and serviceability, and a desire to drive innovations. As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures. You will be responsible for defining and developing software for tasks associated with the developing, designing and debugging of software applications or operating systems. Work is non-routine and very complex, involving the application of advanced technical/business skills in area of specialization. Leading contributor individually and as a team member, providing direction and mentoring to others. BS or MS degree or equivalent experience relevant to functional area. 7 years of software engineering or related experience.


  • Webcast Q&A: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter

    - by Kellsey Ruppel
    Last week we had the fourth webcast in our WebCenter in Action webcast series, "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter", where customer Joe Lichtefeld from ResCare and Wayne Boerger & Doug Thompson from Oracle Partner TEAM Informatics shared how Oracle WebCenter is allowing ResCare to solve content lifecycle challenges, reduce compliance and business risks, and increase adoption of their intranet as the primary business communication tool. In case you missed it, here's a recap of the Q&A.

    Joe Lichtefeld, ResCare

    Q: Did you run into any issues in the deployment of the platform?
    A: We experienced very few issues when implementing the content management and search functionalities. There were some challenges in determining the metadata structure. We tried to find a fine balance between having enough fields to provide the functionality needed, while trying to limit the impact on the contributing members.

    Q: What has been the biggest benefit your end users have seen?
    A: The biggest benefit to date is two-fold. Content on the intranet can be maintained by the individual contributors more quickly than in our old process, where all requests were updated by IT. The other big benefit is the ability to find the most current version of a document instead of relying on emails and phone calls to track down the "current" version.

    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: We experienced very little resistance. Most of our community groups were eager to be able to contribute and maintain their information. We had the normal hurdles of training and follow-up training that come with implementing a new system and process. As our second phase rolled out access to all employees, we have received more positive feedback on the accessibility of information.

    Wayne Boerger & Doug Thompson, TEAM Informatics

    Q: Can you integrate multiple repositories with the Google Search Appliance?
    A: Yes, the Google Search Appliance is designed to index lots of different repositories, from both public and internal sources. There are included connectors for many repositories, such as SharePoint, databases, file systems, and LDAP, and, with the TEAM GSA Connector, the Oracle Content Server. And the index for these repositories can be configured into different collections depending on the use cases that each customer has, and really, for each need within a customer environment.

    Q: How many different filters can you add when the search results are returned?
    A: Presuming this question is about filtering the search results: you can add as many filters as you like, and it can be done by collection or any number of other criteria. Most importantly, customers now have the ability to limit the returned content by a set metadata value.

    Q: With the TEAM Sites Connector, what types of content can you sync?
    A: There's really no limit; if it can be checked into the content server, then it is eligible for sync into Sites. So basically, any digital file that has relevance to a Sites implementation can be checked into the WebCenter Content central repository, and then the connector can/will manage it.

    Q: Using the Connector, are there any limitations around where in Sites that synced content can be used?
    A: There are no limitations on where it can be used. When setting up your environment to use it, you just need to think through the different destinations on the Sites side that might use the content; that way you've got the right information to create the rules needed for the connector.

    If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!


  • Starting out with OpenGL when most tutorials are out of date

    - by AUTO
    I'm sure there are already a bunch of questions like this asked, but the constant updating of the OpenGL library throws them all away, and in a month or two the answers here will be worthless again. I am ready to start programming in OpenGL using C++. I've got a working compiler (DevCpp; do NOT ask me to switch to VC++, and don't ask me why). Now I'm just looking for a solid tutorial on how to program with OpenGL. My assistant found the tutorial provided by NeHe Productions, but as I've come to find out, it's WAY OUT OF DATE! (Although I did pull together a basic window to support an OpenGL canvas.) Then I went online and found the OpenGL SuperBible, which apparently uses freeglut? But what I'd like to know is whether or not the SuperBible 5th edition is still up to date. The suggestion to use freeglut that I found said the latest version was 2.6.0, but now it's 2.8.0!

    Is the OpenGL SuperBible still a good, and fairly up-to-date, place to start? Is there a better place to go to learn OpenGL? Am I allowed to simply store freeglut in the DevCpp include directory (maybe in GL), or is there some important procedure? Are there any comments or suggestions that I didn't think to ask about, since I'm only just beginning?

    @dreta cleared some things up for me, so now I have a better idea of what to ask: I think I'd like to start out with OpenGL using a wrapper library instead of accessing OpenGL directly. I just think that, for a beginner, it would be easier to program and get good results while I don't yet have to understand all the grimy details (as @stephelton mentioned). The problem is, I can't find any library that doesn't have undefined references to no-longer-supported functions. Freeglut sounds operational, but it still uses GLU. Does anyone know what I can do? Also, I tried compiling the first SuperBible's source, but I got errors since GLAPI is not being defined as a type, the error originating in the GLU library. I'd like to use the SuperBible, but I don't know how to fix this.


  • Why have minimal user/handwritten code and do everything in XAML?

    - by Mrk Mnl
    I feel the MVVM community has become overzealous, like the OO programmers in the 90's; it is a misnomer that MVVM is synonymous with no code. From my closed StackOverflow question:

    Many times I come across posts here about someone trying to do the equivalent in XAML instead of code-behind, their only reason being that they want to keep their code-behind 'clean'. Correct me if I am wrong, but is it not the case that:

    - XAML is compiled too (into BAML), and then at runtime it has to be parsed into code anyway.
    - XAML can potentially have more runtime bugs, as they will not be picked up by the compiler at compile time (from incorrect spellings, for example); these bugs are also harder to debug.
    - There already is code-behind, like it or not: InitializeComponent(); has to be run, and the .g.i.cs file it lives in contains a bunch of code, though it may be hidden.

    Is it purely psychological? I suspect it is developers who come from a web background and like markup as opposed to code.

    EDIT: I don't propose code-behind instead of XAML; use both. I prefer to do my binding in XAML too. I am just against making every effort to avoid writing code-behind, especially in a WPF app; it should be a fusion of both to get the most out of it.

    UPDATE: It's not even Microsoft's idea; every example on MSDN shows how you can do it in both.


  • Reinventing the Wheel, why should I?

    - by Mercfh
    So I have this problem. It may be my OCD (I have OCD; it's not severe, but it makes me very, let's say, specific about certain things, programming being one of them), or it may be the fact that I graduated college and still feel "meh" at programming. Reading This made me think "Oh, that's me!", but that's not really my main problem.

    My big problem is: anytime I'm using a high-level language/API/etc., I always think to myself that I'm not really "programming". I know, I know, it sounds stupid. But I feel like if I can't figure out how to do it at the lowest level, then I'm not really "understanding" it. I do this for just about every new technology I learn: I look at the lowest level and try to understand it. Sometimes I do... most of the time I don't. I mean, I've only really been programming for 4 years (at college, if you even call it programming; our university's program was "meh").

    For instance, I do a little bit of embedded programming (with the Atmel AVR 8-bit/Arduino stuff), and I can't bring myself to use the C compiler, even though it's 8 million times easier than using assembly. It's stupid, I know. Anyone else feel like this? I think it's just my OCD that makes me feel this way, but has anyone else ever felt like they need to go down to the lowest level of the language to even be satisfied with using it?

    I apologize for the very, very odd question, but I think it really hinders me in getting deeply invested in a programming language and making a real application of my own. (It's silly, I know.)


  • Should the syntax for disabling code differ from that of normal comments?

    - by deltreme
    For several reasons during development I sometimes comment out code. As I am chaotic and sometimes in a hurry, some of these make it into source control. I also use comments to clarify blocks of code. For instance:

      MyClass MyFunction()
      {
          (...)
          // return null; // TODO: dummy for now
          return obj;
      }

    Even though it "works", and a lot of people do it this way, it annoys me that you cannot automatically distinguish commented-out code from "real" comments that clarify code: it adds noise when trying to read code, and you cannot search for commented-out code in, for instance, an on-commit hook in source control.

    Some languages support multiple single-line comment styles. For instance, in PHP you can use either // or # for a single-line comment, and developers can agree on using one of these for commented-out code:

      # return null; // TODO: dummy for now
      return obj;

    Other languages, like C# which I am using today, have one style for single-line comments (right? I wish I were wrong). I have also seen examples of "commenting out" code using compiler directives, which is great for large blocks of code, but a bit overkill for single lines, as two extra lines are required for the directive:

      #if compile_commented_out
          return null; // TODO: dummy for now
      #endif
      return obj;

    So, as commenting out code happens in every(?) language, shouldn't "disabled code" get its own syntax in language specifications? Are the pros (separation of comments and disabled code; editors and source control acting on them) good enough, and are the cons ("shouldn't do commenting-out anyway", not a functional part of a language, potential IDE lag (thanks Thomas)) worth sacrificing?

    Edit: I realise the example I used is silly; the dummy code could easily be removed as it is replaced by the actual code.
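    (For the C family there is a conventional middle ground worth noting: a preprocessor block of the form "#if 0" disables code without inventing new syntax, survives code that itself contains /* */ comments, and is trivially greppable by an on-commit hook. A small, self-contained C++ sketch:)

      #include <cstdio>

      static int widget = 42;

      int* find_widget()
      {
      #if 0   // disabled code: the preprocessor skips it, and "#if 0" is
              // easy for tools to distinguish from explanatory comments
          return nullptr;  // TODO: dummy for now
      #endif
          return &widget;  // real implementation
      }

      int main()
      {
          std::printf("%d\n", *find_widget());
          return 0;
      }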


  • How to diagram custom programming languages, non textual?

    - by Adam
    I've used and created domain-specific languages before, plenty of times (e.g. using yacc/lex). Normally we'd start with a grammar written in BNF and a bunch of keywords. This is easy to do and easy to share.

    Recently, I've started working with diagrammatic programming languages; the closest parallel is circuit diagrams in electronics, where it's very difficult to express ideas in text but very easy to express them in wiring diagrams. This is a new and novel problem for me: how do you efficiently express these mini-languages and share concepts in them with colleagues? (I.e., how do you whiteboard-program within them? Actual programming is easy: you have physical components to hand.)

    Are there tools for this? Or good/best practices (e.g. the equivalent of "always use BNF as the starting point for your new DSL, and use tools like yacc to generate the parser, compiler, etc.")? My google-fu is proving weak; all I get is false positives for wiring diagrams and UML editors (since these are custom languages, UML doesn't seem to help).


  • How to write your unit tests to switch between NUnit and MSTest

    - by Justin Jones
    On my current project I found it useful to use both NUnit and MSTest for unit testing. When using ReSharper for running unit tests, it simply works better with NUnit, and on large-scale projects NUnit tends to run faster. We would have just used NUnit for everything, but MSTest gave us a few bonuses out of the box that were hard to pass up: namely, code coverage (without having to shell out thousands of extra dollars for the privilege) and tests integrated into the build process. I'm one of those guys who wants the build to fail if the unit tests don't pass; if they don't pass, there's no point in sending that build on to QA.

    Making the build work with MSTest is easiest if you just create a unit test project in your solution. This adds the right references and project type GUIDs in the project file so that everything automagically just works. Then (using NuGet, of course) you add in NUnit. At the top of your test file, remove the using statements that refer to MSTest and replace them with the following:

      #if NUNIT
      using NUnit.Framework;
      #else
      using TestFixture = Microsoft.VisualStudio.TestTools.UnitTesting.TestClassAttribute;
      using Test = Microsoft.VisualStudio.TestTools.UnitTesting.TestMethodAttribute;
      using TestFixtureSetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute;
      using SetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute;
      using Microsoft.VisualStudio.TestTools.UnitTesting;
      #endif

    Basically I'm taking the NUnit naming conventions and redirecting them to MSTest. You can go the other way, of course; I only chose this direction because I had already written the tests as NUnit tests. NUnit and MSTest provide largely the same functionality with slightly differing class names. There are few actual differences between them, and I have not run into any on this project so far. To run the tests as NUnit tests, simply open up the project properties tab and add the compiler directive NUNIT. Remove it, and you're back in MSTest land.


  • Finding header files

    - by rwallace
    A C or C++ compiler looks for header files using a strict set of rules: relative to the directory of the including file (if "" was used), then along the specified and default include paths; fail if still not found. An ancillary tool such as a code analyzer (which I'm currently working on) has different requirements: it may, for a number of reasons, not have the benefit of the setup performed by a complex build process, and has to make the best of what it is given. In other words, it may find a header file not present in the include paths it knows, and have to take its best shot at finding the file itself. I'm currently thinking of using the following algorithm:

    1. Start in the directory of the including file.
    2. Is the header file found in the current directory or any subdirectory thereof? If so, done.
    3. If we are at the root directory, the file doesn't seem to be present on this machine, so skip it. Otherwise move to the parent of the current directory and go to step 2.

    Is this the best algorithm to use? In particular, does anyone know of any case where a different algorithm would work better?
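    (A compact C++17 sketch of that algorithm using std::filesystem, as one possible reading of it; the error_code overloads keep permission problems from aborting the walk.)

      #include <filesystem>
      #include <optional>
      #include <string>

      namespace fs = std::filesystem;

      // Search dir and everything below it for `name`; if absent, move to
      // the parent directory and repeat, giving up at the filesystem root.
      std::optional<fs::path> findHeader(fs::path dir, const std::string& name)
      {
          std::error_code ec;
          for (dir = fs::absolute(dir, ec); !dir.empty(); dir = dir.parent_path()) {
              fs::recursive_directory_iterator it(dir, ec), end;
              for (; it != end; it.increment(ec))
                  if (!ec && it->path().filename() == name)
                      return it->path();           // step 2: found
              if (dir == dir.root_path())
                  return std::nullopt;             // step 3: at the root, give up
          }
          return std::nullopt;
      }

    (Note that, written exactly as specified, each move to a parent re-scans subtrees already searched; a caller that cares can remember the child directory just visited and skip it.)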


  • Can't (re)install lazarus

    - by twice
    I had Lazarus installed (from the website, not the Ubuntu Software Center). While upgrading to 13.10, and after that, it posted some error message (which I can't remember) at every start. I decided it shouldn't suffer, so I started reinstalling it, but that didn't work. Then I thought I'd start all over, and deleted and purged everything. But it still would not install. I found a similar, very similar question to mine here: Cant correctly install Lazarus. I tried everything said there:

      sudo dpkg -l | grep "^rc"

    returned some packages. Since "sudo apt-get -f" brought up the apt-get help message, and "sudo apt-get -f install" did nothing, I went ahead and manually purged every broken package, which seemed to bring me forward, since it removed the entries in "sudo dpkg -l | grep "^rc"". When I was done I tried again; since it didn't work I rebooted; no change. This is the error message I get on the attempt to install the first package (fpc-2.6.2):

      root@Someone-PC:~# dpkg -i /home/someone/Desktop/fpc_2.6.2-0_amd64\ \(3\).deb
      (Reading database ... 242209 files and directories currently installed.)
      Unpacking fpc (from .../fpc_2.6.2-0_amd64 (3).deb) ...
      dpkg: error processing /home/someone/Desktop/fpc_2.6.2-0_amd64 (3).deb (--install):
       trying to overwrite '/usr/lib/fpc/2.6.2/msg/errores.msg', which is also in package fp-compiler-2.6.2 2.6.2-5
      dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
      Errors were encountered while processing:
       /home/someone/Desktop/fpc_2.6.2-0_amd64 (3).deb

    So, how can I fix this and install Lazarus?


  • Why isn't there a typeclass for functions?

    - by Steve314
    I already tried this on Reddit, but there's no sign of a response; maybe it's the wrong place, maybe I'm too impatient. Anyway...

    In a learning problem I've been messing around with, I realised I needed a typeclass for functions, with operations for applying, composing, etc. Reasons:

    - It can be convenient to treat a representation of a function as if it were the function itself, so that applying the function implicitly uses an interpreter, and composing functions derives a new description.

    - Once you have a typeclass for functions, you can have derived typeclasses for special kinds of functions; in my case, I want invertible functions.

    For example, functions that apply integer offsets could be represented by an ADT containing an integer. Applying those functions just means adding the integer. Composition is implemented by adding the wrapped integers. The inverse function has the integer negated. The identity function wraps zero. The constant function cannot be provided because there's no suitable representation for it.

    Of course it doesn't need to spell things as if the values were genuine Haskell functions, but once I had the idea, I thought a library like that must already exist, and maybe even using the standard spellings. But I can't find such a typeclass in the Haskell library. I found the Data.Function module, but there's no typeclass, just some common functions that are also available from the Prelude.

    So: why isn't there a typeclass for functions? Is it "just because there isn't", or "because it's not so useful as you think"? Or maybe there's a fundamental problem with the idea? The biggest possible problem I've thought of so far is that function application on actual functions would probably have to be special-cased by the compiler to avoid a looping problem: in order to apply this function I need to apply the function application function, and to do that I need to call the function application function, and to do that...


  • Any tips on getting hired as a software project manager straight out of college?

    - by MHarrison
    I graduated with a BS in compsci last September, and I've been trying (unsuccessfully) to find a job as a project manager ever since. I fell in love with software engineering (the formal practice behind it all, not just coding) in school, and I've dedicated the last 3-4 years of my life to learning everything I can about project management and gaining experience. I've managed several projects (with teams of around 12 people) while in school, and I worked with my university's software engineering research lab. My résumé is also decent: I worked as a programmer before I went to school (I'm 27 now), and I did Google Summer of Code for 3 summers. I also have general "people management" experience from working as the photo editor of my university's newspaper for 2 years.

    My first problem with the job hunt is not getting enough interviews. I use careers.stackoverflow.com, which is awesome because I usually get contacted by non-HR people who know what they're talking about, but there just aren't enough companies using it for me to get interviews on a regular basis. I've also tried sites like monster.com, and in a fit of desperation I sent out no fewer than 60 applications to project management positions. I've gotten 3 automated rejection letters, and that's it. At least careers.stackoverflow gets me a phone interview with 8/10 places I apply to.

    But the main (and extremely frustrating) problem is the matter of experience. I've successfully managed projects from start to finish (in my software engineering classes we had real customers come in with a real software need, and we built it for them), but I've never had to deal with budgets and money (I know this is why HR people immediately turn me away). Most of these positions require 5+ years of PM experience, and I've seen absurd things like 12+ years required.

    Interviews are also maddening. I've had so many places that absolutely loved me, where I made it to the final round of interviews and left thinking things went extremely well and they'd consider me. However, when I check in with them a week later, they tell me "We really liked you and your qualifications are excellent, but we're hoping to find someone with more experience." The bad interviews I can understand, like the PM position that would have had me managing developers both locally and overseas: I had 3 interviews with them, and the ENTIRE interview process was them asking me CS brainteasers and having me waste time on things like writing quicksort on paper or writing binary search trees. Even when I tried steering the discussion towards more relevant PM stuff, they gave me some vague generic replies and went back to the "We want to be Google/MS" crap. But when I have a GOOD interview, they say my "qualifications are excellent" but they want "more experience"... that makes me want to tear my hair out.

    What else can I DO? While I'm aiming for technically-involved PM positions (not just crunching budget numbers), I really don't want a straight development job, because I like creating software from the very high level versus spending a lot of time debugging memory leaks. In fact, I can't even GET development positions that I'm qualified for, because I make the mistake of telling them that my future career goal is to be a PM (which usually results in them saying something like "Well, we already have PMs, and this position isn't really set up to get you there", which I take to mean "No, that's my job, stay away").

    My apologies for the long rant, but I'm seriously hellbent on getting hired as a PM, since it's both my career goal and the passion that keeps me awake at night. Any suggestions on what the heck else I can do? I'm currently writing a blog where I talk about my philosophies about software engineering, and I'm writing up specs for an iOS app which I will design, code, and show employers, but this takes an awful lot of time that I don't have.


  • How to install custom c library?

    - by arijit
    I just wanted to add a C library to Ubuntu that was created by Harvard University for the CS50 course. They provided instructions for how to install the library, listed below.

    Debian, Ubuntu: First become root, as with:

      sudo su -

    Then install the CS50 Library as follows:

      apt-get install gcc
      wget http://mirror.cs50.net/library/c/cs50-library-c-3.1.zip
      unzip cs50-library-c-3.1.zip
      rm -f cs50-library-c-3.1.zip
      cd cs50-library-c-3.1
      gcc -c -ggdb -std=c99 cs50.c -o cs50.o
      ar rcs libcs50.a cs50.o
      chmod 0644 cs50.h libcs50.a
      mkdir -p /usr/local/include
      chmod 0755 /usr/local/include
      mv -f cs50.h /usr/local/include
      mkdir -p /usr/local/lib
      chmod 0755 /usr/local/lib
      mv -f libcs50.a /usr/local/lib
      cd ..
      rm -rf cs50-library-c-3.1

    I did exactly as directed, but the compiler reported "undefined reference" to a function; the function was GetString. So I searched for a solution and found one: it said to use the -l switch. Now when I compile I use something like:

      gcc -o hello hello.c -lcs50

    (I don't remember the exact command.) However, I cannot use the make command, which is easier to use. I understand that there is some problem with linking the library. What is a good solution to this problem?


  • Java Champion Dick Wall Explores the Virtues of Scala (otn interview)

    - by Janice J. Heiss
    In a new interview up on otn/java, titled "Java Champion Dick Wall on the Virtues of Scala (Part 2)," Dick Wall explains why, after a long career in programming exploring Lisp, C, C++, Python, and Java, he has finally settled on Scala as his language of choice. From the interview:

    "I was always on the lookout for a language that would give me both Python-like productivity and simplicity for just writing something and quickly having it work and that also offers strong performance, toolability, and type safety (all of which I like in Java). Scala is simply the first language that offers all those features in a package that suits me. Programming in Scala feels like programming in Python (if you can think it, you can do it), but with the benefit of having a compiler looking over your shoulder and telling you that you have the wrong type here or the wrong method name there. The final 'aha!' moment came about a year and a half ago. I had a quick task to complete, and I started writing it in Python (as I have for many years) but then realized that I could probably write it just as fast in Scala. I tried, and indeed I managed to write it just about as fast."

    Wall makes the remarkable claim that once Java developers have learned to work in Scala, when they work on large projects they typically find themselves more productive than they are in Java. "Of course," he points out, "people are always going to argue about these claims, but I can put my hand over my heart and say that I am much more productive in Scala than I was in Java, and I see no reason why the many people I know using Scala wouldn't say the same without some reason."

    Read the interview here.


  • GCC: assembly listing for IA64 without an Itanium machine

    - by KD04
    I need to try the following thing: I would like to compile some simple C code samples and see the assembly listing generated by GCC for the IA64 architecture, i.e. I just want to run GCC with the -S switch and look at the resulting .s file. I don't have an Itanium machine, so in order to do it myself I'll probably need a cross-compiling version of GCC built for x86 RedHat. I'm not interested in full cross-compilation, meaning that I don't need to generate the binaries at all.

    The easiest way, of course, would be to find an Itanium machine with GCC and just try it there. Unfortunately, I don't seem to have access to any. Another option is to build a cross-compiling GCC on my RedHat, but apparently that's quite an endeavor for someone who hasn't done it before (I assume the fact that I only need .s output doesn't make it simpler). What other options are there, if any? Maybe there's some sort of web front-end to an Itanium GCC compiler on the net (something like Comeau Online or ideone.com, but with .s output)? Anything else? I would appreciate any help.


  • Find non-ascii characters from a UTF-8 string

    - by user10607
    I need to find the non-ASCII characters in a UTF-8 string.

    My understanding: UTF-8 is a superset of ASCII, in which the values 0-127 are the ASCII characters. So if, in a UTF-8 string, a character's value is not between 0 and 127, then it is not an ASCII character, right? Please correct me if I'm wrong here.

    On that understanding I have written the following code in C (note: I'm using the Ubuntu gcc compiler; the UTF-8 string is "xvab c"):

      long i;
      char arr[] = "xvab c";
      printf("length : %lu \n", sizeof(arr));
      for(i=0; i<sizeof(arr); i++){
          char ch = arr[i];
          if (isascii(ch))
              printf("Ascii character %c\n", ch);
          else
              printf("Not ascii character %c\n", ch);
      }

    This prints output like:

      length : 9
      Ascii character x
      Not ascii character
      Not ascii character ?
      Not ascii character ?
      Ascii character a
      Ascii character b
      Ascii character
      Ascii character c
      Ascii character

    To the naked eye the length of "xvab c" seems to be 6, but in the code it comes out as 9. The correct answer for "xvab c" is 1, i.e. it has only 1 non-ASCII character, but the output above reports "Not ascii character" 3 times. How can I correctly find the non-ASCII characters in a UTF-8 string? Please guide me on the subject.
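    (A minimal C++ sketch of the distinction the question is circling. sizeof(arr) counts every byte plus the trailing NUL, which is why 9 appears; and a single non-ASCII character occupies several bytes, which is why "not ASCII" fires 3 times. Counting only UTF-8 lead bytes, and reading each byte as unsigned so it isn't sign-extended, yields the per-character answer. The sample string here is a hypothetical stand-in containing one 2-byte character.)

      #include <cstdio>
      #include <cstring>

      int main() {
          const char* s = "xva\u00f1b c";   // hypothetical sample: one non-ASCII char
          int nonAscii = 0;
          for (std::size_t i = 0; i < std::strlen(s); ++i) {
              unsigned char b = (unsigned char)s[i];  // avoid negative char values
              // Bytes >= 0x80 are non-ASCII; of those, continuation bytes look
              // like 10xxxxxx, so count only lead bytes to count characters.
              if (b >= 0x80 && (b & 0xC0) != 0x80)
                  ++nonAscii;
          }
          std::printf("non-ASCII characters: %d\n", nonAscii);  // prints 1
          return 0;
      }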


  • Are R&D mini-projects a good activity for interns?

    - by dukeofgaming
    I'm going to be in charge of hiring some interns for our software department soon (automotive infotainment systems), and I'm designing an internship program. The main productive activity "menu" I'm planning for them consists of:

    - Verification testing
    - Writing unit tests (automated, with an xUnit-compliant framework; several languages in our projects)
    - Documenting code
    - Updating the wiki
    - Updating diagrams & design docs
    - Helping with low-priority tickets (supervised/mentored)
    - Hunting down & cleaning compiler/run-time warnings
    - Refactoring/cleaning code against our coding standards

    But I also have this idea that having them do small R&D projects would be good to test their talent and get them to have fun. These mini-projects would be:

    - Experimental implementations & optimizations
    - Proof-of-concept implementations for new technologies
    - Small papers (~2-5 pages) doing formal research on the previous two points
    - Apps (from a mini-project pool)

    These kinds of projects would be pre-defined and very concrete, although new ideas from the interns themselves would be very welcome. Even if a project is too big or is abandoned, the idea would also be to lay the groundwork so it can be retaken by another intern or intern team. While I think this is good in concept, I don't know if it could be good in practice, as obviously this would diminish their productivity on "real work" (work with immediate value to the company), but I think it could help bring aboard very bright people and get them to want to stay in the future (which, I think, is the end goal of any internship program).

    My question here is whether these activities are too open-ended or difficult for the average intern to accomplish, and whether R&D is an efficient use of an intern's time, or if it makes more sense to assign project work to interns instead.


  • Are there any real-world cases for C++ without exceptions?

    - by Martin
    In "When to use C over C++, and C++ over C?" there is a statement with respect to code size / C++ exceptions. Jerry answers (among other points):

    "(...) it tends to be more difficult to produce truly tiny executables with C++. For really small systems, you're rarely writing a lot of code anyway, and the extra (...)"

    to which I asked why that would be, to which Jerry responded:

    "the main thing is that C++ includes exception handling, which (at least usually) adds some minimum to the executable size. Most compilers will let you disable exception handling, but when you do the result isn't quite C++ anymore. (...)"

    which I do not really doubt on a technical, real-world level. Therefore I'm interested (purely out of curiosity) to hear of real-world examples where a project chose C++ as a language and then chose to disable exceptions. (Not merely "not using" exceptions in user code, but disabling them in the compiler, so that you can't throw or catch exceptions.) Why does a project choose to do so (still using C++ and not C, but no exceptions)? What are/were the (technical) reasons?

    Addendum: For those wishing to elaborate on their answers, it would be nice to detail how the implications of no-exceptions are handled:

    - STL collections (vector, ...) do not work properly (allocation failure cannot be reported)
    - new can't throw
    - Constructors cannot fail
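    (A minimal sketch of the error-handling style such projects typically fall back on, assuming GCC/Clang's -fno-exceptions: since a constructor has no return value, construction is split into a never-failing constructor plus an init() that reports failure, and allocation uses new(std::nothrow) instead of a throwing new. Buffer here is a hypothetical example class.)

      #include <cstdio>
      #include <new>      // std::nothrow

      class Buffer {
      public:
          Buffer() : data_(nullptr), size_(0) {}   // trivial: cannot fail
          ~Buffer() { delete[] data_; }

          // Two-phase construction: failure comes back as a bool.
          bool init(std::size_t n) {
              data_ = new (std::nothrow) unsigned char[n];  // nullptr on failure
              size_ = data_ ? n : 0;
              return data_ != nullptr;
          }

      private:
          unsigned char* data_;
          std::size_t size_;
      };

      int main() {
          Buffer b;
          if (!b.init(1u << 20)) {    // every call site checks explicitly
              std::printf("allocation failed\n");
              return 1;
          }
          std::printf("ok\n");
          return 0;
      }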


  • Understanding Asynchronous Programming with .NET Reflector

    - by Nick Harrison
    When trying to understand and learn the .NET Framework, there is no substitute for being able to see what is going on behind the scenes inside even the most confusing assemblies, and .NET Reflector makes this possible. Personally, I never fully understood connection pooling until I was able to poke around in key classes in the System.Data assembly. All of a sudden, integrating with third-party components was much simpler, even without vendor documentation!

    With a team devoted to developing and extending Reflector, Red Gate have made it possible for us to step into and actually debug assemblies such as System.Data as though the source code were part of our solution. This maybe doesn't sound like much, but it dramatically improves the way you can relate to and understand code that isn't your own.

    Now that Microsoft has officially launched Visual Studio 2012, Reflector is also fully integrated with the new IDE, and supports the most complex language feature currently at our command: asynchronous processing. Without understanding what is going on behind the scenes in the .NET Framework, it is difficult to appreciate what asynchronicity actually brings to the table, and without Reflector we would never know the Arthur C. Clarke magic that the compiler does on our behalf.

    Join me as we explore the new asynchronous processing model, as well as review the often misunderstood and underappreciated yield keyword (you'll see the connection when we dive into how the CLR handles async). Read more here.


  • JEOS

    - by john.graves(at)oracle.com
    JEOS stands for Just Enough Operating System. It is a great environment for building virtual machines without all the clutter of a windowing system, games, office products, etc. It is from Ubuntu, and you install it using the Ubuntu server installer, but rather than picking a standard install, press F4 and choose "Install a minimal system." Note: the "Install a minimal virtual machine" option is specific to VMware, and I plan to use VirtualBox. Be sure to include OpenSSH in the install so that it installs sshd.

    *** Also, if you plan to install XE, you'll need to modify the partitions to have a larger swap space (at least 1.5 GB). ***

    Once the install is done, I find it useful to install a few other items.

    Update Ubuntu:

      apt-get update

    Install some other tools:

      apt-get install openjdk-6-jre

    Yes, Java will be included in any of the WebLogic installs, but I need this one if I want to do remote display (for config wizards, etc.).

      apt-get install gcc

    Some apps need to rebuild their kernel modules, so you'll need a basic compiler.

    Install guest additions: choose the VirtualBox "Devices -> Install Guest Additions..." option. This sets up a /dev/cdrom or /dev/cdrom1, which you'll need to mount manually, temporarily:

      sudo mount /dev/cdrom /mnt

    Then run the Linux .bin file.

    Update nofile limits. Most Java apps fail with the standard Ubuntu settings: edit /etc/security/limits.conf and add these lines at the end:

      *     soft nofile 65535
      *     hard nofile 65535
      root  soft nofile 65535
      root  hard nofile 65535

    These numbers are very high and I wouldn't do this on a production system, but for this environment it is fine.

    To get rid of the annoying piix error on boot, add the following line to the /etc/modprobe.d/blacklist.conf file:

      blacklist i2c_piix4

    Read the article

  • What's wrong with cplusplus.com?

    - by Kerrek SB
    This is perhaps not a perfectly suitable forum for this question, but let me give it a shot, at the risk of being moved away. There are several references for the C++ standard library, including the invaluable ISO standard, MSDN, IBM, cppreference, and cplusplus. Personally, when writing C++ I need a reference that has quick random access, short load times, and usage examples, and I've been finding cplusplus.com pretty useful. However, I've been hearing negative opinions about that website frequently here on SO, so I would like to get specific: what are the errors, misconceptions, or bad pieces of advice given by cplusplus.com? What are the risks of using it to make coding decisions?

    Let me add this point: I want to be able to answer questions here on SO with accurate quotes of the standard, and thus I would like to post immediately-usable links, and cplusplus.com would have been my choice site were it not for this issue.

    Update: There have been many great responses, and I have seriously changed my view on cplusplus.com. I'd like to list a few choice results here; feel free to suggest more (and keep posting answers). As of June 29, 2011:

    - Incorrect description of some algorithms (e.g. remove).
    - Information about the behaviour of functions is sometimes incorrect (atoi), fails to mention special cases (strncpy), or omits vital information (iterator invalidation).
    - Examples contain deprecated code (#include style).
    - Inexact terminology is doing a disservice to learners and the general community ("STL", "compiler" vs "toolchain").
    - Incorrect and misleading description of the typeid keyword.

