Search Results

Search found 14910 results on 597 pages for 'programs and features'.

Page 374/597 | < Previous Page | 370 371 372 373 374 375 376 377 378 379 380 381  | Next Page >

  • gcc options for fastest code

    - by rwallace
    I'm distributing a C++ program with a makefile for the Unix version, and I'm wondering what compiler options I should use to get the fastest possible code (it falls into the category of programs that can use all the computing power they can get and still come back for more), given that I don't know in advance what hardware, operating system or gcc version the user will have, and that above all else I want to make sure it at least works correctly on every major Unix-like operating system. Thus far I have g++ -O3 -Wno-write-strings; are there any other options I should add? On Windows, the Microsoft compiler has options worth using for things like fast calling conventions and link-time code generation; are there any equivalents in gcc? (I'm assuming it will default to 64-bit on a 64-bit platform; please correct me if that's not the case.)

    Read the article

  • Is code clarity killing application performance?

    - by Jorge Córdoba
    As today's code gets more complex by the minute, it needs to be designed to be maintainable - meaning easy to read and easy to understand. That being said, I can't help but remember the programs that ran a couple of years ago, such as Winamp or some games, where you needed a high-performance program because your 100 MHz 486 wouldn't play MP3s unless the player was careful with every CPU cycle. Now I run Media Player (or whatever), start playing an MP3, and it eats up 25-30% of one of my four cores. Come on!! If a 486 could do it, how can the same playback take up so much processor time now? I'm a developer myself, and I always used to advise: keep your code simple, don't prematurely optimize for performance. It seems we've gone from "try to use the least amount of CPU possible" to "as long as it doesn't take too much CPU, it's all right". So, do you think we are killing performance by ignoring optimizations?

    Read the article

  • removing a line from a text file?

    - by Blackbinary
    Hi all. I am working with a text file which contains a list of processes under my program's control, along with relevant data. At some point one of the processes will finish, and it will then need to be removed from the file (as it's no longer under control). Here is a sample of the file contents (entries are added "randomly"):

        PID=25729 IDLE=0.200000 BUSY=0.300000 USER=-10.000000
        PID=26416 IDLE=0.100000 BUSY=0.800000 USER=-20.000000
        PID=26522 IDLE=0.400000 BUSY=0.700000 USER=-30.000000

    So, for example, if I wanted to remove the line that says PID=26416..., how could I do that without writing the whole file over again? I can use external unix commands, but I am not very familiar with them, so if that is your suggestion, please give an example. Thanks!
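
    A couple of illustrations, not from the original question: with external unix tools, GNU sed can drop a matching line, e.g. sed -i '/^PID=26416/d' processes.txt (the file name is made up, and BSD-style sed wants a backup-suffix argument after -i). Inside a program, a plain text file can't have a middle line deleted in place, so the usual approach is copy-and-filter; a minimal C++ sketch of that, with the same assumed file name:

        // Sketch: remove the record whose line starts with "PID=26416" by
        // rewriting the file. "processes.txt" and the PID are illustrative.
        #include <cstddef>
        #include <fstream>
        #include <string>
        #include <vector>

        int main() {
            const std::string path = "processes.txt";
            const std::string key  = "PID=26416";

            // Read every line, keeping only those that don't start with the key.
            std::vector<std::string> kept;
            {
                std::ifstream in(path.c_str());
                std::string line;
                while (std::getline(in, line))
                    if (line.compare(0, key.size(), key) != 0)
                        kept.push_back(line);
            }

            // Rewrite the file with the remaining lines.
            std::ofstream out(path.c_str());
            for (std::size_t i = 0; i < kept.size(); ++i)
                out << kept[i] << '\n';
            return 0;
        }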

    Read the article

  • Why are functional languages considered a boon for multi-threaded environments?

    - by Billy ONeal
    I hear a lot about functional languages and how they scale well because there is no state around a function, and therefore that function can be massively parallelized. However, this makes little sense to me, because almost all real-world practical programs need/have state to take care of. I also find it interesting that most major scaling libraries, e.g. MapReduce, are typically written in imperative languages like C or C++. I'd like to hear from the functional camp where this hype I'm hearing is coming from.

    Read the article

  • Playground for Artificial Intelligence?

    - by Dolph Mathews
    In school, one of my professors had created a 3D game (not just an engine), where all the players were entirely AI-controlled, and it was our assignment to program the AI of a single player. We were basically provided an API to interact with the game world. Our AI implementations were then dropped into the game together, and we watched as our programs went to battle against each other. It was like robot soccer, but virtual, and with lots of big guns. I'm now looking for anything similar (and open source) to play with. (Preferably in Java, but I'm open to any language.) I'm not looking for a game engine, or a framework... I'm looking for a complete game that simply lacks AI code... preferably set up for this kind of exercise. Suggestions?

    Read the article

  • How to use traceit to report function input variables in stack trace

    - by reckoner
    Hi, I've been using the following code to trace the execution of my programs:

        import sys
        import linecache
        import random

        def traceit(frame, event, arg):
            if event == "line":
                lineno = frame.f_lineno
                filename = frame.f_globals["__file__"]
                if filename == "<stdin>":
                    filename = "traceit.py"
                if (filename.endswith(".pyc") or filename.endswith(".pyo")):
                    filename = filename[:-1]
                name = frame.f_globals["__name__"]
                line = linecache.getline(filename, lineno)
                print "%s:%s:%s: %s" % (name, lineno, frame.f_code.co_name, line.rstrip())
            return traceit

        def main():
            print "In main"
            for i in range(5):
                print i, random.randrange(0, 10)
            print "Done."

        sys.settrace(traceit)
        main()

    Using this code, or something like it, is it possible to report the values of certain function arguments? In other words, the above code tells me "which" functions were called, and I would like to know "what" the corresponding values of the input variables were for those function calls. Thanks in advance.

    Read the article

  • Which .NET performance and/or memory profilers will allow me to profile a DLL?

    - by Eric
    I write a lot of .NET-based plug-ins for other programs; these are usually compiled as DLLs that the native host application is responsible for starting up. I've been using Equatec's profiler, which works great, but now I would like something with more features, including the ability to profile memory usage. I tried out Red Gate's Ant Profiler, but as far as I can see there is no way to profile a DLL; the only option is to profile an EXE. So my question is: what other profiling tools are available that will allow me to profile a single library DLL rather than an EXE? I'm assuming this would require injecting profiling code into the library, as Equatec does?

    Read the article

  • Setting environment variables in PowerShell by calling a Python script that prints $env:myVar=myvalue

    - by leeg
    I have some legacy Python scripts that manage my shell environment for all the programs and plugins I run on Linux (bash) and Windows (cmd.exe). I want to port this to PowerShell. How do I set environment variables in PowerShell by calling a Python script that prints $env:myVar=myvalue, so that the environment variable persists in the PowerShell session? In Bash I can use a shell function to call my Python script, which prints export var=value to stdout, and the function will set the environment variables in my shell. This also works in the Windows cmd shell by calling a .bat file. I cannot figure out how to do this in PowerShell. I think it should be something like this:

        setvar.ps1:
            function SETVAR { c:\python26\python.exe varconfig.py }

        varconfig.py:
            import sys
            print >> sys.stdout, '$env:myVar=foo'

    Read the article

  • Is recursion preferred compared to iteration in the multicore era?

    - by prM
    Or, to put it another way: do multicore CPUs process recursion faster than iteration? Or does it simply depend on how a given language runs on the machine? For example, C executes function calls at a large cost compared to doing simple iterations. I had this question because one day I told one of my friends that recursion isn't some amazing magic that can speed up programs, and he told me that with multicore CPUs recursion can be faster than iteration. EDIT: If we consider the situations where recursion is most loved (data structures, function calls), is it even possible for recursion to be faster?

    Read the article

  • How to check a .NET 2.0 assembly for calls to .NET 3.5

    - by Paul Farry
    I believe I've found an issue where an assembly is making a call to a .NET 3.5 method in a .NET 2.0-only environment (none of the .NET service packs nor .NET 3.5 is installed). I'd like to know if there are any programs I can run, like FxCop, to check an assembly for making only method calls that are available in .NET 2.0, without the 3.5 extensions that were added. I've been bitten by this before, and I'd like a way to check assemblies before they are released to prevent these kinds of issues. Please don't say "require .NET 3.5", because while I'd like to go that route, it's just not possible at this point.

    Read the article

  • Portable way to determine the platform's line separator

    - by Adrian McCarthy
    Different platforms use different line separator schemes (LF, CR-LF, CR, NEL, Unicode LINE SEPARATOR, etc.). C++ (and C) make a lot of this transparent to most programs, by converting '\n' to and from the target platform's native new line encoding. But if your program needs to determine the actual byte sequence used, how could you do it portably? The best method I've come up with is: (1) write a temporary file in text mode with just '\n' in it, letting the run-time do the translation; (2) read the temporary file back in binary mode to see the actual bytes. That feels kludgy. Is there a way to do it without temporary files? I tried stringstreams instead, but the run-time doesn't actually translate '\n' in that context (which makes sense). Does the run-time expose this information in some other way?
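
    For reference, a minimal C++ sketch of that two-step probe (the temporary file name is a placeholder and error handling is omitted):

        #include <cstdio>
        #include <string>

        // Sketch of the probe described above: write '\n' in text mode,
        // then read the raw bytes back in binary mode.
        std::string native_line_separator() {
            const char *name = "newline_probe.tmp";   // placeholder name

            // 1) Text mode: the runtime translates '\n' to the native sequence.
            if (std::FILE *out = std::fopen(name, "w")) {
                std::fputc('\n', out);
                std::fclose(out);
            }

            // 2) Binary mode: read back whatever bytes actually hit the disk.
            std::string sep;
            if (std::FILE *in = std::fopen(name, "rb")) {
                int c;
                while ((c = std::fgetc(in)) != EOF)
                    sep.push_back(static_cast<char>(c));
                std::fclose(in);
            }
            std::remove(name);
            return sep;   // e.g. "\r\n" on Windows, "\n" on Unix
        }

    The same idea works with a std::ofstream opened in text mode followed by a std::ifstream opened with std::ios::binary.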

    Read the article

  • worth of getting certified

    - by user58935
    In 6 months I will graduate, and I will then be pursuing a masters in computers for the next 2 years in India. My options after that are either to do a postgraduate degree (again) at a reputed college abroad, or to take up a job. Recently I came to know about global certification programs like CCNA, CCNP, CCIE, OCA, OCP, J2SE, MCSE, MCP etc. If I do these certifications, will they help me get a better job, or get into a top college? How much does it matter? Considering that I like most areas of computers, which certifications are most beneficial? (I even had a crazy idea to do them all in the 2.5 years left; or should I try to master a few instead?) Please advise.

    Read the article

  • Facebook Open Graph without a browser

    - by Hellnar
    Hello. For a middleware system with internet access (which runs inside a set-top box), I want to develop a primitive Facebook interface where users can type their user names and passwords and see their latest notifications, messages and other casual stuff on the TV screen, using the recent Facebook Graph API. This middleware program uses Java ME to run programs (such as this simple Facebook app), and it can connect to the internet; however, it doesn't have a real web browser. Without a browser it can connect to any URL to retrieve the JSON response, but I am not sure how to achieve authentication without a real browser. Under these circumstances, is Facebook authentication possible? If you think so, what approach would you suggest? Thanks

    Read the article

  • Shared static classes between AppDomains in loaded library code

    - by Christian Stewart
    I'm working on a program in which I want to do something similar to what the Photon Server system does:

      - Offer a common "API" class library, which contains common data classes, enumerations, and interfaces for working with the host program.
      - Have client programs (class libraries) reference this DLL and implement the interfaces listed within it.
      - Have the "host" application load built client DLLs into separate AppDomains and reference the interfaces that lie within, so the host gets polymorphic client code from within a DLL file.

    I have something like this worked out: a class library that contains common code. But I've run into the following questions: How should I handle static classes? Should I add a method that is called by the host program to synchronize data? How do I keep a static class the same between AppDomains? Should I discard these classes in favor of better interfaces between the code levels? And in general, how do I share data between these loaded AppDomains?

    Read the article

  • flash.media.Sound.play takes a long time to return

    - by Fire Lancer
    I'm trying to play some sounds in my Flash project through ActionScript. However, for some reason the call to Sound.play takes from 40ms to over 100ms in extreme cases, which is obviously more than enough to be very noticeable whenever a sound is played. This happens every time a sound is played, not just the first time it is played, so I don't think it's because the Sound object is still loading data or anything like that. At the start I have this to load the sound:

        class MyClass
        {
            [Embed(source='data/test_snd.mp3')]
            private var TestSound:Class;
            private var testSound:Sound; // flash.media.Sound

            public function MyClass()
            {
                testSound = new TestSound();
            }
        }

    Then I'm just using the play method of the Sound object to play it later on:

        testSound.play(); // seems to take a long time to return

    As far as I can tell this follows the same process as other Flash programs I found, but none of them seem to have this problem. Is there something I've missed that would cause the play() method to be so slow?

    Read the article

  • Boost Filesystem Library Visual C++ Compile Error

    - by John Miller
    I'm having the following issue just trying to compile/run some of the example programs that come with the Boost Filesystem Library. I'm using MS Visual C++ with Visual Studio .NET (2003). I've installed the Boost libraries, versions 1.38 and 1.39 (just in case there was a version problem), using the BoostPro installers. If I just try to include boost/filesystem/operations.hpp I receive the following error:

        \boost_1_38\boost\system\error_code.hpp(230) : error C2039: 'type' : is not a member of 'boost::enable_if<boost::system::is_error_condition_enum<Cond,boost::detail::enable_if_default_T>'

    Any help is greatly appreciated. Thank you!

    Read the article

  • Exclusive compute mode with OpenCL+NVidia

    - by lokli
    Hi, I have a question about exclusive compute mode with NVIDIA and OpenCL. I can set up exclusive compute mode (page 74 of the CUDA Programming Guide 3.0) with nvidia-smi on an NVIDIA GPU. That means only one program can compute on the GPU; the CUDA runtime then schedules the app automatically. But I have a problem with OpenCL programs in this case: if one application is running on a GPU that is set to exclusive compute mode, and a second OpenCL program calls clGetDeviceInfo(..., CL_DEVICE_AVAILABLE, ...) for the same GPU, the result is == CL_TRUE. If that second OpenCL app then tries to create a context on this device, the running app crashes (both of them do). How can I find out whether a GPU is available with OpenCL? Thanks.
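
    Purely as an illustration (not from the question): the direct availability query and a trial context creation look roughly like the sketch below. As noted above, the CL_DEVICE_AVAILABLE flag may still come back CL_TRUE in exclusive compute mode, and whether an extra clCreateContext attempt is safe for the already-running application is exactly the open question here.

        #include <CL/cl.h>
        #include <cstdio>

        int main() {
            cl_platform_id platform;
            cl_device_id device;
            clGetPlatformIDs(1, &platform, NULL);
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

            // 1) The direct query - may still report CL_TRUE even when another
            //    process owns the GPU in exclusive compute mode.
            cl_bool available = CL_FALSE;
            clGetDeviceInfo(device, CL_DEVICE_AVAILABLE,
                            sizeof(available), &available, NULL);
            std::printf("CL_DEVICE_AVAILABLE: %s\n",
                        available ? "CL_TRUE" : "CL_FALSE");

            // 2) A possible fallback - try to create a context and inspect the
            //    error code instead of trusting the availability flag.
            cl_int err = CL_SUCCESS;
            cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
            if (err == CL_SUCCESS) {
                std::printf("context created - device looks usable\n");
                clReleaseContext(ctx);
            } else {
                std::printf("clCreateContext failed with error %d\n", (int)err);
            }
            return 0;
        }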

    Read the article

  • Metric to measure object-orientedness

    - by Jono
    Is there a metric that can assist in determining the object-orientedness of a system or application? I've seen some pretty neat metrics in the .NET Reflector Add-ins codeplex project, but nothing like this yet. If such a metric doesn't exist, would it even be possible or useful? There are the 3 supposed tenets of object-oriented programming: encapsulation, inheritance, and polymorphism; a tool that ranked programs against these might be able to show areas of a C# (or similar) code base where the whole object-oriented ideal was discarded, and perhaps how many bugs are associated with that area versus the rest of the project.

    Read the article

  • Why would I get a bus error or segmentation fault when calling free() normally?

    - by chucknelson
    I have a very simple test program, running on Solaris 5.8:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            char *paths;
            paths = getenv("PATH");
            printf("Paths: %s\n", paths);
            free(paths); // this causes a bus error
            return 0;
        }

    If I don't call free() at the end, it displays the message fine and exits. If I include the free() call, it crashes with a bus error. I've had other calls to free(), in other programs, cause segmentation faults as well. Even if I allocate the memory for *paths myself, free() will cause a bus error. Is there some reason trying to free up the memory is causing a crash?
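
    As background for the getenv() case in particular: free() may only be handed pointers obtained from malloc()/calloc()/realloc(), while getenv() returns a pointer into storage owned by the runtime environment, so freeing it is undefined behaviour. A small illustrative sketch (not the original program) of the pattern that is safe to free, by duplicating the value first:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            /* getenv() hands back a pointer we do not own - never free() it. */
            const char *paths = getenv("PATH");
            if (paths == NULL)
                return 1;

            /* If a freeable copy is needed, duplicate it (strdup is POSIX;
               malloc + strcpy works everywhere). */
            char *copy = strdup(paths);
            if (copy == NULL)
                return 1;

            printf("Paths: %s\n", copy);
            free(copy);   /* fine: this block came from the allocator */
            return 0;
        }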

    Read the article

  • Replacing TCP/IP pipe with WCF

    - by msarchet
    So currently my company is using a TCP/IP connection to talk between server and client programs. Right now we are building this connection using System.Runtime.Remoting, which is clunky and not that reliable. It was built about 5 years ago, the model keeps getting reused, and it's starting to propagate some issues: ports in use, refused connections, etc. I'm trying to find some resources on how to change this over to WCF, but I'm not really sure what I am looking for or what I should be searching for. If you want some more information on what we're actually doing with it I can go into some detail, but I'll need to pull up the code and make sure I explain it completely. Thanks!

    Read the article

  • Retrieving data from a database: retrieve only when needed or get everything?

    - by RHaguiuda
    I have a simple application to store Contacts. This application uses a simple relational database to store Contact information, like Name, Address and other data fields. While designing it, a question came to my mind: when designing programs that use databases, should I retrieve all the database records and store them in objects in my program, so I get very fast performance, or should I always fetch data only when required? Of course, retrieving all the data can only be done if there isn't too much of it, but do you use this approach when you are sure the database will be small (< 300 records, for example)? I once designed a similar application that fetched data only when needed, but it was slow (using an Access database). Thanks for any help.

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when third-party installers are involved?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was:

      - Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision. But this had the side effect of releases that fixed the bugs yet introduced new issues because of half-finished features or bugfixes that were in trunk.
      - Make customers wait until the next official release, which is usually a few months away.

    We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. Then, new development would continue in trunk, and I can periodically merge specific fixes from trunk into the maintenance branch and create a maintenance release when enough fixes have accumulated, while we continue to work on the next major update in parallel. I know we could also keep a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me.

    The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in place (all the setup projects and any dependencies that we don't compile ourselves are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files in the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code.

    I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with that approach. I want the creation of setup programs to be as automated as possible; at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers.

    However, when someone needs to change a setup project (to add a new dependency that trunk now requires, or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. However, this also means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine).

    Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think of is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation?

    Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.

    Read the article

  • ncurses - expect: sleep executes at wrong time

    - by rahul
    I have some ncurses apps that I need to automate in order to test them repeatedly. I am placing the "sleep" command between "send" commands. However, what I see is that all the sleeps are executed at the beginning, before the screen loads. Expect concatenates the sends (I can see that at the bottom of the screen during the sleep) and then issues them together. I have tried sending all keys with "send -s" or "send -h"; that helps marginally. I've replaced "-f" on line 1 with "-b"; again, only a tiny difference. Why isn't "sleep" pausing at the right time? Incidentally, my programs have a getc() loop, so I can't use the "expect" command. I tried that too.

        #!/usr/bin/expect -f
        spawn ruby testsplit.rb
        #expect
        set send_human {3 3 5 5 7}
        set send_slow {10 1}
        exp_send -s -- "--"
        exec sleep 3
        send -s "+"
        send -s "="
        sleep 1
        send -h -- "-"
        send -h -- "-"
        sleep 1
        send -h -- "v"
        interact

    Read the article

  • How do I set up the Clojure classpath in Emacs after installing with ELPA?

    - by derefed
    I'm trying to add paths to my classpath in the Clojure REPL that I've set up in Emacs using ELPA. Apparently, this isn't the $CLASSPATH environment variable, but rather the swank-clojure-classpath variable that Swank sets up. Because I used ELPA to install Swank, Clojure, etc., there are a ton of .el files that take care of everything instead of my .emacs file. Unfortunately, I can't figure out how to change the classpath now. I've tried using (setq 'swank-clojure-extra-classpaths (list ...)) both before and after the ELPA stuff in my .emacs, and I've tried adding paths directly to swank-clojure-classpath in .emacs, .emacs.d/init.el, and .emacs.d/user/user.el, but nothing works. What I'm ultimately trying to do is to add both the current directory "." and the directory in which I keep my Clojure programs. I'm assuming swank-clojure-classpath is the thing I need to set here. Thanks for your help.

    Read the article

  • Encrypting password in compiled C or C++ code

    - by Daniel
    Hello! I know how to compile C and C++ source files using GCC and CC in the terminal; however, I would like to know whether it is safe to include passwords in these files once they are compiled. For example, I check user input against a certain password, e.g. 123, but it appears that compiled C/C++ programs can be decompiled. Is there any way to compile a C/C++ source file while keeping the source completely hidden? If not, could anyone provide a small example of encrypting the input and then checking it against the password (e.g. SHA1, MD5)?
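
    Not from the question, but a hedged sketch of the hash-and-compare idea: keep only a digest of the password in the binary and compare it with a digest of whatever the user types. This assumes OpenSSL's libcrypto is available (link with -lcrypto) and uses SHA-256 instead of SHA1/MD5; the expected digest is a placeholder you would generate offline, e.g. with: echo -n "yourpassword" | openssl dgst -sha256

        #include <openssl/sha.h>
        #include <cstdio>
        #include <cstring>
        #include <string>

        // Sketch: compare a SHA-256 digest of the user's input against a
        // precomputed digest embedded in the program (placeholder below).
        static std::string sha256_hex(const std::string &input) {
            unsigned char digest[SHA256_DIGEST_LENGTH];
            SHA256(reinterpret_cast<const unsigned char *>(input.data()),
                   input.size(), digest);
            char hex[2 * SHA256_DIGEST_LENGTH + 1];
            for (int i = 0; i < SHA256_DIGEST_LENGTH; ++i)
                std::snprintf(hex + 2 * i, 3, "%02x", digest[i]);
            return std::string(hex, 2 * SHA256_DIGEST_LENGTH);
        }

        int main() {
            // Placeholder: the precomputed SHA-256 (hex) of the real password.
            const std::string expected_digest = "<hex digest of your password>";

            char buf[256];
            std::printf("Password: ");
            if (!std::fgets(buf, sizeof buf, stdin))
                return 1;
            buf[std::strcspn(buf, "\n")] = '\0';   // strip the trailing newline

            if (sha256_hex(buf) == expected_digest)
                std::printf("OK\n");
            else
                std::printf("Wrong password\n");
            return 0;
        }

    Note that this only keeps the literal password string out of the binary; anyone who can disassemble the program can still patch out the comparison, which is part of why the logic can never be completely hidden this way.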

    Read the article
