Search Results

Search found 10285 results on 412 pages for 'cpu architecture'.


  • C# Breakpoint Weirdness

    - by Dan
    In my program I've got two data files, A and B. The data in A is static and the data in B refers back to the data in A. In order to make sure the data in B is invalidated when A is changed, I keep an identifier for each of the links, which is a long byte string identifying the data. I get this string using BitConverter on some of the important properties. My problem is that this scheme isn't working. I save the identifiers initially, and when I reload (with the exact same data in A) the identifiers don't match anymore. It seems BitConverter gives different results when I go to save. The really weird thing about it is, if I place a breakpoint in the save code, I can see the identifier it's writing to the file is fine, and the next load works. If I don't place a breakpoint and, say, print the identifiers to the console instead, they're totally different. It's as though the CPU mangles some instructions when my program is running at full speed. This isn't the first time something like this has happened to me; I've seen it in other projects. What gives? Has anyone ever experienced this kind of debugging weirdness? I can't explain how stopping the program or not stopping it can change the output. Also, it's not a hardware problem, because this happens on my laptop as well.

    Read the article

  • 32/64 bit problems with Eclipse CDT on Ubuntu

    - by waffleShirt
    I have just recently started running Linux on my PC and I am trying to start learning OpenGL. I am using the latest version of Eclipse CDT as my IDE, and my system is Ubuntu 10.10, 64-bit version. The problem I am having is that whenever I try to run a build from within the IDE I get the error message "Launch Failed. Binary Not Found." I've done a lot of looking around on the internet but I still can't solve the problem. I know for a fact that the binary is built; it can be run from a terminal window. According to posts I have seen, the problem is that Eclipse tries to run a 32-bit binary, but GCC 4.4.5 defaults to 64-bit binaries on a 64-bit system. I've seen a lot of information about using the -m32 flag in makefiles, but then I still get the following output in Eclipse:

        make all
        g++ -o HelloWorld2 main.o
        /usr/bin/ld: i386 architecture of input file `main.o' is incompatible with i386:x86-64 output
        /usr/bin/ld: final link failed: Invalid operation
        collect2: ld returned 1 exit status
        make: *** [HelloWorld2] Error 1

    What I would like to know is how to either get Eclipse to launch the 64-bit binaries, or have Eclipse correctly compile 32-bit binaries.

    Read the article

  • WPF/MVVM - keep ViewModels in application component or separate?

    - by Anders Juul
    Hi all, I have a WPF application where I'm using the MVVM pattern. The VM gets activated for actions that require user input, and thus needs to activate views from the VM. I've started out separating the VMs into a separate component/assembly, partly because I see them as the unit-testable part, partly because views should rely on the VM, not the other way round. But when I then need to bring up a window, the window is not known by the VM. All the introductions I find place the VM in the WPF/App component, thus eliminating the problem. This article recommends keeping them in separate layers: http://waf.codeplex.com/wikipage?title=Architecture%20-%20Get%20The%20Big%20Picture&referringTitle=Home As I see it, I have the following choices:

    1. Move the VMs to the WPF/App assembly to allow VMs to access the windows directly.
    2. Place interfaces for the views in the VM assembly, implement the views in the WPF/App assembly, and register the connection through IoC or alternative means.
    3. File a 'request' from the VM into some mechanism/bus that routes the request (but which mechanism!? E.g. something in Prism?!).

    What's the recommendation? Thanks for any comments, Anders, Denmark

    Read the article

  • Design patterns for agent/actor-based concurrent design

    - by nso1
    Recently I have been getting into alternative languages that support an actor/agent/shared-nothing architecture - i.e. Scala, Clojure, etc. (Clojure also supports shared state). So far most of the documentation that I have read focuses on the intro level. What I am looking for is more advanced documentation along the lines of the Gang of Four, but shared-nothing based. Why? It helps to grok the change in design thinking. Simple examples are easy, but in a real-world Java application (single threaded) you can have object graphs with thousands of members with complex relationships. Agent-based concurrent development, however, introduces a whole new set of ideas to comprehend when designing large systems, e.g. agent granularity (how much state should one agent manage, and the implications for performance), whether there are good patterns for mapping shared-state object graphs to agent-based systems, and tips on mapping domain models to the design. I'm after discussions not on the technology but more on how to BEST use the technology in design (real-world "complex" examples would be great).

    Read the article

  • How to update UI controls in a Cocoa application from a background thread

    - by AmitSri
    The following is my .m code:

        #import "ThreadLabAppDelegate.h"

        @interface ThreadLabAppDelegate()
        - (void)processStart;
        - (void)processCompleted;
        @end

        @implementation ThreadLabAppDelegate

        @synthesize isProcessStarted;

        - (void)awakeFromNib {
            //Set levelindicator's maximum value
            [levelIndicator setMaxValue:1000];
        }

        - (void)dealloc {
            //Never called while debugging ????
            [super dealloc];
        }

        - (IBAction)startProcess:(id)sender {
            //Set process flag to true
            self.isProcessStarted=YES;
            //Start Animation
            [spinIndicator startAnimation:nil];
            //perform selector in background thread
            [self performSelectorInBackground:@selector(processStart) withObject:nil];
        }

        - (IBAction)stopProcess:(id)sender {
            //Stop Animation
            [spinIndicator stopAnimation:nil];
            //set process flag to false
            self.isProcessStarted=NO;
        }

        - (void)processStart {
            int counter = 0;
            while (counter != 1000) {
                NSLog(@"Counter : %d",counter);
                //Sleep background thread to reduce CPU usage
                [NSThread sleepForTimeInterval:0.01];
                //set the level indicator value to show progress
                [levelIndicator setIntValue:counter];
                //increment counter
                counter++;
            }
            //Notify main thread that the process completed
            [self performSelectorOnMainThread:@selector(processCompleted) withObject:nil waitUntilDone:NO];
        }

        - (void)processCompleted {
            //Stop Animation
            [spinIndicator stopAnimation:nil];
            //set process flag to false
            self.isProcessStarted=NO;
        }

        @end

    I need to clear up the following things about the code above:

    1. How do I interrupt/cancel the processStart while loop from a UI control?
    2. I also need to show the counter value in the main UI, which I suppose I can do with performSelectorOnMainThread and passing an argument. I just want to know: is there any other way to do that?
    3. When my app starts it shows 1 thread in Activity Monitor, but when I start processStart() in a background thread it creates two new threads, which makes 3 threads in total until the loop finishes. After the loop completes I can see 2 threads. So my understanding is that 2 threads are created when I call performSelectorInBackground, but what about the third thread - where did it come from?
    4. What if the thread count increases on every call of the selector? How do I control that, or is my implementation bad for this kind of requirement?

    Thanks

    Read the article

  • Any merit to a lazy-ish juxt function?

    - by NielsK
    In answering a question about a function that maps over multiple functions with the same arguments (A: juxt), I came up with a function that basically took the same form as juxt, but used map:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply %1 %2) funs (repeat args))))

        => ((juxt inc dec str) 1)
        [2 0 "1"]
        => ((could-be-lazy-juxt inc dec str) 1)
        (2 0 "1")
        => ((juxt * / -) 6 2)
        [12 3 4]
        => ((could-be-lazy-juxt * / -) 6 2)
        (12 3 4)

    As posted in the original question, I have little clue about its laziness or performance, but timing in the REPL does suggest something lazy-ish is going on:

        => (time (apply (juxt + -) (range 1 100)))
        "Elapsed time: 0.097198 msecs"
        [4950 -4948]
        => (time (apply (could-be-lazy-juxt + -) (range 1 100)))
        "Elapsed time: 0.074558 msecs"
        (4950 -4948)
        => (time (apply (juxt + -) (range 10000000)))
        "Elapsed time: 1019.317913 msecs"
        [49999995000000 -49999995000000]
        => (time (apply (could-be-lazy-juxt + -) (range 10000000)))
        "Elapsed time: 0.070332 msecs"
        (49999995000000 -49999995000000)

    I'm sure this function is not really that quick (printing the outcome 'feels' about as long in both). Doing a 'take x' on the function only limits the number of functions evaluated, which is probably of limited applicability, and limiting the other parameters by 'take' should be just as lazy in normal juxt. Is this juxt really lazy? Would a lazy juxt bring anything useful to the table, for instance as a composition step between other lazy functions? What are the performance (mem / cpu / object count / compilation) implications? Is that why the Clojure juxt implementation is done with a reduce and returns a vector? Edit: Somehow things can always be done more simply in Clojure:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply % args) funs)))

    Read the article

  • Separation of business logic

    - by bruno
    Dear all, while reworking the architecture of the applications on our website, I came to a problem that I don't know the best solution for. At the moment we have a small DLL based on this structure: Database <-> DAL <-> BLL. The DAL uses business objects to pass data to the BLL, which passes it on to the applications that use this DLL. Only the BLL is public, so any application that includes this DLL can see the BLL. In the beginning this was a good solution for our company. But as we add more and more applications to that DLL, the BLL keeps getting bigger, and now we don't want some applications to be able to see BLL logic that belongs to other applications. I don't know what the best solution is for that. The first thing I thought of was to move and separate the BLL into other DLLs which I can include in my application. But then the DAL must be public so the other DLLs can get the data... and that doesn't seem like a good solution. My other solution is just to separate the BLL into different namespaces, and include only the namespaces you need in each application. But in this solution you can still get direct access to the other BLLs if you want. So I'm asking for your opinions. Thanks, Bruno

    Read the article

  • Android NDK import-module / code reuse

    - by Graeme
    Morning! I've created a small NDK project which allows dynamic serialisation of objects between Java and C++ through JNI. The logic works like this: Bean - JavaCInterface.java - JavaCInterface.cpp - JavaCInterface.java - Bean. The problem is I want to use this functionality in other projects. I separated out the test code from the project and created a "Tester" project. The tester project sends a Java object through to C++, which then echoes it back to the Java layer. I thought linking would be pretty simple ("simple" in terms of NDK/JNI usually means a day of frustration). I added the JNIBridge project as a source project and added the following lines to the .mk files:

        NDK_MODULE_PATH=.../JNIBridge/jni/

        JNIBridge/jni/JavaCInterface/Android.mk:
            ...
            include $(BUILD_STATIC_LIBRARY)

        JNITester/jni/Android.mk:
            ...
            include $(BUILD_SHARED_LIBRARY)
            $(call import-module, JavaCInterface)

    This all works fine. The C++ files which rely on headers from the JavaCInterface module work fine. Also, the Java classes can happily use interfaces from the JNIBridge project. All the linking is happy. Unfortunately, JavaCInterface.java, which contains the native method calls, cannot see the JNI methods located in the static library. (Logically they are in the same project, but both are imported into the project where you wish to use them through the above mechanism.) My current options are as follows; I'm hoping someone can suggest something that will preserve the modular nature of what I'm trying to achieve. My current solution would be to include the JavaCInterface cpp files in the calling project like so:

        LOCAL_SRC_FILES := FunctionTable.cpp $(PATH_TO_SHARED_PROJECT)/JavaCInterface.cpp

    But I'd rather not do this, as it would mean updating each dependent project whenever I changed the JavaCInterface architecture. I could instead create a new set of JNI method signatures in each local project which then link to the imported modules. Again, this binds the implementations too tightly.

    Read the article

  • JBoss Cache as Hibernate 2nd-level cache - cluster node doesn't persist replicated data

    - by Sergey Grashchenko
    I'm trying to build an architecture basically as described in the user guide http://www.jboss.org/file-access/default/members/jbosscache/freezone/docs/3.2.1.GA/userguide_en/html/cache_loaders.html#d0e3090 (replicated caches with each cache having its own store), but with JBoss Cache configured as Hibernate's second-level cache. I've read the manual for several days and played with the settings, but could not achieve the result: the data in memory (JBoss Cache) gets replicated across the hosts, but it's not persisted in the datasource/database of the target (non-original) cluster host. I had hoped that a node might be persisted at eviction, so I wrote a cache listener and attached it to the @NodeEvicted event. I found that though I could adjust the eviction policy to fully control it, no persistence takes place. Then I thought I could try to modify the CacheLoader to set "passivation" to true, but I found that in my case (Hibernate 2nd-level cache) I don't have a way to access the loader. I wonder if replicated data persistence is possible at all through configuration tuning? If not, will it work for me to implement some manual persistence in a CacheListener (I could check whether the eviction event is local, and if not, persist it to the Hibernate datasource somehow)? I've used the mvcc-entity configuration with one modification: cacheMode set to REPL_ASYNC. I've also played with the eviction policy configuration. The last thing to mention is that I've tested entity persistence and replication in a project generated with Seam. I guess it's not important, though.

    Read the article

  • Consolidate multiple site files into single location

    - by seengee
    We have a custom PHP/MySQL CMS running on Linux/Apache that's rolled out to multiple sites (20+) on the same server. Each site uses exactly the same CMS files, with a few files for each site being customised. The customised files for each site are:

        /library/mysql_connect.php
        /public_html/css/*
        /public_html/ftparea/*
        /public_html/images/*

    There are also a couple of other random files inside /public_html/includes/ that are unique to each site. Other than this, each site on the server uses the exact same files, with each site sitting within /home/username/. There is obviously a massive amount of replication here, as each time we want to deploy a system update we need to update each user account. Given that the common site files are all stored in SVN, it would make far more sense if we were able to simply commit to SVN and deploy to a single location directly from there. Unfortunately, making a major architecture change at this stage could be problematic. In my mind the ideal scenario would mean creating an account like /home/commonfiles/ and having each site use these common files unless an account-specific file exists; for example, a request is made to /home/user/public_html/index.php, but as this file doesn't exist the request is then redirected to /home/commonfiles/public_html/index.php. I know that generally this approach is possible, similar to how Zend Framework (and probably others) redirect all requests that don't match a specific file to index.php. I'm just not sure how exactly to go about implementing it and whether it's actually advisable. Would really welcome any input/ideas people have got. Thanks.

    Read the article

  • Rails won't install at the 'creating Makefile' stage

    - by mattc
    I am getting this error after running sudo gem install rails:

        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb
        creating Makefile

        make
        xcrun cc -I. -I/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/universal-darwin12.0 -I/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/universal-darwin12.0 -I. -DJSON_GENERATOR -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -fno-common -arch i386 -arch x86_64 -g -O3 -pipe -fno-common -DENABLE_DTRACE -fno-common -pipe -fno-common -c generator.c
        xcrun: Error: failed to exec real xcrun. (No such file or directory)
        cc -arch i386 -arch x86_64 -pipe -bundle -undefined dynamic_lookup -o generator.bundle generator.o -L. -L/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib -L. -arch i386 -arch x86_64 -lruby -lpthread -ldl -lobjc
        i686-apple-darwin11-llvm-gcc-4.2: generator.o: No such file or directory
        i686-apple-darwin11-llvm-gcc-4.2: generator.o: No such file or directory
        lipo: can't figure out the architecture type of: /var/tmp//ccHCNNwM.out
        make: *** [generator.bundle] Error 1

    I have Homebrew installed, Xcode 4.4 with the command line tools installed, and OS X 10.6.2. I run gem env and get this, but I'm not sure what I'm looking for:

        RubyGems Environment:
          - RUBYGEMS VERSION: 1.8.24
          - RUBY VERSION: 1.8.7 (2012-02-08 patchlevel 358) [universal-darwin12.0]
          - INSTALLATION DIRECTORY: /Library/Ruby/Gems/1.8
          - RUBY EXECUTABLE: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
          - EXECUTABLE DIRECTORY: /usr/bin
          - RUBYGEMS PLATFORMS:
            - ruby
            - universal-darwin-12
          - GEM PATHS:
             - /Library/Ruby/Gems/1.8
             - /Users/matthewcleghorn/.gem/ruby/1.8
             - /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8
          - GEM CONFIGURATION:
             - :update_sources => true
             - :verbose => true
             - :benchmark => false
             - :backtrace => false
             - :bulk_threshold => 1000
          - REMOTE SOURCES:
             - http://rubygems.org/

    Any help greatly appreciated. Thanks.

    Read the article

  • Keeping sync in multiplayer RTS game that uses floating point arithmetic

    - by Calmarius
    I'm writing a 2D space RTS game in C#. Single player works. Now I want to add some multiplayer functionality. I googled for it, and it seems there is only one way to have thousands of units continuously moving without a powerful net connection: send only the commands over the network while running the same simulation on every player's machine. And now there is a problem: the entire engine uses doubles everywhere, and floating-point results depend heavily on compiler optimizations and CPU architecture, so it is very hard to keep things synchronized. The game is not grid-based at all, and it has a simple physics engine to move the space ships (ships have impulse and angular momentum...), so recoding the entire thing to use fixed point would be quite cumbersome (but probably the only solution). So I have 2 options so far:

    1. Say goodbye to the current code and restart from scratch using integers.
    2. Make the game LAN-only, where there is enough bandwidth to have 8 players with thousands of units, sending the positions and orientations etc. in (almost) every frame...

    So I am looking for better options (or even tips on migrating the code to fixed point without messing everything up...).
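
    For illustration, here is a minimal fixed-point sketch in C++ (the same idea carries over to C#). The Q16.16 split, the Fixed name and the 1/60 s tick are assumptions for the example, not part of the poster's engine; the point is that all arithmetic is integer, so every machine produces bit-identical results, which is what a lockstep simulation needs.

        #include <cstdint>
        #include <iostream>

        struct Fixed {
            static constexpr int SHIFT = 16;              // Q16.16: 16 integer bits, 16 fractional bits
            int32_t raw;                                  // stored value is real value * 2^16

            static Fixed fromInt(int v)     { return Fixed{ static_cast<int32_t>(v << SHIFT) }; }
            static Fixed fromRaw(int32_t r) { return Fixed{ r }; }

            Fixed operator+(Fixed o) const { return fromRaw(raw + o.raw); }
            Fixed operator-(Fixed o) const { return fromRaw(raw - o.raw); }
            // 64-bit intermediates keep the multiply/divide exact and portable
            Fixed operator*(Fixed o) const { return fromRaw(int32_t((int64_t(raw) * o.raw) >> SHIFT)); }
            Fixed operator/(Fixed o) const { return fromRaw(int32_t((int64_t(raw) << SHIFT) / o.raw)); }

            double toDouble() const { return raw / 65536.0; }   // display only, never for game logic
        };

        int main() {
            Fixed pos = Fixed::fromInt(10);
            Fixed vel = Fixed::fromInt(3);
            Fixed dt  = Fixed::fromInt(1) / Fixed::fromInt(60);  // 1/60 s simulation tick
            for (int i = 0; i < 60; ++i) pos = pos + vel * dt;   // integrate one simulated second
            std::cout << pos.toDouble() << "\n";                 // ~13.0, bit-identical on every host
        }

    In practice determinism also requires identical rounding everywhere, so things like trig tables would be precomputed as integers too.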

    Read the article

  • Route WCF ServiceHost to another computer

    - by I2nfo
    Good day. I'm not a guru when it comes to WCF, but I do know the basics. My question is: how do I create a ServiceHost on machine X while the code is on machine Y? Say I build and run this code on my dev machine (localhost):

        servicehost = new ServiceHost(typeof(MyService1));
        servicehost.AddServiceEndpoint(typeof(IMyService1), new NetTcpBinding(), "net.tcp://my.datacenter.com/MyApp/MyService1"); // This is normally set to localhost.

    What implementation must be done on the datacenter server so that if I point to http://my.datacenter.com/MyApp/MyService1, it routes the service operation to my dev machine (localhost)? However, the datacenter should not be accessible via the internet. It is a possible infrastructure that we are researching, to see if we can create a service-bus-type architecture so that all our customers can invoke other customers' services running on their respective machines just by calling our datacenter URL. We have looked at Windows Azure, but we have our own datacenter infrastructure that we wish to leverage. Come to think of it, we are kind of building our own Azure, on a very, very basic scale. How does one go about creating this? Thanks in advance.

    Read the article

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive to tar files a lot (up to 60 TB) of big files (usually 30 to 40 GB each). I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, once for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I'd need some way to read a file, feeding a checksumming tool on one side, and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs lack the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
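
    For what it's worth, here is a rough sketch of the hand-built route in C++, assuming libarchive and OpenSSL are available (link with -larchive -lcrypto); the output file name, block size and md5sum-style output format are arbitrary choices for the example, and error handling is mostly omitted. Each file is read exactly once, with every block fed to both the MD5 context and the tar stream.

        #include <archive.h>
        #include <archive_entry.h>
        #include <openssl/md5.h>
        #include <sys/stat.h>
        #include <cstdio>
        #include <string>
        #include <vector>

        static void add_file(struct archive* a, const std::string& path) {
            struct stat st;
            if (stat(path.c_str(), &st) != 0) return;
            FILE* f = std::fopen(path.c_str(), "rb");
            if (!f) return;

            // Describe the entry to libarchive before streaming its data.
            struct archive_entry* entry = archive_entry_new();
            archive_entry_set_pathname(entry, path.c_str());
            archive_entry_set_size(entry, st.st_size);
            archive_entry_set_filetype(entry, AE_IFREG);
            archive_entry_set_perm(entry, 0644);
            archive_write_header(a, entry);

            MD5_CTX md5;
            MD5_Init(&md5);
            std::vector<char> buf(1 << 20);                     // 1 MiB blocks
            size_t n;
            while ((n = std::fread(buf.data(), 1, buf.size(), f)) > 0) {
                MD5_Update(&md5, buf.data(), n);                // checksum side
                archive_write_data(a, buf.data(), n);           // tar side
            }
            std::fclose(f);

            unsigned char digest[MD5_DIGEST_LENGTH];
            MD5_Final(digest, &md5);
            for (unsigned char c : digest) std::printf("%02x", c);
            std::printf("  %s\n", path.c_str());                // md5sum-style line per file

            archive_entry_free(entry);
        }

        int main(int argc, char** argv) {
            struct archive* a = archive_write_new();
            archive_write_set_format_pax_restricted(a);         // ustar caps entries at 8 GB; pax doesn't
            archive_write_open_filename(a, "backup.tar");       // could be the tape device instead
            for (int i = 1; i < argc; ++i) add_file(a, argv[i]);
            archive_write_close(a);
            archive_write_free(a);
        }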

    Read the article

  • Handling Types Defined in Plug-ins That Are No Longer Available

    - by Chris
    I am developing a .NET framework application that allows users to maintain and save "projects". A project can consist of components whose types are defined in the assemblies of the framework itself and/or in third-party assemblies that will be made available to the framework via a yet-to-be-built plug-in architecture. When a project is saved, it is simply binary-serialised to file. Projects are portable, so multiple users can load the same project into their own instances of the framework (just as different users may open the same MSWord document in their own local copies of MSWord). What's more, the plug-ins available to one user's framework might not be available to that of another. I need some way of ensuring that when a user attempts to open (i.e. deserialise) a project that includes a type whose defining assembly cannot be found (either because of a framework version incompatibility or the absence of a plug-in), the project still opens but the offending type is somehow substituted or omitted. Trouble is, the research I've done to date does not even hint at a suitable approach. Any ideas would be much appreciated, thanks.

    Read the article

  • Are programming languages and methods inefficient? (assembler and C knowledge needed)

    - by b-gen-jack-o-neill
    Hi, for a long time I have been thinking about and studying the output of the C compiler in assembler form, as well as CPU architecture. I know this may seem silly to you, but it seems to me that something is very inefficient. Please don't be angry if I am wrong and there is some reason I do not see for all these principles; I will be very glad if you tell me why it is designed this way. I actually truly believe I am wrong; I know the genius minds of the people who put PCs together had a reason to do so. What exactly, you ask? I'll tell you right away, using C as an example:

    1: Stack-based local-scope memory allocation. Typical local memory allocation uses the stack: just copy esp to ebp and then allocate all the memory via ebp. OK, I would understand this if you explicitly needed to allocate RAM based on the default stack values, but if I understand it correctly, modern OSes use paging as a translation layer between the application and physical RAM, where the address you use is translated before reaching the actual RAM byte. So why not just say 0x00000000 is int a, 0x00000004 is int b and so on, and access them just by mov 0x00000000, #10? You won't actually access memory blocks 0x00000000 and 0x00000004, but those that the OS set the paging tables to. Actually, since memory allocation via ebp and esp uses indirect addressing, "my" way would be even faster.

    2: Variable allocation duplication. When you run an application, the loader loads its code into RAM. When you create a variable or a string, the compiler generates code that pushes those values onto the top of the stack when they are created in main. So there is an actual instruction to do so, and the actual value in memory. Hence there are 2 copies of the same value in RAM: one in the form of an instruction, and a second in the form of the actual bytes in RAM. But why? Why not just decide, when declaring the variable, which memory block it will occupy, and then, whenever it is used, simply refer to that memory location?
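
    As a side note on point 1, here is a small illustration (written as C-style C++) of one reason locals cannot live at fixed addresses: every simultaneous activation of a function needs its own copy of its locals, and recursion (or threads) create many activations at once, so addresses have to be frame-relative (ebp/esp) rather than absolute. The function and values are made up for the example.

        #include <cstdio>

        int factorial(int n) {
            int local = n;   // a fresh 'local' exists for every active call
            std::printf("local for n=%d lives at %p\n", n, (void*)&local);
            return n <= 1 ? 1 : n * factorial(n - 1);
        }

        int main() {
            // Each recursive call prints a different address: the same variable
            // name occupies a different slot in each stack frame.
            std::printf("factorial(4) = %d\n", factorial(4));
            return 0;
        }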

    Read the article

  • Is there any way to provide a custom factory for entity creation in EF4 (.NET Framework)?

    - by ILICH
    There are a lot of posts about how cool POCO objects are and how Entity Framework 4 supports them. I decided to try it out with a domain-driven-design-oriented architecture and ended up with domain entities that have dependencies on services. So far so good. Imagine my Products are POCO objects. When I query for objects like this:

        NorthwindContext db = new NorthwindContext();
        var products = db.Products.ToList();

    EF creates the instances of the products for me. Now I want to inject dependencies into my POCO objects (products). The only way I see is to add some method to NorthwindContext that does something like the pseudo-code below:

        public List<Product> GetProducts() {
            var products = database.Products.ToList();
            container.BuildUp(products); //inject dependencies
            return products;
        }

    But what if I want to make my repository more flexible, like this?

        public ObjectSet<Product> GetProducts() { ... }

    So I really need a factory to make it more lazy and LINQ-friendly. Please help!

    Read the article

  • Is the REST support in Spring 3's MVC Framework production quality yet?

    - by glenjohnson
    Hi all, since Spring 3 was released in December last year, I have been trying out the new REST features in the MVC framework for a small commercial project involving a few RESTful web services which consume XML and return XML views using JiBX. I plan to use either Hibernate or JDBC templates for the data persistence. As a Spring 2.0 developer, I have found Spring 3's (and 2.5's) new annotation-based way of doing things quite a paradigm shift, and have personally found some of the new MVC annotation features difficult to get up to speed with for non-trivial applications - as such, I am often having to dig for information in forums and blogs that is not apparent from going through the reference guide or the various Spring 3 REST examples on the web. For deadline-driven, production-quality, mission-critical applications implementing a RESTful architecture, should I be holding off on Spring 3 and rather using mature JSR 311 (JAX-RS) compliant frameworks like Restlet or Jersey for the REST layer of my code (together with Spring 2 / 2.5 to tie things together)? I had no problems using Restlet 1.x in a previous project and it was quite easy to get up to speed with (no magic tricks behind the scenes), but when starting my current project it initially looked like the new REST stuff in Spring 3's MVC framework would make life easier. Do any of you have any advice to give on this? Does anyone know of any commercial / production-quality projects using, or having successfully delivered with, the new REST support in Spring 3's MVC framework? Many thanks, Glen

    Read the article

  • C++ & proper TDD

    - by Kotti
    Hi! I recently tried developing a small-sized project in C#, and during the whole project our team used the test-driven development (TDD) technique (xUnit, Moq). I really think this was awesome, because (when paired with C#) this approach allowed us to relax when coding, relax when designing and relax when refactoring. I suspect that all this TDD stuff actually simplifies the coding process and, well, it allowed me (eventually) to get the same result with fewer brain cells working. Right after that I tried using TDD paired with C++ (I used the Google Test and Google Mock libraries), and, I don't know why, but I actually think that TDD here was a step back in terms of rapid application development. I had moments when I had to spend huge amounts of time thinking about my tests, building proper mocks, rebuilding them and swearing at my monitor. And, well, I obviously can't ask something like "what did I do wrong?" or "what was wrong with my approach?", because I don't know what to describe. But if there are people who are used to TDD in C++ (and probably C# too), could you please advise me how to do this properly? Framework recommendations, architectural approaches, plain coding advice - if you are experienced in TDD & C++, please respond.
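
    For reference, the usual C++ TDD shape with Google Test / Google Mock looks something like the sketch below: depend on an interface, mock it in the test, assert on the interaction. The PaymentGateway/Account names are invented for the example, and the MOCK_METHOD macro assumes a reasonably recent Google Mock.

        #include <gtest/gtest.h>
        #include <gmock/gmock.h>

        class PaymentGateway {                           // the seam the test can substitute
        public:
            virtual ~PaymentGateway() = default;
            virtual bool Charge(int cents) = 0;
        };

        class Account {
        public:
            explicit Account(PaymentGateway& gw) : gw_(gw) {}
            bool Buy(int cents) { return cents > 0 && gw_.Charge(cents); }
        private:
            PaymentGateway& gw_;
        };

        class MockGateway : public PaymentGateway {
        public:
            MOCK_METHOD(bool, Charge, (int cents), (override));
        };

        TEST(AccountTest, ChargesGatewayForPositiveAmounts) {
            MockGateway gw;
            EXPECT_CALL(gw, Charge(500)).WillOnce(::testing::Return(true));
            Account account(gw);
            EXPECT_TRUE(account.Buy(500));
        }

        TEST(AccountTest, RejectsNonPositiveAmountsWithoutChargingGateway) {
            MockGateway gw;
            EXPECT_CALL(gw, Charge(::testing::_)).Times(0);
            Account account(gw);
            EXPECT_FALSE(account.Buy(0));
        }

        int main(int argc, char** argv) {
            ::testing::InitGoogleMock(&argc, argv);      // also initialises Google Test
            return RUN_ALL_TESTS();
        }

    Keeping mocks at architectural seams like this, rather than mocking every class, is usually what stops the C++ version from turning into mock-maintenance work.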

    Read the article

  • Android Signal analysis + some filters.

    - by Profete162
    Hello. As the World Cup is the main sporting event and the vuvuzelas are the most annoying sound in the world, I had the idea of removing them for good after reading this news article (http://www.popsci.com/diy/article/2010-06/simple-software-can-filter-out-vuvuzela-whine), which tells us that the sound has frequency components at 233 Hz plus 466, 932 and 1864 Hz. I have already made a lot of Android applications by myself, but have never touched the signal analysis and filtering part, so here are a few questions. I am not asking for precise answers, but maybe links and tutorials so I can find something to work on. I guess a new Android phone has the CPU power to do real-time filtering. 1) How can I intercept the sound coming from the jack microphone / line-in plug (I plan to link my TV to my phone with a jack-to-jack cable)? My question is purely about software and coding; I have all the wires and adapters to plug a jack into my Android phone's line-in. 2) Are there some Fourier analysis libraries? Can I take Java libraries from the web and import them into my Android project? I really apologize that my question is not very precise, but I think this would be something great. Thank you for your answers.
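
    On the filtering side, the standard tool for cutting a single frequency is a biquad notch filter; the C++ sketch below (RBJ audio-EQ-cookbook coefficients) attenuates one vuvuzela partial, and in practice you would cascade one notch per frequency (233, 466, 932, 1864 Hz). The 44.1 kHz sample rate and the Q value are assumptions for the example, and the same few lines port directly to Java for use on Android.

        #include <algorithm>
        #include <cmath>
        #include <cstdio>

        class Notch {
        public:
            Notch(double fs, double f0, double q) {
                const double pi = 3.14159265358979323846;
                const double w0 = 2.0 * pi * f0 / fs;
                const double alpha = std::sin(w0) / (2.0 * q);
                const double a0 = 1.0 + alpha;
                b0_ = 1.0 / a0;
                b1_ = -2.0 * std::cos(w0) / a0;
                b2_ = 1.0 / a0;
                a1_ = -2.0 * std::cos(w0) / a0;
                a2_ = (1.0 - alpha) / a0;
            }
            double process(double x) {                   // one sample in, one sample out
                const double y = b0_ * x + b1_ * x1_ + b2_ * x2_ - a1_ * y1_ - a2_ * y2_;
                x2_ = x1_; x1_ = x;
                y2_ = y1_; y1_ = y;
                return y;
            }
        private:
            double b0_, b1_, b2_, a1_, a2_;
            double x1_ = 0, x2_ = 0, y1_ = 0, y2_ = 0;
        };

        int main() {
            const double fs = 44100.0;                   // assumed sample rate
            const double pi = 3.14159265358979323846;
            Notch notch(fs, 233.0, 30.0);                // narrow notch centred on 233 Hz
            // Feed one second of a pure 233 Hz tone through the notch: after the
            // initial transient the output should be close to silence.
            double peak = 0.0;
            for (int n = 0; n < 44100; ++n) {
                const double x = std::sin(2.0 * pi * 233.0 * n / fs);
                const double y = notch.process(x);
                if (n > 22050) peak = std::max(peak, std::fabs(y));
            }
            std::printf("residual peak of the 233 Hz tone after the notch: %f\n", peak);
        }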

    Read the article

  • Socket Read In Multi-Threaded Application Returns Zero Bytes or EINTR (-1)

    - by user309670
    Hi. I've been a C coder for a while now - neither a newbie nor an expert. I have a certain daemonized application in C on PPC Linux. I use PHP's socket_connect as a client to connect to this service locally. The server uses epoll for concurrent connections via a Unix socket. A user-submitted string is parsed for certain characters/words using strstr(), and if they are found, the server spawns 4 joinable threads that contact different websites simultaneously. I use socket, connect, write and read to interact with the webservers via TCP on port 80 in each thread. All connections and writes seem successful. Reads from the webserver sockets fail, however, with either (A) the other 3 threads seeming to hang, and only one thread returning -1 with errno set to 104 (the responding thread takes something like 10 minutes - an eternity :-( ; I read somewhere that 104, "connection reset by peer", suggests the remote end dropped the connection), or (B) 0 bytes from 3 threads, with only 1 of the 4 threads actually returning some data. Isn't socket read/write thread-safe? I otherwise use thread-safe (and reentrant) libc functions such as strtok_r, gethostbyname_r, etc. I doubt that the webhosts are actually resetting the connection, because when I run a single-threaded standalone version (everything else equal) everything works perfectly. There's a second problem too (oops): I can't write back to the client that connects to my epoll-ed Unix socket. My daemon application will hang and hog the CPU at 100% forever, yet nothing is written to the client's end. I'm sure the client (a very typical PHP socket application) hasn't closed the connection when this is happening - no error(s) detected either. I cannot figure out what is wrong, even with Valgrind or GDB.
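
    As background for the EINTR / zero-byte part of the question, a robust read loop on a connected socket usually has to handle three cases explicitly: a signal interrupting the call (EINTR: retry), the peer closing (0: stop), and partial reads (keep going). A sketch, with the helper name being an invented example:

        #include <errno.h>
        #include <stddef.h>
        #include <sys/types.h>
        #include <unistd.h>

        /* Reads until the buffer is full, the peer closes, or a hard error occurs.
           Returns the number of bytes read, or -1 with errno set on a real error.
           read() on distinct fds is thread-safe, so per-thread sockets need no locking. */
        ssize_t read_all(int fd, char *buf, size_t len) {
            size_t got = 0;
            while (got < len) {
                ssize_t n = read(fd, buf + got, len - got);
                if (n > 0) {
                    got += (size_t)n;          /* partial read: keep going            */
                } else if (n == 0) {
                    break;                     /* orderly shutdown by the peer        */
                } else if (errno == EINTR) {
                    continue;                  /* interrupted by a signal: just retry */
                } else {
                    return -1;                 /* real error, e.g. ECONNRESET (104)   */
                }
            }
            return (ssize_t)got;
        }

    An HTTP response on port 80 is only complete when the headers/Content-Length (or the peer's close) say so, so the loop's exit conditions matter as much as the single read call.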

    Read the article

  • Componentizing complex functionality in an MVC web app

    - by NXT
    Hi Everyone, This is question about MVC web-app architecture, and how it can be extended to handle componentizing moderately complex units of functionality. I have an MVC style web-app with a customer facing credit card charge page. I've been asked to allow the admins to enter credit card payments as well, for times when credit cards are taken over the phone. The customer facing credit card charge section of the website is currently it's own controller, with approximately 3 pages and a login. That controller is responsible for: Customer login credential authentication Credit card data collection Calling a library to do the actual charge. reporting the results to the user. I would like to extract the card data collection pages into a component of some kind so that I can easily reuse the code on the admin side of the app. Right now my components are limited to single "view" pages with PHP style embedded Perl code. This is a simple, custom MVC framework written in Perl. Right now, controllers are called directly from the framework to service web requests. My idea is to allow controllers to be called from other controllers, so that I can componentize more complex functionality. For simplicity I think I prefer composition over inheritance, even though it will require writing a bunch of pass-through methods (actions). Being Perl, I could in theory do multiple inheritance. I'm wondering if anyone with experience in other MVC web frameworks can comment on how this sort of thing is usually done. Thank you.

    Read the article

  • Trying not to need two separate solutions for an x86 and an x64 program

    - by Sean Anderson
    Hi all, I have a program which needs to function in both an x86 and an x64 environment. It is using Oracle's ODBC drivers, and I have a reference to Oracle.DataAccess.DLL. This DLL is different depending on whether the system is x64 or x86, though. Currently I have two separate solutions and I am maintaining the code in both. This is atrocious. I was wondering what the proper solution is. I have my platform set to "Any CPU", and it is my understanding that VS should compile the DLL to an intermediate language such that it should not matter whether I use the x86 or x64 version. Yet if I attempt to use the x64 DLL I receive the error "Could not load file or assembly 'Oracle.DataAccess, Version=2.102.3.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format." I am running on a 32-bit machine, so the error message makes sense, but it leaves me wondering how I am supposed to efficiently develop this program when it needs to work on x64. Thanks.

    Read the article

  • Upgraded to Xcode 4 -- Endless stream of duplicate symbol errors causing build errors

    - by D-Nice
    Everything was working perfectly fine in Xcode 3 yesterday before I upgraded. So I completed the upgrade, restarted my computer, and opened my old project. I had to reconfigure a few settings like the header paths so that I could begin to compile. I'm using AdWhirl for ad mediation, and at this point my errors begin to read something like:

        duplicate symbol _OBJC_METACLASS_$_SBJSON in
        /Users/Admin/Desktop/TMapLiteAdwhirl/AdWhirl/MMSDK/libMMSDK.a(SBJSON.o) and
        /Users/Admin/Library/Developer/Xcode/DerivedData/TruxMapLite-bgpylibztethnlhkfkdumpvrjvgy/Build/Intermediates/TruxMapLite.build/Debug-iphoneos/TruxMapLite.build/Objects-normal/armv6/SBJSON.o
        for architecture armv6

    The library it's referring to is the SDK for one of the ad networks I'm including in AdWhirl. Both of the 'duplicate symbols' refer to the SAME FILE, but they use different paths. If I still had Xcode 3, I would simply try excluding these libraries from the build path, but I have no idea how that can be done in Xcode 4. I've tried everything, down to deleting the library and all associated files from my project, but when I do this I simply get the same type of error for a different library in the AdWhirl directory. This is incredibly frustrating because before the upgrade everything was working smoothly and I was prepared to submit my binary. If anyone has any advice, I'd be more than happy to give it a try. Thanks!

    Read the article

  • What should be taught in a "Fundamentals of programming" course at university?

    - by Dervin Thunk
    I have started a new question (see here), because I think the topic is of importance in a more general form. The question is now: If you were a professor at a Computer Science Dept. in some university, what would make it into your course? This is a programming course, second term, first year computer science/computer engineering. Remember you have a limited amount of time, and students are of different levels of competence, and some may be scientists, but some will also go on to be programmers in companies of different kinds. You have to cater to all. Bonus: What language? (Although see this question for my current thoughts about this...) Maybe you want to attach a course outline from some university? See here for an even more general question about this. Answer: I can't really summarize this post... I guess it was too subjective. However, it looks like we have to cover the history of computing up to a certain extent, computer architecture (memory, registers, whatever), C, and finally some basic algos and data structures in a problem solving fashion. This will be the bare bones of the course. Thanks all. I will accept the most voted up answer to close the thread, as it should be done.

    Read the article
