Search Results

Search found 14169 results on 567 pages for 'parallel programming'.

Page 116 of 567

  • PHP programming

    - by HARSHA
    Hi, I am learning PHP. I downloaded XAMPP, and Apache and MySQL are running properly in the XAMPP Control Panel. I tried a simple "hello world" program: I created a new folder in htdocs and saved my program in that folder with a .php extension. But when I run the program, it shows the following error:

        Object not found!
        The requested URL was not found on this server. If you entered the URL
        manually please check your spelling and try again. If you think this is
        a server error, please contact the webmaster.
        Error 404
        localhost
        18-5-2010 11:51:44
        Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l
        mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1
        mod_perl/2.0.4 Perl/v5.10.1

    Read the article

  • SHLoadImageFile(L"\\Program Files\\TrainingApp\\background.png"); what's that L in the argument for?

    - by ashishsony
    Hi, I've been working with C++ on Linux for the past two years and recently switched to Windows C++ programming. Can anyone tell me what that L is doing in the argument of this function call?

        SHLoadImageFile(L"\\Program Files\\TrainingApp\\background.png");

    Also, viewing some sample code in MSVC++, I came across hundreds of typedefs, like LPARAM (typedef LONG_PTR LPARAM, where LONG_PTR is in turn typedef __w64 long) and WPARAM (typedef UINT_PTR WPARAM), so there is a lot of chained typedefing. I never saw this much typedef chaining when programming C++ on Linux with GCC. What I want to say is that it just creates more confusion for Windows application programming; in Linux application programming with frameworks like Qt, such things are rarely used. So is there a specific purpose to typedefing again and again in MSVC++? For example, there are typedefs like typedef int BOOL; what is the use of this when a normal bool is already available? There are a hundred other cases I've come across where just deciding which data type to use becomes difficult, and it makes pre-written code in this fashion harder to understand too. Thanks.
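
    For reference, a minimal sketch of what the L prefix means (it is a property of the string literal itself, not anything specific to SHLoadImageFile):

        #include <windows.h>   // for the LPCWSTR typedef used by Win32 "W" APIs

        // The L prefix makes a wide-character string literal: an array of
        // wchar_t instead of char. Win32 Unicode APIs take LPCWSTR, which is
        // effectively a typedef for const wchar_t*.
        const char    *narrow = "background.png";   // type: const char*
        const wchar_t *wide   = L"background.png";  // type: const wchar_t*
        LPCWSTR        alias  = wide;               // same pointer, Win32 spelling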

    Read the article

  • What are the most interesting equivalences arising from the Curry-Howard Isomorphism?

    - by Tom
    I came upon the Curry-Howard Isomorphism relatively late in my programming life, and perhaps this contributes to my being utterly fascinated by it. It implies that for every programming concept there exists a precise analogue in formal logic, and vice versa. Here's an "obvious" list of such analogies, off the top of my head:

        program/definition       | proof
        type/declaration         | proposition
        inhabited type           | theorem
        function                 | implication
        function argument        | hypothesis/antecedent
        function result          | conclusion/consequent
        function application     | modus ponens
        recursion                | induction
        identity function        | tautology
        non-terminating function | absurdity
        tuple                    | conjunction (and)
        disjoint union           | exclusive disjunction (xor)
        parametric polymorphism  | universal quantification

    So, to my question: what are some of the more interesting/obscure implications of this isomorphism? I'm no logician so I'm sure I've only scratched the surface with this list. For example, here are some programming notions for which I'm unaware of pithy names in logic:

        currying | "((a & b) => c) iff (a => (b => c))"
        scope    | "known theory + hypotheses"

    And here are some logical concepts which I haven't quite pinned down in programming terms:

        primitive type?        | axiom
        set of valid programs? | theory
        ?                      | disjunction (or)
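
    For a flavor of the correspondence in code, here is a small sketch in C++17 terms (an illustration only, not canonical; note that in the usual propositions-as-types reading, the disjoint union / sum type is matched with inclusive disjunction):

        #include <utility>
        #include <variant>

        // Types as propositions, values as proofs.
        template <typename A, typename B> using And = std::pair<A, B>;     // conjunction
        template <typename A, typename B> using Or  = std::variant<A, B>;  // disjunction (inclusive)

        // A proof of (A and B) => A: the first projection.
        template <typename A, typename B>
        A and_elim_left(And<A, B> ab) { return ab.first; }

        // A proof of A => (A or B): inject into the left alternative.
        template <typename A, typename B>
        Or<A, B> or_intro_left(A a) { return Or<A, B>(std::in_place_index<0>, std::move(a)); }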

    Read the article

  • How can I be sure that professional programming is not for me?

    - by user17766
    Hello everybody. I love programming, developing hobby projects, and learning new concepts, but I am struggling more and more in my current job despite having learned many things well. I can hardly even understand the tasks assigned to me, and I keep asking myself why it is so hard for me. Maybe it is not my fault? Our architect doesn't spend enough time explaining the complicated sides of the project to us, or maybe I am just not smart enough to understand it quickly. Does our architect even know what kind of hell he is creating? Seeing three levels of generic types and four or five levels of generic inheritance in the domain model objects really makes me think so; it looks more like abusing concepts than reducing complexity. I suspect he had never worked on such a big project before, and now he is getting lost in its problems. Maybe I am not at the right company? Maybe I am not a good programmer? Maybe I am really stupid? Being good at programming concepts is apparently not enough to deal with a big project's complications, so should someone tell me that even a good programmer still has to put in a lot of effort to adapt to any big project? I also had some bad experiences at my previous job. My professional experience is only a few months, but I spent two years learning and coding for fun, and I can honestly say I have solid skills in OOP, design patterns, and coding standards, and deep knowledge of the language I currently use. Sometimes I think about leaving professional programming and working some lame job while doing programming just as a hobby. Waiting for your suggestions and insights.

    Read the article

  • Is there a resource that explains the benefits of layered programming?

    - by P.Brian.Mackey
    Some developers I know favor what I would call a procedural programming style. I recognize that procedural programming has its uses, albeit not in the business application world of .NET programming. So let's say we have a WinForms application with a button-click event. The button-click handler does everything from UI configuration to the database call and data manipulation, so you end up with a method that is hundreds of lines of code long. Beyond the fact that this code can't be considered testable for various reasons, this style of programming is fragile to change. I can talk about OO, anti-patterns, etc. The problem is that any distinct topic I can dream up requires a great deal of explanation to understand the potential benefits. Outside of finding a new job (lots of businesses program this way), how can I teach these kinds of developers to write better code? Obviously we can't sit around a round table and discuss pros and cons all day, due to time constraints and the real work that has to be done, although training, and intense training at that, is the only fix I can think of for these problems. Not that I write perfect code; I most certainly do not. But I do believe there are certain best practices that should be followed as a rule, e.g. OO in the context of .NET. The most common excuse I hear is "we can't write code fast enough if we do it like that".

    Read the article

  • Can One Get a Solid Programming Foundation Without Going To College/University?

    - by Daniel
    First, I have already searched the site and read all the previous "self-taught vs. college" topics. The majority of the answers defended going to college as the best choice, for two main reasons:

    1. Going to college gives you the paper, which is essential to landing jobs, especially in tough economic times.
    2. Going to college gives you a solid programming base, teaching you the principles that will be essential regardless of the language/path you take afterwards.

    Here comes my question: I am not worried about reason 1 at all, because I already have my own company (I build websites / do affiliate marketing) and a stable financial situation, so I am pretty sure I won't need to look around for a job. I am worried about reason 2, though. That is, I want to make sure I'll have as solid a programming foundation as anyone else out there, and I am wondering whether that is possible with self-learning. Suppose I take my time to study the very basics, like discrete maths, algorithm design, programming logic, computer architecture, assembly, C programming, databases and data structures, mostly using books, online resources and lots of coding. Say I spend 1-2 years covering those basics. Do you think my foundation would be solid, or would it still be lacking in comparison to someone who went to college?

    Read the article

  • Ideas on frameworks in .NET that can be used for job processing and notifications

    - by Rajat Mehta
    Scenario: We have one instance of a WCF Windows service which exposes contracts like AddNewJob(Job job), GetJobs(JobQuery query), etc. This service is consumed by 70-100 instances of a Windows Forms based .NET client app. Typically the service receives 50-100 inbound calls per minute to add or query jobs, which are stored in a table on SQL Server. The same service is also responsible for processing these jobs in real time: it queries the database every 5 seconds, picks up the queued jobs, and starts processing them. A job passes through the states Queued, Pre-processing, Processing, Post-processing, Completed, Failed, and Locked. Another responsibility of this service is to update all clients on every state change of every job. This means almost 200+ callbacks to clients per second. Question: This whole implementation is done using WCF duplex bindings and works perfectly fine with a small number of parallel jobs. The problem arises when we scale it up to 1000 jobs at a time: the notifications don't work as expected, it leads to memory overflow, etc. Is there any standard framework that can provide a clean infrastructure for handling this scenario? Apologies for the long explanation!

    Read the article

  • Stack usage with MMX intrinsics and Microsoft C++

    - by arik-funke
    I have an inline assembler loop that cumulatively adds elements from an int32 data array with MMX instructions. In particular, it uses the fact that the eight MMX registers can accommodate 16 int32s to calculate 16 different cumulative sums in parallel. I would now like to convert this piece of code to MMX intrinsics, but I am afraid that I will suffer a performance penalty, because one cannot explicitly instruct the compiler to use the 8 MMX registers to accumulate the 16 independent sums. Can anybody comment on this and maybe propose a solution for converting the piece of code below to intrinsics?

    == inline assembler (only the part within the loop) ==

        paddd mm0, [esi+edx+8*0]   ; add first & second pair of int32 elements
        paddd mm1, [esi+edx+8*1]   ; add third & fourth pair of int32 elements
        paddd mm2, [esi+edx+8*2]
        paddd mm3, [esi+edx+8*3]
        paddd mm4, [esi+edx+8*4]
        paddd mm5, [esi+edx+8*5]
        paddd mm6, [esi+edx+8*6]
        paddd mm7, [esi+edx+8*7]   ; add 15th & 16th pair of int32 elements

    Here esi points to the beginning of the data array, edx provides the offset in the data array for the current loop iteration, and the data array is arranged such that the elements for the 16 independent sums are interleaved.
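
    For comparison, a rough intrinsics sketch of the same loop (names like 'data' and 'count' are assumptions; whether the eight accumulators actually stay in the mm registers is up to the compiler's register allocator, which is exactly the concern raised above):

        #include <cstddef>
        #include <mmintrin.h>   // MMX intrinsics (MSVC: 32-bit x86 targets only)

        // Eight __m64 accumulators, each holding a pair of int32 partial sums,
        // mirroring the paddd loop: 16 interleaved sums, 16 int32s per iteration.
        void accumulate(const int *data, std::size_t count, __m64 acc[8])
        {
            const __m64 *p = reinterpret_cast<const __m64 *>(data);
            for (std::size_t i = 0; i < count / 16; ++i)
                for (int r = 0; r < 8; ++r)
                    acc[r] = _mm_add_pi32(acc[r], p[i * 8 + r]);
            _mm_empty();   // clear MMX state so the x87 FPU is usable again
        }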

    Read the article

  • How do I optimize this postfix expression tree for speed?

    - by Peter Stewart
    Thanks to the help I received in this post, I have a nice, concise recursive function to traverse a tree in postfix order:

        deque<char*> d;

        void Node::postfix()
        {
            if (left != __nullptr) { left->postfix(); }
            if (right != __nullptr) { right->postfix(); }
            d.push_front(cargo);
            return;
        };

    This is an expression tree. The branch nodes are operators randomly selected from an array, and the leaf nodes are values or the variable 'x', also randomly selected from an array:

        char *values[10] = {"1.0","2.0","3.0","4.0","5.0","6.0","7.0","8.0","9.0","x"};
        char *ops[4]     = {"+","-","*","/"};

    As this will be called billions of times during a run of the genetic algorithm of which it is a part, I'd like to optimize it for speed. I have a number of questions on this topic, which I will ask in separate postings. The first is: how can I get access to each 'cargo' as it is found? That is, instead of pushing 'cargo' onto a deque and then processing the deque to get the values, I'd like to start processing right away. I don't yet know about parallel processing in C++, but this would ideally be done concurrently on two different processors. In Python, I'd make the function a generator and access succeeding 'cargo's using .next(). But I'm using C++ to speed up the Python implementation. I'm thinking that this kind of tree has been around for a long time and somebody has probably optimized it already. Any ideas? Thanks
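
    One way to get at each 'cargo' as soon as it is found is to hand the traversal a callback instead of a container (a sketch, assuming the Node members are accessible; a consumer thread fed by the callback could add the concurrency later):

        // Invoke a caller-supplied functor on each cargo in postfix order,
        // instead of buffering everything in a global deque first.
        template <typename Visit>
        void postfix_visit(Node *n, Visit &&visit)
        {
            if (n->left  != __nullptr) postfix_visit(n->left,  visit);
            if (n->right != __nullptr) postfix_visit(n->right, visit);
            visit(n->cargo);   // process immediately, e.g. feed an evaluator stack
        }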

    Read the article

  • Thread management advice - Is TPL a good idea?

    - by Ian
    I'm hoping to get some advice on thread management, and hopefully on the Task Parallel Library, because I'm not sure I've been going down the correct route. It's probably best that I give an outline of what I'm trying to do. Given a Problem, I need to generate a Solution using a heuristic-based algorithm. I start off by calculating a base solution; I don't think this operation can be parallelised, so we don't need to worry about it. Once the initial solution has been generated, I want to trigger n threads which attempt to find a better solution. These threads need to do a couple of things:

    - They need to be initialized with a different 'optimization metric'. In other words, they are attempting to optimize different things, with a precedence level set within code. This means they all run slightly different calculation engines. I'm not sure if I can do this with the TPL.
    - If one of the threads finds a better solution than the current best-known solution (which needs to be shared across all threads), then it needs to update the best solution and force a number of other threads to restart (again, this depends on the precedence levels of the optimization metrics).
    - I may also wish to combine certain calculations across threads (e.g. keep a union of probabilities for a certain approach to the problem). This is probably more optional, though.

    The whole system obviously needs to be thread-safe, and I want it to run as fast as possible. I tried an implementation that involved managing my own threads and shutting them down etc., but it started getting quite complicated, and I'm now wondering if the TPL might be better. I'm wondering if anyone can offer any general guidance? Thanks...

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files utilizing on multi-co

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already, and the I/O device (a local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed out. I have more cores available, and in the future will likely have even more. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time goes). As I understand it, with most hash algorithms every new bit changes the entire result, so it is inherently challenging or impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs trend toward more cores and a leveling-off of clock speed, is there any way to improve the performance of file hashing (other than liquid-nitrogen-cooled overclocking), or is it inherently non-parallelizable?
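
    For a flavor of the tree-hashing idea (hash chunks independently in parallel, then hash the concatenation of the chunk digests), here is a sketch; std::hash stands in for a real digest such as SHA-256 purely for illustration and is in no way cryptographic:

        #include <future>
        #include <string>
        #include <vector>

        // Two-level tree hash: leaf digests computed in parallel, then combined
        // into a single root digest. Chunking and I/O are assumed already done.
        std::size_t tree_hash(const std::vector<std::string> &chunks)
        {
            std::vector<std::future<std::size_t>> leaves;
            for (const std::string &c : chunks)
                leaves.push_back(std::async(std::launch::async,
                    [&c] { return std::hash<std::string>{}(c); }));

            std::string combined;                 // concatenation of leaf digests
            for (auto &f : leaves)
                combined += std::to_string(f.get());
            return std::hash<std::string>{}(combined);   // root digest
        }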

    Read the article

  • OpenMP: Get total number of running threads

    - by Konrad Rudolph
    I need to know the total number of threads that my application has spawned via OpenMP. Unfortunately, the omp_get_num_threads() function does not work here, since it only yields the number of threads in the current team. However, my code runs recursively (divide and conquer, basically) and I want to spawn new threads as long as there are still idle processors, but no more. Is there a way to get around the limitations of omp_get_num_threads and get the total number of running threads? If more detail is required, consider the following pseudo-code that models my workflow quite closely:

        function divide_and_conquer(Job job, int total_num_threads):
            if job.is_leaf():   # Recurrence base case.
                job.process()
                return
            left, right = job.divide()
            current_num_threads = omp_get_num_threads()
            if current_num_threads < total_num_threads:   # (1)
                #pragma omp parallel num_threads(2)
                #pragma omp section
                divide_and_conquer(left, total_num_threads)
                #pragma omp section
                divide_and_conquer(right, total_num_threads)
            else:
                divide_and_conquer(left, total_num_threads)
                divide_and_conquer(right, total_num_threads)
            job = merge(left, right)

    If I call this code with a total_num_threads value of 4, the conditional annotated with (1) will always evaluate to true (because each thread team will contain at most two threads), and thus the code will always spawn two new threads, no matter how many threads are already running at a higher level. I am searching for a platform-independent way of determining the total number of threads that are currently running in my application.
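
    One portable workaround is to keep your own atomic count of live worker threads and only open a nested parallel region when a slot can be reserved (a sketch; OpenMP itself offers no standard query for the total thread count across nested teams):

        #include <atomic>

        // Application-wide thread budget, tracked manually.
        std::atomic<int> g_active_threads{1};   // the initial thread counts as one

        // Try to reserve one extra thread slot; true means "go parallel here".
        // The spawned branch must call g_active_threads.fetch_sub(1) when done.
        bool try_reserve_thread(int total_num_threads)
        {
            int cur = g_active_threads.load();
            while (cur < total_num_threads)
                if (g_active_threads.compare_exchange_weak(cur, cur + 1))
                    return true;
            return false;   // at capacity: recurse sequentially instead
        }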

    Read the article

  • The best way to predict performance without actually porting the code?

    - by ardiyu07
    I believe there are people who have had the same experience as me, where one must give an (estimated) performance report for porting a program from sequential to parallel on some designated multicore hardware, with very little time given. For instance, if a 10K LoC sequential program executes on an Intel i7-3770K (not vectorized) in 100 ms, how long would it take to run if one parallelized the code for a Tesla C2075 with NVIDIA CUDA, given that all kinds of parallelizing optimization techniques were applied? (But you're only given 2-4 days to report the performance, and assume that you don't know the algorithm at all. Or perhaps it'd be safer to just assume that it's an impossible situation to finish the job.) Therefore, I'm wondering: what is most likely the fastest way to give such a performance report? Is it safe to calculate solely from the hardware's capability, such as peak GFLOPS and memory bandwidth? Is there a mathematical way to calculate it? If there is, please prove your method with the corresponding problem description and algorithm, and also the target hardware's specifications. Or perhaps there already exists a tool to (roughly) estimate code porting? (Please don't answer: 'kill yourself is the fastest way.')
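
    One quick first-order sanity check (a back-of-the-envelope sketch, not a substitute for actually porting) is the roofline-style lower bound: if the kernel performs F floating-point operations and moves B bytes between memory and the chip, then on hardware with peak throughput P and memory bandwidth W the runtime cannot beat

        T >= max(F / P, B / W)

    For example, a hypothetical kernel needing 50 GFLOP of work and 30 GB of traffic on a card with a claimed 1 TFLOP/s peak and 150 GB/s bandwidth cannot finish faster than max(0.05 s, 0.2 s) = 0.2 s, so it is bandwidth-bound before any tuning is even considered. The bound says nothing about how close a real port will get, which is exactly the hard part.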

    Read the article

  • [LAZARUS] Accessing a remote SQL Server database in WinCE programming

    - by Dels
    I'm programming using Lazarus (the Free Pascal IDE, Delphi-like), and I have a problem when I need to connect to a remote SQL Server database on the network. My questions:

    - Is there any way to connect to a remote SQLdb from Lazarus?
    - What connector type is required for SQL Server 2005?
    - Is there any ODBC driver available for Windows CE (Windows Mobile 5/6)? (If so, I could use TODBCConnection.)

    I have already searched and asked on the Lazarus community forum but didn't get any response.

    Read the article

  • starting smartcard programming

    - by hyperboreean
    How does one get started with smartcard programming? I am asking about the whole toolkit needed in order to get started: books, tutorials, hardware, etc. I am planning on playing around with a couple of smartcard programmers, and I am pretty new to this field. Edit: I am mostly interested in programmers that play nicely with Unix-like operating systems. Also, I am not sure how this works, but I would like to program them in C/C++.
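
    On Unix-like systems the usual entry point is PC/SC via the pcsc-lite library (a minimal sketch, assuming the pcscd daemon is running; compile against pcsc-lite and link with -lpcsclite):

        #include <winscard.h>   // provided by pcsc-lite on Unix-like systems
        #include <cstdio>

        // Establish a PC/SC context and print the first attached card reader.
        int main()
        {
            SCARDCONTEXT ctx;
            if (SCardEstablishContext(SCARD_SCOPE_SYSTEM, nullptr, nullptr, &ctx)
                    != SCARD_S_SUCCESS)
                return 1;

            char readers[1024];
            DWORD len = sizeof readers;
            if (SCardListReaders(ctx, nullptr, readers, &len) == SCARD_S_SUCCESS)
                std::printf("first reader: %s\n", readers);  // multi-string list

            SCardReleaseContext(ctx);
            return 0;
        }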

    Read the article

  • book "Programming Role Playing Games with DirectX 2nd edition" and newer DirectX api

    - by numerical25
    I got the book "Programming Role Playing Games with DirectX, 2nd edition", and I notice there are things in the book that are now considered deprecated. There is a whole section on DirectPlay, and as much as I would like to skip that section, I am afraid doing so might break the entire engine the author is trying to build. So I was just curious: even though DirectPlay is considered deprecated by XNA and DirectX 10, is it still possible to use it in DirectX 9?

    Read the article

  • How to speed up calculation of length of longest common substring?

    - by eSKay
    I have two very large strings and I am trying to find their Longest Common Substring. One way is using suffix trees (supposed to have very good complexity, though a complex implementation), and another is the dynamic programming method (both are mentioned on the Wikipedia page linked above).

    Using dynamic programming: The problem is that the dynamic programming method has a huge running time (the complexity is O(n*m), where n and m are the lengths of the two strings). What I want to know (before jumping in to implement suffix trees): is it possible to speed up the algorithm if I only want to know the length of the common substring (and not the common substring itself)?
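
    For the record: restricting to the length only does not beat O(n*m) time with plain dynamic programming, but it does cut the memory from a full n*m table to two rows, since row i depends only on row i-1 (a sketch):

        #include <algorithm>
        #include <string>
        #include <vector>

        // Length of the longest common substring in O(n*m) time, O(m) space.
        std::size_t lcs_substring_length(const std::string &a, const std::string &b)
        {
            std::vector<std::size_t> prev(b.size() + 1, 0), cur(b.size() + 1, 0);
            std::size_t best = 0;
            for (std::size_t i = 1; i <= a.size(); ++i) {
                for (std::size_t j = 1; j <= b.size(); ++j) {
                    cur[j] = (a[i - 1] == b[j - 1]) ? prev[j - 1] + 1 : 0;
                    best = std::max(best, cur[j]);
                }
                std::swap(prev, cur);   // cur is fully overwritten next pass
            }
            return best;
        }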

    Read the article

  • What are the best programming articles?

    - by lillq
    Part of being a good software developer is keeping current with what people are saying in the community. There are many good articles out there on the Internet about the wide subject of computer programming. What articles have you found worth your time? Please provide the article's title, author and a link if possible.

    Read the article

  • What's the Easiest Way to Learn Programming?

    - by Chris
    If a friend of yours wanted to get into development and didn't have any experience, what would you suggest? What languages/resources would you suggest to break into programming? With all of the technologies and buzzwords out right now, where should one even start explaining this stuff to people?

    Read the article

  • Linux Bluetooth programming

    - by sfactor
    I am making a desktop application to connect with an embedded device. I was going to use Windows, but due to the lack of proper examples and documentation I decided to go with Linux BlueZ development instead. Can someone suggest a good resource for getting started with programming for BlueZ? I found some MIT documentation, but that was about it.
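
    For a taste of the socket-level BlueZ API, here is a minimal RFCOMM client sketch (link with -lbluetooth; the device address and channel are placeholders):

        #include <bluetooth/bluetooth.h>
        #include <bluetooth/rfcomm.h>
        #include <sys/socket.h>
        #include <unistd.h>

        // Open an RFCOMM connection to a remote device and send four bytes.
        int main()
        {
            int s = socket(AF_BLUETOOTH, SOCK_STREAM, BTPROTO_RFCOMM);
            sockaddr_rc addr{};
            addr.rc_family  = AF_BLUETOOTH;
            addr.rc_channel = 1;                           // placeholder channel
            str2ba("01:23:45:67:89:AB", &addr.rc_bdaddr);  // placeholder address

            int ok = connect(s, reinterpret_cast<sockaddr *>(&addr), sizeof addr);
            if (ok == 0)
                write(s, "ping", 4);                       // sample payload
            close(s);
            return ok == 0 ? 0 : 1;
        }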

    Read the article

  • Introduction to 3D Graphics Programming

    - by Jacob Relkin
    Hi everyone! I'm a self-taught programmer with absolutely nil 3D programming experience. A client of mine has related to me an idea for an iPhone app that requires OpenGL ES 2.0 for its inherently complex 3D structure and animations. Where do I start on this (albeit long) journey toward OpenGL ES competence? I'm willing to put a tremendous amount of time and effort into learning, and if I could please get some pointers on where to start and what to expect, that would be awesome!

    Read the article
