Search Results

Search found 14169 results on 567 pages for 'parallel programming'.

  • On what platform did these popular programming languages originate?

    - by speciousfool
    Perhaps you know the story of HTTP and HTML being developed on a NeXT computer. I am curious which platform served as the first home for these programming languages: Ada, C, C++, C#, D, Erlang, Fortran, Haskell, Java, JavaScript, Lisp, Logo, MATLAB, ML, Perl, PHP, Prolog, Python, R, Ruby, Scheme, SQL, Smalltalk. I thought it might be interesting to reflect on how the machine and operating environment led to different design decisions, or to see whether some architecture or operating system variant was particularly fruitful for programming language development. A question for the historians among us.

    Read the article

  • Linux network programming. What can I start with?

    - by Negai
    Hi everyone! I've recently become interested in Linux network programming and have read quite a bit (Beej's Guide to Network Programming), but now I'm confused. I would like to write something to get some practice, but I don't know what exactly. Could you please recommend a couple of projects to start with? Thanks.

    Read the article

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method - e.g. not PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an Explain Plan is a set of producers and a set of consumers. The producers scan the tables in the join. If there are two tables, the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator. That behavior means that if you choose a degree of parallelism of 4 to run such a query, Oracle will allocate 8 parallel processes. Of these 8 processes, 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done with scanning both sides of the join.

    In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers, indicated by PX SEND [line 5]. After receiving that data [line 4], the consumers do the actual join [line 3] and give the data to the QC [line 2]. By the way, the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true; the above is why you will see twice as many processes as the DOP.

    In a PWJ plan the consumers are not present. Instead of producing rows and handing them to different processes, a PWJ uses only a single set of processes. Each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all, the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let's ignore that (sort stuff). The important piece here is that the PWJ plan will typically be faster and, in terms of PX process count and resources, typically cheaper. You may want to look out for those plans and try to get them to appear a lot...

    CREDITS: credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general... You can see her talk about explaining the explain plan and other optimizer topics at ODTUG in Washington DC (June 27 - July 1), on the Optimizer blog, and at OpenWorld in San Francisco (September 19 - 23). Happy joining and hope to see you all at ODTUG and OOW...

    Read the article

  • 10 Best Programming Podcasts 2010 Edition

    - by mbcrump
    This list is in no particular order - just the 10 best programming podcasts that I have found so far.

    Stack Overflow Podcast - Jeff Atwood (of codinghorror.com) and Joel Spolsky (of joelonsoftware.com) discuss the development of their new programming community, StackOverflow.com. [This podcast hasn't been updated in a while, but it's always great to hear more from Jeff Atwood]

    Hanselminutes - Hanselminutes is a weekly audio talk show with noted web developer and technologist Scott Hanselman, hosted by Carl Franklin. Scott discusses utilities and tools, gives practical how-to advice, and discusses ASP.NET or Windows issues and workarounds. [This podcast has recently started talking about random topics like diabetes, plane travel and geek relationship tips. I am not sure if Scott is trying to move to a more mainstream audience or not]

    Herding Code - A weekly discussion featuring K. Scott Allen (odetocode.com), Kevin Dente, Scott Koon (lazycoder.com), and Jon Galloway. [Great all-around podcast that I would recommend to all]

    Deep Fried Bytes - Deep Fried Bytes is an audio talk show with a Southern flavor hosted by technologists and developers Keith Elder and Chris Woodruff. The show discusses a wide range of topics including application development, operating systems and technology in general. Anything is fair game if it plugs into the wall or takes a battery. [This is one that just keeps getting better]

    Dot Net Rocks - .NET Rocks! is an Internet audio talk show for Microsoft .NET developers. [One of the first, and usually very high quality content]

    Connected Show - A podcast covering new Microsoft technology for the developer community. The show is hosted by Dmitry Lyalin and Peter Laudati. [This and Polymorphic are two of my favorite podcasts - Dmitry is a great host and I would recommend this to all]

    Polymorphic Podcast - Object oriented development, architecture and best practices in .NET. [Craig is an ASP.NET MVP and a great presenter. His podcast is great, and it could only be better if he recorded it more often]

    ASP.NET Podcast - Wallace B. (Wally) McClure presents interviews and short technical talks on .NET technologies. [Has great information on ASP.NET, of course, as well as iPhone dev]

    Ruby on Rails Podcast - News and interviews about the Ruby language and the Rails web framework. [Even though I am not a Ruby programmer, I've found this podcast very interesting]

    Software Engineering Radio - Software Engineering Radio is a podcast targeted at the professional software developer. The goal is to be a lasting educational resource, not a newscast. Every ten days, a new episode is published that covers all topics in software engineering. Episodes are either tutorials on a specific topic or an interview with a well-known character from the software engineering world. All SE Radio episodes are original content; we do not record conferences or talks given in other venues. Each episode comprises two speakers to ensure a lively listening experience. SE Radio is an independent and non-commercial organization. [Another excellent podcast - I would recommend any programmer add it to his/her drive home]

    If I have missed something, please feel free to email me and it might make the 2011 list. =)

    Read the article

  • Low level programming - what's in it for me?

    - by back2dos
    For years I have considered digging into what I consider "low level" languages. For me this means C and assembly. However, I have had no time for this yet, nor has it EVER been necessary. Now, because I don't see any necessity arising, I feel I should either schedule some point in time to study the subject or drop the plan forever.

    My Position

    For the past 4 years I have focused on "web technologies", which may change, and I am an application developer, which is unlikely to change. In application development, I think usability is the most important thing. You write applications to be "consumed" by users. The more usable those applications are, the more value you have produced. In order to achieve good usability, I believe the following things are vital:

    Good design: Well-thought-out features accessible through a well-thought-out user interface.
    Correctness: The best design isn't worth anything if not implemented correctly.
    Flexibility: An application A should constantly evolve, so that its users need not switch to a different application B that has new features that A could implement. Applications addressing the same problem should not differ in features but in philosophy.
    Performance: Performance contributes to a good user experience. An application is ideally always responsive and performs its tasks reasonably fast (based on their frequency). The value of performance optimization beyond the point where it is noticeable by the user is questionable.

    I think low level programming is not going to help me with that, except for performance. But writing a whole app in a low level language for the sake of performance is premature optimization to me.

    My Question

    What could low level programming teach me that other languages wouldn't? Am I missing something, or is it just a skill that is of very little use for application development? Please understand that I am not questioning the value of C and assembly. It's just that in my everyday life, I am quite happy that all the intricacies of that world are abstracted away and managed for me (mostly by layers written in C/C++ and assembly themselves). I just don't see any concepts that could be new to me, only details I would have to stuff my head with. So what's in it for me?

    My Conclusion

    Thanks to everyone for their answers. I must say, nobody really surprised me, but at least now I am quite sure I will drop this area of interest until any need for it arises. To my understanding, writing assembly these days for the processors in today's CPUs is not only unnecessarily complicated, but risks resulting in poorer runtime performance than a C counterpart. Optimizing by hand is nearly impossible due to out-of-order execution (OOE), while you do not get all the kinds of optimizations a compiler can do automatically. Also, the code is either portable, because it uses a small subset of available instructions, or it is optimized, but then it probably works on one architecture only. Writing C is not nearly as necessary anymore as it was in the past. If I were to write an application in C, I would just as much use tested and established libraries and frameworks that would spare me implementing string copy routines, sorting algorithms and other kinds of stuff serving as exercises at university. My own code would execute faster at the cost of type safety. I am neither keen on reinventing the wheel in the course of normal app development, nor on trying to debug by looking at core dumps :D I am currently experimenting with languages and interpreters, so if there is anything I would like to publish, I suppose I'd port a working concept to C, although C++ might just as well do the trick. Again, thanks to everyone for your answers and your insight.

    Read the article

  • How to prevent parallel builds per build configuration across multiple Build Agents

    - by vanslly
    I have many build configurations in TeamCity, each servicing a large project. In the past, when a build was kicked off, the Build Agent could be busy for up to 20 minutes! In order to improve throughput I installed a second Build Agent on the same machine, so that if a build run kicked off by, say, Build Agent 1 keeps it busy for 20 minutes and someone from another project makes a change, then Build Agent 2 can do the build for the other project without needing to wait on the current build run to finish. All was well until two successive check-ins resulted in both Build Agents running a build for a single build configuration in parallel. Since some resources (IIS directories and databases) are shared, I don't want a single build configuration to run on both Build Agents in parallel. How can I ensure a build isn't triggered if a build is currently running for that build configuration on a different build agent? One way seems to involve environment variables and ensuring a 50/50 split of build configuration compatibility by Build Agent, but that seems a little clunky.

    Read the article

  • OpenMP - running things in parallel and some in sequence within them

    - by Sayan Ghosh
    Hi, I have a scenario like:

        for (i = 0; i < n; i++) {
          for (j = 0; j < m; j++) {
            for (k = 0; k < x; k++) {
              val = 2*i + j + 4*k;
              if (val != 0) {
                for (t = 0; t < l; t++) {
                  someFunction((i + t) + someFunction(j + t) + k*t);
                }
              }
            }
          }
        }

    Considering this is block A, I now have two more similar blocks in my code. I want to run them in parallel, so I used OpenMP pragmas. However, I am not able to parallelize it, because I am a tad confused about which variables would be shared and which private in this case. If the function call in the inner loop were an operation like sum += x, then I could have added a reduction clause. In general, how would one approach parallelizing code using OpenMP when there is a nested for loop, and then another inner for loop doing the main operation? I tried declaring a parallel region and then simply putting pragma fors before the blocks, but I am definitely missing a point there! Thanks, Sayan

    Read the article

  • Reading and writing in parallel

    - by Malfist
    I want to be able to read and write a large file in parallel, or if not in parallel, at least in blocks so that I don't use up so much memory. This is my current code:

        // Define memory stream which will be used to hold encrypted data.
        MemoryStream memoryStream = new MemoryStream();

        // Define cryptographic stream (always use Write mode for encryption).
        CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write);

        // Start encrypting.
        using (BinaryReader reader = new BinaryReader(File.Open(fileIn, FileMode.Open)))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read = 0;
            do
            {
                read = reader.Read(buffer, 0, buffer.Length);
                cryptoStream.Write(buffer, 0, read);
            } while (read == buffer.Length);
        }

        // Finish encrypting.
        cryptoStream.FlushFinalBlock();

        // Convert our encrypted data from a memory stream into a byte array.
        //byte[] cipherTextBytes = memoryStream.ToArray();

        // Write our memory stream to a file.
        memoryStream.Position = 0;
        using (BinaryWriter writer = new BinaryWriter(File.Open(fileOut, FileMode.Create)))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read = 0;
            do
            {
                read = memoryStream.Read(buffer, 0, buffer.Length);
                writer.Write(buffer, 0, read);
            } while (read == buffer.Length);
        }

        // Close both streams.
        memoryStream.Close();
        cryptoStream.Close();

    As you can see, it reads the entire file into memory, encrypts it, then writes it out. If I happen to be encrypting files that are very large (2GB+) it tends not to work, or at the very least, consumes ~97% of my memory. How could I do it in a more effective manner?
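
    One way to avoid holding the whole file in memory is to stream the encrypted bytes straight into the output file instead of a MemoryStream, processing the input in fixed-size chunks. The sketch below is one possible shape of that; it assumes an ICryptoTransform named encryptor and the same fileIn/fileOut variables as above, and is not a drop-in replacement:

        using System.IO;
        using System.Security.Cryptography;

        // Encrypt fileIn into fileOut one 1 MB block at a time.
        // Memory use stays at roughly one buffer regardless of file size.
        static void EncryptFile(string fileIn, string fileOut, ICryptoTransform encryptor)
        {
            using (FileStream input = File.OpenRead(fileIn))
            using (FileStream output = File.Create(fileOut))
            using (CryptoStream cryptoStream = new CryptoStream(output, encryptor, CryptoStreamMode.Write))
            {
                byte[] buffer = new byte[1024 * 1024];
                int read;
                while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                {
                    cryptoStream.Write(buffer, 0, read);
                }
                // Disposing the CryptoStream flushes the final block.
            }
        }

    Reading and writing do not happen truly in parallel here, but because the data is processed in blocks, the memory footprint no longer grows with the file size.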

    Read the article

  • A problem with Parallel.ForEach when initializing the conversation manager

    - by Adrakadabra
    I use MVC2 and NHibernate 2.1.2. In a controller class I call the ForEachParty method like this:

        OrganizationStructureService.ForEachParty<Department>(department, null,
            p => {
                p.AddParentWithoutRemovingExistentAccountability(domainDepartment,
                    AccountabilityTypeDbId.SupervisionDepartmentOfDepartment);
            },
            x => (!(x.AccountabilityType.Id == (int)AccountabilityTypeDbId.SupervisionDepartmentOfDepartment)));

        static public void ForEachParty<T>(Party party, PartyTypeDbId? partyType,
            Action<Party> action, Expression<Func<Accountability, bool>> expression = null) where T : Party
        {
            IList<Accountability> children = new List<Accountability>();
            IList<Accountability> acc = party.Children;
            if (party != null)
                action(party);
            if (partyType != null)
                acc = acc.Where(p => p.Child.PartyTypes.Any(c => c.Id == (int)partyType)).ToList();
            if (expression != null)
                acc = acc.AsQueryable().Where(expression).ToList();
            Parallel.ForEach(acc, p =>
            {
                if (partyType == null)
                    ForEachParty<T>(p.Child, null, action);
                else
                    ForEachParty<T>(p.Child, partyType, action);
            });
        }

    But just after executing the action in Parallel.ForEach, I don't know why, the conversation is getting closed and I see "current conversation is not initilized yet or its closed".
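
    As a hedged aside (plain .NET, not NHibernate or any specific conversation manager): if the "current conversation" is stored in thread-bound state, the thread-pool workers that Parallel.ForEach borrows never see the value that was set on the request thread, which is one common reason this pattern breaks. A minimal sketch of that effect:

        using System;
        using System.Threading.Tasks;

        class AmbientStateDemo
        {
            // Stand-in for a conversation/session bound to the current thread.
            [ThreadStatic]
            static string currentConversation;

            static void Main()
            {
                currentConversation = "initialized on the request thread";

                Parallel.ForEach(new[] { 1, 2, 3, 4 }, item =>
                {
                    // On pool threads this prints "(null)" - the ambient value did not follow us.
                    Console.WriteLine("item {0}: conversation = {1}",
                        item, currentConversation ?? "(null)");
                });
            }
        }

    If the conversation manager works this way, each Parallel.ForEach body would need its own conversation/session (or the recursion would have to stay on one thread) rather than relying on the one opened for the web request.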

    Read the article

  • Storage subsystem borking after server restart (all on a Parallel SCSI bus)

    - by Dat Chu
    I have a server (with a SCSI HBA) connected to two Promise VTrak M310p RAID enclosures on the same bus. Everything works fine until I have to restart my server. Once restarted, the server can no longer communicate with the enclosures: lots of read errors and bus resets. I have to turn off both enclosures, then turn off the server, then turn on the enclosures, then turn on the server for things to work. I don't believe this is the normal behavior; what could I be missing?

    Read the article

  • Oracle parameter array binding from C# executed in parallel and serially on different servers

    - by redir_dev_nut
    I have two Oracle 9i 64-bit servers, dev and prod. Calling a procedure from a C# app with parameter array binding, prod executes the procedure simultaneously for each value in the parameter array, but dev executes for each value serially. So, if the sproc does:

        select count(*) into cnt from mytable where id = 123;
        if cnt = 0 then
          insert into mytable (id) values (123);
        end if;

    and assuming the table initially does not have an id = 123 row, dev gets cnt = 0 for the first array parameter value, then 1 for each of the subsequent values. Prod gets cnt = 0 for all array parameter values and inserts id 123 for each. Is this a configuration difference, an illusion due to a speed difference, or something else?
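
    For context, a rough sketch of the kind of array binding being described, using ODP.NET; the connection string, parameter and procedure names here are illustrative, not taken from the question:

        using Oracle.DataAccess.Client;

        // Bind an array of ids to a stored procedure parameter; the provider
        // executes the statement once per array element on the server.
        string connectionString = "...";   // placeholder - supply your own
        int[] ids = { 123, 123, 123 };

        using (OracleConnection conn = new OracleConnection(connectionString))
        using (OracleCommand cmd = conn.CreateCommand())
        {
            conn.Open();
            cmd.CommandText = "my_proc";
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.ArrayBindCount = ids.Length;

            OracleParameter idParam = new OracleParameter("p_id", OracleDbType.Int32);
            idParam.Value = ids;
            cmd.Parameters.Add(idParam);

            cmd.ExecuteNonQuery();
        }

    Whether those per-element executions overlap or run strictly one after another is exactly the behavioural difference the question is about; the check-then-insert in the procedure is only safe if they are serialized.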

    Read the article

  • Privoxy-like proxy that handles multiple parallel connections?

    - by overtherainbow
    Hello. I use Privoxy on my XP host to filter/rewrite web pages, but it's slower because all connections go through Privoxy's single port. According to this post on Stack Overflow, browsers support more than one simultaneous connection by default, which would explain why going through Privoxy is slower. Does someone know of a similar application that can handle more than one connection at a time? Thank you.

    Read the article

  • Distributed, Parallel, Fault-tolerant File System

    - by Eddified
    There are so many choices that it's hard to know where to start. My requirements are these:

    - Runs on Linux
    - Most of the files will be between 5-9 MB in size. There will also be a significant number of small-ish jpgs (100px x 100px).
    - All of the files need to be available over http.
    - Redundancy -- ideally it would provide space efficiency similar to RAID 5's 75% (in RAID 5 this would be calculated thus: with 4 identical disks, 25% of the space is used for parity = 75% efficient)
    - Must support several petabytes of data
    - Scalable
    - Runs on commodity hardware

    In addition, I look for these qualities, though they are not "requirements":

    - Stable, mature file system
    - Lots of momentum and support
    - etc.

    I would like some input as to which file system works best for the given requirements. Some people at my organization are leaning towards MogileFS, but I'm not convinced of the stability and momentum of that project. GlusterFS and Lustre, based on my limited research, appear to be better supported... Thoughts?

    Read the article

  • IIS SMTP server (Installed on local server) in parallel to Google Apps

    - by shaharru
    I am currently using the free version of Google Apps for hosting my email. It works great for my official mail; my email on Google is [email protected]. In addition, I'm sending out high volume mail (registrations, forgotten passwords, newsletters etc.) from the website (www.mydomain.com) using IIS SMTP installed on my Windows machine. These emails are sent from [email protected]. My problem is that when I send email from the website using IIS SMTP to a mail address [email protected], I don't receive the email in Google Apps. (I only receive these emails if I install a POP service on the server with the [email protected] mailbox.) It seems that the IIS SMTP is ignoring the domain MX records and just delivers these emails to my local server. Here are my DNS records for mydomain.com:

        mydomain.com  A    82.80.200.20  3600s
        mydomain.com  TXT  v=spf1 ip4: 82.80.200.20 a mx ptr include:aspmx.googlemail.com ~all
        mydomain.com  MX   preference: 10  exchange: aspmx2.googlemail.com  3600s
        mydomain.com  MX   preference: 10  exchange: aspmx3.googlemail.com  3600s
        mydomain.com  MX   preference: 10  exchange: aspmx4.googlemail.com  3600s
        mydomain.com  MX   preference: 10  exchange: aspmx5.googlemail.com  3600s
        mydomain.com  MX   preference: 1   exchange: aspmx.l.google.com  3600s
        mydomain.com  MX   preference: 5   exchange: alt1.aspmx.l.google.com  3600s
        mydomain.com  MX   preference: 5   exchange: alt2.aspmx.l.google.com  3600s

    Please help! Thanks.

    Read the article

  • What kind of parallel cable is this?

    - by rodey
    I have an HP LaserJet 4600 and need to order a replacement cable for it. Here is a picture of the port on the printer and the cable currently in use. I've never seen one of this style before. Can anyone tell me what type of port this is, or provide a link to a replacement cable? Thanks!

    Read the article

  • proxy software that supports parallel transfer

    - by est
    Hi guys, I need to set up a really fast proxy server on a remote server. Here's the scenario:

    1. The server prefetches 3KB of data, mostly HTTP resources.
    2. Instead of a traditional HTTP or SOCKS proxy, the server sends the 3KB of data to the client over a multithreaded transfer with 3 connections, sending 1KB of data per thread on each connection.
    3. The client receives 1KB x 3, combines the pieces back into the original 3KB of data, and serves it through a local HTTP proxy server.
    4. The client displays the original data in the browser via the local HTTP proxy.

    The latency is not important as long as the transfer rate is good. Does any software like this exist? It would be better if it were open source or free.

    Read the article

  • Running php and java in parallel on the same server

    - by manni
    I have got a Java server from Rackspace, and I am already running a Java application on it. Now I want to run a PHP application on the same server. What should I do? When I asked the Rackspace people, they said Apache is already installed on the server, so I can run PHP on it. I have also tried installing PHP on the server and then copied my PHP files to var/www/xxx, but when I hit the URL it gives a "page not found" error. They have given me the SSH server root username and password. Thanks in advance.

    Read the article

  • Disabling parallel network connections on workstation

    - by sumar
    Is it possible to disable parallel network connections on a workstation when the workstation is connected to the corporate LAN? I want to prevent users from bypassing Internet access policies by concurrently connecting to the LAN and 3G/a hotspot. We have many developers and they have local administrator rights on their workstations. Developers should still be able to connect to virtual networks (VMware/VirtualPC/Hyper-V/VirtualBox). Other users should be able to use only one network connection concurrently.

    Read the article

  • How to set up apache with parallel plesk?

    - by Ran Gualberto
    I'm working with Windows Server 2008 (a GoDaddy Windows dedicated server). My problem is that .htaccess is not working on the server, and I just figured out that Apache is not installed. I would like to know how to run Apache with Plesk (with the existing PHP setup), and how to run Apache with the current site directory C:\inetpub\vhosts. My goal is to make .htaccess work on the server with Plesk and with the directory C:\inetpub\vhosts.

    Read the article

  • How is a functional programming-based javascript app laid out?

    - by user321521
    I've been working with node.js for a while on a chat app (I know, very original, but I figured it'd be a good learning project). Underscore.js provides a lot of functional programming concepts which look interesting, so I'd like to understand how a functional program in JavaScript would be set up. From my understanding of functional programming (which may be wrong), the whole idea is to avoid side effects, which basically means a function updating a variable outside of itself, so something like:

        var external;
        function foo() {
            external = 'bar';
        }
        foo();

    would be creating a side effect, correct? So as a general rule, you want to avoid disturbing variables in the global scope. OK, so how does that work when you're dealing with objects and whatnot? For example, a lot of the time I'll have a constructor and an init method that initializes the object, like so:

        var Foo = function(initVars) {
            this.init(initVars);
        }

        Foo.prototype.init = function(initVars) {
            this.bar1 = initVars['bar1'];
            this.bar2 = initVars['bar2'];
            //....
        }

        var myFoo = new Foo({'bar1': '1', 'bar2': '2'});

    So my init method is intentionally causing side effects, but what would be a functional way to handle the same sort of situation? Also, if anyone could point me to either Python or JavaScript source code of a program that tries to be as functional as possible, that would also be much appreciated. I feel like I'm close to "getting it", but I'm just not quite there. Mainly I'm interested in how functional programming works with the traditional OOP class concept (or does away with it for something different, if that's the case).

    Read the article

  • Best (Java) book for understanding 'under the bonnet' for programming?

    - by Ben
    What would you say is the best book to buy to understand exactly how programming works under the hood, in order to increase performance? I've coded in assembly at university, I studied computer architecture and I obviously did high level programming, but what I really don't understand is things like:

    - What is happening when I perform a cast?
    - What's the difference in performance if I declare something global as opposed to local?
    - How does the memory layout for an ArrayList compare with a Vector or LinkedList?
    - What's the overhead with pointers?
    - Are locks more efficient than using synchronized?
    - Would creating my own array using int[] be faster than using ArrayList?
    - Advantages/disadvantages of declaring a variable volatile

    I have got a copy of Java Performance Tuning, but it doesn't go down very low and it contains rather obvious things, like suggesting a HashMap instead of an ArrayList as you can map the keys to memory addresses etc. I want something a bit more Computer Science-y, linking the programming language to what happens with the assembler/hardware. The reason I'm asking is that I have an interview coming up for a job in High Frequency Trading and everything has to be as efficient as possible, yet I can't remember every single possible efficiency saving, so I'd just like to learn the fundamentals. Thanks in advance

    Read the article

  • Review - Professional Android Programming with Mono for Android and .NET/C#

    - by Wallym
    Mike Riley of Dev Pro Connections Magazine has a review of our Mono for Android book. You can read the full review on their site: "Mono for Android has been available for more than a year. The documentation for the product is adequate and has been improving over time, but until recently, finding a good book about the technology was difficult. Such a constraint has been lifted thanks to Wiley's Professional Android Programming with Mono for Android and .NET/C#. Written under the Wrox imprint by several contributors (Wallace B. McClure, Nathan Blevins, John J. Croft, Jonathan Dick, and Chris Hardy), the book is one of the most comprehensive and helpful Mono for Android titles currently on the market." Please buy 8-10 copies of our book for the ones you love; they make great romantic gifts.

    Read the article

  • Methods of learning / teaching programming

    - by Mark Avenius
    When I was in school, I had a difficult time getting into programming because of a catch-22 in the learning process: I didn't know how to write anything because I didn't know what keywords and commands meant (for example, as a student I would think, "what does this using namespace std; thing do anyway?"), and I didn't know what keywords and commands meant because I hadn't written anything. This basically led me to spending countless long nights cursing the compiler as I made minor tweaks to my assignments until they would compile (and hopefully perform whatever operation they were supposed to). Is there a teaching/learning method that anyone uses that gets around this catch-22? I am trying to make this non-argumentative, which is why I don't want to know the 'best' method, but rather which methods exist.

    Read the article

  • A better way to do concurrent programming

    - by Alex.Davies
    Programming to take advantage of multicore processors is hard. If you let multiple threads access the same memory, bad things happen. To avoid this, you use the lock keyword, but if you use that in the wrong way, your code deadlocks. It's all a nightmare. Luckily, there's a better way - Actors. They're really easy to think about. They're really safe (if you follow a couple of simple rules). And high-performance, type-safe actors are now available for .NET by using this open-source library: http://code.google.com/p/n-act/ Have a look at the site for details. I'll blog with more reasons to use actors and tips and tricks to get the best parallelism from them soon.
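
    The library above provides the full actor implementation; purely as an illustration of the idea (plain .NET primitives, not the N-Act API), an actor is essentially an object that owns a message queue and processes one message at a time, so its private state is never touched by two threads at once. A minimal sketch:

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        // A tiny counter "actor": callers post messages; only the actor's own
        // loop ever reads or writes the count, so no locks are needed.
        class CounterActor
        {
            private readonly BlockingCollection<int> mailbox = new BlockingCollection<int>();
            private int count; // private state, touched only by the actor's loop

            public CounterActor()
            {
                Task.Factory.StartNew(() =>
                {
                    foreach (int increment in mailbox.GetConsumingEnumerable())
                    {
                        count += increment;
                        Console.WriteLine("count is now {0}", count);
                    }
                }, TaskCreationOptions.LongRunning);
            }

            // Sending is asynchronous: it queues the message and returns immediately.
            public void Send(int increment)
            {
                mailbox.Add(increment);
            }
        }

    Any number of threads can call Send concurrently; because the state only changes inside the single consuming loop, there is nothing to deadlock on.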

    Read the article
