Search Results

Search found 19768 results on 791 pages for 'hardware programming'.

Page 335/791 | < Previous Page | 331 332 333 334 335 336 337 338 339 340 341 342  | Next Page >

  • Oracle Database Upcoming Event dates to know

    - by mandy.ho
    February may be a short month, but it's not short of exciting Oracle events. From information-packed "Real Performance Days" to participation in one of the biggest IT security events - look out for Oracle Database and let us know if you are there with us!

    Feb 13-18, 2011 - Las Vegas, NV: TDWI World Conference Series
    Join Oracle in highlighting Exadata X2-2 and X2-8, along with Oracle Business Intelligence, Enterprise Performance Management and Data Warehousing solutions. Oracle will be presenting a workshop, "Oracle Data Integration: Best-of-Breed Solutions for the Enterprise", Wednesday, February 16, 2011, 7 p.m. - 9 p.m., with Glen Goodrich, Director of Product Management, and Christophe Dupupet, Director of Product Management, Data Integration.
    http://events.tdwi.org/events/las-vegas-world-conference-2011/sessions/session-list.aspx

    Feb 14-17, 2011 - Barcelona, Spain: Mobile World Congress
    MWC is an event where Oracle showcases the near complete breadth and depth of value that our Communications Industry strategy and Hardware and Software Solutions can deliver. Oracle supports Communications Service Providers today and delivers platforms and flexibility primed for the future. Oracle will have a two-story Pavilion, along with an Oracle Java and Embedded Solutions Center - App Planet. Exhibition times: Monday, 14th February 09.00 - 19.00; Tuesday, 15th February 09.00 - 19.00; Wednesday, 16th February 09.00 - 19.00; Thursday, 17th February 09.00 - 16.00. Have questions? Meet with Oracle Sales representatives at the Oracle Café, open every day from 09:00 to 17:00.
    http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=109912&src=6973382&src=6973382&Act=4

    Feb 14-18, 2011 - San Francisco, CA: RSA Conference
    As the world's most complete, open, integrated business software and hardware systems provider, Oracle can uniquely safeguard your information throughout its entire lifecycle. Learn more by attending these sessions: "Cloud Computing: A Brave New World for Security and Privacy" (CLD-201), Wednesday, February 16 at 8:30 a.m.; "Databases Under Attack - Securing Heterogeneous Database Infrastructures" (DAS-301), Thursday, February 17 at 8:30 a.m.; "Seven Steps to Protecting Databases" (DAS-402), Friday, February 18 at 10:10 a.m. RSA Conference attendees will also have the opportunity to meet with Oracle Security Solution experts, see live product demos and more by visiting booth #1559. Hours: Monday, February 14, 6:00 p.m. - 8:00 p.m.; Tuesday, February 15, 11:00 a.m. - 6:00 p.m. and 4:30 p.m. - 6:00 p.m.; Wednesday, February 16, 11:00 a.m. - 6:00 p.m.; Thursday, February 17, 11:00 a.m. - 3:00 p.m.
    http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=127657&src=6967733&src=6967733&Act=12

    Feb 21-25, 2011 - Various Locations: IOUG Presents A Day of Real World Performance, with Tom Kyte, Andrew Holdsworth and Graham Wood
    These Oracle experts will debate, discuss and delineate the best practices for designing hardware architectures, deploying Oracle databases, and developing applications that deliver the fastest possible performance for your business. Topics are covered in a conversational format, with all three chiming in where appropriate. Each presenter has their own screen projector to demonstrate their individual points to the participants. Customers will have the opportunity to get their specific performance/tuning questions answered and learn how to balance all the different environmental requirements for their applications to improve performance. Register today for the following dates and locations:
    • February 21 in San Diego, CA
    • February 22 in Los Angeles, CA
    • February 23 in Seattle, WA
    • February 25 in Phoenix, AZ
    http://www.ioug.org/tabid/194/Default.aspx

    Feb 8-24, 2011 - Various Locations: Oracle Enterprise Cloud Summit
    This series of full-day events with cloud experts, sharing real-world best practices, reference architectures and more, continues during the month of February. Attend the Oracle Enterprise Cloud Summit to learn how to:
    • Build a state-of-the-art cloud architecture
    • Leverage your existing IT investments
    • Optimize your IT management processes
    Whether you are considering a move to cloud computing or have already adopted a cloud model, this event offers you the insights you need to take full advantage of cloud computing. Check below to see if the event is coming to a city near you.
    http://www.oracle.com/us/corporate/events/cloud-events-214342.html

    Read the article

  • Is inconsistent formatting a sign of a sloppy programmer?

    - by dreza
    I understand that everyone has their own style of programming and that you should be able to read other people's styles and accept them for what they are. However, would one be considered a sloppy programmer if one's style of coding was inconsistent against whatever standard one was working to? Some examples of inconsistencies might be:
    • sometimes naming private variables with _ and sometimes not
    • varying indentation within code blocks
    • not lining braces up, i.e. in the same column, when using the new-line brace style
    • inconsistent spacing around operators, i.e. p=p+1 or p+=1 some of the time vs. p =p+1 or p = p + 1 at other times
    Is this even something that, as a programmer, I should be concerned with addressing? Or is it such minor nitpicking that at the end of the day I should just not worry about it, and worry instead about what the end user sees and whether the code works, rather than how it looks while working? Is it sloppy programming or just over-obsessive nitpicking?
    EDIT: After some excellent comments I realized I may have left some information out of my question. This question came about after reviewing a colleague's code check-in, noticing some of these things, and realizing that I've seen these kinds of inconsistencies in previous check-ins. It then got me thinking about my own code and whether I do the same things, and I noticed that I typically don't. I'm not suggesting in this question that his technique is either bad or good, or that his way of doing things is right or wrong.
    EDIT: To answer some queries with some more good feedback: the specific review occurred in Visual Studio 2010, programming in C#, so I don't think the editor would cause any issues. In fact it should only help, I would hope. Sorry if I left that piece of info out and it affects any current answers. I was trying to be a bit more generic in understanding whether this would be considered sloppy. And to add an even more specific example of a code piece I saw while reading the check-in:

        foreach(var block in Blocks)
        {
            // .. some other code in here
            foreach(var movement in movements)
            {
                movement.Moved.Zero();
                } // the un-formatted brace
        }

    Such a minor thing, I know, but many small things add up (???), and I did have to double-glance at the code at the time to see where everything lined up, I guess. Please note this code was formatted appropriately before this check-in.
    EDIT: After reading some great answers and varying thoughts, the summary I've taken from this was:
    • It's not necessarily a sign of a sloppy programmer; however, as programmers we have a duty (to ourselves and other programmers) to make the code as readable as possible to assist further ongoing development.
    • It can, however, hint at inadequacies, which is something that is only possible to review on a case-by-case (person-by-person) basis.
    • There are many reasons why this might occur. They should be taken in context and worked through with the person/people involved, if reasonable. We have a duty to try and help all programmers become better programmers!
    • In the good old days, when development was done using good old Notepad (or another simple text-editing tool), this occurred much more frequently. However, we have the assistance of modern IDEs now, so although we shouldn't necessarily become OTT about this, it should still probably be addressed to some degree.
    • We as programmers vary in our standards, styles and approaches to solutions. However, it seems that in general we all take PRIDE in our work, and as a trait it is something that can set programmers apart. Making something to the best of our abilities, both internal (code) and external (end-user result), goes a long way to giving us that big fat pat on the back that we may not go looking for, but which swells our heart with pride.
    And finally, to quote CrazyEddie from his post below: don't sweat the small stuff.

    Read the article

  • Disneyland Inside Out on iPhone and Android

    - by Ryan Cain
    It's hard to believe October was the last time I was over here on my blog. Ironically, after getting the developer phone from Microsoft I have been knee-deep in iPhone programming, and for the past few weeks Android programming again. This time I've spent all my non-working hours programming a fun project for my "other" website, Disneyland Inside Out. Disneyland Inside Out, a vacation planning site for Disneyland in California, has been around in various forms since June 1996. It has always been a place for me to explore new technologies and learn about some of the new trends on the web. I recently migrated the site over to DotNetNuke and have been building out custom modules for DNN. I've also been hacking things together w/ the URLRewrite module in IIS 7.5 to provide strong SEO-optimized URLs. I can't say all that has really stuck within the DNN model of doing things, but it has worked pretty well. As part of my learning process, I spent most of the Fall bringing Disneyland Inside Out to the iPhone. I will post more details on my development experiences later. But this project gave me a really great opportunity to get a good feel for Objective-C development. After 3 months I actually feel somewhat competent in the language and iPhone SDK, instead of just floundering around getting things to work. The project also gave me a chance to play with some new frameworks on the iPhone and really dig into the Facebook SDK. I also dug into some of the Gowalla REST APIs as well. We've been live with the app in iTunes for just about 10 days now, and have been sitting in the top 200 free travel apps for the past few days. You can get more info and the direct iTunes download link on our site: Disneyland Inside Out for iPhone. Since launching the iPhone version I have gotten back into Android development, porting the Disneyland Inside Out app over to Android. As I said in my first review of iPhone vs. Android, coming from a managed-code background, Android is much easier to get going with. In just about 3 weeks total I will have about 85-90% of the functionality up and running in the Android app; that took probably 1.5-2x that time for iPhone. That isn't a totally fair comparison, as I am much more comfortable w/ Xcode and Objective-C today and can get some of the basic stuff done much faster than I could in the fall. Though I'd say some of the hardest code to debug is still the null-pointer issues on objects that were dealloc'd too early in Objective-C. This isn't too bad with NSZombies enabled for synchronous code, but when you have a lot of async, which my app does, it can be hairy at times to track exactly what was causing the issue. I will post more details later, as I am trying to wrap up a beta of the Android app today. But in the meantime, if you have an iPhone, iPod Touch or iPad, head on over to the site and take a look at my app.

    Read the article

  • Communication between state machines with hidden transitions

    - by slartibartfast
    The question emerged for me in embedded programming, but I think it can be applied to quite a number of general networking situations, e.g. when a communication partner fails. Assume we have an application logic (a program) running on a computer, and a gadget connected to that computer via e.g. a serial interface like RS232. The gadget has a red/green/blue LED and a button which disables the LED. The LED's color can be driven by software commands over the serial interface, and the state (red/green/blue/off) is read back and causes a reaction in the application logic. Asynchronous behaviour of the application logic with regard to the LED color, down to a certain delay (depending on the execution cycle of the application), is tolerated.
    What we essentially have is a resource (the LED) which cannot be reserved and handled atomically by software, because the (organic) user can at any time press the button to interfere with or break the software's attempt to switch the LED color. Stripping this example of its physical outfit, I dare say that we have two communicating state machines, A (application logic) and G (gadget), where G executes state changes unbeknownst to A (and also the other way round, but this is not significant in our example) and only A can be modified at a reasonable price. A needs to see the reaction and state of G in one piece of information, which may be (slightly) outdated but not inconsistent with respect to the short time window when this information was generated on the side of G.
    What I am looking for is a concise method to handle such a situation in embedded software (i.e. no layer/framework like CORBA etc. available): a programming technique which is able to map the complete behaviour of both participants onto classical interfaces of a classical programming language (C in this case). To complicate matters (or rather, to generalize), a simple high-frequency communication cycle from A to G and back (IOW: A rapidly polling G) is out of focus because of technical restrictions (delay of serial com, A not always active, etc.).
    What I currently see as a general solution is:
    • the application logic A as one thread of execution
    • an adapter object (proxy) PG (presenting G inside the computer), together with the serial driver, as another thread
    • a communication object between the two (A and PG) which is transactionally safe to exchange
    The two execution contexts (threads) on the computer may be multi-core, or just interrupt-driven, or tasks in an RTOS. The com object contains the following data:
    • suspected state (written by A): effectively a member of the power set of states in G (in our case: red, green, blue, off, red_or_green, red_or_blue, red_or_off... etc.)
    • command data (written by A): test_if_off, switch_to_red, switch_to_green, switch_to_blue
    • operation status (written by PG): operation_pending, success, wrong_state, link_broken
    • new state (written by PG): red, green, blue, off
    The idea of the com object is that A writes whichever (set of) states it thinks G is in, together with a command (example: suspected state = "red_or_green", command = "switch_to_blue"). Notice that the commands issued by A will not work if the user has switched off the LED, and A needs to know this. PG will pick up such a com object, try to send the command to G, receive its answer (or a timeout), and set the operation status and new state accordingly. A will take the object back once it is no longer at operation_pending and can react to the outcome.
    The com object could be separated, of course (into two objects, one for each direction), but I think it is convenient in nearly all instances to have the command close to the result. I would like to have major flaws pointed out, or to hear an entirely different view on such a situation.
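    A minimal sketch of the com-object exchange (written in Python for brevity rather than the C of the target system; all names are invented for illustration): transactional safety comes from handing whole, immutable records through a lock-guarded one-slot mailbox, so A and PG can never observe a half-written object.

        # Illustrative sketch only: a Python stand-in for the C design above.
        import threading
        from dataclasses import dataclass, replace
        from typing import FrozenSet, Optional

        @dataclass(frozen=True)
        class ComObject:
            suspected_state: FrozenSet[str]       # e.g. frozenset({"red", "green"})
            command: str                          # e.g. "switch_to_blue"
            status: str = "operation_pending"     # PG: success / wrong_state / link_broken
            new_state: Optional[str] = None       # PG: state read back from G

        class Mailbox:
            """One-slot, transactionally safe exchange between A and PG."""
            def __init__(self) -> None:
                self._lock = threading.Lock()
                self._slot: Optional[ComObject] = None

            def post(self, obj: ComObject) -> bool:           # called by A
                with self._lock:
                    if self._slot is not None:
                        return False                          # previous exchange still pending
                    self._slot = obj
                    return True

            def pending(self) -> Optional[ComObject]:         # called by PG
                with self._lock:
                    s = self._slot
                    return s if s and s.status == "operation_pending" else None

            def finish(self, status: str, new_state: Optional[str]) -> None:  # called by PG
                with self._lock:
                    if self._slot is not None:                # swap in a completed copy atomically
                        self._slot = replace(self._slot, status=status, new_state=new_state)

            def take_result(self) -> Optional[ComObject]:     # called by A
                with self._lock:
                    s = self._slot
                    if s and s.status != "operation_pending":
                        self._slot = None
                        return s
                    return None

    Because the record is replaced whole under the lock, A sees either the old com object or the finished one, never a mixture, which is the consistency property described above.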

    Read the article

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    “The packages are running slower in Prod than they are in Dev.” My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running on average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date, so I had the time, and having recently been binge-watching CSI: New York, I also had the inclination. An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid: that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations, monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion. Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks were caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and the wait statistics DMVs, I had a lead in the shape of a query that joined three tables containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test, I saw the culprit: an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers, who quickly resolved the data type mismatch and found and fixed “several” others as well. After the schema changes, the same query with the same databases ran in under 1 second on all systems and the logical reads dropped to fewer than 300. The analysis also revealed that on Dev, the ETL task was pulling data across a LAN, whereas Prod and Test were connected across a slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance. As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving in to the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one.
It can seem a daunting prospect to return to the fundamentals of the code so close to roll out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.

    Read the article

  • Give a session on C++ AMP – here is how

    - by Daniel Moth
    Ever since presenting on C++ AMP at the AMD Fusion conference in June, then the Gamefest conference in August, and the BUILD conference in September, I've had numerous requests about my material from folks that want to re-deliver the same session. The C++ AMP session I put together has evolved over the 3 presentations to its final form that I used at BUILD, so that is the one I recommend you base yours on. Please get the slides and the recording from channel9 (I'll refer to slide numbers below). This is how I've been presenting the C++ AMP session:
    Context
    • (slide 3, 04:18-08:18) Start with a demo, on my dual-GPU machine. I've been using the N-Body sample (for VS 11 Developer Preview).
    • (slide 4) Use an NVIDIA slide that has additional examples of performance improvements that customers enjoy with heterogeneous computing.
    • (slide 5) Talk a bit about the differences today between CPU and GPU hardware, leading to the fact that these will continue to co-exist and that GPUs are great for data parallel algorithms, but not much else today. One is a jack of all trades and the other is a number cruncher.
    • (slide 6) Use the APU example from AMD, as one indication that the hardware space is still in motion, emphasizing that the C++ AMP solution is a data parallel API, not a GPU API. It has a future-proof design for hardware we have yet to see.
    • (slide 7) Provide more meta-data, as blogged about when I first introduced C++ AMP.
    Code
    • (slides 9-11) Introduce C++ AMP coding with a simplistic array-addition algorithm – the slides speak for themselves.
    • (slides 12-13) index<N>, extent<N>, and grid<N>.
    • (slides 14-16) array<T,N>, array_view<T,N> and a comparison between them.
    • (slide 17) parallel_for_each.
    • (slides 18, 21) restrict.
    • (slides 19-20) actual restrictions of restrict(direct3d) – the slides speak for themselves.
    • (slide 22) Bring it all together with a matrix multiplication example.
    • (slides 23-24) accelerator, and accelerator_view.
    • (slides 26-29) Introduce tiling, incl. tiled matrix multiplication [tiling probably deserves a whole session instead of 6 minutes!].
    IDE
    • (slides 34, 37) Briefly touch on the Concurrency Visualizer. It supports GPU profiling, but enhancements specific to C++ AMP we hope will come in the Beta timeframe, which is when I'll be spending more time talking about it.
    • (slides 35-36, 51:54-59:16) Demonstrate the GPU debugging experience in VS 11.
    Summary
    • (slide 39) Re-iterate some of the points of slide 7, and add the point that the C++ AMP spec will be open for other compiler vendors to implement, even on other platforms (in fact, Microsoft is actively working on that).
    • (slide 40) Links to content – see the slide – including where all your questions should go: http://social.msdn.microsoft.com/Forums/en/parallelcppnative/threads.
    "But I don't have time for a full blown session, I only need 2 (or just 1, or 3) C++ AMP slides to use in my session on related topic X." If all you want is a small number of slides, you can take some from the session above and customize them. But because I am so nice, I have created some slides for you, including talking points in the notes section. Download them here. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Move on and look elsewhere, or confront the boss?

    - by Meister
    Background: I have my Associates in Applied Science (Comp/Info Tech) with a strong focus in programming, and I'm taking University classes to get my Bachelors. I was recently hired at a local company to be a Software Engineer I on a team of about 8, and I've been told they're looking to hire more. This is my first job, and I was offered what I feel to be an extremely generous starting salary ($30/hr essentially + benefits and yearly bonus). What got me hired was my passion for programming and a strong set of personal projects.
    Problem: I had no prior experience when I interviewed, so I didn't know exactly what to ask them about the company when I was hired. I've spotted a number of warning signs and annoyances since then, such as:
    • Four developers when I started, with everyone talking about "Ben" or "Ryan" leaving. One engineer was hired thirty days before me, one two weeks after me. Most of the department has been hiring a large number of people since I started.
    • Extremely limited internet access. I understand the idea from an IT point of view, but not only is Facebook blocked, so are YouTube, Twitter, and Pandora. I've also figured out that they block all access to non-DNS websites (http://xxx.xxx.xxx.xxx/) and, strangely enough, Miranda-IM.
    • Low cubicles. Which is fine because I like my immediate coworkers, but they put the developers in a huge open room with customer service, customer training, and the QA department. Noise, noise, noise, and people stop to chitchat all day long. Headphones only go so far.
    • Several emails have been sent out by my boss since I started telling us programmers not to talk about non-work-related things like video games at our cubicles, despite us only spending maybe five minutes every few hours doing so. Further digging tells me that this is because someone keeps complaining that the programmers are "slacking off".
    • People are looking over my shoulder all day. I was in the Freenode webchat to get help with a programming issue, and within minutes I had an email from my boss (to all the developers) telling us that we should NOT be connected to any outside chat servers at work.
    • A version control system from 2005 that we must access with IE, keeping the Java 1.4 JRE installed to be able to use it. I accidentally updated to Java 6 one day and spent the next two days fighting with my PC to undo this "problem".
    • No source control, no comments on anything, no standards, no code review, no unit testing, no common sense. I literally found a problem in how they handle string resource translations that stems from the simple fact that they don't trim excess white space, leading to developers writing getResource("Date: ") instead of getResource("Date") + ": ", and I was told to just add the excess white space back to the database instead of dealing with the issue directly.
    Some of these things I'd like to try to understand, but I like having IRC open to talk in a few different rooms during the day and keep in touch with friends/family over IM. They don't break my concentration (not NEARLY as much as the lady from QA stopping by to talk about her son), but because people are looking over my shoulder all day as they walk by, they complain when they see something that's not "programmer-looking work". I've been told by my boss and QA that I do good, fast work. I should be judged on my work output and quality, not what I have up on my screen for the five seconds you're walking by.
    So, my question is, even though I'm just barely at my 90 days: how do you decide to move on from a job and look elsewhere, and when should you start working with your boss to resolve these issues? Is it even possible to get the boss to work with me on many of these things? This is the only place I heard back from, even though I sent out several resumes a day for several months, and this place does pay well for putting up with its many flaws, but I'm just starting to get so miserable working here already. Should I just put up with it?

    Read the article

  • Intermittent internet connectivity

    - by Rob Oplawar
    UPDATED: I recently built a new computer and set it up to dual-boot Windows 7 and Ubuntu 11.10. In Windows, using the same hardware, my LAN connectivity is solid. In Ubuntu, however, my network interface periodically dies and resets itself; I'll have a solid connection for 30 seconds, and then it will go out for 30 seconds. When I tail the log:

        tail -f /var/log/kern.log

    I see "eth0 link up" messages appear periodically, corresponding with the return of connectivity. I posted the original question months ago, and misinterpreted what was going on. With a working Internet connection in Windows, I ignored the problem for some months. See my answer below for the solution (drivers).

    ORIGINAL POST: In Ubuntu, although I maintain a solid connection to my LAN (pinging the router IP address consistently returns a good result), my internet connectivity drops in and out. When I continuously ping 74.125.227.18 (a google.com server), I get responses for a while, then I start getting "Destination Host Unreachable" for a while, then I get responses again. This happens consistently, dropping the connection for about 30 seconds out of every minute or two. Whether I configure my network via the network manager or via /etc/network/interfaces seems to make no difference. I configure with the following settings:

        address 192.168.1.101
        network 192.168.1.0
        gateway 192.168.1.99 (my router's IP address)
        netmask 255.255.255.0 (confirmed as the right netmask for the router)
        broadcast 192.168.1.255 (also confirmed with the router)

    ifconfig confirms that these settings are working:

        eth0      Link encap:Ethernet  HWaddr 50:e5:49:40:da:a6
                  inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::52e5:49ff:fe40:daa6/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:11557 errors:0 dropped:11557 overruns:0 frame:11557
                  TX packets:13117 errors:0 dropped:211 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:9551488 (9.5 MB)  TX bytes:1930952 (1.9 MB)
                  Interrupt:41 Base address:0xa000

    I get the same issue when I use automatic DHCP address settings, although I did confirm that there is no other machine on the network with the static IP address I want to use. As I said, the connection to the local network stays solid - I never have any trouble pinging 192.168.1.* - it's internet addresses that I intermittently cannot reach. It's not a DNS issue, because pinging known IP addresses directly shows the same behavior. Also, I don't think it's a hardware issue, as I never have any internet connectivity problems on the same machine in Windows. The network hardware is built into the motherboard: Gigabyte Z68XP-UD3P. I managed to bring the OS fully up to date, according to the update manager, but it didn't fix the issue, and with my limited understanding of network architecture I'm at my wit's end. The only clue I can see is that ifconfig is reporting a lot of dropped packets, but I'm not sure what to do about it.

    UPDATE: It seems my problem is a little more generic than I described; now when I try pinging my router and Google simultaneously, they both go unreachable at the same time. Running ifdown eth0 and then ifup eth0 brings it back temporarily; if I just wait, it comes back after a couple of minutes. I'll broaden my search through intermittent network connectivity problems.

    Read the article

  • What do you need to know to be a world-class master software developer? [closed]

    - by glitch
    I wanted to bring up this question to you folks and see what you think, and hopefully have you advise me on the matter: let's say you had 30 years of learning and practicing software development in front of you. How would you dedicate your time so that you'd get the biggest bang for your buck? What would you both learn and work on to be a world-class software developer that would make a large impact on the industry and leave behind a legacy? I think that most great developers end up being both broad generalists and specialists in one or two areas of interest. I'm thinking Bill Joy, John Carmack, Linus Torvalds, K&R and so on. I'm thinking that perhaps one approach would be to break things down by categories and establish a base minimum of "software development" greatness:
    • Operating Systems: completely internalize the core concepts of OSes, perhaps gain a lot of familiarity with an OSS one such as Linux. Anything from memory management to device drivers has to be complete second nature.
    • Programming Languages: this is one of those topics that IMHO has to be fully grokked even if it might take many years. I don't think there's quite anything like going through the process of developing your own compiler, understanding language design trade-offs and so on. Programming Language Pragmatics is one of my favorite books, actually; I think you want to have that internalized back to back, and that's just the start. You could go significantly deeper, but I think it's time well spent, because it's such a crucial building block. As a subset of that, you want to really understand the different programming paradigms out there: imperative, declarative, logic, functional and so on. Anything from assembly to LISP should be at the very least comfortable to write in.
    • Contexts: I believe one should have experience working in different contexts to truly be able to appreciate the trade-offs that are being made every day: embedded, web development, mobile development, UX development, distributed, cloud computing and so on.
    • Hardware: I'm somewhat conflicted about this one. I think you want some understanding of computer architecture at a low level, but I feel like the concepts that will truly matter will be slightly higher level, such as CPU caching / memory hierarchy, ILP, and so on.
    • Networking: we live in a completely network-dependent era. Having a good understanding of the OSI model, knowing how the Web works, how HTTP works and so on is pretty much a prerequisite these days.
    • Distributed systems: once again, everything's distributed these days; it's getting progressively harder to ignore this reality. Slightly related, perhaps add a solid understanding of how browsers work, since the world seems to be moving so much toward interfacing with everything through a browser.
    • Tools: have a really broad toolset that you're familiar with, one that continuously expands throughout the years.
    • Communication: I think being a great writer, an effective communicator and a phenomenal team player is pretty much a prerequisite for a lot of a software developer's greatness. It can't be overstated.
    • Software engineering: understanding the process of building software, team dynamics, the requirements of the business side, all the pitfalls. You want to deeply understand where what you're writing fits from the market perspective. The better you understand all of this, the more of your work will actually see the daylight.
    This is really just a starting list; I'm confident that there's a ton of other material that you need to master.
As I mentioned, you most likely end up specializing in a bunch of these areas as you go along, but I was trying to come up with a baseline. Any thoughts, suggestions and words of wisdom from the grizzled veterans out there who would like to share their thoughts and experiences with this? I'd really love to know what you think!

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, multi-core mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the controversial conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied. Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At this point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors, and most applications really needed to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program. At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, then there are (2n)!/(n!·n!) possible "interleavings" of those instructions (already more than 184,000 for n = 10). Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads. So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs but still has performance issues, if the threads need to operate on the same large piece of data. There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why. Cheers, Tony.
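    As a toy illustration of the actor option (my own sketch, not taken from any particular framework): each actor owns its state outright, and other threads interact with it only by posting messages to its mailbox, so no interleaving of writes to that state is possible.

        # Minimal actor sketch in Python: state is private to one thread,
        # and all communication happens through a message queue.
        import threading
        import queue

        class CounterActor:
            def __init__(self):
                self.mailbox = queue.Queue()
                self._count = 0                      # private state: only the actor thread touches it
                threading.Thread(target=self._run, daemon=True).start()

            def _run(self):
                while True:
                    msg, reply = self.mailbox.get()  # messages are handled one at a time, in order
                    if msg == "increment":
                        self._count += 1
                    elif msg == "get":
                        reply.put(self._count)       # answer by message, not by shared memory

            def ask_count(self):
                reply = queue.Queue(maxsize=1)
                self.mailbox.put(("get", reply))
                return reply.get()

        counter = CounterActor()
        for _ in range(1000):
            counter.mailbox.put(("increment", None))
        print(counter.ask_count())                   # 1000: the FIFO mailbox serializes everything

    The cost Tony mentions is visible even here: if many actors needed that one counter's data at high rates, the mailbox would become the bottleneck.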

    Read the article

  • Android: trouble updating to Android SDK Tools, revision 7.

    - by Arhimed
    Currently I have Android SDK 2.1 (+ tools revision 4). I'd like to upgrade to Android SDK 2.2. When I try to do it, I'm informed I need to upgrade Android SDK Tools to revision 7 first. So I agree, the process starts, and then I get an error:

        -= warning! =-
        A folder failed to be renamed or moved. On Windows this typically means that a program
        is using that folder (for example Windows Explorer or your anti-virus software.)
        Please momentarily deactivate your anti-virus software.
        Please also close any running programs that may be accessing the directory
        'D:\Install\Programming\android-sdk-working-dir\android-sdk_r04-windows\android-sdk-windows\tools'.
        When ready, press YES to try again.

        Downloading Android SDK Tools, revision 7
        Installing Android SDK Tools, revision 7
        Failed to rename directory D:\Install\Programming\android-sdk-working-dir\android-sdk_r04-windows\android-sdk-windows\tools to D:\Install\Programming\android-sdk-working-dir\android-sdk_r04-windows\android-sdk-windows\temp\ToolPackage.old01.

    I am aware of the http/https and antivirus issues, so I deactivated my AV. I also closed any application that might hold a handle to the folder. Eclipse is also closed (I start the manager via the command line). However, I still get the same error. It looks like the only app that can hold a handle to the folder is the manager itself, because its starting directory is the one the error complains about ('\tools'). I am on Win XP Pro + SP3. Does anyone have an idea?

    Read the article

  • Newbie, deciding Python or Erlang

    - by Joe
    Hi guys, I'm an administrator (Unix, Linux and some Windows apps such as Exchange) by experience and have never worked with any programming language besides C#, plus scripting in Bash and lately PowerShell. I'm starting out as a service provider and using multiple open-source network/server monitoring tools (Nagios, OpenNMS etc.) to do the monitoring. At this moment, being inspired by a design that I came up with to do more than what is available in open source at this time, I would like to start programming and test some of these ideas. The requirement is server software that captures a stream of data and stores it in a database (CouchDB or MongoDB preferably), while the client side (an agent installed on a server) would be sending this stream of data on a schedule of every 10 minutes or so. For these two core ideas, I have been reading about Python and Erlang, besides Ruby. I plan to use either Amazon or Rackspace to run the server platform. This gives me the scalability needed when we have more customers with many servers. For that reason alone, I thought Erlang was a better fit (I could be totally wrong, new to this game), and I understand that Erlang has limited support in some ways compared to Ruby or Python. But I'm also totally new to the programming realm of things, and any advice would be greatly appreciated. Jo
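    For a sense of scale, the agent half of the design described above needs very little machinery. Here is a minimal sketch in Python (every endpoint, payload field and metric in it is an invented placeholder, not a real API): collect a reading, POST it to the collection server, sleep ten minutes, repeat.

        # Hypothetical monitoring agent loop (Python, stdlib only).
        # SERVER and the payload shape are placeholders for illustration.
        import json
        import time
        import urllib.request

        SERVER = "http://monitor.example.com/ingest"   # invented endpoint
        INTERVAL_SECONDS = 600                          # "every 10 minutes or so"

        def collect_sample() -> dict:
            # Stand-in for real probes (disk, load, service checks, ...)
            return {"host": "web01", "ts": time.time(), "load1": 0.42}

        while True:
            body = json.dumps(collect_sample()).encode("utf-8")
            req = urllib.request.Request(SERVER, data=body,
                                         headers={"Content-Type": "application/json"})
            try:
                urllib.request.urlopen(req, timeout=30).read()
            except OSError:
                pass   # server unreachable: drop or spool the sample, retry next cycle
            time.sleep(INTERVAL_SECONDS)

    The interesting engineering (and where Erlang's supervision model or Python's ecosystem would really be compared) is on the server side, where thousands of these streams land concurrently.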

    Read the article

  • Python PyQt Timer Firmata

    - by George Cullins
    Hello. I am pretty new to Python and, working with firmata, I am trying to play around with an Arduino. Here is what I want to happen:
    • Set the Arduino up with an LED as a digital out.
    • Set a potentiometer to analog 0.
    • Set a PyQt timer up to update the potentiometer position in the application.
    • Set a threshold in PyQt to turn the LED on (analog in has 1024 steps of resolution, so say 800 as the threshold).
    I am using this firmata library: Link. Here is the code that I am having trouble with:

        import sys
        from PyQt4 import QtCore, QtGui
        from firmata import *

        # Arduino setup
        self.a = Arduino('COM3')
        self.a.pin_mode(13, firmata.OUTPUT)

        # Create timer
        self.appTimer = QtCore.QTimer(self)
        self.appTimer.start(100)
        self.appTimer.event(self.updateAppTimer())

        def updateAppTimer(self):
            self.analogPosition = self.a.analog_read(self, 0)
            self.ui.lblPositionValue.setNum()

    I am getting the error message:

        Traceback (most recent call last):
          File "D:\Programming\Eclipse\IO Demo\src\control.py", line 138, in <module>
            myapp = MainWindow()
          File "D:\Programming\Eclipse\IO Demo\src\control.py", line 56, in __init__
            self.appTimer.event(self.updateAppTimer())
          File "D:\Programming\Eclipse\IO Demo\src\control.py", line 60, in updateAppTimer
            self.analogPosition = self.a.analog_read(self, 0)
        TypeError: analog_read() takes exactly 2 arguments (3 given)

    If I take 'self' out, I get the same error message but saying that only 1 argument is given. What is Python doing implicitly that I am not aware of?
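    For what it's worth, a hedged guess at a fix, assuming the firmata library above behaves like an ordinary Python class: analog_read is a bound method, so Python passes the instance as self implicitly (which is why 2 arguments are expected and 3 arrive), and the timer should have its timeout signal connected to the slot rather than event() called once:

        # Sketch of a possible fix (assumes the firmata API shown in the question).
        self.appTimer = QtCore.QTimer(self)
        self.appTimer.timeout.connect(self.updateAppTimer)   # call the slot every 100 ms
        self.appTimer.start(100)

        def updateAppTimer(self):
            # 'self' is supplied automatically on a bound method call:
            self.analogPosition = self.a.analog_read(0)      # not (self, 0)
            self.ui.lblPositionValue.setNum(self.analogPosition)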

    Read the article

  • Status of VB6/ Best Desktop Application Language with Native Compilation

    - by Sandeep Jindal
    I was looking for a desktop application programming language with one big constraint: "I need to output a native executable". I explored multiple options:
    • Java is not a very good option for desktop programming, but you can still use it. But Java-to-exe is a problem; only GCJ and Excelsior JET provide this.
    • The .NET platform does not support native compilation. Only a very few expensive tools are available which can do the job.
    • Python is not an option for native compilation. Right?
    • VB6 is the option I am left with.
    From the above list, if I am correct, VB6 is the only and probably the best option I have. But VB6 itself has issues:
    • It is no longer under development.
    • There are questions about support for the VB6 IDE on Vista.
    Thus my questions are:
    • From the list of programming language options, do you want to add any more?
    • If VB6 is a good/best option, looking at its development status, would you suggest using VB6 in this era?

    Read the article

  • Android USB driver for Xperia X10a

    - by mlohbihler
    I've been trying for a couple of hours now, and have hit all of the sites Google found, but I cannot get the Android USB driver on my XP box to talk to my new Xperia X10a. I found the lines that some kind soul posted, and that have been syndicated repeatedly, but they don't work for me. The idea is to add them to the Google.NTx86 and Google.NTamd64 sections of the android_winusb.inf file:

        ;Xperia X10
        %SingleAdbInterface% = USB_Install, USB\VID_0FCE&PID_E12E
        %CompositeAdbInterface% = USB_Install, USB\VID_0FCE&PID_E12E&MI_01

    I tried a number of variations of the first line, including "Sony Ericsson X10a", which is what XP shows me in both the "found new hardware" wizard and the device manager, but no luck. The result is always the same. Here are my steps:
    • Plug in the phone via USB; the "found new hardware" wizard appears.
    • Choose "No, not this time" and "next".
    • Choose "Install from a list or specific location..." and "next".
    • Choose "Search for best driver...", check "Include this location...", and browse for the "usb_driver" folder in the Android SDK installation. Click "next".
    • It does a quick search and then says "Cannot install this hardware", "... because the wizard cannot find the necessary software".
    I've tried more things than I can recall now, including deleting registry entries, but it just won't work. Any help would be appreciated at this point. Regards, m@

    Read the article

  • Decisions in teaching someone else to program: language selection

    - by Dinah
    My friend would like for me to guide her into learning programming. She's already proven enormous aptitude for thinking like a programmer, but is scared of the idea of programming, since in her mind it's relegated to some magical realm accessible only to smart people and trained computer scientists (ironically, I am neither, but that's beside the point). My main question is the age-old and irritating one: which language should I choose? I've narrowed it down to these:
    • PHP: dead simple to start with, and I remember enough of the language to answer all novice questions. However, I can think of a million reasons why I wouldn't recommend this as a first language. The most diplomatic of which is that there's no desktop app option to which I would feel comfortable subjecting a novice.
    • Python: supposed to be wonderful for beginners, and generally everything I've heard about it screams that this is the correct choice. That's the problem: everything I've heard about it. I don't know it yet, and I have a lot of projects going on right now, so I don't feel like learning it yet -- but I'm going to be the tech support when any little thing goes wrong. I know there are tons of online resources, but in the frustration of the moment, it's always going to be just me.
    • C#: this is the language I'm most comfortable with, so I know I can be good tech support. I also love this language and its versatility and community. The big drawback here is that I remember when I first learned it after doing mainly PHP, Perl, and JavaScript, and I found the experience overwhelming. You are simultaneously learning: programming concepts, C# syntax, strong typing, OOP, and a complex, powerful IDE with a bazillion options and buttons all over it.

    Read the article

  • What is the best "forgot my password" method?

    - by Edward Tanguay
    I'm programming a community website. I want to build a "forgot my password" feature. Looking around at different sites, I've found they employ one of three options:
    • send the user an email with a link to a unique, hidden URL that allows him to change his password (Gmail and Amazon)
    • send the user an email with a new, randomly generated password (Wordpress)
    • send the user his current password (www.teach12.com)
    Option #3 seems the most convenient to the user, but since I save passwords as an MD5 hash, I don't see how option #3 would be available to me, since MD5 is irreversible. It also seems an insecure option, since it means that the website must be saving the password in clear text somewhere, and at the least the clear-text password is being sent over insecure e-mail to the user. Or am I missing something here? So if I can't do option #3, option #2 seems to be the simplest to program, since I just have to change the user's password and send it to him. Although this is somewhat insecure, since a live password is being communicated via insecure e-mail. However, this could also be misused by trouble-makers to pester users by typing in random e-mails and constantly changing the passwords of various users. Option #1 seems to be the most secure, but requires a little extra programming to deal with a hidden URL that expires, etc., but it seems to be what the big sites use. What experience have you had using/programming these various options? Are there any options I've missed?
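    For scale, option #1 needs only a token generator and a hash lookup on top of an existing login system. A minimal illustration in Python (the storage and names are invented placeholders): store only a hash of the reset token, email the raw token in the link, and expire it after an hour, so even a leaked table contains no usable reset links.

        # Minimal sketch of option #1 (placeholder names; in-memory dict
        # stands in for a database table).
        import hashlib
        import secrets
        import time

        RESET_TTL_SECONDS = 3600       # link expires after one hour
        reset_requests = {}            # token-hash -> (user_id, expiry)

        def start_password_reset(user_id: str) -> str:
            token = secrets.token_urlsafe(32)                    # unguessable
            digest = hashlib.sha256(token.encode()).hexdigest()  # store hash only
            reset_requests[digest] = (user_id, time.time() + RESET_TTL_SECONDS)
            return f"https://example.com/reset?token={token}"    # email this URL

        def redeem_reset_token(token: str):
            digest = hashlib.sha256(token.encode()).hexdigest()
            entry = reset_requests.pop(digest, None)             # single-use
            if entry is None:
                return None
            user_id, expires = entry
            return user_id if time.time() < expires else None

    Note that this also blunts the pestering attack described for option #2: requesting a reset changes nothing about the account until the emailed link is actually used.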

    Read the article

  • Implementing arrays using a stack

    - by Zack
    My programming language has no arrays, no lists, no pointers, no eval and no variable variables. All it has:
    • Ordinary variables as you know them from most programming languages: they all have an exact name and a value.
    • One stack. The functions provided are: push (add an element to the top), pop (remove the element from the top and get its value) and empty (check if the stack is empty).
    My language is Turing-complete (basic arithmetic, conditional jumps, etc. are implemented). That means it must be possible to implement some sort of list or array, right? But I have no idea how...
    What I want to achieve: create a function which can retrieve and/or change an element x of the stack. I could easily add this function in the implementation of my language, in the interpreter, but I want to do it in my programming language itself.
    "Solution" one (accessing an element x, counting from the stack top): create a loop; pop off the element from the stack top x times; the last element popped off is element number x. I end up with a destroyed stack.
    Solution two: do the same as above, but store all popped-off values in a second stack. Then you could move all the elements back afterwards. But you know what? I don't have a second stack!
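    There is a third option hiding in the question: if functions in the language can call themselves, the call stack can stand in for the missing second stack. Each recursion level holds one popped value in an ordinary local variable and pushes it back on the way out. A sketch of that idea, transliterated into Python but restricted to push/pop/empty and plain variables:

        # Read element x (0-based, counting from the top) of the single stack,
        # restoring the stack afterwards; recursion replaces the second stack.
        stack = []

        def push(v): stack.append(v)
        def pop(): return stack.pop()
        def empty(): return len(stack) == 0

        def get(x):
            v = pop()                 # this level remembers one value
            result = v if x == 0 else get(x - 1)
            push(v)                   # unwind: everything goes back in order
            return result

        for n in (10, 20, 30, 40):
            push(n)
        print(get(2))                 # -> 20 (third element from the top)

    A set(x, value) works the same way, pushing value instead of v at depth x. If the language has no recursion either, the loop from "solution one" can still work by storing the popped values into numbered ordinary variables, at the cost of a fixed maximum depth.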

    Read the article

  • How do you pass .net objects values around in F#?

    - by Russell
    I am currently learning F# and functional programming in general (from a C# background), and I have a question about using .NET CLR objects during my processing. The best way to describe my problem is to give an example:

        let xml = new XmlDocument() |> fun doc -> doc.Load("report.xml"); doc
        let xsl = new XslCompiledTransform() |> fun doc -> doc.Load("report.xsl"); doc
        let transformedXml = new MemoryStream() |> fun mem -> xsl.Transform(xml.CreateNavigator(), null, mem); mem

    This code transforms an XML document with an XSLT document using .NET objects. Note that XslCompiledTransform.Load works on an object and returns void. Also, XslCompiledTransform.Transform requires a MemoryStream object and returns void. The strategy used above is to add the object at the end (the "; mem") to return a value and make functional programming work. When we want to do this one step after another, we have a function on each line with a return value at the end:

        let myFunc = new XmlDocument("doc")
                     |> fun a -> a.Load("report.xml"); a
                     |> fun a -> a.AppendChild(new XmlElement("Happy")); a

    Is there a more correct way (in terms of functional programming) to handle .NET objects and objects that were created in a more OO environment? Returning the value at the end and then having inline functions everywhere feels a bit like a hack and not the correct way to do this. Any help is greatly appreciated!

    Read the article

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.

    Read the article

  • If I wanted to make a Pac-Man Game?

    - by SoulBeaver
    I am immediately placing this as a community wiki thing. I don't want to ask for help in programming yet, or even pose a specific question about programming, but rather ask about the process and the resources needed to make such a game. To put it simply: my college friend and I decided to give ourselves a really big challenge to further our skills in programming. In six months' time we want to show ourselves a Pac-Man game. Pac-Man will be AI-controlled like the ghosts, and whichever Pac-Man lives the longest after a set of tries wins. This isn't like anything we've done so far. The goal here, for me, isn't to create a perfect game, but to try and complete it, learning a whole bunch in the process. Even if I don't finish in time, which is a good possibility, I would want to have at least tried this. So my question is this: how should I start preparing myself? I have already started on vector math, matrices, all that fun stuff. My desired platform would be DirectX 9.0c; is that advisable? Keep in mind that this is not a preference just for this project; I wish to have some kind of future in graphics development, so I want to pick a platform that is future-safe. As for game development in general, what should I take into consideration? I have never done a real game before, so any and all advice on the development of mid-scale projects (if this would be a mid-scale project) is greatly appreciated. My main concerns are the pitfalls and demotivators. Sorry if the question is so vague. If it doesn't belong here, then I will remove it. Otherwise, any and all advice regarding making larger projects is greatly appreciated.

    Read the article

  • I cannot grok MVC, what it is, and what it is not?

    - by Hao
    I cannot grok what MVC is. What mindset or programming model should I acquire so that the MVC stuff can instantly "lightbulb" in my head? If not instantly, what simple programs/projects should I try first, so I can apply the neat things MVC brings to programming? OOP is intuitive and easier; objects are all around us, and the benefits of code reuse in the OOP paradigm instantly click with anyone. You can probably talk to anybody about OOP for a few minutes, walk through some examples, and they would get it. While OOP somehow raises the intuitiveness of programming, MVC seems to do the opposite. I'm getting negative thoughts that some future employers (or even clients) would look down on me for not using MVC technology. I probably get the skinnable aspect of MVC, but when I try to apply it to my own project, I don't know where to start. And also, some programmers have diverging views on how to accomplish MVC properly. Take this, for instance, from Jeff's post about MVC: "The view is simply how you lay the data out, how it is displayed. If you want a subset of some data, for example, my opinion is that is a responsibility of the model." So maybe some programmers use MVC, but somehow inadvertently use the view or the controller to extract a subset of data. Why can't we have a definitive definition of what MVC is and how to accomplish it properly? And also, when I search for MVC .NET programs, most of what I find applies to web programs, not desktop apps, which intrigues me further. My guess is that MVC is most advantageous for web apps; there's not much of a problem with an intermixed view (HTML) and controller (program code) in desktop apps.
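    Sometimes a toy example says more than a definition. The sketch below (my own illustration in Python, not taken from any framework) is about the smallest complete MVC split possible; note that the "subset of some data" lives in the model, exactly as the quote above argues, while the view decides only presentation and the controller only translates user actions:

        # Tiny illustrative MVC, no framework.
        class TaskModel:
            """Owns the data and every rule about it, including subsetting."""
            def __init__(self):
                self.tasks = []                          # list of (title, done)

            def add(self, title):
                self.tasks.append((title, False))

            def complete(self, title):
                self.tasks = [(t, d or t == title) for t, d in self.tasks]

            def open_tasks(self):                        # the "subset" belongs here
                return [t for t, d in self.tasks if not d]

        class TaskView:
            """Decides only how data is laid out, never what it is."""
            def render(self, tasks):
                print("\n".join(f"[ ] {t}" for t in tasks) or "(nothing to do)")

        class TaskController:
            """Turns user actions into model calls, then asks the view to draw."""
            def __init__(self, model, view):
                self.model, self.view = model, view

            def user_adds(self, title):
                self.model.add(title)
                self.view.render(self.model.open_tasks())

            def user_completes(self, title):
                self.model.complete(title)
                self.view.render(self.model.open_tasks())

        c = TaskController(TaskModel(), TaskView())
        c.user_adds("write report")
        c.user_adds("ship build")
        c.user_completes("write report")                 # view now shows only "ship build"

    The "skinnable" aspect falls out directly: swapping TaskView for an HTML or GUI renderer touches neither the model nor the controller.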

    Read the article

  • one page filter results in new page in javascript

    - by Jake
    I have links set up on one page, and the relationship between the links is parent/child (for example, parent: All; children: Software, Hardware). These links all lead the user to a new page that shows the results from a populated table. Currently these links all have similar destinations, just with a filter in the URL. But the problem is that there is a JavaScript filter on the new page that lets the user choose between All, Software, or Hardware. Understand, basically, that if the URL still says they're on the Software page but they just filtered the page to Hardware, that doesn't look good IMO. So what I was trying to do was make the links on the initial page all go to the exact same destination and somehow still know, on the new page, which link was clicked, and run the JavaScript filter based on that. Is there a way to find that out from JavaScript? I guess what I need is a way to pass that value to the new page and retrieve it in JavaScript without showing it in the URL, so I can filter the table for the user based on that value.

    Read the article

  • Any book on designing and implementing a CRPG engine?

    - by Fabzter
    Hi! First, let me tell you, I am not really interested in making my own RPG engine (at least not in the near future, hehe), but I do feel like I want to understand the internals of how an RPG engine works. Why? Well, because I like to read about programming and design, it keeps me motivated and excited, and because I know I will learn a lot, for, even though I have been programming for some years now, I never stop considering myself ignorant... there are simply SO many things involved in a game engine (especially RPG ones, like branching storylines, and items and economics!) that I'm eager to know. I've been searching (and thus, finding) lots of info online, but it is never focused on what interests me most (most of it talks about the mathematics and AI algorithm implementations, which I know quite well), which is the design of the overall structure, patterns, scripting engine, decision engine... damn, so many things I can't even imagine, since I've never done any game programming. I hope you now have an idea of how I feel, how I want to learn for the sake of learning, and why I would want you to tell me whether you know of any books touching the topics that interest me the most.

    Read the article

  • Good Learning Method for Objective-C?

    - by Josh Kahane
    Hi. I know this must be asked a million times and can't be easy to answer, as there is no definitive method, but any help would be appreciated, thanks. I have been playing around with all sorts of things in Xcode and with Objective-C, however I can't seem to find an efficient way of learning things. I have bought the book 'Programming in Objective-C 2.0' and it's great, but it just lays down the basics, it seems. I want to learn in the 2D game development direction, then of course 3D after nailing that, if that's the right thing to do? I am 17, currently in year 13, my last year of school/A Levels, and am almost definitely taking a gap year. Any good, well-known, reputable courses, online or offline (real world)? This is my first programming language, and I am absolutely serious about learning this. One last question: when learning things online, I have in the past started building a feature and learning a certain aspect of programming, only to find out after adding more that it slows down the app or is too inefficient. Is the key to use a certain method in a certain situation (there being so many ways to do the same thing), or to use any of those methods and refine it in your app to make it run smoothly? Sorry, it's hard for me to know, with the little experience I have thus far. Sorry for rambling on! I would appreciate any help, thank you!

    Read the article
