Search Results

Search found 4079 results on 164 pages for 'parts'.

  • Antenna Aligner Part 3: Kaspersky

    - by Chris George
    Quick one today. Since starting this project, I've been encountering times where Nomad fails to build my app. It would then take repeated attempts at building to see a build go through successfully. Rob, who works on Nomad at Red Gate, investigated this and found that certain parts of the message required to trigger the 'cloud build' were not getting through to the Nomad app, causing the HTTP connection to stall until timeout. After much head-scratching, it turns out that the Kaspersky Internet Security system I have installed on my laptop at home was being very aggressive and was causing the problem. Perhaps it's trying to protect me from myself? Anyway, we came up with an interim solution while the Nomad guys investigate with Kaspersky: setting Visual Studio to be a trusted application in the Kaspersky settings and setting it not to scan its network traffic. Hey presto! This worked and I have not had a single build problem since (other than losing internet connection, or that embarrassing moment when you blame everyone else, then realise you've accidentally switched off the wireless on your laptop).

    Read the article

  • Why isn't DSM for unstructured memory done today?

    - by sinned
    Ages ago, Dijkstra invented IPC through mutexes, which then somehow led to shared memory (SHM) in Multics (which, afaik, had the necessary mmap first). Then computer networks came up and DSM (distributed SHM) was invented for IPC between computers. So DSM is basically an unstructured memory region (like SHM) that magically gets synchronized between computers without the application programmer taking action. Implementations include TreadMarks (unofficially dead now) and CRL. But then someone thought this was not the right way to do it and invented Linda & tuple spaces. Current implementations include JavaSpaces and GigaSpaces. Here, you have to structure your data into tuples. Other ways to achieve similar effects may be the use of a relational database or a key-value store like Riak. Although someone might argue otherwise, I don't consider them DSM, since there is no coherent memory region where you can put data structures as you like; instead you have to structure your data, which can be hard if it is contiguous, and administration like locking cannot be done for hard-coded parts (tuples, ...). Why is there no DSM implementation today, or am I just unable to find one?
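
    For illustration, a minimal in-process sketch of the tuple-space style (C#, all names made up). Real systems like JavaSpaces or GigaSpaces distribute the space across machines and add typed templates, transactions and leases, but the Put/Take-with-matching model is the same:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Threading;

        // Toy in-process "tuple space": producers Put tuples, consumers Take the
        // first tuple matching a predicate, blocking until one appears.
        class ToyTupleSpace
        {
            private readonly List<object[]> tuples = new List<object[]>();
            private readonly object gate = new object();

            public void Put(params object[] tuple)
            {
                lock (gate)
                {
                    tuples.Add(tuple);
                    Monitor.PulseAll(gate);   // wake any blocked Take() calls
                }
            }

            public object[] Take(Func<object[], bool> match)
            {
                lock (gate)
                {
                    while (true)
                    {
                        var found = tuples.FirstOrDefault(match);
                        if (found != null)
                        {
                            tuples.Remove(found);
                            return found;
                        }
                        Monitor.Wait(gate);   // block until a new tuple arrives
                    }
                }
            }
        }

        class Demo
        {
            static void Main()
            {
                var space = new ToyTupleSpace();
                new Thread(() => space.Put("order", 42, 19.99m)).Start();
                var order = space.Take(t => (string)t[0] == "order");
                Console.WriteLine("Took order #" + order[1]);
            }
        }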

    Read the article

  • The Iron Bird Approach

    - by David Paquette
    It turns out that designing software is not so different from designing commercial aircraft. I just finished watching a video about the approach Bombardier is taking in designing the new C Series aircraft, and I was struck by the similarities to agile approaches to software design. In the video, Bombardier describes how they are using an Iron Bird to work through a number of design questions well in advance of having a version of the aircraft that can actually be flown. The Iron Bird is a life-size replica of the plane. Based on the name, I would assume the replica is built in a very heavy material that could never fly. Using it, Bombardier is able to validate certain assumptions, such as the length of each wire in the electrical system. They are also able to confirm that some parts are working properly (like the rudders). They even go as far as to have a complete replica of the cockpit, which allows Bombardier to put pilots in it to run through simulated take-off and landing sequences. The basic tenets of the approach seem to be: validate your design early with working prototypes, and get feedback from users early, well in advance of finishing the end product. In software development, we tend to think of ourselves as special. I often tell people that it is difficult to draw comparisons to building items in the physical world ("Building software is nothing like building a skyscraper"). After watching this video, I am wondering if designing/building software is actually a lot like designing/building commercial aircraft. Watch the video here (http://www.theglobeandmail.com/report-on-business/video/video-selling-the-c-series/article4400616/)

    Read the article

  • Ubuntu Lenovo OCZ Agility3 - No Grub after install

    - by Michael
    I've tried a dual-boot (Win7 + Ubuntu) installation on a Lenovo E330 with a 240 GB OCZ Agility 3... Conclusions: Ubuntu:: Ubuntu 12.04 x86_64 (21.06.2012) is not able to install Grub in a bootable way. Grub gets installed and update-grub runs during installation, and it also recognizes the Windows OS, but after a restart it boots directly to Windows. This is directly connected to the OCZ Agility 3: on a good old-fashioned hard disk (the kind with moving parts) Ubuntu is capable of installing Grub in a bootable manner with no problem. PinguyOS:: PinguyOS 12.04 LTS x86_64 (which is an Ubuntu-based distro) is able to handle the Grub installation on the OCZ Agility 3. However, they both use Grub 1.99... What I did:: Installed Windows. Installed Ubuntu. Installed PinguyOS. Grub updates:: Grub updates are only possible through PinguyOS, which means that after kernel updates on Ubuntu you have to edit the Ubuntu Grub entries manually in the PinguyOS system. What I've already tried: OCZ firmware upgrade (live stick, successful); install Ubuntu Grub to sda; install Ubuntu Grub to sdc (Ubuntu partition); install Ubuntu Grub to /boot; run update-grub manually after install; restore Grub. Any ideas appreciated.

    Read the article

  • What relationship do software Scrum or Lean have to industrial engineering concepts like theory of constraints?

    - by DeveloperDon
    In Scrum, work is delivered to customers through a series of sprints in which project work is time-boxed to a fixed number of days or weeks, usually 30 days. In lean software development, the goal is to deliver as soon as possible, permitting early feedback for the next iteration. Both techniques stress the importance of workflow in which software work product does not accumulate in development awaiting release at some future date. Both permit new or refined requirements and feedback from QA and customers to be acted on with as little delay as possible, based on priority. A few years ago I heard a lecture where the speaker talked briefly about a family of concepts from industrial engineering called the theory of constraints. In the factory, they use an operations model based on three components: drum, buffer, and rope. The drum synchronizes work product as it flows through the system. Buffers protect the system by holding output from one stage as it waits to be consumed by the next. The rope pulls product from one work station to the next. Historically, are these ideas part of the heritage of Scrum and Lean, or are they on a separate track? If we wanted to think about Scrum and Lean in terms of drum-buffer-rope, what are the parts? Drum = {daily scrum meeting, monthly release}? Buffer = {burn-down list, source control system}? Rope = {daily meeting, continuous integration server, monthly releases}? Industrial engineers define work flow in terms of different kinds of factories. I-Factories: straight pipeline, one input, one output. A-Factories: many inputs and one output. V-Factories: one input, many output products. T-Plants: many inputs, many outputs. If it applies, what kind of factory is most like Scrum or Lean, and why?

    Read the article

  • Efficiency concerning thread granularity

    - by MaelmDev
    Lately, I've been thinking of ways to use multithreading to improve the speed of different parts of a game engine. What confuses me is the appropriate granularity of threads, especially when dealing with single-instruction-multiple-data (SIMD) tasks. Let's use line-of-sight detection as an example. Each AI actor must be able to detect objects of interest around them and mark them. There are three basic ways to go about this with multithreading: don't use threading at all; create a thread for each actor; or create a thread for each actor-object combination. Option 1 is obviously going to be the least efficient method. However, choosing between the next two options is more difficult. Only using one thread per actor still runs through every object in series instead of in parallel. But are CPUs able to create and join threads at the granularity posed in option 3 efficiently? It seems like that many calls to the OS could be really slow, and vary enormously between different hardware.
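
    For illustration, a rough sketch of the usual middle ground between options 2 and 3: keep a fixed pool of worker threads and hand it actor-sized work items, so no OS threads are created or joined per actor-object pair. Parallel.ForEach schedules onto the shared thread pool; the type and member names here are made up:

        using System.Numerics;
        using System.Threading.Tasks;

        class Actor { public Vector2 Position; public float SightRadius; public int VisibleCount; }
        class WorldObject { public Vector2 Position; }

        static class LineOfSight
        {
            // One work item per actor, scheduled on the shared thread pool.
            // The per-object loop stays sequential inside each work item, so we
            // never pay OS thread-creation cost per actor-object pair.
            public static void Update(Actor[] actors, WorldObject[] objects)
            {
                Parallel.ForEach(actors, actor =>
                {
                    int visible = 0;
                    float r2 = actor.SightRadius * actor.SightRadius;
                    foreach (var obj in objects)
                    {
                        if (Vector2.DistanceSquared(actor.Position, obj.Position) <= r2)
                            visible++;   // real code would also do an occlusion test here
                    }
                    actor.VisibleCount = visible;
                });
            }
        }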

    Read the article

  • Data Management Business Continuity Planning

    Business Continuity Governance In order to ensure data continuity, an organization needs to know how to handle a data or network emergency, because all systems have the potential to fail. Data Continuity Checklist: Disaster Recovery Plan/Policy; Backups; Redundancy; Trained Staff. Business Continuity Policies In order to protect data in case of any emergency, a company needs to put in place a disaster recovery plan and policies that can be executed by IT staff to ensure the continuity of the existing data and/or limit the amount of data whose continuity is lost. A disaster recovery plan is a comprehensive statement of consistent actions to be taken before, during and after a disaster, according to Geoffrey H. Wold. He also states that the primary objective of disaster recovery planning is to protect the organization in the event that all or parts of its operations and/or computer services are rendered unusable. Furthermore, companies can mandate through policies that IT must maintain redundant hardware in case of any hardware failures, and redundant network connectivity in case the primary internet service provider goes down. Additionally, they can require that all staff be trained on the disaster recovery policy to ensure that all parties involved are knowledgeable enough to execute the recovery plan. Business Continuity Procedures Business continuity procedures vary from organization to organization; however, there are standard procedures that most organizations should follow. Standard Business Continuity Procedures: back up, and test backups to ensure that they work; hire knowledgeable and trainable staff; offer training on new and existing systems; regularly monitor, test, maintain, and upgrade existing system hardware and applications; maintain redundancy regarding all data and critical business functionality.

    Read the article

  • Working with fubar/refuctored code

    - by Keyo
    I'm working with some code which was written by a contractor who left a year ago, leaving a number of projects with buggy, disgustingly bad code. This is what I call cowboy PHP, say no more. Ideally I'd like to leave the project as is and never touch it again. But things break, requirements change, and it needs to be maintained. Part A needs to be changed. There is a bug I cannot reproduce. Part A is connected to parts B, D and E. This kind of work gives me a headache and makes me die a little inside. It kills my motivation and productivity. To be honest, I'd say it's affecting my mental health. Perhaps, being at the start of my career, I'm being naive to think production code should be reasonably clean. I would like to hear from anyone else who has been in this situation before. What did you do to get out of it? I'm thinking long term I might have to find another job. Edit: I've moved on from this company now, to a place where idiots are not employed. The code isn't perfect, but it's at least manageable and peer reviewed. There are a lot of people in the comments below telling me that software is messy like this. Sure, I don't agree with the way some programmers do things, but this code was seriously mangled. The guy who wrote it tried to reinvent every wheel he could, and badly. He stopped getting work from us because of his bad code that nobody on the team could stand. If it were easy to refactor, I would have. Eventually, after many 'just do this small 10-minute change' situations had ballooned into hours of lost time (regardless of who on the team was doing the work), my boss finally caved in and it was rewritten.

    Read the article

  • HTC Android device mounted as USB drive is read-only unless I'm root

    - by Ian Dickinson
    When I connect my HTC Incredible S to my Ubuntu 10.10 system as a USB drive, the device seems to mount OK, but is read-only unless I access it as root. For example, if I run nautilus, I can't drag and drop files to the SD-card in the phone, but if I run sudo nautilus I can. I have USB debug support set on the phone (Applications > Development > USB debugging) and I have added a rule for the device in /etc/udev/rules.d/51-android.rules on my Ubuntu system. Any suggestions as to how I can mount the drive so that I can copy content to the SD card without needing to sudo? Update: Following advice from waltinator, I added the following line to my /etc/fstab:

        UUID=3537-3834 /media/usb1 vfat rw,user,noexec,nodev,nosuid,noauto

    However, the Android device is still being auto-mounted on /media/usb1 with uid and gid root. Update 2: syslog output:

        Nov 21 23:38:40 rowan-15 usbmount[4352]: executing command: mount -tvfat -osync,noexec,nodev,noatime,nodiratime /dev/sdd1 /media/usb1
        Nov 21 23:38:40 rowan-15 usbmount[4352]: executing command: run-parts /etc/usbmount/mount.d

    Read the article

  • Advice: How to convince my newly anointed team lead against rewriting the code base from scratch

    - by shan23
    I work in a pretty renowned MNC, and the module that I work in has been assigned to a new "lead". The code base is pretty huge (~130K or more, with interdependencies on other modules), but stable - some parts have grown ugly over the years, but it's provably in a working state. (Our products have been running on it for years, even new ones.) The problem is, our lead wants to rewrite the code from scratch, to encompass "finer granularity and a proactive design". I know in my gut that's not a very good idea, but how do I convince him/the rest of the team (who are pretty much all more senior than me in terms of years of experience), without sounding too pedantic myself ("Thou shalt not rewrite", as Joel et al. have clear articles prohibiting it)? I have a good working relationship with the person concerned, and don't want to ruin it, but neither do I want to be party to a decision which would surely plague us for years to come! Any suggestions for a milder, yet effective approach? Even accounts of how you have tackled such a situation to your liking would help me a lot! EDIT: The code base I'm talking about is not a product/GUI, but at kernel level, with all the critical functionalities for our product. I hope now you know why I sound so apprehensive!

    Read the article

  • With Choice Comes Complexity

    - by BuckWoody
    "Complex" may be defined as "Having many steps, details or parts." Many of Microsoft's products, including SQL Server, can be complex. I'm stating what most data professionals already know - there's usually multiple ways to do things in SQL Server. For instance, to import some data into a table you can use graphical tools, SQLCMD, bcp, SQL Server Integration Services, BULK INSERT, even PowerShell, just to name a few tools at your disposal. That's really not the issue, though. The bigger issue is that there are normally multiple thought-processes, or methods, that you have available for a task. That's both a strength and a weakness. If things were more simple, you would have fewer choices. Sometimes that's a good thing. Just tell me what I need to do and I'll do it. However, your particular situation may not fit that tool or process, so having more options increases your ability to get your job done the way you need to do it. On the other hand, that's more for you to learn, which is harder. There's another side of this benefit/difficulty that you need to be aware of. Even if you're quite good at what you do, keep in mind that the way you know how to do something may not be the only way to do it. Keep your mind open to new possibilities, and most importantly - to new knowledge. SQL Server professionals teach me something new every day. So embrace the complexity - on balance, it's a good thing! Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • MonoGame not all letters being drawn with DrawString

    - by Lex Webb
    I'm currently making a dynamic user interface for my game and am setting up text on my buttons. I'm having an odd issue where, when I use a specific piece of code to determine the text position, it will not render all of the text passed to DrawString. Even weirder, if I insert another DrawString after this, drawing more text at a different place, different parts of the text will be drawn. The code for drawing my button with the text attached is:

        public override void Draw(SpriteBatch sb, GameTime gt)
        {
            sb.Draw(currentImage, GetRelativeRectangle(), Color.White);
            sb.DrawString(font, text,
                new Vector2(
                    this.GetRelativeDrawOffset().X + this.Width / 2 - font.MeasureString(text).X / 2,
                    this.GetRelativeDrawOffset().Y + this.Height / 2 - font.MeasureString(text).Y / 2),
                textColor);
        }

    The methods used in the creation of the Vector2 simply get the draw position of the button. I'm then doing some calculation to center the text. This produces this when the text is set to 'Test'. And when I enter this piece of code below the first DrawString:

        sb.DrawString(font, "test", new Vector2(500, 50), Color.Pink);

    I should mention that the grey square is being drawn in the same spritebatch, before the button and the text. Any ideas as to what could be causing this? I have a feeling it may be due to draw order, but I have no idea how to control that.

    Read the article

  • Introducing a (new) test method to a team

    - by Jon List
    A couple of months ago I was hired in a new job. (I'm fresh out of my Masters in software engineering.) The company mainly consists of ERP consultants, but I was hired into their fairly small web department (6 developers); our main task is ERP/ecom integration (ERP-integrated web shops). The department is growing, and recently my manager asked me to start thinking about introducing tests to the team. I love a challenge, but frankly I'm a bit scared (I'm the least experienced member of the team). Currently the method of testing is clicking around in the web shop and asking the customer if the products are there, if they look okay, and if orders are posted correctly to the ERP. We are getting a lot of support cases on previous projects, where a customer or a customer's customer has run into errors, which - I suppose - is why my manager wants more structured testing. Off the top of my head, I thought of some (obvious?) improvements, like looking at the requirement specification, having an issue tracker, enabling team members to register their time on a "tests" line on the budget, and circulating tasks amongst members of the team. But as I see it we have three main challenges: general website testing (JavaScript, C#, ASP.NET and CMS integration tests); (live) ERP integration testing (customers rarely want to pay for test environments); and adopting a method in the team. I like the responsibility, but I am afraid that I'm in a little bit over my head. I expect that my manager expects me to set up some kind of workshop for the team where I present some techniques and ideas and where we (the team) can find some solutions together. What I learned in school was mostly unit testing and program verification, not so much testing across multiple systems and applications. What I'm looking for here is references/advice/pointers/anecdotes; anything that might help me to get smarter and to improve the current method of my team. Thanks!! (TL;DR: read the bold parts)
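
    As a possible concrete starting point for such a workshop, a single self-contained unit test around a small piece of shop logic (NUnit shown here; OrderTotals is a made-up stand-in for whatever code builds order lines before they are posted to the ERP):

        using NUnit.Framework;

        // Hypothetical production code: whatever the shop uses to compute an
        // order line before posting it to the ERP.
        public static class OrderTotals
        {
            public static decimal LineTotal(int quantity, decimal unitPrice)
            {
                return quantity * unitPrice;
            }
        }

        [TestFixture]
        public class OrderTotalsTests
        {
            [Test]
            public void LineTotal_Multiplies_Quantity_By_Unit_Price()
            {
                Assert.AreEqual(28.50m, OrderTotals.LineTotal(3, 9.50m));
            }

            [Test]
            public void LineTotal_Of_Zero_Quantity_Is_Zero()
            {
                Assert.AreEqual(0m, OrderTotals.LineTotal(0, 9.50m));
            }
        }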

    Read the article

  • What did programmers do before variable scope, where everything is global?

    - by hydroparadise
    So, I am having to deal with a seemingly archaic language (called PowerOn) where I have a main method, a few datatypes to define variables with, and the ability to have sub-procedures (essentially void methods) that do not return a value nor accept any arguments. The problem here is that EVERYTHING is global. I've read about these types of languages, but most books take the approach of "OK, we used to use a horse and carriage, but now here's a car, so let's learn how to work on THAT! We will NEVER relive those days." I have to admit, the mind struggles to think outside of scope and extent. Well, here I am. I am trying to figure out how to best manage nothing but global variables across several open methods. Yep, even iterators for for-loops have to be defined globally, and I find myself recycling them in different parts of my code. My question: for those who have this type of experience, how did programmers deal with a large number of variables in a global playing field? I have a feeling it just became a mental juggling trick, but I would be interested to know if there were any known approaches.

    Read the article

  • Library Organization in .NET

    - by Greg Ros
    I've written a .NET bitwise operations library as part of my projects (stuff ranging from get MSB set to some more complicated bitwise transformations) and I mean to release it as free software. I'm a bit confused about a design aspect of the library, though. Many of the methods/transformations in the library come with different endianness. A simple example is a getBitAt method that regards index 0 as the least significant bit, or the most significant bit, depending on the version used. In practice, I've found that using separate functions for different endianness results in much more comprehensible and reusable code than assuming all operations are little-endian or something. I'm really stumped regarding how best to package the library. Should I have methods that have LE and BE versions take an enum parameter in their signature, e.g. Endianness.Little, Endianness.Big? Should I have different static classes with identically named methods? such as MSB.GetBit and LSB.GetBit On a much wider note, is there a standard I could use in cases like this? Some guide? Is my design issue trivial? I have a perfectionist bent, and I sometimes get stuck on tricky design issues like this... Note: I've sort of realized I'm using endianness somewhat colloquially to refer to the order/place value of digital component parts (be they bits, bytes, or words) in a larger whole, in any setting. I'm not talking about machine-level endianness or serial transmission endianness. Just about place-value semantics in general. So there isn't a context of targeting different machines/transmission techniques or something.
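
    For illustration, a minimal sketch of the two shapes being weighed, with made-up names; the question is really which reads better at the call site, Bits.GetBit(x, 3, BitOrder.MsbFirst) or Msb.GetBit(x, 3):

        // Option A: one method, the bit order is an explicit parameter.
        public enum BitOrder { LsbFirst, MsbFirst }

        public static class Bits
        {
            public static bool GetBit(uint value, int index, BitOrder order)
            {
                int shift = order == BitOrder.LsbFirst ? index : 31 - index;
                return ((value >> shift) & 1u) != 0;
            }
        }

        // Option B: two identically shaped classes, the bit order is in the type name.
        public static class Lsb
        {
            public static bool GetBit(uint value, int index)
            {
                return ((value >> index) & 1u) != 0;
            }
        }

        public static class Msb
        {
            public static bool GetBit(uint value, int index)
            {
                return ((value >> (31 - index)) & 1u) != 0;
            }
        }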

    Read the article

  • Electronics 101 for kids: littleBits review

    - by Daniel Cazzulino
    I'm always on the lookout for cool toys that can empower my kids (9, 6 and 2yo) to be creative and break the mold of being just consumers of others' ideas. I recently came across littleBits while watching a TED video by their creator, and I was immediately hooked. It seemed like the perfect blend of simplicity and self-learning that I was looking for to get my kids into electronics. So I went ahead and purchased the kit from SparkFun and a bunch of standalone parts ("bits") from the site itself. There are also a bunch of videos and pictures on their site to give a better idea of what they are, as well as a multitude of YouTube videos. This weekend I gave them to my kids, and coincidentally, we also travelled to my hometown and they got to share them with their cousins too. Man, what a blast it was! I decided to approach this "toy" just like one of the iPod/iPad games I buy for them: "How it's used? I've no idea! I just heard it was great, you go figure it out!". And figure it out they did...

    Read the article

  • Code structure for multiple applications with a common core

    - by Azrael Seraphin
    I want to create two applications that will have a lot of common functionality. Basically, one system is a more advanced version of the other system. Let's call them Simple and Advanced. The Advanced system will add to, extend, alter and sometimes replace the functionality of the Simple system. For instance, the Advanced system will add new classes, add properties and methods to existing Simple classes, change the behavior of classes, etc. Initially I was thinking that the Advanced classes simply inherited from the Simple classes but I can see the functionality diverging quite significantly as development progresses, even while maintaining a core base functionality. For instance, the Simple system might have a Project class with a Sponsor property whereas the Advanced system has a list of Project.Sponsors. It seems poor practice to inherit from a class and then hide, alter or throw away significant parts of its features. An alternative is just to run two separate code bases and copy the common code between them but that seems inefficient, archaic and fraught with peril. Surely we have moved beyond the days of "copy-and-paste inheritance". Another way to structure it would be to use partial classes and have three projects: Core which has the common functionality, Simple which extends the Core partial classes for the simple system, and Advanced which also extends the Core partial classes for the advanced system. Plus having three test projects as well for each system. This seems like a cleaner approach. What would be the best way to structure the solution/projects/code to create two versions of a similar system? Let's say I later want to create a third system called Extreme, largely based on the Advanced system. Do I then create an AdvancedCore project which both Advanced and Extreme extend using partial classes? Is there a better way to do this? If it matters, this is likely to be a C#/MVC system but I'd be happy to do this in any language/framework that is suitable.
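
    One caveat on the partial-class idea: C# partial classes cannot span assemblies, so the Core parts would have to be compiled into each product (for example as linked source files) rather than referenced from a separate Core DLL. A minimal sketch of what the split could look like, with made-up names:

        using System.Collections.Generic;

        // Core part: compiled into BOTH products (e.g. as a linked source file),
        // because every part of a partial class must end up in the same assembly.
        public partial class Project
        {
            public string Name { get; set; }
        }

        // Simple-only part (only this file is included in the Simple build).
        public partial class Project
        {
            public Sponsor Sponsor { get; set; }          // a single sponsor
        }

        // Advanced-only part (included in the Advanced build instead of the
        // Simple-only part above).
        public partial class Project
        {
            public List<Sponsor> Sponsors { get; set; }   // many sponsors
        }

        public class Sponsor
        {
            public string Name { get; set; }
        }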

    Read the article

  • rel="nofollow" SEO impact

    - by Torez
    I saw a technique used where there was a block with three parts: 1. Image (wrapped in an anchor tag) 2. Heading (anchor tag with heading text) 3. Paragraph (regular p tag with synopsis content), e.g.

        <li class="block">
          <a rel="nofollow" class="thumb" href="#"><img src="images/placeholder_service_thumbnail.jpg" alt="" /></a>
          <a class="h3" href="#">Good SEO Heading</a>
          <p>Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu...</p>
        </li>

    On the image there is a rel="nofollow" on the wrapping anchor tag. So the idea is that the user still has the ability to click the image and go to the details page, but the image link does not rank. When users click on the heading text, that is the only link that ranks for that specific page. Q: Is this the correct approach? Does this even do anything? What is the best practice?

    Read the article

  • Documenting mathematical logic in code

    - by Kiril Raychev
    Sometimes, although not often, I have to include math logic in my code. The concepts used are mostly very simple, but the resulting code is not - a lot of variables with unclear purpose, and some operations with not-so-obvious intent. I don't mean that the code is unreadable or unmaintainable, just that it's waaaay harder to understand than the actual math problem. I try to comment the parts which are hardest to understand, but there is the same problem as in just coding them - text does not have the expressive power of math. I am looking for a more efficient and easy-to-understand way of explaining the logic behind some of the complex code, preferably in the code itself. I have considered TeX - writing the documentation and generating it separately from the code. But then I'd have to learn TeX, and the documentation would not be in the code itself. Another thing I thought of is taking a picture of the mathematical notation, equations and diagrams written on paper or a whiteboard, and including it in the javadoc. Is there a simpler and clearer way? P.S. Giving descriptive names (timeOfFirstEvent instead of t1) to the variables actually makes the code more verbose and even harder to read.
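
    One low-tech compromise is to keep the short, math-style names in the body and let a doc comment carry the notation and derivation; a small, made-up C# example of the idea:

        using System;

        public static class Quadratic
        {
            /// <summary>
            /// Solves a*t^2 + b*t + c = 0 for the smallest non-negative root
            /// (assumes a != 0).
            ///
            /// Quadratic formula:
            ///     t = (-b ± sqrt(b^2 - 4*a*c)) / (2*a)
            ///
            /// The single-letter names a, b, c, t are kept so the code lines up
            /// with the formula above; the comment carries the notation.
            /// </summary>
            public static double? SmallestNonNegativeRoot(double a, double b, double c)
            {
                double discriminant = b * b - 4 * a * c;
                if (discriminant < 0)
                    return null;                          // no real solution

                double sqrtD = Math.Sqrt(discriminant);
                double t1 = (-b - sqrtD) / (2 * a);
                double t2 = (-b + sqrtD) / (2 * a);

                double lo = Math.Min(t1, t2);
                double hi = Math.Max(t1, t2);
                if (lo >= 0) return lo;
                if (hi >= 0) return hi;
                return null;                              // both roots are negative
            }
        }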

    Read the article

  • Social Retailing

    - by David Dorf
    For retailers the move to mobile has been obvious. More and more consumers are interacting with retailers, both online and in the store, using their mobile devices. Retailers are quick to invest in both consumer-facing mobile apps as well as ones to equip employees. But when I talk to retailers about social, the value isn't as clear-cut. Intuitively, retailers know that better relationships with customers will result in higher sales, but the trip to get there has many paths. The interesting thing about social media is that it has the potential to permeate all parts of the business. Obviously it works well for marketing, but it also has a place in recruiting, knowledge management, trend analysis, and employee collaboration. Information gathered from social media can enhance existing processes like assortment planning, product development, space planning, promotion planning, and replenishment. Letting the customer influence each of these areas helps align the experience. One of the things holding retailers back is the lack of consistent and integrated tools to manage social media and make sense of the huge amounts of data. To that end, Oracle has been aggressively acquiring in the space, as depicted in the infographic below. Soon, social will get the same level of investment as mobile. The Social CRM Arms Race: A Timeline - An infographic by the team at Pardot Marketing Automation

    Read the article

  • Thinking skills to be a good programmer

    - by Paul
    I have been programming for the last 15 years with a non-CS degree. The main reason I got into programming was that I liked learning new things and applying them to my work. And I was able to find and fix programming errors and their causes faster than others. But I never considered myself a guru or an expert, maybe due to my non-CS major. And when I saw great programmers, I observed that they are very good, much better than me of course, at solving problems. One skill I found valuable in my mid-career is thinking of requirements and tasks in reverse order and in the abstract. That way, I can see what is really required of me without the detail, and can quickly find the parts of a solution that already exist. So I wonder if there are other thinking skills one needs to be a good programmer. I've followed the Q&As below and actually read some of the books recommended there. But I couldn't really pick up good methods directly applicable to my programming work. What non-programming books should a programmer read to help develop programming/thinking skills? Skills and habits to develop to be good at programming (I'm a newbie)

    Read the article

  • Stuff you learned in school that you have never used again?

    - by Mercfh
    Obviously we learn plenty of things at university/college/wherever that probably don't apply to everyday use, but is there anything that stands out particularly? Maybe something that was concentrated on A LOT? For me it was definitely 2 things: OO concepts and pointers. I still use OO, but not nearly to the extent people made it out to be; I can see where it'd be useful, but in my line of work we don't have huge numbers of classes, maybe a couple at most. And there certainly isn't much OO reuse (I finally figured out what that means lol). Pointers are another thing; again, I can see where they'd be useful... however I barely ever touch them, nor do the others I work with. I guess language choice has a lot to do with that, but still. What about you guys? edit: For those who are asking, I work for a large printer company, and most of the applications we work on are Java+XML and ActionScript for "Printer Apps". But we are moving towards other languages (think webkits and stuff), so the amount of code per part is quite small. I never said OO wasn't useful, I just said I personally haven't seen it used in my workplace much.

    Read the article

  • Does anybody know of any resources to achieve this particular "2.5D" isometric engine effect?

    - by Craig Whitley
    I understand this is a little vague, but I was hoping somebody might be able to describe a high-level workflow or link to a resource for achieving a specific isometric "2.5D" tile engine effect. I fell in love with this engine: http://www.youtube.com/watch?v=-Q6ISVaM5Ww - especially the lighting and the shaders! He has a brief description of how he achieved what he did, but I could really use a brief outline of where you would start, what you would read up on and learn, and the logical order to implement these things. A few specific questions: 1) Is there a heightmap on the ground texture that lets the light reflect more brightly on certain parts of it? 2) "..using a special material which calculates the world-space normal vectors of every pixel.." - is this some "magic" special material he has created himself, or can you hazard a guess at what he means? 3) In relation to the above quote - what does he mean by 'world-space normal vectors of every pixel'? 4) I'm guessing I'm being a little bit optimistic when I ask if there's any 'all-in-one' tutorial out there? :)

    Read the article

  • Recording slow web stream

    - by Budric
    I'm trying to record an mpeg2 video stream from a website that doesn't have the greatest bandwidth. The video often buffers. I want to download the stream and watch it offline. The stream format received is:

        Stream #0.0[0x44]: Audio: mp2, 48000 Hz, stereo, s16, 192 kb/s
        Stream #0.1[0x45]: Video: mpeg2video (Main), yuv420p, 704x576 [PAR 16:11 DAR 16:9], 15000 kb/s, 27.19 fps, 25 tbr, 90k tbn, 50 tbc

    I use the following tool to transcode the stream:

        ffmpeg -i "http://url" -y -vcodec libx264 -b 3000k -acodec copy /tmp/stream.mp4

    Unfortunately, after a few seconds ffmpeg stops recording with an error:

        [mpegts @ 0x1f0b9c0] PES packet size mismatch
        [mp2 @ 0x1f14640] incomplete frame
        Error while decoding stream #0.0
        [mpeg2video @ 0x1f16860] ac-tex damaged at 0 26
        [mpeg2video @ 0x1f16860] Warning MVs not available

    I've tried encoding with vlc as well, with similar issues. Although vlc doesn't stop encoding, the output video has regions where it hangs.

        vlc -I dummy "http://url" --network-caching="1000" --sout="#transcode{vcodec=h264,vb=3000,acodec=mp3,ab=192}:std{access=file,mux=mp4,dst=/tmp/stream.mp4}"

        [mpeg2video @ 0x7f2d4c001e20] ac-tex damaged at 9 33
        [mpeg2video @ 0x7f2d4c001e20] Warning MVs not available
        [mpeg2video @ 0x7f2d4c001e20] concealing 132 DC, 132 AC, 132 MV errors
        [mpeg2video @ 0x7f2d4c001e20] ac-tex damaged at 16 17
        [mpeg2video @ 0x7f2d4c001e20] Warning MVs not available
        [mpeg2video @ 0x7f2d4c001e20] concealing 836 DC, 836 AC, 836 MV errors
        libdvbpsi error (PSI decoder): TS discontinuity (received 4, expected 3) for PID 0

    I also tried flv transcoding and it shows up with its own set of issues, like the output flv file hanging in certain parts. Anyone know what's wrong or how to fix this?

    Read the article
