Search Results

Search found 6625 results on 265 pages for 'advice'.


  • Is ActionScript 3 used by Serious Indie Developers?

    - by Puedes
    This question is for dedicated independent game developers: My dream is to be a game developer. I am a senior in high school who has taken Computer Science for all four years. I have used Java the whole time, but last year I started using PHP and ActionScript 3 (with Flixel). I also used Game Maker for a brief period. I apologize for this, I wanted to get that out of the way and clarify the fact that I have experience of some kind with game development. I am stuck at the moment because I don't quite know what language to use to develop games at a professional level. I am seriously interested in becoming a dedicated game developer, but this issue is really bothering me. I would like to know what the best option would be for my case, based on your experiences. Any advice is appreciated. Things to consider: I am only interested in making 2D games (I am not worried about 3D support) It would be ideal to use something that can be ported to multiple platforms (so as not to run into this problem later) I can't seem to figure out what the industry likes to use So far, this is what I have: I can't decide if it would be wise to stick with ActionScript 3, or move to C++ I know Flash would be for browser games, but what if I want to make a downloadable game, like Plants Vs. Zombies or Super Crate Box? Would Flash be a smart choice for standalone games, or did they use something else? Thank you for reading this, as I would like to stop worrying about this and make some games! Also, I hope this wasn't all over the place :) tl;dr Should I move ahead with AS3 or use something else i.e. C++


  • How to create a deb package that installs a series of files

    - by fossfreedom
    I would like to create a brand new deb package that installs a series of files. If at all possible, I would like to untar the folder containing these files into a known location as part of the installation. Failing that, some knowledge of how to package the source folders and files would be very useful. The question is: is this possible and, if so, how? Let's give an example: ~/mypluginfolder/ contains the files x and y, plus a subfolder called abc which in turn contains another file called z. I want to tar this folder:

        tar -cvf myfiles.tar ~/mypluginfolder

    I presume my debian package would look like:

        myfiles.tar.gz
        myfiles+ppafoss_0.1-1/
            myfiles.tar
            DEBIAN/
                changelog, compat, control, install, rules, source

    Is it possible to somehow untar myfiles.tar into a known folder location, for example /usr/share/rhythmbox/plugins/? Thus the final result would be:

        /usr/share/rhythmbox/plugins/mypluginfolder
        /usr/share/rhythmbox/plugins/mypluginfolder/x
        /usr/share/rhythmbox/plugins/mypluginfolder/y
        /usr/share/rhythmbox/plugins/mypluginfolder/abc/z

    If Launchpad needs source, advice is sought as to where I should drop the source folders and files within the deb package structure. This will eventually become a series of individual Launchpad PPA packages. What I would prefer (but may not be able to achieve) is to keep my packaging to a minimum: create a series of packages from a template and adjust only the bare minimum (the changelog etc. plus the tar file and folder structure).
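    A minimal sketch of one common way to do this with debhelper, offered as an illustration rather than as the asker's actual setup: ship the plugin folder unpacked inside the source package and let a debian/install file copy it into place, so nothing has to be untarred at install time. The destination path below is taken from the question; using dh_install for the copy is an assumption.

        # debian/install -- "source destination" pairs, one per line;
        # dh_install copies whole directories, subfolders included
        mypluginfolder usr/share/rhythmbox/plugins/

    Built with an ordinary dh-style debian/rules, that single line places mypluginfolder (and abc/z inside it) under /usr/share/rhythmbox/plugins/, which keeps the per-package differences down to the changelog, the control fields and the folder being shipped.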


  • Choosing a (browser) game environment [closed]

    - by Iain
    I apologise in advance if this post is something you've heard a million times already or seems like a trolling attempt. I just want some advice and I'm coming up short with my own Google searches. Basically, I would like to start learning some game development in my own free time (nothing serious, just purely as a hobbyist for fun). I'd like to know what the community's opinions are on the old HTML5/JavaScript vs. Flash argument, but purely from a game development perspective. I know people say Flash is dying because of issues like SEO, memory/bandwidth usage and Apple dropping it on tablet and mobile devices, so is it worth me dedicating my free time to learning to use Flash/AS3 for game development, or should I focus on HTML5/JavaScript? At the moment, I'm not sure HTML5/JavaScript is mature enough or has the support tools that Flash does (framework, IDE, etc), and there seem to be a lot more resources online for beginner Flash/AS3 programming. When I'm reading tutorials online for Flash/AS3 I always have it in the back of my head that I'm wasting my time because it won't be around in a few years and I should be investing that time learning HTML5/JavaScript. Thoughts? Disclaimer: I'm not trying to spark a flame war or troll anyone - I believe in the right tools for the job and I don't want to waste my time learning something that won't be around in a few years.


  • How can I reduce the amount of time it takes to fully regression test an application ready for release?

    - by DrLazer
    An app I work on is being developed with a modified version of Scrum. If you are not familiar with Scrum, it's just an alternative approach to a more traditional waterfall model, where a series of features are worked on for a set amount of time known as a sprint. The app is written in C# and makes use of WPF. We use Visual C# 2010 Express edition as an IDE. If we work on a sprint and add in a few new features, but do not plan to release until a further sprint is complete, then regression testing is not an issue as such. We just test the new features and give the app a good once-over. However, if a release is planned that our customers can download, a full regression test is factored in. In the past this wasn't a big deal: it took 3 or 4 days and the devs simply fixed up any bugs found in the regression phase. But now, as the app is getting larger and larger and incorporating more and more features, the regression is stretching out for weeks. I am interested in any methods that people know of or use that can decrease this time. At the moment the only ideas I have are to either start writing unit tests, which I have never fully tried out in a commercial environment, or to research the possibility of any UI automation APIs or tools that would allow me to write a program to perform a series of batch tests. I know literally nothing about the possibilities of UI automation, so any information would be valuable. I don't know that much about unit testing either: how complicated can the tests be? Is it possible to get unit tests to use the UI? Are there any other methods I should consider? Thanks for reading, and for any advice in advance.

    Edit: Thanks for the information. Does anybody know of any alternatives to what has been mentioned so far (NUnit, RhinoMocks and CodedUI)?
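    To give a feel for how small a unit test can be, here is a sketch in NUnit syntax; the ScoreCalculator class is made up for illustration and does not come from the app being discussed. Tests like this exercise plain C# logic (view models, calculations, parsing) directly, without touching the WPF UI.

        using NUnit.Framework;

        [TestFixture]
        public class ScoreCalculatorTests
        {
            [Test]
            public void Average_IgnoresNegativeScores()
            {
                // ScoreCalculator is a hypothetical class under test
                var calc = new ScoreCalculator();
                calc.Add(10);
                calc.Add(-5);   // invalid entries should be ignored
                calc.Add(20);

                Assert.AreEqual(15, calc.Average);
            }
        }

    Driving the real UI is usually handled separately with an automation tool such as CodedUI rather than with unit tests, since unit tests are at their best when they stay fast and independent of the running application.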


  • Why would 70-persistent-net.rules have no effect?

    - by Wes Felter
    I've got a saucy server with a lot of NICs and they end up with weird names like "rename19". I know interface names can be changed by modifying the /etc/udev/rules.d/70-persistent-net.rules file. The first clue that something is wrong is that that file did not exist, even though it's supposed to be created automatically. So I decided to write my own based on advice from Linux From Scratch:

        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.0", NAME="eth0"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.1", NAME="eth1"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.2", NAME="eth2"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.3", NAME="eth3"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:0c:00.0", NAME="mezz0"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:0c:00.1", NAME="mezz1"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:1b:00.0", NAME="slot1a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:1b:00.1", NAME="slot1b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:20:00.0", NAME="slot2a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:20:00.1", NAME="slot2b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:11:00.0", NAME="slot3a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:11:00.1", NAME="slot3b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:8b:00.0", NAME="slot4a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:8b:00.1", NAME="slot4b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:90:00.0", NAME="slot5a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:90:00.1", NAME="slot5b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:95:00.0", NAME="slot6a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:95:00.1", NAME="slot6b"

    (I'm matching on PCI IDs instead of MAC addresses because I have multiple identical machines that I want to apply this configuration to.) After rebooting, nothing has changed. It's like these rules aren't even being read. There's not much going on in dmesg either:

        $ dmesg | grep udev
        [    3.196629] systemd-udevd[323]: starting version 204
        [    6.719140] systemd-udevd[550]: starting version 204
        [   38.695050] init: udev-fallback-graphics main process (1658) terminated with status 1
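    As a side note, not taken from the question: a quick way to check whether udev reads a rules file at all is to replay the events for one interface and watch which files get parsed. A sketch, reusing the "rename19" interface name from the question:

        # reload rules, then simulate processing for one NIC and look for
        # mentions of the custom rules file and the NAME assignment
        sudo udevadm control --reload-rules
        sudo udevadm test /sys/class/net/rename19 2>&1 | grep -i -e "70-persistent-net" -e "NAME"

        # confirm the KERNELS value the rules should match against
        udevadm info -a -p /sys/class/net/rename19 | grep -i "KERNELS"

    If 70-persistent-net.rules never shows up in the udevadm test output, the file is not being read at all (wrong directory, permissions, or the naming is handled elsewhere); if it shows up but the NAME is not applied, the match keys are the next thing to suspect.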


  • wine 1.4 regedit makes screen flicker on 12.04 with dual monitor setup

    - by s1lv3r
    I have a dual-monitor setup running two 23" displays at 1920x1080 which has the following problem: when running any wine application (for example "wine regedit" from a console) the screen flickers and the windows have artifacts like this: Also, sometimes taking a screenshot using the Print key will make compiz crash (the launcher and all window bars/menus are gone) when a wine application is started. I don't have the same problems on my notebook, which has the same setup. The only difference is that the notebook has ATI graphics and this PC has nvidia. This is the output of lshw -c video:

        *-display
             description: VGA compatible controller
             product: G72 [GeForce 7300 LE]
             vendor: NVIDIA Corporation
             physical id: 0
             bus info: pci@0000:07:00.0
             version: a1
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
             configuration: driver=nvidia latency=0
             resources: irq:16 memory:fa000000-faffffff memory:d0000000-dfffffff memory:fb000000-fbffffff memory:fce00000-fce1ffff

    I also noticed that running xrandr from a console makes the screen flicker for some seconds on this PC, which also doesn't happen on my notebook. Removing one screen from the setup stops the flickering and the artifacts inside the wine applications. Does anybody have advice on what I could try to change to make this work?


  • from MS Biology to BS Computer Science [on hold]

    - by Air Borne
    I'm Marco from Italy and I'd like to ask you for a piece of advice about my career. I hold an MS degree in Biology; I enjoyed studying it a lot and I got very good grades, but I didn't know what to do with my degree in real life. A few months ago I began to read a book about Python programming (Introduction to Computer Science, Zelle J.) and I'm having great fun learning Python as a beginner; I wake up in the morning thinking about doing exercises and writing simple programs with Python :) I'm also watching free lectures from MIT OpenCourseWare, and I'm feeling a certain degree of regret for never asking myself what computer science was, since it seems to me it's a magic world. After weeks of doubts, I made a move :) I applied for a CS bachelor's degree abroad, I got an interview and I'm going to start this great adventure next September. I feel incredibly excited about it, but a little bit scared too. Scared because sometimes I think I'm making a great mistake with my life, restarting from a bachelor's in a completely different area of study. Sometimes I hear people saying the IT market is bad, sometimes I hear other ones saying quite the opposite instead. Moreover, some colleagues of mine suggested I try to get into bioinformatics instead of CS. My question is: I want to really discover if CS is for me, I mean the passion of my life. I know I'm just a beginner and I can't say much about it yet. What do you suggest: CS or bioinformatics? If I get a BS in CS, could I get into bioinformatics without relevant experience, taking into account that I have an MS Biology degree? Any comment is appreciated, thanks in advance.


  • Tracking feature requests for small-scale components

    - by DXM
    I'm curious how other development teams (especially those that work in moderate to large development groups) track "future" features/wishlist functionality for internally developed frameworks or components. I know the standard advice is that a development team should find one good tool for tracking bugs/features and use that for everything, and I agree with that if the future requests are for the product itself. In my company we have an engineering department, which is broken up into multiple groups, and within each there can be one to several agile teams. The bug tracking product we use has been "a leader since 1997" (its UI/usability seems to also be evaluated against that year even today), but my agile team or even group doesn't really control what is being used by the whole department. What we are looking to track is not necessarily product features but expansion/nice-to-have functionality for internal components that go into our product. To name a few, for example:

        a framework/utility library on top of CppUnit which our developers share
        a low-level IPC communications framework
        a common development SDK that I and several other team leads started to help share some common code/tools at the department-wide level (this SDK is released as an internal "product" to each of the groups)

    Is the standard practice to use the one bug tracking tool? Or would it make more sense to set up something more localized specifically for our needs and maintain it ourselves? It's also unclear how management will feel if developers start performing "IT" roles of maintaining software and servers. At the same time, right now, we use Excel files, an internal wiki and MS OneNote for this kind of stuff and that just doesn't feel right. (I'm afraid to ask for actual software recommendations, since that might make this question more localized or something. Also, developers need this way more than management, so it would be nice to find something either free or no more than the cost of a happy hour.)


  • What do you do to make sure you take proper/enough breaks, while avoiding unwanted side-effects of break taking?

    - by blueberryfields
    preamble It seems to me that computer programmers are one of a select few groups of people who actually take pleasure from sitting in front of computers for long periods of time. Most people in other professions actively dislike their time at computers, and do their best to avoid it (so, I assume, they don't have problems taking breaks). At least for me, having external cues for taking breaks, and clear instructions on what to do with each break (stretch, go for a walk, close my eyes, look into a distance of preferably a few km and focus on faraway objects, etc...), is a must. So far, I've just been making up the breaks and the tools to get them as I go along, based on what looks to be low-specificity information found on the net (generic stuff a la ergonomics advice for office staff). This has led to all sorts of side effects: loss of attention as I get distracted if I walk around, breaks in flow with alarm clocks interrupting my thoughts, and people around me assuming I'm low on work due to the frequency of my walking around compared to everyone else. /preamble

    tl;dr
    Taking breaks is important.
    My internal break-taking system doesn't work, and ad-hoc ones have unwanted side effects.
    What do you do to make sure you take proper breaks?
    How do you avoid unwanted side-effects, such as getting distracted, interrupting flow, or giving your co-workers the impression you're spending a lot of time goofing off?


  • I want to be a programmer, work in corporate environment, earn well, learn fast and eventually become a great programmer [on hold]

    - by Shin San
    I'll try to keep this simple: I'm 29, have been dabbling with computers for the past 10 years, had entry-level jobs in tech support for different apps, have been fixing computers for a while and now want to specialize in something. I'm not a 100% stranger to programming but haven't gone past if/then/else with anything. A bit of JavaScript, PHP, Python and currently checking out the "SELECT" statement in SQL :)) I'm curious about programming, I enjoy it and I'm thinking of making a living out of it. So, while I'm at it, why not earn a bit more than the average Joe? That's why I'm checking what the best solution, the best learning path and the most useful languages are, considering: a) how easily/fast you can find a job by knowing it, b) how much I would be able to earn, and c) how fast I can learn it. By reading 10-20 articles online I've come up with an example, but I'm here for some expert advice. Example (ratings from the a) and b) points of view): #1 SQL; #2 Java; #3 HTML (please don't start the markup language debate); #4 JavaScript. From these ratings, I'd say a good way to go is to learn HTML/CSS/(JavaScript or PHP) for the web part of apps, some SQL/MySQL/whateverSQL for holding data and loads of Java for the program itself. Please let me know if this is a good idea and, if so, what the order for learning all of the above should be. Otherwise, please let me know a better way and why it would be better. Many thanks for taking the time to read my question. Best wishes to you guys. Edit: if I think Java + SQL + HTML & JavaScript is the way to go, does the order I'm learning them in matter? Or can I try to learn them all at once?


  • Do support sites like stackoverflow upset the paid-support open source model?

    - by ajax81
    In order to stay relevant in the marketplace, I'm researching new business models for my software company. The open source model with paid support seems like a good fit for our product, but I have concerns about whether or not a paid-support model is viable in an era where top-notch help is readily available for free on sites like those in the StackExchange network. Case in point: I moved my employees to Ubuntu last year because I didn't want to pay for Win 7 licenses and new hardware (plus, the Mono platform was highly attractive). My staff had no Linux experience, but were able to achieve relative competency in about 120 days with the help of AskUbuntu, StackOverflow, and a few "For Dummies" books. We did employ an Ubuntu consultant for 7 days to provide training and support, but beyond that spent $0.00 on any kind of paid expertise. In regard to my due diligence, I ran a 3-month beta of the freemium-paid-support model with one of our smaller customers, and achieved mediocre results. I'd like to think it's because our software is so stable and easy to use that the customer didn't need much paid support, but I suspect that they circumvented the terms of our SLA in the same manner that we did with the move to Ubuntu. Does anyone out there have any thoughts, advice, or experience relevant to the move I'm considering? What worked, what didn't, etc.? Thanks in advance!


  • 25 Favorite JCP Award Memories

    - by heathervc
    As we celebrated the 10th Annual JCP Awards and Party at JavaOne last week, we asked attendees to share their favorite memories. Add yours to the retrospective list below...

        The 10th Award party will be the best :-)
        I won a DSLR camera at the 2011 JCP party and have taken many awesome photos of my family with it ever since! Thanks JCP!
        Remembering the password to get in!
        It was very fascinating talking to all those JUG Members at last year's (2011) party and hearing about their hopes & expectations. Especially from members of SouJava and LJC.
        Hanging out with my friends
        Best food, and one of my colleagues won the raffle prize.
        My friend Brian won a jacket 3 years ago and my friend Craig won a camera last year.
        2010, when I took home 2 awards on behalf of JSRs I'm on.
        When Patrick & Scott sang 'Light My Fire'!
        Catch up with friends!
        Being able to attend my first JCP party and joining the JCP community.
        Of course it's when some people won the award (SouJava and LJC)!
        Meeting Crazy Bob!
        This is my first.
        Mike to be JCP Member of the Year in 2011.
        When SouJava and London Java Community won the Member of the Year award!
        JBoss making CDI Everything!
        When SouJava won the JCP Member of the Year award.
        I love feeling like it is the Oscars!
        First Party!
        Winning JCP Member of the Year last year.
        The year I was running for it (JCP Award).
        2009 music and hostess.
        Obscured on legal advice.


  • Precision loss when transforming from cartesian to isometric

    - by Justin Skiles
    My goal is to display a tile map in isometric projection. This tile map has 25 tiles across and 25 tiles down. Each tile is 32x32. See below for how I'm accomplishing this.

    World Space

    World Space to Screen Space Rotation (45 degrees)
    Using a 2D rotation matrix, I use the following:

        double rotation = Math.PI / 4;
        double rotatedX = (tileWorldX * Math.Cos(rotation)) - (tileWorldY * Math.Sin(rotation));
        double rotatedY = (tileWorldX * Math.Sin(rotation)) + (tileWorldY * Math.Cos(rotation));

    World Space to Screen Space Scale (Y-axis reduced by 50%)
    Here I simply scale down the Y value by a factor of 0.5.

    Problem
    And it works, kind of. There are some tiny 1px-2px gaps between some of the tiles when rendering. I think there's some precision loss somewhere, or I'm not understanding how to get these tiles to fit together perfectly. I'm not truncating or converting my values to non-decimal types until I absolutely have to (when I pass them to the render method, which only takes integers). I'm not sure how to guarantee pixel-perfect rendering precision when I'm rotating and scaling at a higher level of precision. Any advice? Do I need to supply more information?
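    Not the approach used in the question, but for comparison: a common way to avoid rounding gaps entirely is to draw pre-rendered 2:1 isometric diamond tiles and compute each tile's screen position from its integer tile indices, so neighbouring tiles are always exactly half a tile apart and no floating-point rotation is involved. A sketch in C# (the 64x32 diamond size and all names are illustrative, not taken from the question):

        using System.Drawing;   // Point; any integer 2D point type works

        static class IsoProjection
        {
            const int TileWidth = 64;    // width of the pre-drawn isometric diamond
            const int TileHeight = 32;   // height of the pre-drawn isometric diamond

            // tileX, tileY are whole-tile indices into the 25x25 map
            public static Point TileToScreen(int tileX, int tileY, int originX, int originY)
            {
                // Integer math only: every neighbour lands exactly (TileWidth/2, TileHeight/2) away,
                // so per-tile rounding can never open a 1-2px seam.
                int screenX = originX + (tileX - tileY) * (TileWidth / 2);
                int screenY = originY + (tileX + tileY) * (TileHeight / 2);
                return new Point(screenX, screenY);
            }
        }

    Whether this fits depends on whether the tile art can be authored (or pre-rendered) as diamonds; with the runtime rotate-and-scale approach, some rounding mismatch between independently rounded neighbouring tiles is hard to avoid.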


  • I have deleted Python files in /usr/bin and can't reinstall them

    - by Plonkaa
    I am a novice at Ubuntu and unfortunately I have deleted 3 files in the /usr/bin folder: python2.7, python and python2.6. Now my Update Manager won't work and when I type python into a GNOME terminal it says that it is no longer there. Please help me; I've tried loads of different things but it just won't work. The closest I got was the following: I typed in sudo apt-get -f install and I thought I had fixed it, but then I got an error message:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages were automatically installed and are no longer required:
          gir1.2-folks-0.6 gir1.2-polkit-1.0 libcogl5 mutter-common gir1.2-json-1.0 libcaribou0
          gir1.2-accountsservice-1.0 gir1.2-clutter-1.0 gir1.2-gkbd-3.0 gir1.2-networkmanager-1.0
          caribou libcogl-common libmutter0 gir1.2-mutter-3.0 gjs gir1.2-caribou-1.0 libclutter-1.0-0
          gir1.2-telepathylogger-0.2 libclutter-1.0-common cups-pk-helper gir1.2-upowerglib-1.0
          gir1.2-cogl-1.0 libmozjs185-1.0 gir1.2-telepathyglib-0.12 gir1.2-gee-1.0 libgjs0c
          gnome-shell-common
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          ubuntu-sso-client
        The following packages will be upgraded:
          ubuntu-sso-client
        1 upgraded, 0 newly installed, 0 to remove and 35 not upgraded.
        2 not fully installed or removed.
        Need to get 0 B/57.7 kB of archives.
        After this operation, 16.4 kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up python-minimal (2.7.2-7ubuntu2) ...
        /var/lib/dpkg/info/python-minimal.postinst: 4: python2.7: not found
        dpkg: error processing python-minimal (--configure):
         subprocess installed post-installation script returned error exit status 127
        Errors were encountered while processing:
         python-minimal
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any advice is appreciated!
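    The error itself points at the problem: python-minimal's post-install script calls /usr/bin/python2.7, which is one of the deleted files. A sketch of the usual recovery, not taken from the question and assuming apt and the network still work, is to force-reinstall the packages that own the deleted binaries before letting apt finish configuring everything else:

        sudo apt-get install --reinstall python2.7-minimal python2.7   # restores /usr/bin/python2.7
        sudo apt-get install --reinstall python-minimal python         # restores /usr/bin/python
        sudo apt-get -f install                                        # lets the pending postinst scripts finish

    The package names follow the usual Ubuntu layout (python2.7-minimal ships /usr/bin/python2.7, python-minimal ships /usr/bin/python); running dpkg -S /usr/bin/python2.7 will confirm the owner on the system in question.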


  • How do I get my graphic card to work properly?

    - by Lucas
    I've been having some problems with my graphics card for a while now, but I'd had enough when I didn't get Oil Rush to work on my HP Pavilion g6. The system has suggested hardware drivers for me, but the first time I installed them they pretty much fucked up the graphics. After some time I managed to get the computer to work properly (I thought) again. When the game didn't work I tried the hardware drivers for the graphics card anyway. First of all, there were two possible choices instead of one, as there were the last time I installed the drivers (when it didn't work out so well). The choices are: "ATI/AMD proprietary video drivers FGLRX (update for edition)" and "Proprietary FGLRX video drivers for ATI/AMD". I realized the drivers are probably pretty much the same, so I tried the first one. But this didn't work. Instead I was asked to "Look into /usr/var/log/jockey.log", which didn't help me much. So I chose the other one instead, which was installed, and after reboot there were some changes. First of all there was a lot more detail in Unity that wasn't there before, and some shortcut keys are now working that didn't before (like Ctrl + T and the Prt Sc button). But overall everything doesn't work as it used to. For instance, when you browse between the workspaces it doesn't look the same. To get to the point: it doesn't work well right now, even if some things got better, and now Oil Rush (as I mentioned in the beginning) will not even start. So, can someone give me any advice with this? I'm stuck. I can't manage to see what's wrong right now. Any help? My graphics card is an AMD Radeon HD 6470M.


  • Delphi Client-Server Application using Firebird 2.5 error

    - by Japie Bosman
    I have got a lengthy question to ask. First of all, I'm still very new when it comes to Delphi programming and my experience has been mostly developing small single-user database applications using ADO and an Access database. I need to make the transition now to a client-server application and this is where the problem starts. I decided to use Firebird 2.5 embedded as my database, as it is open source, it can be used with the InterBase components in Delphi, and multiple clients can access the database simultaneously. So I followed the InterBase tutorial in Delphi. I managed to connect the client to the server and see the data in the example (while both were running on my PC), but when I tried to move the client to another PC, keeping the server on mine, and ran it to see if I could connect to the server, it gave me the following error:

        Exception EIdSocketError in module clientDemo.exe at 0029DCAC.
        Socket Error # 10061 Connection refused.

    I understand that this might be because the host is defined as localhost in the client. But here is my first question. In the TSQLConnection you can set the hostname under Driver-Hostname. The thing I want to know is how you do this at run time, as I cannot get at the property when I try to make an edit box to allow the user to enter the value and then set it via code, for example:

        SQLConnection1.Driver.Hostname := edtHost.Text;

    The thing is, there is no such property to set, so how do you set the hostname at run time? I'm using Delphi XE2. There are still a lot of questions to come, especially when it comes to deployment, but I will take this piece by piece and I appreciate the advice.


  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should be able to perform well on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting, I found myself in trouble: sprite size is limited (usually to 64 pixels). I'm calculating the size using this formula

        gl_PointSize = in_point_size * some_factor / distance_to_camera

    to make particle sizes inversely proportional to the distance to the camera. But at some point, when the camera is close enough, the size limitation kicks in and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative? I was thinking of manually generating a billboard quad for each particle. Now, I have some questions about that approach. I guess the minimum geometry data would be four vertices per particle and an index array to make quads from these vertices (with GL_TRIANGLE_STRIP). Additionally, for each vertex I need a color and a texture coordinate. I would put all that in an interleaved vertex array. But as you can see, there is much redundancy: all vertices of the same particle share the same color value, and the four texture coordinates are the same for all particles. Because of how glDrawArrays/Elements works, I see no way to optimise this. Do you know of a better approach for organising per-particle data? Should I use buffers or vertex arrays, or is there no difference because each time I have to update all particles' data? About the particle simulation... where should I do it? On the CPU or in the vertex processor? Something tells me that a mobile CPU would do it faster than its vertex unit (at least today, in 2012 :). So, any advice on how to make a simple and efficient particle system without the particle size limitation, for a mobile device, would be appreciated. (Animation of the camera passing through the particles should look realistic.)


  • Finishing an iteration early

    - by f1dave
    I'd like some input on this on those working with agile methodologies... A current project is finding that development on our planned user stories is finishing some time before the end of the iteration, and that the testing effort and business acceptance is what's actually dragging us out longer towards the end. This means that the devs in question have spare time, and they're essentially going out to the iteration+1 backlog and starting work on cards there before our current iteration cards are 'done'. As iteration manager, I want to put a stop to this - I want a more team-orientated approach where the group takes ownership of getting all the cards done, as opposed to "Well, dev's done so what do I dev next?" The problem I face is convincing the team of this. On one hand, I understand why the devs don't want to test the code they've written (there are unit tests they write of course, but the manual testing to be done could be influenced by their bias). The team sees working ahead as making our next iterations easier, because a lot of the work is done before we start. I see this as screwing with the whole system of planning/actuals - but it's difficult to convince the team as to why this matters. What advice can you guys and girls give? How do we stop devs reaching ahead? What should they be doing instead? How much of a problem is this in the scheme of things, if things are still getting done?


  • C++ Iterator lifetime and detecting invalidation

    - by DK.
    Based on what's considered idiomatic in C++11:

        should an iterator into a custom container survive the container itself being destroyed?
        should it be possible to detect when an iterator becomes invalidated?
        are the above conditional on "debug builds" in practice?

    Details: I've recently been brushing up on my C++ and learning my way around C++11. As part of that, I've been writing an idiomatic wrapper around the uriparser library. Part of this is wrapping the linked list representation of parsed path components. I'm looking for advice on what's idiomatic for containers. One thing that worries me, coming most recently from garbage-collected languages, is ensuring that random objects don't just go disappearing on users if they make a mistake regarding lifetimes. To account for this, both the PathList container and its iterators keep a shared_ptr to the actual internal state object. This ensures that as long as anything pointing into that data exists, so does the data. However, looking at the STL (and lots of searching), it doesn't look like C++ containers guarantee this. I have this horrible suspicion that the expectation is to just let containers be destroyed, invalidating any iterators along with them. std::vector certainly seems to let iterators get invalidated and still (incorrectly) function. What I want to know is: what is expected from "good"/idiomatic C++11 code? Given the shiny new smart pointers, it seems kind of strange that the STL allows you to easily blow your legs off by accidentally leaking an iterator. Is using shared_ptr to the backing data an unnecessary inefficiency, a good idea for debugging, or something expected that the STL just doesn't do? (I'm hoping that grounding this to "idiomatic C++11" avoids charges of subjectivity...)


  • New Responsibilities

    - by Robert May
    With the start of the new year, I’m starting new responsibilities at Veracity. One responsibility that is staying constant is my love and evangelism of Agile.  In fact, I’ll be spending more time ensuring that all Veracity teams are performing agile, Scrum specifically, in a consistent manner so that all of our clients and consultants have a similar experience. Imagine, if you will, working for a consulting company on a project.  On that project, the project management style is Waterfall in iterations.  Now you move to another project and in that project, you’re doing real Scrum, but in both cases, you were told that what you were doing was Scrum.  Rather confusing.  I’ve found, however, that this happens on many teams and many projects.  Most companies simply aren’t disciplined enough to do Scrum.  Some think that being Agile means not being disciplined.  The opposite is true! So, my goals for Veracity are to make sure that all of our consultants have a consistent feel for Scrum and what it is and how it works and then to make sure that on the projects they’re assigned to, Scrum is appropriately applied for their situation.  This will help keep them happier, but also make switching to other projects easier and more consistent.  If we aren’t doing the project management on the project, we’ll help them know what good Agile practices should look like so that they can give good advice to the client, and so that if they move to another project, they have a consistent feel. I’m really looking forward to these new duties. Technorati Tags: Agile,Scrum


  • Overwhelmed by complex C#/ASP.NET project in Visual Studio 2008

    - by Darren Cook
    I have been hired as a junior programmer to work on projects that extend existing functionality in a very large, complex solution. The code base consists of C#, ASP.NET, jQuery, javascript, html and xml. I have some knowledge of all these in addition to fair knowledge of object-oriented programming and its fundamental concepts of inheritance, abstraction, polymorphism and encapsulation. I can follow code up through its base classes, interfaces, abstract classes and understand a large part of the code that I read while doing this. However, this solution is so humongous and so many things get tied together whenever I navigate through the code that I feel absolutely overwhelmed. I often find myself unable to fully follow everything that is going on with objects being serialized, large amounts of C# and javascript operating on the same pages and methods being called from template files that consist mainly of markup. I love learning about code, but trying to deal with this really stresses me out. Additionally, I do know that a significant amount of unit testing has been done but I know nothing about unit testing or how to utilize it. Any advice anyone could offer me regarding dealing with a large code base while using Visual Studio 2008 would be greatly appreciated. Are there tools that I can use to help get a handle on what is going on? Perhaps there are things even in Visual Studio that I am not aware of. How can I follow the code to low level functionality in order to get a better grasp of what is going on at a high level?


  • How much knowledge do I need to begin a project in Django

    - by Smock
    I started learning Django about a month ago. I have intermediate C and Java programming experience. I read the first 8 chapters of the Django Book. Afterwards, I picked up Practical Django Projects by James Bennett and did the first two projects: a CMS and a web blog. However, I started getting lost when he got to the generic views part. I know that's important, but I'm not sure how important it is when trying to implement a project. Anyway, I have a project in mind that I'd like to start; however, I'm nervous as to where to begin. I'm overwhelmed by the number of things that I'd like my project to do but have little or no knowledge of how to do them, e.g. how do I use CSS and JavaScript in my project? Moreover, I am aware that some Django packages exist to ease development, but I don't know whether I should use them or not. Anyway, I apologize for my lengthy message. I just want some advice/encouragement. I have a project in mind, but do you think I need to read more materials/tutorials, or is it smart to just start working on my project based on the minimal knowledge I've gained from those books? Any information that can be provided is much appreciated. I really want to get good at this but I just need some direction.


  • Learning Programming during the job?

    - by Hossein
    Introduction: I have read and heard advice about learning programming by accepting programming projects. I need real assistance to understand this, because:

    Problem: Although it would seem to me that one would gain much more technical knowledge by doing real-world projects, if one doesn't know much about a technology, it adds much more risk to the actual delivery of the final product! Even the smallest of real-world projects could be too much for a newbie. There is a contradiction here: you need to know the job to do it, and yet it's recommended to do the job in order to learn it!

    Question: Any personal experiences with this would be very welcome, describing:

        How new was the subject to you? Did you not have a clue at all, or did you have experience with similar technologies?
        Was it a solo project or were you in a team? If a team, did others help you with learning it?
        Did it work as expected? Did you deliver on time?
        Do you recommend this approach to others as well?


  • How to view/mount other partitions on your hard drive

    - by Preston Zacharias
    Recently I installed Ubuntu 12.04 Beta 2 on a USB flash drive and decided to install it on an old external HDD which I have taken out of its casing and successfully mounted in my desktop computer. There is no other operating system besides the newly installed Ubuntu. However, there is about 500 GB of data on the drive. This is why I used partitioning software on my Windows 7 netbook to partition the hard drive, setting aside 1 TB for files, 350 GB of space for Linux and the remaining 650 GB for Vista, which I plan on installing soon. But this is where the problem sets in... when installing Ubuntu it does not recognize that the drive is partitioned at all; it's just one big open block of space... so I used the installer's built-in partitioning feature to set aside 300 GB for the main Ubuntu install and 50 GB for swap space. I set both of these partitions to be created at the "end" so that it wouldn't delete or write over my data. And this is where I am really lost; when booting into Ubuntu I am able to use it perfectly fine, got on the internet, etc... but I have NO CLUE as to how I can view the files that were previously on the drive (all of my data that I had prior to the install). How can I mount/view the other partition so that I can have access to my data? Thank you ahead of time! I REALLY appreciate any help or advice! ~Preston


  • Prepared statement alternatives for this middle-man program?

    - by user2813274
    I have a program that uses a prepared statement to connect and write to a database, and it works nicely; I now need to create a middle-man program to insert between this program and the database. This middle-man program will actually write to multiple databases and handle any errors and connection issues. I would like advice on how to replicate the prepared statements so as to have minimal impact on the existing program, but I am not sure where to start. I have thought about creating a "SQL statement class" that mimics the prepared statement, only that seems silly. The existing program is in Java, although it's going to be networked anyway, so I would be open to writing the middle-man in just about anything that would make sense. The databases are currently MySQL, although I would like to be open to changing the database type in the future. My main question is: what should the interface for this program look like, and does doing this even make sense? A distributed DB would be the ideal solution, but they seem overly complex and expensive for my needs. I am hoping to replicate the main functionality of a distributed DB via this middle-man. I am not too familiar with SQL-based servers distributing data (or databases in general...) - perhaps I am fighting an uphill battle by trying to solve it via programming, but I would like to make an attempt at least.

