Search Results

Search found 4940 results on 198 pages for 'understanding'.

Page 133/198 | < Previous Page | 129 130 131 132 133 134 135 136 137 138 139 140  | Next Page >

  • Complete Beginner to Game Programming and Unreal Engine 4, Looking For Advice [on hold]

    - by onemic
    I am currently a 2nd-year programming student (just finished my first year, so I will be starting my second year in September) and have mainly learned C and C++ in my classes. In terms of what I know of C++: general inheritance, polymorphism, operator overloading, iterators, a little bit about templates (only class and function templates), etc., but not the more advanced topics like linked lists and other sequential containers (containers in general, I guess), enumerations, most of the standard library (other than strings and vectors), and probably a bunch of other stuff I don't even know about yet.

    I subscribed to Unreal Engine 4 as I was very intrigued by their Unreal Tournament announcement earlier this month, especially after hearing that UE4 is going completely C++. Of course my end goal in doing this programming program is to eventually go into game/graphics programming. Since it's my summer off, I thought what better way than to actually apply some of my skills to a personal project, so I actually have a firmer understanding of C++ past what my professors tell me. My questions are these:

    1. What would be the best way to start off making a small personal game in UE4 as a project for the summer? What should I be aiming for, especially as someone who is still learning C++?
    2. Should I focus on making a simple 2D game rather than a 3D one to get started? Seeing the Flappy Chicken showcase intrigued me, because before that I thought the UE engine was pretty much pigeonholed into being for FPS games.
    3. What should my expectations be going into UE4 and a game engine for the first time? (UE4 will be my first foray into making a game.) What can I expect to gain from making things in UE4, both in terms of making games and in terms of further fleshing out my knowledge of C++?
    4. Would you recommend I start off 100% using C++ for scripting, or using the visual blueprints?
    5. Since I'm not a designer, how would I be able to add objects and designs to my game?
    6. For someone at my level, is retaining the UE4 subscription worth it, or is it better to cancel and resubscribe when I learn enough about UE4 and C++?
    7. Lastly, is there anything to be gained in terms of knowledge/insight by looking at the source code for UE4? I opened it in VS2013, but noticed that most of the files were C# files and not .cpp files.

    Thanks in advance for taking the time to answer.

    Read the article

  • Still prompted for a password after adding SSH public key to a server

    - by Nathan Arthur
    I'm attempting to set up a git repository on my Dreamhost web server by following the "Setup: For the Impatient" instructions here. I'm having difficulty setting up public key access to the server. After successfully creating my public key, I ran the following command:

        cat ~/.ssh/[MY KEY].pub | ssh [USER]@[MACHINE] "mkdir ~/.ssh; cat >> ~/.ssh/authorized_keys"

    ...replacing the appropriate placeholders with the correct values. Everything seemed to go through fine. The server asked for my password and, as far as I can tell, executed the command. There is indeed a ~/.ssh/authorized_keys file on the server.

    The problem: when I try to SSH into the server, it still asks for my password. My understanding is that it shouldn't be asking for my password anymore. What am I missing?

    EDIT: ssh -v log:

        Macbook:~ michaeleckert$ ssh -v [USER]@[SERVER URL]
        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: /etc/ssh_config line 53: Applying options for *
        debug1: Connecting to [SERVER URL] [[SERVER IP]] port 22.
        debug1: Connection established.
        debug1: identity file /Users/michaeleckert/.ssh/id_rsa type -1
        debug1: identity file /Users/michaeleckert/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/michaeleckert/.ssh/id_dsa type -1
        debug1: identity file /Users/michaeleckert/.ssh/id_dsa-cert type -1
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5p1 Debian-6+squeeze3
        debug1: match: OpenSSH_5.5p1 Debian-6+squeeze3 pat OpenSSH_5*
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA [STRING OF NUMBERS AND LETTERS SEPARATED BY SEMI-COLONS]
        debug1: Host '[SERVER URL]' is known and matches the RSA host key.
        debug1: Found key in /Users/michaeleckert/.ssh/known_hosts:2
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/michaeleckert/.ssh/id_rsa
        debug1: Trying private key: /Users/michaeleckert/.ssh/id_dsa
        debug1: Next authentication method: password
        [USER]@[SERVER URL]'s password:
        debug1: Authentication succeeded (password).
        Authenticated to [SERVER URL] ([[SERVER IP]]:22).
        debug1: channel 0: new [client-session]
        debug1: Requesting no-more-sessions@openssh.com
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: Sending env LANG = en_US.UTF-8
        Welcome to [SERVER URL]
        Any malicious and/or unauthorized activity is strictly forbidden.
        All activity may be logged by DreamHost Web Hosting.
        Last login: Sun Nov 3 12:04:21 2013 from [MY IP]
        [[SERVER NAME]]$
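    The -v log above shows the client never actually offering the new key: it only tries the default id_rsa/id_dsa paths and finds neither ("type -1"), then falls back to password authentication. Two things worth checking, sketched below under the assumption that the key pair has a non-default file name (the [MY KEY] placeholder is reused from the question):

        # On the client: point ssh at the non-default key explicitly...
        ssh -i ~/.ssh/[MY KEY] [USER]@[SERVER URL]

        # ...or permanently, via an entry in ~/.ssh/config:
        #   Host [SERVER URL]
        #       IdentityFile ~/.ssh/[MY KEY]

        # On the server: sshd silently ignores authorized_keys when the
        # permissions are too open.
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys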

    Read the article

  • How to be successful at BDD Specifications Workshops?

    - by sigo
    Today we tried to introduce BDD into our software development process by having a specification workshop. For this workshop we had 2 developers, 1 tester and 1 business analyst. The workshop lasted 1h30 and by the end of it we had managed to figure out some BDD scenarios for our new feature. We tried to focus on finding the scenarios that we could miss, and the difficult ones.

    At the end of the workshop some people were actually unhappy with it. One developer felt he had wasted his time, as he was used to being given the scenarios directly by the business analyst and reviewing them with her. The business analyst didn't feel confident in our scenario coverage (she had a feeling that we could have missed other important stuff), but more importantly felt that the workshop was also a waste of time, as she could have figured out all these scenarios by herself, in a shorter period of time.

    So my question is how that kind of workshop can actually work. In theory, given you have a new feature to develop, you put the three 'amigos' (dev/tester/BA) in the same room so that they can collaborate on writing the different requirements for the new feature using examples. I can see all the benefits of that, especially in terms of knowledge sharing and a common product/end goal/definition-of-done vision.

    But in practice, we still think it is more cost effective to first have the BA work on her own on the examples, and only then have the scenarios reviewed/reworked by the three 'amigos'. By having the BA work on her own, we actually feel more confident that we are less likely to miss stuff, and we still get to review the scenarios afterwards to double check. We don't think that simple brainstorming/deliberate discovery is actually enough to seriously cover all the requirements for a feature. The business analyst is actually the best person for that kind of stuff; all we then do is review what she wrote and see if we have a common understanding (which could lead to rewriting some of her scenarios, or adding new ones she might have missed).

    This workshop lasted 1h30, and by the end of it we didn't feel confident enough about what we did... sure, we could have spent more time on it, but honestly most people get exhausted after 1h30 of brainstorming. So how can you get this to work effectively in practice?

    Read the article

  • Keep coding the wrong way to remain consistent? [closed]

    - by bwalk2895
    Possible Duplicate: Code maintenance: keeping a bad pattern when extending new code for being consistent, or not?

    To keep things simple, let's say I am responsible for maintaining two applications, AwesomeApp and BadApp (I am responsible for more, and no, those are not their actual names). AwesomeApp is a greenfield project I have been working on with other members of my team. It was coded using all the fancy buzzwords: multilayer, SOA, SOLID, TDD, and so on. It represents the direction we want to go as a team. BadApp is an application that has been around for a long time. The architecture suffers from many sins: everything is tightly coupled together, it is not uncommon to get a circular dependency error from the compiler, it is almost impossible to unit test, large classes, duplicate code, and so on. We have a plan to rewrite the application following the standards established by AwesomeApp, but that won't happen for a while.

    I have to go into BadApp and fix a bug, but after spending months coding what I consider correctly, I really don't want to continue perpetuating bad coding practices. However, the way AwesomeApp is coded is vastly different from the way BadApp is coded. I fear implementing things the "correct" way would cause confusion for other developers who have to maintain the application.

    Question: is it better to keep coding the wrong way to remain consistent with the rest of the code in the application (knowing it will be replaced), or is it better to code the right way, with the understanding that it could cause confusion because it is so much different?

    To give you an example: there is a large class (1000+ lines) with several functions. One of the functions calculates a date based on an enumerated value. Currently the function handles all the various calculations. It relies on no other functionality within the class; it is self-contained. I want to break the function into smaller functions (at the very least), put them into their own classes hidden behind an interface (at the most), and use the factory pattern to instantiate the date classes. If I just broke it out into smaller functions within the class, it would follow the existing coding standard. The extra steps are to start following some of the SOLID principles.
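    For illustration only, a minimal Java-style sketch of the interface-plus-factory refactor described in the example above; every name here is hypothetical, since the post doesn't show the real code:

        import java.time.LocalDate;

        enum DateRule { END_OF_MONTH, NEXT_QUARTER } // stand-ins for the real enumerated values

        interface DateCalculator {
            LocalDate calculate(LocalDate from);
        }

        class EndOfMonthCalculator implements DateCalculator {
            public LocalDate calculate(LocalDate from) {
                return from.withDayOfMonth(from.lengthOfMonth());
            }
        }

        class DateCalculatorFactory {
            // The factory is the only place that knows which class handles which rule.
            static DateCalculator forRule(DateRule rule) {
                switch (rule) {
                    case END_OF_MONTH: return new EndOfMonthCalculator();
                    default: throw new IllegalArgumentException("No calculator for " + rule);
                }
            }
        }

        public class DateRefactorSketch {
            public static void main(String[] args) {
                DateCalculator calc = DateCalculatorFactory.forRule(DateRule.END_OF_MONTH);
                System.out.println(calc.calculate(LocalDate.of(2014, 2, 10))); // 2014-02-28
            }
        }

    Each calculation gets its own small, independently testable class, which is exactly the property the 1000-line class lacks.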

    Read the article

  • ATG Live Webcast Dec. 13th: EBS Future Directions: Deployment and System Administration

    - by Bill Sawyer
    This webcast provides an overview of the improvements to Oracle E-Business Suite deployment and system administration that are planned for the upcoming EBS 12.2 release. It is targeted at system administrators, DBAs, developers, and implementers.

    This webcast, led by Max Arderius, Manager, Applications Technology Group, compares existing deployment and system administration tools for EBS 12.0 and 12.1 with the upcoming functionality planned for EBS 12.2. This was a very popular session at OpenWorld 2012, and I am pleased to bring it to the ATG Live Webcast series. This session will cover:

    - Understanding the Oracle E-Business Suite 12.2 Architecture
    - Installing & Upgrading EBS 12.2
    - Online Patching in EBS 12.2
    - Cloning in EBS 12.2

    Date: Thursday, December 13, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenter: Max Arderius, Manager, Applications Technology Group

    Webcast Registration Link (preregistration is optional but encouraged)

    To hear the audio feed:
    Domestic Participant Dial-In Number: 877-697-8128
    International Participant Dial-In Number: 706-634-9568
    Additional International Dial-In Numbers Link
    Dial-In Passcode: 103194

    To see the presentation, the Direct Access Web Conference details are:
    Website URL: https://ouweb.webex.com
    Meeting Number: 593672805

    If you miss the webcast, or you have missed any webcast, don't worry: we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • Collaborative work (small team) - Best practices

    - by LEM01
    I'm currently working in a very small team of programmers (2-3) and I'm looking for advice/best practices on how to organise our work. We're all working on the same application using PHP, and today we're all kind of working in our own way.

    Today's situation:

    - Items that have to be worked on by each dev, 1/week. What has to be done is defined at a high functional level (e.g. build the search engine for this product).
    - Commit/merge our individual branches (git) every week before the next meeting.
    - No real dev rules, no code review.
    - No tests written (ouch).

    Problems faced:

    - Code quality issues: discovering someone else's code is sometimes tough (inlining, variable/function/class names, spaces, comments...).
    - Changes in already existing classes (impact on someone else's work).
    - Responsibility of each dev unclear: after getting someone else's code and discovering something messy, should I make the change? Should he make the change? How do we plan those things?

    What I'm looking for: basically, I'm looking into structuring the way we develop things in order to avoid frustration and improve overall quality.

    - How do you define coding standards (naming conventions, code rules...)?
    - Do you use any validation script to make sure code is valid before committing? (See the sketch at the end of this post.)
    - Do you think that defining an architect role in the team is needed: someone who would actually define what has to be developed during the next phase, by defining the interfaces or class descriptions that have to be written? (Does it make sense in such a small team?)

    Today we're losing time in understanding what others did or tried to do, and we're also losing time in discussions like "you should have done it that way! Why is this class doing that and not this? Shouldn't we have an embedded class rather than this set of data?". I'm looking for a work process, maybe with more defined responsibilities, in order to improve our performance. If you have experience, advice, best practices or anything else to share that we could benefit from, it will be much appreciated! Thanks a lot for your time!
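    On the validation-script question, a minimal sketch of a git pre-commit hook that refuses commits containing PHP syntax errors (the project is stated to be PHP; the hook path is git's standard location, and anything beyond php -l, such as a coding-standards checker, is left to the team):

        #!/bin/sh
        # .git/hooks/pre-commit (must be executable)
        # Lint every staged .php file; abort the commit on the first failure.
        for f in $(git diff --cached --name-only --diff-filter=ACM | grep '\.php$'); do
            php -l "$f" || exit 1
        done
        exit 0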

    Read the article

  • Frame rate on one of two machines running same code seems to be capped at 60 for no reason

    - by dennmat
    ISSUE

    I recently moved a project from my laptop to my desktop (machine info below). On my laptop the exact same code displays the fps (and ms/f) correctly. On my desktop it does not. What I mean by this is: on the laptop it will display 300 fps (for example), where on my desktop it will show only up to 60. If I add 100 objects to the game on the laptop, I'll see my frame rate drop accordingly; the same test on the desktop results in no change and the frames stay at 60. It takes a lot (~300) of entities before I'll see a frame drop on the desktop, and then it will descend. It seems as though its "theoretical" frame rate would be 400 or 500, but it will never actually get there, doing only 60 until there's too much to handle at 60.

    This 60-frame cap is coming from nowhere. I'm not doing any frame limiting myself. It seems like something external is limiting my loop iterations on the desktop, but for the last couple of days I've been scratching my head trying to figure out how to debug this.

    SETUPS

    Desktop: Visual Studio Express 2012, Windows 7 Ultimate 64-bit
    Laptop: Visual Studio Express 2010, Windows 7 Ultimate 64-bit
    The libraries (Allegro, Box2D) are the same versions on both setups.

    CODE

    Main loop:

        while(!abort) {
            frameTime = al_get_time();
            if (frameTime - lastTime >= 1.0) {
                lastFps = fps/(frameTime - lastTime);
                lastTime = frameTime;
                avgMspf = cumMspf/fps;
                cumMspf = 0.0;
                fps = 0;
            }
            /** DRAWING/UPDATE CODE **/
            fps++;
            cumMspf += al_get_time() - frameTime;
        }

    Note: there is no blocking code in the loop at any point.

    Where I'm at

    My understanding of al_get_time() is that it can return different resolutions depending on the system. However, the resolution is never worse than seconds, and the double is represented as [seconds].[finer-resolution]; seeing as I'm only checking for a whole second, al_get_time() shouldn't be responsible. My project settings and compiler options are the same, and I promise it's the same code on both machines. My googling really didn't help me much, and although technically it's not that big of a deal, I'd really like to figure this out, or perhaps have it explained, whichever comes first. Even just an idea of how to go about figuring out possible causes would help, because I'm out of ideas. Any help at all is greatly appreciated.
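    For what it's worth, a cap of exactly 60 on one machine usually points at vsync being forced on by that machine's GPU driver rather than at the code. Assuming Allegro 5 (the post uses al_get_time), a hedged sketch of explicitly requesting vsync off before the display is created; note the driver control panel can still override this:

        #include <allegro5/allegro.h>

        int main() {
            al_init();
            // ALLEGRO_VSYNC: 1 = force on, 2 = force off, 0 = driver default.
            // ALLEGRO_SUGGEST means "request, don't require".
            al_set_new_display_option(ALLEGRO_VSYNC, 2, ALLEGRO_SUGGEST);
            ALLEGRO_DISPLAY *display = al_create_display(800, 600);
            /* ... the game loop from the post goes here ... */
            al_destroy_display(display);
            return 0;
        }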

    Read the article

  • Help writing server script to ban IPs from a list

    - by Chev_603
    I have a VPS that I use as an OpenVPN and web server. For some reason, my Apache log files are filled with thousands of these hack attempts:

        "POST /xmlrpc.php HTTP/1.0" 404 395

    These attack attempts fill up 90% of my logs. I think it's a WordPress vulnerability they're looking for. Obviously they are not successful (I don't even have WordPress on my server), but it's annoying and probably resource consuming as well. I am trying to write a bash script that will do the following:

    1. Search the Apache logs and grab the offending IPs (even if they try it once),
    2. Sort them into a list with each unique IP on a separate line,
    3. And then block them using the iptables rules.

    I am a bash newb, and so far my script does everything except step 3. I can manually block the IPs, but that's tedious, and besides, this is Linux and it's perfectly capable of doing it for me. I also want the script to be customizable, so that I (or anyone else who wants to use it) can change the variables to suit whatever situation I/they may deal with in the future. Here is the script so far:

        #!/bin/bash
        ##IP LIST GENERATOR
        ##Author Chev Young
        ##Script to search Apache logs and list IPs based on custom filters
        ##
        ##Define our variables:
        DIRECT=~/Script  ##Location of script & where to put results/temp files
        LOGFILE=/var/log/apache2/access.log  ##Logfile to search for offenders
        TEMPLIST=xml_temp  ##Temporary file name
        IP_LIST=ipstoban  ##Name of results file
        FILTER1=xmlrpc  ##What are we looking for? (Requests we want to ban)

        cd $DIRECT
        if [ ! -f $TEMPLIST ]; then
            touch $TEMPLIST  ##Create temp file
        fi

        cat $LOGFILE | grep $FILTER1 >> $DIRECT/$TEMPLIST

        ## Only interested in the IPs, so:
        sed -e 's/\([0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+\).*$/\1/' -e t -e d $DIRECT/$TEMPLIST | sort | uniq > $DIRECT/$IP_LIST

        rm $TEMPLIST  ##Clean temp file

        echo "Done. Results located at $DIRECT/$IP_LIST"

    So I need help with the next part of the script, which should ban the IPs (incoming, and perhaps outgoing too) from the resulting $IP_LIST file. I don't care if it utilizes UFW or iptables directly, as long as it bans the IPs. I'd probably run it as a cron task. What I'm having trouble with is understanding how to use each line of the results file as a separate variable, to do something like:

        ufw deny $IP1 $IP2 $IP3, etc

    Any ideas? Thanks.
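    A minimal sketch of the missing step 3, reading the results file line by line (this assumes UFW is installed and the script runs as root; the iptables variant is shown as a comment):

        while read -r ip; do
            ufw deny from "$ip"
            # or, with iptables directly:
            # iptables -A INPUT -s "$ip" -j DROP
        done < "$DIRECT/$IP_LIST"

    The while read loop makes each line of the file the $ip variable in turn, and avoids the word-splitting pitfalls of looping over cat output.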

    Read the article

  • Calculating a circle or sphere along a vector

    - by Sparky
    Updated this post and the one at Math SE (http://math.stackexchange.com/questions/127866/calculating-a-circle-or-sphere-along-a-vector); hope this makes more sense. I previously posted a question (about half an hour ago) involving computations along line segments, but the question and discussion were really off track and not what I was trying to get at.

    I am trying to work with an FPS engine I am attempting to build in Java. The problem I am encountering is with hitboxing. I am trying to calculate whether or not a "shot" is valid. I am working with several approaches, and any insight would be helpful. I am not a native speaker of English nor skilled in math, so please bear with me.

    Player position is at P0 = (x0,y0,z0), enemy is at P1 = (x1,y1,z1). I can of course compute the distance between them easily. The target needs a "hitbox" object, which is basically a square/rectangle/mesh either in front of, in, or behind them. Here are the solutions I am considering:

    1. I have ruled this out... doesn't seem practical. [Place a "hitbox" a small distance in front of the target. Then I would be able to find the distance between the player and the hitbox, and the hitbox and the target. It is my understanding that you can compute a circle with this information, and I could simply consider any shot within that circle a "hit". However this seems not to be an optimal solution, because it requires you to perform a lot of calculations and is not fully accurate.]
    2. Input, please! Place the hitbox "in" the player. This seems like the better solution. In this case what I need is a way to calculate a circle along the vector, at whatever position I wish (in this case, the distance between the two objects). Then I can pick some radius that encompasses the whole player, and count anything within this area a "hit".

    I am open to your suggestions. I'm trying to do this on paper and have no familiarity with game engines. If any software folk out there think I'm doing this the hard way, I'm open to help! Also, anyone with JOGL/LWJGL experience, please chime in. Is this making sense?
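    Reading approach 2 geometrically: a shot fired from P0 along a direction d hits if the shot line passes within the chosen radius of P1. A small self-contained Java sketch of that test (names are illustrative; the engine's own vector types would replace the arrays):

        public class HitTest {
            // True if the ray from p0 along dir passes within radius of centre p1.
            static boolean hits(double[] p0, double[] dir, double[] p1, double radius) {
                // Vector from shooter to target centre.
                double vx = p1[0] - p0[0], vy = p1[1] - p0[1], vz = p1[2] - p0[2];
                double len = Math.sqrt(dir[0]*dir[0] + dir[1]*dir[1] + dir[2]*dir[2]);
                // t = distance along the (normalised) shot direction to the
                // point on the line closest to the target centre.
                double t = (vx*dir[0] + vy*dir[1] + vz*dir[2]) / len;
                if (t < 0) return false; // target is behind the shooter
                // Closest point on the shot line to the target centre.
                double cx = p0[0] + t*dir[0]/len;
                double cy = p0[1] + t*dir[1]/len;
                double cz = p0[2] + t*dir[2]/len;
                double dx = p1[0] - cx, dy = p1[1] - cy, dz = p1[2] - cz;
                return dx*dx + dy*dy + dz*dz <= radius*radius;
            }

            public static void main(String[] args) {
                double[] p0 = {0, 0, 0}, dir = {1, 0, 0}, p1 = {10, 0.5, 0};
                System.out.println(hits(p0, dir, p1, 1.0)); // true: passes 0.5 from the centre
            }
        }

    The radius is the "circle that encompasses the whole player" from the post; no separate hitbox object in front of the target is needed.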

    Read the article

  • The type of programmer I want to be [closed]

    - by Aventinus_
    I'm an undergraduate software engineering student, although I've decided that pure programming is what I want to do for the rest of my life. The thing is that programming is a vast field, and although most of its aspects are extremely interesting, sooner or later I'll have to choose one (?) to focus on. I have several ideas for small projects I'd like to develop this summer, having in mind that this will gain me some experience and, in the best scenario, some cash. But the most important reason I'd like to develop something close to "professional" is to give myself direction on what I want to do as a programmer.

    One path is that of the web programmer. I enjoy PHP and MySQL, as well as HTML and CSS, although I don't really like ASP.NET. I can see myself writing web apps using the above technologies, as well as XML and Javascript. I also have a neat idea for a Facebook app.

    The other path is that of the desktop programmer. This is a little more complicated, because I really, really enjoy high-level languages such as Java and Python, but not the low-level ones, such as C. I have used both Linux and Windows for the last 6 years and I like their latest DEs (meaning GNOME Shell and Metro). I can see myself writing desktop applications for both OSs, as long as it means high-level programming. Ideally, I'd like to be able to help the development of GNOME.

    The last path that interests me is that of the smartphone programmer. I have created some sample applications on Android, and thanks to Java I found it quite an interesting experience. I can also see myself as an independent smartphone developer.

    These 3 paths seem equally interesting at the moment, due to the shallowness of my experience, I guess. I know that I should spend time with all of them and then choose the right one for me, but I'd like to know what the pros and cons are, in terms of learning curve, fun, job finding, and of course financial rewards, with each of these paths. I have a fair or basic understanding of the languages/technologies I described earlier, and this question will help me choose where to focus, at least for now.

    Read the article

  • I'm graduating with a Computer Science degree but I don't feel like I know how to program.

    - by wp123
    I'm graduating with a Computer Science degree, but I see websites like Stack Overflow and search engines like Google and don't know where I'd even begin to write something like that. During one summer I did have the opportunity to work as an iPhone developer, but I felt like I was mostly gluing together libraries that other people had written, with little understanding of the mechanics happening beneath the hood.

    I'm trying to improve my knowledge by studying algorithms, but it is a long and painful process. I find algorithms difficult, and at the rate I am learning, a decade will have passed before I master the material in the book. Given my current situation, I've spent a month looking for work, but my skills (C, Python, Objective-C) are relatively shallow and are not so desirable in the local market, where C#, Java, and web development are much higher in demand. That is not to say that C and Python opportunities do not exist, but they tend to demand 3+ years of experience I do not have. My GPA is OK (3.0), but it's not high enough to apply to large companies like IBM, or to return for graduate studies.

    Basically, I'm graduating with a Computer Science degree, but I don't feel like I've learned how to program. I thought that joining a company and programming full-time would give me a chance to develop my skills and learn from those more experienced than myself, but I'm struggling to find work and am starting to get really frustrated. I am going to cast my net wider and look beyond the city I've grown up in, but what have other people in a similar situation tried to do? I've worked hard, but don't have the confidence to go out on my own and write my own app (that is, become an indie developer in the iPhone app market). If nothing turns up, I will need to consider upgrading and learning more popular skills, or try something marginally related like IT, but given all the effort I've put in, that feels like copping out.

    EDIT: Thank you for all the advice. I think I was premature because of unrealistic expectations, but the comments have given me a dose of reality. I will persevere and continue to code. I have a project in mind; although it is well beyond my current capabilities, it will challenge me to hone my craft and prove my worth to myself (and potential employers). Had I known there was a career overflow I would have posted there instead. Thanks again!

    Read the article

  • Why is Java the lingua franca at so many institutions?

    - by Billy ONeal
    EDIT: This question at first seems to be bashing Java, and I guess at this point it is a bit. However, the bigger point I am trying to make is: why is any one single language chosen as the one end-all-be-all solution to all problems? Java happens to be the one that's used, so that's the one I had to beat on here, but I'm not intentionally ripping Java a new one :)

    I don't like Java in most academic settings. I'm not saying the language itself is bad; it has several extremely desirable aspects, most importantly the ability to run without recompilation on almost any platform. Nothing wrong with using the language for Your Next App ^TM. (Not something I would personally do, but that's more because I have less experience with it, rather than because its design is poor.)

    I think it is a waste that high-level CS courses are taught using Java as a language. Too many of my co-students cannot program worth a damn, because they don't know how to work in a non-garbage-collected world. They don't fundamentally understand the machines they are programming for. When someone can work outside of a garbage-collected world, they can work inside of one, but not vice versa. GC is a tool, not a crutch. But the way it is used to teach computer science students is as a crutch.

    Computer science should not teach an entire suite of courses tailored to a single language. Students leave with the idea that all good design is idiomatic Java design, and that object-oriented design is the ONE TRUE WAY THAT IS THE ONLY WAY THINGS CAN BE DONE. Other languages, at least one of them not being a garbage-collected language, should be used in teaching, in order to give the graduate a better understanding of the machines. It is an embarrassment that somebody with a PhD in CS from a respected institution cannot program their way out of a paper bag. What's worse is that when I talk to those CS professors who actually do understand how things operate, they share feelings like this: that we're doing a disservice to our students by doing everything in Java. (Note that the above would be the same if I replaced Java with any other language; generally, using a single language is the problem, not Java itself.)

    In total, I feel I can no longer respect any kind of degree at all, when I can't see those around me able to program their way out of fizzbuzz problems. Why/how did it get to be this way?

    Read the article

  • Why do some user agents have spam URLs in them (and why are they always Opera/Presto User-Agents)?

    - by Erx_VB.NExT.Coder
    If you go to (say) the last 100 entries (visits) on the botsvsbrowsers.com website (exact link, feel free to take a look: http://www.botsvsbrowsers.com/recent/listings/index.html), you'll notice that almost every User-Agent that has the keywords "Opera" and "Presto" inside it will almost certainly have a web link (URL/web address) inside it, and it won't just be a normal web address, but an HTML anchor tag/link to that address. Why is this so? I could not find even a single discussion about it on the internet, anywhere; I tried varying my search terms many times. If the user agent contains the words "Opera" and "Presto" it doesn't mean it will have this web link, but it means there is about an 80% chance that it will. A typical anchor tag/link inside a user agent will look like this:

        Mozilla/4.0 <a href="http://osis-uk.co.uk/disabled-equipment">disability equipment</a> (Windows NT 5.1; U; en) Presto/2.10.229 Version/11.60

    If you check it out at the website http://www.botsvsbrowsers.com/recent/listings/index.html you will notice that the angle brackets appear in their unescaped format. This isn't just true for botsvsbrowsers, but several other user-agent listing sites. I'm really confused and feel like I'm in a room full of 10,000 people and am the only one seeing this ghost :).

    If I'm doing statistical analysis, should I include or exclude this type of user agent from my listing (i.e. are these just normal users who've set their user agents to attempt to drive some traffic to their sites as they browse the web), or is there something else going on? The fact that it is so consistent in its format leads me to believe that it is an automated process (the setting or alteration of the user agent), but I cannot decide on or understand the process by which this change is made. I know how to change a user agent, but I am unsure which program or facility is doing this, especially since it is exclusive to Opera (Presto) user agents beyond, I think, an 8-or-9-point-something browser version.

    I've run some statistical tests, parsing entries from all over the place and writing custom programs, to get a better understanding of this. Keep in mind that I do see normal URLs in user agents infrequently; they are just text, such as +http://www.someSite.com appended to a user agent, normally when a crawler or bot provides its service URL. This is normal and isn't done with an embedded link (A HREF=) etc., so I'm not talking about "those".
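    If the immediate goal is just to keep these entries out of a statistical run, one hedged possibility (assuming the raw Apache access log is available and the anchor tag appears unescaped, as on botsvsbrowsers) is to partition the log on the embedded anchor tag before analysing it:

        # Entries whose User-Agent embeds an HTML anchor tag:
        grep '<a href' access.log > linked_ua.log
        # Everything else, for the actual statistics:
        grep -v '<a href' access.log > clean.log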

    Read the article

  • Should I continue my self-taught coding practice or learn how to do coding professionally?

    - by G1i1ch
    Lately I've been getting professional work, hanging out with other programmers, and making friends in the industry. The only thing is, I'm 100% self-taught. It's caused my style to deviate extremely from the style of those who were properly trained. It's the techniques and organization of my code that are different.

    It's a mixture of several things I do. I tend to blend several programming paradigms together, like functional and OO. I lean to the functional side more than OO, but I see the use of OO when something would make more sense as an abstract entity, like a game object. Next, I also go the simple route when doing something. In contrast, it seems like sometimes the code I see from professional programmers is complicated for the sake of it! I use lots of closures. And lastly, I'm not the best commenter. I find it easier just to read through my code than to read the comments, and in most cases I just end up reading the code even if there are comments. Plus I've been told that, because of how simply I write my code, it's very easy to read.

    I hear professionally trained programmers go on and on about things like unit tests, something I've never used before, so I haven't even the faintest idea of what they are or how they work. Lots and lots of underscores "_", which aren't really to my taste. Most of the techniques I use are straight from me, or from a few books I've read. I don't know anything about MVC; I've heard a lot about it with things like backbone.js, and I think it's a way to organize an application, but it just confuses me, because by now I've made my own organizational structures.

    It's a bit of a pain. I can't use template applications at all when learning something new, like with Ubuntu's Quickly. I have trouble understanding code that I can tell is from someone trained. Complete OO programming really leaves a bad taste in my mouth, yet that seems to be what EVERYONE else is strictly using. It's left me not that confident in the look of my code, and wondering whether I'll cause sparks when joining a company or maybe contributing to open source projects. In fact, I'm rather scared of the fact that people will eventually be checking out my code. Is this just something normal that any programmer goes through, or should I really look to change my techniques?

    Read the article

  • How to synchronize the ball in a network pong game?

    - by Thaars
    I’m developing a multiplayer network pong game, my first game ever. The current state is, I’ve running the physic engine with the same configurations on the server and the clients. The own paddle movement is predicted and get just confirmed by the authoritative server. Is a difference detected between them, I correct the position at the client by interpolation. The opponent paddle is also interpolated 200ms to 100ms in the past, because the server is broadcasting snapshots every 100ms to each client. So far it works very well, but now I have to simulate the ball and have a problem to understanding the procedure. I’ve read Valve’s (and many other) articles about fast-paced multiplayer several times and understood their approach. Maybe I can compare my ball with their bullets, but their advantage is, the bullets are not visible. When I have to display the ball, and see my paddle in the present, the opponent in the past and the server is somewhere between it, how can I synchronize the ball over all instances and ensure, that it got ever hit by the paddle even if the paddle is fast moving? Currently my ball’s position is simply set by a server update, so it can happen, that the ball bounces back, even if the paddle is some pixel away (because of a delayed server position). Until now I’ve got no synced clock over all instances. I’m sending a client step index with each update to the server. If the server did his job, he sends the snapshot with the last step index of each client back to the clients. Now I’m looking for the stored position at the returned step index and compare them. Do I need a common clock to sync the ball? EDIT: I've tried to sync a common clock for the server and all clients with a timestamp. But I think it's better to use an own stepping instead of a timestamp (so I don't need to calculate with the ping and so on - and the timestamp will never be exact). The physics are running 60 times per second and now I use this for keeping them synchronized. Is that a good way? When the ball gets calculated by each client, the angle after bouncing can differ because of the different position of the paddles (the opponent is 200ms in the past). When the server is sending his ball position, velocity and angle (because he knows the position of each paddle and is authoritative), the ball could be in a very different position because of the different angles after bouncing (because the clients receive the server data after 100ms). How is it possible to interpolate such a huge difference? I posted this question some days ago at stackoverflow, but got no answer yet. Maybe this is the better place for this question.

    Read the article

  • Are there any good Java/JVM libraries for my Expression Tree architecture?

    - by Snuggy
    My team and I are developing an enterprise-level application, and I have devised an architecture for it that's best described as an "expression tree". The basic idea is that the leaf nodes of the tree are very simple expressions (perhaps simple values or strings). Nodes closer to the trunk get more and more complex, taking the simpler nodes as their inputs and returning more complex results to their parents. Looking at it the other way, the application performs some task, and for this it creates a root expression. The root expression divides its input into smaller units and creates child expressions, which, when evaluated, it can use to build its own result. The subdividing process continues down to the simplest leaf nodes.

    There are two very important aspects of this architecture:

    1. It must be possible to manipulate nodes of the tree after it is built. The nodes may be given new input values to work with, and any change in result for a node needs to be propagated back up the tree to the root node.
    2. The application must make the best use of available processors, and ultimately be scalable to other computers in a grid or in the cloud. Nodes in the tree will often be updating concurrently and notifying other interested nodes in the tree when they get a new value.

    Unfortunately, I'm not at liberty to discuss my actual application, but to aid understanding a little bit, you might imagine a kind of spreadsheet application implemented with a similar architecture, where changes to cells in the table are propagated all over the place to other cells that need the result. The spreadsheet could get so massive that applying a multi-core, multi-computer distributed system to solve it would be of benefit.

    I've got my prototype "expression engine" working nicely on a single multi-core PC, but I've started to run into a few concurrency issues (as expected, because I haven't been taking too much care so far), so it's now time to start thinking about migrating the engine to a more robust library. That leads to a number of related questions:

    1. Is there any precedent for my "expression tree" architecture that I could research?
    2. What programming concepts should I consider? I realise this approach has many similarities to a functional programming style, and I'm already aware of the concepts of futures and actors. Are there any others?
    3. Are there any languages or libraries that I should study? This question is inspired by my accidental discovery of Scala and the Akka library (which has good support for actors, futures, distributed workloads etc.), and I'm wondering if there is anything else I should be looking at as well.
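    To make aspect 1 concrete, a deliberately single-threaded Java sketch of the "propagate changes up to the root" idea (all names hypothetical; a real engine would replace the direct parent call with actors, futures, or some other concurrency mechanism):

        import java.util.ArrayList;
        import java.util.List;

        abstract class Node {
            private Node parent;
            final List<Node> children = new ArrayList<>();
            int value; // cached result of this subtree

            void addChild(Node child) { child.parent = this; children.add(child); }

            // Recompute this node, then push the change toward the root.
            protected final void propagate() {
                recompute();
                if (parent != null) parent.propagate();
            }
            protected abstract void recompute();
        }

        class Leaf extends Node {
            void set(int v) { value = v; propagate(); }
            protected void recompute() { /* leaves hold raw input; nothing to do */ }
        }

        class Sum extends Node {
            protected void recompute() {
                value = 0;
                for (Node c : children) value += c.value;
            }
        }

        public class ExpressionTreeSketch {
            public static void main(String[] args) {
                Sum root = new Sum();
                Leaf a = new Leaf(), b = new Leaf();
                root.addChild(a); root.addChild(b);
                a.set(2); b.set(3);
                System.out.println(root.value); // 5
                a.set(10);
                System.out.println(root.value); // 13
            }
        }

    A thread-safe version might turn propagate() into message sends between actor-wrapped nodes, which is essentially what the Akka approach mentioned in question 3 would give you.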

    Read the article

  • How to implement a multi-part snake with smooth movement? [closed]

    - by Jamie
    Sorry that I couldn't answer on my previous post, but it got closed. I couldn't answer because I had to prepare for my finals. As there were problems with understanding what I'm trying to achieve, I'm going to describe it a little more in depth.

    I'm creating a game in which you steer a snake. I assume everybody knows how that works. But in my case the snake isn't just propagating in an array, element by element. Imagine a 2D grid on which the snake moves. It's 10x10 tiles. Let's say one tile is 4x4 meters. The snake's head spawns in the middle of the (3,2) tile (beginning with (0,0)), so its position is (4*3+2, 4*2+2) (the 2's are so that the snake is in the middle of the 4x4 tile).

    And here's where the fun begins. When the snake moves, it doesn't jump to the next tile. Instead it moves a fraction of the way there. So let's say the snake is heading to tile (4,2). After it moves once, its position is (4*3+2+0.1, 4*2+2), where 0.1 is the fraction of the way it moved. This is done to achieve smooth movement.

    So now I'm adding the rest of the body. The rest is supposed to move along the exact same path as the head did. I implemented it so that each part of the body has its own position and direction. Then I apply this algorithm:

    1. Move each part in its direction.
    2. If a part is in the middle of a tile (which implies all of them are), change each part's direction to the direction of the part preceding it.

    As I said before, I could make this work, but I can't stop thinking that I'm overlooking a much easier and cleaner solution. So this is my question: is there any easier/better/faster way to do this?
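    One common alternative, sketched below in Java purely for illustration (the post doesn't name a language), is to stop giving body parts their own directions at all: record the head's position once per movement step, and let segment i simply trail a fixed number of recorded samples behind the head. With the numbers above (a 4m tile crossed in 0.1 fractions, i.e. 0.4m per step), one tile corresponds to 10 samples:

        import java.util.ArrayList;
        import java.util.List;

        public class SnakeBody {
            static final int SAMPLES_PER_TILE = 10; // 0.1 of a tile per update
            final List<float[]> path = new ArrayList<>(); // newest sample first
            float headX = 4*3 + 2, headY = 4*2 + 2;       // spawn point from the post

            void update(float dx, float dy) {
                headX += dx; headY += dy;
                path.add(0, new float[]{headX, headY});
                // Keep just enough history for, say, 20 segments.
                if (path.size() > 20 * SAMPLES_PER_TILE) path.remove(path.size() - 1);
            }

            // Segment i sits one tile's worth of samples behind segment i-1.
            float[] segment(int i) {
                int idx = Math.min(i * SAMPLES_PER_TILE, path.size() - 1);
                return path.get(idx);
            }

            public static void main(String[] args) {
                SnakeBody s = new SnakeBody();
                for (int n = 0; n < 40; n++) s.update(0.4f, 0); // move right 4 tiles
                System.out.println(s.segment(1)[0]); // 26.0: one tile (4 m) behind the head at 30
            }
        }

    Each segment then follows the head's exact path by construction, with no per-part direction bookkeeping and no special-casing at tile centres.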

    Read the article

  • Redesigning an Information System - Part 1

    - by dbradley
    Through the next few weeks or months I'd like to run a small series of articles sharing my experiences from the largest project I've worked on, and explore some of the real-world problems I've come across and how we went about solving them. I'm afraid I can't give too many specifics on the project right now, as it's not yet complete, so you'll have to forgive me for being a little abstract in places!

    To start with, I'm going to run through a little of the background of the problem and the motivations to re-design from scratch. Then I'll work through the approaches taken to understanding the requirements, designing, implementing, testing and migrating to the new system.

    Motivations for Re-designing a Large Information System

    The system is one that's been in place for a number of years and was originally designed to do something significantly different from what it's now being used for. This is mainly due to the product maturing as well as client requirements changing. As with most information systems, this one can be defined in four main areas of functionality:

    - Input: adding information to the system
    - Storage: persisting information in an efficient, searchable structure
    - Output: delivering the information to the client
    - Control: management of the process

    There can be a variety of reasons to re-design an existing system; a few of our own turned out to be factors such as:

    - Overall system reliability
    - System response time
    - Failure isolation and recovery
    - Maintainability of code and information
    - General extensibility to solve future problems
    - Separation of business and product concerns
    - New or improved features

    The factor that started the thought process was the desire to improve the way in which information was entered into the system. However, this alone was not the entire reason for deciding to redesign.

    Business Drivers

    Typically, all software engineers would prefer to do a project from scratch themselves. It generally means you don't have to deal with problems created by predecessors, and you can create your own absolutely perfect solution. However, the reality of working within a business is that the bottom line comes down to return on investment. For a medium-sized business such as mine, there must be actual value able to be delivered within a reasonable timeframe for any work to be started. As a result, any long-term project will generally take a lot of effort and consideration to be approved by those in charge, and therefore it might be better to break the project down into more manageable chunks which allow more frequent deliverables and also value within a shorter timeframe.

    As the only thing of concern was the method for inputting information, this is where we started with requirements gathering and design. However, knowing that there might be more to the problem, and not limiting your design decisions before the requirements are known, is key to finding the best solutions.

    Read the article

  • Why can't Java/C# implement RAII?

    - by mike30
    Question: Why can't Java/C# implement RAII? Clarification: I am aware the garbage collector is not deterministic. So with the current language features it is not possible for an object's Dispose() method to be called automatically on scope exit. But could such a deterministic feature be added? My understanding: I feel an implementation of RAII must satisfy two requirements: 1. The lifetime of a resource must be bound to a scope. 2. Implicit. The freeing of the resource must happen without an explicit statement by the programmer. Analogous to a garbage collector freeing memory without an explicit statement. The "implicitness" only needs to occur at point of use of the class. The class library creator must of course explicitly implement a destructor or Dispose() method. Java/C# satisfy point 1. In C# a resource implementing IDisposable can be bound to a "using" scope: void test() { using(Resource r = new Resource()) { r.foo(); }//resource released on scope exit } This does not satisfy point 2. The programmer must explicitly tie the object to a special "using" scope. Programmers can (and do) forget to explicitly tie the resource to a scope, creating a leak. In fact the "using" blocks are converted to try-finally-dispose() code by the compiler. It has the same explicit nature of the try-finally-dispose() pattern. Without an implicit release, the hook to a scope is syntactic sugar. void test() { //Programmer forgot (or was not aware of the need) to explicitly //bind Resource to a scope. Resource r = new Resource(); r.foo(); }//resource leaked!!! I think it is worth creating a language feature in Java/C# allowing special objects that are hooked to the stack via a smart-pointer. The feature would allow you to flag a class as scope-bound, so that it always is created with a hook to the stack. There could be a options for different for different types of smart pointers. class Resource - ScopeBound { /* class details */ void Dispose() { //free resource } } void test() { //class Resource was flagged as ScopeBound so the tie to the stack is implicit. Resource r = new Resource(); //r is a smart-pointer r.foo(); }//resource released on scope exit. I think implicitness is "worth it". Just as the implicitness of garbage collection is "worth it". Explicit using blocks are refreshing on the eyes, but offer no semantic advantage over try-finally-dispose(). Is it impractical to implement such a feature into the Java/C# languages? Could it be introduced without breaking old code?

    Read the article

  • OpenGL textures in bitmap mode

    - by evenex_code
    For reasons detailed here, I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so:

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0);

    The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..."

    Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised:

    1. When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars with 31-px-wide spaces between them. However, it appears blank.
    2. Next, I write with 0x000000FF. By my apparently flawed understanding of the bitmap mode, I would expect this to produce 8-wide bars with 24-wide spaces between them. Instead, it produces a white 1-px-wide bar.
    3. 0x55555555 = 01010101010101010101010101010101 in binary, therefore writing this value ought to create 1-wide vertical stripes with 1-pixel spacing. However, it creates a solid gray color.
    4. Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation.

    I have reached the conclusion that, even in GL_BITMAP mode, the texturer is still interpreting 8 bits as 1 element, despite what the spec seems to suggest. The fact that I can generate a gray color (while I was expecting to be working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, support this conclusion.

    Questions:

    1. Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturer to treat each byte as 8 elements, as the spec suggests?
    2. Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3; I am however forcing a 3.0 context.)
    3. Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for, but I suppose there is no such thing as a free lunch.
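    On question 1, the unpack pixel-store state is at least worth ruling out before blaming the hardware; these are real, long-standing OpenGL parameters, though whether a given driver honours GL_BITMAP at all is another matter:

        // Rows of the bitmap are packed with no padding (the default is 4-byte alignment).
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        // Bit order within each byte: the spec default is most-significant bit first,
        // so pass GL_TRUE here only if the bits were packed LSB-first.
        glPixelStorei(GL_UNPACK_LSB_FIRST, GL_FALSE);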

    Read the article

  • ADF Real World Developers Guide Book Review

    - by Grant Ronald
    I'm halfway through my review of "Oracle ADF Real World Developer's Guide" by Jobinesh Purushothaman; unfortunately some work deadlines derailed me from having completed my review by now, but here goes.

    First thing: Jobinesh works in the Oracle Product Management team with me, so he is a colleague. That declaration aside, it's clear that this is someone who has done the "real world" side of ADF development, and that comes out in the book. In this book he addresses both newbies and experienced developers alike. He introduces the ADF building blocks like entity objects and view objects, but also goes into some of the nitty-gritty details as well. There is a pro and a con to this approach: having only just learned about an entity or view object, you might then be blown away by some of the lower-level details of coding or lifecycle. In that respect, you might consider this a book you could read 3 or 4 times, maybe skipping some elements on the first read, so that on the next read you have a better grounding to learn the more advanced topics.

    One of the key issues he addresses is breaking down what happens behind the scenes. At first, this may not seem important, since you trust the framework to do everything for you, but having an understanding of what goes on is essential as you move through development. For example, on page 58 he explains the full lifecycle of what happens when you execute a query. I think this is a great feature of his book. You see this elsewhere too; for example, he explains the full lifecycle of what goes on when a page is accessed: which files are involved, the JSF lifecycle, etc. He also sprinkles the book with best practices and advice which go beyond the standard features of ADF and really hit the mark in terms of "real world" advice.

    So in summary, this is a great ADF book, well written and covering a mass of information. If you are brand new to ADF it's still valid, given it does start with the basics, but you might want to read the book 2 or 3 times, skipping the advanced stuff on the first read. For those who have some basics already, it's going to be an awesome way to cement your knowledge and take it to the next level. And for the ADF experts, you are still going to pick up some great ADF nuggets. Advice: every ADF developer should have one!

    Read the article

  • The term "interface" in C++

    - by Flexo
    Java makes a clear distinction between class and interface. (I believe C# does also, but I have no experience with it.) When writing C++, however, there is no language-enforced distinction between class and interface. Consequently, I've always viewed interface as a workaround for the lack of multiple inheritance in Java. Making such a distinction feels arbitrary and meaningless in C++.

    I've always tended to go with the "write things in the most obvious way" approach, so if in C++ I've got what might be called an interface in Java, e.g.:

        class Foo {
        public:
            virtual void doStuff() = 0;
            virtual ~Foo() = 0;
        };

    and I then decided that most implementers of Foo wanted to share some common functionality, I would probably write:

        class Foo {
        public:
            virtual void doStuff() = 0;
            virtual ~Foo() {}
        protected:
            // If it needs this to do its thing:
            int internalHelperThing(int);
            // Or if it doesn't need the this pointer:
            static int someOtherHelper(int);
        };

    which then makes this not an interface in the Java sense anymore. Instead, C++ has two important concepts, related to the same underlying inheritance problem:

    1. Virtual inheritance
    2. Classes with no member variables can occupy no extra space when used as a base ("base class subobjects may have zero size"; reference)

    Of those, I try to avoid #1 wherever possible; it's rare to encounter a scenario where that genuinely is the "cleanest" design. #2 is, however, a subtle but important difference between my understanding of the term "interface" and the C++ language features. As a result of this, I currently (almost) never refer to things as "interfaces" in C++, and talk in terms of base classes and their sizes. I would say that in the context of C++, "interface" is a misnomer. It has come to my attention, though, that not many people make such a distinction.

    1. Do I stand to lose anything by allowing (e.g. protected) non-virtual functions to exist within an "interface" in C++? (My feeling is exactly the opposite: a more natural location for shared code.)
    2. Is the term "interface" meaningful in C++? Does it imply only pure virtual, or would it be fair to still call C++ classes with no member variables an interface?

    Read the article

  • What's the best way to cache a growing database table for html generation?

    - by McLeopold
    I've got a database table which will grow in size by about 5000 rows an hour. For a key that I would be querying by, the query results will grow in size by about 1 row every hour. I would like a web page to show the latest rows for a key, 50 at a time (this is configurable). I would like to try to implement memcache to keep database activity low for reads.

    If I run a query and create a cache result for each page of 50 results, that would work until a new entry is added. At that time, the page of latest results gets the new result and the oldest result drops off. This cascades down the list of cached pages, causing me to update every cached result. It seems like a poor design. I could build the cache pages backwards; then for each page requested, I would get the latest 2 pages and truncate to the proper length of 50. I'm not sure if this is good or bad. Ideally, the mechanism I use to insert a new row would also know how to invalidate the proper cache results. Has someone already solved this problem in a widely accepted way? What's the best method of doing this?

    EDIT: If my understanding of the MySQL query cache is correct, it has table-level granularity of invalidation. Given the fact that I have about 5000 updates before a query on a key would need to be invalidated, it seems that the database query cache would not be used. MS SQL caches execution plans and frequently accessed data pages, so it may do better in this scenario.

    My query is not against a single table with TOP N. One version has joins to several tables and another has sub-selects. Also, since I want to cache the generated HTML table, I'm wondering if a cache at the web server level would be appropriate. Is there really no benefit to any type of caching? Is the best advice really to just allow a website query to go through all the layers and hit the database on every request?
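    The "build the cache pages backwards" idea can be made concrete by keying pages from the oldest row forward, so that a full page never changes once written and a new row only affects the newest, partially filled page. A minimal single-process Java sketch of that keying scheme (all names hypothetical; a HashMap stands in for memcache and the query is stubbed out):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        class LatestRowsCache {
            static final int PAGE_SIZE = 50;
            final Map<String, List<String>> cache = new HashMap<>(); // memcache stand-in

            // Page 0 holds rows [0, 50) counted from the oldest row, page 1
            // holds [50, 100), and so on. pageFromNewest = 0 is the page the
            // web page shows first.
            List<String> fetch(long totalRows, int pageFromNewest) {
                long page = totalRows / PAGE_SIZE - pageFromNewest;
                List<String> rows = cache.get("rows:" + page);
                if (rows == null) {
                    rows = queryPage(page);        // the one real database hit
                    if (rows.size() == PAGE_SIZE)  // only full pages are immutable
                        cache.put("rows:" + page, rows);
                }
                return rows;
            }

            List<String> queryPage(long page) { return new ArrayList<>(); } // stub
        }

    Serving "the latest 50" then means fetching pages 0 and 1 from the newest end and truncating, exactly as described above, while an insert needs no explicit invalidation at all: the newest page is simply never cached until it fills up.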

    Read the article

  • Is it smart to take a year off from school to get experience?

    - by user134147
    Firstly, I apologize if this question is not appropriate for the site, but I've seen other similar (though slightly deviant) questions on this site before, and I know the people here are the most qualified to answer my question. Anyway, I'm currently between my sophomore and junior years at a 4-year university, and after a bit of deliberation I've decided on computer science as a major (a BA, by the way, as a BS would require me to stay at least an extra year the way our program is set up). I've been interested in programming for a few months now and have developed a passion for it in a very short time. I began learning C++, migrating to Java recently when I learned my school focuses on that language.

    Now, I should mention that the concept of higher education has never sat well with me, so part of my motivation for wanting to take time off is to truly challenge myself and see what I can accomplish when I actually try at something. The autodidact in me finds it difficult to focus on my passions while trying to keep a high GPA in unrelated classes. However, I understand the times we live in, and therefore would plan to complete my degree after this year.

    So my question is whether or not the skills I would learn in a year off from college could justify the time away from school. Unfortunately, I don't believe I know enough yet to gain any professional experience (internship, etc.), so I would mostly focus my time on learning Java and another technology, possibly WordPress (to gain an understanding of web programming concepts, as I have not yet decided what field I want to get into, and to make some money to fund my year off), and on delving into security concepts, which also interest me. I'm hoping I could work on projects, such as simple applications or contributions to open source software, during this time to enhance my resume once I do finish school, so I can find a job out of college more easily. I do not want to be the new hire who knows nothing beyond the concepts of his Java textbooks.

    Does anyone have any input on these thoughts of mine, or any ideas for where I should focus my studies or how high I might set the bar for my work? Thanks a lot, everyone!

    Read the article
