Search Results

Search found 6515 results on 261 pages for 'half life'.


  • ArchBeat Link-o-Rama for 2012-06-15

    - by Bob Rhubart
    URGENT BULLETIN: Disable JRE Auto-Update for All E-Business Suite End-Users
    All desktop administrators must IMMEDIATELY disable the Java Runtime Environment (JRE) Auto-Update option for all Windows end-user desktops connecting to Oracle E-Business Suite Release 11i, 12.0, and 12.1.
    WebLogic JMS / AQ bridge with JBoss AS 7 | Edwin Biemond
    Oracle ACE Edwin Biemond explains "how you can retrieve JMS messages from JBoss with the help of a WebLogic Foreign Server and how to push messages to JBoss AS with the help of a WebLogic JMS Bridge."
    The Healthy Tension That Mobility Creates | Hernan Capdevila
    "Mobile device management in the cloud makes good sense," says Hernan Capdevila. "I don't think IT departments should be hosting device management and managing that complexity. It should be a cloud service."
    OPN: Fusion Middleware Summer Camps in July in Lisbon and Munich
    For specialized Oracle Partners. Participation is limited to two people per company at each bootcamp. Registration is first come, first served. Take note of the skill requirements and prerequisites.
    Podcast: Cows in the Cloud and the importance of standards
    In part two of a four-part program, cloud experts Jim Baty, Mark Nelson, William Vambenepe, and Ajay Srivastava explain cows in the cloud and talk about the importance of standards. Community members talk about the challenges and opportunities mobile computing presents for IT architects. Apple has sold 55 million iPads since 2010. Gartner expects a 98% increase in tablet sales in 2012, to 118 million. Nielsen reports that smartphones now account for nearly half of all mobile phones in the U.S., a 38% increase over 2011. And the mobile juggernaut is just getting started.
    Thought for the Day
    "Why are video games so much better designed than office software? Because people who design video games love to play video games. People who design office software look forward to doing something else on the weekend." — Ted Nelson
    Source: SoftwareQuotes.com

    Read the article

  • How to improve Algorithmic Programming Solving skill? [closed]

    - by gaurav
    Possible Duplicate:
    How can I improve my problem-solving ability?
    How do you improve your problem solving skills?
    Should I learn design patterns or algorithms to improve my logical thinking skills?
    What to do when you're faced with a problem that you can't solve quickly?
    Are there non-programming related activities akin to solving programming problems?
    I am a computer engineering graduate. I have studied programming for three years, and I am good at coding and programming. I have been trying to compete in algorithmic competitions on sites such as TopCoder and SPOJ for one and a half years, but I am still unable to solve anything beyond the easy problems. I have learned from people that it takes practice to solve such problems. I try to solve those problems, but sometimes I am unable to understand them, and even when I do understand them I am unable to think of a good algorithm. Even when I do solve one, I get Wrong Answer and cannot figure out what is wrong with my code, since it works on the samples given on the sites but fails on the test cases they do not provide. I really want to solve these problems and become good at algorithms. I have read books on algorithms, such as Introduction to Algorithms by CLRS, and practiced programming questions. I have gone through some questions here, but they don't answer this question. The questions marked as duplicates focus on overall programming, whereas I am asking about algorithm-related programming: competing on problems where an online judge automatically evaluates your solution, which is quite different from the type of programming those questions discuss.

    Read the article

  • Blank screen after Switch User or Resume

    - by matt wilkie
    About half the time when I switch users or resume from standby, the screen goes blank (black). If I work the cursor keys I can hear the system bell when it gets to the end of the user list. I can also successfully log in, going from memory, but the screen stays black. Sometimes closing and re-opening the lid will light up the screen again. Pressing the special Function key to enable/disable the external monitor connection has no effect ([Fn]-[F5], [Fn]-[F6]). If none of the previous steps work, I need to put the computer into hibernation or power it off fully to restore screen function. If I watch closely when switching users, I think I can see the screen initially start to light up and then quickly fade to black. The computer is an Acer Aspire 3500, model ZL6, running Ubuntu 10.10 installed 2 days ago. No proprietary drivers are in use. I'll provide a list of hardware details as soon as I can figure out how to generate one (didn't there used to be an entry for hardware details under the System menu?). Possibly related questions: No resume after Hibernate or Standby; When I resume from suspension - the screen is blank; Switch user fails to complete successfully. For what it's worth, the blank screen after resume also used to happen occasionally when the laptop was running XP Home, but nowhere near as often, perhaps 6 or 8 times a year.
    UPDATE: I found System > Administration > System Testing and ran the Monitor test. The screen went very, very dark, but the window elements could be discerned, and the whole screen flashed (from very dark to black). On the third repeat of that same test the screen went fully black and stayed there. Moving the mouse (via the touchpad) or touching keys did not wake it up again. I had to close the lid, put the computer into hibernation, and press the power button to restore it.
    UPDATE 2: output of lshw: http://pastebin.com/q7n8676r, lspci: http://pastebin.com/6ujzVK4r
    UPDATE 3: sometimes I can restore the screen by flipping to console 1 with Ctrl-Alt-F1 and then back to the graphical session with Ctrl-Alt-F7.

    Read the article

  • Keeping Aspect Screen Ratio While Stays in Center

    - by David Dimalanta
    I saw and tried this suggestion on PISTACHIO BRAINSTORMIN* for making a good, adaptive screen ratio. For every different screen size, let's say I put a perfect circle as a Texture in LibGDX and played it on screen. Here's the blueberry image example, and it's perfectly round: When I played it on the Google Nexus 7, the circle turned into a slightly oblong shape, as if it had been flattened a bit. Please observe the snapshot below, where the blueberry is almost, but not quite, perfectly round: Now, when I tried the suggested code for the aspect ratio, the perfect circle was retained, but another problem occurred. I expected a centered view, but instead it was moved to the right, leaving half the screen black. It looks like this: Here is my code using the suggested screen aspect ratio approach:
    Class fields:
        // Ingredients needed for screen aspect ratio
        private static final int VIRTUAL_WIDTH = 720;
        private static final int VIRTUAL_HEIGHT = 1280;
        private static final float ASPECT_RATIO = ((float) VIRTUAL_WIDTH)/((float) VIRTUAL_HEIGHT);
        private Camera Mother_Camera;
        private Rectangle Viewport;
    render():
        // Camera updating...
        Mother_Camera.update();
        Mother_Camera.apply(Gdx.gl10);
        // Resetting viewport...
        Gdx.gl.glViewport((int) Viewport.x, (int) Viewport.y, (int) Viewport.width, (int) Viewport.height);
        // Clear previous frame.
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    show():
        Mother_Camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);
    Was this code useful for fixing the aspect-ratio proportion, or is it statically dependent on the actual device's width and height?
    *see http://blog.acamara.es/2012/02/05/keep-screen-aspect-ratio-with-different-resolutions-using-libgdx/#comment-317
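    A hedged guess at the half-black-screen symptom (an added note, not from the original question): the Viewport rectangle is never re-centred, so the scaled virtual screen hugs one edge. Below is a minimal resize() sketch in the style of the linked PISTACHIO BRAINSTORMIN' post, reusing the fields above; Vector2 and Rectangle are the LibGDX math classes, and the crop offsets are the part that centres the view. Untested:
        public void resize(int width, int height) {
            float aspectRatio = (float) width / (float) height;
            float scale;
            Vector2 crop = new Vector2(0f, 0f);
            if (aspectRatio > ASPECT_RATIO) {
                // Window is wider than the virtual screen: pillarbox, shift right by half the slack.
                scale = (float) height / (float) VIRTUAL_HEIGHT;
                crop.x = (width - VIRTUAL_WIDTH * scale) / 2f;
            } else if (aspectRatio < ASPECT_RATIO) {
                // Window is taller: letterbox, shift up by half the slack.
                scale = (float) width / (float) VIRTUAL_WIDTH;
                crop.y = (height - VIRTUAL_HEIGHT * scale) / 2f;
            } else {
                scale = (float) width / (float) VIRTUAL_WIDTH;
            }
            // Both offsets feed the glViewport call in render(); without them the view hugs a corner.
            Viewport = new Rectangle(crop.x, crop.y, VIRTUAL_WIDTH * scale, VIRTUAL_HEIGHT * scale);
        }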

    Read the article

  • Summary of the Solaris 11 webcast's livechat QnA session

    - by Karoly Vegh
    This is a followup post to the previous summary on the "What's new with Solaris 11 since the launch" webcast. That webcast had a chatroom with a live Questions and Answers session running. I went through the archive and compiled a list of some of the (IMHO) most relevant and most frequently asked questions, which I'd like to share. This is the first part, covering the QnA of Sessions I and II of the webcast; in a followup post we can have a look at the rest of the sessions if required - let me know in the comments. Also, should you have questions, as usual, feel free to ask them there, too. ...and here come the answered questions:
    Q: When will Exadata be based on Solaris in place of Oracle Enterprise Linux?
    A: Exadata offers either Solaris 11 or Oracle Enterprise Linux. The choice can be made at deployment time based on your OS needs.
    Q: What other benefits and features are available in Solaris 11 (a cloud OS) compared to cloud-based Red Hat Linux and Windows?
    A: We suggest you check out our cloud white paper for a view of this. The OTN Solaris 11 page also has some good articles. Here are the links: http://www.oracle.com/technetwork/server-storage/solaris11/documentation/o11-106-sol11-cloud-501066.pdf and http://www.oracle.com/technetwork/server-storage/solaris11/overview/index.html
    Q: Will 11.1 have a more complete IPS repository for Oracle and FOSS software?
    A: Yes, we are adding additional packages to the various package repositories. Since Solaris 11 was launched, both the Oracle Solaris Studio tools and Oracle Solaris Cluster have been made available, along with numerous new FOSS packages. We will continue adding Oracle products and open source packages in the future.
    Q: Will Exadata be based on SPARC in place of Intel/AMD x86 in the near future?
    A: We can't publicly discuss futures, but we actually have a SPARC version of Exadata today. It's called SuperCluster, and it is such a powerful multipurpose system that it actually has multiple personalities built into one system: Exadata, Exalogic, and it can be a general-purpose platform if you want.
    Q: Have I understood this right? Live patching, KSplice-style, is coming to Solaris 11 too?
    A: We're looking at that for certain types of Solaris patches in the future.
    Q: Will there be a security framework like SST/JASS for Solaris 11?
    A: We can't talk about future projects on a public forum, but we recognize the need for SST/JASS and want to address this as soon as possible. On the other hand, a whole bunch of "best practices" are now embedded into Solaris 11 by default, so out of the box Solaris 11 should already address part of what SST/JASS gave you. (For example, we did a lot of work on improving auditing performance so that we can now have it turned on by default.)
    Q: On x86, can I install VirtualBox in a Zone and use that to host other OSes?
    A: Yes. This was one of the first things we made sure would work when we acquired VirtualBox, back when we were still Sun Microsystems.
    Q: If I have a Solaris 11 Control Domain on a T-series, can I run a Solaris 10 LDom with Solaris 8 branded containers?
    A: Yes, you can.
    Q: Is Oracle Solaris free, or do we need to purchase it?
    A: Solaris is free; the entitlement to run it comes either with a Sun system (new or historical) or, for 3rd-party systems, with a support contract. Note that for production use you will be expected to get a support contract. If you don't want to use the Solaris system (Sun or 3rd party) for production use (i.e. development), you can get an OTN license on the Oracle Technology Network website.
    Q: Will encryption and deduplication both work on a share?
    A: They should work at the same time.
    Q: What approaches does Solaris use to monitor usage?
    A: There are many different tools in Solaris to monitor usage. The main ones are the "stats" (vmstat, mpstat, prstat, ...), the kstat interface, and DTrace (to get details you couldn't see before). And then there are layered tools that can interface with these (Ops Center, BMC, CA, Tivoli, ...).
    Q: Apart from little-endian/big-endian differences, how easy is it to port Solaris applications from SPARC to x86 and vice versa?
    A: Very easy. Except for certain hardware-specific applications (those that utilize hardware-specific drivers), all of the same Oracle Solaris APIs exist on all architectures.
    Q: Is IPS-based patching aware of the fact that zones can reside on ZFS and move from one physical server to another?
    A: IPS is definitely aware of zones and uses ZFS to support boot environments for non-global zones in the same way that's used for the global zone. With respect to moving a zone from one physical server to another, Solaris 11 supports the same zone attach/detach method that was introduced in Solaris 10.
    Q: Is VNIC support in LDoms planned?
    A: This is currently being investigated for a future LDoms release.
    Q: Is it possible with the new patching system to build a system later with the same patch level as a system built a few months earlier?
    A: Yes, you can choose/define exactly which version should go to the system, and it will always put the same bits in place. The technical answer is that you choose the version of the "entire" package you want on the system, and the rest flows from there.
    Q: Is it in the plans to allow zpools to be added to and removed from running zones dynamically in future updates?
    A: Work in this area is currently under investigation.
    Q: Any plans to release the Solaris 11 source code, i.e. OpenSolaris?
    A: We currently can't comment on publicly releasing the source code. If you need/want this access, please let your Oracle account team know.
    Q: What about VirtualBox and Solaris 11 for virtualization?
    A: Solaris 11 works great with VirtualBox, as both a client and a host system.
    Q: Will Oracle DB software eventually be supplied as IPS packages? When?
    A: We don't have a date yet, but this is actively being worked on.
    Q: What are the new artifacts in Oracle Solaris 11 compared to previous versions?
    A: There are quite a few, actually. The best start is to look at our "Evaluate Solaris 11" page, where you can also find a Transition Guide: http://www.oracle.com/technetwork/server-storage/solaris11/overview/evaluate-1530234.html
    Q: So, this seems just like Red Hat's YUM environment?
    A: IPS offers certain features beyond those in YUM or other packaging systems. For example, IPS works with ZFS and Solaris Boot Environments to provide a safe environment for software lifecycle management, so that changes can be reverted by switching to an older boot environment.
    Q: With Zones on Solaris 11, can I do paravirtualization?
    A: The great thing about zones is that you don't *need* paravirtualization. You're making the same direct kernel calls that you would outside of a zone. It's an incredibly significant performance win over hypervisor-based virtualization.
    Q: Are zones/containers officially supported to run Oracle Databases? EBIZ?
    A: Hi Calvin, the answer is yes. Here is the support matrix for the DB: http://www.oracle.com/technetwork/database/virtualizationmatrix-172995.html
    Q: I've found some nasty bugs in Solaris 11 (one of them today) that have been fixed in community forks (i.e., Illumos). Will Oracle ever restart collaboration with the community?
    A: We continue to work with the community, just not as openly on all projects as we did before (IPS, for example, is an open project), and the source of more than half of the Solaris packages is posted on our open source websites. I can't comment on what we will do in the future. And with regard to bugs, please file them through the support organization and we will get them resolved.
    Q: Is zpool vdev removal on the fly now possible?
    A: This is actively being investigated, although we don't have a date for when this feature will be available.
    Q: Is pgstat now the official replacement for corestat?
    A: It's intended to provide similar functionality.
    Q: Where are the open source websites?
    A: For Oracle Solaris, visit http://www.oracle.com/technetwork/opensource/systems-solaris-1562786.html
    Q: As a cloud-scale virtualization, is it going to be easier to move zones between machines - maybe even automatically in case of a hardware failure?
    A: Hi Gashaw, we already have customers that have implemented what they refer to as "flying zones" that they can move around very easily. They use Solaris Cluster to do this.
    Q: What about a VMware vMotion-like feature?
    A: We have secure live migration with both Logical Domains on SPARC T-series systems and with Oracle VM on x86 systems.
    Q: When running Solaris 10/11 on an enterprise server with a lot of zones, what are the best-practice commands to show that the system is running fine and has enough hardware resources (for example CPU, memory, I/O, system load), and what are the recommended values?
    A: For Solaris 11, look into the new zonestat(1M) command, which provides a great deal of information about zone utilization. In addition, there is new work underway on additional observability in areas such as per-zone file system I/O.
    Q: Have Java optimizations been done for Solaris 11? For x86 platforms too? Where can I find more detail about this?
    A: There is lots of work that goes into optimizing Java for Oracle Solaris 10 & 11 on both SPARC and x86. See http://www.oracle.com/technetwork/articles/servers-storage-dev/solarisforjavadevelop-168642.pdf
    Q: What is meant by "ZFS Shadow Migration"?
    A: It's a way to migrate data from another file system to ZFS: http://docs.oracle.com/cd/E23824_01/html/E24456/filesystem-3.html
    Q: Is flash archive available with S11?
    A: Flash archive is not. There is a procedure for disaster recovery, and we're working on a modern archive-based deployment tool for a future update. The disaster recovery tool is here: http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-091-sol-dis-recovery-489183.html You can also use the Distribution Constructor to build common golden images.
    Q: Will Solaris 11 be available on the ODA soon?
    A: The idea's under evaluation - we'll share your interest with the team.
    Q: What steps can be taken to ensure that breaches of security are identified quickly?
    A: There are a number of tools, including the "bart" tool and "pkg verify", to ensure that software has not been compromised. Solaris Audit can also be used to detect unauthorized access, and you can use Immutable Zones to protect against compromise. There are a wide variety of security tools, and I've covered only a few.
    Q: What is the relation between Solaris and Java 7 speed optimization?
    A: There is constant work done between the Oracle Solaris and Java teams on performance optimizations. See http://docs.oracle.com/javase/7/docs/technotes/guides/vm/performance-enhancements-7.html for examples.
    Q: What is different about the Solaris 11 installation compared to Solaris 10, and where can I find the document describing basic repository concepts?
    A: The best place to start is: http://www.oracle.com/technetwork/server-storage/solaris11/index.html
    Hope you found the post useful. For questions, input, or requests for the second half of the QnA, please find the comment section below. -- charlie

    Read the article

  • Pathfinding and BSP with Box2D

    - by Amplify91
    I'm looking into implementing AI in my 2D side-scrolling platformer, and I'm looking into using algorithms such as A*. For many kinds of pathfinding, we need some sort of grid or systems of nodes or polygon areas. My problem is that I am using Box2d for physics and I am not sure how best to create a structure that my AI can use besides placing individual nodes manually (something I really want to avoid) and using some sort of steering behavior. My level design is tile-based with each tile being about half of the height/width of my main character. The tiles are not all square (some are sloped). I'd like to have a system that can see what the terrain looks like for pathfinding and also keep track of the positions of other actors such as enemies. I'd like to avoid directly placing any nodes into my level design except for possible endpoints or goals. This question is related: How do you do AI path following within a 2d physics engine like farseer/box2d?, but it doesn't specify what kind of structure I could use instead of a list of nodes. I'm looking for some kind of grid or type of BSP that I can query for algorithms like A*.
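    One structure that fits the constraints in the question (a sketch under assumptions, with invented names like GridMap, not code from the linked posts): derive the A* grid directly from the existing tile data and keep Box2D purely for physics, stamping dynamic actors into the grid each frame instead of placing nodes by hand. For example:
        // Hypothetical walkability grid derived from tile data; sloped tiles could be
        // marked walkable with a movement-cost penalty instead of a plain boolean.
        public class GridMap {
            private final boolean[][] solid;     // static terrain, built once from the tile map
            private final boolean[][] occupied;  // dynamic actors, refreshed every frame

            public GridMap(boolean[][] solidTiles) {
                this.solid = solidTiles;
                this.occupied = new boolean[solidTiles.length][solidTiles[0].length];
            }

            // Re-stamp enemies/actors each frame from their Box2D body positions.
            public void markActor(float worldX, float worldY, float tileSize) {
                int tx = (int) (worldX / tileSize);
                int ty = (int) (worldY / tileSize);
                if (inBounds(tx, ty)) occupied[ty][tx] = true;
            }

            public void clearActors() {
                for (boolean[] row : occupied) java.util.Arrays.fill(row, false);
            }

            // What an A* implementation queries during each neighbor expansion.
            public boolean isWalkable(int tx, int ty) {
                return inBounds(tx, ty) && !solid[ty][tx] && !occupied[ty][tx];
            }

            private boolean inBounds(int tx, int ty) {
                return ty >= 0 && ty < solid.length && tx >= 0 && tx < solid[0].length;
            }
        }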

    Read the article

  • Is it useful to unit test methods where the only logic is guards?

    - by Vaccano
    Say I have a method like this:
    public void OrderNewWidget(Widget widget)
    {
        if ((widget.PartNumber > 0) && (widget.PartAvailable))
        {
            WidgetOrderingService.OrderNewWidgetAsync(widget.PartNumber);
        }
    }
    I have several such methods in my code (the front half of an async Web Service call), and I am debating whether it is useful to get them covered with unit tests. Yes, there is logic here, but it is only guard logic. (Meaning: I make sure I have the stuff I need before I allow the web service call to happen.) Part of me says "sure you can unit test them, but it is not worth the time" (I am on a project that is already behind schedule). But the other side of me says: if you don't unit test them, and someone changes the guards, then there could be problems. The first part of me says back: if someone changes the guards, then you are just making more work for them (because now they have to change the guards and the unit tests for the guards). For example, if my service assumes responsibility for checking Widget availability, then I may not want that guard any more. If it is under unit test, I have to change two places now. I see pros and cons both ways, so I thought I would ask what others have done.
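    For what it's worth, a guard test for a method like this can be tiny. Below is a sketch in Java (the poster's code is C#; FakeOrderingService and the other names are invented for illustration) showing the two cases that pin the guard's behavior:
        import org.junit.Test;
        import static org.junit.Assert.*;

        public class OrderNewWidgetGuardTest {
            // Hand-rolled fake standing in for the async ordering service (hypothetical name).
            static class FakeOrderingService {
                boolean called = false;
                void orderNewWidgetAsync(int partNumber) { called = true; }
            }

            static class Widget {
                final int partNumber;
                final boolean partAvailable;
                Widget(int n, boolean a) { partNumber = n; partAvailable = a; }
            }

            // The guard under test, inlined here to keep the sketch self-contained.
            static void orderNewWidget(Widget w, FakeOrderingService svc) {
                if (w.partNumber > 0 && w.partAvailable) {
                    svc.orderNewWidgetAsync(w.partNumber);
                }
            }

            @Test
            public void ordersWhenPartIsValidAndAvailable() {
                FakeOrderingService svc = new FakeOrderingService();
                orderNewWidget(new Widget(42, true), svc);
                assertTrue(svc.called);
            }

            @Test
            public void skipsOrderWhenPartIsUnavailable() {
                FakeOrderingService svc = new FakeOrderingService();
                orderNewWidget(new Widget(42, false), svc);
                assertFalse(svc.called);
            }
        }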

    Read the article

  • Where to find Hg/Git technical support?

    - by Rook
    Posting this as a kind of favour for a former colleague, so I don't know the exact circumstances, but I'll try to provide as much info as I can... A friend from my old place of employment (a maritime research institute; half government, half commercial funding) has asked me if I could find out who provides commercial technical support for the two major DVCSs of today - Git and Mercurial. They have been using a VCS for years (Subversion while I was there; I don't know what they're using now - probably the same), and now they're renewing their software licences (they have to submit a plan some time in advance for everything... then it goes "through the system"). Although they will be keeping Subversion as well, they would like to justify introducing a DVCS as an alternative system (most people root for Mercurial since it seems simpler; they are mostly engineers and physicists there, not that interested in checking Git repos for corruption or the finer workings of Git, but I believe either of the two could "pass") - but it has to have a price (which can be zero; no problem there) and some sort of official technical support. It is a pro forma matter, but it has to be specified. Most of the people there are using one of the two already, but this has to be made official. So, I'm asking you - do you know where one could go for Git or Mercurial technical support (it can be commercial)? Technical forums and the like are out of the question. It has to work on the principle: I have a problem; I post a question with the details; I get an answer within a specified time. The answer can be "we cannot do that," but it has to be an official answer given in the agreed time. I'm sure by now most of you understand what I'm asking, but if not - post a comment or similar. Also, if you can think of any reasons which could justify introducing Git/Hg from a technical and administrative viewpoint, feel free to write them down as well.

    Read the article

  • Personal knowledge base [closed]

    - by AlexLocust
    Possible Duplicate:
    How do you manage your knowledge base?
    Hello. I currently use a plain writing pad for scratch notes while doing research, solving problems, or working out workarounds. But, you know, it's really difficult to remember all the details of a solution and its alternatives after a few months, and a writing pad is not very useful for that. Usually the writing pad doesn't contain half the needed information: links found on the internet, output of test commands, and so on. Now I'm looking for a tool for storing information about my research and its results. Basic requirements:
    ability to store files;
    ability to store formatted text (some kind of HTML would be nice) with hyperlinks and code snippets;
    tags on notes;
    easy to use.
    Right now I'm thinking about some kind of file-system-organized storage, but I suspect it will be inconvenient. Another thought is a local wiki or blog. So, my question is: how do you organize your knowledge base? What tools do you use, and what are their pros and cons?

    Read the article

  • Inheriting projects - General Rules? [closed]

    - by pspahn
    Possible Duplicate:
    When is a BIG Rewrite the answer?
    Software rewriting alternatives
    Are there any actual case studies on rewrites of software success/failure rates?
    When should you rewrite?
    We're not a software company. Is a complete re-write still a bad idea?
    Have you ever been involved in a BIG Rewrite?
    This is an area of discussion I have long been curious about, but overall, I generally lack the experience to give myself an answer that I would fully trust. We've all been there: a new client shows up with a half-complete project they are looking to finish and launch. For whatever reason, they fired their previous developer, and it's now up to you to save the day. I am just finishing up a code review for a new client, and in my estimation it would be better to scrap what the previous developers built and start from scratch. There are a ton of reasons why I am leaning this way, but it still makes me nervous, since the client isn't going to want to hear "those last guys built you a big turd, and I can either polish it, or throw it in the trash". What are your general rules for accepting these projects? How do you determine whether it will be better to start from scratch or continue with the existing code base? What other extra steps might you take to help control client expectations, since the previous developer may have inflated those expectations beyond a reasonable level? Any other general advice?

    Read the article

  • Euclidean space and vector magnitude

    - by Starkers
    Below we have distances from the origin calculated in two different ways, giving the Euclidean distance, the Manhattan distance, and the Chebyshev distance. Euclidean distance is what we use to calculate the magnitude of vectors in 2D/3D games, and that makes sense to me. Let's say we have a vector that gives us the range a spaceship with limited fuel can travel: if we calculated this with the Manhattan metric, our ship could travel a distance of X if it were travelling horizontally or vertically, but the second it attempted to travel diagonally it could only travel X/2! So, like I say, Euclidean distance does make sense. However, I still don't quite get how we calculate 'real' distances from a vector's magnitude. Here are two points, purple at (2,2) and green at (3,3). We can subtract one point from the other to derive a vector. Let's create a vector to describe the magnitude and direction of purple from green:
        d = purple - green
        d = (purple.x, purple.y) - (green.x, green.y)
        d = (2, 2) - (3, 3)
        d = <-1, -1>
    Let's derive the magnitude of the vector via Pythagoras to get a Euclidean measurement:
        euc_magnitude = sqrt((x*x)+(y*y))
        euc_magnitude = sqrt((-1*-1)+(-1*-1))
        euc_magnitude = sqrt((1)+(1))
        euc_magnitude = sqrt(2)
        euc_magnitude = 1.41
    Now, if the answer had been 1, that would make sense to me, because 1 unit (in the direction described by the vector) from the green is bang on the purple. But it's not. It's 1.41. And 1.41 units in the direction described, to me at least, makes us overshoot the purple by almost half a unit. So what do we do to the magnitude to allow us to calculate real distances on our point graph? Worth noting I'm a beginner just working my way through the theory. Haven't programmed a game in my life!
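    A worked check of the step the question is circling (added here as a sketch, not part of the original post): sqrt(2) does not overshoot, because "1 unit" along the diagonal is not the same as one grid cell. Normalize the vector, then scale by the magnitude, and you land exactly on the target:
        d_hat = d / |d| = (-1, -1) / sqrt(2) ≈ (-0.7071, -0.7071)
        green + |d| * d_hat = (3, 3) + sqrt(2) * (-0.7071, -0.7071)
                            = (3, 3) + (-1, -1)
                            = (2, 2)
                            = purple
    So the magnitude needs no correction; the apparent overshoot comes from counting 1.41 in whole grid cells instead of along the unit-length direction vector.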

    Read the article

  • What Poor Project Management Might Be Costing You

    - by Sylvie MacKenzie, PMP
    For project-intensive organizations, capital investment decisions define both success and failure. Getting them wrong—the risk of delays and schedule and cost overruns is ever present—introduces the potential for huge financial losses. The resulting consequences can be significant and directly impact both a company’s profit outlook and its share price performance—which in turn is the fundamental measure of executive performance. This intrinsic link between long-term investment planning and short-term market performance is investigated in the independent report Stock Shock, written by a consultant from Clarity Economics and commissioned by the EPPM Board. A new international steering group organized by Oracle, the EPPM Board brings together senior executives from leading public- and private-sector organizations to explore the critical role played by enterprise project and portfolio management (EPPM). Stock Shock reviews several high-profile recent project failures and, combined with other research, reviews the lessons to be learned. It analyzes how portfolio management is an exercise in balancing risk and reward, a process that places the emphasis firmly on executives to correctly determine which potential investments will deliver the greatest value and contribute most to the bottom line. Conversely, it also details how poor evaluation decisions can quickly impact the overall value of an organization’s project portfolio and compromise long-range capital planning goals.
    Failure to Deliver—In Search of ROI
    The report also cites figures from an Economist Intelligence Unit survey, which found that more organizations (12 percent) expect to deliver planned ROI less than half the time than claim to deliver it 90 percent or more of the time (11 percent). This is linked to a recent report from Booz & Co. showing that the average tenure of a global chief executive has fallen from 8.1 years to 6.3 years. “Senior executives need to begin looking at effective project delivery not as a bonus, but as an essential facet of business success,” according to Stock Shock author Phil Thornton. “Consolidated and integrated visibility into individual projects is the most practical solution to overcoming these challenges, which explains the increasing popularity of PPM technologies as an effective oversight and delivery platform.” Stock Shock is available for download on the EPPM microsite at http://www.oracle.com/oms/eppm/us/stock-shock-report-1691569.html

    Read the article

  • LWJGL texture bleeding fix won't work

    - by user1990950
    I tried a lot of things to fix texture bleeding, but nothing works. I don't want to add a transparent border around my textures, because I already have too many and it would take too much time, and I can't do it in code because I'm loading the textures with Slick. My textures are separate textures, and they seem to wrap around to the other side (texture bleeding). Here are the textures that are "bleeding": the head, body, arm and leg are separate textures. Here's the code I'm using to draw a texture:
        public static void drawTextureN(Texture texture, Vector2f position, Vector2f translation, Vector2f origin, Vector2f scale, float rotation, Color color, FlipState flipState) {
            texture.setTextureFilter(GL11.GL_NEAREST);
            color.bind();
            texture.bind();
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
            GL11.glTranslatef((int) position.x, (int) position.y, 0);
            GL11.glTranslatef(-(int) translation.x, -(int) translation.y, 0);
            GL11.glRotated(rotation, 0f, 0f, 1f);
            GL11.glScalef(scale.x, scale.y, 1);
            GL11.glTranslatef(-(int) origin.x, -(int) origin.y, 0);
            float pixelCorrection = 0f;
            GL11.glBegin(GL11.GL_QUADS);
            GL11.glTexCoord2f(0, 0); GL11.glVertex2f(0, 0);
            GL11.glTexCoord2f(1, 0); GL11.glVertex2f(texture.getTextureWidth(), 0);
            GL11.glTexCoord2f(1, 1); GL11.glVertex2f(texture.getTextureWidth(), texture.getTextureHeight());
            GL11.glTexCoord2f(0, 1); GL11.glVertex2f(0, texture.getTextureHeight());
            GL11.glEnd();
            GL11.glLoadIdentity();
        }
    I tried a half-pixel correction, but it didn't make any sense to me because of GL12.GL_CLAMP_TO_EDGE, so I set pixelCorrection to 0, and it still won't work.
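    One hedged guess, given that the quad samples all the way out to texture coordinate 1.0: Slick pads non-power-of-two images up to power-of-two textures, so coordinate 1.0 can land in the padding, which looks exactly like the texture wrapping to the other side. Slick's Texture.getWidth()/getHeight() return the image's extent as a fraction of the padded texture, so a sketch of the quad clamped to the real image region (untested; assumes the Slick Texture API) would be:
        // Normalized extent of the actual image inside the (possibly padded) texture.
        float u = texture.getWidth();
        float v = texture.getHeight();
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glTexCoord2f(0, 0); GL11.glVertex2f(0, 0);
        GL11.glTexCoord2f(u, 0); GL11.glVertex2f(texture.getImageWidth(), 0);
        GL11.glTexCoord2f(u, v); GL11.glVertex2f(texture.getImageWidth(), texture.getImageHeight());
        GL11.glTexCoord2f(0, v); GL11.glVertex2f(0, texture.getImageHeight());
        GL11.glEnd();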

    Read the article

  • Math questions at a programmer interview?

    - by anon
    So I went to an interview at Samsung here in Dallas, Texas. The way the recruiter described the job, he didn't make it sound like it was too math-oriented. The job basically involved graphics programming and C++. Yes, math is implied in graphics programming, especially shaders, but I still wasn't expecting this... The whole interview lasted about an hour and a half and they asked me nothing but math-related questions. They didn't ask me a single programming question, which I found odd. About all they did was ask me how to write certain math routines as a C++ function, but that's about it. What about programming philosophy questions? Design patterns? Code-correctness? Constness? Exception safety? Thread safety? There are a zillion topics that they could have covered. But they didn't. The main concern I have is that they didn't ask any programming questions. This basically implies to me that any programmer who is good at math can get a job here, but they might put out terrible code. Of course, I think I bombed the interview because I haven't used any sort of linear algebra in about a year and I forget math easily if I haven't used it in practice for a while. Are any of my other fellow programmers out there this way? I'm a game programmer too, so this seems especially odd. The more I learn, the more old knowledge that gets "popped" out of my "stack" (memory). My question is: Does this interview seem suspicious? Is this a typical interview that large corporations have? During the interview they told me that Google's interview process is similar. They have multiple, consecutive interviews where the math problems get more advanced.

    Read the article

  • Google local search rankings: is it possible without the use of citations?

    - by bybe
    I have a client who wants a website design for his self-run business. Basically he is a self-employed plumber, so his home address is not visitable by the public; the problem is that he does not want his home address visible on the internet at all, for one reason or another. I have explained to him the benefits of having his address visible, both for customer trust and for Google's local search algorithms (citations - visible address details) on various directories including Google Maps and Google Places. But he is clear that he does not want his address online and wants SEO and web design without any citations. Now, I care about my reputation in my local area and do not like doing half-cut jobs. If I do SEO, I want it to be the best it can be; otherwise the word of mouth from that customer could be that after my services his site is nowhere to be seen (I know you can't keep them all happy, but nonetheless). This is kind of new for me, since Google introduced local rankings, and something I've never had to do before. So my question is fairly simple, and I hope that people who reply have some kind of experience in attempting to rank websites locally without citations: is it even possible to rank a local website with Google's local algorithms without the use of citations (address information)?

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise
    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger, because the amount of data is much bigger - so much bigger that our approach in the previous post won't scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer:
    The actual data that needs to be used lives in a database, not in a spreadsheet.
    The actual data is much, much bigger - too big to fit into the normal R memory space and too big to want to move across the network.
    The overall process needs to run fast - much faster than a single processor.
    The actual data needs to be kept secure - another reason not to move it from the database and across the network.
    The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes.
    In this post, we will show how we moved from the sample data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof-of-concept.
    Getting Started with the Database
    At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise - we had sample data but we did not have the full customer data yet. So our database was empty. But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The code shown in the original post takes our sample data SimpleMWRRData and easily turns it into a new Oracle database table called IRR_DATA via ore.create(); it also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table. At this point, we have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we proceeded to test our R function against database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, then called our IRR function. This worked. No SQL coding required!
    Going from Crawling to Walking
    Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in the first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1. In our example we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running it. What actually happens is that ore.groupApply uses the Oracle database to perform the work. The Oracle database is what actually splits the IRR_DATA table by ACCOUNT; it then takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function, and finally combines all the individual results from the calls to the R function. This is significant because the embedded R engine only needs to deal with the data for a single account at a time. Whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data for a single account in R memory (not the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, the actual SimpleMWRR calculation is run by the embedded R engine on the database server, so the IRR_DATA never leaves the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once.
    From Walking to Running
    ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE: Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof. Oracle R Enterprise provides a SQL equivalent to ore.groupApply, which it refers to as "rqGroupEval". To use rqGroupEval via SQL, a bit of simple setup is needed: basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we define using the database's pipelined table function mechanisms. Once that initial setup of rqGroupEval is done for the IRR_DATA table, the next step is to define our R function to the database, which we do via a call to ORE's rqScriptCreate. Now we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipelined table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input; you can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement, which the database uses to identify the columns and datatypes to expect the R function to return. The fourth argument is the column of the input table to split/group by. The final argument is the name of the R function as you defined it when you called rqScriptCreate().
    The Real-World Results
    In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions. For the full-scale customer test, we loaded the customer data onto a half-rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features: in the SQL used in the customer proof, we put a parallel hint on the cursor that is the input to our rqGroupEval function, and that is all we need to do to enable Oracle to use parallel R engines. From the Real-Time SQL Monitor screenshots taken when we ran this during the proof of concept, you can notice a few things:
    The SQL completed in 110 seconds (1.8 minutes).
    We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation).
    We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation).
    We ran with 72 degrees of parallelism spread across 4 database servers.
    Most of our 110 seconds was spent in the "External Procedure call" event.
    On average, we performed 8,200 executions of our R function per second (911k accounts / 110s).
    On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts).
    On average, we did 41,000 single-time-period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods).
    On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s).
    R + Oracle R Enterprise: Best of R + Best of Oracle Database
    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.
    In a future post, we will take the same R function and show how the Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.
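    To make the call pattern concrete, here is a hedged sketch of invoking the rqGroupEval-style function described above from a client program (shown as Java/JDBC; the connection details are invented, and the five arguments simply follow the order the post describes - input cursor, extra-parameter argument, dummy select, grouping column, registered R function name - so treat it as an illustration, not verified ORE syntax):
        import java.sql.*;

        public class IrrGroupEvalClient {
            public static void main(String[] args) throws SQLException {
                // Five arguments, in the order described in the post above.
                String sql =
                    "select * from table(irr_dataGroupEval(" +
                    " cursor(select /*+ parallel(t) */ * from IRR_DATA t)," + // input rows
                    " NULL," +                                                // extra inputs to the R function (none here)
                    " 'select ''acct'' ACCOUNT, 1 IRR from dual'," +          // dummy select: output shape (assumed columns)
                    " 'ACCOUNT'," +                                           // split/group-by column
                    " 'SimpleMWRR'))";                                        // R function name from rqScriptCreate()
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/orcl", "rquser", "rqpass");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(sql)) {
                    while (rs.next()) {
                        System.out.println(rs.getString("ACCOUNT") + " -> " + rs.getDouble("IRR"));
                    }
                }
            }
        }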

    Read the article

  • Four Proven Advantages of Online Learning | Outside Cost, Accessibility or Flexibility

    - by Mohit Phogat
    Coursera believes that online courses complement and supplement traditional education (versus the common misconception that online will “replace” traditional). Our research shows that Coursera’s platform, when used concurrently with a traditional classroom setup, is ideal for “blended learning” (i.e., students watch lectures pre-class, then class time focuses on interactive work and discussion). Additionally, we agree with Brad Zomick of SkilledUp—an online learning aggregator—who acknowledges an online course “isn’t an alternative at all but rather a different path with its own rewards.” The advantages of Coursera and our apps for mobile were straightforward and conspicuous from the start: we’re free, open, and flexible to learners’ unique needs and style. Over the past two years, however, the evidence proves there are many more tangible benefits to open, online learning. In SkilledUp’s “The Advantages of Online Courses [Infographic]”—crafted from the findings of leading educational research—four observations stand out from the overt characteristics:
    Speedier Learning - “Research shows that online students achieve same or better learning results in about half the time as those in traditional courses”
    More Active, Engaged & Motivated - Learners thrive “when working with coursework that is challenging but within their capacity to master.”
    Tangible Skill Building - with an “improved attitude toward learning”
    Better Teaching Quality - Courses are taught by experts, with various multimedia and cutting-edge technology, and “are usually better organized than traditional courses”
    This is only the beginning, Courserians! Every day we hear your incredible stories of how open online courses enrich your lives and enhance your careers, while we study the steady stream of scientific, big-data research proving their worth on a large scale (such as UPenn’s latest research on the welcomed diversity in Coursera-hosted Wharton MBA courses). Our motto “Learning without Limits” reminds us that open, online courses give tremendous opportunity to those who might not otherwise have access (or time, or money) to study at a high-caliber institution. Source: Coursera

    Read the article

  • Meaning of offset in pygame Mask.overlap methods

    - by Alan
    I have a situation in which two rectangles collide, and I have to detect how much they overlapped so I can redraw the objects in a way that they are only touching each other's edges. It's a situation in which a moving ball should hit a completely unmovable wall and instantly stop moving. Since the ball sometimes moves multiple pixels per screen refresh, it is possible that it has entered the wall by more than half its surface by the time the collision is detected, in which case I want to shift its position back to the point where it only touches the edges of the wall. Here is the conceptual image of it: I decided to implement this with masks, and thought that I could supply the masks of both objects (wall and ball) and get the surface (as a square) of their intersection. However, there is also the offset parameter, which I don't understand. Here are the docs for the method:
    Mask.overlap
    Returns the point of intersection if the masks overlap with the given offset - or None if it does not overlap.
    Mask.overlap(othermask, offset) -> x,y
    The overlap tests use the following offsets (which may be negative):
        +----+----------..
        |A   | yoffset
        |  +-+----------..
        +--|B
        |xoffset
        |  |
        :  :

    Read the article

  • Donkey Kong Wall Shelves [DIY Project Inspiration]

    - by Asian Angel
    Are you looking for inspiration for a geeky DIY project to get into over the holiday weekend? Then take a look at this fantastic looking set of Donkey Kong wall shelves created by artist Igor Chak! From the website: So here is a Donkey Kong wall, strong, good looking but still has its character. The wall is made out of individual sections; each section is made out of durable but light carbon fiber, anodized aluminum pixels that are joined with strong stainless steel rods and toughened glass tops. The special mounts themselves are made out of steel and can support up to 60 lbs. Igor’s notes and additional images for the project can be found approximately half way down the webpage linked below. If Donkey Kong is not your favorite game, this could still inspire a shelving project focused on the one you like best! Donkey Kong Wall [via Neatorama]

    Read the article

  • Game-a-Week One

    - by Matt Christian
    Anyone who chats with me on a semi-regular basis knows I am absolutely horrible at completing things from beginning to end. Often I'll begin something, lose interest at some point, and end up moving on to the next thing. For example, I have 1/2 a full game created, 1/3 of a novel written, and half of a model set created. Needless to say, unless I have some sort of pressure to finish something, I don't stick to it. Recently, however, one of my online buddies challenged me to create a simple game. The start date was last Thursday, and the final game needed to be delivered by next Sunday (giving me just over a week). However, I am going out of town this Friday, so I will need to deliver it by Thursday, giving me exactly one week to develop a game. Here is what the game needed to include:
    The player should be able to shoot
    Shooting things should score points
    Sounds very simple, but given a single week to produce all the art assets plus the game, it isn't an easy task. So far I've developed:
    An animated main menu that loads via script files and allows the user to start a new game or exit
    A 3D game world the player can move around in, with an 'over-the-shoulder' camera
    HUD elements that display the player's current score
    A pause menu: when the player presses Esc they are shown a pause menu where they can resume by pressing Esc again, or quit by pressing Space
    There are also 2 items implemented that don't work perfectly:
    The JigLibX physics library integration
    An arrow symbol on the main menu that rotates to always point at your mouse
    I've got 2 days of development left, so hopefully I can get collision working, clean up some of the art, and get more of the camera functionality working. Also, I'll need to take some time to package the game up, which hopefully shouldn't take too long.

    Read the article

  • .NET development on Macs

    - by Jeff
    I posted the “exciting” conclusion of my laptop trade-ins and issues on my personal blog. The links, in chronological order, are posted below. While those posts have all of the details about performance and software used, I wanted to comment on why I like using Macs in the first place. It started in 2006 when Apple released the first Intel-based Mac. As someone with a professional video past, I had been using Macs on and off since college (1995 graduate), so I was never terribly religious about any particular platform. I’m still not, but until recently, it was staggering how crappy PCs were. They were all plastic, disposable, commodity crap. I could never justify buying a PowerBook because I was a Microsoft stack guy. When Apple went Intel, they removed that barrier. They also didn’t screw around with selling to the low end (though the plastic MacBooks bordered on that), so even the base machines were pretty well equipped. Every Mac I’ve had, I’ve used for three years. Other than the first one, I’ve also sold each one, for quite a bit of money. Things have changed quite a bit, mostly within the last year. I’m actually relieved, because Apple needs competition at the high end. Other manufacturers are finally understanding the importance of industrial design. For me, I’ll stick with Macs for now, because I’m invested in OS X apps like Aperture and the Mac versions of Adobe products. As a Microsoft developer, it doesn’t even matter though... with Parallels, I Cmd-Tab and I’m in Windows. So after three and a half years with a wonderful 17” MBP and an upgraded SSD, it was time to get something lighter and smaller (traveling light is critical with a toddler), and I eventually ended up with a 13” MacBook Air, with the i7 and 8 GB upgrades, and I love it. At home I “dock” it to a Thunderbolt Display.
    A new laptop
    .NET development on a Retina MacBook Pro with Windows 8
    Returning my MacBook Pro with Retina display
    .NET development on a MacBook Air with Windows 8

    Read the article

  • How can I strip down Ubuntu?

    - by Thomas Owens
    I'm trying to fix what I consider a bloated install of Ubuntu. When I install Ubuntu on a machine, I get things that I don't want: web browsers, office applications, media players, accessibility utilities, Ubuntu One, and so on. My goal is to create an install of Ubuntu that contains only the most minimal packages - the administrative tools and package manager, a GUI (my preference would be GNOME), a text editor, core drivers (video cards, network cards - wired and wireless - and input devices), and anything else I have to have to run a stable distribution. From there, I would like to pick and choose which packages I install to create my own customized system. Having played around with other distros like Arch and Slackware, I like how they provide a barebones install by default. However, I get trapped in a "configuration hell": I recently tried moving away from Ubuntu to Arch, but after spending 6 hours with it, I still didn't have a usable system. It was half configured, and I didn't have any usable software packages to let me work. Is there anything available that can help me? Either something like the openSUSE builder that lets you choose applications and packages for the CD, an advanced installation mode where I can choose the packages to install and which to ignore, or a guide on how to strip Ubuntu down to its bare bones? And I suppose a natural follow-up: once I have a stripped-down Ubuntu, will this affect updating at all? When Canonical releases the next version of Ubuntu, I don't want any bloatware reinstalled. And yes, most of the applications that come with Ubuntu I simply don't use. Ever.

    Read the article

  • Engineering as a Service

    - by jgelhaus
    Oracle Exadata Database Machine is known for great compute performance, and over the past few years it has also become known as a great platform for any type of Oracle Database workload, from data warehousing to online transaction processing (OLTP). But now organizations are turning to Oracle Exadata for business efficiencies and private cloud solutions—for consolidation and database as a service (DBaaS).
    University of Minnesota
    For an inside look at how DBaaS is working in the real world, it’s worth checking into the University of Minnesota’s database hotel. With more than 50,000 students, the University of Minnesota in Minneapolis is one of the largest universities in the United States. The university’s centralized IT group not only has to support all those students but also must provide support and services to more than 40 departments and colleges within the university. They have two Exadata Database Machine X2-2 half-rack systems from Oracle, with four database nodes each and roughly 30 terabytes of usable disk space on each of the Oracle Exadata systems. The university is using Oracle Real Application Clusters (Oracle RAC) for high availability and the Data Guard feature of Oracle Database, Enterprise Edition, for disaster recovery capabilities. The deployment has been live in production since May 2011.
    Overhead Door
    When it comes to overhead, revolving, sliding, or other specialty residential and commercial doors, Overhead Door is the worldwide leader. But when they needed to open doors with their customers through a better, faster, and more agile IT infrastructure, Overhead Door turned to Oracle and Oracle Exadata. Oracle Exadata Database Machine plays an important part in Overhead Door’s IT and business strategy. The organization has two Exadata Database Machine X2-2s deployed: one in production and one in development and testing.
    Read the full Oracle Magazine article: Engineering as a Service

    Read the article

  • C++ and function pointers assessment: lack of inspiration

    - by OlivierDofus
    I've got an assessment to give to my students. It's about C++ and function pointers. Their skill level is intermediate: it is the first year of a programming school after a bachelor's degree. To give you something precise, here's a sample solution to one of the 3 exercises they had to do in 30 minutes (the question was: "here's a version of the code written without function pointers; write down the same thing using function pointers"):
        typedef void (*fcPtr) (istream &);
        fcPtr ArrayFct [] = { Delete, Insert, Swap, Move };
        void HandleCmd (const string & Cmd)
        {
            string AvailableCommands ("DISM");
            string::size_type Pos;
            istringstream Flux (Cmd);
            char CodeOp;
            Flux >> CodeOp;
            Pos = AvailableCommands.find (toupper (CodeOp));
            if (Pos != string::npos)
            {
                ArrayFct [Pos](Flux);
            }
        }
    Any idea where I could find some inspiration? Some of the students have understood the principles, even though it's very hard for them to write C++ code. I know them, I know they're clever, and I'm pretty sure they should be very good project managers. So writing C++ code is not that important after all; understanding is the most important part (IMHO). I'm wondering about breaking the habit and asking half of the questions about the principle, or even better, giving a sample in another language and asking why it's better to use function pointers instead of classical programming (usually a big switch-case). Any idea where I could look? Find some inspiration?
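    On the "sample in another language" idea: the same dispatch-table principle can be shown in Java, where method references stand in for function pointers. A sketch (invented names, not part of the assessment) mirroring the HandleCmd solution above, which could anchor a "why is this better than a big switch?" discussion:
        import java.util.List;
        import java.util.Scanner;
        import java.util.function.Consumer;

        public class CommandDispatch {
            private static final String CODES = "DISM";

            // Dispatch table: the Java analogue of the fcPtr array in the C++ solution.
            private static final List<Consumer<Scanner>> TABLE = List.of(
                    CommandDispatch::doDelete,
                    CommandDispatch::doInsert,
                    CommandDispatch::doSwap,
                    CommandDispatch::doMove);

            static void doDelete(Scanner s) { System.out.println("delete: " + s.nextLine().trim()); }
            static void doInsert(Scanner s) { System.out.println("insert: " + s.nextLine().trim()); }
            static void doSwap(Scanner s)   { System.out.println("swap: "   + s.nextLine().trim()); }
            static void doMove(Scanner s)   { System.out.println("move: "   + s.nextLine().trim()); }

            // Mirrors HandleCmd: read the op code, index into the table, dispatch.
            static void handleCmd(String cmd) {
                Scanner flux = new Scanner(cmd);
                char codeOp = flux.next().charAt(0);
                int pos = CODES.indexOf(Character.toUpperCase(codeOp));
                if (pos >= 0) {
                    TABLE.get(pos).accept(flux); // no switch needed
                }
            }

            public static void main(String[] args) {
                handleCmd("d item42");  // -> delete: item42
                handleCmd("S a b");     // -> swap: a b
            }
        }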

    Read the article
