Search Results

Search found 21028 results on 842 pages for 'single player'.


  • Mobile Web Applications – A guide for professional development

    - by JuergenKress
    (Tobias Bosch, Stefan Scheidt, Torsten Winterberg / Opitz Consulting Deutschland GmbH). There is real hype around mobile solutions. Smartphones and tablets are everywhere, and frontend architecture is changing quickly to adopt cross-browser technologies like HTML5 and extensive JavaScript-based development. In this book we introduce our software development process for building test-driven single-page JavaScript web applications, which we see as the future alongside native apps. We start with a short introduction to our RYLC showcase (known from our SOA articles), give a very short introduction to JavaScript, then cover jQuery Mobile, AngularJS, testing and backend communication, and close by deploying our RYLC web app as a hybrid app using the PhoneGap (Cordova) framework. Don’t expect too much theory – it’s a practical guide explaining how the RYLC web app was built, to kickstart your own development. Currently available only in German, as paperback and eBook. For regular information, become a member of the WebLogic Partner Community: visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Leading Analyst Firm Positions Oracle in Leaders Quadrant for Web Content Management

    - by Christie Flanagan
    Gartner, Inc. has named Oracle a Leader in its latest “Magic Quadrant for Web Content Management.” Gartner’s Magic Quadrants position vendors within a particular quadrant based on their completeness of vision and their ability to execute on that vision. According to Gartner, “WCM plays an increasingly important role in business performance. It has become the central point of coordination for initiatives involving the enterprise's online presence, and these initiatives have become more sophisticated and more important to enterprises' business strategies. Thus, WCM is key for organizations wishing to execute a strategy of OCO (online channel optimization) that embraces areas such as customer experience management, e-commerce, digital marketing, multichannel marketing and website consolidation.” Gartner continued, “Leaders should drive market transformation. Leaders have the highest combined scores for Ability to Execute and Completeness of Vision. They are doing well and are prepared for the future with a clear vision and a thorough appreciation of the broader context of OCO. They have strong channel partners, a presence in multiple regions, consistent financial performance, broad platform support and good customer support. In addition, they dominate in one or more technologies or vertical markets. Leaders are aware of the ecosystem in which their offerings need to fit. Leaders can: demonstrate enterprise deployments; offer integration with other business applications and content repositories; provide a vertical-process or horizontal-solution focus.” Oracle WebCenter, the engagement platform powering exceptional experiences for customers, employees and partners, connects people and information by bringing together the most complete portfolio of portal, Web experience management, content, social, and collaboration technologies into a single integrated product suite. Oracle WebCenter also provides the foundation for Oracle Fusion Middleware and Oracle Fusion Applications to deliver a next-generation user experience. To see the latest reports, webcasts and demonstrations about Oracle's web experience management solution, Oracle WebCenter Sites, please visit our Connected Customer Experience Resource Center.

    Read the article

  • Can't detect collision properly using Rectangle.Intersects()

    - by Daniel Ribeiro
    I'm using a single sprite sheet image as the main texture for my breakout game. (The sprite sheet image didn't survive this excerpt.) My code is a little confusing, since I'm creating two elements from the same texture using Points to represent each element's size and its position on the sheet, a Vector2 to represent its position on the viewport, and a Rectangle that represents the element itself.

        Texture2D sheet;
        Point paddleSize = new Point(112, 24);
        Point paddleSheetPosition = new Point(0, 240);
        Vector2 paddleViewportPosition;
        Rectangle paddleRectangle;
        Point ballSize = new Point(24, 24);
        Point ballSheetPosition = new Point(160, 240);
        Vector2 ballViewportPosition;
        Rectangle ballRectangle;
        Vector2 ballVelocity;

    My initialization is a little confusing as well, but it works as expected:

        paddleViewportPosition = new Vector2((GraphicsDevice.Viewport.Bounds.Width - paddleSize.X) / 2, GraphicsDevice.Viewport.Bounds.Height - (paddleSize.Y * 2));
        paddleRectangle = new Rectangle(paddleSheetPosition.X, paddleSheetPosition.Y, paddleSize.X, paddleSize.Y);
        Random random = new Random();
        ballViewportPosition = new Vector2(random.Next(GraphicsDevice.Viewport.Bounds.Width), random.Next(GraphicsDevice.Viewport.Bounds.Top, GraphicsDevice.Viewport.Bounds.Height / 2));
        ballRectangle = new Rectangle(ballSheetPosition.X, ballSheetPosition.Y, ballSize.X, ballSize.Y);
        ballVelocity = new Vector2(3f, 3f);

    The problem is I can't detect the collision properly, using this code:

        if (ballRectangle.Intersects(paddleRectangle))
        {
            ballVelocity.Y = -ballVelocity.Y;
        }

    What am I doing wrong?
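    A likely culprit, for what it's worth: both Rectangles are built from sprite-sheet coordinates and never updated, so Intersects() compares positions on the sheet rather than on the screen. Below is a minimal sketch of one fix, rebuilding the collision rectangles from the viewport positions each update - the names reuse the excerpt's fields, but this is an assumption about the intended design, not the poster's actual code:

        // Collision must be tested in viewport (screen) space, not sheet space;
        // sheet coordinates only say where to sample the texture when drawing.
        Rectangle paddleBounds = new Rectangle(
            (int)paddleViewportPosition.X, (int)paddleViewportPosition.Y,
            paddleSize.X, paddleSize.Y);
        Rectangle ballBounds = new Rectangle(
            (int)ballViewportPosition.X, (int)ballViewportPosition.Y,
            ballSize.X, ballSize.Y);

        if (ballBounds.Intersects(paddleBounds))
        {
            ballVelocity.Y = -ballVelocity.Y;
        }

    The sheet-space rectangles then survive only as source rectangles at draw time, e.g. spriteBatch.Draw(sheet, paddleViewportPosition, paddleRectangle, Color.White).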

    Read the article

  • Connecting a USB laptop to a RJ45 serial port

    - by Jon
    We are about to get our first managed switch at work (an HP ProCurve 2520G-24-PoE), and this lowly programmer gets to put on his admin hat and try to configure it. The switch has an RJ45 serial port for console access. My laptop has USB ports but no serial port. In fact, there isn't a single computer in the office with a serial port. I've seen USB-to-DB9 adapters, but I need to go from USB to RJ45 (serial). How would I go about accomplishing this? Do I need two adapters? Will USB-to-DB9 and then DB9-to-RJ45 work? Thanks in advance.

    Read the article

  • It could be worse....

    - by Darryl Gove
    As "guest" pointed out, in my file I/O test I didn't open the file with O_SYNC, so in fact the time was spent in OS code rather than in disk I/O. It's a straightforward change to add O_SYNC to the open() call, but it's also useful to reduce the iteration count - since the cost per write is much higher: ... #define SIZE 1024 void test_write() { starttime(); int file = open("./test.dat",O_WRONLY|O_CREAT|O_SYNC,S_IWGRP|S_IWOTH|S_IWUSR); ... Running this gave the following results: Time per iteration 0.000065606310 MB/s Time per iteration 2.709711563906 MB/s Time per iteration 0.178590114758 MB/s Yup, disk I/O is way slower than the original I/O calls. However, it's not a very fair comparison since disks get written in large blocks of data and we're deliberately sending a single byte. A fairer result would be to look at the I/O operations per second; which is about 65 - pretty much what I'd expect for this system. It's also interesting to examine at the profiles for the two cases. When the write() was trapping into the OS the profile indicated that all the time was being spent in system. When the data was being written to disk, the time got attributed to sleep. This gives us an indication how to interpret profiles from apps doing I/O. It's the sleep time that indicates disk activity.

    Read the article

  • How do I keep controversy in check?

    - by Aaron Digulla
    This is probably OT, but it's less OT here than on any other SO site, so please bear with me. I'm working on a new project, votEm. The goal is to give independent candidates a platform to introduce themselves and get elected to a political office. My main reason is that today it's too expensive to run for office. Some politicians in the US spend as much as 30 million dollars (!) on a single campaign. That money is better spent elsewhere. In a similar fashion, people who want to change countries like Egypt could use such a platform to present themselves. Now I expect a lot of emotions and pressure on my site. People with a lot of money (and a lot to lose) will try to game it (political parties, secret services of ... errr ... "not 100% democratic countries", big companies, ...). To avoid as many mistakes as possible, I need a list of resources, ideas and tips on how to keep such a site out of too much trouble. PS: I'd make this CW but the option seems to be gone...

    Read the article

  • Shared storage solution for our SQL Server backups

    - by Gokhan
    We have 3 clustered SQL Servers. We have 5+ multi-terabyte databases whose backup files (compressed using Quest LiteSpeed) are hitting over 600 GB each. We are required to keep at least one or two weeks (if we can) of weekly full backups, then 6 days of differential backups, and a week or two's worth of log backups locally. We are currently limited to 2 TB volumes from our SAN team; we can have multiple volumes, but they are expensive ($200 per raw TB per month), and having to deal with many backup volumes instead of a single big volume is difficult. I think it would be good if we could have shared network storage of 20 TB+ (RAID 10 or so) for all our servers to keep the backups on, with another department copying them to tape from the network storage and deleting files according to the retention period. If this box came with a built-in operating system (even Unix, as a complete file storage system), that would be fine too. What do you think - does this make sense to you, and is there any manufacturer that sells a storage product like that which works in a clustered environment? Thank you

    Read the article

  • Viewing file: protocol requests made from a swf

    - by Erik Vold
    I've got a swf file in an index.html file. The swf is basically loading images, but it requests an xml file which details where the image/asset files are located. Now the index file (which is just html, js, and css) works (i.e. the swf file is working) when used on this url: http://localhost:8500/core/index.html, which I'm able to do with the ColdFusion 8 single-server development environment. But when I access the file directly with this url: file:///C:/ColdFusion8/wwwroot/core/index.html, the swf file does not work. So I'm guessing that the swf is having trouble locating the files that it needs. The problem is I have no idea what file urls it is trying to access at the moment, and both Firebug and Fiddler are unable to tell me what requests are made over the file: protocol. So is there another tool that I can use?

    Read the article

  • I want Lotus Domino to only send one email to users that are both recipients and members of a cc'ed Lotus group.

    - by Marcus
    Lotus Domino 7, and now Lotus Domino 8.5. The scenario: a@mycompany writes an email to b@internet and cc's it to group@mycompany. a@mycompany is a member of group@mycompany. With the initial email, Domino is intelligent enough not to send the email that a@mycompany just wrote back to a@mycompany. But when b@internet replies to all (a@mycompany + group@mycompany), then a@mycompany gets this email twice, because he is not only the author but also a member of group@mycompany. During the smtp session the email is sent once, with the recipients set to a@mycompany and group@mycompany and a single esmtp id, so Domino should well be able to see that the mail should only be sent to a@mycompany once. Can I make Lotus Domino behave in this sane fashion?

    Read the article

  • Do you leverage the benefits of the open-closed principle?

    - by Kaleb Pederson
    The open-closed principle (OCP) states that an object should be open for extension but closed for modification. I believe I understand it and use it in conjunction with SRP to create classes that do only one thing. And I try to create many small methods that make it possible to extract out all the behavior controls into methods that may be extended or overridden in some subclass. Thus, I end up with classes that have many extension points, be it through dependency injection and composition, events, delegation, etc. Consider the following simple, extendable class:

        class PaycheckCalculator {
            // ...
            protected decimal GetOvertimeFactor() { return 2.0M; }
        }

    Now say, for example, that the overtime factor changes to 1.5. Since the above class was designed to be extended, I can easily subclass it and return a different overtime factor. But... despite the class being designed for extension and adhering to OCP, I'll modify the single method in question rather than subclassing, overriding the method in question and then re-wiring my objects in my IoC container. As a result I've violated part of what OCP attempts to accomplish. It feels like I'm just being lazy because the above is a bit easier. Am I misunderstanding OCP? Should I really be doing something different? Do you leverage the benefits of OCP differently? Update: based on the answers, it looks like this contrived example is a poor one for a number of different reasons. The main intent of the example was to demonstrate that the class was designed to be extended by providing methods that, when overridden, would alter the behavior of public methods without the need for changing internal or private code. Still, I definitely misunderstood OCP.
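    For contrast, here is a minimal sketch of the extension route the question describes - subclass and override instead of editing the method. The base class comes from the excerpt, but the subclass name, the virtual modifier it requires, and the IoC re-wiring line are illustrative assumptions:

        // Assumes the base method is declared virtual:
        //     protected virtual decimal GetOvertimeFactor() { return 2.0M; }
        class RevisedPaycheckCalculator : PaycheckCalculator
        {
            // New (hypothetical) policy: overtime now pays 1.5x instead of 2x.
            protected override decimal GetOvertimeFactor() { return 1.5M; }
        }

        // Re-wired at composition time rather than by editing PaycheckCalculator,
        // e.g. (container API hypothetical):
        // container.Register<PaycheckCalculator, RevisedPaycheckCalculator>();

    The point of the trade-off in the question is exactly this: the original class stays untouched, at the cost of one more class and one more registration.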

    Read the article

  • IIS 6 on x64 and long URLs

    - by mausch
    I have a very long URL on a site hosted on Windows 2003 x64 that looks like this: http://myhost/a_very_very_long_url_around_300_chars_long (i.e. a single, very long segment around 300 chars long). Problem is, I'm getting a 400 Bad Request response from HTTP.SYS (it doesn't even reach IIS). I can tell because these requests show up in system32\LogFiles\HTTPERR, e.g.:

        2009-09-17 19:51:29 200.123.179.9 3636 192.168.129.50 80 HTTP/1.1 GET /a_very_very_long_url_around_300_chars_long 400 - URL -

    I tried setting UrlSegmentMaxLength in the registry, and this fixes the issue on my Windows 2003 x86 box but not on the x64 production server. I tried this on another Win2k3 x64 server and it also failed. Any hints?
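    For reference, the HTTP.SYS setting the poster mentions lives under the HTTP service's Parameters key and only takes effect once HTTP.SYS is restarted. A sketch of the change (the value 512 is an arbitrary example; whether this key behaves differently on x64 is exactly what is in question here, and a full reboot is the safer way to restart HTTP.SYS):

        reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v UrlSegmentMaxLength /t REG_DWORD /d 512
        net stop http /y
        net start http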

    Read the article

  • Master Data Management – A Foundation for Big Data Analysis

    - by Manouj Tahiliani
    While Master Data Management has crossed the proverbial chasm and is on its way to becoming mainstream, businesses are being hammered by a new megatrend called Big Data. Big Data is characterized by massive volumes, high frequency, a variety of less-structured data sources such as email, sensors, smart meters, social networks, and weblogs, and the need to analyze vast amounts of data to determine value and improve management decisions. Businesses that have embraced MDM to get a single, enriched and unified view of master data - by resolving semantic discrepancies and augmenting the explicit master data information from within the enterprise with implicit data from outside the enterprise, like social profiles - will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like retail, communications, financial services, etc., that would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of the heterogeneous topology that leads to disparate, fragmented and incomplete master data. For analytical success from Big Data - in other words, ROI from Big Data investments - businesses need to acquire, organize and analyze the deluge of data to make better decisions. There will need to be a coexistence of structured and unstructured data, and a tight link maintained between the two to extract maximum insights. MDM is the catalyst that helps maintain that tight linkage by providing an understanding of the identity and characteristics of the persons, companies, products, suppliers, etc. associated with the Big Data, and thereby helps accelerate ROI. In my next post I will discuss patterns for coexisting Big Data solutions and MDM. Feel free to provide comments and thoughts on the above, as well as integration or architectural patterns.

    Read the article

  • What is an appropriate language for expressing initial stages of algorithm refinement?

    - by hydroparadise
    First, this is not a homework assignment, but you can treat it as such ;). I found the following question in the published paper The Camel Has Two Humps. I was not a CS major in college (I majored in MIS/Management), but I have a job where I find myself coding quite often. For a non-trivial programming problem, which one of the following is an appropriate language for expressing the initial stages of algorithm refinement? (a) A high-level programming language. (b) English. (c) Byte code. (d) The native machine code for the processor on which the program will run. (e) Structured English (pseudocode). What I do know is that you usually want to start your design by writing down pseudocode and then moving/writing in the desired technology (because we all do that, right?). But I never thought about it in terms of refinement. I mean, if you were the original designer, then you might have access to the original pseudocode. But realistically, when I have to maintain/refactor/refine somebody else's code, I just keep trucking with the language it currently resides in. Anybody have a definitive answer to this? As a side note, I did a quick scan of the paper, as I haven't read every single detail. It presents various score statistics; I can't find where the answers are within the paper.

    Read the article

  • SQL Server Database Settings

    - by rbishop
    For those using Data Relationship Management on Oracle DB this does not apply, but for those using Microsoft SQL Server it is highly recommended that you run with snapshot isolation mode. The Data Governance module will not function correctly without this mode enabled. All new Data Relationship Management repositories are created with this mode enabled by default. This mode makes SQL Server (2005+) behave more like Oracle DB, where readers simply see older versions of rows while a write is in progress, instead of readers being blocked by locks while a write takes place. Many common sources of deadlocks are eliminated. For example, if one user starts a 5-minute transaction updating half the rows in a table, without snapshot isolation everyone else reading the table will be blocked waiting. With snapshot isolation, they will see the rows as they were before the write transaction started. Conversely, if the readers had started first, the writer won't be stuck waiting for them to finish reading... the writes can begin immediately without affecting the current transactions. To make this change, make sure no one is using the target database (e.g. put it into single-user mode), then run these commands:

        ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON
        ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON

    Please make sure you coordinate with your DBA team to ensure tempdb is appropriately set up to support snapshot isolation mode, as the extra row versions are stored in tempdb until the transactions are committed. Let me take this opportunity to extremely strongly recommend that you use solid-state storage for your databases, with appropriate iSCSI, Fibre Channel, or SAN bandwidth. The performance gains are significant, and there is no excuse for not using 100% solid-state storage in 2013. Actually, unless you need to store petabytes of archival data, there is no excuse for using hard drives in any systems, whether laptops, desktops, application servers, or database servers. The productivity benefits alone are tremendous, not to mention power consumption, heat, etc.
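    A sketch of the whole sequence with the single-user step included. The database name is a placeholder, and SET SINGLE_USER WITH ROLLBACK IMMEDIATE forcibly rolls back open transactions, so treat this as an illustration to adapt rather than a drop-in script:

        -- Kick everyone out, enable snapshot isolation, then reopen the database.
        ALTER DATABASE [DB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON;
        ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON;
        ALTER DATABASE [DB] SET MULTI_USER;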

    Read the article

  • What is the maximum number of TCP connections I can have in Windows Server 2008?

    - by evilfred
    I would like to have as many connections (single connections from many different clients) as humanly possible in a server running on Windows Server 2008, in order to support a Comet-style application. The application is written in C#. The connections will not be chatty; they just need to be open (and stay open). Buying boatloads of memory and fast CPUs is not a problem. As far as I can tell, I will be limited to 65k simultaneous open connections per NIC - the maximum number of ports. Is this accurate? Or can I go beyond 65k connections per NIC somehow? It seems like there are server products for Linux, at least, that support hundreds of thousands of connections. How do they do this?

    Read the article

  • What's the proper term for a function inverse to a constructor - to unwrap a value from a data type?

    - by Petr Pudlák
    Edit: I'm rephrasing the question a bit. Apparently I caused some confusion because I didn't realize that the term destructor is used in OOP for something quite different - it's a function invoked when an object is being destroyed. In functional programming we (try to) avoid mutable state, so there is no equivalent to it. (I added the proper tag to the question.) Instead, I've seen that the record field for unwrapping a value (especially for single-valued data types such as newtypes) is sometimes called a destructor, or perhaps a deconstructor. For example, let's have (in Haskell):

        newtype Wrap = Wrap { unwrap :: Int }

    Here Wrap is the constructor and unwrap is what? The questions are: How do we call unwrap in functional programming? Deconstructor? Destructor? Or some other term? And to clarify, is this (or other) terminology applicable to other functional languages, or is it used just in Haskell? Perhaps also, is there any terminology for this in general, in non-functional languages? I've seen both terms, for example: "... Most often, one supplies smart constructors and destructors for these to ease working with them. ..." at the Haskell wiki, or "... The general theme here is to fuse constructor - deconstructor pairs like ..." at the Haskell wikibook (here it's probably meant in a bit more general sense), or

        newtype DList a = DL { unDL :: [a] -> [a] }

    "The unDL function is our deconstructor, which removes the DL constructor." ... in Real World Haskell.
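    To make the role of unwrap concrete, a small illustrative sketch: the record selector is an ordinary function, and pattern matching on the constructor performs the same unwrapping without it.

        newtype Wrap = Wrap { unwrap :: Int }

        -- Using the selector, which has type unwrap :: Wrap -> Int:
        double :: Wrap -> Int
        double w = 2 * unwrap w

        -- Using pattern matching; no selector needed:
        double' :: Wrap -> Int
        double' (Wrap n) = 2 * n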

    Read the article

  • Modular Database Structures

    - by John D
    I have been examining the code base we use at work, and I am worried about the size the packages have grown to. The actual code is modular; procedures have been broken down into small functional (and testable) parts. The issue I see is that we have 100 procedures in a single package - almost an entire domain model. I had thought of breaking these packages down - creating subdomains centered around the procedures' relationships to other objects: group a bunch of procedures that have 80% of their relationships to three tables, etc. The end result would be a lot more packages, but the packages would be smaller, and I feel the entire code base would be more readable - when procedures cross between two domain models, it is less of a struggle to figure out which package they belong to. The problem I now have is what the actual benefit of all this would really be. I looked at the general advantages of modularity: 1. Reusability 2. Asynchronous development 3. Maintainability. Yet when I consider our latest development, the procedures within the packages are already reusable. At this advanced stage we rarely require asynchronous development - and when it is required, we simply ladder the stories across iterations. So I guess my question is whether people know of reasons why you would break down classes rather than just the methods inside of classes? Right now I do believe there is an issue with these mega packages forming, but the only benefit I can really pin down for breaking them down is readability - something that experience gained from working with them would solve.

    Read the article

  • LDoms with Solaris 11

    - by Orgad Kimchi
    Oracle VM Server for SPARC (LDoms) release 2.2 came out on May 24. You can get the software and see the release notes, reference manual, and admin guide here on the Oracle VM for SPARC page. Oracle VM Server for SPARC enables you to create multiple virtual systems on a single physical system. Each virtual system is called a logical domain and runs its own instance of Oracle Solaris 10 or Oracle Solaris 11. The version of the Oracle Solaris OS software that runs on a guest domain is independent of the Oracle Solaris OS version that runs on the primary domain. So, if you run the Oracle Solaris 10 OS in the primary domain, you can still run the Oracle Solaris 11 OS in a guest domain, and if you run the Oracle Solaris 11 OS in the primary domain, you can still run the Oracle Solaris 10 OS in a guest domain. In addition, starting with the Oracle VM Server for SPARC 2.2 release, you can migrate a guest domain even if the source and target machines have different processor types: you can migrate a guest domain from a system with an UltraSPARC T2+ or SPARC T3 CPU to a system with a SPARC T4 CPU. To enable cross-CPU migration, the guest domain on the source and target system must run Solaris 11, and you also need to change the cpu-arch property value on the source system. For more information about Oracle VM Server for SPARC (LDoms) with Solaris 11 and cross-CPU migration, refer to the following white paper.
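    The cpu-arch property change mentioned above is made with the ldm command while the domain is not running; a rough sketch, where the domain name ldg1 is hypothetical and the admin guide for your release should be checked before relying on it:

        # Allow the domain to run on a generic CPU architecture so it can be
        # migrated across different SPARC CPU types:
        ldm stop-domain ldg1
        ldm set-domain cpu-arch=generic ldg1
        ldm start-domain ldg1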

    Read the article

  • Cursor seems to freeze on the first attempt at typing - Unity 3D, 12.04

    - by Denis
    It happens on the first attempt at typing, whether right after startup, 5 minutes later, or later still. The cursor (or maybe it's the system) seems to freeze, no matter which application I use, taking up to 5 seconds for what I typed to appear. Afterwards, everything is normal, even in other applications. @Anwar Shah suggested it could be a daemon waiting to run before the launching of the first application. Turning off Zeitgeist didn't help. It occurs only with Unity 3D; tested with Unity 2D, everything is fine. I tried changing some Compiz settings; nothing worked, although I haven't tested every single parameter. I also deactivated the ATI proprietary driver, with no effect. My system: AMD E350 1.6 GHz, 2 GB RAM, ATI graphics - Ubuntu 12.04, 64 bits. Update 1: the cursor blinks normally before I start typing. After the first character (which is not shown), it seems to freeze, taking 5 seconds to get back to normal. Very annoying, especially when you want to access login sites. Update 2: I tested on a different, older machine (Athlon 64 4800 x2, 4 GB RAM) with no problems - it takes 2 seconds, which is acceptable. I think it could be related to my specific hardware (Samsung RV415), but I'm not sure. Anyone experiencing something similar? Is this what I should expect, or can it be fixed or improved? Thanks.

    Read the article

  • How would I batch rename a lot of files using command-line?

    - by Whisperity
    I have a problem which I am unable to solve: I need to rename a great dump of files using patterns. I tried using this, but I always get an error. I have a folder with a lot of files inside. Running ls -1 | wc -l reports that I have something like 160000 files. The problem is that I wish to move these files to a Windows system, but most of them have characters like : and ? in their names, which makes the files inaccessible on said Windows-based systems. (As a "do not solve but deal with" method, I tried booting up a LiveCD on the Windows system and moving the files using the live OS. Under that Ubuntu, the files were readable and writable on the mounted NTFS partition, but when I booted back into Windows, it showed that the file was there but Windows was unable to access it in any fashion: rename, delete or open.) I tried running rename 's/\:/_' * inside the folder, but I got an "Argument list too long" error. Some searching revealed that this happens because I have so many files, and then I arrived here. The problem is that I don't know how to alter the command to suit my needs, as I always end up with various errors. Trying find -name '*:*' | xargs rename : _ gives:

        xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option
        syntax error at (eval 1) line 1, near ":"
        xargs: rename: exited with status 255; aborting

    Adding -0 after xargs turns the error message into:

        xargs: argument line too long

    These files are archive files generated by various PHP scripts. The best solution would be a chance to rename them before they are moved to Windows, but if there is no way to do that, we might have a way to rename the files while they are being moved to Windows. I use samba and proftpd to move the files. Unfortunately, graphical software is out of the question, as the server containing the files is what it is, a server, with only a command-line interface.
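    For the record, a sketch of the usual way around both errors - untested against this exact tree, assuming the Perl rename that Ubuntu shipped at the time, and extending the question's pattern to cover ? as well. find emits NUL-separated names, xargs -0 feeds them to rename in safely sized batches, and rename takes the substitution as its first argument, so no quoting ambiguity remains:

        # Replace every : and ? in file names with _ (current directory only):
        find . -maxdepth 1 -type f -name '*[:?]*' -print0 | xargs -0 rename 's/[:?]/_/g'

    Note the completed expression s/[:?]/_/g - the attempt in the question, s/\:/_, is missing its closing delimiter, so it would fail in Perl even once the argv limit is dodged.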

    Read the article

  • Basic collision direction detection on 2d objects

    - by Osso Buko
    I am trying to develop a platform game for Android using the ANdroid GL Engine (ANGLE), and I am having trouble with collision detection. I have two objects, both shaped as rectangles, with no change in rotation. Here is a scheme of the objects' attributes. What I am trying to do is: when objects collide, they block each other's movement in that direction. Every object has 4 booleans (bTop, bBottom, bRight, bLeft); for example, when bBottom is true, the object can't advance in that direction. I came up with a solution, but it seems it only works in one dimension - bottom and top, or right and left.

        public void collisionPlatform(MyObject a, MyObject b) {
            // first obj is player and second is a wall or a platform
            Vector p1 = a.mPosition;   // p1 = middle point of first object
            Vector d1 = a.mPosition2;  // width (mX) and height of first object
            Vector mSpeed1 = a.mSpeed; // speed vector of first object
            Vector p2 = b.mPosition;   // p2 = middle point of second object
            Vector d2 = b.mPosition2;  // width (mX) and height of second object
            Vector mSpeed2 = b.mSpeed; // speed vector of second object
            float xDist, yDist;  // distance between the middles of the two objects
            float width, height; // averages of the two objects' measurements, width = (width1 + width2) / 2
            xDist = (p1.mX - p2.mX); // calculate distance; if positive, first object is at the right
            yDist = (p1.mY - p2.mY); // if positive, first object is below
            width = d1.mX + d2.mX;   // calculate average measurements
            height = d1.mY + d2.mY;
            width /= 2;
            height /= 2;
            if (Math.abs(xDist) < width && Math.abs(yDist) < height) {
                // The two objects have collided
                if (p1.mY > p2.mY) { // first object is below second one
                    a.bTop = true;
                    if (a.mSpeed.mY < 0) a.mSpeed.mY = 0;
                    b.bBottom = true;
                    if (b.mSpeed.mY > 0) b.mSpeed.mY = 0;
                } else {
                    a.bBottom = true;
                    if (a.mSpeed.mY > 0) a.mSpeed.mY = 0;
                    b.bTop = true;
                    if (b.mSpeed.mY < 0) b.mSpeed.mY = 0;
                }
            }
        }

    As seen in my code, it simply will not work: when an object comes from the right or left, nothing happens. I tried a couple of ways other than this one, but none worked. I am guessing the right method will involve the mSpeed vector, but I have no idea how to do it. I'd really appreciate it if you could help. Sorry for my bad English.
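    A common approach, sketched here against the excerpt's own fields (treat it as an assumption about intent, not working engine code): compute how deeply the rectangles overlap on each axis and resolve the collision along the shallower axis, which is what distinguishes a side hit from a top/bottom hit.

        // Inside the "collided" branch, once width/height/xDist/yDist are set:
        float overlapX = width - Math.abs(xDist);  // penetration depth on x
        float overlapY = height - Math.abs(yDist); // penetration depth on y
        if (overlapX < overlapY) {                 // shallower on x => side collision
            if (xDist > 0) {                       // a is to the right of b
                a.bLeft = true;  b.bRight = true;
                if (a.mSpeed.mX < 0) a.mSpeed.mX = 0;
            } else {
                a.bRight = true; b.bLeft = true;
                if (a.mSpeed.mX > 0) a.mSpeed.mX = 0;
            }
        } else {                                   // shallower on y => top/bottom
            if (yDist > 0) {                       // a is below b
                a.bTop = true;   b.bBottom = true;
                if (a.mSpeed.mY < 0) a.mSpeed.mY = 0;
            } else {
                a.bBottom = true; b.bTop = true;
                if (a.mSpeed.mY > 0) a.mSpeed.mY = 0;
            }
        }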

    Read the article

  • Why doesn't swipe left work? [on hold]

    - by Hitesh
    I wrote the code below to detect and perform a sprite action on single tap and swipe right events.

        @Override
        public boolean onSceneTouchEvent(Scene pScene, TouchEvent pSceneTouchEvent) {
            float x = 0F;
            int tapCount = 0;
            boolean playermoving = false;
            // TODO Auto-generated method stub
            if (pSceneTouchEvent.getAction() == MotionEvent.ACTION_MOVE) {
                if (pSceneTouchEvent.getX() > x) {
                    playermoving = true;
                    players.runRight();
                }
                if (pSceneTouchEvent.getX() < x) {
                    Log.i("Run Left", "SPRITE Left");
                }
                /*
                 * if (pSceneTouchEvent.getX() < x) { System.exit(0);
                 * Log.i("SWIPE left", "SPRITE LEFT"); }
                 */
            }
            if (pSceneTouchEvent.getAction() == MotionEvent.ACTION_DOWN) {
                playermoving = false;
                x = pSceneTouchEvent.getX();
                tapCount++;
                Log.i("X CORD", String.valueOf(x));
            }
            if (pSceneTouchEvent.isActionDown()) {
                if (tapCount == 1 && playermoving != true) {
                    tapCount = 0;
                    players.jumpRight();
                }
            }
            return true;
        }

    The code works fine, except that the swipe left event is never detected. What can I do to make the swipe left action work? Please help.
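    One plausible reason, judging only from the excerpt: x, tapCount and playermoving are local variables, re-initialized at the top of every callback, so by the time ACTION_MOVE fires, x is 0 again and getX() < x can never be true (while getX() > x almost always is). A sketch of the usual fix is to promote the state to fields so it survives across events (field names are hypothetical; Scene, TouchEvent and players come from the excerpt):

        private float downX;          // x recorded on ACTION_DOWN
        private boolean playerMoving;

        @Override
        public boolean onSceneTouchEvent(Scene pScene, TouchEvent pSceneTouchEvent) {
            if (pSceneTouchEvent.getAction() == MotionEvent.ACTION_DOWN) {
                downX = pSceneTouchEvent.getX();   // remember where the gesture started
                playerMoving = false;
            } else if (pSceneTouchEvent.getAction() == MotionEvent.ACTION_MOVE) {
                if (pSceneTouchEvent.getX() > downX) {  // moved right of the start point
                    playerMoving = true;
                    players.runRight();
                }
                if (pSceneTouchEvent.getX() < downX) {  // moved left of the start point
                    playerMoving = true;
                    Log.i("Run Left", "SPRITE Left");
                }
            }
            return true;
        }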

    Read the article

  • Parameterized Django models

    - by mgibsonbr
    In principle, a single Django application can be reused in two or more projects, providing functionality relevant to both. That implies that the same database structure (tables and relations) will be re-created identically in different databases, and most of the time this is not a problem (assuming the projects/databases are unrelated - for instance, when someone downloads a complete app to use in their own projects). Sometimes, however, the models must be "tweaked" a little to better fit the problem's needs. This can be accomplished by forking the app, but I wondered if there wouldn't be a better option in cases where the app designer can anticipate the most common customizations. For instance, if I have a model that could relate to another as one-to-one or one-to-many, I could specify the unique property as a parameter that can be specified in the project's settings:

        class This(models.Model):
            other = models.ForeignKey(Other, unique=settings.OTHER_TO_THIS)

    Or if a model can relate to many others, I could create an intermediate table for each of them (thus enforcing referential integrity) instead of using generic FKs:

        for related in settings.MODELS_RELATED_TO_OTHER:
            model_name = '%s_Other' % related
            globals()[model_name] = type(model_name, (models.Model,), {
                'me': models.ForeignKey(find_model_class(related)),
                'other': models.ForeignKey(Other),
                # Some other properties all intersection tables must have
            })

    Etc. Let me stress that I'm not proposing to change the models at runtime or anything like that; once the parameters are defined and syncdb is called for the first time, those parameters are not to be changed again (unless you're doing a schema migration). Is this a good design? Are there better ways to accomplish the same thing, or maybe drawbacks I couldn't anticipate? This technique is meant to be used sparingly (only on apps meant to be reused in wildly different contexts, and only when a specific need for customization can be detected while the app model is being designed).
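    Since the parameters are read at import time, when models.py is first loaded, the project side is just module-level constants. A minimal sketch of what a consuming project's settings.py might declare (the setting names are taken from the excerpt; the values are hypothetical):

        # settings.py of a project reusing the app
        OTHER_TO_THIS = True                        # relate This to Other one-to-one
        MODELS_RELATED_TO_OTHER = ['This', 'That']  # build This_Other and That_Other tables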

    Read the article

  • What are performance limits of a database?

    - by Tommy
    What are some rough performance limits (reads/s, writes/s) for a single database server (no master-slave architecture), assuming storage on disk? How many reads/s and writes/s, depending on the kind of disk (SSD vs non-SSD), assuming simple operations (select one row by primary key, update one row, correctly indexed)? I assume this limit depends on disk seek/write speed. EDIT: My question is more about getting rough metrics of the number of operations a database supports: to be able to know, for example, whether a new feature triggering 300 inserts/s can be supported without scaling out with additional servers.

    Read the article

  • script Disk Management configuration

    - by Joseph
    I have 10 workstations with large monitors that have USB slots and several card readers built in. The card readers cannot be disabled and will map to drive letters when I image the computers. I go into Disk Management, delete the drive mappings, and add mappings to a single folder in C:\ with a subfolder for each slot. I have to do this because of scripts that run expecting specific drive letters to be mapped to network resources. Is there a way to script the deleting and adding of drive mappings instead of having to use the Disk Management GUI manually on each workstation? The workstations are running XP Professional.
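    diskpart is one plausible way to script exactly what Disk Management does here; a sketch, where the volume numbers, letters, and mount paths are hypothetical and should be checked per machine with list volume first:

        rem remap.txt - run as: diskpart /s remap.txt
        select volume 4
        remove letter=E
        assign mount=C:\CardReaders\Slot1
        select volume 5
        remove letter=F
        assign mount=C:\CardReaders\Slot2

    The mount-point folders must already exist on an NTFS volume and be empty before assign mount will succeed.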

    Read the article
