Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

  • Why doesn't swipe left work?

    - by Hitesh
    I wrote the code below to detect single-tap and swipe-right events and trigger the matching sprite action.

        @Override
        public boolean onSceneTouchEvent(Scene pScene, TouchEvent pSceneTouchEvent) {
            float x = 0F;
            int tapCount = 0;
            boolean playermoving = false;
            if (pSceneTouchEvent.getAction() == MotionEvent.ACTION_MOVE) {
                if (pSceneTouchEvent.getX() > x) {
                    playermoving = true;
                    players.runRight();
                }
                if (pSceneTouchEvent.getX() < x) {
                    Log.i("Run Left", "SPRITE Left");
                }
                /*
                 * if (pSceneTouchEvent.getX() < x) { System.exit(0);
                 * Log.i("SWIPE left", "SPRITE LEFT"); }
                 */
            }
            if (pSceneTouchEvent.getAction() == MotionEvent.ACTION_DOWN) {
                playermoving = false;
                x = pSceneTouchEvent.getX();
                tapCount++;
                Log.i("X CORD", String.valueOf(x));
            }
            if (pSceneTouchEvent.isActionDown()) {
                if (tapCount == 1 && playermoving != true) {
                    tapCount = 0;
                    players.jumpRight();
                }
            }
            return true;
        }

    The code otherwise works fine; the only problem is that the swipe-left branch is never triggered. What can I do to make swipe left work? Please help.
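
    A likely cause: x, tapCount and playermoving are method locals, so they are re-initialized on every touch event. By the time ACTION_MOVE fires, x is 0 again, so getX() > x is almost always true and getX() < x never is. A minimal sketch of a fix, keeping the same AndEngine types and assuming a hypothetical players.runLeft() counterpart to runRight():

        private float downX;          // x recorded on ACTION_DOWN, survives between events
        private boolean playerMoving; // set once a swipe is detected

        @Override
        public boolean onSceneTouchEvent(Scene pScene, TouchEvent pSceneTouchEvent) {
            if (pSceneTouchEvent.getAction() == MotionEvent.ACTION_DOWN) {
                downX = pSceneTouchEvent.getX(); // remember where the touch started
                playerMoving = false;
            } else if (pSceneTouchEvent.getAction() == MotionEvent.ACTION_MOVE) {
                if (pSceneTouchEvent.getX() > downX) {        // swipe right
                    playerMoving = true;
                    players.runRight();
                } else if (pSceneTouchEvent.getX() < downX) { // swipe left
                    playerMoving = true;
                    players.runLeft();                        // hypothetical left-run action
                }
            } else if (pSceneTouchEvent.getAction() == MotionEvent.ACTION_UP) {
                if (!playerMoving) {
                    players.jumpRight(); // a tap with no movement counts as a jump
                }
            }
            return true;
        }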

  • .htaccess folder rewrite

    - by Lisa
    I have 3 URLs all pointing to the same site: www.abc.co.uk, www.xyz.com, www.123abc.org. I have a folder /foo/bar which has lots of subfolders and files in it. I want to rewrite this to /bar. So if I have www.abc.co.uk/foo/bar/sheep/page.html I want it to redirect to www.abc.co.uk/bar/sheep/page.html. Is this possible? Sometimes I may have a URL like www.abc.co.uk/foo/bar/foo/page.html, so this would become www.abc.co.uk/bar/foo/page.html. Only the first instance of foo should be rewritten.
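
    A sketch with mod_rewrite, assuming the rules go in the .htaccess at the DocumentRoot and that a visible 301 redirect is wanted. Because the pattern is anchored at the start of the path, a later /foo/ segment is left alone:

        RewriteEngine On
        # /foo/bar/anything -> /bar/anything, keeping the rest of the path
        RewriteRule ^foo/bar/(.*)$ /bar/$1 [R=301,L]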

  • Are the only types of data "sources" static and dynamic?

    - by blunders
    Thinking that there might be others, but not sure -- but before getting into that, let me explain what I mean by static and dynamic data sources.

    Static (or datastore) - the data's state is non-changing; if it were changed, that would be a new state, and the old data would be considered stateless, meaning it is no longer known to exist or not exist. Another way of looking at a static data source: if it is read and written back without modification, the checksum before and after should be exactly the same, regardless of how much time passes between the reading and the rewriting. Examples: photos, files, database records.

    Dynamic (or datastream) - the data's state is known to be in flux and never expected to be the same per input. Examples: live video/audio feeds, stock market feeds.

    First let me say the above is a very loose mapping of the concepts, and I'd welcome any feedback. Next, onto the core of the question: are these the only two types of data sources? My guess is that yes, they are -- but that there are hybrid versions of the two. That being, streaming data that has a fixed state; for example, the data being streamed has a checksum given, and each unique checksum is known to be a single instance of static data. On the flip side, static data could be chained via, say, a version control system; when played back, each version might be viewed as a segment of a stream. Thing is, the very fact that it can be played back makes the data source static. Another type might be a data source that is being organically discovered, where it's simply unknown what the state is. Questions, feedback, requests -- just comment, thanks!!
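
    The checksum criterion above can be made concrete. A small sketch (Java 17+ for HexFormat, with a made-up record standing in for a photo/file/database row): a source is "static" in this sense when the digest after a read/write round trip matches the digest before it.

        import java.security.MessageDigest;
        import java.util.HexFormat;

        public class StaticCheck {
            static String digest(byte[] data) throws Exception {
                // content fingerprint used to decide whether a datum is "static"
                return HexFormat.of().formatHex(
                        MessageDigest.getInstance("SHA-256").digest(data));
            }

            public static void main(String[] args) throws Exception {
                byte[] before = "customer:42,name:Alice".getBytes();
                byte[] after = before.clone(); // written out and read back unmodified
                System.out.println(digest(before).equals(digest(after))); // true -> static
            }
        }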

  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end-user data updates, incremental data loads, many lock-and-sends, and/or many calculations executed. If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation. See Chapter 54, Improving Essbase Performance, in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html

    Fragmentation is likely to occur in the following situations:
    - Read/write databases where users are constantly updating data
    - Databases that execute calculations around the clock
    - Databases that frequently update and recalculate dense members
    - Data loads that are poorly designed
    - Databases that contain a significant number of Dynamic Calc and Store members
    - Databases that use an isolation level of uncommitted access with commit block set to zero

    There are two types of data block fragmentation:
    - Free space tracking, which is measured using the Average Fragmentation Quotient statistic.
    - Block order on disk, which is measured using the Average Cluster Ratio statistic.

    Average Fragmentation Quotient: this ratio measures free space in a given database. As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and will either be appended at the end of the file or fit into another empty space that is large enough. These empty spaces take up space in the .PAG files. The higher the number, the more empty spaces you have; therefore, the bigger the .PAG file and the longer it takes to traverse the .PAG file to get to a particular record. An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space.

    Average Cluster Ratio: this describes the order in which the blocks actually exist in the database. An Average Cluster Ratio of 1 means all the blocks are ordered in the correct sequence, in the order of the outline. As you load and calculate data blocks, the sequence can start to fall out of order, because when you write to a block it may not be possible to place it back in the exact same spot it occupied before. The lower this number, the more out of order the blocks become and the more performance is affected. An Average Cluster Ratio value of 1 means no fragmentation; any value lower than 1 (e.g. 0.01032828) means the data blocks are getting further out of the outline order.

    Eliminating Data Block Fragmentation: both types of data block fragmentation can be removed by doing a dense restructure or an export/clear/import of the data. There are two types of dense restructure:
    1. Implicit restructures happen when outline changes are made using the EAS Outline Editor or Dimension Build. Essbase creates new .PAG files, restructuring the data blocks in them, and regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed by implicit restructures.
    2. Explicit restructures happen when a database restructure is initiated manually. An explicit dense restructure is a full restructure, which comprises a dense restructure as outlined above plus the removal of empty blocks.

    Empty Blocks vs. Fragmentation: the existence of empty blocks is not considered fragmentation. Empty blocks can be created through calc scripts or formulas. An empty block adds to the existing database block count and is included in the block counts of the database properties. There are no statistics for empty blocks; the only way to determine whether empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database, then import the exported data. If the block count decreased, the difference is the number of empty blocks that had existed in the database.
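
    For reference, both statistics can be read, and an explicit restructure kicked off, from MaxL - a sketch, with Sample.Basic standing in for the real application and database names:

        /* show block statistics, including Average Fragmentation Quotient
           and Average Cluster Ratio */
        query database Sample.Basic get dbstats data_block;

        /* explicit dense restructure: removes fragmentation and empty blocks */
        alter database Sample.Basic force restructure;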

  • Mac OS X: pushing all traffic through a VMWare VM

    - by bj99
    I want to set up an Astaro (Sophos) UTM in a virtual machine. The final setup should be the following:

        Cable modem (one IP address)
          | [Ethernet]
        Sophos UTM (running as a VM [VMware Fusion 5] on the Mac Mini)
          | [WiFi]
        AirPort Express v2 (for sharing the local network to wireless and wired clients)
          1) | [WiFi] -> clients
          2) | [Ethernet over Thunderbolt Ethernet adapter]* -> Mac Mini (local file server)

        * so the Mini is also protected behind the UTM

    The setup process for the UTM works fine, but then the problems start: I have just one external IP (from my cable modem provider). So if I put the VM in bridged mode my Internet connection drops, because the Mac Mini also has its own IP address; if I put the VM in NAT mode, the Mini itself is not protected by the UTM. So: is there a way to hide the en0 interface (Ethernet) and the en1 interface (WiFi) from the Mac Mini, so that they do not even appear in the Network section of System Preferences but are still available to the VM? That way the Mini must connect through the en2 interface (Thunderbolt adapter) to make any Internet/LAN connection, and I just use the single IP from the cable modem. Thanks for any suggestions... Sebastian

  • IIS7 IE9 PHP Errors Hidden - Error 500 - How to show detailed PHP errors?

    - by sandy
    I scrutinized every single article on the web for this one, but NO JOY! I am getting the infamous error 500 after installing Internet Explorer 9. Here is a detailed account of what I did:

    STEP 1 - Installed PHP using Web Platform Installer, IIS 7 on Windows 7 x64
    STEP 2 - Configured php.ini to display errors, so I turned display_errors on
    STEP 3 - With IE8, EVERYTHING was showing up fine
    STEP 4 - When I installed IE9, the error 500 message came back
    STEP 5 - Looked into php.ini, and my original settings are all still there
    STEP 6 - Reset IIS7 with iisreset from the command prompt

    Help would be much appreciated! What is wrong? Regards, Sandy
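
    Two things commonly cause exactly this symptom. First, IE9's "Show friendly HTTP error messages" setting (Internet Options > Advanced) replaces the server's error page with its own when the error body is small (roughly under 512 bytes), so PHP's output never appears; IE8 may have had it disabled. Second, IIS7 itself swallows the PHP response on a 500 unless told to pass it through. A sketch of a web.config for the site root, assuming the default integrated pipeline:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <!-- let PHP's own error output reach the browser instead of
                 the IIS friendly 500 page -->
            <httpErrors existingResponse="PassThrough" />
          </system.webServer>
        </configuration>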

  • OAuth2 vs Public API

    - by Adam Tannon
    My understanding of OAuth (2.0) is that it's a software stack and protocol that allow 2+ web apps to share information about a single end user. User A is a member of Site B and Site C; Site B wants to fetch some data from Site C about User A, and this is where OAuth steps in. So first off, if this assessment is incorrect, please begin by clarifying and correcting me! Assuming I'm on the right track, then I guess I'm not seeing the need for OAuth to begin with (!). I'm sure I'm just not seeing the forest for the trees here, but the way I see it, couldn't Site C just expose a public API that Site B could use to fetch the same data (sans OAuth)? If Site C required user credentials to access the data, couldn't this public API just use HTTPS for secure transport and require username/password as part of each API call? Again, I'm sure I'm missing something, but I'm just not understanding why I would need OAuth when a secure, public API written and exposed by Site C seems more than capable of delivering what Site B needs regarding User A. In general, I'm looking for a set of guidelines to go by when deciding between using OAuth for my web apps or just writing my own web service (exposing a public API). Thanks in advance!

  • With Choice Comes Complexity

    - by BuckWoody
    "Complex" may be defined as "having many steps, details or parts." Many of Microsoft's products, including SQL Server, can be complex. I'm stating what most data professionals already know - there are usually multiple ways to do things in SQL Server. For instance, to import some data into a table you can use graphical tools, SQLCMD, bcp, SQL Server Integration Services, BULK INSERT, even PowerShell, just to name a few tools at your disposal. That's really not the issue, though. The bigger issue is that there are normally multiple thought processes, or methods, available for a task. That's both a strength and a weakness. If things were simpler, you would have fewer choices. Sometimes that's a good thing: just tell me what I need to do and I'll do it. However, your particular situation may not fit that tool or process, so having more options increases your ability to get your job done the way you need to do it. On the other hand, that's more for you to learn, which is harder. There's another side of this benefit/difficulty that you need to be aware of. Even if you're quite good at what you do, keep in mind that the way you know how to do something may not be the only way to do it. Keep your mind open to new possibilities, and most importantly - to new knowledge. SQL Server professionals teach me something new every day. So embrace the complexity - on balance, it's a good thing!

  • Abstract Factory Method and Polymorphism

    - by Scotty C.
    Being a PHP programmer for the last couple of years, I'm just starting to get into advanced programming styles and polymorphic patterns. I was watching a video on polymorphism the other day, and the guy giving the lecture said that if at all possible you should get rid of if statements in your code, and that a switch is almost always a sign that polymorphism is needed. At this point I was quite inspired and immediately went off to try out these new concepts, so I decided to make a small caching module using a factory method. Of course, the very first thing I have to do is create a switch to decide what file encoding to choose. DANG!

        class Main {
            public static function methodA($parameter = '') {
                switch ($parameter) {
                    case 'a':
                        $object = new \name\space\object1();
                        break;
                    case 'b':
                        $object = new \name\space\object2();
                        break;
                    case 'c':
                        $object = new \name\space\object3();
                        break;
                    default:
                        $object = new \name\space\object1();
                }
                return (sekretInterface $object);
            }
        }

    At this point I'm not really sure what to do. As far as I can tell, I either have to use a different pattern and have separate methods for each object instance, or accept that a switch is necessary to "switch" between them. What do you guys think?
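
    One common middle ground: a single switch (or lookup table) inside the factory is usually considered acceptable, because it concentrates the conditional in one place while the rest of the code works against the interface polymorphically. The switch can even be collapsed into data - a sketch, with the class names standing in for \name\space\object1..3 and assuming they all implement sekretInterface:

        class CacheFactory {
            private static $map = [
                'a' => \name\space\object1::class,
                'b' => \name\space\object2::class,
                'c' => \name\space\object3::class,
            ];

            public static function create($key = 'a') {
                // unknown keys fall back to the default implementation
                $class = isset(self::$map[$key]) ? self::$map[$key] : self::$map['a'];
                return new $class(); // every entry implements sekretInterface
            }
        }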

  • How to rename everything matching a certain string in a folder

    - by lostiniceland
    Hello everyone. I am running Linux and I have some basic console knowledge, but my current problem is quite difficult and I don't know how to achieve this. I want/need to rename everything within a folder that matches a given string. By everything I mean: folder names, file names, content within files, and content in hidden files. Basically I want to refactor a Java project. Sure, I could use Eclipse to handle the replacing, but this leaves out the folders or resources outside of my workspace. I was thinking of a script that could do the job for me, but this seems rather tricky. For instance, when it comes to folder/file renames, I want to replace only the part of the name that matches my string; the rest should remain untouched. Maybe someone already has something like this in his/her script collection :-) Thanks in advance, Marc
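
    A sketch in bash, assuming GNU find/sed and that OLD/NEW contain no regex metacharacters. The rename loop works depth-first so children are renamed before their parent folders, and only the first match in each name is replaced:

        # 1) replace inside file contents (hidden files included)
        find . -type f -exec sed -i 's/OLD/NEW/g' {} +

        # 2) rename files and folders, deepest paths first
        find . -depth -name '*OLD*' | while read -r p; do
            mv "$p" "$(dirname "$p")/$(basename "$p" | sed 's/OLD/NEW/')"
        done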

  • Corrupt Windows 7, Chkdsk finds errors continuously after multiple tries

    - by mj_
    My Windows 7 instance is all corrupt on my Dell XPS laptop (my kid hibernated it as it was starting). I got the Windows 7 installation disc and tried to repair the installation. The repair goes through and says it's done, but it doesn't know whether it repaired successfully; it just says reboot. I can also open a command line, and I run chkdsk /F manually there. What's strange is that I've run it 20 times and chkdsk claims it fixes stuff every time. Eventually it stops and I reboot. Much to my chagrin, my machine is still broken. Chkdsk gives an error at the end that it can't write a log file. I've also run a full diagnostic using the Dell diagnostic tools, which say that everything is okay. Any suggestions on what I should do next? Any help appreciated. mj

  • Modular Database Structures

    - by John D
    I have been examining the code base we use at work and I am worried about the size the packages have grown to. The actual code is modular; procedures have been broken down into small, functional (and testable) parts. The issue I see is that we have 100 procedures in a single package - almost an entire domain model. I had thought of breaking these packages down - creating subdomains centered around each procedure's relationships to other objects: group a bunch of procedures that have 80% of their relationships to three tables, etc. The end result would be a lot more packages, but the packages would be smaller, and I feel the entire code base would be more readable - when procedures cross between two domain models, it is less of a struggle to figure out which package they belong to. The problem I now have is what the actual benefit of all this would really be. I looked at the general advantages of modularity:

    1. Reusability
    2. Asynchronous development
    3. Maintainability

    Yet when I consider our latest development, the procedures within the packages are already reusable. At this advanced stage we rarely require asynchronous development - and when we do, we simply ladder the stories across iterations. So I guess my question is whether people know of reasons to break down classes, rather than just the methods inside of classes? Right now I do believe there is an issue with these mega packages forming, but the only benefit I can really pin down for breaking them up is readability - something that experience gained from working with them would solve.

  • Squid - Selective reverse proxy and forward proxy

    - by Dean Smith
    I'd like to set up a squid instance to do selective reverse proxying for a configured list of URLs whilst acting as a normal forward proxy for everything else. We are building new infrastructure - parallel live, as it were - and I want a proxy people can use that will force selected traffic onto the new platform whilst just acting as a forward proxy for anything else. This makes it very easy for people/systems to test the portions of the new platform we want without having to change too much; they just use a proxy address. Is such a setup possible?
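
    It should be - a sketch of the relevant squid.conf fragment, assuming Squid 3.x, clients pointed at port 3128, and a hypothetical new-platform origin at 10.0.0.10 serving www.newsite.example (the precedence between the direct/peer rules is worth testing):

        http_port 3128
        # requests for the migrated site must go to the new origin server
        acl newplatform dstdomain www.newsite.example
        cache_peer 10.0.0.10 parent 80 0 no-query originserver name=newsite
        cache_peer_access newsite allow newplatform
        cache_peer_access newsite deny all
        never_direct allow newplatform   # force these through the peer
        always_direct deny newplatform
        always_direct allow all          # everything else: plain forward proxy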

  • UPDATE FOR BI PUBLISHER ENTERPRISE 10.1.3.4.2 NOVEMBER 2011

    - by Tim Dexter
    It's Friday, and that means it's patch release time. Why do we do this to ourselves - "we'll release on Friday!"? It might be 11:59 on Friday, but by golly we'll have released on Friday. I can remember a release of BIP years ago where for some reason we went for 12/31 as a release date... were we mad? I seem to remember we made it, but talk about ridiculous pressure! The latest 10g rollup is out in the wild and available from Oracle Support. It's a bug-fixing rollup, but worth getting to - and know that Support will want you to get to it and re-test before going forward on an SR. One simple but very useful fix or enhancement:

    [Cause of the bug]
    ==================
    Customer reports that despite the clock being shown, end users are clicking on the View button repeatedly as the initial generation is taking some time. If the button were to be grayed out then this would prevent the users requesting the report more than once. Repeated requests are causing a system overload and as this is their Production instance this is extremely important to the customer.

    [The Fix]
    =========
    Added the logic to disable the button after the user clicks on the "view" button and re-enable it when the report is loaded.

    I told a group of customers once that they have a headache and we have a non-steroidal anti-inflammatory drug - alright, I actually said 'aspirin'. This little gem of a fix helps relieve another little headache that our aspirin was causing. The patch number for all this BIP pain killing is 13399232. Enjoy!

  • Approach for monitoring internet backbone traffic volume

    - by Greg Harman
    I'm interested in getting a picture of relative volume across different internet backbones. In particular, I'd like to see how traffic volume over a given route differs over the course of a day or from one day to the next. InternetTrafficReport.com is the closest approximation to this that I've found online, and their approach is to test ping times to a number of key routers from several geographically-dispersed servers. This sounds like one straightforward way to measure, but I don't have several geographically-dispersed servers. Is there a different approach for sampling this type of information from a single server?
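
    One way to approximate their approach from a single vantage point is a periodic probe script - a sketch, assuming GNU ping and cron, with placeholder hostnames rather than a vetted router list:

        #!/bin/sh
        # append a timestamped average RTT for each probe target
        for host in core1.example-isp.net core2.example-ixp.net; do
            rtt=$(ping -c 3 -q "$host" | awk -F'/' '/^(rtt|round-trip)/ {print $5}')
            echo "$(date -u +%FT%TZ) $host ${rtt:-timeout}" >> /var/log/backbone-rtt.log
        done

    The obvious caveat: one server only measures the paths from that server, so this shows variation over time rather than a geographic picture.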

  • Windows 8 and Cisco AnyConnect client issue

    - by Enrique Lima
    As many of us are doing these days, I have fully moved to Windows 8 on my PCs (laptops and desktops). In my role as a consultant I work with many clients, and many of them use different VPN technologies. While pretty much every single VPN client I had installed needed a trick or two to work, Cisco's AnyConnect VPN client had some real issues. Installation went well, no problem there. The problem appeared when I attempted to connect, when an error message came up (screenshot not reproduced here). Pretty clear what the issue is, right? Right??!!??

    Doing a bit of research (Google knows!), I came across the following fix. Using our new favorite shortcut, Windows Key + X, then Run > regedit, navigate to:

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\vpnva

    The DisplayName value there contains additional characters that interfere with the device being correctly identified. Remove those characters so the value reads as just the adapter name, close all open windows, and attempt your connection again.
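
    For reference, the same edit can be scripted - a sketch, noting that the exact adapter name varies by AnyConnect version and platform, so treat the value below as an assumption to verify against a working install:

        reg add HKLM\SYSTEM\CurrentControlSet\Services\vpnva /v DisplayName /t REG_SZ ^
            /d "Cisco AnyConnect VPN Virtual Miniport Adapter for Windows x64" /f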

  • Keyring no longer prompts for password when SSH-ing

    - by Lie Ryan
    I remember that I used to be able to do ssh [email protected] and have a prompt ask me for a password to unlock the keyring for the whole GNOME session, so subsequent ssh calls wouldn't need the keyring password any longer (not quite sure if this was in Ubuntu or another distro). But nowadays doing ssh [email protected] asks me, in the terminal, for my keyring password every single time, which defeats the purpose of using SSH keys. I checked:

        $ cat /etc/pam.d/lightdm | grep keyring
        auth     optional pam_gnome_keyring.so
        session  optional pam_gnome_keyring.so auto_start

    which looks fine, and

        $ pgrep -l keyring
        1784 gnome-keyring-d

    so the keyring daemon is alive. I finally found that the SSH_AUTH_SOCK variable (and GNOME_KEYRING_CONTROL and GPG_AGENT_INFO and GNOME_KEYRING_PID) are not being set properly. What is the proper way to set these variables, and why aren't they being set in my environment (i.e. shouldn't they be set in a default install)? I guess I can set them in .bashrc, but then the variables would only be defined in bash sessions; while that is fine for ssh, I believe the other environment variables are necessary for GUI apps to use the keyring.
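
    A workaround sketch until the session sets these properly - assuming gnome-keyring-daemon is the agent, this can go in ~/.bashrc (the daemon prints the variable assignments itself, so eval picks them up):

        if [ -z "$SSH_AUTH_SOCK" ]; then
            eval "$(gnome-keyring-daemon --start 2>/dev/null)"
            export SSH_AUTH_SOCK GNOME_KEYRING_CONTROL GPG_AGENT_INFO GNOME_KEYRING_PID
        fi

    As noted above, this only covers shells; GUI apps need the variables in the session environment, which is what the PAM hooks are supposed to provide.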

  • How can I get nvidia-96 installed?

    - by Bob
    I'm at my wits' end here. This is my last effort before I go back to Windows. I need to get the nvidia-96 proprietary driver installed. Synaptic won't install it because it says it has dependencies. I installed every single dependency it listed except for "xorg-video-abi-10", which does not show up as an item that can be installed. I have no idea what to do. Using 11.10 with an NVIDIA GeForce 3 GPU. Anyone know how to get this dang driver installed?

    @fossfreedom: the open-source driver is extremely slow - so slow that the OS is unusable. Words appear seconds after I type them, and programs take forever to perform actions. Also, it is causing my monitor to turn on and off for no reason.

    @yossile: Synaptic shows that I have xserver-xorg-core installed, and xserver-xorg-core-udeb does not show up as something that can be installed.

    @papseddy: when I try to install the downloaded NVIDIA driver, it says it won't work until I disable the Nouveau kernel driver. I have tried everything to get this dang Nouveau kernel driver disabled. Nothing has been successful.

  • Why does Windows share permissions change file permissions?

    - by Andrew Rump
    When you create a (file system) share (on Windows 2008 R2) with access for specific users, does it change the access rights on the files to match the access rights on the share? We just killed our intranet web site when sharing the INetPub folder (to a few specified users): it removed the file access rights for authenticated users, i.e. users could no longer log in using single sign-on (using IE & AD)! Could someone please tell me why it behaves like this? We now have to reapply the access rights every time we change the users on the share, killing the site in the process every time!

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now "xVelocity in-memory analytics engine (VertiPaq)", but using xVelocity and VertiPaq when we talk about Analysis Services has the same meaning. In this post I'll show how to investigate the column sizes of an existing Tabular database, so that you can find the most important columns to optimize.

    A first approach can be looking in the DataDir of Analysis Services for the folder containing the database, then looking for the biggest files in all subfolders: you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very optimized. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order:

        SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
        ORDER BY used_size DESC

    You can look at the first rows to understand what the most expensive columns in your tabular model are. The interesting data provided are:
    - TABLE_ID: the name of the object - it can also be a dictionary or an index
    - COLUMN_ID: the column name the object belongs to - you may also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
    - RECORDS_COUNT: the number of rows in the column
    - USED_SIZE: the memory used by the object

    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do to optimize your tabular model. Your options are:
    - Remove the column. Yes - if it contains data you will never use in a query, simply remove the column from the tabular model.
    - Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (e.g. a temperature), round the number to the nearest decimal that is relevant to you.
    - Split the column. Create two or more columns that have to be combined together to produce the original value. This technique is described in the VertiPaq optimization article.
    - Sort the table by that column. When you read the data source, you might consider sorting data by this column so that the compression will be more efficient. However, this technique works better on columns that don't have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with few distinct values) and going to the higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.

    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq, and want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session "VertiPaq Under the Hood" on March 21 at 08:00 GMT.

  • In Blender 3D, is it possible to save a keyframe of a mesh that has a soft body property?

    - by Steven Rogers
    I am using Blender 3D right now, and I 'baked' a cloth soft body. However, I want just one keyframe of the cloth. In this case, I made curtains for a window and made them cloth. I baked the simulation to just how I want the cloth to look, but for my animation I want a single, still cloth object to be placed: I want the curtains to be one still, cloth-looking object for the whole animation. So is there a way I can get that mesh to stay in that one position for the entire animation? If so, how do I do it?

  • Confirm that two filesystems are identical, ignoring special files

    - by endolith
    /media/A and /media/B should be identical, but I want to confirm that before deleting one. Duplicate file finders don't work, because they'll find two copies of the same file within B, for instance. I only want to confirm that every file in one is identical to the other.

        diff -qr /media/A/ /media/B/

    seems to work, but the output is cluttered with garbage like

        diff: /media/A//etc/alternatives/ControlPanel: No such file or directory

    and

        File /media/A//dev/tty8 is a character special file while file /media/B//dev/tty8 is a character special file

    I can suppress the former with 2> /dev/null, but I don't know about the latter.

        rsync -avn /media/A/ /media/B/

    also produces a bunch of clutter, like "skipping non-regular file". How can I compare the two trees and just make sure that all the real files exist in both and are identical?
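
    A sketch that sidesteps special files entirely by checksumming only regular files - assuming GNU coreutils; any mismatch or file missing from B is reported by md5sum -c:

        # fingerprint every regular file under A, then verify against B
        (cd /media/A && find . -type f -print0 | sort -z | xargs -0 md5sum) > /tmp/A.md5
        (cd /media/B && md5sum -c --quiet /tmp/A.md5)

    Run it the other way round as well to catch files that exist only in B.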

  • Can't detect collision properly using Rectangle.Intersects()

    - by Daniel Ribeiro
    I'm using a single sprite-sheet image as the main texture for my breakout game (sprite sheet image not reproduced here). My code is a little confusing, since I'm creating two elements from the same texture using a Point to represent the element's size and its position on the sheet, a Vector2 to represent its position in the viewport, and a Rectangle that represents the element itself.

        Texture2D sheet;

        Point paddleSize = new Point(112, 24);
        Point paddleSheetPosition = new Point(0, 240);
        Vector2 paddleViewportPosition;
        Rectangle paddleRectangle;

        Point ballSize = new Point(24, 24);
        Point ballSheetPosition = new Point(160, 240);
        Vector2 ballViewportPosition;
        Rectangle ballRectangle;

        Vector2 ballVelocity;

    My initialization is a little confusing as well, but it works as expected:

        paddleViewportPosition = new Vector2((GraphicsDevice.Viewport.Bounds.Width - paddleSize.X) / 2,
            GraphicsDevice.Viewport.Bounds.Height - (paddleSize.Y * 2));
        paddleRectangle = new Rectangle(paddleSheetPosition.X, paddleSheetPosition.Y,
            paddleSize.X, paddleSize.Y);

        Random random = new Random();
        ballViewportPosition = new Vector2(random.Next(GraphicsDevice.Viewport.Bounds.Width),
            random.Next(GraphicsDevice.Viewport.Bounds.Top, GraphicsDevice.Viewport.Bounds.Height / 2));
        ballRectangle = new Rectangle(ballSheetPosition.X, ballSheetPosition.Y,
            ballSize.X, ballSize.Y);

        ballVelocity = new Vector2(3f, 3f);

    The problem is I can't detect the collision properly using this code:

        if (ballRectangle.Intersects(paddleRectangle)) {
            ballVelocity.Y = -ballVelocity.Y;
        }

    What am I doing wrong?
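
    The rectangles being intersected are built from the sprite-sheet coordinates (paddleSheetPosition, ballSheetPosition), so they describe where the graphics live on the texture, not where the objects are on screen - two fixed rectangles that never move with the ball. A sketch of the screen-space test, rebuilding the bounds from the viewport positions (e.g. each Update):

        // collision must use screen-space rectangles; the sheet rectangles
        // stay as source rectangles for Draw()
        Rectangle paddleBounds = new Rectangle(
            (int)paddleViewportPosition.X, (int)paddleViewportPosition.Y,
            paddleSize.X, paddleSize.Y);
        Rectangle ballBounds = new Rectangle(
            (int)ballViewportPosition.X, (int)ballViewportPosition.Y,
            ballSize.X, ballSize.Y);

        if (ballBounds.Intersects(paddleBounds))
        {
            ballVelocity.Y = -ballVelocity.Y;
        }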

  • Using VirtualBox/VMware to deploy software to multiple sites?

    - by Sim
    Hi all, I'm currently evaluating the feasibility of using VirtualBox (or VMware) to deploy the following project to 10 sites:

    - Windows XP
    - MSSQL 2005 Express Edition with Advanced Services
    - JBoss, running one in-house application that mostly queries master data (customers/products) and feeds it to other software

    Why do I want to do this? Because the IT staff at my 10 sites are not capable enough, and the steps taken to set up those in-house projects are complicated. The cons I can foresee:

    - Needs extra power to run the VirtualBox instance
    - The IT staff won't become much more knowledgeable about how to install the stuff
    - Cost (a license for VirtualBox in a commercial environment as well as an extra OS license)

    I'd really appreciate your input on the pros/cons of this approach, or any links for further reading. Thanks a lot!

  • How can I get ssh-agent working over ssh and in tmux (on OS X)?

    - by Rich
    I have a private key set up for my GitHub account, the passphrase to which is, I believe, stored in OS X's keychain. I certainly don't have to type it in when I open a terminal window and enter ssh [email protected]. However, when I'm running bash over an ssh session, or locally inside a tmux session, I have to type in the passphrase every single time I attempt to ssh to GitHub. This question suggests that a similar problem exists with screen, but I don't really understand the issue well enough to fix it in tmux. There's also this page, which includes a fairly complicated solution, but for zsh.

    EDIT: In response to @Mikel's answer, from a local terminal I get the following output:

        [~] $ echo $SSH_AUTH_SOCK
        /tmp/launch-S4HBD6/Listeners
        [~] $ ssh-add -l
        2048 [my key fingerprint] /Users/richie/.ssh/id_rsa (RSA)
        [~] $ typeset -p SSH_AUTH_SOCK
        declare -x SSH_AUTH_SOCK="/tmp/launch-S4HBD6/Listeners"

    Whereas over ssh or in tmux I get:

        [~] $ echo $SSH_AUTH_SOCK

        [~] $ ssh-add -l
        Could not open a connection to your authentication agent.
        [~] $ typeset -p SSH_AUTH_SOCK
        bash: typeset: SSH_AUTH_SOCK: not found

    echo $SSH_AGENT_PID returns nothing whatever shell I run it from.
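
    Two separate fixes apply here, sketched below. For the remote-ssh case, agent forwarding hands the remote shell a socket back to the local agent (~/.ssh/config on the Mac):

        Host *
            ForwardAgent yes

    For tmux, the usual approach is to let tmux refresh SSH_AUTH_SOCK when a session is attached; SSH_AUTH_SOCK is in tmux's default update-environment list, so restating it in ~/.tmux.conf is mostly belt-and-braces:

        set -g update-environment "SSH_AUTH_SOCK SSH_AGENT_PID SSH_CONNECTION"

    Note that panes that already exist keep their old environment; only new windows/panes pick up the refreshed socket.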
