Search Results

Search found 17944 results on 718 pages for 'size'.


  • TightVNC (or any VNC) client windows scaling

    - by mr.b
    Hi, I am currently using TightVNC to connect to multiple remote hosts on the LAN. I start 16 VNC instances, set Scaling by: Auto (in connection options display), set Hextile encoding, and then select all windows and use Tile Horizontally, which covers my entire screen with VNC screens. It all works sort of nicely, except that desktop interaction is really slow when there are more than 4 VNC clients. My question is, do VNC clients (not just TightVNC, but any compatible client) support some kind of smart scaling option, so that the client tells the server something along the lines of: "Okay, I'm displaying your entire screen in a window sized 300x225 px, so can you please start sending images encoded at that resolution?", at which point the interactivity of the open connections would dramatically increase, and when I decide to go full screen on some connection, client and server re-negotiate and the server starts sending full-resolution images again? Thanks!!

    Read the article

  • Outlook pst problem

    - by tking
    I've used Outlook PST files in the past with great success. A few weeks ago I exported about two years' worth of email into a PST file; its size is around 1.5 GB. When I try to import that PST back into my Outlook, it says it's not a PST file. I've tried to repair it using pstscan; it repairs errors and will even mount it in Outlook, but Outlook can't see any emails, as if it were an empty PST file. Is there any other way to recover my emails besides loading up Backup Exec and recovering my mailbox from before I made the PST?

    Read the article

  • Difference between tc qdisc and netdev_max_backlog

    - by Mediocre Gopher
    I'm wondering what the difference between these two things is on Linux. According to the docs, tc qdisc can be used to set the queue size for egress and ingress packets going in and out of the NIC (or that's how I understood it). But from what I understand, netdev_max_backlog can also be used to set this. If I were to set both of them, which would be used? Or are there actually two queues being manipulated in this case? If there are two queues, which queue is above the other (if the application is at the top and the hardware at the bottom)?
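
    A rough sketch of how the two knobs can be inspected on a Linux box (the interface name eth0 below is just an example); as I understand it, netdev_max_backlog is a global sysctl for the kernel's ingress backlog, while tc manages the per-interface queueing discipline:

    # Rough sketch: inspect the two queueing knobs on Linux.
    # Assumes a Linux host; "eth0" is an example interface name.
    import subprocess

    def read_netdev_max_backlog():
        # Global sysctl: packets queued on the ingress side when the
        # kernel can't keep up with the NIC (net.core.netdev_max_backlog).
        with open("/proc/sys/net/core/netdev_max_backlog") as f:
            return int(f.read().strip())

    def show_qdisc(interface="eth0"):
        # Per-interface queueing discipline configured with tc; this governs
        # the device's (egress) queue, e.g. pfifo_fast limits.
        out = subprocess.run(["tc", "qdisc", "show", "dev", interface],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    if __name__ == "__main__":
        print("netdev_max_backlog:", read_netdev_max_backlog())
        print("tc qdisc:", show_qdisc("eth0"))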

    Read the article

  • How to document requirements for an API systematically?

    - by Heinrich
    I am currently working on a project where I have to analyze the requirements that two given IT systems, which use cloud computing, have for a Cloud API. In other words, I have to analyze what requirements these systems have for a Cloud API such that they would be able to switch it while still being able to accomplish their current goals. Let me give you an example of some informal requirements of Project A: When starting virtual machines in the cloud through the API, it must be possible to specify the memory size, CPU type, operating system and an SSH key for the root user. It must be possible to monitor the inbound and outbound network traffic per hour per virtual machine. The API must support the assignment of public IPs to a virtual machine and the retrieval of those public IPs. ... In a later stage of the project I will analyze some cloud computing standards that standardize cloud APIs, to find out where possible shortcomings in the current standards are. A finding could, and probably will, be that a certain standard does not support monitoring resource usage and thus is not currently usable. I am currently trying to find a way to systematically write down and classify my requirements. I feel that the way I currently have them written down (like the three points above) is too informal. I have read a couple of requirements engineering and software architecture books, but they all focus too much on details and implementation. I really only care about the functionality provided through the API/interface, and I don't think UML diagrams etc. are the right choice for me. I think the requirements I have collected so far can be described as user stories, but is that already enough for a sophisticated requirements analysis? Probably I should go "one level deeper" ... Any advice/learning resources for me?

    Read the article

  • Do you keep your ideas secret? and why?

    - by MainMa
    I believe any programmer has several ideas that she/he considers innovative, or at least valuable. It may be an idea for a new product which will make this world better, or a new development approach, etc. But a great idea must be implemented and promoted/advertised. This requires a lot of work (proofs of concept, prototypes, technology previews, etc.) and a lot of money (appropriate advertisement, marketing, etc.). So months later, the idea stays in our heads, but nothing else is done, because it's difficult, long and expensive, sometimes even impossible for a single developer. On the other hand, it would be painful to share our ideas and then see a medium-size company, which has enough resources, make something useful from them and have success and money. So what do you do with your ideas that you can hardly implement or patent? Do you talk freely about them on discussion boards and with other developers? Do you keep them like a precious thing without ever talking about them to anybody? If you keep your ideas, why are you doing so? Is it just because you hope that one day you will be able to implement them and have a huge success, while you know very well from experience that it's a utopia?

    Read the article

  • When to use each user research method

    - by user12277104
    There are a lot of user research methods out there, but sometimes we get stuck in a rut, conducting all formative usability testing before coding, or running surveys to gather satisfaction data. I'll be the first to admit that it happens to me, but to get out of a rut, it just takes a minute to look at where I am in the design & development cycle, what kind(s) of data I need, and what methods are available to me. We need reminders, or refreshers, every once in a while. One tool I've found useful is a graphic organizer that I created many years ago. It's been through several revisions, as I've adapted it to the product cycles of the places I've worked, changed my mind about how to categorize it, and added methods that I've used or created over time. I shared a version of this table at the 2012 International UPA conference, and I was contacted by someone yesterday who wanted to use it in a university course on user-centered design. I was flattered at the thought, but embarrassed, because I was sure it needed updating -- that was a year ago, after all. But I opened it today, and really, there's not much I'd change -- sure, I could add some nuance regarding what types of formative testing, such as modality (remote, unmoderated remote, or in-person) or flavor of testing (RITE, RITE-Krug, comparative, performance), but I think it's pretty much OK as is. Click on the image below to get the full-size PDF. And whether it's entirely "right" or "wrong" isn't the whole value of looking at these methods across the product lifecycle. The real value lies in the reminder that I have options. And what those options are changes as the field changes, so while I don't expect this graphic to have an eternal shelf life, it's still OK a year after I last updated it. That said, if you find something missing or out of place, let me know :)

    Read the article

  • Determine Server specs for a Rails with MySQL database (on AWS)

    - by Rogier
    I developed an intranet application with Rails (3.2) for one of my customers. There will be around 30-40 employees working with it. The backend is MySQL (5). What would be the best way to determine the server specs needed? Given: max. load will be roughly 2400 (40*60) HTTP requests (mixed GET / POST) per hour. 15% of these calls are JSON calls (iOS). The average request will make between 5-10 database calls. 500-800 SQL INSERTs per day. Webpages are fairly simple (no images, just text). The average webpage is 15 requests (css/js/etc) and the total size is 35-45 KB. More specifically, since they need access from multiple geographical locations, we are thinking of running a Bitnami Ruby stack in the AWS cloud (uptime is important). Any thoughts on an AWS instance (small/medium) and utilization (light/medium/heavy)? Thanks!
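
    A quick back-of-the-envelope sketch of the load implied by those numbers (all figures are taken from the question itself; the 3x peak factor is an assumption):

    # Back-of-the-envelope load estimate from the figures in the question.
    # The 3x peak factor is an assumption, not a measured value.
    REQUESTS_PER_HOUR = 2400        # 40 users * 60 requests/hour
    DB_CALLS_PER_REQUEST = (5, 10)  # min, max database calls per request
    PAGE_SIZE_KB = (35, 45)         # total page weight, min and max
    PEAK_FACTOR = 3                 # assumed burstiness over the hourly average

    avg_rps = REQUESTS_PER_HOUR / 3600.0
    peak_rps = avg_rps * PEAK_FACTOR
    db_qps = (avg_rps * DB_CALLS_PER_REQUEST[0], avg_rps * DB_CALLS_PER_REQUEST[1])
    bandwidth_kbps = avg_rps * PAGE_SIZE_KB[1] * 8  # kilobits/s, worst-case page size

    print(f"average app requests/s: {avg_rps:.2f}")
    print(f"assumed peak requests/s: {peak_rps:.2f}")
    print(f"database queries/s: {db_qps[0]:.1f} - {db_qps[1]:.1f}")
    print(f"rough bandwidth: {bandwidth_kbps:.0f} kbit/s")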

    Read the article

  • Collision and Graphics integration

    - by Shlomi Atia
    I'm a little confused about the integration between collision and graphics. They both need to share the same position in the world. The most obvious choice is the center of the entity, which is good for bounding volumes and fixed-size sprites. However, for characters with variable-height sprites like this: http://gamemedia.wcgame.ru/data/2011-07-17/game-sprite-sheet.jpg this is no longer good. The character won't align to the ground if I draw it from the center. I could just make the sprites the same height, but it would be a waste of memory (the largest sprite is 4 times larger than the smallest one). Even then, this is not an option at all with skeletal sprites like this one: http://user-generated-content.java-gaming.org/img-vault/212a171fc1ebb27ab77608fb9b2dd9bd9205361ce6300b21a7f8d06d025fbbd8.png It seems that the graphics need to be drawn from the ground for characters, but not for other images such as scenery and obstacles. The only solution I could think of was having another position called draw-position, which is the entity center for images and the bottom of the collision volume for characters. Then when I draw relative to that position, it should work properly. I haven't found any references for something like that, so I'm kinda insecure about it. Does anyone know of a better approach to this problem? Thanks
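
    A minimal sketch of the draw-position idea described above (the names, and the convention of y growing downward with sprites drawn from their top-left corner, are assumptions):

    # Minimal sketch of the "draw-position" idea: characters are anchored at the
    # bottom-center of their collision box, plain images at their center.
    # Assumes screen coordinates with y growing downward and sprites drawn
    # from their top-left corner.
    from dataclasses import dataclass

    @dataclass
    class AABB:
        x: float   # left
        y: float   # top
        w: float
        h: float

    def draw_position(box: AABB, sprite_w: float, sprite_h: float, grounded: bool):
        """Top-left corner at which to draw the sprite for this collision box."""
        if grounded:
            # Characters: align the sprite's bottom-center with the box's
            # bottom-center, so variable-height frames stay planted on the ground.
            return (box.x + box.w / 2 - sprite_w / 2, box.y + box.h - sprite_h)
        # Scenery/obstacles: center the sprite on the box's center.
        return (box.x + box.w / 2 - sprite_w / 2, box.y + box.h / 2 - sprite_h / 2)

    # Example: a tall attack frame (64x96) drawn for a 32x48 collision box.
    print(draw_position(AABB(100, 200, 32, 48), 64, 96, grounded=True))   # (84.0, 152.0)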

    Read the article

  • How many copies are needed to enlarge an array?

    - by user10326
    I am reading an analysis of dynamic arrays (from Skiena's algorithm manual), i.e. when we have an array structure and each time we are out of space we allocate a new array of double the size of the original. It describes the waste that occurs when the array has to be resized. It says that elements (n/2)+1 through n will be moved at most once or not at all. This is clear. Then, by describing that half the elements move once, a quarter of the elements twice, and so on, the total number of movements M is given by M = sum_{i=1}^{lg n} i * n / 2^i. This seems to me to add more copies than actually happen. E.g. if we have the following:

    array of 1 element
    +--+
    |a |
    +--+
    double the array (2 elements)
    +--++--+
    |a ||b |
    +--++--+
    double the array (4 elements)
    +--++--++--++--+
    |a ||b ||c ||c |
    +--++--++--++--+
    double the array (8 elements)
    +--++--++--++--++--++--++--++--+
    |a ||b ||c ||c ||x ||x ||x ||x |
    +--++--++--++--++--++--++--++--+
    double the array (16 elements)
    +--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--+
    |a ||b ||c ||c ||x ||x ||x ||x ||  ||  ||  ||  ||  ||  ||  ||  |
    +--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--+

    We have the x element copied 4 times, the c element copied 4 times, the b element copied 4 times, and the a element copied 5 times, so the total is 4+4+4+5 = 17 copies/movements. But according to the formula we should have 1*(16/2)+2*(16/4)+3*(16/8)+4*(16/16) = 8+8+6+4 = 26 copies of elements for the enlargement of the array to 16 elements. Is this a mistake, or is the aim of the formula to provide a rough upper-limit approximation? Or am I misunderstanding something here?
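
    A small simulation of the doubling scheme makes it easy to compare the copies that actually happen against the formula's value for n = 16 (this is just a sketch of the scenario in the question; it counts only the moves made during a resize, which is an assumption about the counting convention):

    # Count how many times elements are actually copied while a dynamic array
    # doubles from 1 slot upward, inserting one element per append, and compare
    # with the formula sum_{i=1}^{lg n} i * n / 2**i.
    from math import log2

    def simulate(target=16):
        capacity, size, moves = 1, 0, 0
        for _ in range(target):
            if size == capacity:      # out of space: double and copy everything
                moves += size         # every existing element moves once
                capacity *= 2
            size += 1
        return moves

    def formula(n=16):
        lg = int(log2(n))
        return sum(i * n // 2**i for i in range(1, lg + 1))

    print("actual moves:", simulate(16))   # 1 + 2 + 4 + 8 = 15
    print("formula value:", formula(16))   # 8 + 8 + 6 + 4 = 26 (an upper-bound style count)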

    Read the article

  • Is Winpcap able to capture all packets going through a Gigabit NIC without missing any packets?

    - by Patrick L
    I want to use Winpcap to capture all network packets going through a Gigabit NIC of a server. Assuming that I am able to utilize the network link up to 100%, the maximum network speed is 1000Mbps. If we exclude the TCP/IP headers, the maximum TCP data rate should be roughly 940Mbps. Let's say I send a 1GB file through the NIC at 940Mbps using TCP destination port 6000. I use Winpcap to capture all network packets going through the NIC and then dump it to a pcap file. If I use Wireshark to analyze the pcap file and then check the sum of packet size for all network packets sent to TCP port 6000, am I able to get exactly 1GB from the pcap file? Thanks.
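
    A rough back-of-the-envelope sketch of where the numbers in the question come from (a 1500-byte MTU and minimal 20-byte IPv4/TCP headers are assumed; TCP options, retransmissions and capture drops will change the real figures):

    # Back-of-the-envelope framing overhead for bulk TCP on Ethernet.
    # Assumes a 1500-byte MTU and minimal 20-byte IPv4/TCP headers; TCP options,
    # retransmissions and any packets dropped by the capture will change the result.
    MTU = 1500
    ETH_HEADER = 14            # captured by pcap
    ETH_OVERHEAD_ON_WIRE = 38  # preamble 8 + header 14 + FCS 4 + inter-frame gap 12
    IP_HEADER = 20
    TCP_HEADER = 20

    payload = MTU - IP_HEADER - TCP_HEADER   # 1460 bytes of file data per full frame
    captured_frame = ETH_HEADER + MTU        # 1514 bytes as seen in the pcap
    wire_frame = MTU + ETH_OVERHEAD_ON_WIRE  # 1538 bytes on the wire

    print(f"goodput share of line rate: {payload / wire_frame:.1%}")        # roughly 94.9% of 1000 Mbps
    print(f"captured bytes per payload byte: {captured_frame / payload:.3f}")

    file_size = 1 * 1024**3                  # the 1 GB file from the question
    frames = file_size / payload
    print(f"expected sum of captured packet sizes: {frames * captured_frame / 1024**3:.2f} GB")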

    Read the article

  • Video lags/freezes in SMPlayer and VLC

    - by RanRag
    When I try to play my video files in SMPlayer it works fine, but as soon as I switch to fullscreen mode (16:9) the following things happen: 1) Video starts lagging. 2) Audio and video go out of sync. 3) CPU usage rises to ~50%. 4) SMPlayer starts to hang. My current SMPlayer configuration: 1) Video Output Driver = x11 (slow) 2) Audio Output Driver = alsa (0.0-HDA Intel) 3) Cache = 8192 KB 4) Threads for decoding (MPEG-1/2 and H.264 only) = 2. Things I tried to solve this problem: 1) Tried changing the video output driver to xv, gl. 2) Tried changing the audio output driver to pulse. 3) Tried increasing the cache size and also tried using nocache. Everything works fine on Windows, but I don't want to switch to Windows just to play video files. My system config: Acer Aspire One D270, Atom N2600 (Cedar Trail) 1.6GHz, 2GB memory, Intel GMA 3600 graphics, Ubuntu 12.04, kernel release 3.2.0-23-generic-pae. Everything else is working fine; I have no resolution issue, and bluetooth and wireless are also working fine. Just ask me for any other log file and I will be happy to post it. SMPlayer log, MPlayer terminal output, Codec Information (currently playing file):

    Read the article

  • How to use second volume device of Amazon EC2

    - by Khoyendra Pande
    I have two volumes on Amazon EC2, where the default 1 GiB volume I am using has filled up. Now I want to use my second volume, which is 9 GiB. I ran the command cat /proc/partitions and got:

    major minor  #blocks  name
      202     1  1048576  xvda1
      202    80  9437184  xvdf

    Then I ran mkfs.ext3 -F /dev/sdf and it shows mkfs.ext3: No such file or directory while trying to determine filesystem size. Then I ran df and got:

    Filesystem   1K-blocks     Used  Available  Use%  Mounted on
    /dev/xvda1     1032088  1031280          0  100%  /
    tmpfs           313160        8     313152    1%  /lib/init/rw
    udev            297800       24     297776    1%  /dev
    tmpfs           313160        4     313156    1%  /dev/shm
    overflow          1024       32        992    4%  /tmp

    This means I am still unable to use my 9 GiB volume. I can confirm I have two volumes, where the attachment information is i-7e4fb41c:/dev/sda1 (attached) and i-7e4fb41c:/dev/sdf (attached), and only sda1 is in use. Does anyone know how I can use my second volume (sdf)? Thx

    Read the article

  • Dual monitors with one above the other?

    - by Felix
    I'm using Gnome 3 and proprietary Nvidia drivers. I have tried to set in nvidia-settings my external monitor to be "above" my main one (it's a laptop). However, when I try to drag a window up from the main display to the external one, it gets stuck and can't move past a certain point. Trying to maximize it changes its decoration so it looks maximized (i.e. no borders, etc), but its size or position doesn't change. Now, if I set my external monitor to be "to the left" of the main one, it works, which is why I'm suspecting this is a Gnome issue, not an Nvidia one. Anyone know how to fix this? Update: some versions: Gnome: 3.2.2.1 Nvidia: 280.13 Update 2: I can see that Gnome 3.4 is out, and among the release notes is better external monitor support. However, they only mention a small fix that is unrelated to my problem. Can anyone with Gnome 3.4 and access to an external monitor please test this out and tell me if it works? I don't want to go through the hassle of upgrading my Ubuntu installation unless I know for certain it's going to fix the problem.

    Read the article

  • Why Outlook 2007 pasted images are larger than original?

    - by Jersey Dude
    I have been using Outlook 2007 for over a year with no image problems. Around September 1st, images that I paste into messages started being enlarged in the messages. This happens with WinSnap, the Vista Snipping Tool, or any JPEG pasted into the message. I tried JPEGs with 96dpi settings without success. I tried different Outlook Format Picture ... and Size ... settings. The problem happens with both RTF and HTML messages. Attached images are OK. Something mysteriously changed and I cannot figure it out. I googled this to death without any success (others have the problem but there is no solution). This is driving me nuts because I snap screenshots all day long ("a picture is worth a thousand words"). Thanks in advance.

    Read the article

  • Ubuntu 12.04.1 LTS and Nvidia driver (304.51) 64bit: problem 640x480

    - by nibianaswen
    I have a problem with this configuration: Asus K55V, Ubuntu 12.04 LTS and NVIDIA driver 304.51. I removed the nouveau driver with: apt-get --purge remove xserver-xorg-video-nouveau I installed the official NVIDIA driver (from www.nvidia.com), but when I reboot the PC the screen resolution is only 640x480 and the monitor is resized. There is no solution to this problem even if I change xorg.conf. Now I have uninstalled the NVIDIA driver and reinstalled it with sudo apt-get purge nvidia-current sudo apt-add-repository ppa:ubuntu-x-swat/x-updates sudo apt-get update sudo apt-get install nvidia-current When I reboot, the screen resolution and size are OK, but if I start nvidia-settings I receive the message: You do not appear to be using the NVIDIA X driver. and with the command: sudo lshw -c display | grep driver I receive configuration: driver=i915 latency=0 This sounds like the system is using the Intel card. When I launch the command lspci | grep VGA the output is: 00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation Device 1058 (rev ff) And there is no /etc/X11/xorg.conf. I have read a lot of guides on the internet but without success. How can I use the NVIDIA card with the driver that I have installed?

    Read the article

  • failed to use mutt to send mail to company mailbox

    - by Acewind
    I'm using mutt & postfix on CentOS 6.2: mutt-1.5.20-2.20091214hg736b6a.el6_1.1.x86_64 postfix-2.6.6-2.2.el6_1.x86_64 When I try to send mail to my company mailbox, I receive an error: mutt -s "test" [email protected] < /home/mail.txt The error from postfix: : host out1.ourcompany.com[10.30.17.100] said: 555 Syntax error (in reply to MAIL FROM command) Then I tried to use the sendmail service as the SMTP server, but that also failed: ----- The following addresses had permanent fatal errors ----- (reason: 555 Syntax error) ----- Transcript of session follows ----- ... while talking to out1.ourcompany.com.: MAIL From: SIZE=667 <<< 555 Syntax error 554 5.0.0 Service unavailable Can anybody tell me why? Thanks!!!! I can confirm that DNS is OK, and I set realname "root@myserver" in /etc/Muttrc

    Read the article

  • Installing ruby 1.9.1 on OS X with RVM, getting error I can't make sense of

    - by Pselus
    I'm trying to update my ruby install on Leopard to at least 1.9.1. I found a tutorial that tells me how to do it with RVM and I get as far as downloading, configuring and compiling the version I want, but during the compile I get errors. When checking the make.error.log file this is the message I get: [2010-11-07 13:43:44] make main.c: In function ‘objcdummyfunction’: main.c:19: warning: implicit declaration of function ‘objc_msgSend’ main.c: At top level: main.c:19: warning: ‘objcdummyfunction’ defined but not used eval.c: In function ‘ruby_cleanup’: eval.c:139: warning: passing argument 1 of ‘ruby_init_stack’ discards qualifiers from pointer target type gc.c: In function ‘garbage_collect_with_gvl’: gc.c:597: warning: cast from pointer to integer of different size w: illegal option -- L usage: w [hi] [user ...] make: [libruby.1.9.1.dylib] Error 1 (ignored) readline.c: In function ‘username_completion_proc_call’: readline.c:1159: error: ‘username_completion_function’ undeclared (first use in this function) readline.c:1159: error: (Each undeclared identifier is reported only once readline.c:1159: error: for each function it appears in.) make[1]: *** [readline.o] Error 1 make: *** [mkmain.sh] Error 1 I have no idea what any of that means. Help?

    Read the article

  • It could be worse....

    - by Darryl Gove
    As "guest" pointed out, in my file I/O test I didn't open the file with O_SYNC, so in fact the time was spent in OS code rather than in disk I/O. It's a straightforward change to add O_SYNC to the open() call, but it's also useful to reduce the iteration count - since the cost per write is much higher:

    ...
    #define SIZE 1024

    void test_write()
    {
      starttime();
      int file = open("./test.dat", O_WRONLY|O_CREAT|O_SYNC, S_IWGRP|S_IWOTH|S_IWUSR);
    ...

    Running this gave the following results:

    Time per iteration 0.000065606310 MB/s
    Time per iteration 2.709711563906 MB/s
    Time per iteration 0.178590114758 MB/s

    Yup, disk I/O is way slower than the original I/O calls. However, it's not a very fair comparison since disks get written in large blocks of data and we're deliberately sending a single byte. A fairer result would be to look at the I/O operations per second, which is about 65 - pretty much what I'd expect for this system. It's also interesting to examine the profiles for the two cases. When the write() was trapping into the OS, the profile indicated that all the time was being spent in system. When the data was being written to disk, the time got attributed to sleep. This gives us an indication of how to interpret profiles from apps doing I/O. It's the sleep time that indicates disk activity.
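
    For anyone who wants to reproduce the effect without the C harness, a rough Python equivalent of the synchronous single-byte write loop might look like the sketch below (POSIX only; the file name and iteration count are arbitrary):

    # Rough Python equivalent of the O_SYNC single-byte write test: each write
    # must reach the disk before the call returns, so the loop measures I/O
    # operations per second rather than memory-speed copies into the page cache.
    # File name and iteration count are arbitrary; requires a POSIX os.O_SYNC.
    import os, time

    ITERATIONS = 256

    fd = os.open("./test.dat", os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
    start = time.monotonic()
    for _ in range(ITERATIONS):
        os.write(fd, b"x")              # one byte per synchronous write
    os.close(fd)
    elapsed = time.monotonic() - start

    print(f"time per write: {elapsed / ITERATIONS * 1e3:.3f} ms")
    print(f"write operations per second: {ITERATIONS / elapsed:.0f}")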

    Read the article

  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server, followed up with recommendations from the Page Speed & Y-Slow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private/public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if Y-Slow suggests there is something wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront. When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies, and cargo-cult configurations. Any of the official information provided by the analysis tools mentioned above is quite inaccessible as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching. What are the hard and fast rules for a platform-agnostic Cache-Control strategy? EDIT: A link to Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules: If the file is compressed using gzip, etc. - use "Cache-Control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way, though). Also remember to include "Vary: Accept-Encoding" to say that it is compressible. Use Last-Modified in conjunction with ETag - belt-and-braces usage provides both validators, while ETag is based on file contents instead of modification time alone; using both covers all bases. NOTE: AOL's PageTest has a carte blanche approach against ETags for some reason. If you are using Apache on more than one server to host the same content, then remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size") unless you are genuinely using the same live filesystem. Use "Cache-Control: public" wherever you can - this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
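
    A small sketch of what those rules look like when a response is built by hand (the helper below and its max-age value are illustrative assumptions, not part of the rules above):

    # Illustrative sketch of the header rules above; the helper function and its
    # max-age value are assumptions, not part of any particular framework.
    import hashlib
    from email.utils import formatdate

    def caching_headers(body: bytes, mtime: float, gzipped: bool) -> dict:
        headers = {
            # Per the rules: gzipped responses are kept private so a proxy never
            # hands the compressed copy to a client that can't decode it;
            # everything else is public so proxies and browsers may cache it.
            "Cache-Control": "private" if gzipped else "public, max-age=3600",
            # Belt and braces: both validators, ETag from content,
            # Last-Modified from time.
            "ETag": '"%s"' % hashlib.md5(body).hexdigest(),
            "Last-Modified": formatdate(mtime, usegmt=True),
        }
        if gzipped:
            headers["Content-Encoding"] = "gzip"
            headers["Vary"] = "Accept-Encoding"   # caches must key on encoding
        return headers

    print(caching_headers(b"<html>...</html>", 1349040000.0, gzipped=False))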

    Read the article

  • Linux servers seeing bad download performance behind Sonicwall firewall

    - by Joshua Penix
    I'm working with a pair of co-located CentOS Linux servers sitting behind a Sonicwall PRO 2040 Enhanced firewall running in transparent bridge mode. These servers are having a strange problem downloading files more than a few megabytes in size. For example, if I try to wget or FTP a copy of the Linux kernel from kernel.org, the first ~1-2MB will download at 600+K/s, and then throughput will drop off a cliff to 1K/s. I've reviewed all the firewall configuration settings for anything suspicious, but found nothing. More interestingly, I performed the same download with a Windows server sitting behind the same firewall, and it sailed right through at 600+K/s the whole way. Has anyone seen this? Where should I start looking to troubleshoot this problem?

    Read the article

  • Nvidia 9 series or Intel HD 2000? [closed]

    - by EApubs
    I just tested an Nvidia 9300 GS card against an Intel Core i3 HD 2000 graphics system. Here are the Windows Experience Index scores I got: Nvidia 9300 GS: Base Score 3.9, Processor 7.1, Memory 7.5, Graphics 3.9, Gaming Graphics 5.1, Hard Disk 5.9. Intel HD 2000: Base Score 5.2, Processor 7.1, Memory 5.9, Graphics 5.2, Gaming Graphics 5.8, Hard Disk 5.9. My questions are: When using Intel HD graphics, it reduces the score of my RAM! How is that possible? It checks the speed of the RAM, not the size (I think). Intel graphics take some of the RAM space, but how can that affect the speed? Of the two, which would be the better choice?

    Read the article

  • When mapping the surface of a sphere with tiles, how might you deal with polar distortion?

    - by clweeks
    It's easy to deal with the way locations interact on a clean Cartesian grid. It's just vanilla math. And you can kind of ignore the geometry of the sphere's surface for a bunch of it if you want to just truncate the poles or something. But I keep coming up with ideas for games where the polar space matters. Geo-coded ARGs and global roguelikes and stuff. I want square(ish?) locations -- reasonably representable by square tiles of the same size across the globe, anyway. This has to be a solved problem, right? What are the solutions? ETA: At the equator -- and assuming that your square locations are reasonably small, it's close enough to true that you can get away with having one square in the rows north and south of the most equatorial row. And you could probably get away with that by just hand-waving the difference up to like 45-degrees or so. But eventually, you need to have fewer squares in a pole-ward circumferential row. If I reduce the length of the row by one and offset the squares by 1/2 then they're just like hexes and it's relatively easy to do the coding to keep track of the connections. But as you get pole-ward, it gets more and more extreme. Projecting the surface of the world onto the surface of a cube is tempting. But I figured there must be more elegant solutions already in use. If I did the cube thing (not dissecting it further through geodesy) Are there any pros and cons related to placing the pole at the center of a face or at the vertex of three sides?
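
    One way to make the pole-ward shrinkage concrete is to scale the number of tiles in each circumferential row by the cosine of its latitude; a quick sketch (the 10-degree row height and the 36 equatorial tiles are arbitrary choices):

    # Quick sketch of pole-ward row shrinkage: the circumference of a circle of
    # latitude scales with cos(latitude), so a row of roughly-square tiles needs
    # proportionally fewer tiles as it approaches the pole.
    # The 10-degree row height and 36 equatorial tiles are arbitrary choices.
    import math

    EQUATOR_TILES = 36   # tiles in the row nearest the equator
    ROW_DEGREES = 10     # angular height of each row

    for lat in range(0, 90, ROW_DEGREES):
        mid = math.radians(lat + ROW_DEGREES / 2)   # latitude at the row's middle
        tiles = max(1, round(EQUATOR_TILES * math.cos(mid)))
        print(f"row {lat:2d}-{lat + ROW_DEGREES:2d} deg: {tiles} tiles")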

    Read the article

  • UDP multicast streaming of media content over WIFI

    - by sajad
    I am using VLC to stream media content over a wireless network in a scenario like this (from content streamer to stream-receiving client): The bandwidth of the wireless network is 54 Mb/s and the UDP stream's required bandwidth is only 4 Mb/s; however, there is trouble receiving the media stream, and playback quality suffers, specifically in multicast mode; meaning I can play the stream, but it has jitter and does not play smoothly. In unicast I can stream up to 5 media streams correctly, but in multicast mode there is a problem streaming just one! However, when I stream some multicast streams from the client, the WiFi access point can receive the data correctly and I can see the video on the "udp streamer" side correctly, even when the number of multicast streams increases to 9; but as you see, I want to stream from the streaming server and receive media on the client side. Is this a typical problem of streaming real-time content over wireless networks? Is it necessary to change the configuration of my WiFi switch, or is it just a software problem? Thank you

    Read the article

  • Can't detect collision properly using Rectangle.Intersects()

    - by Daniel Ribeiro
    I'm using a single sprite sheet image as the main texture for my breakout game. The image is this: My code is a little confusing, since I'm creating two elements from the same Texture using a Point to represent the element size and its position on the sheet, a Vector to represent its position in the viewport, and a Rectangle that represents the element itself.

    Texture2D sheet;

    Point paddleSize = new Point(112, 24);
    Point paddleSheetPosition = new Point(0, 240);
    Vector2 paddleViewportPosition;
    Rectangle paddleRectangle;

    Point ballSize = new Point(24, 24);
    Point ballSheetPosition = new Point(160, 240);
    Vector2 ballViewportPosition;
    Rectangle ballRectangle;
    Vector2 ballVelocity;

    My initialization is a little confusing as well, but it works as expected:

    paddleViewportPosition = new Vector2((GraphicsDevice.Viewport.Bounds.Width - paddleSize.X) / 2, GraphicsDevice.Viewport.Bounds.Height - (paddleSize.Y * 2));
    paddleRectangle = new Rectangle(paddleSheetPosition.X, paddleSheetPosition.Y, paddleSize.X, paddleSize.Y);

    Random random = new Random();
    ballViewportPosition = new Vector2(random.Next(GraphicsDevice.Viewport.Bounds.Width), random.Next(GraphicsDevice.Viewport.Bounds.Top, GraphicsDevice.Viewport.Bounds.Height / 2));
    ballRectangle = new Rectangle(ballSheetPosition.X, ballSheetPosition.Y, ballSize.X, ballSize.Y);
    ballVelocity = new Vector2(3f, 3f);

    The problem is I can't detect the collision properly, using this code:

    if (ballRectangle.Intersects(paddleRectangle))
    {
        ballVelocity.Y = -ballVelocity.Y;
    }

    What am I doing wrong?

    Read the article

  • How do large companies handle software updates for users without administrative rights?

    - by CT
    I just started working for a small-to-medium-size company doing IT support. Maybe 150 or fewer users. Right now every user has administrative rights to their own machine. This allows them to install updates or whatever else they would like to. I'm tired of getting on users' machines that are bloated with crap they put on themselves. So my first thought would be to take away administrative rights to their computers. This would also have other advantages, such as preventing a lot of drive-by malware on the web, etc. The problem arises that users are then unable to install updates. (Even though I find most ignore these anyway.) How do large companies handle software updates on all client machines? EDIT: Windows environment. Most servers are Windows Server 2003 Enterprise. Clients are all Windows: Win XP, Vista, and 7.

    Read the article
