Search Results

Search found 10444 results on 418 pages for 'macbook pro retina'.


  • How to create a CGBitmapContext that works for Retina displays without wasting space on regular displays?

    - by ????
    Is it true that inside UIKit, including drawRect, the HD aspect of the Retina display is handled automatically? Does that mean that in drawRect, the current graphics context for a 1024 x 768 view is actually a 2048 x 1536 pixel bitmap context? (Is there a way to print this size out to verify it?) We actually enjoy the luxury of 1 point = 4 pixels being handled for us automatically.

    However, if we use CGBitmapContextCreate, those will really be pixels, not points? (At least if we provide a data buffer for that bitmap, the size is not for the higher resolution but for the standard resolution; and even if we pass NULL as the buffer so that CGBitmapContextCreate handles the buffer for us, the size is probably the same as if we had passed in a data buffer, so it is standard resolution, not Retina resolution.)

    We can always create 2048 x 1536 on the iPad 1 and iPad 2 as well as the new iPad, but that will waste memory, processor and GPU power, as it is only needed for the new iPad. So do we have to use an if () { } else { } to create such a bitmap context, and how do we actually do so? And all our CGContextMoveToPoint code has to be adjusted for the Retina display to use x * 2 and y * 2, versus just using x, y on a non-Retina display? That can make the code quite messy. (Or maybe we can define a local variable scaleFactor and set it to 1 for standard resolution and 2 for Retina, so our coordinates are always x * scaleFactor, y * scaleFactor instead of just x and y.)

    It seems that UIGraphicsBeginImageContextWithOptions can create one for Retina automatically if a scale of 0.0 is passed in, but I don't think it can be used if I need to create the context and keep it (using an ivar or property of a UIViewController to hold it). If I don't release it using UIGraphicsEndImageContext, then it stays on the graphics context stack, so it seems I have to use CGBitmapContextCreate instead. (Or do we just let it stay at the bottom of the stack and not worry about it?)
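    A minimal sketch of one common pattern (an assumption about intent, not an authoritative answer): size the bitmap in pixels using the screen's scale factor, then scale the CTM once, so all drawing code keeps working in points and no per-call x * 2 adjustments are needed:

        CGFloat scale = [UIScreen mainScreen].scale;   // 1.0 on standard displays, 2.0 on Retina
        size_t pixelWidth  = (size_t)(1024 * scale);
        size_t pixelHeight = (size_t)(768 * scale);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, pixelWidth, pixelHeight,
                                                 8, 0, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
        CGContextScaleCTM(ctx, scale, scale);          // draw in points from here on

    On an iPad 1 or 2 the scale is 1.0, so nothing is wasted; on a Retina iPad the same code allocates the larger backing store automatically, with no if/else needed.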

    Read the article

  • Are you aware of .NET Reflector Pro?

    I'm sure many of my readers know Reflector, the tool for decompiling assemblies to see what they contain, maybe investigating what Microsoft has done in the base .NET assemblies, maybe trying to understand third-party assemblies (or maybe just trying to recover lost source code ;-) ). It's an invaluable tool to have in your toolbox. One scenario where it helps a lot is SharePoint development, when you run into problems with the API. But are you aware that Microsoft handed the product over to Red Gate Software (http://www.red-gate.com), which released a Pro version of Reflector (http://www.red-gate.com/products/reflector/index.htm) a couple of months ago? Have a look at the feature set on top of the free version:

      - Full support for .NET 1.0, 1.1, 2.0, 3.0, 3.5, and 4.0
      - Decompile an entire assembly to either C# or VB to view and debug in Visual Studio
      - Step-through debugging of any assembly in Visual Studio (as long as it's not obfuscated): step into and set breakpoints anywhere in any assembly
      - Watch variables in the decompiled code
      - Use Visual Studio's advanced debugging features in decompiled code: Set Next Statement, modify variable values, and dynamic expression evaluation in the Immediate window

    I strongly encourage you to have a look at .NET Reflector Pro in case you haven't done so already.

    Read the article

  • Pro BizTalk 2009

    - by Sean Feldman
    I have finished reading the Pro BizTalk 2009 book from Apress. This is a great book if you've never dealt with BizTalk in the past and want a quick on-ramp. Although the book is very concerned with the right way of building traditional BizTalk applications, it also dedicates a chapter to the ESB Toolkit and does a good job of analyzing it. The fact that the authors were concerned with subjects such as coupling, hard-coding, automation, etc. makes it very interesting. One warning: if you are expecting a book that will guide you through step-by-step examples, forget it. This book doesn't do that, nor would it be possible given the multiple errata found in it. I really was disappointed by the number of typos, inaccuracies, and technical mistakes. These kinds of things turn readers away, especially when they have had no experience with BizTalk in the past. But this is the only bad thing about it. As for the rest – great content. Next week I am taking a deep-dive course on BizTalk 2009. I really feel that this book has helped me a lot to get my feet wet. The next target will be the ESB Toolkit. To get to that, I will use what the book's authors wrote: "you have to understand how BizTalk as an engine works".

    Read the article

  • No dual screens with 11.10 and an Asus M4A89 GTD Pro

    - by Alex
    I'm having an issue getting dual monitors working in Kubuntu 11.10. I have an Asus M4A89 GTD Pro/USB3 motherboard with an integrated ATI HD4290 graphics chip. When I try to enable multiple monitors through the system settings, it says: "This module is only for configuring systems with a single desktop spread across multiple monitors. You do not appear to have this configuration."

    I had previously attempted to fix this problem with another installation of Ubuntu 11.10, but ended up having to reinstall Ubuntu because I messed up the Software Center dependencies. After I installed Ubuntu the first time, a notification showed up asking me to install an ATI graphics driver. I installed this driver, then restarted, and dual monitors did not work. That was when I went to the ATI site and attempted to install the fglrx driver. When I tried to run the shell script for the fglrx driver, it said I had a previous version of an fglrx driver installed and needed to remove it in order to install the new one. So I looked up a tutorial on how to remove it and found some apt-get remove command, which I ran. Then I was able to install the new driver. Dual monitors still did not work, and I couldn't use the Software Center any more because it was corrupted and was unable to repair itself. So I just reinstalled Ubuntu, and now I'm trying to go about this the correct way. Does anyone have this same configuration, and which driver works for you?

    Read the article

  • Questions about MacBook Pro keyboard and shortcuts

    - by SimonSolnes
    I have been researching a lot about this and I cannot seem to find any useful information on the topic whatsoever. I have now been using Ubuntu for a week, and have gotten pretty confident with almost everything except the keyboard layout and shortcuts. If you know a tutorial or document explaining keyboard shortcuts in Ubuntu, or Linux in general, can you please list it? I use a MacBook Pro with a Norwegian keyboard, and I have several questions about this:

      - Is there a program for getting a concise list of absolutely all keyboard shortcuts, and being able to change them?
      - How do I use my Fn keys? (The Fn button doesn't do the job for some reason.)
      - How can I use alt+letter-or-number-key or alt+shift+letter-or-number-key to get fancy characters, like I do on Mac OS X?
      - How can I swap the Cmd and Ctrl keys system-wide? (See the sketch below.)

    I really want to be oriented around this subject, since this is the only thing holding me back on Ubuntu, so if some in-depth material on this exists, that would be great. Also, if there are programs or material out there that make things easier with Mac hardware, I would enjoy that. Sorry if my questions seem vague. Thank you very much. Simon
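    For the Cmd/Ctrl swap specifically, one classic approach is an ~/.Xmodmap file loaded with xmodmap ~/.Xmodmap (a sketch under the assumption that Cmd produces the Super_L keysym, as it usually does on Apple keyboards):

        ! swap the left Command (Super_L) and left Control keys
        remove mod4 = Super_L
        remove control = Control_L
        keysym Super_L = Control_L
        keysym Control_L = Super_L
        add control = Control_L
        add mod4 = Super_L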

    Read the article

  • Please help with bounding box/sprite collision in DarkBASIC Pro

    - by user1601163
    So I just recently learned BASIC and figured I would try making a clone of Pong on my own in DarkBASIC Pro, and I made everything else work just fine except for the part that makes the ball bounce off the paddle. And yes, I'm aware that the game is not yet finished. The error is on lines 39-51. EVERYTHING IS 2D.

        ///////////////////////////////////////////////////////////
        //
        // Project: Pong
        // Created: Friday, August 31, 2012
        // Code: Brandon Spaulding
        // Art: Brandon Spaulding
        // Made in CIS lab at CPAVTS
        // Pong art and code © Brandon Spaulding 2012-2013
        //
        //////////////////////////////////////////////////////////
        y=150
        x=0
        ay=150
        ax=612
        ballx=300
        bally=200
        ballx_DIR=1
        bally_DIR=1
        hide mouse
        set global collision on
        //objectnumber=10
        //make object box objectnumber,5,150,0
        do
            load image "media\paddle1.png",1
            load image "media\paddle2.png",2
            load image "media\ball.png",3
            sprite 1,x,y,1
            sprite 2,ax,ay,2
            sprite 3,ballx,bally,3
            if upkey()=1 then y = y - 4
            if downkey()=1 then y = y + 4
            //num_1 = sprite collision(1,0)
            //num_2 = sprite collision(2,0)
            num_3 = sprite collision(3,0)
            for t=1 to 2
                //ball&paddle collision
                if num_3 > 0
                    if bally_DIR=1
                        bally_DIR=0
                    else
                        bally_DIR=1
                    endif
                    if ballx_DIR=0
                        ballx_DIR=1
                    else
                        ballx_DIR=0
                    endif
                endif
                //if bally > 1 and bally < 500 then bally=bally + 2.5
                if bally_DIR=1
                    bally=bally-2.5
                    if bally<-2.5
                        bally_DIR=0
                    endif
                else
                    bally=bally+2.5
                    if bally>452.5
                        bally_DIR=1
                    endif
                endif
                if ballx_DIR=1
                    ballx=ballx-2.5
                    if ballx<-2.5
                        ballx_DIR=0
                    endif
                else
                    ballx=ballx+2.5
                    if ballx>612
                        ballx_DIR=1
                    endif
                endif
                //bally = bally + t
                //if bally < 600 or bally > 1 then bally = bally - 2.5
                //if ballx < 400 or ballx > 1 then ballx = ballx + 2.5
                //move sprite 3,1
            next t
            if escapekey()=1 then exit
        loop
        end

    Thank you in advance for the help.
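    A hedged observation rather than a definitive fix: because the direction flips sit inside the for t=1 to 2 loop while num_3 is computed only once per frame, a paddle hit toggles each _DIR variable twice, and two toggles cancel out, so the ball never actually bounces. Something along these lines, doing the check once per frame outside the loop and reversing only the horizontal direction for a side-paddle hit, may be closer to what was intended:

        num_3 = sprite collision(3,0)
        if num_3 > 0
            // a hit on a left/right paddle should reverse only the horizontal direction
            ballx_DIR = 1 - ballx_DIR
        endif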

    Read the article

  • Twitter Keyboard Shortcuts – Use Twitter Like a Pro

    - by Gopinath
    Keyboard shortcuts are the way to go for every ninja who wants to get things done on a computer quickly. If you want to become a Twitter ninja, here are the keyboard shortcuts to quickly read, reply, retweet and do more:

      . – Refresh the list of tweets.
      / – Go to the Search box.
      M – Open a new Message in a pop-up window.
      N – Open a new tweet in a pop-up window.
      Press G, then R – Open Replies.
      Press G, then M – Open the Messages inbox.
      Press G, then F – Open Favourites.
      Press G, then H – Go Home.
      Press G, then P – Display your profile.
      Press G, then U – Go to another user's profile; input the Twitter name in the displayed box.
      Shift + F – Add the selected tweet to your Twitter Favourites.
      Shift + R – Reply to the selected tweet.
      Shift + T – Retweet the selected tweet.

    cc image credit: flickr/davemott

    Read the article

  • 12.04 Wi-Fi not working correctly with Intel Pro/Wireless 3945ABG

    - by Jerel
    This is a very strange problem. The only time the Wi-Fi works is when I have the wired connection connected. I have a Dell 6400/e1505 with the Broadcom BCM4401-B0 internal NIC and the Intel Pro/Wireless 3945ABG Wi-Fi adapter. I was on 10.04 LTS before, and I remember having to screw around to get the Wi-Fi to work, but that was years ago, and now it's acting even stranger than it did back then.

    If I boot up with the wire connected, the Wi-Fi starts up. If I boot up without the wire connected, the Wi-Fi won't start at all. If, after booting with the wire connected and the Wi-Fi starting up, I then disable the wireless, the wired connection works fine. Without the wire connected, dmesg shows the iwl3945 driver loading and starting with no errors, and the wireless adapter activated. Running rfkill list shows no blocks, but there is no connection to my Wi-Fi. Help! I am more confused than a chameleon in a bag of Skittles.

    Read the article

  • Yoga Pro 2 Wi-Fi not working

    - by user293004
    I installed Ubuntu 14.04 on my new Yoga Pro 2 and the wireless is not working. It started with Windows 8 on it. The Network Manager says Wi-Fi is disabled by a hardware switch. I tried putting a blacklist file in /etc/modprobe.d, as has been suggested in many places. I called the file "blacklist-ideapad_laptop.conf" and wrote in the file:

        blacklist ideapad_laptop

    I checked to make sure that the wireless is enabled in the BIOS. It is. I ran rfkill list all and it displayed:

        0: hci0: Bluetooth
                Soft blocked: no
                Hard blocked: no
        2: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: yes

    I ran iwlist wlan0 scan and it displayed:

        wlan0     Failed to read scan data : Network is down

    I ran sudo rmmod ideapad_laptop and it displayed:

        rmmod: ERROR: Module ideapad_laptop is not currently loaded.

    I ran ifconfig wlp1s0 up and it displayed:

        wlp1s0: ERROR while getting interface flags: No such device.

    I ran lspci and it displayed:

        01:00.0 Network controller: Intel Corporation Wireless 7260 (rev 6b)

    I ran sudo lshw -c network and it displayed:

        *-network DISABLED
             description: Wireless interface
             product: Wireless 7260
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             logical name: wlan0
             version: 6b
             serial: 7c:7a:91:5f:9b:fa
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlwifi driverversion=3.13.0-24-generic firmware=22.24.8.0 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
             resources: irq:61 memory:b0400000-b0401fff

    The question "No wireless with Intel Centrino Advanced-N 7260" seems to be dealing with a similar issue. It suggests that I need to update my firmware, so I downloaded iwlwifi-7260-ucode-23.214.9.0 from Intel's website, put the file "iwlwifi-7260-9.ucode" in /lib/firmware, and ran "sudo lshw -c network" again. It displayed exactly as before. Is there something else I need to do to install the new firmware?
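    One quick check (a hedged suggestion, not part of the original question): after copying a new .ucode into /lib/firmware, the driver has to be reloaded before it will pick the file up, and dmesg reports which firmware version was actually loaded. Something along these lines (module names assumed for the 7260 on this kernel):

        sudo modprobe -r iwlmvm iwlwifi    # unload the driver stack
        sudo modprobe iwlwifi              # reload; firmware is read from /lib/firmware
        dmesg | grep -i firmware           # should report the loaded .ucode version

    Note also that rfkill reports "Hard blocked: yes", which points at a hardware kill switch or hotkey rather than at the firmware itself.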

    Read the article

  • Microsoft Desktop Player is a Valuable Tool for IT Pros

    - by Mysticgeek
    If you are an IT professional, a new education tool introduced by Microsoft is the MS Desktop Player. Today we take a look at what it has to offer, from webcasts, white papers and training videos to more.

    Microsoft Desktop Player

    You can run the player from the website (shown here) or download the application for use on your local machine (link below). It allows you to easily access MS training and information in a central interface. To get the desktop version, download the .msi file from the site and run through the installer.

    When you first start out, enter whether you're an IT Pro or Developer, and your role. Then you can decide on the resources you're looking for, such as Exchange Server, SharePoint, Windows 7, Security, etc. Here is an example of checking out a podcast on Office 2007 setup and configuration from TechNet Radio. Under Settings you can customize your search results and local resources. This helps you narrow down pertinent information for your needs.

    If you find something you really like, hover the pointer over the screen and you can add it to your library, share it, send feedback, and check for additional resources. If you don't need items in your library, they can be easily deleted. Under the News tab you get previews of Microsoft news items; clicking on one will open the full article in a separate browser. While you're watching a presentation you can show or hide the details related to it.

    Conclusion

    Microsoft Desktop Player is currently in beta, but has a lot of cool features to offer for your learning needs. You can easily find podcasts, webcasts, and more without having to browse all over the place. In our experience we didn't notice any bugs, and what it offers so far works well. If you're a geek who's constantly browsing TechNet and other Microsoft learning sites, this helps keep everything consolidated in one app. Download Microsoft Desktop Player

    Read the article

  • Why is SSH finding remote keys for other accounts?

    - by Brian Pontarelli
    This is a strange issue I'm having with SSH from my MacBook Pro to a Linux (Ubuntu 11.10) server. I have a DSA key set up on the remote Linux server under my home directory like this:

        /home/me/.ssh/authorized_keys

    I also have the same DSA key set up for a few other accounts on the machine, named "foo" and "bar". I can log into all of the accounts fine without any password. Therefore, the DSA keys are all set up correctly. The strange behavior I'm seeing is when debugging the SSH connection. During the connection, the SSH debug is outputting this:

        debug2: key: /Users/me/.ssh/id_dsa (0x7f91a1424220)
        debug2: key: /home/foo/.ssh/id_dsa (0x7f91a1425620)
        debug2: key: /home/bar/.ssh/id_rsa (0x7f91a1425c60)
        debug2: key: /Users/me/.ssh/id_rsa (0x0)

    This is strange for so many reasons, but essentially, why is SSH listing out keys on the server (/home/foo/.ssh/id_dsa and /home/bar/.ssh/id_rsa)? These files don't even exist on the server, so why are they listed? I'm not logging into the "foo" or "bar" accounts, so why is SSH even listing those? On my MacBook Pro I only have a DSA key, but SSH is listing out an RSA key; what's that all about? Another user on the server doesn't get any of these messages when they log in, and they have the exact same setup for their DSA key and the exact same MacBook Pro setup as mine. Does anyone know what these messages are and why SSH is outputting them?
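    One thing worth checking (an assumption, not part of the original question): paths like these usually come from a running ssh-agent rather than from the remote server. The agent remembers the path each identity had when it was added (for example via agent forwarding while logged in elsewhere), and the client enumerates every agent identity during authentication. Listing and clearing the agent's identities would confirm it:

        ssh-add -l    # list the identities the agent currently holds, with their original paths
        ssh-add -D    # remove all identities from the agent, then retry the connection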

    Read the article

  • How to access an inaccessible Mac OS X hard drive via Ubuntu

    - by jon
    Background: my intention was to load a virtual machine (VM) on my Mac OS X Snow Leopard system. My Mac had just enough room for a VM (my thought process was that a VM was the same as a partition). However, I burned the newest version of Ubuntu onto a CD, thinking that partitioning and running a virtual machine would be the same. I restarted my computer, booting up the Ubuntu installer. The installation would not allow me to partition, forcing me to force-shutdown my laptop. When I turn on my laptop, I see that my computer is "missing operating system". So, can someone help me fix a) Boot Camp, b) getting my files back, and, if a and b are fixed, c) installing Ubuntu as a VM?

    Read the article

  • Copying content from webpages in Safari to HTML

    - by Carl Smith
    Hi, is there an easier way to copy and paste website content as HTML? I want the copy to look like this:

        Product Information:
        Length: S / M / L
        Material: Polyester and Elasthane
        Brand: Roxana Exclusive
        Style: Basque

    But when I paste it into my content box it looks like this:

        Product Information Length: S / M / L Material: Polyester and Elasthane Brand: Roxana Exclusive Style: Basque

    Then I need to edit it in the HTML editor to rearrange it. Is there some sort of app or plugin I can get to turn the text of the page into HTML, so it looks right straight away when I copy it into my content box? If that makes any sense? Thanks, Carl Smith :-)
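    For illustration, here is roughly what the desired markup might look like once the line structure is preserved (a hypothetical sketch; the exact markup depends on the content editor in use):

        <p>Product Information:</p>
        <ul>
          <li>Length: S / M / L</li>
          <li>Material: Polyester and Elasthane</li>
          <li>Brand: Roxana Exclusive</li>
          <li>Style: Basque</li>
        </ul>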

    Read the article

  • Apple MBP 7,1 Color profile xcalib

    - by cloudlight
    I have installed Kubuntu 64-bit on my MBP 7,1. After installation I was going through the following documentation (Screen section): https://help.ubuntu.com/community/MacBookPro7-1/Maverick#Screen

    I am confused about the line:

        /usr/bin/xcalib "/etc/xcalib/<insert name of profile here>"

    Which profile should I put in that place? I am getting the following output on the command prompt:

        bharat@bharat-MacBookPro:~$ ls /etc/xcalib
        Color LCD-00000610-0000-9CC5-0000-000004273140.icc
        EPSON PJ -00004CA3-0000-A600-0000-000028E98001.icc
        SyncMaster-00004C2D-0000-0117-4C45-31370B4074F5.icc
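    Judging by the profile names alone (an assumption, not from the original post): "Color LCD" is the name Mac OS X gives the built-in panel, while the EPSON PJ and SyncMaster entries look like an external projector and monitor. For the laptop's own screen the line would then read:

        /usr/bin/xcalib "/etc/xcalib/Color LCD-00000610-0000-9CC5-0000-000004273140.icc"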

    Read the article

  • Ubuntu 12.04 update + VirtualBox VM 4.1.14 r77440 - No input

    - by Gubatron
    I can only enter my password with the keyboard, and this is what I see after I log in (screenshot). If I try to switch to a terminal with Ctrl+Alt+F1, I get this (screenshot). I don't have a clue how to fix this, or if it's even possible to fix this without access to a terminal. The recovery console does not work either. It opens, but keyboard input doesn't seem to be working for it either. Has anybody had this happen?

    Update (next day): Keyboard input lets me do Alt+F2 and invoke a terminal, but I can't change focus to the terminal window that opens. Any tip on how to switch focus (Alt+Tab won't work) to the terminal window? Maybe looking at this you might have a clue what's going on; maybe there's no window manager loading?

    Read the article

  • MacBookPro 7,1 can only see one CPU

    - by gozzilli
    On a MacBookPro 7,1 running Ubuntu 11.10, System Monitor only sees 1 core (instead of 2). cat /proc/cpuinfo | grep 'cpu cores' also gives:

        cpu cores : 1

    I followed this guide and added acpi_apic_instance=2 to the line with GRUB_CMDLINE_LINUX_DEFAULT, but that doesn't seem to change the situation. What can I do? I'm using rEFIt, installed under Mac OS, and running in dual boot with Mac OS. After the rEFIt menu, I'm still presented with the GRUB menu (I'm assuming that's normal). I saw similar posts on this matter, but could not fix my problem with what they suggested.

    EDIT: With the method mentioned above, the computer sometimes runs with 1 core and other times with 2. Why is that? How can it be fixed?
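    For reference, a minimal sketch of the change being described (assuming Ubuntu's standard GRUB 2 setup; the key detail is regenerating grub.cfg afterwards, since editing /etc/default/grub alone has no effect until update-grub runs):

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_apic_instance=2"

        sudo update-grub    # regenerate /boot/grub/grub.cfg, then reboot

    Checking /proc/cmdline after a reboot shows whether the parameter actually reached the kernel, which might help explain the sometimes-1-core, sometimes-2-core behavior.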

    Read the article

  • Can't connect to or search Wi-Fi

    - by Mark
    I just installed the latest version of Ubuntu; I think it was 13-something. Anyway, I got it running, and now I've been trying to connect to my Wi-Fi AP. No matter where I go, I don't see an option for Wi-Fi. I looked on the computer and found some commands to use in a terminal to run a scan for Wi-Fi networks, but all of them returned the error:

        interface doesn't support scanning

    What are step-by-step directions to make this work?
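    A hedged starting point (assumptions: the adapter shows up on the PCI/USB bus and NetworkManager is installed, both standard on desktop Ubuntu 13.x):

        sudo lshw -C network      # identify the wireless adapter and whether a driver is bound to it
        rfkill list               # check for soft/hard kill-switch blocks
        nmcli dev wifi list       # ask NetworkManager itself to scan

    "interface doesn't support scanning" often means the scan was run against the wrong interface name, or that no wireless driver is loaded at all, which the lshw output would show.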

    Read the article

  • How do I run an iPad emulator?

    - by John
    I have no real need for the entire SDK; my project involves mainly JS and CSS on an ePub structure. My friend lent me his MacBook, on which I'm typing now. I can create everything on a PC box, import it to the MacBook, and then test it on the emulator. So, how do I install an iPad emulator on a MacBook? Thank you for your assistance.

    Read the article

  • How do I disable an Nvidia 9600GT on a MacBookPro 5,1?

    - by Gjan
    I finally set up a dual boot with Ubuntu and Lion on my old MacBookPro 5,1. As reported in many cases, the discrete graphics card is turned on all the time, consuming a lot of power and thus heating up the laptop. Since the discrete graphics card does not support the Nvidia Optimus technology, the corresponding package nvidia-prime does not help in this case. Therefore my question is: how do I manually disable the discrete Nvidia 9600GT graphics card? Preferably a 'switch-on-the-run' version, but a 'set-on-boot' one would be totally fine!
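    One 'switch-on-the-run' avenue worth trying (hedged: this relies on the open-source nouveau driver and a kernel with vga_switcheroo support, and whether this particular dual-Nvidia model is supported varies):

        sudo mount -t debugfs none /sys/kernel/debug                 # if debugfs isn't already mounted
        cat /sys/kernel/debug/vgaswitcheroo/switch                   # list the GPUs and their power state
        echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch   # power down the inactive GPU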

    Read the article

  • Multitask Like a Pro with AquaSnap

    - by Matthew Guay
    Are you tired of shuffling back and forth between windows? Here's a handy app that can help you keep all of your windows organized and accessible.

    AquaSnap is a great free utility that helps you use multiple windows at the same time easily and efficiently. One of Windows 7's greatest new features is Aero Snap, which lets you easily view windows side by side by simply dragging windows to the side of your screen. After using Windows 7 for the past year, Aero Snap is one of the features we really miss when using older versions of Windows. With AquaSnap, you now have all of the features of Aero Snap and more in Windows 2000, XP, Vista, and of course Windows 7. Not only does it give you Aero Snap features, but AquaSnap also gives you more control over your windows to make you more productive.

    Getting Started

    AquaSnap is a free download for Windows 2000, XP, Vista, and 7. Download the small installer (link below) and install it with the default settings. AquaSnap automatically runs as soon as it is installed, and you will notice a new icon in your system tray.

    Now you can go ahead and put it to use. Drag a window to any edge or corner of your desktop, and you will see an icon showing what part of the screen the window will cover. Dragging it to the side of the screen expands the window to fill the right half of the screen, just like the default Aero Snap in Windows 7. You can drag the window away to restore it to its former size. AquaSnap works on any corner of the screen too, so you can have 4 windows side by side. We already have 3 windows snapped to the corners, and notice that we're dragging a fourth window to the bottom right corner. You can also snap windows to the bottom and top of the screen. Here we have Word snapped to the bottom half of the screen, and we're dragging Chrome to the top.

    You can even snap internal windows in Multiple Document Interface (MDI) programs such as Excel. Here we are snapping a workbook in Excel to the left to view 2 workbooks side by side. Additionally, AquaSnap lets you keep any window always on top. Simply shake any window, and it will turn semi-transparent and stay on top of all other windows. Notice the transparent calculator here on top of Excel.

    All of AquaSnap's features work great in Windows 2000, XP, and Vista too. Here we are snapping IE6 to the left of the screen in XP, and here are 3 windows snapped to the sides in XP. You can mix the snap modes and have, for instance, two windows on the right side and one window on the left. This is a great way to maximize productivity if you need more space in one of the windows. Even AquaShake works to keep a window transparent and on top in XP.

    Settings

    AquaSnap has a detailed settings dialog where you can tweak it to work exactly like you want. Simply right-click on its icon in the taskbar, and select Settings. From the first screen, you can choose if you want AquaSnap to start with Windows, and if you want it to show an icon in the system tray. If you turn off the system tray icon, you can access the AquaSnap settings from Start > All Programs > AquaSnap > Configuration (or simply search for Configuration in Vista or Windows 7).

    The second tab in settings lets you choose what you want each snapping region to do. You can also choose two other presets, including AeroSnap (which works just like the default Aero Snap in Windows 7) and AquaSnap simple (which only snaps at the edges of the screen, not the corners). The third tab lets you increase or decrease the opacity of pinned windows when using AquaShake, and also lets you increase or decrease the shaking sensitivity. Additionally, if you prefer the standard AeroShake functionality, which minimizes all other open windows when you shake a window, you can choose that too. The fourth tab lets you activate an optional feature, AquaGlass. If you activate this, it will make windows turn transparent when you drag them across the screen. Finally, the last tab lets you change the color and opacity of the preview rectangle, or simply turn it off.

    Or, if you want to temporarily turn AquaSnap off, simply right-click on its icon and select Off. In Windows 7, turning off AquaSnap will restore your standard Windows Aero Snap functionality, and in other versions of Windows it will stop letting you snap windows at all. You can then repeat the steps and select On when you want to use AquaSnap again.

    Conclusion

    AquaSnap is a handy tool to make you more productive at your computer. With a wide variety of useful features, there's something here for everyone. Download AquaSnap

    Read the article

  • Can't ping from Win7 running on a MacBook Pro, and can't ping the MacBook Pro from any other PC

    - by jorame
    I have Windows 7 Enterprise running on a MacBook Pro, and for some reason I'm not able to ping any PC in the house from it, nor the other way around: I can't ping the MacBook Pro running Win7 from any other PC. I looked online but I can't find anything that explains this issue or how to resolve it. Any help will be really appreciated. Please do let me know if I'm posting this in the wrong area. Thank you.
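    One common culprit to rule out (an assumption, not from the original post): by default, the Windows 7 firewall drops inbound ICMPv4 echo requests, which makes the machine unpingable even when everything else works. Allowing echo requests from an elevated command prompt looks like this:

        netsh advfirewall firewall add rule name="Allow ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow

    That only covers the "ping the MacBook" direction; failing pings outward from the machine would point elsewhere, for example at the NIC driver under Boot Camp.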

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'.

    I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly the result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database.

    I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), a 10ms difference won't be noticed in your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

    Performance tuning basics and rules

    Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, will lead you down the path of premature optimization and won't actually solve your problems, only create new ones.

    The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
    If I solely look at the Linq query, the code consuming the resultset of the 10 rows, and the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

    The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of the total time T for that same task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem -> no solution.

    One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software as possible, you will inevitably be faced with the dilemma of compromising one or more of {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get, plus you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

    Isolate

    The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area, with a clear begin and end, and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

    Analyze

    Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
    This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, web service, Windows, etc.), a part which controls the interface and business logic, the O/R mapper part, and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a web page with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

    Start with the code you wrote which starts the task, analyze the code, and track the path it follows through your application. What does the code do along the way? Verify whether it's correct or not, and analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

    Reviewing the code you wrote is a good tool for getting a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and for example how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

    .NET profiling

    .NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code, and thus they reveal where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone. Remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

    As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
    You navigate to the particular part which is slow, start profiling in the profiler, perform the actions which are considered slow in your application, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

    O/R mapper and RDBMS profiling

    .NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand how much the parts of the O/R mapper and the database contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers, like NHibernate (NHProf) and Entity Framework (EFProf), and they work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor ships a profiler. For example, for SQL Server the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; however, there are also 3rd party tools.

    Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time.

    Another tool to use in this case, which is more low-level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate both do), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

    Interpret

    After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take place, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation.

    All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself, or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or the O/R mapper / RDBMS profilers, check whether that query is optimal, uses indexes, or has to deal with a lot of data.

    Interpret means that you follow the path from begin to end through the data collected and determine where along the path the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10-row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary, based on what the analysis results show: if the analysis results were unexpected, and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly; new analysis is only necessary if the situation has changed.

    Fix

    After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results.

    After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
    Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS. These are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers, or data access and databases in general. In most cases it's not a big issue: alternatives are often good choices too, and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

    Most common performance problems with O/R mappers

    Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step.

      - SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid, with a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each row. This means you'll get for the single list not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this: use eager loading with a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

      - Prefetch paths using many path nodes, or sorting, or limiting: an eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, the amount of data fetched in a child node can be bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it can also take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

      - Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

      - Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the number of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasource control used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform paging functionality on the data reader, so it won't fetch all rows even if the query suggests it does.

      - Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

      - Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse the data in-memory to calculate a value.

      - Using .ToList() constructs inside Linq queries. It might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do in-memory filtering, as the Linq query is defined on an IEnumerable<T> and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run (see the sketch at the end of this post).

      - Fetching all entities to delete into memory first. To delete a set of entities, it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache.

      - Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop, and afterwards saving them. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.

    Conclusion

    Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as well as knowing what's going on inside the application you built.
    I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.
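    Referring back to the .ToList() item in the list above, a minimal C# sketch of the difference (the Customer type and the IQueryable<Customer> source are stand-ins for an O/R mapper root like metaData.Customers; any Linq provider behaves the same way):

        using System.Collections.Generic;
        using System.Linq;

        public class Customer
        {
            public string Country { get; set; }
        }

        public static class QueryExamples
        {
            public static List<Customer> ServerSideFilter(IQueryable<Customer> customers)
            {
                // The predicate stays inside the IQueryable, so the provider
                // translates it to SQL: only matching rows cross the wire.
                return customers.Where(c => c.Country == "Norway").ToList();
            }

            public static IEnumerable<Customer> ClientSideFilter(IQueryable<Customer> customers)
            {
                // .ToList() first: every customer is fetched and materialized,
                // and the Where then runs in memory via Linq to Objects.
                return customers.ToList().Where(c => c.Country == "Norway");
            }
        }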

    Read the article
