Search Results

Search found 691 results on 28 pages for 'slider'.


  • add navigation buttons with highlight to jQuery Cycle

    - by JJ Nold
    I am currently using jQuery Cycle to display featured projects. I currently have the next and prev buttons set up, and I also have four rounded buttons in between. I would like them to function both as links to the corresponding slides and to add a class to the button whose slide is currently active. I have the nav set up and the active class set up; I just need the jQuery. You can see the code at http://jjnold.com/dev in the main slider at the top. I was thinking it's probably something like this to assign the class, but I'm not sure how to link to the corresponding slide and have them work together. Any help is greatly appreciated.

    $('.featuredNavInt a').click(function() {
        $('.featuredNavInt a').removeClass('active');
        $(this).addClass('active');
    });
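
    A hedged sketch of how the two pieces usually fit together: jQuery Cycle's pager options can drive existing anchors, which makes a separate click handler unnecessary. The '.featured' container name is illustrative, not taken from the linked page.

    // Let Cycle wire the nav anchors to slides and manage the active class itself.
    $('.featured').cycle({
        fx: 'fade',
        pager: '.featuredNavInt',                       // container holding the nav anchors
        pagerAnchorBuilder: function (idx, slide) {
            return '.featuredNavInt a:eq(' + idx + ')'; // reuse the existing anchors
        },
        activePagerClass: 'active'                      // class applied to the current anchor
    });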

    Read the article

  • Why do my UI controls work on iPhone but not iPad?

    - by Andrew
    I have an iPhone app which consists of a table view containing various custom UITableViewCells, with UISlider and UISwitch controls. When running the app on the iPhone, I can move the slider and switches contained in each of the table cells. When I run the same app on the iPad, I cannot. None of the controls within the table view will respond to touches. I have checked 'User Interaction Enabled' within Interface Builder. Anybody got any suggestions of where I should be looking for the cause of the different behavior between the iPhone and iPad with respect to UIControls?

    Read the article

  • How to adjust scroll sliding according to a value I calculate?

    - by cng
    I am working on a scroll bar. I have made a program wherein, if the scroll box is in a particular range of the panel, it automatically moves to a particular position in the slider. For example, I have a panel of size 500 and the scroll box has a height of 100, so there are 5 intervals in total. If I slide the scroll box to position 225, I want it to automatically slide to the start of that interval, at position 200. Or if I slide to position 450, then instead of staying there it goes to position 400. Is it possible?
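
    The snapping described is just a floor to the nearest interval boundary. A minimal JavaScript sketch, with the panel size and interval count taken from the question and the function name assumed for illustration:

    var panelSize = 500;              // total scrollable range
    var intervals = 5;                // number of intervals
    var step = panelSize / intervals; // 100 units per interval

    function snap(position) {
        return Math.floor(position / step) * step;
    }

    snap(225); // 200 - start of the interval containing 225
    snap(450); // 400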

    Read the article

  • HTML/Javascript - Inconsistent positioning

    - by user1473358
    I'm in the process of designing this site http://www.parisgaa.org/parisgaels and have a problem. The image slider on the homepage messes up sometimes. Most of the time it works and looks fine, but other times its positioning seems to get messed up and it appears underneath the content that should be below it (i.e. with that content overlapping the image). You should be able to replicate this in Chrome - just refresh a couple of times. I'd appreciate any help at all.
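
    One common cause of this exact symptom (fine on some loads, broken on others) is a slider that measures image heights before the images have finished loading, so the result depends on whether the images came from cache. A hedged sketch of the usual fix, where initSlider stands in for whatever setup call the page currently makes on DOM ready:

    // Wait for all images to load before the slider measures anything.
    $(window).on('load', function () {
        initSlider(); // hypothetical name for the page's existing slider setup
    });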

    Read the article

  • Setting height of a DIV to correspond with location of anchor inside said DIV

    - by filip
    Core issue: http://jsfiddle.net/pipip/P46Xg/ I have a div container with a few paragraphs of text, and inside one of these paragraphs is the following anchor: <a id="stop" /> The container is set to overflow:hidden. Is it possible with JavaScript / jQuery to set the height of the container so that the bottom of the container stops exactly at or below the anchor? Added depth & background: http://jsfiddle.net/pipip/yj9dB/ This would be used for a modified jQuery slider, where someone using a CMS could type [readmore] anywhere into the Content field, which would be replaced by the above anchor via PHP. This way they would easily be able to control where the Read More break appears within the container. In the associated example I am hard-coding the height to 75px, although what I want is for the height to depend on the location of the anchor id="stop" in the text. Thanks. If this is an awful way to go about it, I'm all ears!
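
    A minimal jQuery sketch of the measurement, assuming '#container' is a hypothetical id for the overflow:hidden div; using the offset difference avoids worrying about which element is the anchor's offset parent:

    var $container = $('#container'); // assumed id for the overflow:hidden wrapper
    var $stop = $('#stop');

    // Height from the top of the container down to just below the anchor.
    var clipHeight = $stop.offset().top - $container.offset().top + $stop.height();
    $container.height(clipHeight);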

    Read the article

  • How To Rip a Music CD in Windows 7 Media Center

    - by DigitalGeekery
    If you're a Media Center user, you already know that it can play and manage your digital music collection. But did you know you can also rip a music CD in Windows 7 Media Center and have it automatically added to your music library?
    Rip a CD in Windows 7 Media Center
    Place your CD into your optical drive. From within Windows Media Center, open the Music Library and select the CD. If you haven't previously ripped a CD in Windows 7 with either Windows Media Center or Windows Media Player, you'll be prompted to select whether or not you'd like to add copy protection. Click Next. By default, your CD will be ripped to .WMA format. The rip settings for Windows Media Center are pulled from Windows Media Player, so to change the rip settings we'll need to do so in Media Player. Click Finish. From within Windows Media Player, click on Tools from the Menu bar and select Options. If you are new to Windows Media Player 12, check out our beginner's guide on how to manage your music with WMP 12. Select the Rip Music tab and choose your output format from the Format drop-down list. You can also select the audio quality (bit rate) by moving the slider bar under Audio quality. Click OK when you are finished. Now you are ready to rip your CD. Click on Rip CD, then click Yes to confirm you want to rip the CD. You can follow the progress as each track is being converted. When the CD is finished, you're ready to start enjoying your music any time you wish in Windows 7 Media Center. Looking for some more tasks you can perform in Media Center with just a remote? Check out our earlier post on how to crop, edit, and print photos in Windows Media Center.

    Read the article

  • Silverlight Cream for May 02, 2010 -- #854

    - by Dave Campbell
    In this Issue: Michael Washington, Jason Young(-2-, -3-), Phil Middlemiss, Jeremy Likness, Victor Gaudioso, Kunal Chowdhury, Antoni Dol, and Jacek Ciereszko(-2-).
    Shoutout: Victor Gaudioso has aggregated All of My Silverlight Video Tutorials in One Place (revised again 05.02.10).
    From SilverlightCream.com:
    • Unit Testing A Silverlight 'Simplified MVVM' Modal Popup: Michael Washington's latest 'Simplified MVVM' post is published at The Code Project and is on unit testing with MVVM.
    • Input Localization in Silverlight without IValueConverter: Jason Young sent me some links to posts I've not seen... this first one is on localization by using the Language property of the Root Visual.
    • MVVM – The Model - Part 1 – INotifyPropertyChanged: Jason Young's next archive post is the first of a series on MVVM and Silverlight 4... implementing a simple ViewModel base class.
    • Silverlight, WCF, and ASP.Net Configuration Gotchas: Jason Young worked at tracking down the answers to some forum questions and in the process has produced a post of 'gotchas' with using WCF in Silverlight.
    • A Chrome and Glass Theme - Part 5: Phil Middlemiss has part 5 of his Chrome and Glass Theme tutorial up... in this one, he's looking at the Progress Bar and Slider. Download the files and play along.
    • Silverlight Out of Browser (OOB) Versions, Images, and Isolated Storage: Jeremy Likness has a post up responding to his 3 major questions about OOB apps, and he has code up for the sample too.
    • New Silverlight Video Tutorial: How to Make a Slide In/Out Navigation Bar – All in Blend: Victor Gaudioso's latest video tutorial is on building a Behavior for a slide in/out navigation bar... kinda like the menu sliders on my GlyphMap Utility... only easier!
    • Command Binding in Silverlight 4 (Step-by-Step): Kunal Chowdhury has another post up at DotNetFunda, and this time he's talking about command binding in Silverlight 4 with an eye toward MVVM usage.
    • The Silverlight PageCurl implementation: Antoni Dol has a post up about doing a page curl effect in Silverlight. He has a manual up on the effect and full application code.
    • How to center and scale Silverlight applications using ViewBox control: Jacek Ciereszko has a couple of posts up about centering and scaling your app with the ViewBox control. This first one is a code solution. Source is available, as is a Polish version.
    • Silverlight Center And Scale Behavior: In Jacek Ciereszko's second post, he provides a Behavior that handles the scaling and centering of the previous post.

    Read the article

  • Emoti-phrases

    - by Tony Davis
    Surely the next radical step in the development of user-interface design is for applications to react appropriately to the rising tide of bewilderment, frustration or antipathy of their users. When an application understands that rapport is lost, it should respond accordingly. When we, for example, become confused by an unforgiving interface, shouldn't there be a way of signalling our bewilderment and having the application respond appropriately? There is surprisingly little in the current interface standards that would help. If we're getting frustrated with an unresponsive application, perhaps we could let it know of our increasing irritation by means of an "I'm getting angry and exasperated" slider. Although, by 'responding appropriately', I don't include playing a "we are experiencing unusually heavy traffic: your application usage is important to us" message, accompanied by calming muzak. When confronted with a tide of wizards, 'are you sure?' messages, or page after page of tiresome and barely-relevant options, how one yearns for a handy 'JFDI' (Just Flaming Do It) button. One click and the application miraculously desists with its annoying questions and just gets on with the job, using the defaults, or whatever we selected last time. Much more satisfying, and more direct for most developers and DBAs, however, would be the facility to communicate with the application via a Twitter-style input field, or via parameters to command-line applications ("I don't want a wide-ranging debate with you; just open the bl**dy PDF!" or "Don't forget which of us has the close button"). Although, to avoid too much cultural dependence, perhaps we should take the lead from emoticons and use a set of standardized emoti-phrases such as 'sez you', 'huh?', 'Pshaw!', or 'meh', which could be used to vent a range of feelings in any given application, whether it be SQL Server stubbornly refusing to give us the result we are expecting, or an online survey getting too personal. Or a 'Lingua Glaswegia' perhaps:
    'Atsabelter' ("very good")
    'Atspish' ("must try harder")
    'AnThenYerArsFellAff' ("I don't quite trust these results")
    'BileYerHeid' or 'ShutYerGub' ("please stop these inane questions")
    There would, of course, have to be an ANSI standards body to define the phrases that were acceptable. Presumably there would be a tussle amongst the different international standards organizations. Meanwhile Oracle, Microsoft and Apple would each release non-standard extensions. Time then, surely, to plant emoti-phrases on the lot of them and develop a user-driven consensus. Send us your suggestions! The best one will win an iPod nano! Cheers! Tony.

    Read the article

  • View Images and Videos in 3D in Firefox

    - by Asian Angel
    Different websites have their own formats for viewing images and videos, but they may not be a lot of fun to use. The Cooliris extension for Firefox lets you view those same images and videos in a dynamic 3D format.
    Before
    For our example we conducted a search for nature photos at Flickr. You could view them in a static format or even as a slideshow, but what about something more dynamic looking?
    After
    As soon as the extension has finished installing, you will notice a new toolbar button used for launching the Cooliris tab. When you launch the Cooliris tab you will have an expandable menu system in the upper left corner, a speed dial setup in the center, and a small toolbar in the lower right corner. Before going further you should check and make any desired adjustments in the preferences to enhance your viewing experience. In the upper right corner you can start your search by selecting from the available sources. The same search for nature images is more focused and clean looking this time. Clicking on an image will bring it forward and enlarge it. You can use the slider tool at the bottom of the tab to browse left or right through the images and videos. And when you find one that interests you, click on the popout button to open it in a new tab.
    Conclusion
    The Cooliris extension makes viewing images and videos fun and interactive with its 3D-style format.
    Links
    Download the Cooliris extension (Mozilla Add-ons)
    Download Cooliris for Firefox, Internet Explorer, Safari (Mac Only), & Chrome

    Read the article

  • Brightness Crash and Fan issues in 12.04.1

    - by S.A. McIntosh
    I would just like to state beforehand that I am a total novice at using Ubuntu when it comes to the more complex issues, so I thought it would be best to finally come here and ask for help before being redirected or closed out for a solution. I have already looked high and low on this board, but nothing came up for my particular case, so I might as well take a shot asking for the first time here. This is what I have at the moment:
    - Dell Inspiron 1764 w/ 64-bit Intel i5 Core
    - Dual boot: Windows 7 / Ubuntu 12.04.1 32-bit (from 12.04 install)
    - Unity shell
    - Linux kernel: 3.2.0-32 generic-pae
    ...and this is my fglrxinfo:
    OpenGL vendor string: Advanced Micro Devices, Inc.
    OpenGL renderer string: ATI Mobility Radeon HD 5000 Series
    OpenGL version string: 4.2.11627 Compatibility Profile Context
    The one issue I have with using Ubuntu is brightness. With the driver installed, every time I use the slider in the Brightness and Lock settings or use the keyboard function, the screen freezes, goes black, and comes up with a page of scrambled colors, as shown in the video. So I have looked all over this board and the web for a solution. This is what I have done so far to fix it:
    - First solution: Looking around, I found a small fix using the terminal: sudo gedit /etc/rc.local, followed by adding this into rc.local: echo # > /sys/class/backlight/acpi_video0/brightness. This rarely works with the graphics driver still installed; I often get lucky, say during a restart, but a reboot just snaps the brightness back to max.
    - Second solution: Simply remove the graphics driver while keeping the first fix in place. This solves the issue but results in the monitor flickering and flashing at startup, which in itself is not a problem to me, but maybe not so good for monitor health. It also causes the fan to speed up throughout the session and renders any program that needs the driver useless.
    - Third solution: This is the most obvious: just use the brightness control in the AMD Catalyst Control Center software that came with the driver, and I can say that its form of brightness control is HORRIBLE compared to the actual settings.
    Which leads up to where I am now: back to the driver to stop the fan speed-up, and it seems that the only way around the brightness crash is to use the keyboard-controlled brightness at the login screen, NOT the desktop, if I want the desired effect, but it will just snap back to max brightness again if I restart. The fan speed problem is dealt with, but I now run the risk of crashing my computer if I so much as touch the brightness settings. Speaking of which, I found this on Launchpad, and it seems the issue has been going on since June of 2012. Any help, redirect link or reference would be greatly appreciated. Thank you.

    Read the article

  • SQL SERVER – What is SSAS Tabular Data model and Why to use it – Part 2

    - by Pinal Dave
    In my last article, I talked about the basics of the tabular data model and why to use it, and demonstrated step-by-step creation of a basic tabular model project. In this part I'm going to throw some light on how to create measures and analyze them in Excel. If you look at the tabular project closely, you will notice that we have not defined any measures yet, so the first step is to define one. Open the solution and select the column you want to define as a measure. Then click on the summation icon on the toolbar. You will see the aggregated results at the bottom of that column. You also have other choices, like average, min, max, count and distinct count. After creating the required measures, we need to analyze our data in Excel. To do this, click on the Excel icon in the upper left corner of the toolbar. This will open your analysis in Excel. Notice the pivot table field list here. I have highlighted the measures that we created in the earlier step. Now we can use these measures in our analysis. Next, we have to put the required fields in their respective places as column labels, row labels, values and report filter for the analysis. See the snapshot below for details; it shows region-wise sales on a yearly basis. You can also apply filters to the above analysis by placing the slicer field in the report filter. In our example, we will take an English product name as a filter. You can use the filter as depicted in the below snapshot. Optionally, you can also use the slider to filter data more interactively. To further improve our analysis, we can insert pivot charts. That's all for this time; in my next post I'm going to show in detail how to create hierarchies, perspectives, KPIs and many more features. Author: Namita Sharma, Senior Corporate Trainer at Koenig Solutions. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • How to use different scaling for every monitor?

    - by Mikael Koskinen
    One of the new features in Windows 8.1 is the new "Desktop display scaling", which allows the user to configure scaling per monitor. I've been trying to get this working in the preview, but with no success. If I configure the scaling, it always affects all of my monitors. I have two monitors: the main one with a higher resolution and the secondary with a "normal" resolution. The secondary monitor is used in portrait mode. I would like to configure the main monitor's scaling, as the text is currently too small. Here's how things look on the configuration screen by default: Now if I adjust the scaling, click Apply, and log back in, everything is bigger on both of my monitors. I haven't ticked "Let me choose one scaling level for all my displays", but still the slider seems to affect both of them. If I do tick "Let me choose one scaling level", the UI changes to look similar to what we have in Windows 8: Still the problem persists; the scaling is applied to both of my monitors. So it doesn't matter whether I tick the box or not, the scaling is always applied to all the displays. Any idea how I could get this to work in the preview? I've read some comments which seem to indicate that this should work in the preview, though Paul Thurrott mentioned in his Winsupersite article that he couldn't get it to work either.

    Read the article

  • How can I set the CD audio volume in Linux?

    - by user1296362
    In the Windows 7 Control Panel - Sound - Sound Properties window there's a slider for setting CD Audio volume, and it's pretty strange that I can't find a corresponding one in the generic Linux mixers, alsamixer or amixer. I connected a CD drive and tried to set the CD audio volume with cdcd (CD Player):

    $ cdcd setvol 0
    Invalid volume

    It isn't actually an invalid volume; it's because the ioctl() call fails. I found that out after searching and changing a bit of the source code of this utility (in libcdaudio):

    --- cdaudio.c.orig 2004-09-09 06:26:20.000000000 +0600
    +++ cdaudio.c 2012-05-30 21:34:34.167915521 +0600
    @@ -578,8 +578,10 @@
       cdvol_data.CDVOLCTRL_BACK_RIGHT_SELECT = CDAUDIO_MAX_VOLUME;
     #endif
    -  if(ioctl(cd_desc, CDAUDIO_SET_VOLUME, &cdvol) < 0)
    -    return -1;
    +  if(ioctl(cd_desc, CDAUDIO_SET_VOLUME, &cdvol) < 0) {
    +    printf("*** cd_set_volume: ioctl() returned error\n");
    +    return -1;
    +  }
       return 0;
     }

    By the way, cdcd's get volume command yields rather weird output:

           Left        Right
    Front  1281734864  32767
    Back   0           0

    Also I tried aumix:

    $ aumix -c 0

    But all with no success. I read in this manual (http://tldp.org/HOWTO/Alsa-sound-6.html, section 6.2 The mixer) that a CD channel can be present in amixer output. Maybe some sound card drivers are missing from my Ubuntu 12.04 LTS installation, though I don't think that's the case:

    $ lsmod | grep snd
    snd_mixer_oss          22602  0
    snd_hda_codec_hdmi     32474  1
    snd_hda_codec_realtek 223867  1
    snd_hda_intel          33773  4
    snd_hda_codec         127706  3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel
    snd_hwdep              13668  1 snd_hda_codec
    snd_pcm                97188  3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
    snd_seq_midi           13324  0
    snd_rawmidi            30748  1 snd_seq_midi
    snd_seq_midi_event     14899  1 snd_seq_midi
    snd_seq                61896  2 snd_seq_midi,snd_seq_midi_event
    snd_timer              29990  2 snd_pcm,snd_seq
    snd_seq_device         14540  3 snd_seq_midi,snd_rawmidi,snd_seq
    snd                    78855 19 snd_mixer_oss,snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
    soundcore              15091  1 snd
    snd_page_alloc         18529  2 snd_hda_intel,snd_pcm

    All I need is to mute the CD Audio channel or set its volume to 0, like I did in Windows 7, to get rid of the sibilant noise in the speakers.

    Read the article

  • Windows 8.1 Upgrade: I have to run everything as administrator now?

    - by Robert Dailey
    I was running Windows 8 x64 Professional before, and I never had to run programs as administrator to get them to function fully. Examples: Chrome, OpenVPN GUI. I always have my user in the local "Administrators" group and also disable UAC by putting its slider at the very bottom. This always did the trick. After the Windows 8.1 upgrade, I run into a few issues: running Chrome normally, the Chrome icon doesn't appear in the taskbar and Chrome won't run in the background, and OpenVPN GUI has errors when launching. Running both as administrator (right-click, Run as Administrator) fixes those issues and they run perfectly. What has changed in the Windows 8.1 upgrade to cause these "problems"? I'm an advanced user; I don't want to have to worry about administrative rights. Any advice on how to fix these problems? EDIT: I also get prompted for administrator permission to delete directories under "Program Files" now... that never used to happen before. I can hit Continue and it will allow me to delete, but it's just another symptom of the problem...

    Read the article

  • How do I lower the hardware volume? (volume too high)

    - by Zom-B
    I have a four-year-old Dell laptop with Windows XP Pro (modern ones unfortunately don't have a physical volume knob), and lately I'm using my Apple earphones, because they have much better low-frequency response than my $10 earphones. They also have the side effect of being much louder. To give an indication of my agony, for most tasks (movies, music, games) I have my main volume at 3 ticks: drag to 0 with the mouse and press the up key 3 times (the handle does not even rise 1 pixel), and my Wave volume at 50%. I notice that when I do this, I get a lot of digital noise, because I'm using just a tiny fraction of the 16-bit space. If I drag the Wave slider down until I barely hear the audio, it becomes really distorted and noisy, indicating that this is digital volume (in the DirectSound driver or something) and not hardware volume. I experimented in Audition. When I generate a tone of 1000 Hz at -50 dB (all Windows volumes at max), the volume is just below my pain threshold. When I zoom in to see how high the sample values reach, I see that just 8 of the 16 bits are used (about -100 ~ 100). When I generate such a tone at -80 dB (the minimum I can specify), I can still clearly hear it, although really noisy. When I zoom in, I see that just 3 out of 16 bits are used. I created a square wave tone that is just 1 bit high, and I can still hear it! For most uses this is not a big problem (audiophiles will disagree!), as I just have more noise than usual (about the same as old 8-bit hardware), but I'm also in the process of programming a hearing test program, in which case this problem is a death blow, as the test subjects will hear tones even at the bottom of the theoretical range (lowering the Windows volume is futile, see above). (I cannot update drivers, as Dell has discontinued XP support for my model.)
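
    The bit-depth arithmetic in the question checks out. A quick JavaScript sketch (illustrative only) reproduces the observations for 16-bit samples:

    // Peak sample value and bits used for a level given in dB relative to full scale.
    function bitsUsed(db) {
        var peak = Math.round(Math.pow(10, db / 20) * 32767);
        var bits = Math.ceil(Math.log(peak + 1) / Math.LN2) + 1; // +1 for the sign bit
        return { peak: peak, bits: bits };
    }

    bitsUsed(-50); // { peak: 104, bits: 8 } - matches "about -100 ~ 100"
    bitsUsed(-80); // { peak: 3,   bits: 3 } - matches "just 3 out of 16 bits"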

    Read the article

  • Windows 7 32bit resolution is limited for HDTV monitor

    - by Nick
    I have a small Magnavox HDTV that I am using to test a Frankenstein PC build. The goal is to eventually connect it to my old rear-projection HDTV, which supports 1080i via component input. The goal is also not to buy any more stuff; otherwise I will just buy a smart TV and be done. I have an ATI Radeon HD 3450 with a YPbPr component-out adapter. The monitor supports 1080p, but over analog component out it should only go up to 1080i. I have had this working with another setup. On this particular setup, I have Windows 7 32-bit with the latest 12.8 Catalyst drivers installed. The Windows splash screen starts in 480p, then switches to 480i when the login prompt is shown. When I try to change the resolution, 720x480 is the maximum value of the slider. I have also tried "List All Modes", and that also maxes out at 720x480. There are two options for this monitor in the devices section, Generic PnP Monitor and Generic Non-PnP Monitor. Neither setting fixes this. Any ideas on how to get 1080i?

    Read the article

  • My laptop screen keeps dimming

    - by Rowland
    I have a Cryo laptop with Windows 7 installed, bought in December 2011. Sometimes the screen seems to persistently dim and/or brighten, even as I am doing things; in fact the brightness is varying even as I type this. The battery is always fully charged and the laptop is connected to the mains. I have checked the battery/power-saving settings many times, always leaving them the same way: full brightness and never dimming when on the mains. Yet when the screen starts playing up, I can end up with the screen dimming and brightening almost continuously. I once went to the "adjust screen brightness" window when the screen had dimmed. I found the brightness slider at 100% (as I expected) but, as I dragged it to the left to dim the screen, it first brightened and then started dimming, i.e. the screen setting said 100% brightness but it was actually only at maybe 80%. I have checked with Cryo and they just say to check the power settings. I know what these are and how to work them, and I always set them to never dim/full brightness, yet still my laptop starts this dimming every so often.

    Read the article

  • How do I disable DirectDraw and Direct3D acceleration on Windows 8? [closed]

    - by Favourite Chigozie Onwuemene
    Some old games that I would really like to play run slowly on some graphics drivers when Direct3D acceleration is enabled. I have tried many suggestions but none seems to work. The only thing I have not tried is disabling Direct3D acceleration. Is it possible to disable DirectDraw and Direct3D acceleration on my Windows 8 PC? The advice I found reads as follows:
    "There are certain bad versions of GeForce drivers that cause this problem. This is a problem in the drivers themselves and is unfortunately completely outside our control. The recommended way to fix this problem is to update your graphics card drivers (go to NVIDIA's web site for this). Alternatively, there is a workaround that alleviates or solves the slowdown problem altogether. Try this:
    1. Right-click on your desktop and select "Properties".
    2. Go to the "Settings" tab and click on "Advanced...".
    3. Click on the "Troubleshooting" tab and move the slider to the left until it says that all DirectDraw and Direct3D accelerations have been disabled (around the middle of the range).
    4. Finally, click on "OK".
    Note that this workaround might cause other games on your computer to slow down, so you may have to switch back and forth between settings, but it's certainly worth a try if you can't obtain an updated graphics driver." (source: interactionstudios)

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500-frame 3D ray-traced animation. At the end of the presentation, when the animation is complete, I play the animation and delete the Azure deployment. The standing joke with the audience was that it was a "$2 demo": the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it's a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, Windows Azure can work out very cost effective. The "$2 demo" was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one-hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a "$2 demo" to a "$30 demo". The challenge was to create a visually appealing animation in high-definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.
    Ray Tracing
    Ray tracing, a technique for generating high-quality photorealistic images, gained popularity in the 90's with companies like Pixar creating feature-length computer animations, and also with the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray-traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.
    Pin-Board Toys
    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I've always liked the pin-board desktop toys that became popular in the 80's, and when I was working as a 3D animator back in the 90's I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.
    PolyRay
    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high-quality ray-traced image. The following is an example of a basic PolyRay scene file.
    background Midnight_Blue

    static define matte surface { ambient 0.1 diffuse 0.7 }
    define matte_white texture { matte { color white } }
    define matte_black texture { matte { color dark_slate_gray } }
    define position_cylindrical 3
    define lookup_sawtooth 1
    define light_wood <0.6, 0.24, 0.1>
    define median_wood <0.3, 0.12, 0.03>
    define dark_wood <0.05, 0.01, 0.005>

    define wooden texture {
        noise surface {
            ambient 0.2
            diffuse 0.7
            specular white, 0.5
            microfacet Reitz 10
            position_fn position_cylindrical
            position_scale 1
            lookup_fn lookup_sawtooth
            octaves 1
            turbulence 1
            color_map(
                [0.0, 0.2, light_wood, light_wood]
                [0.2, 0.3, light_wood, median_wood]
                [0.3, 0.4, median_wood, light_wood]
                [0.4, 0.7, light_wood, light_wood]
                [0.7, 0.8, light_wood, median_wood]
                [0.8, 0.9, median_wood, light_wood]
                [0.9, 1.0, light_wood, dark_wood])
        }
    }
    define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }
    define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
    define steely_blue texture { shiny { color black } }
    define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

    viewpoint {
        from <4.000, -1.000, 1.000>
        at <0.000, 0.000, 0.000>
        up <0, 1, 0>
        angle 60
        resolution 640, 480
        aspect 1.6
        image_format 0
    }

    light <-10, 30, 20>
    light <-10, 30, -20>

    object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
    object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
    object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

    After setting up the background and defining colors and textures, the viewpoint is specified. The "camera" is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin-board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.
    Modeling the Pin Board
    The frame of the pin-board is made up of three boxes and six cylinders. The front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

    object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
    object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
    object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
    object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

    In order to create the matrix of pins that make up the pin board, I used a basic console application with a few nested loops to create two intersecting matrices of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
    For the complete animation, 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used: the position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.
    Windows Kinect
    The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.
    Creating a Depth Field Animation
    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.
    Rendering a Test Frame
    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used, the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.
    Windows Azure Worker Roles
    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

    Number of Servers    Cost
    1                    $500
    16                   $8,000
    256                  $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

    Number of Worker Roles    Render Time    Cost
    1                         256 hours      $30.72
    16                        16 hours       $30.72
    256                       1 hour         $30.72

    Using worker roles in Windows Azure provides the same cost for the 256-hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.
    Creating a Render Farm in Windows Azure
    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:
    On-Premise
    • Windows Kinect – used in combination with the Kinect Explorer to create a stream of depth images.
    • Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
    • Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
    • Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.
    Windows Azure
    • Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.
    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
    The service definition for the worker role, with the local storage configuration highlighted, is shown below.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="CloudRay" >
      <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
        <Imports>
        </Imports>
        <ConfigurationSettings>
          <Setting name="DataConnectionString" />
        </ConfigurationSettings>
        <LocalResources>
          <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
        </LocalResources>
      </WorkerRole>
    </ServiceDefinition>

    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90's, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90's. Each worker role will use the following process to render the animation frames.
    1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
    2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
    3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
    4. The JPG file is uploaded from local storage to the images blob container.
    5. A message is placed on the images queue to indicate a new image is available for download.
    6. The job message is deleted from the job queue.
    7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.
    The code for this is shown below.

    public override void Run()
    {
        // Set environment variables
        string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
        string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

        LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
        string localStorageRootPath = rayStorage.RootPath;

        JobQueue jobQueue = new JobQueue("renderjobs");
        JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
        CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
        CloudRayBlob imageBlob = new CloudRayBlob("images");
        RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

        Frames = 0;

        while (true)
        {
            // Get the render job from the queue
            CloudQueueMessage jobMsg = jobQueue.Get();

            if (jobMsg != null)
            {
                // Get the file details
                string sceneFile = jobMsg.AsString;
                string tgaFile = sceneFile.Replace(".pi", ".tga");
                string jpgFile = sceneFile.Replace(".pi", ".jpg");

                string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                // Copy the scene file to local storage
                sceneBlob.DownloadFile(sceneFilePath);

                // Run the ray tracer.
                string polyrayArguments =
                    string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                Process polyRayProcess = new Process();
                polyRayProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                polyRayProcess.StartInfo.Arguments = polyrayArguments;
                polyRayProcess.Start();
                polyRayProcess.WaitForExit();

                // Convert the image
                string dtaArguments =
                    string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                Process dtaProcess = new Process();
                dtaProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                dtaProcess.StartInfo.Arguments = dtaArguments;
                dtaProcess.Start();
                dtaProcess.WaitForExit();

                // Upload the image to blob storage
                imageBlob.UploadFile(jpgFilePath);

                // Add a download job.
                downloadQueue.Add(jpgFile);

                // Delete the render job message
                jobQueue.Delete(jobMsg);

                Frames++;
            }
            else
            {
                Thread.Sleep(1000);
            }

            // Log the worker role activity.
            roleLifecycleDataSource.Alive
                ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
        }
    }

    Monitoring Worker Role Instance Lifecycle
    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

    public class RoleLifecycle : TableServiceEntity
    {
        public string ServerName { get; set; }
        public string Status { get; set; }
        public DateTime StartTime { get; set; }
        public DateTime EndTime { get; set; }
        public long SecondsRunning { get; set; }
        public DateTime LastActiveTime { get; set; }
        public int Frames { get; set; }
        public string Comment { get; set; }

        public RoleLifecycle()
        {
        }

        public RoleLifecycle(string roleName)
        {
            PartitionKey = roleName;
            RowKey = Utils.GetAscendingRowKey();
            Status = "Started";
            StartTime = DateTime.UtcNow;
            LastActiveTime = StartTime;
            EndTime = StartTime;
            SecondsRunning = 0;
            Frames = 0;
        }
    }

    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame, to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine how effectively the resources in the render farm are being used.
    Rendering the Animation
    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
      <Role name="CloudRayWorkerRole">
        <Instances count="16" />
        <ConfigurationSettings>
          <Setting name="DataConnectionString"
            value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames. Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation. With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal: the slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
      <Role name="CloudRayWorkerRole">
        <Instances count="256" />
        <ConfigurationSettings>
          <Setting name="DataConnectionString"
            value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds. We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes. The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I've uploaded the completed animation to YouTube: Pin Board Animation, created using Windows Kinect and 256 Windows Azure worker roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay Monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render, which works out at 152 hours of compute time, rounded up to the nearest hour. As usage for worker role instances is billed by the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application.

The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

- Windows Azure can provide massive compute power, on demand, in a matter of minutes.
- The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
- Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
- No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found the render farm a fairly simple scenario to implement on Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

- Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective (a sketch of this pattern follows the list). The design I have used in this scenario could easily scale to many thousands of worker role instances.
- Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
- Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed, so schedule the workload to start just after the clock hour has started.
- Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
- If you are deploying third-party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
- Third-party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a start-up task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
- Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!
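To make the queue pattern concrete, here is a minimal C# sketch of a worker role loop written against the Microsoft.WindowsAzure.StorageClient API of that era. This is not the CloudRay source: the queue name, the message format (a frame number) and the RenderFrame helper are assumptions for illustration.

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class RenderWorker
{
    public void Run()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        CloudQueue queue = account.CreateCloudQueueClient()
            .GetQueueReference("frames"); // assumed queue name
        queue.CreateIfNotExist();

        while (true)
        {
            // The message stays invisible to other workers while we render,
            // so each frame is processed by exactly one instance.
            CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(10));
            if (message == null)
            {
                Thread.Sleep(5000); // queue drained, or work not yet loaded
                continue;
            }

            int frameNumber = int.Parse(message.AsString);
            RenderFrame(frameNumber); // hypothetical: runs the ray tracer

            queue.DeleteMessage(message);
        }
    }

    private void RenderFrame(int frameNumber)
    {
        // Render the frame and upload the result to blob storage.
    }
}

Because workers only delete a message after rendering succeeds, a frame whose worker dies mid-render simply reappears on the queue once the visibility timeout expires, which is what makes this load balancing robust at scale.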


  • WPF zoom canvas and maintain scroll position

    - by Alex McBride
    I have a Canvas element, contained within a ScrollViewer, which I'm zooming using ScaleTransform. However, I want to be able to keep the scroll position of the viewer focused on the same part of the canvas after the zoom operation has finished. Currently when I zoom the canvas the scroll position of the viewer stays where it was and the place the user was viewing is lost. I'm still learning WPF, and I've been going backwards and forwards a bit on this, but I can't figure out a nice XAML based way to accomplish what I want. Any help in this matter would be greatly appreciated and would aid me in my learning process. Here is the kind of code I'm using...

    <Grid>
      <ScrollViewer Name="TrackScrollViewer" HorizontalScrollBarVisibility="Auto" VerticalScrollBarVisibility="Auto">
        <Canvas Width="2560" Height="2560" Name="TrackCanvas">
          <Canvas.LayoutTransform>
            <ScaleTransform ScaleX="{Binding ElementName=ZoomSlider, Path=Value}"
                            ScaleY="{Binding ElementName=ZoomSlider, Path=Value}"/>
          </Canvas.LayoutTransform>
          <!-- Some complex geometry describing a motor racing circuit -->
        </Canvas>
      </ScrollViewer>
      <StackPanel Orientation="Horizontal" Margin="8" VerticalAlignment="Top" HorizontalAlignment="Left">
        <Slider Name="ZoomSlider" Width="80" Minimum="0.1" Maximum="10" Value="1"/>
        <TextBlock Margin="4,0,0,0" VerticalAlignment="Center"
                   Text="{Binding ElementName=ZoomSlider, Path=Value, StringFormat=F1}"/>
      </StackPanel>
    </Grid>
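    One approach (a sketch, not from the question) is to record which canvas point sits at the centre of the viewport before the zoom changes, then scroll so the same point is centred afterwards. This needs code-behind rather than pure XAML; the handler below assumes the slider's ValueChanged event is wired to it, and the _lastZoom field is an assumption for illustration.

    private double _lastZoom = 1.0;

    private void ZoomSlider_ValueChanged(object sender,
        RoutedPropertyChangedEventArgs<double> e)
    {
        if (TrackScrollViewer == null) return; // can fire during InitializeComponent

        double newZoom = e.NewValue;

        // Canvas point (in canvas coordinates) currently at the viewport centre.
        double centreX = (TrackScrollViewer.HorizontalOffset +
                          TrackScrollViewer.ViewportWidth / 2) / _lastZoom;
        double centreY = (TrackScrollViewer.VerticalOffset +
                          TrackScrollViewer.ViewportHeight / 2) / _lastZoom;

        // After the LayoutTransform rescales the canvas, scroll so the same
        // canvas point is back at the viewport centre.
        TrackScrollViewer.ScrollToHorizontalOffset(
            centreX * newZoom - TrackScrollViewer.ViewportWidth / 2);
        TrackScrollViewer.ScrollToVerticalOffset(
            centreY * newZoom - TrackScrollViewer.ViewportHeight / 2);

        _lastZoom = newZoom;
    }

    Because the new scale is applied on the next layout pass, the ScrollTo calls may need to be deferred (for example with Dispatcher.BeginInvoke) so they are clamped against the new extent rather than the old one.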


  • how to loop through menu and remove class and then add class to current menu item

    - by Jonathan Lyon
    Hi all, I have this menu structure that is used to navigate through content slider panels:

    <div id="menu">
      <ul>
        <li><a href="#1" class="cross-link highlight">Bliss Fine Foods</a></li>
        <li><a href="#2" class="cross-link">Menus</a></li>
        <li><a href="#3" class="cross-link">Wines</a></li>
        <li><a href="#4" class="cross-link">News</a></li>
        <li><a href="#5" class="cross-link">Contact Us</a></li>
      </ul>
    </div>

    I would like to loop through these elements, remove the highlight class, and then add the highlight class to the current / last-clicked menu item. Any ideas? Any help would be greatly appreciated. Thanks, Jonathan


  • iPhone accelerometer moves the image even on a flat surface

    - by Amaresh
    I have an imageView in the view. It moves even if the iPhone has been still for some time. Why is that? Also, the image does not respond quickly to the movement of the iPhone. Here is my code written for this; I have also set the updateInterval and delegate for the accelerometer.

    #define kVelocityMultiplier 1000;

    - (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
    {
        if (currentPoint.x < 0) {
            currentPoint.x = 0;
            ballXVelocity = 0;
        }
        if (currentPoint.x > 480 - sliderWidth) {
            currentPoint.x = 480 - sliderWidth;
            ballXVelocity = 0;
        }
        static NSDate *lastDrawTime;
        if (currentPoint.x <= 480 - sliderWidth && currentPoint.x >= 0) {
            if (lastDrawTime != nil) {
                NSTimeInterval secondsSinceLastDraw = -([lastDrawTime timeIntervalSinceNow]);
                ballXVelocity = ballXVelocity + -acceleration.y * secondsSinceLastDraw;
                CGFloat xAcceleration = secondsSinceLastDraw * ballXVelocity * kVelocityMultiplier;
                currentPoint = CGPointMake(currentPoint.x + xAcceleration, 266);
            }
            slider.frame = CGRectMake(currentPoint.x, currentPoint.y, sliderWidth, 10);
        }
        [lastDrawTime release];
        lastDrawTime = [[NSDate alloc] init];
    }

    Can anyone help me out, please?


  • Perf: Viewing thousands of images in Silverlight 3 on a 3D Wall

    - by Bob Holland
    I currently work on a very cool Silverlight app that displays photos in a 3D wall space, like the Wall3D demo that is thrown in with Blend 3. The problem I am currently facing is performance. The app works like this:

    - As you scroll right or left, the 3D photo wall rotates.
    - As each movement is made, the next column of photos is downloaded, decoded into a BitmapImage and thrown into a 3D wall node.

    As you can imagine, users (if you let them) will want to flip through the photos really quickly, but the problem I have is I cannot display the photos quickly enough. In most cases it's a beautiful app that works really well, but when an album contains over 300 photos, you can imagine the sort of memory taken up by all the BitmapImage classes and how moving the slider can jump from photo 20 to photo 120 in a second. Of course we have algorithms in place to not download every photo in between, but I still can't work out a fast way to get the photos displayed. It may be a case that we need to throw away the 'great for show' 3D wall and go to a flat DeepZoom-like wall, like the Playboy archive one that Vertigo did. Still not sure, let me know your thoughts. P.S. We are using Kit3D for all the 3D work; it's using PerspectiveCamera, Model3DGroup, ModelVisual3D, RotateTransform3D & TranslateTransform3D. Cheers, Bob.
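    One way to bound the memory held by decoded bitmaps (a sketch under assumptions, not from the original app) is a small least-recently-used cache keyed by wall column, so only a window of columns keeps live BitmapImage objects while off-screen ones become collectable. The column key and the load callback are assumptions for illustration.

    using System;
    using System.Collections.Generic;
    using System.Windows.Media.Imaging;

    public class BitmapCache
    {
        private readonly int _capacity;
        private readonly Dictionary<int, BitmapImage> _cache = new Dictionary<int, BitmapImage>();
        private readonly LinkedList<int> _order = new LinkedList<int>(); // most recent first

        public BitmapCache(int capacity) { _capacity = capacity; }

        public BitmapImage Get(int column, Func<int, BitmapImage> load)
        {
            BitmapImage bitmap;
            if (_cache.TryGetValue(column, out bitmap))
            {
                // Mark this column as most recently used.
                _order.Remove(column);
                _order.AddFirst(column);
                return bitmap;
            }

            bitmap = load(column); // download and decode on demand
            _cache[column] = bitmap;
            _order.AddFirst(column);

            // Evict the least recently used column once over capacity,
            // letting the GC reclaim its decoded pixels.
            if (_cache.Count > _capacity)
            {
                int oldest = _order.Last.Value;
                _order.RemoveLast();
                _cache.Remove(oldest);
            }
            return bitmap;
        }
    }

    Combined with the existing skip-ahead download logic, this keeps memory roughly proportional to the visible window rather than to the album size.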


  • Bluetooth development on Windows Mobile 6 in C#

    - by cheesebunz
    Hi everyone, I recently started on a project which is a puzzle slider game. This application will be using Bluetooth, and I'm working on the mobile, a Samsung Omnia i900. This is how my application will work: any user with this Samsung device plays the game and starts sliding the tiles. There is an option to search and connect to other users with the same device and application, so that they can solve the puzzle together. Right now I'm working on the Bluetooth part, but am still new to the API. I'm using the 32feet.NET (InTheHand.Net.Personal) class library while encountering many difficulties. I am able to search for devices by using:

    private void btnSearch_Click(object sender, EventArgs e)
    {
        BluetoothRadio.PrimaryRadio.Mode = RadioMode.Discoverable;
        BluetoothRadio myRadio = BluetoothRadio.PrimaryRadio;
        lblSearch.Text = "" + myRadio.LocalAddress.ToString();
        bluetoothClient = new BluetoothClient();
        Cursor.Current = Cursors.WaitCursor;
        BluetoothDeviceInfo[] bluetoothDeviceInfo = { };
        bluetoothDeviceInfo = bluetoothClient.DiscoverDevices(10);
        comboBox1.DataSource = bluetoothDeviceInfo;
        comboBox1.DisplayMember = "DeviceName";
        comboBox1.ValueMember = "DeviceAddress";
        comboBox1.Focus();
        Cursor.Current = Cursors.Default;
    }

    Well, to be honest this is a rip-off from some sources I found on the internet, but I do understand this part. Next I went on to trying to send a simple "testing.txt" file, and I'm stuck at it. I think I will be using something like OBEX and ObexWebRequest, ObexWebResponse, Uri etc. Could anyone explain these in simple terms for me, so that I could continue with pairing etc. in Bluetooth development? Sorry for making it this long; I really appreciate it if you read this far :). Hope alanM sees this :) as I'm using their Bluetooth library.
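    For the file-push part, 32feet.NET exposes OBEX through ObexWebRequest and ObexWebResponse, which follow the familiar WebRequest pattern. Below is a hedged sketch: the device address and local file path are placeholders (in the app the address would come from the BluetoothDeviceInfo selected in comboBox1), and the exact obex:// URI formatting is an assumption worth checking against the library docs.

    using System;
    using System.IO;
    using InTheHand.Net;

    private void SendTestFile(BluetoothAddress address)
    {
        // obex:// takes the 12-hex-digit device address and the remote file name.
        Uri uri = new Uri("obex://" + address.ToString("N") + "/testing.txt");
        ObexWebRequest request = new ObexWebRequest(uri);

        using (Stream requestStream = request.GetRequestStream())
        {
            byte[] fileBytes = File.ReadAllBytes(@"\My Documents\testing.txt");
            requestStream.Write(fileBytes, 0, fileBytes.Length);
        }

        ObexWebResponse response = (ObexWebResponse)request.GetResponse();
        // StatusCode reports whether the receiving device accepted the file.
        lblSearch.Text = "Send status: " + response.StatusCode;
        response.Close();
    }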


  • UIScrollView without paging but with touchesmoved

    - by BittenApple
    I have a UIScrollView that I use to display PDF pages in. I don't want to use paging (yet). I want to display content based on the touchesMoved event (so after a horizontal swipe). This works (sort of), but instead of catching a single swipe and showing one page, the swipe seems to get broken into hundreds of pieces, and one swipe acts as if you're moving a slider! I have no clue what I'm doing wrong. Here's the experimental "display next page" code, which works on single taps:

    - (void)nacrtajNovuStranicu:(CGContextRef)myContext
    {
        CGContextTranslateCTM(myContext, 0.0, self.bounds.size.height);
        CGContextScaleCTM(myContext, 1.0, -1.0);
        CGContextSetRGBFillColor(myContext, 255, 255, 255, 1);
        CGContextFillRect(myContext, CGRectMake(0, 0, 320, 412));
        size_t brojStranica = CGPDFDocumentGetNumberOfPages(pdfFajl);
        if (pageNumber < brojStranica) {
            pageNumber++;
        } else {
            // End of the PDF file; don't page any further.
        }
        CGPDFPageRef page = CGPDFDocumentGetPage(pdfFajl, pageNumber);
        CGContextSaveGState(myContext);
        CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, self.bounds, 0, true);
        CGContextConcatCTM(myContext, pdfTransform);
        CGContextDrawPDFPage(myContext, page);
        CGContextRestoreGState(myContext);
        // Refresh the display.
        [self setNeedsDisplay];
    }

    Here's the swiping code:

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
    {
        [super touchesBegan:touches withEvent:event];
        UITouch *touch = [touches anyObject];
        gestureStartPoint = [touch locationInView:self];
    }

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
    {
        UITouch *touch = [touches anyObject];
        CGPoint currentPosition = [touch locationInView:self];
        CGFloat deltaX = fabsf(gestureStartPoint.x - currentPosition.x);
        CGFloat deltaY = fabsf(gestureStartPoint.y - currentPosition.y);
        if (deltaX >= kMinimumGestureLength && deltaY <= kMaximumVariance) {
            [self nacrtajNovuStranicu:(CGContextRef)UIGraphicsGetCurrentContext()];
        }
    }

    The code sits in a UIView which displays the PDF content. Perhaps I should place it into the UIScrollView, or is the "display next page" code wrong?

