Search Results

Search found 11448 results on 458 pages for 'video camera'.


  • Logitech C510 HD Webcam related question

    - by Ashfame
    I am going to buy a Logitech C510 HD webcam, and other questions here on AskUbuntu say it works out of the box with Cheese. My question is: will any functionality I want be limited? I would like to use it with everything - Skype, Gtalk video chat, Facebook, YouTube, etc. - and I would like the ability to record or make video calls at a lower resolution (it's a 720p camera). Also, since I read that 720p needs a Core2Duo at 2.2GHz but mine is 2.0GHz, would it be possible to record first and encode afterwards if my processor really starts struggling with on-the-fly encoding? Anything else I should consider? I also have an ATI HD 4850 512MB card - can it help with on-the-fly encoding, or is there a chance my graphics card alone can handle it and those specs were just for a system without a graphics card? I believe so. I have no problem working in a terminal if some of the above has to be done from the console. Other possibly significant details: I have a dual-screen setup, 29" (1360x768) and 22" (1680x1050), which might draw some power from the GPU, and I have 2GB of DDR2 800MHz RAM.
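    Since terminal use is fine, here is a minimal record-now, encode-later sketch with ffmpeg, assuming the camera shows up as /dev/video0 and can deliver MJPEG (typical for Logitech HD webcams); grabbing the camera's own MJPEG stream keeps CPU load low, and the heavy H.264 encode can run afterwards:

        # grab the camera's MJPEG stream without re-encoding (light on the CPU)
        ffmpeg -f v4l2 -input_format mjpeg -video_size 1280x720 -framerate 30 -i /dev/video0 -c:v copy capture.mkv
        # transcode to H.264 later, when encoding speed no longer matters
        ffmpeg -i capture.mkv -c:v libx264 -preset medium -crf 23 output.mp4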

    Read the article

  • Banshee doesn't like opening websites

    - by Allan
    I have come across two bugs (which will be added to Launchpad if they are not resolved here). When I open any of the websites in Banshee (Amazon or the Miro Guide), Banshee crashes as soon as the site finishes loading. If I play any video, local or remote, it shows one frame (maybe 0.5 s of video), then I get a black screen while the audio continues in the background. Specs and details: I have a Fujitsu Amilo 1718 laptop with 2 GB of RAM (originally 1 GB); graphics are provided by an ATI Radeon Xpress 200M (don't laugh, it works with Compiz... just). I have a link to the output of banshee --debug here. Don't have time to read it? Here are the highlights:
        [2 Warn 11:52:34.814] Caught an exception - System.ArgumentNullException: Argument cannot be null.
    then a bit later:
        Debug info from gdb: Could not attach to process. If your uid matches the uid of the target process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf ptrace: Operation not permitted.
        =================================================================
        Got a SIGSEGV while executing native code. This usually indicates a fatal error in the mono runtime or one of the native libraries used by your application.
        =================================================================
        Aborted
    Not music to my ears, as you can expect. The version I am using is 1.9.4 from the daily PPA, but these bugs happen in any version of Banshee from 1.8.1 and up. So if anyone has come across a fix for this problem, please share! Additional info: both VLC and Miro work on my system, so there isn't a system-wide problem with video. And I haven't complained about Mono, so no trolling - it will get voted down.
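    The gdb "Could not attach to process" message in the log is Ubuntu's Yama ptrace restriction, not the crash itself. A minimal sketch for getting a usable native backtrace, assuming you are happy to relax that restriction temporarily (the sysctl path comes straight from the log above):

        # allow gdb to attach to non-child processes for this session only
        echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
        # reproduce the crash under --debug so the SIGSEGV backtrace is captured
        banshee --debug
        # restore the default restriction afterwards
        echo 1 | sudo tee /proc/sys/kernel/yama/ptrace_scope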

    Read the article

  • Second Monitor Detected, but not receiving a signal after upgrading to 12.04

    - by user62458
    After I upgraded to 12.04, my second monitor is detected (in Display settings) but will not power on. I have scoured the Internet and the forums for a solution and can't find anything. I have found a couple of people with the same problem, but never a solution. I am no expert, but I'm certainly not a noob. My computer uses AMD Radeon 6250 graphics, but I do NOT want to use the proprietary graphics drivers. They refuse to work properly with my second monitor (the ATI drivers will only mirror the screens; I've done everything to try to fix that, and I DON'T want mirrored screens). Not to mention that the default open-source video drivers seem to work much better than the proprietary ones anyway! Again, Ubuntu's default video drivers work fine, and they even DETECT the second monitor (a 19" Dell). I can drag things off the screen onto the 'space' of the second monitor, and even a screenshot shows that two monitors are active; but the monitor is OFF. It will not power on. It goes into 'power-save' mode because it is not receiving a signal. For some reason it is not getting the signal to power on, even though Ubuntu thinks the monitor is working properly. I had this working fine on my Sony VAIO yesterday (with Radeon graphics and the default Ubuntu video drivers). I upgraded to a Samsung Series 3 and now I have this issue. I can't for the life of me figure out why the monitor is connected, detected, and given screen space, yet the screen won't turn on! xrandr output:
        Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192
        VGA-0 connected (normal left inverted right x axis y axis)
           1440x900   59.9 +  75.0
           1280x1024  75.0    60.0
           1152x864   75.0
           1024x768   75.1    70.1    60.0
           832x624    74.6
           800x600    72.2    75.0    60.3    56.2
           640x480    72.8    75.0    66.7    60.0
           720x400    70.1
        LVDS connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 194mm
           1366x768   60.1*+
           1280x720   59.9
           1152x768   59.8
           1024x768   59.9
           800x600    59.9
           848x480    59.7
           720x480    59.7
           640x480    59.4
        HDMI-0 disconnected (normal left inverted right x axis y axis)
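    A minimal sketch for forcing the external output on from a terminal, using the output names that appear in the xrandr listing above (VGA-0 for the Dell, LVDS for the laptop panel); the specific mode is an assumption taken from the listed 1440x900 preferred mode:

        # drive the Dell explicitly, placed to the right of the laptop panel
        xrandr --output VGA-0 --mode 1440x900 --rate 59.9 --right-of LVDS
        # if that misbehaves, fall back to automatic mode selection
        xrandr --output VGA-0 --auto --right-of LVDS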

    Read the article

  • Creating Ubuntu Browser App Frames

    - by user73006
    After watching the video I was inspired to create a browser, but I am stuck at one point; could you please help me with this? Requirement: like you showed in your video, I want to create multiple buttons in my toolbar, and each button should open a second toolbar or a popup window. From that popup window I then want to select a specific button which opens the browser view I need. Question: as shown in your video, I created a new button, and if I use it to open a new link it works. But now I want a second toolbar or popup window to appear when someone clicks that button - how can I do that? The second toolbar should only become active after clicking that button. Things I tried: as far as I understood, I created a second toolbar and placed a button on it; now I want to know how to link that toolbar to the button on my browser toolbar. I tried passing a signal property to the second toolbar in Quickly, but something is missing. My code:

        class TvbrowserWindow(Window):
            gtype_name = "TvbrowserWindow"

            def finish_initializing(self, builder): # pylint: disable=E1002
                """Set up the main window"""
                super(TvbrowserWindow, self).finish_initializing(builder)
                self.AboutDialog = AboutTvbrowserDialog
                self.PreferencesDialog = PreferencesTvbrowserDialog
                # Code for other initialization actions should be added here.
                self.refreshbutton = self.builder.get_object("refreshbutton")
                self.SONY = self.builder.get_object("SONY")
                self.urlentry = self.builder.get_object("urlentry")
                self.scrolledwindow1 = self.builder.get_object("scrolledwindow1")
                self.webview = WebKit.WebView()
                self.scrolledwindow1.add(self.webview)
                self.webview.show()

            def on_refreshbutton_clicked(self, widget):
                print "refresh"

            def on_urlentry_activate(self, widget):
                url = widget.get_text()
                print url
                self.webview.open(url)

    Read the article

  • Sweden Windows Azure Group Meeting in November & Fast with Windows Azure Competition

    - by Alan Smith
    SWAG November Meeting There will be a Sweden Windows Azure Group (SWAG) meeting in Stockholm on Monday 19th November. Chris Klug will be presenting a session on Windows Azure Mobile Services, and I will be presenting a session on Web Site Authentication with Social Identity Providers. Active Solution have been kind enough to host the event and will be providing food and refreshments. The registration link is here: http://swag14.eventbrite.com If you would like to join SWAG, the link is here: http://swagmembership.eventbrite.com Fast with Windows Azure Competition I’ve entered a 3-minute video of rendering a 3D animation using 256 Windows Azure worker roles in the “Fast with Windows Azure” competition. This is the last week of voting, so it would be great if you could check out the video and vote for it if you like it. I have not driven a car for about 15 years, so if I win you can expect a hilarious summary of the track day in Vegas. My preparation for the day would be to play Project Gotham Racing for a weekend and watch a lot of Top Gear. My video is “Rapid Massive On-Demand Scalability Makes Me Fast!”. The link is here: http://www.meetwindowsazure.com/fast/

    Read the article

  • Graphical glitches on grub and ubuntu desktop

    - by Klyn
    I've decided to install Ubuntu, but neither Ubuntu nor any other Linux distro will even get to the desktop screen, or work after getting there. On Windows 8 everything is fine: my new video card works perfectly and I have no problem with it. When I try to boot Ubuntu with Wubi or from USB, it goes like this: 1) GRUB screen - no problem at all, the colors are fine and everything looks okay. 2) Then the Linux boot screen - weird background color, with vertical stripes of red-orange dots over the background, but no dots at all on the Ubuntu logo and text (its shape is perfect). 3) The desktop is about to start, but vertical stripes of red dots cover the whole Unity screen. Then, when I click on Ubuntu's menu, it usually switches to a black screen saying something about a "panic occurred", and then it either restarts or stops responding entirely. The problems started after putting an HD 6570 video card into my Asus M5A78L-M LX motherboard, which has an AMD Phenom II X4 processor. I've searched for something similar but found no matching question, which is why I'm almost sure this is somewhat unique. Again, I'm writing this on Windows 8 right now and everything works and looks perfect. So far I've updated the BIOS. Does anyone know anything that might solve this?
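    A common first diagnostic when corruption appears only once the kernel's own graphics driver takes over (GRUB looks fine, the boot splash and Unity are garbled) is to boot once with kernel mode setting disabled. This is a generic troubleshooting sketch, not something from the original post:

        # at the GRUB menu, press 'e', add nomodeset to the end of the line starting with "linux", then press F10 to boot
        # if the desktop then comes up clean, make it permanent on an installed system by editing /etc/default/grub:
        #     GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
        sudo update-grub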

    Read the article

  • Dependency Inversion Principle

    - by Chris Paine
    I have also been studying S.O.L.I.D. and watched this video: https://www.youtube.com/watch?v=huEEkx5P5Hs At 01:45:30 into the video he talks about the Dependency Inversion Principle, and I am scratching my head. I had to simplify it (if possible) to get it through this thick skull of mine, and here is what I came up with. The code marked My_modified_code is my version; the code marked Original is the version from the DIP video. Can I accomplish the same thing with my simplified code? Thanks in advance.

    Original:

        namespace simple.main
        {
            class main
            {
                static void Main()
                {
                    FirstClass FirstClass = new FirstClass(new OtherClass());
                    FirstClass.Method();
                    Console.ReadKey();
                    //tempClass temp = new OtherClass();
                    //temp.Method();
                }
            }

            public class FirstClass
            {
                private tempClass _LastClass;

                public FirstClass(tempClass tempClass) //ctor
                {
                    _LastClass = tempClass;
                }

                public void Method()
                {
                    _LastClass.Method();
                }
            }

            public abstract class tempClass { public abstract void Method(); }

            public class LASTCLASS : tempClass
            {
                public override void Method() { Console.WriteLine("\nHello World!"); }
            }

            public class OtherClass : tempClass
            {
                public override void Method() { Console.WriteLine("\nOther World!"); }
            }
        }

    My_modified_code:

        namespace simple.main
        {
            class main
            {
                static void Main()
                {
                    //FirstClass FirstClass = new FirstClass(new OtherClass());
                    //FirstClass.Method();
                    //Console.ReadKey();
                    tempClass temp = new OtherClass();
                    temp.Method();
                }
            }

            //public class FirstClass
            //{
            //    private tempClass _LastClass;
            //    public FirstClass(tempClass tempClass) //ctor
            //    {
            //        _LastClass = tempClass;
            //    }
            //    public void Method()
            //    {
            //        _LastClass.Method();
            //    }
            //}

            public abstract class tempClass { public abstract void Method(); }

            public class LASTCLASS : tempClass
            {
                public override void Method() { Console.WriteLine("\nHello World!"); }
            }

            public class OtherClass : tempClass
            {
                public override void Method() { Console.WriteLine("\nOther World!"); }
            }
        }

    Read the article

  • Weird Screen while booting to install, while installing and after the install...and then the "panic occured" error

    - by Klyn
    I've decided to install Ubuntu, but neither Ubuntu nor any other Linux distro will even get to the desktop screen, or work after getting there. On Windows 8 everything is fine: my new video card works perfectly and I have no problem with it. When I try to boot Ubuntu with Wubi or from USB, it goes like this: 1) GRUB screen - no problem at all, the colors are fine and everything looks okay. 2) Then the Linux boot screen - weird background color, with vertical stripes of red-orange dots over the background, but no dots at all on the Ubuntu logo and text (its shape is perfect). 3) The desktop is about to start, but vertical stripes of red dots cover the whole Unity screen. Then, when I click on Ubuntu's menu, it usually switches to a black screen saying something about a "panic occurred", and then it either restarts or stops responding entirely. The problems started after putting an HD 6570 video card into my Asus M5A78L-M LX motherboard, which has an AMD Phenom II X4 processor. I've searched for something similar but found no matching question, which is why I'm almost sure this is somewhat unique. Again, I'm writing this on Windows 8 right now and everything works and looks perfect. So far I've updated the BIOS. Does anyone know anything that might solve this?

    Read the article

  • Java EE Basic Training with Yakov Fain

    - by reza_rahman
    Those of us that have been around Java/Java EE for a little while sometimes tend to forget that Java is still an ever expanding ecosystem with many newcomers. Fortunately, not everyone misses this perspective, including well-respected Java veteran Yakov Fain. Yakov recently started a free online video tutorial series focused on Java and Java EE for absolute beginners. The first few parts of the series focused on Java SE but now Yakov is beginning to cover the very basics of Java EE. In a recent video he covered: The basics of the JCP, JSRs and Java EE How to get started with GlassFish 4 The basics of Servlets Developing Java EE/Servlets using Eclipse and GlassFish The excellent video is posted below. The slides for the tutorial series generally are available here. If there are folks you know that would benefit from this content, please do pass on word. Even if you are an experienced developer, it sometimes helps to sit back and review the basics... It's quite remarkable that someone of Yakov's stature took the time out to create content for absolute beginners. For those unaware, Yakov is one of the earliest Java champions and one would be very hard pressed to match his many contributions to the Java community. The tutorial demonstrates his continued passion, commitment and humility.

    Read the article

  • Old Fglrx Driver - AMD Radeon HD 3200 - ubuntu won't start

    - by Yohannes
    I've been using Ubuntu 12.04 64-bit for about two weeks now, and I installed the latest fglrx driver (graphics card: AMD HD 3200; PC: Acer Aspire 5336, 4GB RAM, 500GB hard drive). The problem is that sometimes videos lag and play out of sync, and sometimes windows take a long time to show up after I've clicked them, etc. After looking around I found a video on YouTube by the Ubuntu help guy, and in the video he recommended using an older driver if you have an older graphics card. His card was about 4 years old (same as mine) and he used the Catalyst 11.10 driver, so I decided to try it. I removed the previous installation of the driver and then installed the 11.10 driver. However, when I restarted, instead of going to the GUI it drops to a terminal-like screen and asks for my login. Now it's pretty clear I need to remove the old driver and go back to using the latest one. The only problem is that I'm not sure where I saved the latest driver, and in order to connect to the Internet I need to change /etc/resolv.conf (I use a static IP). So what should I do? Also, from personal experience, which proprietary driver version works best with my graphics card? Thanks
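    A minimal recovery sketch that can be run entirely from that login console, assuming the broken driver was installed through the package system; if it came from AMD's .run installer instead, its own uninstall script under /usr/share/ati would be the counterpart step:

        # remove any fglrx packages and fall back to the open-source radeon driver
        sudo apt-get remove --purge fglrx fglrx-amdcccle fglrx-dev xorg-driver-fglrx
        sudo apt-get install --reinstall xserver-xorg-video-ati libgl1-mesa-glx libgl1-mesa-dri
        # discard any fglrx-generated X configuration, then restart the display manager
        sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.fglrx.bak
        sudo service lightdm restart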

    Read the article

  • JavaOne Content Available for Free

    - by Tori Wieldt
    JavaOne content is available in video in three sizes, depending on whether you want to have a sip, have a drink, or go to the proverbial firehose. Tall (Keynote Highlights) Go to the JavaOne playlist on the YouTube Java channel for highlights of the JavaOne keynotes. Grande (Keynotes in Full) Go to the Oracle Media Network JavaOne 2012 channel to view the keynotes in full (Community Keynote coming soon). Venti (All Sessions, BOFs and Tutorials) To view slides paired with audio for each session, go to the JavaOne content catalog (JavaOne homepage, click on JavaOne Technical Sessions) and select a session. If a video is available, you'll see "Media" in the right column. Look under "Presentation Download" to get the slides. Sessions are being made available as quickly as possible. "It's exciting to see Oracle take community stewardship so seriously," said Sharat Chander, Group Director for Java Technology Outreach. "Making all JavaOne sessions on video available online for free will help make the future of Java for everyone." Thanks to Oracle for funding this and providing it to the Java community for free.

    Read the article

  • Garage Sale Code – Everything must go!

    - by mbcrump
    Garage Sale Code The term “Garage Sale Code” came from a post by Scott Hanselman. He defines Garage Sale Code as: Complete – It’s a whole library or application. Concise – It does one discrete thing. Clear – It’ll work when you get it. Cheap – It’s free or < 25 cents. (Quite Possibly) Crap – As with a garage sale, you’ll never know until you get it home if it’s useless. With the code I’ve posted here, you’ll get all 5 of those things (with an emphasis on crap). All of the projects listed below are available on CodePlex with full source code and executables (for those that just want to run them). I plan on keeping this page updated when I complete projects that benefit the community. You can always find this page again by swinging by http://garagesale.michaelcrump.net or you can keep on driving and find another sale.
        - WPF Alphabet (C#, WPF) – An application that I created to help my child learn the alphabet. It displays each letter and pronounces it using speech synthesis. It was developed using WPF and C# in about 3 hours (so it’s kinda rough).
        - Windows 7 Playlist Generator (C#, WinForms) – Quickly creates wvx video playlists for Windows Media Center. This functionality is not included in WMC and is useful if you want to play video files back to back without selecting the next file. It is also useful to queue up video files to keep children occupied!
        - Windows 7 Automatic Playlist Creator (C#, WinForms, Console) – Creates WMC playlists automatically whenever you want. You can have the playlist sorted alphabetically, by creation date, or randomly.
        - Generator Twitter Message for Live Writer (C#, Live Writer API) – A plug-in for Windows Live Writer that generates a Twitter message with your blog post name and a TinyURL link to the post, automatically, after you publish your post.

    Read the article

  • how to congest a link using iperf

    - by navaz
    I have a setup like below:

        Switch1 -------------------- Switch2
          |     |                      |     |
         PC1   PC2                    PC3   PC4

    Video traffic flows between PC1 and PC4. I have configured PC2 as an iperf server (iperf -s) and PC3 as a client (iperf -c 10.10.10.2 -P 20 -t 10000, where 10.10.10.2 is PC2's IP). Now most of the traffic on the Switch1-Switch2 link is iperf (TCP); from the logs I have observed that only about 1 packet in 300 is UDP. Still, I am not seeing any difference in the quality of the video streaming on PC4 - it looks the same as with no iperf running. I am checking QoS. I have tried many options with iperf but couldn't succeed. I want to degrade the quality of the video streaming on PC4. Could you please tell me what options can be used with iperf to do that? The bandwidth between Switch1 and Switch2 is 1 Gbit/s. Thanks in advance
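    A minimal sketch of one way to actually saturate the 1 Gbit/s inter-switch link: TCP backs off under congestion, so switching the iperf client to UDP at a fixed rate near line speed is the usual approach. The 950 Mbit/s figure below is an assumption to tune, not a value from the original post:

        # server side (PC2): accept UDP instead of TCP
        iperf -s -u
        # client side (PC3): push roughly 950 Mbit/s of UDP across the Switch1-Switch2 link
        iperf -c 10.10.10.2 -u -b 950M -t 10000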

    Read the article

  • Ubuntu 13.10. After login, no desktop displayed. Two Nvidia Graphics Cards, Four Monitors

    - by jmerkow
    I am working on an issue with my Ubuntu 13.10 installation. I am attempting to get 4 monitors up and running but I am having some trouble. So far, I installed and updated to the latest NVIDIA drivers (331.20). Initially X would not start (after installation) so I replaced my xorg.conf with xorg.conf.failsafe. This fixed that problem, but then I tried to enable the other 2 monitors (other video card) and xorg fails to start once again (after I login there is no desktop). I am fairly new to linux but I am not a complete beginner, but I'm not comfortable poking around too much on my own to troubleshoot yet.... lspci -nn | grep VGA: 03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF110 [GeForce GTX 570 Rev. 2] [10de:1086] (rev a1) 05:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF110 [GeForce GTX 580] [10de:1080] (rev a1) It seems that the nvidia-settings tool does not result in a good xorg.conf file. Here it is: # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 331.20 (buildmeister@swio-display-x86-rhel47-05) Wed Oct 30 18:20:32 PDT 2013 Section "ServerLayout" Identifier "Default Layout" Screen 0 "Screen0" 0 0 Screen 1 "Screen1" RightOf "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "1" EndSection ... Section "Monitor" Identifier "Configured Monitor" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "SHARP HDMI" HorizSync 15.0 - 68.0 VertRefresh 55.0 - 76.0 EndSection Section "Monitor" Identifier "Monitor1" VendorName "Unknown" ModelName "Samsung SyncMaster" HorizSync 0.0 - 0.0 VertRefresh 0.0 EndSection Section "Device" Identifier "Configured Video Device" Driver "vesa" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 570" BusID "PCI:3:0:0" EndSection Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 580" BusID "PCI:5:0:0" EndSection Section "Screen" Identifier "Default Screen" Device "Configured Video Device" Monitor "Configured Monitor" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "Stereo" "0" Option "nvidiaXineramaInfoOrder" "DFP-1" Option "metamodes" "HDMI-0: nvidia-auto-select +640+0, DVI-I-3: nvidia-auto-select +0+1080" Option "SLI" "Off" Option "MultiGPU" "Off" Option "BaseMosaic" "off" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "DVI-I-2: nvidia-auto-select +0+0" Option "SLI" "Off" Option "MultiGPU" "Off" Option "BaseMosaic" "off" SubSection "Display" Depth 24 EndSubSection EndSection Section "Extensions" Option "Composite" "Disable" EndSection

    Read the article

  • No Unity after ubuntu 12.10 upgrade

    - by Aivaras
    After I upgraded to Ubuntu 12.10 from 12.04, I got low-graphics mode and no Unity - just the mouse pointer and the wallpaper. So I opened a terminal via Ctrl+Alt+T, launched Chrome, and searched for a solution. As a result, I tried this:
        sudo sh amd-driver-installer-12.6-legacy-x86.x86_64.run
    It did not work. Then I tried this:
        sudo add-apt-repository ppa:makson96/fglrx
        sudo apt-get update
        sudo apt-get upgrade
        sudo apt-get install fglrx-legacy
    That did not work either. I removed the repository, rolled the xorg version back to 1.13, and tried this:
        sudo sh /usr/share/ati/fglrx-uninstall.sh
        sudo apt-get remove --purge fglrx fglRx_* fglrx-amdcccle* fglrx-dev* xorg-driver-fglrx
        sudo apt-get remove --purge xserver-xorg-video-ati xserver-xorg-video-radeon
        sudo apt-get install xserver-xorg-video-ati
        sudo apt-get install --reinstall libgl1-mesa-glx libgl1-mesa-dri xserver-xorg-core
    That did bring the screen resolution back, but there is still no Unity. Is there something else I could do? My graphics card is:
        lspci | grep VGA
        01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV620 [Mobility Radeon HD 3400 Series]
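    With the open radeon driver back in place and the resolution restored, the missing Unity is sometimes just broken Compiz/Unity state left over from the upgrade rather than a driver problem. A minimal sketch under that assumption, run from the Ctrl+Alt+T terminal:

        # reset Compiz/Unity settings that may have been mangled by the upgrade
        dconf reset -f /org/compiz/
        # make sure the Unity packages themselves are intact
        sudo apt-get install --reinstall unity ubuntu-desktop
        # restart the shell (or simply reboot / restart lightdm)
        setsid unity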

    Read the article

  • I Hereby Resolve… (T-SQL Tuesday #14)

    - by smisner
    It’s time for another T-SQL Tuesday, hosted this month by Jen McCown (blog|twitter), on the topic of resolutions. Specifically, “what techie resolutions have you been pondering, and why?” I like that word – pondering – because I ponder a lot. And while there are many things that I do already because of my job, there are many more things that I ponder about doing…if only I had the time. Then I ponder about making time, but then it’s back to work! In 2010, I was moderately more successful in making time for things that I ponder about than I had been in years past, and I hope to continue that trend in 2011. If Jen hadn’t settled on this topic, I could keep my ponderings to myself and no one would ever know the outcome, but she’s egged me on (and everyone else that chooses to participate)! So here goes… For me, having resolve to do something means that I wouldn’t be doing that something as part of my ordinary routine. It takes extra effort to make time for it. It’s not something that I do once and check off a list, but something that I need to commit to over a period of time. So with that in mind, I hereby resolve… To Learn Something New… One of the things I love about my job is that I get to do a lot of things outside of my ordinary routine. It’s a veritable smorgasbord of opportunity! So what more could I possibly add to that list of things to do? Well, the more I learn, the more I realize I have so much more to learn. It would be much easier to remain in ignorant bliss, but I was born to learn. Constantly. (And apparently to teach, too– my father will tell you that as a small child, I had the neighborhood kids gathered together to play school – in the summer. I’m sure they loved that – but they did it!) These are some of things that I want to dedicate some time to learning this year: Spatial data. I have a good understanding of how maps in Reporting Services works, and I can cobble together a simple T-SQL spatial query, but I know I’m only scratching the surface here. Rob Farley (blog|twitter) posted interesting examples of combining maps and PivotViewer, and I think there’s so many more creative possibilities. I’ve always felt that pictures (including charts and maps) really help people get their minds wrapped around data better, and because a lot of data has a geographic aspect to it, I believe developing some expertise here will be beneficial to my work. PivotViewer. Not only is PivotViewer combined with maps a useful way to visualize data, but it’s an interesting way to work with data. If you haven’t seen it yet, check out this interactive demonstration using Netflx OData feed. According to Rob Farley, learning how to work with PivotViewer isn’t trivial. Just the type of challenge I like! Security. You’ve heard of the accidental DBA? Well, I am the accidental security person – is there a word for that role? My eyes used to glaze over when having to study about security, or  when reading anything about it. Then I had a problem long ago that no one could figure out – not even the vendor’s tech support – until I rolled up my sleeves and painstakingly worked through the myriad of potential problems to resolve a very thorny security issue. I learned a lot in the process, and have been able to share what I’ve learned with a lot of people. But I’m not convinced their eyes weren’t glazing over, too. I don’t take it personally – it’s just a very dry topic! 
So in addition to deepening my understanding about security, I want to find a way to make the subject as it relates to SQL Server and business intelligence more accessible and less boring. Well, there’s actually a lot more that I could put on this list, and a lot more things I have plans to do this coming year, but I run the risk of overcommitting myself. And then I wouldn’t have time… To Have Fun! My name is Stacia and I’m a workaholic. When I love what I do, it’s difficult to separate out the work time from the fun time. But there are some things that I’ve been meaning to do that aren’t related to business intelligence for which I really need to develop some resolve. And they are techie resolutions, too, in a roundabout sort of way! Photography. When my husband and I went on an extended camping trip in 2009 to Yellowstone and the Grand Tetons, I had a nice little digital camera that took decent pictures. But then I saw the gorgeous cameras that other tourists were toting around and decided I needed one too. So I bought a Nikon D90 and have started to learn to use it, but I’m definitely still in the beginning stages. I traveled so much in 2010 and worked on two book projects that I didn’t have a lot of free time to devote to it. I was very inspired by Kimberly Tripp’s (blog|twitter) and Paul Randal’s (blog|twitter) photo-adventure in Alaska, though, and plan to spend some dedicated time with my camera this year. (And hopefully before I move to Alaska – nothing set in stone yet, but we hope to move to a remote location – with Internet access – later this year!) Astronomy. I have this cool telescope, but it suffers the same fate as my camera. I have been gone too much and busy with other things that I haven’t had time to work with it. I’ll figure out how it works, and then so much time passes by that I forget how to use it. I have this crazy idea that I can actually put the camera and the telescope together for astrophotography, but I think I need to start simple by learning how to use each component individually. As long as I’m living in Las Vegas, I know I’ll have clear skies for nighttime viewing, but when we move to Alaska, we’ll be living in a rain forest. I have no idea what my opportunities will be like there – except I know that when the sky is clear, it will be far more amazing than anything I can see in Vegas – even out in the desert - because I’ll be so far away from city light pollution. I’ve been contemplating putting together a blog on these topics as I learn. As many of my fellow bloggers in the SQL Server community know, sometimes the best way to learn something is to sit down and write about it. I’m just stumped by coming up with a clever name for the new blog, which I was thinking about inaugurating with my move to Alaska. Except that I don’t know when that will be exactly, so we’ll just have to wait and see which comes first!

    Read the article

  • How do I set libavcodec to use 4:2:2 chroma when encoding MPEG-2 4:2:2 profile?

    - by Mike Pollitt
    I have a project using libavcodec (ffmpeg). I'm using it to encode MPEG-2 video at 4:2:2 Profile, Main Level. I have the pixel format PIX_FMT_YUV422P selected in the AVCodecContext, however the video output I'm getting has all the colours wrong, and looks to me like the encoder is incorrectly reading the buffers as though it thinks it is 4:2:0 chroma rather than 4:2:2. Here's my codec setup: // // AVFormatContext* _avFormatContext previously defined as mpeg2video // // // Set up the video stream for output // AVVideoStream* _avVideoStream = av_new_stream(_avFormatContext, 0); if (!_avVideoStream) { err = ccErrWFFFmpegUnableToAllocateStream; goto bail; } _avCodecContext = _avVideoStream->codec; _avCodecContext->codec_id = CODEC_ID_MPEG2VIDEO; _avCodecContext->codec_type = CODEC_TYPE_VIDEO; // // Set up required parameters // _avCodecContext->rc_max_rate = _avCodecContext->rc_min_rate = _avCodecContext->bit_rate = src->_avCodecContext->bit_rate; _avCodecContext->flags = CODEC_FLAG_INTERLACED_DCT; _avCodecContext->flags2 = CODEC_FLAG2_INTRA_VLC | CODEC_FLAG2_NON_LINEAR_QUANT; _avCodecContext->qmin = 1; _avCodecContext->qmax = 1; _avCodecContext->rc_buffer_size = _avCodecContext->rc_initial_buffer_occupancy = 2000000; _avCodecContext->rc_buffer_aggressivity = 0.25; _avCodecContext->profile = 0; _avCodecContext->level = 5; _avCodecContext->width = f->GetWidth(); // f is a private Frame class with width, height properties etc. _avCodecContext->height = f->GetHeight(); _avCodecContext->time_base.den = 25; _avCodecContext->time_base.num = 1; _avCodecContext->gop_size = 12; _avCodecContext->max_b_frames = 2; _avCodecContext->pix_fmt = PIX_FMT_YUV422P; if (_avFormatContext->oformat->flags & AVFMT_GLOBALHEADER) { _avCodecContext->flags |= CODEC_FLAG_GLOBAL_HEADER; } if (av_set_parameters(_avFormatContext, NULL) < 0) { err = ccErrWFFFmpegUnableToSetParameters; goto bail; } // // Set up video codec for encoding // AVCodec* _avCodec = avcodec_find_encoder(_avCodecContext->codec_id); if (!_avCodec) { err = ccErrWFFFmpegUnableToFindCodecForOutput; goto bail; } if (avcodec_open(_avCodecContext, _avCodec) < 0) { err = ccErrWFFFmpegUnableToOpenCodecForOutput; goto bail; } A screengrab of the resulting video frame can be seen at http://ftp.limeboy.com/images/screen_grab.png (the input was standard colour bars). I've checked by outputting debug frames to TGA format at various points in the process, and I can confirm that it is all fine and dandy up until the point that libavcodec encodes the frame. Any assistance most appreciated! Cheers, Mike.

    Read the article

  • HLSL/XNA Ambient light texture mixed up with multi pass lighting

    - by Manu-EPITA
    I've been having some troubles lately with lighting. I have found a source on google which is working pretty good on the example. However, when I try to implement it to my current project, I am getting some very weird bugs. The main one is that my textures are "mixed up" when I only activate the ambient light, which means that a model gets the texture of another one . I am using the same effect for every meshes of my models. I guess this could be the problem, but I don't really know how to "reset" an effect for a new model. Is it possible? Here is my shader: float4x4 WVP; float4x4 WVP; float3x3 World; float3 Ke; float3 Ka; float3 Kd; float3 Ks; float specularPower; float3 globalAmbient; float3 lightColor; float3 eyePosition; float3 lightDirection; float3 lightPosition; float spotPower; texture2D Texture; sampler2D texSampler = sampler_state { Texture = <Texture>; MinFilter = anisotropic; MagFilter = anisotropic; MipFilter = linear; MaxAnisotropy = 16; }; struct VertexShaderInput { float4 Position : POSITION0; float2 Texture : TEXCOORD0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Texture : TEXCOORD0; float3 PositionO: TEXCOORD1; float3 Normal : NORMAL0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WVP); output.Normal = input.Normal; output.PositionO = input.Position.xyz; output.Texture = input.Texture; return output; } float4 PSAmbient(VertexShaderOutput input) : COLOR0 { return float4(Ka*globalAmbient + Ke,1) * tex2D(texSampler,input.Texture); } float4 PSDirectionalLight(VertexShaderOutput input) : COLOR0 { //Difuze float3 L = normalize(-lightDirection); float diffuseLight = max(dot(input.Normal,L), 0); float3 diffuse = Kd*lightColor*diffuseLight; //Specular float3 V = normalize(eyePosition - input.PositionO); float3 H = normalize(L + V); float specularLight = pow(max(dot(input.Normal,H),0),specularPower); if(diffuseLight<=0) specularLight=0; float3 specular = Ks * lightColor * specularLight; //sum all light components float3 light = diffuse + specular; return float4(light,1) * tex2D(texSampler,input.Texture); } technique MultiPassLight { pass Ambient { VertexShader = compile vs_3_0 VertexShaderFunction(); PixelShader = compile ps_3_0 PSAmbient(); } pass Directional { PixelShader = compile ps_3_0 PSDirectionalLight(); } } And here is how I actually apply my effects: public void ApplyLights(ModelMesh mesh, Matrix world, Texture2D modelTexture, Camera camera, Effect effect, GraphicsDevice graphicsDevice) { graphicsDevice.BlendState = BlendState.Opaque; effect.CurrentTechnique.Passes["Ambient"].Apply(); foreach (ModelMeshPart part in mesh.MeshParts) { graphicsDevice.SetVertexBuffer(part.VertexBuffer); graphicsDevice.Indices = part.IndexBuffer; // Texturing graphicsDevice.BlendState = BlendState.AlphaBlend; if (modelTexture != null) { effect.Parameters["Texture"].SetValue( modelTexture ); } graphicsDevice.DrawIndexedPrimitives( PrimitiveType.TriangleList, part.VertexOffset, 0, part.NumVertices, part.StartIndex, part.PrimitiveCount ); // Applying our shader to all the mesh parts effect.Parameters["WVP"].SetValue( world * camera.View * camera.Projection ); effect.Parameters["World"].SetValue(world); effect.Parameters["eyePosition"].SetValue( camera.Position ); graphicsDevice.BlendState = BlendState.Additive; // Drawing lights foreach (DirectionalLight light in DirectionalLights) { effect.Parameters["lightColor"].SetValue(light.Color.ToVector3()); 
effect.Parameters["lightDirection"].SetValue(light.Direction); // Applying changes and drawing them effect.CurrentTechnique.Passes["Directional"].Apply(); graphicsDevice.DrawIndexedPrimitives( PrimitiveType.TriangleList, part.VertexOffset, 0, part.NumVertices, part.StartIndex, part.PrimitiveCount ); } } I am also applying this when loading the effect: effect.Parameters["lightColor"].SetValue(Color.White.ToVector3()); effect.Parameters["globalAmbient"].SetValue(Color.White.ToVector3()); effect.Parameters["Ke"].SetValue(0.0f); effect.Parameters["Ka"].SetValue(0.01f); effect.Parameters["Kd"].SetValue(1.0f); effect.Parameters["Ks"].SetValue(0.3f); effect.Parameters["specularPower"].SetValue(100); Thank you very much UPDATE: I tried to load an effect for each model when drawing, but it doesn't seem to have changed anything. I suppose it is because XNA detects that the effect has already been loaded before and doesn't want to load a new one. Any idea why?

    Read the article

  • Problem Implementing Texture on Libgdx Mesh of Randomized Terrain

    - by BrotherJack
    I'm having problems understanding how to apply a texture to a non-rectangular object. The following code creates textures such as this: from the debug renderer I think I've got the physical shape of the "earth" correct. However, I don't know how to apply a texture to it. I have a 50x50 pixel image (in the environment constructor as "dirt.png"), that I want to apply to the hills. I have a vague idea that this seems to involve the mesh class and possibly a ShapeRenderer, but the little i'm finding online is just confusing me. Bellow is code from the class that makes and regulates the terrain and the code in a separate file that is supposed to render it (but crashes on the mesh.render() call). Any pointers would be appreciated. public class Environment extends Actor{ Pixmap sky; public Texture groundTexture; Texture skyTexture; double tankypos; //TODO delete, temp public Tank etank; //TODO delete, temp int destructionRes; // how wide is a static pixel private final float viewWidth; private final float viewHeight; private ChainShape terrain; public Texture dirtTexture; private World world; public Mesh terrainMesh; private static final String LOG = Environment.class.getSimpleName(); // Constructor public Environment(Tank tank, FileHandle sfileHandle, float w, float h, int destructionRes) { world = new World(new Vector2(0, -10), true); this.destructionRes = destructionRes; sky = new Pixmap(sfileHandle); viewWidth = w; viewHeight = h; skyTexture = new Texture(sky); terrain = new ChainShape(); genTerrain((int)w, (int)h, 6); Texture tankSprite = new Texture(Gdx.files.internal("TankSpriteBase.png")); Texture turretSprite = new Texture(Gdx.files.internal("TankSpriteTurret.png")); tank = new Tank(0, true, tankSprite, turretSprite); Rectangle tankrect = new Rectangle(300, (int)tankypos, 44, 45); tank.setRect(tankrect); BodyDef terrainDef = new BodyDef(); terrainDef.type = BodyType.StaticBody; terrainDef.position.set(0, 0); Body terrainBody = world.createBody(terrainDef); FixtureDef fixtureDef = new FixtureDef(); fixtureDef.shape = terrain; terrainBody.createFixture(fixtureDef); BodyDef tankDef = new BodyDef(); Rectangle rect = tank.getRect(); tankDef.type = BodyType.DynamicBody; tankDef.position.set(0,0); tankDef.position.x = rect.x; tankDef.position.y = rect.y; Body tankBody = world.createBody(tankDef); FixtureDef tankFixture = new FixtureDef(); PolygonShape shape = new PolygonShape(); shape.setAsBox(rect.width*WORLD_TO_BOX, rect.height*WORLD_TO_BOX); fixtureDef.shape = shape; dirtTexture = new Texture(Gdx.files.internal("dirt.png")); etank = tank; } private void genTerrain(int w, int h, int hillnessFactor){ int width = w; int height = h; Random rand = new Random(); //min and max bracket the freq's of the sin/cos series //The higher the max the hillier the environment int min = 1; //allocating horizon for screen width Vector2[] horizon = new Vector2[width+2]; horizon[0] = new Vector2(0,0); double[] skyline = new double[width]; //TODO skyline necessary as an array? 
//ratio of amplitude of screen height to landscape variation double r = (int) 2.0/5.0; //number of terms to be used in sine/cosine series int n = 4; int[] f = new int[n*2]; //calculating omegas for sine series for(int i = 0; i < n*2 ; i ++){ f[i] = rand.nextInt(hillnessFactor - min + 1) + min; } //amp is the amplitude of the series int amp = (int) (r*height); double lastPoint = 0.0; for(int i = 0 ; i < width; i ++){ skyline[i] = 0; for(int j = 0; j < n; j++){ skyline[i] += ( Math.sin( (f[j]*Math.PI*i/height) ) + Math.cos(f[j+n]*Math.PI*i/height) ); } skyline[i] *= amp/(n*2); skyline[i] += (height/2); skyline[i] = (int)skyline[i]; //TODO Possible un-necessary float to int to float conversions tankypos = skyline[i]; horizon[i+1] = new Vector2((float)i, (float)skyline[i]); if(i == width) lastPoint = skyline[i]; } horizon[width+1] = new Vector2(800, (float)lastPoint); terrain.createChain(horizon); terrain.createLoop(horizon); //I have no idea if the following does anything useful :( terrainMesh = new Mesh(true, (width+2)*2, (width+2)*2, new VertexAttribute(Usage.Position, (width+2)*2, "a_position")); float[] vertices = new float[(width+2)*2]; short[] indices = new short[(width+2)*2]; for(int i=0; i < (width+2); i+=2){ vertices[i] = horizon[i].x; vertices[i+1] = horizon[i].y; indices[i] = (short)i; indices[i+1] = (short)(i+1); } terrainMesh.setVertices(vertices); terrainMesh.setIndices(indices); } Here is the code that is (supposed to) render the terrain. @Override public void render(float delta) { Gdx.gl.glClearColor(1, 1, 1, 1); Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); // tell the camera to update its matrices. camera.update(); // tell the SpriteBatch to render in the // coordinate system specified by the camera. backgroundStage.draw(); backgroundStage.act(delta); uistage.draw(); uistage.act(delta); batch.begin(); debugRenderer.render(this.ground.getWorld(), camera.combined); batch.end(); //Gdx.graphics.getGL10().glEnable(GL10.GL_TEXTURE_2D); ground.dirtTexture.bind(); ground.terrainMesh.render(GL10.GL_TRIANGLE_FAN); //I'm particularly lost on this ground.step(); }

    Read the article

  • How to diagnose and solve an erratic "HDCP Support Required"?

    - by Jom Orgstrom
    I am playing a digital tv broadcast on Windows Media Center for Windows 7. I built this system so it works with HDCP, and in fact I have been able to watch tv and bluray before with this same computer. However, I suddenly started getting an "HDCP Support Required" error from WMC. The entire message is as follows: HDCP Support Required High-bandwidth Digital Content Protection (HDCP) may not be supported by the current video card. Use an HDCP-compliant display, video card, and video driver. Or, connect using an analog connection such as component or VGA. Relevant specs are: CPU: Ivy Bridge Core i7-3770 Motherboard: Asus P8H77-I Memory: 16GB DDR3-1600 Graphics: Radeon HD 7850 (Driver by AMD, version 8.982.0.0 built on 2012/07/27) Display: Acer P243w connected by HDMI Sound: Roland Quad-Capture (It complains even when I use the bundled VIA HD Audio) TV Tuner: I-O Data GV-MC7/HZ3 OS: Windows 7 Professional SP1, Windows Update enabled. All patched and up to date. As you can see, there is nothing weird or old about my setup. I am also not doing anything strange, not doing any overclocking, weird system changes and so on. One thing that does happen from time to time, is that the display goes black for a few seconds (sometimes when watching media contents, sometimes when just using photoshop or Visual Studio). This happened with my previous setup as well, so I'd be inclined to think it is a display or cable issue (apart from the BD drive, these are the only things I kept from my previous setup to this one). But being a digital transfer, as far as I know, these things either work or not. Never erratically or with decreased quality. The thing is that sometimes I can watch the TV, sometimes not. This happens with recorded programs as well, so it's not a per-program thing. Sometimes rebooting helps, sometimes it doesn't. Sometimes unplugging and plugging back the HDMI connector helps, sometimes it doesn't. Sometimes doing so doesn't even turn the screen back on, so I have to reboot. Unfortunately, WMC's error message is quite unhelpful. I'd like to know exactly where the problem is, so I can solve it. I don't want to buy a brand new display just to then find out it was a registry setting that was misconfigured. I've tried looking at the system event viewer, but these errors don't show up at all in there. Other people who have this problem seem to have a setup that is not HDCP compliant, so I turn to you guys here. Anybody knows how to diagnose this problem? Edit: So I got the Cyberlink Blu-ray disc advisor. I ran it and told me everything was okay, except for the Video Connection Type, which showed as "Digital (without HDCP)". I then proceeded to unplug the power cable from the monitor, plugged it in again, ran the tool again, and now it's "Digital (with HDCP)". Needless to say, I can watch my TV and recorded programs on WMP again. I'm guessing that at some point, something may be slightly wrong with the HDCP setup, and Windows decides to reset the entire content protection path (which leads to the screen blanking out). Usually the reset succeeds, but sometimes it doesn't, so Windows defaults to turning HDCP off. There's no way to turn it back on, except by doing a hard reset of the display. I really want to know what the exact error was, so I can fix it. Is it the cable? is it the display? is it the video card? the driver? Also, is there any other way to try and turn HDCP on again without having to hard reset the display? Oh, questions, questions...

    Read the article

  • Another sound not working post

    - by Thomas Smart
    Tried all the other "sound not working" posts i think, lost count. purge/reinstall alsa and pulse, reboot, add user to audio group, various lines in the alsa config file such as "options snd-hda-intel model=" then tried different options like generic, auto, basic, default, etc. tried pulseaudio -k && sudo alsa force-reload a few times, with and without rebooting. Hardware: 16gb ram, core I7-4790, Intel Haswell mboard with onboard sound and graphics Multimedia: Audio Adapter: HDA-Intel-HDA Intel HDMI OS: Ubuntu server 14.04 with ubuntu-desktop installed. GUI sound settings lists only the dummy sound card alsamixer -c 0 ¦ Card: HDA Intel HDMI F1: Help ¦ ¦ Chip: Intel Haswell HDMI F2: System information ¦ ¦ View: F3:[Playback] F4: Capture F5: All F6: Select sound card ¦ ¦ Item: S/PDIF ¦ ¦ +--+ ¦ ¦ ¦OO¦ ¦ ¦ +--+ ¦ ¦ < S/PDIF > ¦ aplay -l **** List of PLAYBACK Hardware Devices **** card 0: HDMI [HDA Intel HDMI], device 3: HDMI 0 [HDMI 0] Subdevices: 1/1 Subdevice #0: subdevice #0 aplay -L default Playback/recording through the PulseAudio sound server null Discard all samples (playback) or generate zero samples (capture) pulse PulseAudio Sound Server hdmi:CARD=HDMI,DEV=0 HDA Intel HDMI, HDMI 0 HDMI Audio Output dmix:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Direct sample mixing device dsnoop:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Direct sample snooping device hw:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Direct hardware device without any conversions plughw:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Hardware device with all software conversions cat /proc/asound/cards 0 [HDMI ]: HDA-Intel - HDA Intel HDMI HDA Intel HDMI at 0xf7d14000 irq 46 cat /proc/asound/devices 1: : sequencer 2: [ 0- 3]: digital audio playback 3: [ 0- 0]: hardware dependent 4: [ 0] : control 33: : timer mplayer -ao alsa:device=hdmi /usr/share/sounds/ubuntu/stereo/system-ready.ogg MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team mplayer: could not connect to socket mplayer: No such file or directory Failed to open LIRC support. You will not be able to use your remote control. Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg. libavformat version 54.20.4 (external) Mismatching header version 54.20.3 libavformat file format detected. 
[lavf] stream 0: audio (vorbis), -aid 0 Load subtitles in /usr/share/sounds/ubuntu/stereo/ ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders libavcodec version 54.35.0 (external) AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400) Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis) ========================================================================== [AO_ALSA] alsa-lib: confmisc.c:768:(parse_card) cannot find card '1' [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory [AO_ALSA] alsa-lib: confmisc.c:392:(snd_func_concat) error evaluating strings [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory [AO_ALSA] alsa-lib: confmisc.c:1251:(snd_func_refer) error evaluating name [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory [AO_ALSA] alsa-lib: conf.c:4727:(snd_config_expand) Evaluate error: No such file or directory [AO_ALSA] alsa-lib: pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM hdmi [AO_ALSA] Playback open error: No such file or directory Failed to initialize audio driver 'alsa:device=hdmi' Could not open/initialize audio device -> no sound. Audio: no sound Video: no video Exiting... (End of file) mplayer -ao alsa:device=hw=0.3 /usr/share/sounds/ubuntu/stereo/system-ready.ogg MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team mplayer: could not connect to socket mplayer: No such file or directory Failed to open LIRC support. You will not be able to use your remote control. Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg. libavformat version 54.20.4 (external) Mismatching header version 54.20.3 libavformat file format detected. [lavf] stream 0: audio (vorbis), -aid 0 Load subtitles in /usr/share/sounds/ubuntu/stereo/ ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders libavcodec version 54.35.0 (external) AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400) Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis) ========================================================================== [AO_ALSA] Format floatle is not supported by hardware, trying default. AO: [alsa] 44100Hz 2ch s16le (2 bytes per sample) Video: no video Starting playback... A: 0.4 (00.4) of 0.8 (00.7) 0.1% Exiting... (End of file) Thank you for your time and help :)
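    Given that aplay -l above lists only the HDMI codec (card 0, device 3), a quick check is to aim playback explicitly at that device and at PulseAudio; if neither produces sound, the analog codec is most likely not being detected at all rather than just misrouted. The device name comes from the aplay -l output above; the sample file path is the stock alsa-utils sample and is an assumption:

        # play a test tone straight at the HDMI device listed by aplay -l (card 0, device 3)
        speaker-test -D plughw:0,3 -c 2 -t wav
        # and play a sample through PulseAudio for comparison
        paplay /usr/share/sounds/alsa/Front_Center.wav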

    Read the article

  • Silverlight Cream for December 12, 2010 - 2 -- #1009

    - by Dave Campbell
    In this Issue: Michael Crump, Jesse Liberty, Shawn Wildermuth, Domagoj Pavlešic, Peter Kuhn, James Ashley, Sara Summers, Morten Nielsen, Peter Torr, and Tau Sick. Above the Fold: Silverlight: "Silverlight 4 – Coded UI Framework Video Tutorial" Michael Crump WP7: "Windows Phone From Scratch #12–Custom Behaviors (Part I)" Jesse Liberty From SilverlightCream.com: Silverlight 4 – Coded UI Framework Video Tutorial Michael Crump posted a video tutorial today on the Coded UI Test Framework that we got with the VS2010 Feature Pack 2. Wanna create automated tests? ... check out Michael's video and save yourself some time. Windows Phone From Scratch #12–Custom Behaviors (Part I) Jesse Liberty posted his Windows Phone from Scratch number 12 today... and it's on Custom Behaviors... cool stuff... need to read this and get your head around it... this is part 1, jump on it before he drops part 2 on us! The Next Application Platform? All of them... Shawn Wildermuth has a thought-provoking post up ... check it out and see if you're ready to join him on the adventure of building for all the platforms... Windows Phone 7 Accelerometer Test App Domagoj Pavlešic has a test app up for the accelerometer on the WP7 ... if you need to use it, and are having problems, a good example always helps me. Protocol of developing an animation texture tool Peter Kuhn found a need for a tool to creat some animations for an WP7 XNA game... so he challenged himself to write it, and detailed out all his steps as he went. Re-examining WP7 Launchers and Choosers James Ashley's most recent post is on the Pivot Control ... check this out... add a working Horizontally oriented slider to a pivot... plus some external links to help out New Prototyping Sketch Sheets for WP7 This is one of those posts that I had to go to SilverlightCream and make sure I hadn't hit it yet... pretty cool prototype sheets for WP7 by Sara Summers ... we've seen others, they're all good. Simulating GPS on Windows Phone 7 Morten Nielsen helps you get around the fact that you're not going to be able to use the emulator for testing your GPS app ... at least not without some assistance... and that doesn't mean hauling your dev system around your neighborhood, either. How to correctly handle application deactivation and reactivation We've seen posts on Tombstoning, but probably not from Silverlight team members... check this one out from Peter Torr ... great even sequence information and all the info on how to correctly handle it, plus external links to the documentation... you knew there was documentation, right? :) Localizing a Windows Phone 7 Application Tau Sick has a post up discussing Localization and your WP7 apps... coming from soneone with an app in the marketplace in 3 languages, it's a pretty good bet he's got it figured out! Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Converting openGl code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question to @byte56's excellent answer to this question about picking algorithms. I'm trying to convert one of his code examples to DirectX 11, but I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with OpenGL, but I can imagine OpenGL uses different coordinate systems and functions that alter how the code must be implemented. This is his code example:

        public Ray GetPickRay() {
            int mouseX = Mouse.getX();
            int mouseY = WORLD.Byte56Game.getHeight() - Mouse.getY();

            float windowWidth = WORLD.Byte56Game.getWidth();
            float windowHeight = WORLD.Byte56Game.getHeight();

            // get the mouse position in screen-space coords
            double screenSpaceX = ((float) mouseX / (windowWidth / 2) - 1.0f) * aspectRatio;
            double screenSpaceY = 1.0f - (float) mouseY / (windowHeight / 2);
            double viewRatio = Math.tan(((float) Math.PI / (180.f / ViewAngle) / 2.00f)) * zoomFactor;
            screenSpaceX = screenSpaceX * viewRatio;
            screenSpaceY = screenSpaceY * viewRatio;

            // find the near and far points in camera space
            Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * NearPlane), (float) (screenSpaceY * NearPlane), (float) (-NearPlane), 1);
            Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * FarPlane), (float) (screenSpaceY * FarPlane), (float) (-FarPlane), 1);

            // unproject the 2D window into 3D to see where in 3D we're actually clicking
            Matrix4f tmpView = new Matrix4f(view);
            Matrix4f invView = (Matrix4f) tmpView.invert();
            Vector4f worldSpaceNear = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceNear, worldSpaceNear);
            Vector4f worldSpaceFar = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceFar, worldSpaceFar);

            // calculate the ray position and direction
            Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
            Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
            rayDirection.normalise();

            return new Ray(rayPosition, rayDirection);
        }

    All rights reserved to him, of course. This is my DirectX 11 code:

        void GraphicEngine::pickRayVector(float mouseX, float mouseY, XMVECTOR& pickRayInWorldSpacePos, XMVECTOR& pickRayInWorldSpaceDir)
        {
            float PRVecX, PRVecY;
            float nearPlane = 0.1f;
            float farPlane = 200.0f;
            float viewAngle = 0.4f * 3.14f;

            PRVecX = (((2.0f * mouseX) / ClientWidth) - 1) * tan(viewAngle / 2);
            PRVecY = (1 - ((2.0f * mouseY) / ClientHeight)) * tan(viewAngle / 2);

            XMVECTOR cameraSpaceNear = XMVectorSet(PRVecX * nearPlane, PRVecY * nearPlane, -nearPlane, 1.0f);
            XMVECTOR cameraSpaceFar = XMVectorSet(PRVecX * farPlane, PRVecY * farPlane, -farPlane, 1.0f);

            // Transform the 3D ray from view space to world space
            XMMATRIX invMat;
            XMVECTOR matInvDeter;
            invMat = XMMatrixInverse(&matInvDeter, cam->getCameraView()); // the inverse of the view matrix is the world-space matrix

            XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat);
            XMVECTOR worldSpaceFar = XMVector3TransformCoord(cameraSpaceFar, invMat);

            pickRayInWorldSpacePos = worldSpaceNear;
            pickRayInWorldSpaceDir = worldSpaceFar - worldSpaceNear;
            pickRayInWorldSpaceDir = XMVector3Normalize(pickRayInWorldSpaceDir);
        }

    A couple of notes:

    - The mouse coordinates are already converted so that the top-left corner of the client window is (0,0) and the bottom-right is (800,600), or whatever resolution you have.
    - I hadn't used a far or near plane before, so I just made up some arbitrary numbers for them. To my understanding it shouldn't matter, as long as the object you are trying to pick lies between those two values.
    - The viewAngle is the same angle I used when setting up the camera view with XMMatrixPerspectiveFovLH; I just haven't made it a member variable of my Camera class yet.
    - I removed the variables aspectRatio and zoomFactor because I assumed they were related to some specific feature of his game.

    Now I'm not sure, but I think the problem lies either in the mouse-to-view-space conversion, maybe because we use different coordinate systems, or in how I transform the matrices at the end, because I know order matters when it comes to matrices. Any help is appreciated! Thanks in advance.

    Edit: One more note, my code is in C++.
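    Since the projection here is built with XMMatrixPerspectiveFovLH (a left-handed setup), below is a minimal sketch of how the view-space ray direction could be formed under that convention, keeping the aspect-ratio term from the original Java code. This is an illustration under assumptions, not a confirmed fix: clientWidth, clientHeight and viewAngle are hypothetical parameters standing in for the member variables above, and the +Z direction follows only from the left-handed convention.

        #include <DirectXMath.h>
        #include <cmath>
        using namespace DirectX;

        // Hedged sketch: view-space pick-ray direction for a left-handed projection
        // built with XMMatrixPerspectiveFovLH. Parameter names are placeholders.
        XMVECTOR ViewSpacePickDir(float mouseX, float mouseY,
                                  float clientWidth, float clientHeight,
                                  float viewAngle /* vertical FOV in radians */)
        {
            const float aspectRatio = clientWidth / clientHeight;
            const float viewRatio   = tanf(viewAngle * 0.5f);

            // Map the pixel position to [-1, 1]; y is flipped because screen
            // coordinates grow downward while view-space y grows upward.
            const float x = (((2.0f * mouseX) / clientWidth) - 1.0f) * viewRatio * aspectRatio;
            const float y = (1.0f - ((2.0f * mouseY) / clientHeight)) * viewRatio;

            // In a left-handed view space the camera looks down +Z, so the ray
            // direction uses +1 on z (the OpenGL example above uses -z instead).
            return XMVector3Normalize(XMVectorSet(x, y, 1.0f, 0.0f));
        }

    If that direction turns out to be the right one, it could then be carried into world space with XMVector3TransformNormal against the inverted view matrix (so the translation part is ignored), with the ray origin taken directly from the camera position, which avoids the near/far unprojection step altogether.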

    Read the article

  • How to ace Skype Interviews

    - by FelixWehmeyer
    Many companies these days opt to include a Skype interview in the recruitment process, as it comes close to a face-to-face interview without the time and costs involved for both the company and the candidate. In some cases during the recruitment process at Oracle you might also be asked to do a Skype interview. To help you get started with this, we researched some websites to give you several tips and tricks. What most bloggers say about this topic is collected in this article to help you prepare.

    It is all about Technology
    The bit that can make a Skype interview more complicated than a face-to-face or phone interview is the fact that you are using additional technology. Always check the video and audio capabilities of your computer to make sure they work properly, and be prepared for the connection to be limited during the interview. Using a webcam can also be confusing if you do not have a lot of experience with it. Make sure you look at the camera and not the monitor, to avoid the impression that you are looking away.

    Practice
    If you do not feel comfortable using the camera, do a mock interview with a friend or family member before the actual interview. Be aware that facial expressions and reactions come across differently on a monitor, so practice how you come across during the interview. Good lighting in the room also helps you look your best for the interviewer.

    You and your room
    Dress code, as in any face-to-face interview, is important to think about. Dress the same way you would for a face-to-face interview and avoid patterns or informal clothing. Another tip is to be aware of your surroundings. Make sure the room you use looks good on camera: keep it neat and tidy, and think about how the walls look behind you. Also make sure you do not get distracted during the interview by anyone or anything, as this will directly affect your ability to focus and concentrate.

    What is in a name
    As with any account you share during the recruitment process, whether your email address or your Skype name, make sure it comes across as professional. Avoid nicknames or strange words in your accounts; stick to a first name – last name combination or an abbreviation of it.

    If you would like to read more about this topic, have a look at the links below, which we used as inspiration for this blog article. 7 Deadly Skype Interview Sins is fun to read and gives you some good advice to keep in mind.

    http://www.inc.com/guides/201103/4-tips-for-conducting-a-job-interview-using-skype.html
    http://blog.simplyhired.com/2012/05/5-tips-to-a-great-skype-interview.html
    http://www.cnn.com/2011/LIVING/07/11/skype.interview.tips.cb/index.html
    http://www.ehow.com/how_5648281_prepare-skype-interview.html

    Read the article

  • Let Me Show You Something: Instagram, Vine and Snapchat for Brands

    - by Mike Stiles
    While brands are well aware of how much more impactful images are than text-only posts on social channels, today you're additionally being presented with platform after platform for hosting, doctoring and sharing photos and videos. Can you play in every sandbox? And if you do, can you be brilliant on all of them? As has usually been the case, so far brands are sticking their toes into new platforms without actually committing to them, strategizing for them, or resourcing them. TrackMaven found that of the 123 F500 companies using Instagram, only 22% are active on it. Likewise, research from Simply Measured found brands are indeed jumping in, with the number establishing a presence on Instagram up 55% over the past year. Users want them there… brand engagement has exploded 350%, and over 1/3 of the top brands have at least 10,000 followers. BUT… the top 10 brands are generating 33% of all posts and reaping 83% of all engagement.

    Things are also growing on Twitter's Vine, the 6-second looping video app that hit 40 million users in August. The 7th Chamber says 5 tweets a second contain a Vine link. Other studies say branded Vines are 4 times more likely to be shared and seen than rank-and-file branded videos. Why? Users know that even if a video is pure junk, they won't get robbed of too much of their valuable time. Vine keeps upgrading so you can make sure your videos are worth viewers' time: you can now edit videos and save and work on several projects concurrently. What you can't do is upload a finely crafted video into Vine, but you can do that with Instagram.

    The key to success? Same as with all other content: make it of value. Deliver a laugh or a lesson or both. How-tos, behind-the-scenes peeks, contests and demos all make sense in the short video format. Or follow Nash Grier's example, which is to just have fun with and connect to your viewers, earning their trust that your next Vine will be as good as the last. Nash is only 15, has over 1.4 million followers, and adds about 100,000 a week. He broke out when one of his videos was re-Vined by another kid with 300,000 followers. Make good stuff, get it in front of influencers, and your brand Vines could break out as well.

    Then there's Snapchat, the "this photo will self-destruct" platform. How can that be of use to brands, besides offering coupons that really expire? The jury is out. But with an audience of over 100 million and a valuation of $800 million, media-with-a-time-limit is compelling. Now there are "Snapchat Stories" that can last 24 hours and be shared with the public at large. You might be able to capitalize on how much more focus gets put on content when there's a time limit on its availability.

    The underlying truth in all of this is that these are all tools. Very cool, feature-rich tools, but tools. You can give the exact same art kit to 5 different people and get back 5 very different works, ranging from worthless garbage to masterpiece. Brands are being called upon to be still- and moving-image artists. That's what your customers are used to seeing, from a variety of sources. Commit to communicating with them accordingly. @mikestiles Photo: stock.xchng

    Read the article

< Previous Page | 195 196 197 198 199 200 201 202 203 204 205 206  | Next Page >