Search Results


  • Position footer to bottom of window or page, whichever is larger

    - by BenM
    I am currently working on a site that requires a footer placed either at the bottom of the window or at the bottom of the page content, whichever is lower. I have tried the height: 100% method, but this causes a problem: I also have a position: fixed header, and some padding on my content (defined in pixels). In addition, the height of the content may change after the page has loaded (accordions, etc.), so I wonder if there is a pure CSS way to position the footer at either the bottom of the window or the bottom of the document, while still allowing pixel padding and so forth. Here is an outline of the HTML structure:

        <header></header>
        <div class="content">
          <footer></footer>
        </div>

    I have also put together a Fiddle to demonstrate how the CSS works at the moment: http://jsfiddle.net/LY6Zs/. I am unfortunately unable to change the HTML structure (i.e. break the footer element out of .content).
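
    A minimal sticky-footer sketch under these constraints (the 60px footer height and 80px header clearance are assumed values, not taken from the question):

        html, body { height: 100%; }
        .content {
          position: relative;
          min-height: 100%;
          box-sizing: border-box;    /* keeps the pixel padding inside the 100% */
          padding: 80px 10px 60px;   /* top clears the fixed header, bottom reserves footer space */
        }
        .content footer {
          position: absolute;        /* bottom of content or window, whichever is lower */
          bottom: 0;
          left: 0;
          width: 100%;
          height: 60px;
        }

    Because min-height (rather than height) drives the box, the footer follows the content downward when accordions expand past one screen.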

    Read the article

  • Blit SDL_Surface onto another SDL_Surface and apply a colorkey

    - by NordCoder
    I want to load an SDL_Surface into an OpenGL texture with padding (so that a NPOT image becomes POT) and apply a color key on the surface afterwards. I either end up color-keying all pixels, regardless of their color, or not color-keying anything at all. I have tried a lot of different things, but none of them seem to work. Here is the relevant snippet of my code. I use a custom color class for the color key (range [0-1]):

        // Create an empty surface with the same settings as the original image
        SDL_Surface* paddedImage = SDL_CreateRGBSurface(image->flags, width, height,
                                                        image->format->BitsPerPixel,
        #if SDL_BYTEORDER == SDL_BIG_ENDIAN
                                                        0xff000000, 0x00ff0000, 0x0000ff00, 0x000000ff
        #else
                                                        0x000000ff, 0x0000ff00, 0x00ff0000, 0xff000000
        #endif
                                                        );

        // Map RGBA color to pixel format value
        Uint32 colorKeyPixelFormat = SDL_MapRGBA(paddedImage->format,
                                                 static_cast<Uint8>(colorKey.R * 255),
                                                 static_cast<Uint8>(colorKey.G * 255),
                                                 static_cast<Uint8>(colorKey.B * 255),
                                                 static_cast<Uint8>(colorKey.A * 255));
        SDL_FillRect(paddedImage, NULL, colorKeyPixelFormat);

        // Blit the image onto the padded image
        SDL_BlitSurface(image, NULL, paddedImage, NULL);
        SDL_SetColorKey(paddedImage, SDL_SRCCOLORKEY, colorKeyPixelFormat);

    Afterwards, I generate an OpenGL texture from paddedImage using code similar to the SDL+OpenGL texture loading code found online (I'll post it if necessary). That code works if I just want the texture with or without padding, so it is likely not the problem. I realize that I set all pixels in paddedImage to have alpha zero, which causes the first problem I mentioned, but I can't seem to figure out how to avoid this. Should I just loop over the pixels and set the appropriate colors to have alpha zero?
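
    As to the closing question, a hedged sketch of that per-pixel pass (it assumes the 32-bit RGBA surface created above; keyR, keyG and keyB are hypothetical Uint8 components of the color key, not names from the question):

        // Zero the alpha of every pixel whose RGB matches the key color.
        SDL_LockSurface(paddedImage);
        Uint32* px = static_cast<Uint32*>(paddedImage->pixels);
        const int count = (paddedImage->pitch / 4) * paddedImage->h;
        for (int i = 0; i < count; ++i) {
            Uint8 r, g, b, a;
            SDL_GetRGBA(px[i], paddedImage->format, &r, &g, &b, &a);
            if (r == keyR && g == keyG && b == keyB)
                px[i] = SDL_MapRGBA(paddedImage->format, r, g, b, 0);
        }
        SDL_UnlockSurface(paddedImage);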

    Read the article

  • Projecting an object into a scene based on world coordinates only

    - by user354862
    I want to place a 3D object into a scene based on world/global coordinates. I have an image of a scene. The image was captured at some global coordinate (x1, y1, z1). I am given an object that needs to be placed into this scene based on its global coordinate (x2, y2, z2). This object needs to be projected into the scene accurately, similar to a perspective projection. An example may make this clear. Imagine there is a parking lot with some set of global coordinates. A picture is taken of a portion of the parking lot, and the coordinates of the spot where the image was taken are recorded. The goal is to place a virtual vehicle into this image using the global coordinates for that vehicle. Because the global coordinates for the vehicle may not be in the field of view implied by the image's global coordinates, I am assuming that I will need the image coordinates, the angle, and possibly the field of view. 3D graphics is not my area, so I have been looking at http://en.wikipedia.org/wiki/Perspective_projection#Perspective_projection. I have also been looking at Matrix3DProjection, which seems to be what I am looking for, but it only works in Silverlight and I am trying to do this in WPF. In my mind it appears I need to determine the (X,Y,Z) coordinates that are in the field of view of the image, determine the world-coordinate-to-pixel conversion, and then accurately project the vehicle into the image, giving it the correct perspective so that it looks 3D, i.e. smaller when further away, bigger when closer. Is there a function within WPF that can help with this, or will I need to re-learn matrices and do this by hand?
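
    As a reference for the math, a minimal pinhole-projection sketch in C# (Project, cameraPos, view and fovY are hypothetical names; this is not a built-in WPF function):

        using System;
        using System.Numerics;

        static class Pinhole
        {
            // World point -> pixel position. 'view' rotates world axes into
            // camera axes (camera looks down -Z); fovY is the vertical field
            // of view in radians.
            public static Vector2 Project(Vector3 worldPoint, Vector3 cameraPos,
                                          Matrix4x4 view, float fovY,
                                          float width, float height)
            {
                Vector3 p = Vector3.Transform(worldPoint - cameraPos, view);
                float f = (height / 2f) / (float)Math.Tan(fovY / 2f); // focal length in pixels
                // p.Z is negative for points in front of the camera, which is
                // what gives the "smaller when further away" scaling.
                return new Vector2(width / 2f + f * p.X / -p.Z,
                                   height / 2f - f * p.Y / -p.Z);
            }
        }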

    Read the article

  • Load image blurred Android

    - by Mira
    I'm trying to create a map for a game from an image, where each black pixel is equivalent to a wall, yellow to flowers (1) and green to grass (0). So far I have this image (50x50): http://i.imgur.com/Ydj9Cp2.png. The problem seems to be that when I read the image in my code, it gets scaled up to 100x100, even though I have it in the raw folder. I can't let it scale up or down, because that will add noise and blur to the image, and then the map won't be readable. Here is my code:

        (...)
        Bitmap tab = BitmapFactory.decodeResource(resources, com.example.lolitos2.R.raw.mappixel);
        //tab = Bitmap.createScaledBitmap(tab, 50, 50, false);
        Log.e("w", tab.getWidth() + "." + tab.getHeight());
        for (int i = 0; i < tab.getWidth(); i++) {
            for (int j = 0; j < tab.getHeight(); j++) {
                int x = j;
                int y = i;
                switch (tab.getPixel(x, y)) {
                    // if it is a wall
                    case Color.BLACK:
                        getParedes()[x][y] = new Parede(x, y);
                        break;
                    case Color.GREEN:
                        fundo.add(new Passivo(x, y, 0));
                        break;
                    default:
                        fundo.add(new Passivo(x, y, 1));
                }
            }
        }

    How can I read my image map without rescaling it?
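
    A common way to suppress that density-based scaling is BitmapFactory.Options.inScaled; a sketch against the asker's resource (untested here):

        // Decode the resource at its raw pixel size by disabling
        // density-based resampling.
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inScaled = false;   // keep the bitmap at 50x50
        Bitmap tab = BitmapFactory.decodeResource(resources,
                com.example.lolitos2.R.raw.mappixel, opts);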

    Read the article

  • glTexImage2D + byte[]

    - by miniMe
    How can I upload pixels from a simple byte array to an OpenGL texture? I'm using glTexImage2D, and all I get is a white rectangle instead of a pixelated texture. The 9th parameter (the 32-bit pointer to the pixel data) is, in my opinion, the problem. I have tried lots of parameter types there (byte, ref byte, byte[], ref byte[], int and IntPtr + Marshal, out byte, out byte[], byte*). glGetError() always returns GL_NO_ERROR. There must be something I'm doing wrong, because it's never gibberish pixels; it's always white. glGenTextures works correctly: the first id has the value 1, as always in OpenGL, and I draw colored lines without any problem. So something is wrong with my texturing. I'm in control of the DllImport, so I can change the parameter types if necessary.

        GL.glBindTexture(GL.GL_TEXTURE_2D, id);
        int w = 4;
        int h = 4;
        byte[] bytes = new byte[w * h * 4];
        for (int i = 0; i < bytes.Length; i++)
            bytes[i] = (byte)Utils.random(256);
        GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, w, h, 0,
                        GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, bytes);

        [DllImport(GL_LIBRARY)]
        public static extern void glTexImage2D(uint what, int level, int internalFormat,
            int width, int height, int border, int format, int type, byte[] bytes);
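
    For what it's worth, a white texture alongside GL_NO_ERROR is the classic symptom of the default mipmapped minification filter; a sketch of the usual fix, assuming matching constants and a glTexParameteri import exist in the GL wrapper:

        // The default GL_TEXTURE_MIN_FILTER expects a complete mipmap chain;
        // with only level 0 uploaded, the texture is incomplete and samples
        // as white. Switch to non-mipmap filtering for the bound texture.
        GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
        GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);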

    Read the article

  • How to click multiple instances of the same icon on a webpage? (Javascript)

    - by akd
    I want to write a script that will click every instance of a certain icon. The following code, from the page source, shows what I want to click on. It has an onclick event defined; I just don't know how to search the page for these icons and then click on them. Can anyone help? Thanks.

        <dd><a href="javascript:void(0);" onclick="RecommendItem(1,'article','6106925','1','recommendstatus_article6106925' ); return false;" onmouseover="return overlib('Give thumbs up', WRAP);" onmouseout="return nd();">&nbsp;<img class='icon' title='' alt='Thumb up' style='background-position: -304px -48px;' src='http://geekdo-images.com/images/pixel.gif' /></a></dd>
        <dt class='tippers'><a href="javascript://" style='color: #969600;' onclick="RecSpy( 'article', '6106925', 'tippers' ); return false;"></a></dt>
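
    A plain-DOM sketch of one approach, assuming the markup above (only the icon class name comes from it):

        // Find every thumbs-up image and fire its parent link's onclick handler.
        var icons = document.querySelectorAll("img.icon");
        for (var i = 0; i < icons.length; i++) {
            var link = icons[i].parentNode;              // the <a> carrying onclick
            if (link && typeof link.onclick === "function") {
                link.onclick();                          // or link.click()
            }
        }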

    Read the article

  • CSS 100% height with padding/margin

    - by Toji
    This has been driving me crazy for a couple of days now, but in reality it's a problem that I've hit off and on for the last few years: with HTML/CSS, how can I make an element whose width and/or height is 100% of its parent element and still has proper padding or margins? By "proper" I mean that if my parent element is 200px tall and I specify 100% height with 5px padding, I would expect to get a 190px-high element with a 5px "border" on all sides, nicely centered in the parent element. Now, I know that that's not how the standard box model specifies it should work (although I'd like to know why, exactly...), so the obvious answer doesn't work:

        #myDiv {
            width: 100%;
            height: 100%;
            padding: 5px;
        }

    But it would seem to me that there must be SOME way of reliably producing this effect for a parent of arbitrary size. Does anyone know of a way of accomplishing this (seemingly simple) task? Oh, and for the record, I'm not terribly interested in IE compatibility, so that should (hopefully) make things a bit easier. EDIT: Since an example was asked for, here's the simplest one I can think of:

        <html style="height: 100%">
          <body style="height: 100%">
            <div style="background-color: black; height: 100%; padding: 25px"></div>
          </body>
        </html>

    The challenge is then to get the black box to show up with a 25-pixel padding on all edges without the page growing big enough to require scrollbars.
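
    A sketch of the box-sizing approach, which makes the browser count padding inside the declared size (well supported outside old IE, which the asker is happy to ignore):

        #myDiv {
            box-sizing: border-box;    /* padding now fits inside the 100% */
            width: 100%;
            height: 100%;
            padding: 5px;
        }

    With border-box, the 200px parent yields a 190px content area surrounded by the 5px padding, which is exactly the behavior the question asks for.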

    Read the article

  • How to change radius of rounded rectangle in Photoshop

    - by MattDiPasquale
    At this point, I'm thinking of just using CSS 3, especially since I'm a programmer, but I'd like to do this with Photoshop because I think it's nicer since I'm working with images anyway, among other reasons... Before I move on, my first question is: is there a place like Super User for designers (or for Photoshop-like questions)? What I Want: I want the icons on http://www.mattdipasquale.com/ to look like those on http://about.me/mattdipasquale. About.me has an outdated Twitter icon and does not have icons for GitHub, Stack Overflow, etc. So, although I like the look of their icons, I want to be able to create these icons myself instead of using their versions. What I Have: I have different iPhone icons, like the Facebook iPhone icon, Twitter iPhone icon, etc., that I got from iTunes, using Firebug to find the URL of the background image. I opened them up in Photoshop and pressed option + command + i to reduce the image size to 32px x 32px with Bicubic Sharper (best for reduction). I now have a square icon layer. Closing the Gap: In addition to the icon layer, I want to have a clipping-mask layer that will apply the 5px rounded corners, 1px stroke, and 1px bevel. (Note: I just want to apply effects to the edges of the icon, because the gloss and other effects are already encoded in the iTunes image. Also, I'm just guessing about the pixel values, but I want it to look good, like the icons on about.me.) What settings should I use for the blend options to make the icons look good, like iPhone icons or those used by about.me? Why a Clipping Mask? The reason I want to use a clipping mask is ease of reproducibility. I want to be able to apply the same styling to other square icon layers by simply replacing the square icon layer and then saving for web. If there is a better way to achieve such ease of reproducibility, please suggest it. I've seen Photoshop iPhone icon templates, but I couldn't figure out how to use them with my own images. Thanks! Matt
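
    On the CSS 3 route the asker mentions as a fallback, the edge treatment sketches down to a couple of declarations (the pixel values are the asker's own guesses, and the .icon selector is hypothetical):

        .icon {
            border-radius: 5px;                     /* the 5px rounded corners */
            border: 1px solid rgba(0, 0, 0, 0.3);   /* stand-in for the 1px stroke */
        }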

    Read the article

  • How to set up dual quadro cards on RHEL 5.5?

    - by Alex J. Roberts
    I have a RHEL 5 workstation with 2 nvidia Quadro FX4500 cards, with one display attached to each card. After doing a clean install of RHEL 5.5, the second display doesnt work (it worked ok in RHEL 5.2). Neither separate X screens nor Xinerama are working. The kernel version is 2.6.18-194.el5 I've tried nvidia drivers 185.18.36 (the ones that i was using on 5.2) and the latest 260.19.36 and neither works. My xorg.conf is as follows: # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 1.0 (buildmeister@builder58) Fri Aug 14 18:34:43 PDT 2009 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 Screen 1 "Screen1" RightOf "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSection Section "Files" FontPath "unix/:7100" EndSection Section "ServerFlags" Option "Xinerama" "1" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/input/mice" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from data in "/etc/sysconfig/keyboard" Identifier "Keyboard0" Driver "kbd" Option "XkbLayout" "us" Option "XkbModel" "pc105" EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor0" VendorName "Unknown" ModelName "DELL 3007WFP" HorizSync 49.3 - 98.5 VertRefresh 60.0 Option "DPMS" EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor1" VendorName "Unknown" ModelName "DELL 3007WFP" HorizSync 49.3 - 98.5 VertRefresh 60.0 Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "Quadro FX 4500" BusID "PCI:10:0:0" EndSection Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "Quadro FX 4500" BusID "PCI:129:0:0" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection And the Xorg Log: X Window System Version 7.1.1 Release Date: 12 May 2006 X Protocol Version 11, Revision 0, Release 7.1.1 Build Operating System: Linux 2.6.18-164.11.1.el5 x86_64 Red Hat, Inc. Current Operating System: Linux blur.svsdsde 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 Build Date: 06 March 2010 Build ID: xorg-x11-server 1.1.1-48.76.el5 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Module Loader present Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. 
(==) Log file: "/var/log/Xorg.0.log", Time: Fri Feb 18 09:52:08 2011 (==) Using config file: "/etc/X11/xorg.conf" (==) ServerLayout "Layout0" (**) |-->Screen "Screen0" (0) (**) | |-->Monitor "Monitor0" (**) | |-->Device "Device0" (**) |-->Screen "Screen1" (1) (**) | |-->Monitor "Monitor1" (**) | |-->Device "Device1" (**) |-->Input Device "Keyboard0" (**) |-->Input Device "Mouse0" (**) FontPath set to: unix/:7100 (==) RgbPath set to "/usr/share/X11/rgb" (==) ModulePath set to "/usr/lib64/xorg/modules" (**) Option "Xinerama" "1" (**) Xinerama: enabled (==) Max clients allowed: 512, resource mask: 0xfffff (II) Open ACPI successful (/var/run/acpid.socket) (II) Module ABI versions: X.Org ANSI C Emulation: 0.3 X.Org Video Driver: 1.0 X.Org XInput driver : 0.6 X.Org Server Extension : 0.3 X.Org Font Renderer : 0.5 (II) Loader running on linux (II) LoadModule: "bitmap" (II) Loading /usr/lib64/xorg/modules/fonts/libbitmap.so (II) Module bitmap: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 Module class: X.Org Font Renderer ABI class: X.Org Font Renderer, version 0.5 (II) Loading font Bitmap (II) LoadModule: "pcidata" (II) Loading /usr/lib64/xorg/modules/libpcidata.so (II) Module pcidata: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org Video Driver, version 1.0 (++) using VT number 7 (II) PCI: PCI scan (all values are in hex) (II) PCI: 00:00:0: chip 10de,005e card 103c,1500 rev a3 class 05,80,00 hdr 00 (II) PCI: 00:01:0: chip 10de,0051 card 103c,1500 rev a3 class 06,01,00 hdr 80 (II) PCI: 00:01:1: chip 10de,0052 card 103c,1500 rev a2 class 0c,05,00 hdr 80 (II) PCI: 00:02:0: chip 10de,005a card 103c,1500 rev a2 class 0c,03,10 hdr 80 (II) PCI: 00:02:1: chip 10de,005b card 103c,1500 rev a3 class 0c,03,20 hdr 80 (II) PCI: 00:04:0: chip 10de,0059 card 103c,1500 rev a2 class 04,01,00 hdr 00 (II) PCI: 00:06:0: chip 10de,0053 card 103c,1500 rev f2 class 01,01,8a hdr 00 (II) PCI: 00:07:0: chip 10de,0054 card 103c,1500 rev f3 class 01,01,85 hdr 00 (II) PCI: 00:08:0: chip 10de,0055 card 103c,1500 rev f3 class 01,01,85 hdr 00 (II) PCI: 00:09:0: chip 10de,005c card 0000,0000 rev a2 class 06,04,01 hdr 01 (II) PCI: 00:0a:0: chip 10de,0057 card 103c,1500 rev a3 class 06,80,00 hdr 00 (II) PCI: 00:0e:0: chip 10de,005d card 0000,0000 rev a3 class 06,04,00 hdr 01 (II) PCI: 00:18:0: chip 1022,1100 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:18:1: chip 1022,1101 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:18:2: chip 1022,1102 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:18:3: chip 1022,1103 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:0: chip 1022,1100 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:1: chip 1022,1101 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:2: chip 1022,1102 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 00:19:3: chip 1022,1103 card 0000,0000 rev 00 class 06,00,00 hdr 80 (II) PCI: 05:05:0: chip 104c,8023 card 103c,1500 rev 00 class 0c,00,10 hdr 00 (II) PCI: 0a:00:0: chip 10de,009d card 10de,02af rev a1 class 03,00,00 hdr 00 (II) PCI: End of PCI scan (II) PCI-to-ISA bridge: (II) Bus -1: bridge is at (0:1:0), (0,-1,-1), BCTRL: 0x0008 (VGA_EN is set) (II) Subtractive PCI-to-PCI bridge: (II) Bus 5: bridge is at (0:9:0), (0,5,5), BCTRL: 0x0206 (VGA_EN is cleared) (II) Bus 5 non-prefetchable memory range: [0] -1 0 0xf5000000 - 0xf50fffff (0x100000) MX[B] (II) PCI-to-PCI bridge: (II) Bus 10: bridge is at (0:14:0), (0,10,10), BCTRL: 0x000a (VGA_EN is set) (II) 
Bus 10 I/O range: [0] -1 0 0x00003000 - 0x00003fff (0x1000) IX[B] (II) Bus 10 non-prefetchable memory range: [0] -1 0 0xf3000000 - 0xf4ffffff (0x2000000) MX[B] (II) Bus 10 prefetchable memory range: [0] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B] (II) Host-to-PCI bridge: (II) Bus 0: bridge is at (0:24:0), (0,0,10), BCTRL: 0x0008 (VGA_EN is set) (II) Bus 0 I/O range: [0] -1 0 0x00000000 - 0x0000ffff (0x10000) IX[B] (II) Bus 0 non-prefetchable memory range: [0] -1 0 0x00000000 - 0xffffffff (0x100000000) MX[B] (II) Bus 0 prefetchable memory range: [0] -1 0 0x00000000 - 0xffffffff (0x100000000) MX[B] (--) PCI:*(10:0:0) nVidia Corporation Quadro FX 4500 rev 161, Mem @ 0xf3000000/24, 0xc0000000/28, 0xf4000000/24, I/O @ 0x3000/7 (II) Addressable bus resource ranges are [0] -1 0 0x00000000 - 0xffffffff (0x100000000) MX[B] [1] -1 0 0x00000000 - 0x0000ffff (0x10000) IX[B] (II) OS-reported resource ranges: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [5] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] (II) Active PCI resource ranges: [0] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [1] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [...snipped... post too long] [28] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [29] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) Active PCI resource ranges after removing overlaps: [0] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [1] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [...snipped... post too long] [28] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [29] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) OS-reported resource ranges after removing overlaps with PCI: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [5] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] (II) All system resource ranges: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [16] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [17] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [18] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [19] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [20] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [21] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [22] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [23] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [24] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [25] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [26] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [27] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [28] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [29] -1 0 
0x00004400 - 0x000044ff (0x100) IX[B] [30] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [31] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [32] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [33] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [34] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [35] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) LoadModule: "extmod" (II) Loading /usr/lib64/xorg/modules/extensions/libextmod.so (II) Module extmod: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 0.3 (II) Loading extension SHAPE (II) Loading extension MIT-SUNDRY-NONSTANDARD (II) Loading extension BIG-REQUESTS (II) Loading extension SYNC (II) Loading extension MIT-SCREEN-SAVER (II) Loading extension XC-MISC (II) Loading extension XFree86-VidModeExtension (II) Loading extension XFree86-Misc (II) Loading extension XFree86-DGA (II) Loading extension DPMS (II) Loading extension TOG-CUP (II) Loading extension Extended-Visual-Information (II) Loading extension XVideo (II) Loading extension XVideo-MotionCompensation (II) Loading extension X-Resource (II) LoadModule: "dbe" (II) Loading /usr/lib64/xorg/modules/extensions/libdbe.so (II) Module dbe: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 0.3 (II) Loading extension DOUBLE-BUFFER (II) LoadModule: "glx" (II) Loading /usr/lib64/xorg/modules/extensions/libglx.so (II) Module glx: vendor="NVIDIA Corporation" compiled for 4.0.2, module version = 1.0.0 Module class: X.Org Server Extension (II) NVIDIA GLX Module 185.18.36 Fri Aug 14 18:27:24 PDT 2009 (II) Loading extension GLX (II) LoadModule: "freetype" (II) Loading /usr/lib64/xorg/modules/fonts/libfreetype.so (II) Module freetype: vendor="X.Org Foundation & the After X-TT Project" compiled for 7.1.1, module version = 2.1.0 Module class: X.Org Font Renderer ABI class: X.Org Font Renderer, version 0.5 (II) Loading font FreeType (II) LoadModule: "type1" (II) Loading /usr/lib64/xorg/modules/fonts/libtype1.so (II) Module type1: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.2 Module class: X.Org Font Renderer ABI class: X.Org Font Renderer, version 0.5 (II) Loading font Type1 (II) LoadModule: "record" (II) Loading /usr/lib64/xorg/modules/extensions/librecord.so (II) Module record: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.13.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 0.3 (II) Loading extension RECORD (II) LoadModule: "dri" (II) Loading /usr/lib64/xorg/modules/extensions/libdri.so (II) Module dri: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org Server Extension, version 0.3 (II) Loading sub module "drm" (II) LoadModule: "drm" (II) Loading /usr/lib64/xorg/modules/linux/libdrm.so (II) Module drm: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org Server Extension, version 0.3 (II) Loading extension XFree86-DRI (II) LoadModule: "nvidia" (II) Loading /usr/lib64/xorg/modules/drivers/nvidia_drv.so (II) Module nvidia: vendor="NVIDIA Corporation" compiled for 4.0.2, module version = 1.0.0 Module class: X.Org Video Driver (II) LoadModule: "kbd" (II) Loading /usr/lib64/xorg/modules/input/kbd_drv.so (II) Module kbd: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.1.0 Module class: X.Org XInput Driver ABI class: X.Org XInput driver, version 0.6 (II) LoadModule: "mouse" 
(II) Loading /usr/lib64/xorg/modules/input/mouse_drv.so (II) Module mouse: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.1.1 Module class: X.Org XInput Driver ABI class: X.Org XInput driver, version 0.6 (II) NVIDIA dlloader X Driver 185.18.36 Fri Aug 14 17:51:02 PDT 2009 (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs (II) Primary Device is: PCI 0a:00:0 (--) Chipset NVIDIA GPU found (II) Loading sub module "fb" (II) LoadModule: "fb" (II) Loading /usr/lib64/xorg/modules/libfb.so (II) Module fb: vendor="X.Org Foundation" compiled for 7.1.1, module version = 1.0.0 ABI class: X.Org ANSI C Emulation, version 0.3 (II) Loading sub module "wfb" (II) LoadModule: "wfb" (II) Loading /usr/lib64/xorg/modules/libwfb.so (II) Module wfb: vendor="NVIDIA Corporation" compiled for 7.1.99.2, module version = 1.0.0 (II) Loading sub module "ramdac" (II) LoadModule: "ramdac" (II) Loading /usr/lib64/xorg/modules/libramdac.so (II) Module ramdac: vendor="X.Org Foundation" compiled for 7.1.1, module version = 0.1.0 ABI class: X.Org Video Driver, version 1.0 (II) resource ranges after xf86ClaimFixedResources() call: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [16] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [17] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [18] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [19] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [20] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [21] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [22] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [23] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [24] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [25] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [26] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [27] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [28] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [29] -1 0 0x00004400 - 0x000044ff (0x100) IX[B] [30] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [31] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [32] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [33] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [34] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [35] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) (II) resource ranges after probing: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] 
-1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] 0 0 0x000a0000 - 0x000affff (0x10000) MS[B] [16] 0 0 0x000b0000 - 0x000b7fff (0x8000) MS[B] [17] 0 0 0x000b8000 - 0x000bffff (0x8000) MS[B] [18] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [19] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [20] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [21] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [22] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [23] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [24] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [25] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [26] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [27] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [28] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [29] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [30] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [31] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [32] -1 0 0x00004400 - 0x000044ff (0x100) IX[B] [33] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [34] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [35] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [36] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [37] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [38] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) [39] 0 0 0x000003b0 - 0x000003bb (0xc) IS[B] [40] 0 0 0x000003c0 - 0x000003df (0x20) IS[B] (II) Setting vga for screen 0. (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32 (==) NVIDIA(0): RGB weight 888 (==) NVIDIA(0): Default visual is TrueColor (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0) (**) NVIDIA(0): Option "TwinView" "0" (**) NVIDIA(0): Option "MetaModes" "nvidia-auto-select +0+0" (**) NVIDIA(0): Enabling RENDER acceleration (II) NVIDIA(0): Support for GLX with the Damage and Composite X extensions is (II) NVIDIA(0): enabled. (II) NVIDIA(0): NVIDIA GPU Quadro FX 4500 (G70GL) at PCI:10:0:0 (GPU-0) (--) NVIDIA(0): Memory: 524288 kBytes (--) NVIDIA(0): VideoBIOS: 05.70.02.41.01 (II) NVIDIA(0): Detected PCI Express Link width: 16X (--) NVIDIA(0): Interlaced video modes are supported on this GPU (--) NVIDIA(0): Connected display device(s) on Quadro FX 4500 at PCI:10:0:0: (--) NVIDIA(0): DELL 3007WFP (DFP-0) (--) NVIDIA(0): DELL 3007WFP (DFP-0): 310.0 MHz maximum pixel clock (--) NVIDIA(0): DELL 3007WFP (DFP-0): Internal Dual Link TMDS (II) NVIDIA(0): Assigned Display Device: DFP-0 (II) NVIDIA(0): Validated modes: (II) NVIDIA(0): "nvidia-auto-select+0+0" (II) NVIDIA(0): Virtual screen size determined to be 2560 x 1600 (--) NVIDIA(0): DPI set to (101, 101); computed from "UseEdidDpi" X config (--) NVIDIA(0): option (WW) NVIDIA(0): UBB is incompatible with the Composite extension. Disabling (WW) NVIDIA(0): UBB. (==) NVIDIA(0): Disabling 32-bit ARGB GLX visuals. (--) Depth 24 pixmap format is 32 bpp (II) do I need RAC? No, I don't. 
(II) resource ranges after preInit: [0] -1 0 0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B) [1] -1 0 0x000f0000 - 0x000fffff (0x10000) MX[B] [2] -1 0 0x000c0000 - 0x000effff (0x30000) MX[B] [3] -1 0 0x00000000 - 0x0009ffff (0xa0000) MX[B] [4] -1 0 0xf5000000 - 0xf5003fff (0x4000) MX[B] [5] -1 0 0xf5004000 - 0xf50047ff (0x800) MX[B] [6] -1 0 0xf5104000 - 0xf5104fff (0x1000) MX[B] [7] -1 0 0xf5103000 - 0xf5103fff (0x1000) MX[B] [8] -1 0 0xf5102000 - 0xf5102fff (0x1000) MX[B] [9] -1 0 0xf5101000 - 0xf5101fff (0x1000) MX[B] [10] -1 0 0xfebf0000 - 0xfebf00ff (0x100) MX[B] [11] -1 0 0xf5100000 - 0xf5100fff (0x1000) MX[B] [12] -1 0 0xf4000000 - 0xf4ffffff (0x1000000) MX[B](B) [13] -1 0 0xc0000000 - 0xcfffffff (0x10000000) MX[B](B) [14] -1 0 0xf3000000 - 0xf3ffffff (0x1000000) MX[B](B) [15] 0 0 0x000a0000 - 0x000affff (0x10000) MS[B] [16] 0 0 0x000b0000 - 0x000b7fff (0x8000) MS[B] [17] 0 0 0x000b8000 - 0x000bffff (0x8000) MS[B] [18] -1 0 0x0000ffff - 0x0000ffff (0x1) IX[B] [19] -1 0 0x00000000 - 0x000000ff (0x100) IX[B] [20] -1 0 0x000048f0 - 0x000048f7 (0x8) IX[B] [21] -1 0 0x000048c0 - 0x000048cf (0x10) IX[B] [22] -1 0 0x00004c04 - 0x00004c07 (0x4) IX[B] [23] -1 0 0x000048e8 - 0x000048ef (0x8) IX[B] [24] -1 0 0x00004c00 - 0x00004c03 (0x4) IX[B] [25] -1 0 0x000048e0 - 0x000048e7 (0x8) IX[B] [26] -1 0 0x000048b0 - 0x000048bf (0x10) IX[B] [27] -1 0 0x000048fc - 0x000048ff (0x4) IX[B] [28] -1 0 0x000048d8 - 0x000048df (0x8) IX[B] [29] -1 0 0x000048f8 - 0x000048fb (0x4) IX[B] [30] -1 0 0x000048d0 - 0x000048d7 (0x8) IX[B] [31] -1 0 0x000048a0 - 0x000048af (0x10) IX[B] [32] -1 0 0x00004400 - 0x000044ff (0x100) IX[B] [33] -1 0 0x00004000 - 0x000040ff (0x100) IX[B] [34] -1 0 0x00004840 - 0x0000487f (0x40) IX[B] [35] -1 0 0x00004800 - 0x0000483f (0x40) IX[B] [36] -1 0 0x00004880 - 0x0000489f (0x20) IX[B] [37] -1 0 0x0000fb00 - 0x0000fbff (0x100) IX[B] [38] -1 0 0x00003000 - 0x0000307f (0x80) IX[B](B) [39] 0 0 0x000003b0 - 0x000003bb (0xc) IS[B] [40] 0 0 0x000003c0 - 0x000003df (0x20) IS[B] (II) NVIDIA(GPU-1): NVIDIA GPU Quadro FX 4500 (G70GL) at PCI:129:0:0 (GPU-1) (--) NVIDIA(GPU-1): Memory: 524288 kBytes (--) NVIDIA(GPU-1): VideoBIOS: 05.70.02.41.01 (II) NVIDIA(GPU-1): Detected PCI Express Link width: 16X (--) NVIDIA(GPU-1): Interlaced video modes are supported on this GPU (--) NVIDIA(GPU-1): Connected display device(s) on Quadro FX 4500 at PCI:129:0:0: (--) NVIDIA(GPU-1): DELL 3007WFP (DFP-0) (--) NVIDIA(GPU-1): DELL 3007WFP (DFP-0): 310.0 MHz maximum pixel clock (--) NVIDIA(GPU-1): DELL 3007WFP (DFP-0): Internal Dual Link TMDS (II) NVIDIA(0): Initialized GPU GART. (II) NVIDIA(0): Setting mode "nvidia-auto-select+0+0" (II) Loading extension NV-GLX (II) NVIDIA(0): NVIDIA 3D Acceleration Architecture Initialized (==) NVIDIA(0): Disabling shared memory pixmaps (II) NVIDIA(0): Using the NVIDIA 2D acceleration architecture (==) NVIDIA(0): Backing store disabled (==) NVIDIA(0): Silken mouse enabled (**) Option "dpms" (**) NVIDIA(0): DPMS enabled (II) Loading extension NV-CONTROL (==) RandR enabled (II) Setting vga for screen 0. 
(II) Initializing built-in extension MIT-SHM (II) Initializing built-in extension XInputExtension (II) Initializing built-in extension XTEST (II) Initializing built-in extension XKEYBOARD (II) Initializing built-in extension XC-APPGROUP (II) Initializing built-in extension SECURITY (II) Initializing built-in extension XINERAMA (II) Initializing built-in extension XFIXES (II) Initializing built-in extension XFree86-Bigfont (II) Initializing built-in extension RENDER (II) Initializing built-in extension RANDR (II) Initializing built-in extension COMPOSITE (II) Initializing built-in extension DAMAGE (II) Initializing built-in extension XEVIE (II) Initializing extension GLX (WW) Disabling Composite since Xinerama is enabled (**) Option "CoreKeyboard" (**) Keyboard0: Core Keyboard (**) Option "Protocol" "standard" (**) Keyboard0: Protocol: standard (**) Option "AutoRepeat" "500 30" (**) Option "XkbRules" "xorg" (**) Keyboard0: XkbRules: "xorg" (**) Option "XkbModel" "pc105" (**) Keyboard0: XkbModel: "pc105" (**) Option "XkbLayout" "us" (**) Keyboard0: XkbLayout: "us" (**) Option "CustomKeycodes" "off" (**) Keyboard0: CustomKeycodes disabled (**) Option "Protocol" "auto" (**) Mouse0: Device: "/dev/input/mice" (**) Mouse0: Protocol: "auto" (**) Option "CorePointer" (**) Mouse0: Core Pointer (**) Option "Device" "/dev/input/mice" (**) Option "Emulate3Buttons" "no" (**) Option "ZAxisMapping" "4 5" (**) Mouse0: ZAxisMapping: buttons 4 and 5 (**) Mouse0: Buttons: 9 (II) XINPUT: Adding extended input device "Mouse0" (type: MOUSE) (II) XINPUT: Adding extended input device "Keyboard0" (type: KEYBOARD) (--) Mouse0: PnP-detected protocol: "ExplorerPS/2" (II) Mouse0: ps2EnableDataReporting: succeeded (II) Open ACPI successful (/var/run/acpid.socket) (II) NVIDIA(0): Setting mode "nvidia-auto-select+0+0" (II) Mouse0: ps2EnableDataReporting: succeeded (the snipped part can be changed if necessary) Any help at all would be appreciated. Cheers, Alex

    Read the article

  • Encoding multiple video streams with a single avconv invocation

    - by automatthias
    I played with avconv on Ubuntu and I'm now able to e.g. record the desktop with sound from a soundcard. One thing I wanted to do was recording two video inputs at the same time, for instance the desktop and from the webcam. I thought about doing something like this: avconv \ -f alsa \ -i default \ -acodec flac \ -f video4linux2 \ -r 6 \ -i /dev/video0 \ -f x11grab \ -i :0.0 \ out.mkv My thinking was that if you define multiple video inputs, and the .mkv format can handle multiple video streams, avconv will encode 2 video streams and 1 audio stream into one file. But this isn't what happens: avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers built on Nov 6 2012 16:51:11 with gcc 4.7.2 [alsa @ 0x1091bc0] capture with some ALSA plugins, especially dsnoop, may hang. [alsa @ 0x1091bc0] Estimating duration from bitrate, this may be inaccurate Input #0, alsa, from 'default': Duration: N/A, start: 1354364317.020350, bitrate: N/A Stream #0.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s [video4linux2 @ 0x10923e0] Estimating duration from bitrate, this may be inaccurate Input #1, video4linux2, from '/dev/video0': Duration: N/A, start: 100607.724745, bitrate: 29491 kb/s Stream #1.0: Video: rawvideo, yuyv422, 640x480, 29491 kb/s, 6 tbr, 1000k tbn, 6 tbc [x11grab @ 0x107b2a0] device: :0.0+83,87 -> display: :0.0 x: 83 y: 87 width: 854 height: 480 [x11grab @ 0x107b2a0] shared memory extension found [x11grab @ 0x107b2a0] Estimating duration from bitrate, this may be inaccurate Input #2, x11grab, from ':0.0+83,87': Duration: N/A, start: 1354364318.488382, bitrate: 196761 kb/s Stream #2.0: Video: rawvideo, bgra, 854x480, 196761 kb/s, 15 tbr, 1000k tbn, 15 tbc Incompatible pixel format 'bgra' for codec 'mpeg4', auto-selecting format 'yuv420p' [buffer @ 0x107fcc0] w:854 h:480 pixfmt:bgra [avsink @ 0x10bdf00] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out' [scale @ 0x10dc680] w:854 h:480 fmt:bgra -> w:854 h:480 fmt:yuv420p flags:0x4 Output #0, matroska, to '.../out.mkv': Metadata: encoder : Lavf53.21.0 Stream #0.0: Video: mpeg4, yuv420p, 854x480, q=2-31, 4000 kb/s, 1k tbn, 15 tbc Stream #0.1: Audio: libvorbis, 48000 Hz, 2 channels, s16 Stream mapping: Stream #2:0 -> #0:0 (rawvideo -> mpeg4) Stream #0:0 -> #0:1 (pcm_s16le -> libvorbis) Press ctrl-c to stop encoding [mpeg4 @ 0x10bd800] rc buffer underflow ^Cframe= 160 fps= 15 q=2.0 Lsize= 3414kB time=10.66 bitrate=2623.0kbits/s video:3273kB audio:131kB global headers:4kB muxing overhead 0.165600% Received signal 2: terminating. I'm not sure if it's the question of mapping (some -map options to add?) or that avconv just can't encode more than 1 video stream at one time. So is it an actual avconv limitation, or a limitation of the available containers, or me simply not finding the right combination of command line options?
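
    The stream-mapping hunch is the right direction: avconv auto-selects only one video stream per output, so each input stream has to be mapped explicitly. A sketch (the indices assume the input order shown above: 0 = ALSA audio, 1 = webcam, 2 = desktop):

        avconv \
            -f alsa -i default \
            -f video4linux2 -r 6 -i /dev/video0 \
            -f x11grab -i :0.0 \
            -map 0:0 -map 1:0 -map 2:0 \
            -acodec flac \
            out.mkv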

    Read the article

  • Computer suddenly dies; screen displays weird flickering lines, then restarts

    - by Imray
    I've been having this terrible problem for a little while and just managed to get a picture of 'dead screen' for the first time and I am posting it to seek help. Randomly, at irregular intervals (typically once a week), while working on something (it's been different things every time) my computer will just suddenly go dead - the screen turns to exactly the picture below (the lines flicker a little bit), it hangs there for a few seconds and then restarts. Obviously this is extremely frustrating and I want to try to stop it. I've searched numerous postings with similar keywords but nothing exactly the same as mine. Does anyone have any idea what might be the cause of this? I would post all my system settings and installed programs but the list is long and I don't know how much relevance each item would be. If you'd like to know something specific, please comment and I'll let you know whatever you need. SPECS C:\Users\Imray>systeminfo Host Name: Imray OS Name: Microsoft Windows 7 Professional OS Version: 6.1.7600 N/A Build 7600 OS Manufacturer: Microsoft Corporation OS Configuration: Standalone Workstation OS Build Type: Multiprocessor Free Registered Owner: Imray - Owner Registered Organization: Product ID: 00371-152-9333854-85895 Original Install Date: 06/09/1999, 5:45:21 PM System Boot Time: 22/03/2013, 8:58:18 AM System Manufacturer: Gateway System Model: DX4840 System Type: x64-based PC Processor(s): 1 Processor(s) Installed. [01]: Intel64 Family 6 Model 37 Stepping 2 GenuineIntel ~3201 Mhz BIOS Version: American Megatrends Inc. P01-A3 , 17/05/2010 Windows Directory: C:\Windows System Directory: C:\Windows\system32 Boot Device: \Device\HarddiskVolume2 System Locale: en-us;English (United States) Input Locale: en-us;English (United States) Time Zone: (UTC-05:00) Eastern Time (US & Canada) Total Physical Memory: 6,135 MB Available Physical Memory: 3,632 MB Virtual Memory: Max Size: 12,268 MB Virtual Memory: Available: 8,114 MB Virtual Memory: In Use: 4,154 MB Page File Location(s): C:\pagefile.sys Domain: WORKGROUP Logon Server: \\Imray-OWNER Hotfix(s): 4 Hotfix(s) Installed. [01]: KB971033 [02]: KB958559 [03]: KB977206 [04]: KB981889 Network Card(s): 2 NIC(s) Installed. [01]: 802.11n Wireless PCI Express Card LAN Adapter Connection Name: Wireless Network Connection DHCP Enabled: Yes DHCP Server: 192.168.2.1 IP address(es) [01]: 192.168.2.13 [02]: fe80::1df1:5399:6890:91f6 [02]: Microsoft Virtual WiFi Miniport Adapter Connection Name: Wireless Network Connection 2 DHCP Enabled: Yes DHCP Server: N/A IP address(es) Graphics Card Specs Name ATI Radeon HD 5570 PNP Device ID PCI\VEN_1002&DEV_68D9&SUBSYS_E142174B&REV_00\4&18A4B35E&0&0008 Adapter Type ATI display adapter (0x68D9), ATI Technologies Inc. compatible Adapter Description ATI Radeon HD 5570 Adapter RAM 1.00 GB (1,073,741,824 bytes) Installed Drivers atiu9p64 aticfx64 aticfx64 atiu9pag aticfx32 aticfx32 atiumd64 atidxx64 atidxx64 atiumdag atidxx32 atidxx32 atiumdva atiumd6a atitmm64 Driver Version 8.700.0.0 INF File oem1.inf (ati2mtag_Evergreen section) Color Planes Not Available Color Table Entries 4294967296 Resolution 1920 x 1080 x 59 hertz Bits/Pixel 32 Memory Address 0xD0000000-0xDFFFFFFF Memory Address 0xFBDE0000-0xFBDFFFFF I/O Port 0x0000D000-0x0000DFFF IRQ Channel IRQ 4294967293 I/O Port 0x000003B0-0x000003BB I/O Port 0x000003C0-0x000003DF Memory Address 0xA0000-0xBFFFF Driver c:\windows\system32\drivers\atikmpag.sys (8.14.1.6095, 181.00 KB (185,344 bytes), 06/09/1999 5:59 PM)

    Read the article

  • Apache VirtualHost Blockhole (Eats All Requests on All Ports on an IP)

    - by Synetech inc.
    I’m exhausted. I just spent the last two hours chasing a goose that I have been after on-and-off for the past year. Here is the goal, put as succinctly as possible. Step 1: HOSTS file:

        127.0.0.5 NastyAdServer.com
        127.0.0.5 xssServer.com
        127.0.0.5 SQLInjector.com
        127.0.0.5 PornAds.com
        127.0.0.5 OtherBadSites.com
        …

    Step 2: Apache httpd.conf:

        <VirtualHost 127.0.0.5:80>
            ServerName adkiller
            DocumentRoot adkiller
            RewriteEngine On
            RewriteRule (\.(gif|jpg|png|jpeg)$) /p.png [L]
            RewriteRule (.*) /ad.htm [L]
        </VirtualHost>

    So basically what happens is that the HOSTS file redirects designated domains to the localhost, but to a specific loopback IP address. Apache listens for any requests on this address and serves either a transparent pixel graphic or an empty HTML file. Thus, any page or graphic on any of the bad sites is replaced with nothing (in other words, an ad/malware/porn/etc. blocker). This works great as is (and has for years now). The problem is that these bad things are no longer limited to plain HTTP traffic. For example: <script src="http://NastyAdServer.com:99"> or <iframe src="https://PornAds.com/ad.html">, or a Trojan using ftp://spammaster.com/[email protected];[email protected];[email protected], or an app “phoning home” with private info in a crafted ICMP packet by pinging CardStealer.ru:99. Handling HTTPS is a relatively minor bump: I can create a separate VirtualHost just like the one above, replacing port 80 with 443 and adding SSL directives. This leaves the other ports to be dealt with. I tried using * for the port, but then I get overlap errors. I tried redirecting all requests to the HTTPS server and vice versa, but neither worked: either the SSL requests wouldn’t redirect correctly, or the HTTP requests gave the “You’re speaking plain HTTP to an SSL-enabled server port…” error. Further, I cannot figure out a way to test whether other ports are being successfully redirected (I could try a browser, but what about FTP, ICMP, etc.?). I realize that I could just use a port-blocker (e.g. ProtoWall, PeerBlock, etc.), but there are two issues with that. First, I am blocking domains with this method, not IP addresses, so to use a port-blocker I would have to get each and every domain’s IP and update them frequently. Second, with the Apache method I can keep logs of all the ad/malware/spam/etc. requests for future analysis (my current AdKiller logs are already 466MB). I appreciate any help in successfully setting up an Apache VirtualHost blackhole. Thanks.
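
    A sketch of one way to widen the plain-HTTP net: a Listen directive per extra port, all named in a single VirtualHost (the ports are illustrative; HTTPS still needs its own SSL vhost, and non-HTTP protocols such as FTP or ICMP are beyond what Apache can answer at all):

        # Illustrative ports only -- list whichever ones should be swallowed.
        Listen 127.0.0.5:80
        Listen 127.0.0.5:99
        Listen 127.0.0.5:8080

        <VirtualHost 127.0.0.5:80 127.0.0.5:99 127.0.0.5:8080>
            ServerName adkiller
            DocumentRoot adkiller
            RewriteEngine On
            RewriteRule (\.(gif|jpg|png|jpeg)$) /p.png [L]
            RewriteRule (.*) /ad.htm [L]
        </VirtualHost>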

    Read the article

  • General guidelines / workflow to convert or transfer video "professionally"?

    - by cloneman
    I'm an IT "professional" who sometimes has to deal with small video conversion / video cutting projects, and I'd like to learn "the right way" to do this. Every time I search Google, there's always a disaster of weird, low-maturity trialware, or random forum threads from 3-4 years ago describing various antiquated methods. The big question is the following: what are the "general" guidelines and tools to transcode video into some efficient (lossless?) intermediary for editing purposes, with the aim of eventually re-encoding it afterwards? It seems to me that even the simplest of formats and tasks are a disaster of endless trial and error, or expertise known only to hardened experts who have a Swiss Army knife of weird conversion tools that they use, almost as if mounting an attack against the project. Here are a few cases in point: simple VOB files extracted from DVD footage can't be imported into Adobe Premiere directly; VirtualDub is an old program people keep recommending, but it doesn't seem to support newer formats; and I don't even know how to tell with certainty which codecs a video has, whether the image is interlaced or not, and what resolution I'm dealing with. Problems: choosing a wrong interlace option, which diminishes quality; choosing a wrong pixel aspect ratio (stretches the image); choosing a wrong "project type" in Premiere, causing footage to require scaling; being forced to use some weird program that will have any number of negative effects. What I'm looking for: books or "real knowledge" on format conversions, recognized tools, etc., that aren't some random forum guide on how to deal with video formats; workflow guidelines on identifying a format and going from one format to another without the problems mentioned above; documentation on what programs like Adobe Premiere can and can't do with regard to formats, so that I don't use a wrench as a hammer. TL;DR: How should you convert or "prepare" a video file to ensure it will be supported by Premiere for editing? Is Premiere a suitable program to handle cropping and encoding, or should other tools be used for this when making a video montage from a variety of source formats? What are some good books that specifically deal with converting videos that use any number of codecs?
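
    On the "which codecs, and is it interlaced" part, two widely used command-line probes exist; a sketch (it assumes ffprobe and MediaInfo are installed):

        # Inspect codecs, resolution, pixel format and field order.
        ffprobe -show_streams input.vob
        mediainfo input.vob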

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it". Right now, I have the hits-tracking portion completed, and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with a gunicorn for the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1200 static files per second (benchmarked using Apache AB against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets. However, when I benchmark with millions of hits, I notice a few things -- No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis. Non-constant memory usage -- Presumably due to Redis' memory managing, my memory usage will gradually climb up and then drop back down, but it's never once been my bottleneck. System load hovers around 2-4, the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second. If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250. My questions -- a. Does it look like I'm maxing out this server yet? Is 1,200/s static files nginx performance comparable to what others have experienced? b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much. c. Are there any linux-level settings that could be limiting my incoming connections? d. What could cause my performance to degrade to 250r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil. Thanks in advance, all :)
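
    On questions (b) and (c), a sketch of the settings people usually reach for (values are illustrative, not benchmarks from this workload):

        # nginx side: more connection slots per worker
        worker_processes     8;
        worker_rlimit_nofile 65535;
        events {
            worker_connections 10240;
            multi_accept       on;
        }

        # Linux side: raise the kernel's connection and port limits
        sysctl -w net.core.somaxconn=4096
        sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        ulimit -n 65535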

    Read the article

  • Create PDF document using iTextSharp in ASP.Net 4.0 and MemoryMappedFile

    - by sreejukg
    In this article I am going to demonstrate how ASP.Net developers can programmatically create PDF documents using iTextSharp. iTextSharp is a software component that allows developers to programmatically create or manipulate PDF documents. This article also discusses the process of creating an in-memory file and reading/writing data from/to it, utilizing the new MemoryMappedFile feature. I have a database of users, and I need to send a notice to all my users as a PDF document. The mail-sending part is not covered in this article. The PDF document will contain the company letterhead, to make it more official. I have a list of users stored in a database table named "tblusers". For each user I need to send a customized message addressed to them personally. The database structure for the users is given below.

        id  Title  Full Name
        1   Mr.    Sreeju Nair K. G.
        2   Dr.    Alberto Mathews
        3   Prof.  Venketachalam

    Now I am going to generate the pdf document that contains some message to the user, in the following format.

        Dear <Title> <FullName>,
        The message for the user.
        Regards,
        Administrator

    Also I have an image, bg.jpg, that contains the background for the generated document. I have created a .Net 4.0 empty web application project named "iTextSharpSample". The first thing I need to do is download the iTextSharp dll from SourceForge. You can find the url for the download here: http://sourceforge.net/projects/itextsharp/files/. I extracted the Zip file and added itextsharp.dll as a reference to my project. I also added a web form named default.aspx to my project. After doing all this, the solution explorer has the following view. In the default.aspx page, I inserted one grid view and associated it with a SQL data source control that binds data from tblusers. I added a button column to the grid view with the text "generate pdf". The output of the page in the browser is as follows. Now I am going to create a pdf document when the user clicks on the Generate PDF button. As I mentioned before, I am going to work with the file in memory; I am not going to create a file on disk. I added an event handler for the button by specifying the onrowcommand event handler. My gridview source looks like:

        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
            DataSourceID="SqlDataSource1" Width="481px" CellPadding="4"
            ForeColor="#333333" GridLines="None" onrowcommand="Generate_PDF">
            .....................................................
            .....................................................
        </asp:GridView>

    In the code behind, I wrote the corresponding event handler.

        protected void Generate_PDF(object sender, GridViewCommandEventArgs e)
        {
            // The button click event handler code.
            // I am going to explain the code for this section in the remaining
            // part of the article.
        }

    The Generate_PDF method is straightforward: it gets the title, full name and message into some variables, then creates the pdf using these variables. The code for getting the data from the grid view is as follows:

        // get the row index stored in the CommandArgument property
        int index = Convert.ToInt32(e.CommandArgument);
        // get the GridViewRow where the command is raised
        GridViewRow selectedRow = ((GridView)e.CommandSource).Rows[index];
        string title = selectedRow.Cells[1].Text;
        string fullname = selectedRow.Cells[2].Text;
        string msg = @"There are some changes in the company policy, due to this matter you need to submit your latest address to us. Please update your contact details / personal details by visiting the member area of the website. ...................................";

    Since I don't want to save the file to disk, I am going to use a new feature introduced in .Net framework 4, called Memory-Mapped Files. Using MemoryMappedFile, you can create non-persisted memory-mapped files that are not associated with a file on disk. So I am going to create a temporary file in memory, add the pdf content to it, then write it to the output stream. To read more about MemoryMappedFile, read this msdn article: http://msdn.microsoft.com/en-us/library/dd997372.aspx. The portion of code below uses the MemoryMappedFile object to create a test pdf document in memory and perform read/write operations on the file. The CreateViewStream() method gives you a stream that can be used to read or write data to/from the file. The code is straightforward, and I have included comments so that you can understand it.

        using (MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test1.pdf", 1000000))
        {
            // Create a new pdf document object using the constructor. The parameters
            // passed are document size, left margin, right margin, top margin and
            // bottom margin.
            iTextSharp.text.Document d =
                new iTextSharp.text.Document(PageSize.A4, 72, 72, 172, 72);

            // get an instance of the memory mapped file as a stream so that we
            // can write to it
            using (MemoryMappedViewStream stream = mmf.CreateViewStream())
            {
                // associate the document with the stream
                PdfWriter.GetInstance(d, stream);

                /* add an image as bg */
                iTextSharp.text.Image jpg =
                    iTextSharp.text.Image.GetInstance(Server.MapPath("Image/bg.png"));
                jpg.Alignment = iTextSharp.text.Image.UNDERLYING;
                jpg.SetAbsolutePosition(0, 0);
                // this is the size of my background letterhead image. the size is
                // in points. this will fit an A4 size document.
                jpg.ScaleToFit(595, 842);

                d.Open();
                d.Add(jpg);
                d.Add(new Paragraph(String.Format("Dear {0} {1},", title, fullname)));
                d.Add(new Paragraph("\n"));
                d.Add(new Paragraph(msg));
                d.Add(new Paragraph("\n"));
                d.Add(new Paragraph(String.Format("Administrator")));
                d.Close();
            }

            // read the file data
            byte[] b;
            using (MemoryMappedViewStream stream = mmf.CreateViewStream())
            {
                BinaryReader rdr = new BinaryReader(stream);
                b = new byte[stream.Length];
                rdr.Read(b, 0, (int)stream.Length);
            }

            Response.Clear();
            Response.ContentType = "Application/pdf";
            Response.BinaryWrite(b);
            Response.End();
        }

    Press Ctrl + F5 to run the application. First I get the user list; clicking on the generate pdf icon produces the document shown below. Summary: creating pdf documents using iTextSharp is easy, and you will find a lot of information while surfing the www. Some useful resources and references are mentioned below:

        http://itextsharp.com/
        http://www.mikesdotnetting.com/Article/82/iTextSharp-Adding-Text-with-Chunks-Phrases-and-Paragraphs
        http://somewebguy.wordpress.com/2009/05/08/itextsharp-simplify-your-html-to-pdf-creation/

    Hope you enjoyed the article.

    Read the article

  • Microsoft TechEd 2010 - Day 3 @ Bangalore

    - by sathya
    Microsoft TechEd 2010 - Day 3 @ Bangalore. Sorry for my delayed post on day 3 - I had to travel from Bangalore to Chennai, so I couldn't write for the past two days. On day 3, as usual, we had a lot of simultaneous tracks of sessions. This day I chose the "Your Data, Our Platform" track. It had sessions on the following 5 topics:

Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam
SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M
SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan
Data Recovery / Consistency with CheckDB - by Vinod Kumar M
Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave

Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam: This was one of the best sessions I attended. He explained all the concepts in detail with demos. The highlight is the newly introduced Data-tier Application project in VS2010, with which we can manage all our data along with our application inside VS itself. We can create the database, tables, procs, views and so on right there, and once we deploy, it creates a compressed file called .dacpac that stores all the changes in table schema, created procs, etc. in that single file - which reduces our (the developers') effort in preparing deployment scripts and handing them to the DBA. It also has policy configurations that can be managed easily by checking rules, much like in Outlook. For example: if the SQL Server version > 10 then deploy, else don't. This rule means that even if we try to deploy to a SQL Server database with a version less than 10, it will not do it. And if we deploy a .dacpac to a SQL Server production database with the "upgrade DB with this dacpac" option, it reports success once everything completes successfully, otherwise it rolls back to the prior version. Even if it deploys successfully and at a later point in time you wish to revert to the prior version, you can delete that dacpac version so the database reverts to the older version of the changes. For the good questions asked in the session, T-shirts were given away.

SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M: This too was one of the best sessions. The speaker, Vinod, explained everything very clearly. It was a really useful session and, believe it or not, to my knowledge this was the only session in the whole three days of TechEd, apart from the keynote, where the seats were completely full (house full) - people were even standing outside to attend it. Such a great one it was. The speaker did a deep dive into the query plan and showed what actually causes problems. It's all about understanding how SQL Server executes queries: we think in one way, and SQL Server never executes it that way - we need to understand that first. He also mentioned that there might be two plans generated for a single query at a point in time because of parallel processors in the system. The key is in every query: there is something called Estimated Row Count and Actual Row Count in the query plan. If the row count estimated by SQL Server tallies with the actual row count, your performance will be awesome. He shared some tweaks to achieve this. After this, as usual, we had lunch.

SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan: This was more of a DBA's session.
    I am really sorry to say I was totally blank and not interested in this session, so I walked out to attend "Migrating to the Cloud" by Harish Ranganathan (my favorite speaker) - but unfortunately that turned out to be some other person's session. There the speaker was explaining how to configure connection strings so that we can connect to the SQL Azure platform from Visual Studio, and he also showed us how to deploy the same to Windows Azure. In between there were a lot of technical problems - a laptop hang, a locked user account, switching between systems - and I also came in halfway, so I wasn't able to follow it fully.

In between, since I have an MCTS certification, they gave me a T-shirt with the line "I am Certified. Are you?" and asked me to wear it. If we wore it we might get spotted, and they would give us some goodies. So on the 3rd day I was wearing that T-shirt. I got spotted by Tarun, the person who was coordinating things about the certification; he was accompanied by a cameraman, and they interviewed me about the certification. I was shown live at TechEd and seen by its 60,000 live viewers. I was really happy about that.

Data Recovery / Consistency with CheckDB - by Vinod Kumar M: This was one of the best sessions at TechEd, too. This guy is really amazing. In front of us he crashed a database and showed how to recover it in 6 different ways for different numbers of failures. He covered the different types of error messages, like 823, 824 and 825, msdb..suspect_pages, and DBCC CheckDB (with the different parameters to it). I am really waiting for his session to be uploaded to the TechEd website. Here is his contact info if you wish to connect with him:
Twitter: @vinodk_sql
Website: www.ExtremeExperts.com
Blog: http://blogs.sqlxml.org/vinodkumar

Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave: Pinal Dave is a king in SQL; he is a SQL MVP and the owner of SQLAuthority.com. He took the session on spatial databases from the start and covered the two spatial types: geometry and geography. Geometry: x and y axes on a planar surface. Geography: a spherical surface, with 360° as the maximum, used to represent geographic points on the earth and to easily draw maps of different kinds. He had a lot of obstacles during his session - rain coming inside the hall, mic wires shorting out due to the rain, video going off on the display screens. In spite of that, he asked the audience to come to the front rows and managed to deliver a good session without PPTs; when the displays finally came back on, he showed demos of what he had explained orally. That was a really fun-filled, informative session. He gave books to the people who asked good questions and answered his questions well - and I got one too (a book on Data Mining from Wrox).

And finally, after all these sessions, there was the closing keynote of TechEd. We all assembled in a big hall where Mr. Ashok Soota, around 70 years of age and co-founder of MindTree, was invited to give a lecture on his successes. He talked about his past, the companies he moved between and the reasons why, his successes and his failures, his learnings from those failures, and his successes and failures in partnerships. There were some questions for him, like:
What is your suggestion for young entrepreneurs?
How did you learn from past failures?
How do you keep repeating your success?
    What is your suggestion on partnerships? How do you choose partners? And so on. They said that at 7.30 PM there would be a party night, but unfortunately I was not able to attend it because I had to catch my train - and before that I had to pack - so I started at 7 itself. That's it about TechEd! Stay tuned for further technology updates.

    Read the article

  • Building extensions for Expression Blend 4 using MEF

    - by Timmy Kokke
    Introduction: Although it was possible to write extensions for Expression Blend and Expression Design before, it wasn't very easy, and out of the box only one add-in could be used. With Expression Blend 4 it is possible to write extensions using MEF, the Managed Extensibility Framework. As of today there is no documentation on how to build these extensions, so looking through the code with Reflector is something you'll have to do very often. Because Blend and Design are built using WPF, searching the visual tree with Snoop and Mole are among the tools you'll be using a lot while exploring the possibilities.

Configuring the extension project: Extensions are regular .NET class libraries. To create one, load up Visual Studio 2010 and start a new project. Because Blend is built using WPF, choose a WPF User Control Library from the Windows section and give it a name and location. I named mine DemoExtension1. Because Blend looks for add-ins named *.extension.dll, you'll have to tell Visual Studio to use that in the assembly name. To change the assembly name, right-click your project and go to Properties. On the Application tab, add .Extension to the name already in the Assembly name text field.

To be able to debug this extension, I prefer to set the output path on the Build tab to the extensions folder of Expression Blend. This means that everything that used to go into the Debug folder is placed in the extensions folder, including all referenced assemblies that have the Copy Local property set to true.

One last setting: to be able to debug your extension, you could start Blend and attach the debugger by hand, but I like to be able to just hit F5. Go to the Debug tab and add the full path to Blend.exe in the "Start external program" text field.

Extension class: Add a new class to the project. This class needs to implement the IPackage interface, which can be found in the Microsoft.Expression.Extensibility namespace. To get access to this namespace, add Microsoft.Expression.Extensibility.dll to your references; this file can be found in the same folder as the (Expression Blend 4 Beta) Blend.exe file. Make sure the Copy Local property is set to false on this reference. After implementing the interface, the class looks something like:

using Microsoft.Expression.Extensibility;

namespace DemoExtension1
{
    public class DemoExtension1 : IPackage
    {
        public void Load(IServices services)
        {
        }

        public void Unload()
        {
        }
    }
}

These two methods are called when your add-in is loaded and unloaded. The parameter passed to the Load method, IServices services, is your main entry point into Blend. The IServices interface exposes the GetService<T> method, which you will be using a lot: almost every part of Blend can be accessed through a service. For example, you can get to the commanding services of Blend by calling GetService<ICommandService>(), or to the windowing services by calling GetService<IWindowService>().

To get Blend to load the extension, we have to use MEF. (You can get up to speed on MEF on the community site or read the blog of Mr. MEF, Glenn Block.) In the case of Blend extensions, all that needs to be done is to mark the class with an Export attribute and pass it the type of IPackage. The Export attribute can be found in the System.ComponentModel.Composition namespace, which is part of the .NET 4 framework; you need to add this to your references.
    using System.ComponentModel.Composition;
using Microsoft.Expression.Extensibility;

namespace DemoExtension1
{
    [Export(typeof(IPackage))]
    public class DemoExtension1 : IPackage
    {

Blend is able to find your add-in now.

Adding UI: The add-in doesn't do very much at this point. The WPF User Control Library came with a UserControl, so let's use that in this example. I just drop a Button and a TextBlock onto the surface of the control to have something to show in the demo. To get the UserControl to work in Blend, it has to be registered with the window service. Call GetService<IWindowService>() on the IServices interface to get access to the windowing services. The UserControl will be used in Blend on a palette and has to be registered to enable it. This is done by calling RegisterPalette on the IWindowService interface and passing it an identifier, an instance of the UserControl, and a caption for the palette.

public void Load(IServices services)
{
    IWindowService windowService = services.GetService<IWindowService>();
    UserControl1 uc = new UserControl1();
    windowService.RegisterPalette("DemoExtension", uc, "Demo Extension");
}

After hitting F5, Expression Blend will start. You should be able to find the add-in in the Window menu now. Activating this window will show the "Demo Extension" palette with the UserControl, styled according to the settings of Blend.

Now what? Because little is publicly known about how to access the different parts of Blend, adding breakpoints in debug mode and browsing through objects using the Quick Watch feature of Visual Studio is something you'll have to do very often. This demo extension can be used for that purpose very easily. Add a click event handler to the button on the UserControl, change the constructor to take the IServices interface and store it in a field, and set a breakpoint in the button1_Click method.

public partial class UserControl1 : UserControl
{
    private readonly IServices _services;

    public UserControl1(IServices services)
    {
        _services = services;
        InitializeComponent();
    }

    private void button1_Click(object sender, RoutedEventArgs e)
    {
    }
}

Change the call to the constructor in the Load method and pass it the services parameter.

public void Load(IServices services)
{
    IWindowService service = services.GetService<IWindowService>();
    UserControl1 uc = new UserControl1(services);
    service.RegisterPalette("DemoExtension", uc, "Demo Extension");
}

Hit F5 to compile and start Blend. Go to the Window menu and show the add-in. Click the button to hit the breakpoint. Now place the caret on the _services field in the code window and hit Shift+F9 to show the Quick Watch window, and start exploring and discovering where to find everything you need.

More information: There are no official resources available yet. Microsoft has released one extension for Expression Blend that is very useful as a reference: the Microsoft Expression Blend® Add-in Preview for Windows® Phone. It installs a .extension.dll file in the extension folder of Blend. You can load this file with Reflector and have a peek at how Microsoft builds its add-ins.

Conclusion: I hope this gives you something to get started building extensions for Expression Blend. Until Microsoft releases the final version, which hopefully includes more information about building extensions, we'll have to work on documenting it in the community.
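    To recap the moving parts, here is the complete minimal extension skeleton in one file. This sketch simply combines the snippets above; it assumes the Microsoft.Expression.Extensibility reference and the UserControl1 from the WPF User Control Library project, and adds nothing beyond what the article already showed.

using System.ComponentModel.Composition;
using Microsoft.Expression.Extensibility;

namespace DemoExtension1
{
    // The MEF export is what lets Blend discover the package when it
    // scans the extensions folder for *.extension.dll assemblies.
    [Export(typeof(IPackage))]
    public class DemoExtension1 : IPackage
    {
        public void Load(IServices services)
        {
            // Register the control as a palette; it appears under the
            // Window menu with the caption "Demo Extension".
            IWindowService windowService = services.GetService<IWindowService>();
            windowService.RegisterPalette("DemoExtension", new UserControl1(services), "Demo Extension");
        }

        public void Unload()
        {
            // Nothing to clean up in this minimal skeleton.
        }
    }
}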

    Read the article

  • Reg Gets a Job at Red Gate (and what happens behind the scenes)

    - by red(at)work
    Mr Reg Gater works at one of Cambridge’s many high-tech companies. He doesn’t love his job, but he puts up with it because... well, it could be worse. Every day he drives to work around the Red Gate roundabout, wondering what his boss is going to blame him for today, and wondering if there could be a better job out there for him. By late morning he already feels like handing his notice in. He got the hacky look from his boss for being 5 minutes late, and then they ran out of tea. Again. He goes to the local sandwich shop for lunch, and picks up a Red Gate job menu and a Book of Red Gate while he’s waiting for his order. That night, he goes along to Cambridge Geek Nights and sees some very enthusiastic Red Gaters talking about the work they do; it sounds interesting and, of all things, fun. He takes a quick look at the job vacancies on the Red Gate website, and an hour later realises he’s still there – looking at videos, photos and people profiles. He especially likes the Red Gate’s Got Talent page, and is very impressed with Simon Johnson’s marathon time. He thinks that he’d quite like to work with such awesome people.

It just so happens that Red Gate recently decided that they wanted to hire another hotshot team member. Behind the scenes, the wheels were set in motion: the recruitment team met with the hiring manager to understand exactly what they’re looking for, and to decide what interview tests to do, who will do the interviews, and to kick-start any interview training those people might need. Next up, a job description and job advert were written, and the job was put on the market. Reg applies, and his CV lands in the recruitment team’s inbox, and they open it up with eager anticipation that Reg could be the next awesome new starter. He looks good, and in a jiffy they’ve arranged an interview. Reg arrives for his interview, and is greeted by a smiley receptionist. She offers him a selection of drinks and he feels instantly relaxed. A couple of interviews and an assessment later, he gets a job offer. We make his day and he makes ours by accepting, and becoming one of the 60 new starters so far this year.

Behind the scenes, things start moving all over again. The HR team arranges for a “Welcome” goodie box to be whisked out to him, prepares his contract, sends an email to Information Services (or IS for short – we’ll come back to them), keeps in touch with Reg to make sure he knows what to expect on his first day, and of course asks him to fill in the all-important wiki questionnaire so his new colleagues can start to get to know him before he even joins. Meanwhile, the IS team see an email in SupportWorks from HR. They see that Reg will be starting in the sales team in a few days’ time, and they know exactly what to do. They pull out a new machine, and within minutes have used their automated deployment software to install every piece of software that a new recruit could ever need. They also check with Reg’s new manager to see if he has any special requirements that they could help with. Reg starts and is amazed to find a fully configured machine sitting on his desk, complete with stationery and all the other tools he’ll need to do his job. He feels even more cared for after he gets a workstation assessment, and realises he’d be comfier with an ergonomic keyboard and a footstool. They arrive minutes later, just like that. His manager starts him off on his induction and sales training.
Along with job-specific training, he’ll also have a buddy to help him find his feet, and loads of pre-arranged demos and introductions. Reg settles in nicely, and is great at his job. He enjoys the canteen, and regularly eats one of the 40,000 meals provided each year. He gets used to the selection of teas that are available, develops a taste for champagne launch parties, and has his fair share of the 25,000 cups of coffee downed at Red Gate towers each year. He goes along to some Feel Good Fund events, and donates a little something to charity in exchange for a turn on the chocolate fountain. He’s looking a little scruffy, so he decides to get his hair cut in between meetings, just in time for the Red Gate birthday company photo. Reg starts a new project: identifying existing customers to up-sell to new bundles. He talks with the web team to generate lists of qualifying customers who haven’t recently been sent marketing emails, and sends emails out, using a new in-house developed tool to schedule follow-up calls in CRM for the same group. The customer responds, saying they’d like to upgrade but are having a licensing problem – Reg sends the issue to Support, and it gets routed to the web team. The team identifies a workaround, and the bug gets scheduled into the next maintenance release in a fortnight’s time (hey; they got lucky). With all the new stuff Reg is working on, he realises that he’d be way more efficient if he had a third monitor. He speaks to IS and they get him one - no argument. He also needs a test machine and then some extra memory. Done. He then thinks he needs an iPad, and goes to ask for one. He gets told to stop pushing his luck. Some time later, Reg’s wife has a baby, so Reg gets 2 weeks of paid paternity leave and a bunch of flowers sent to his house. He signs up to the childcare scheme so that he doesn’t have to pay National Insurance on the first £243 of his childcare. The accounts team makes it all happen seamlessly, as they did with his Give As You Earn payments, which come out of his wages and go straight to his favorite charity. Reg’s sales career is going well. He’s grateful for the help that he gets from the product support team. How do they answer all those 900-ish support calls so effortlessly each month? He’s impressed with the patches that are sent out to customers who find “interesting behavior” in their tools, and to the customers who just must have that new feature. A little later in his career at Red Gate, Reg decides that he’d like to learn about management. He goes on some management training specially customised for Red Gate, joins the Management Book Club, and gets together with other new managers to brainstorm how to get the most out of one to one meetings with his team. Reg decides to go for a game of Foosball to celebrate his good fortune with his team, and has to wait for Finance to finish. While he’s waiting, he reflects on the wonderful time he’s had at Red Gate. He can’t put his finger on what it is exactly, but he knows he’s on to a good thing. All of the stuff that happened to Reg didn’t just happen magically. We’ve got teams of people working relentlessly behind the scenes to make sure that everyone here is comfortable, safe, well fed and caffeinated to the max.

    Read the article

  • The Internet of Things Is Really the Internet of People

    - by HCM-Oracle
    By Mark Hurd - Originally Posted on LinkedIn As I speak with CEOs around the world, our conversations invariably come down to this central question: Can we change our corporate cultures and the ways we train and reward our people as rapidly as new technology is changing the work we do, the products we make and how we engage with customers? It’s a critical consideration given today’s pace of disruption, which already is straining traditional management models and HR strategies. Winning companies will bring innovation and vision to their employees and partners by attracting people who will thrive in this emerging world of relentless data, predictive analytics and unlimited what-if scenarios. So, where are we going to find employees who are as familiar with complex data as I am with orderly financial statements and business plans? I’m not just talking about high-end data scientists who most certainly will sit at or near the top of the new decision-making pyramid. Global organizations will need creative and motivated people who will devote their time to manipulating, reviewing, analyzing, sorting and reshaping data to drive business and delight customers. This might seem evident, but my conversations with business people across the globe indicate that only a small number of companies get it. In the past few years, executives have been busy keeping pace with seismic upheavals, including the rise of social customer engagement, the rapid acceleration of product-development cycles and the relentless move to mobile-first. But all of that, I think, is the start of an uphill climb to the top of a roller-coaster. Today, about 10 billion devices across the globe are connected to the Internet. In a couple of years, that number will probably double, and not because we will have bought 10 billion more computers, smart phones and tablets. This unprecedented explosion of Big Data is being triggered by the Internet of Things, which is another way of saying that the numerous intelligent devices touching our everyday lives are all becoming interconnected. Home appliances, food, industrial equipment, pets, pharmaceutical products, pallets, cars, luggage, packaged goods, athletic equipment, even clothing will be streaming data. Some data will provide important information about how to run our businesses and lead healthier lives. Much of it will be extraneous. How does a CEO cope with this unimaginable volume and velocity of data, much less harness it to excite and delight customers? Here are three things CEOs must do to tackle this challenge: 1) Take care of your employees, take care of your customers. Larry Ellison recently noted that the two most important priorities for any CEO today revolve around people: Taking care of your employees and taking care of your customers. Companies in today’s hypercompetitive business environment simply won’t be able to survive unless they’ve got world-class people at all levels of the organization. CEOs must demonstrate a commitment to employees by becoming champions for HR systems that empower every employee to fully understand his or her job, how it ties into the corporate framework, what’s expected of them, what training is available, and how they can use an embedded social network to communicate, collaborate and excel. Over the next several years, many of the world’s top industrialized economies will see a turnover in the workforce on an unprecedented scale. 
    Across the United States, Europe, China and Japan, the “baby boomer” generation will be retiring and, by 2020, we’ll see turnovers in those regions ranging from 10 to 30 percent. How will companies replace all that brainpower, experience and know-how? How will CEOs perpetuate the best elements of their corporate cultures in the midst of this profound turnover? The challenge will be daunting, but it can be met with world-class HR technology. As companies begin replacing up to 30 percent of their workforce, they will need thousands of new types of data-native workers to exploit the Internet of Things in the service of the Internet of People. The shift in corporate mindset here can’t be overstated. The CEO has to be at the forefront of this new way of recruiting, training, motivating, aligning and developing truly 21st-century talent.

2) Start thinking today about the Internet of People. Some forward-looking companies have begun pursuing the “democratization of data.” This allows more people within a company greater access to data that can help them make better decisions, move more quickly and keep pace with the changing interests and demands of their customers. As a result, we’ve seen organizations flatten out, growing numbers of well-informed people authorized to make decisions without corporate approval and a movement of engagement away from headquarters to the point of contact with the customer. These are profound changes, and I’m a huge proponent. As I think about what the next few years will bring as companies become deluged with unprecedented streams of data, I’m convinced that we’ll need dramatically different organizational structures, decision-making models, risk-management profiles and reward systems. For example, if a car company’s marketing department mines incoming data to determine that customers are shifting rapidly toward neon-green models, how many layers of approval, review, analysis and sign-off will be needed before the factory starts cranking out more neon-green cars? Will we continue to have organizations where too many people are empowered to say “No” and too few are allowed to say “Yes”? If so, how will those companies be able to compete in a world in which customers have more choices, instant access to more information and less loyalty than ever before? That’s why I think CEOs need to begin thinking about this problem right now, not in a year or two when competitors are already reshaping their organizations to match the marketplace’s new realities.

3) Partner with universities to help create a new type of highly skilled worker. Several years ago, universities introduced new undergraduate as well as graduate-level programs in analytics and informatics as the business need for deeper insights into the booming world of data began to explode. Today, as the growth rate of data continues to soar, we know that the Internet of Things will only intensify that growth. Moreover, as Big Data fuels insights that can be shaped into products and services that generate revenue, the demand for data scientists and data specialists will go on unabated. Beyond that top-level expertise, companies are going to need data-native thinkers at all levels of the organization. Where will this new type of worker come from? I think it’s incumbent on the business community to collaborate with universities to develop new curricula designed to turn out graduates who can capitalize on the data-driven world that the Internet of Things is surely going to create.
These new workers will create opportunities to help their companies in fields as diverse as product design, customer service, marketing, manufacturing and distribution. They will become innovative leaders in fashioning an entirely new type of workforce and organizational structure optimized to fully exploit the Internet of Things so that it becomes a high-value enabler of the Internet of People. Mark Hurd is President of Oracle Corporation and a member of the company's Board of Directors. He joined Oracle in 2010, bringing more than 30 years of technology industry leadership, computer hardware expertise, and executive management experience to his role with the company. As President, Mr. Hurd oversees the corporate direction and strategy for Oracle's global field operations, including marketing, sales, consulting, alliances and channels, and support. He focuses on strategy, leadership, innovation, and customers.

    Read the article

  • DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

    - by Argenis
    Does your CHECKDB hurt, Argenis? There is a classic blog series by Paul Randal [blog|twitter] called “CHECKDB From Every Angle” which is pretty much mandatory reading for anybody who’s even remotely considering going for the MCM certification, or its replacement (the Microsoft Certified Solutions Master: Data Platform – makes my fingers hurt just from typing it). Of particular interest is the post “Consistency Options for a VLDB” – in it, Paul provides solid, timeless advice (I use the word “timeless” because it was written in 2007, and it all applies today!) on how to perform checks on very large databases.

Well, here I was trying to figure out how to make CHECKDB run faster on a restored copy of one of our databases, which happens to exceed 7TB in size. The whole thing was taking several days on multiple systems, regardless of the storage used – SAS, SATA or even SSD… and I actually didn’t pay much attention to how long it was taking, nor did I bother to look at the reasons why – as long as it was finishing okay and found no consistency errors. Yes – I know. That was a huge mistake, as corruption found in a database several days after taking place could only have spread further in the meantime – and potentially caused large data loss.

In the last two weeks I increased my attention towards this problem, as we noticed that CHECKDB was taking EVEN LONGER on brand-new all-flash storage in the SAN! I couldn’t really explain it, and we were almost ready to blame the storage vendor. The vendor told us that they could initially see the server driving decent I/O – around 450Mb/sec – and then it would settle at a very slow rate of 10Mb/sec or so. “Hmm”, I thought – “CHECKDB is just not pushing the I/O subsystem hard enough”. Perfmon confirmed the vendor’s observations.

Dreaded @BlobEater. What was CHECKDB doing all the time while doing so little I/O? Eating blobs. It turns out that CHECKDB was taking an extremely long time on one of our frankentables, which happens to have 35 billion rows (yup, with a b) and sucks up several terabytes of space in the database. We do have a project ongoing to purge/split/partition this table, so it’s just a matter of time before we deal with it. But the reality today is that CHECKDB comes to a screeching halt in performance when dealing with this particular table.

Checking sys.dm_os_waiting_tasks and sys.dm_os_latch_stats showed that LATCH_EX (DBCC_OBJECT_METADATA) was by far the top wait type. I remembered hearing recently about that wait from another post that Paul Randal made, but that was related to computed-column indexes, and in fact, Paul himself reminded me of his article via Twitter. But alas, our pathological table had no non-clustered indexes on computed columns. I knew that latches are used by the database engine to do internal synchronization – but how could I help speed this up? After all, this is stuff that doesn’t have a lot of knobs to tweak. (There’s a fantastic level-500 talk by Bob Ward from Microsoft CSS [blog|twitter] called “Inside SQL Server Latches” given at PASS 2010 – and you can check it out here. DISCLAIMER: I assume no responsibility for any brain melting that might ensue from watching Bob’s talk!)
    Failed Hypotheses. Earlier this week I flew down to Palo Alto, CA, to visit our headquarters – and after having a great time with my Monkey peers, I was relaxing on the plane back to Seattle watching a great talk by SQL Server MVP and fellow MCM Maciej Pilecki [twitter] called “Masterclass: A Day in the Life of a Database Transaction”, where he discusses many different topics related to transaction management inside SQL Server. Very good stuff, and when I got home it was a little late – that slow DBCC CHECKDB that I had been dealing with was way in the back of my head.

As I was looking at the problem at hand earlier this week, I thought “How about I set the database to read-only?” I remembered one of the things Maciej had (jokingly) said in his talk: “if you don’t want locking and blocking, set the database to read-only” (or something to that effect, pardon my loose memory). I immediately killed the CHECKDB which had been running painfully for days, and set the database to read-only mode. Then I ran DBCC CHECKDB against it. It started going really fast (even a bit faster than before), and then throttled down again to around 10Mb/sec. All sorts of expletives went through my head at the time. Sure enough, the same latching scenario was present. Oh well. I even spent some time trying to figure out if NUMA was hurting performance. Folks on Twitter made suggestions in this regard (thanks, Lonny! [twitter]).

…Eureka? This past Friday I was still scratching my head about the whole thing; I was ready to start profiling with XPERF to see if I could figure out which part of the engine was to blame and then get Microsoft to look at the evidence. After getting a bunch of good news I’ll blog about separately, I sat down for a figurative smackdown with CHECKDB before the weekend. And then the light bulb went on: a sparse column. I thought that I couldn’t possibly be experiencing the same scenario that Paul blogged about back in March showing extreme latching with non-clustered indexes on computed columns. Did I even have a non-clustered index on my sparse column? As it turns out, I did. I had one filtered non-clustered index – with the sparse column as the index key (and only column). To prove that this was the problem, I went and set up a test.

Yup, that'll do it. The repro is very simple for this issue; I tested it on the latest public builds of SQL Server 2008 R2 SP2 (CU6) and SQL Server 2012 SP1 (CU4). First, create a test database and a test table, which only needs to contain a sparse column:

CREATE DATABASE SparseColTest;
GO
USE SparseColTest;
GO
CREATE TABLE testTable (testCol smalldatetime SPARSE NULL);
GO
INSERT INTO testTable (testCol) VALUES (NULL);
GO 1000000

That’s 1 million rows, and even though you’re inserting NULLs, it’s going to take a while. On my laptop, it took 3 minutes and 31 seconds. Next, we run DBCC CHECKDB against the database:

DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;

This runs extremely fast, at least on my test rig – 198 milliseconds. Now let’s create a filtered non-clustered index on the sparse column:

CREATE NONCLUSTERED INDEX [badBadIndex] ON testTable (testCol) WHERE testCol IS NOT NULL;

With the index in place, let’s run DBCC CHECKDB one more time:

DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;

On my test system this statement completed in 11433 milliseconds. 11.43 full seconds. Quite the jump from 198 milliseconds.
    I went ahead and dropped the filtered non-clustered indexes on the restored copy of our production database, and ran CHECKDB against that. We went down from 7+ days to 19 hours and 20 minutes. Cue the “Argenis is not impressed” meme, please, Mr. LaRock. My pain is your gain, folks. Go check to see if you have any such indexes – they’re likely causing your consistency checks to run very, very slow. Happy CHECKDBing, -Argenis

ps: I plan to file a Connect item for this issue – I consider it a pretty serious bug in the engine. After all, filtered indexes were invented BECAUSE of the sparse column feature – and it makes a lot of sense to use them together. Watch this space and my twitter timeline for a link.
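    A quick way to hunt for such indexes from code: the following C# sketch (mine, not from the original post) queries the standard catalog views sys.indexes, sys.index_columns and sys.columns for filtered indexes keyed on sparse columns. The connection string is a placeholder you would point at the database to inspect.

using System;
using System.Data.SqlClient;

class FindSparseFilteredIndexes
{
    static void Main()
    {
        const string query = @"
            SELECT OBJECT_NAME(i.object_id) AS table_name,
                   i.name                   AS index_name,
                   c.name                   AS sparse_column
            FROM sys.indexes i
            JOIN sys.index_columns ic
              ON ic.object_id = i.object_id AND ic.index_id = i.index_id
            JOIN sys.columns c
              ON c.object_id = ic.object_id AND c.column_id = ic.column_id
            WHERE i.has_filter = 1   -- filtered index...
              AND c.is_sparse = 1;   -- ...keyed on a sparse column";

        // Placeholder connection string - adjust server and database.
        using (var conn = new SqlConnection("Server=.;Database=SparseColTest;Integrated Security=true"))
        using (var cmd = new SqlCommand(query, conn))
        {
            conn.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    Console.WriteLine("{0}.{1} (sparse column: {2})",
                        rdr.GetString(0), rdr.GetString(1), rdr.GetString(2));
                }
            }
        }
    }
}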

    Read the article

  • Vlc And other Multi Media Problems

    - by MrMagu
    ive tried numerous Free Multimedia Programs on ubuntu and i get the same result when i go to watch one of my movies off my other hardrive it allways ends up full screening with a stuck image in the middle and vlc ends up looking pitch black. ive look all over the web for this issue im wondering if anyone is experinceing the same problems I Dont know if it has to do with the duel monitors ive tried adding and re adding the repositorys but it seems to be doing the same repetive thing for over a month now do i need to reinstall the whole system or what idk anyhelp please would be much appreciated Ty MrMagu Xorg Conf File nvidia-settings: X configuration file generated by nvidia-settings nvidia-settings: version 304.37 (buildd@allspice) Sun Sep 9 05:59:26 UTC 2012 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "0" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor0" VendorName "Unknown" ModelName "Ancor Communications Inc VE247" HorizSync 30.0 - 83.0 VertRefresh 50.0 - 76.0 Option "DPMS" EndSection Section "Monitor" Identifier "Monitor1" VendorName "Unknown" ModelName "Ancor Communications Inc VE247" HorizSync 30.0 - 83.0 VertRefresh 50.0 - 76.0 EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "Quadro FX 1500" EndSection Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "Quadro FX 1500" BusID "PCI:1:0:0" Screen 1 EndSection Section "Screen" Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+0, DFP-1: nvidia-auto-select +1920+0; DFP-0: 1920x1080 +0+0, DFP-1: 1920x1080 +1920+0" Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+0; DFP-0: 1920x1080 +0+0; DFP-0: 1920x1080_60 +0+0; DFP-0: 1680x1050 +0+0; DFP-0: 1680x1050_60 +0+0; DFP-0: 1440x900 +0+0; DFP-0: 1440x900_60 +0+0; DFP-0: 1280x1024 +0+0; DFP-0: 1280x1024_75 +0+0; DFP-0: 1280x1024_60 +0+0; DFP-0: 1280x960 +0+0; DFP-0: 1280x960_60 +0+0; DFP-0: 1152x864 +0+0; DFP-0: 1152x864_75 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_75 +0+0; DFP-0: 1024x768_70 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_75 +0+0; DFP-0: 800x600_72 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 800x600_56 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_75 +0+0; DFP-0: 640x480_72 +0+0; DFP-0: 640x480_60 +0+0; DFP-0: nvidia-auto-select @1920x720 +0+0" Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+0; DFP-0: 1920x1080 +0+0; DFP-0: 1920x1080_60 +0+0; DFP-0: 1680x1050 +0+0; DFP-0: 1680x1050_60 +0+0; DFP-0: 1440x900 +0+0; DFP-0: 1440x900_60 +0+0; DFP-0: 1280x1024 +0+0; DFP-0: 1280x1024_75 +0+0; DFP-0: 1280x1024_60 +0+0; DFP-0: 1280x960 +0+0; DFP-0: 1280x960_60 +0+0; DFP-0: 1152x864 +0+0; DFP-0: 1152x864_75 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_75 +0+0; DFP-0: 1024x768_70 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_75 +0+0; DFP-0: 800x600_72 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 800x600_56 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_75 +0+0; DFP-0: 640x480_72 +0+0; DFP-0: 640x480_60 +0+0" Identifier "Screen0" Device "Device0" Monitor 
"Monitor0" DefaultDepth 24 Option "TwinView" "1" Option "TwinViewXineramaInfoOrder" "DFP-0" Option "Stereo" "0" Option "nvidiaXineramaInfoOrder" "DFP-0" Option "metamodes" "DFP-0: nvidia-auto-select +0+0, DFP-1: 1920x1080 +1920+0; DFP-0: 1920x1080 +0+0, DFP-1: nvidia-auto-select +1920+0; DFP-0: 1920x1080_60 +0+0; DFP-0: 1680x1050 +0+0; DFP-0: 1680x1050_60 +0+0; DFP-0: 1440x900 +0+0; DFP-0: 1440x900_60 +0+0; DFP-0: 1280x1024 +0+0; DFP-0: 1280x1024_75 +0+0; DFP-0: 1280x1024_60 +0+0; DFP-0: 1280x960 +0+0; DFP-0: 1280x960_60 +0+0; DFP-0: 1152x864 +0+0; DFP-0: 1152x864_75 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_75 +0+0; DFP-0: 1024x768_70 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_75 +0+0; DFP-0: 800x600_72 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 800x600_56 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_75 +0+0; DFP-0: 640x480_72 +0+0; DFP-0: 640x480_60 +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "DFP-1: 1920x1080 +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "InputClass" Identifier "Mouse Remap" MatchProduct "Saitek Cyborg R.A.T.7 Mouse" MatchDevicePath "/dev/input/event*" Option "ButtonMapping" "1 2 3 4 5 6 7 8 9 0 0 0 0 0 0" EndSection Pictures http://tinypic.com/r/34o7j1h/6 http://tinypic.com/r/3rn20/6 Ty so Much Mr Magu

    Read the article

  • UIImagePickerController, UIImage, Memory and More!

    - by Itay
    I've noticed that there are many questions about how to handle UIImage objects, especially in conjunction with UIImagePickerController and then displaying them in a view (usually a UIImageView). Here is a collection of common questions and their answers. Feel free to edit and add your own. I obviously learnt all this information from somewhere too. Various forum posts, StackOverflow answers and my own experimenting brought me to all these solutions. Credit goes to those who posted some sample code that I've since used and modified. I don't remember who you all are - but hats off to you!

How Do I Select An Image From the User's Images or From the Camera?

You use UIImagePickerController. The documentation for the class gives a decent overview of how one would use it, and can be found here. Basically, you create an instance of the class, which is a modal view controller, display it, and set yourself (or some class) to be the delegate. Then you'll get notified when a user selects some form of media (movie or image in 3.0 on the 3GS), and you can do whatever you want.

My Delegate Was Called - How Do I Get The Media?

The delegate method signature is the following:

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info;

You should put a breakpoint in the debugger to see what's in the dictionary, but you use that to extract the media. For example:

UIImage* image = [info objectForKey:UIImagePickerControllerOriginalImage];

There are other keys that work as well, all in the documentation.

OK, I Got The Image, But It Doesn't Have Any Geolocation Data. What gives?

Unfortunately, Apple decided that we're not worthy of this information. When they load the data into the UIImage, they strip it of all the EXIF/geolocation data.

Can I Get To The Original File Representing This Image on the Disk?

Nope. For security purposes, you only get the UIImage.

How Can I Look At The Underlying Pixels of the UIImage?

Since the UIImage is immutable, you can't look at the direct pixels. However, you can make a copy. The code for this looks something like:

UIImage* image = ...; // An image
NSData* pixelData = (NSData*) CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
unsigned char* pixelBytes = (unsigned char *)[pixelData bytes];
// Take away the red channel, assuming 32-bit RGBA
for(int i = 0; i < [pixelData length]; i += 4) {
    pixelBytes[i] = 0; // red
    pixelBytes[i+1] = pixelBytes[i+1]; // green
    pixelBytes[i+2] = pixelBytes[i+2]; // blue
    pixelBytes[i+3] = pixelBytes[i+3]; // alpha
}

However, note that CGDataProviderCopyData provides you with an "immutable" reference to the data - meaning you can't change it (and you may get a BAD_ACCESS error if you do). Look at the next question if you want to see how you can modify the pixels.

How Do I Modify The Pixels of the UIImage?

The UIImage is immutable, meaning you can't change it. Apple posted a great article on how to get a copy of the pixels and modify them, and rather than copy and paste it here, you should just go read the article. Once you have the bitmap context they mention in the article, you can do something similar to this to get a new UIImage with the modified pixels:

CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage* newImage = [UIImage imageWithCGImage:ref];

Do remember to release your references though, otherwise you're going to be leaking quite a bit of memory.

After I Select 3 Images From The Camera, I Run Out Of Memory. Help!

You have to remember that even though on disk these images take up only a few hundred kilobytes at most, that's because they're compressed as a PNG or JPG. When they are loaded into the UIImage, they become uncompressed. A quick back-of-the-envelope calculation would be:

width x height x 4 = bytes in memory

That's assuming 32-bit pixels. If you have 16-bit pixels (some JPGs are stored as RGBA-5551), then you'd replace the 4 with a 2. Now, images taken with the camera are 1600 x 1200 pixels, so let's do the math:

1600 x 1200 x 4 = 7,680,000 bytes = ~8 MB

8 MB is a lot, especially when you have a limit of around 24 MB for your application. That's why you run out of memory.

OK, I Understand Why I Have No Memory. What Do I Do?

There is never any reason to display images at their full resolution. The iPhone has a screen of 480 x 320 pixels, so you're just wasting space. If you find yourself in this situation, ask yourself the following question: do I need the full-resolution image? If the answer is yes, then you should save it to disk for later use. If the answer is no, then read the next part. Once you've decided what to do with the full-resolution image, you need to create a smaller image to use for displaying. Many times you might even want several sizes for your image: a thumbnail, a full-size one for displaying, and the original full-resolution image.

OK, I'm Hooked. How Do I Resize the Image?

Unfortunately, there is no single defined way to resize an image. Also, it's important to note that when you resize it, you'll get a new image - you're not modifying the old one. There are a couple of methods to do the resizing. I'll present them both here, and explain the pros and cons of each.

Method 1: Using UIKit

+ (UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize;
{
    // Create a graphics image context
    UIGraphicsBeginImageContext(newSize);
    // Tell the old image to draw in this new context, with the desired
    // new size
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    // Get the new image from the context
    UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
    // End the context
    UIGraphicsEndImageContext();
    // Return the new image.
    return newImage;
}

This method is very simple, and works great. It will also deal with the UIImageOrientation for you, meaning that you don't have to care whether the camera was sideways when the picture was taken. However, this method is not thread-safe, and since thumbnailing is a relatively expensive operation (approximately 2.5 s on a 3G for a 1600 x 1200 pixel image), this is very much an operation you may want to do in the background, on a separate thread.

Method 2: Using CoreGraphics

// Note: assumes a radians(degrees) helper, e.g. degrees * M_PI / 180.
+ (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSize:(CGSize)targetSize;
{
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;

    CGImageRef imageRef = [sourceImage CGImage];
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);

    if (bitmapInfo == kCGImageAlphaNone) {
        bitmapInfo = kCGImageAlphaNoneSkipLast;
    }

    CGContextRef bitmap;
    if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
        bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
    } else {
        bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
    }

    if (sourceImage.imageOrientation == UIImageOrientationLeft) {
        CGContextRotateCTM (bitmap, radians(90));
        CGContextTranslateCTM (bitmap, 0, -targetHeight);
    } else if (sourceImage.imageOrientation == UIImageOrientationRight) {
        CGContextRotateCTM (bitmap, radians(-90));
        CGContextTranslateCTM (bitmap, -targetWidth, 0);
    } else if (sourceImage.imageOrientation == UIImageOrientationUp) {
        // NOTHING
    } else if (sourceImage.imageOrientation == UIImageOrientationDown) {
        CGContextTranslateCTM (bitmap, targetWidth, targetHeight);
        CGContextRotateCTM (bitmap, radians(-180.));
    }

    CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage* newImage = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap);
    CGImageRelease(ref);

    return newImage;
}

The benefit of this method is that it is thread-safe, plus it takes care of all the small things (using the correct color space and bitmap info, dealing with image orientation) that the UIKit version does.

How Do I Resize and Maintain Aspect Ratio (like the AspectFill option)?

It is very similar to the method above, and it looks like this:

+ (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSizeWithSameAspectRatio:(CGSize)targetSize;
{
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

    if (CGSizeEqualToSize(imageSize, targetSize) == NO) {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;

        if (widthFactor > heightFactor) {
            scaleFactor = widthFactor; // scale to fit height
        } else {
            scaleFactor = heightFactor; // scale to fit width
        }

        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;

        // center the image
        if (widthFactor > heightFactor) {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        } else if (widthFactor < heightFactor) {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }

    CGImageRef imageRef = [sourceImage CGImage];
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);

    if (bitmapInfo == kCGImageAlphaNone) {
        bitmapInfo = kCGImageAlphaNoneSkipLast;
    }

    CGContextRef bitmap;
    if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
        bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
    } else {
        bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
    }

    // In the right or left cases, we need to switch scaledWidth and scaledHeight,
    // and also the thumbnail point
    if (sourceImage.imageOrientation == UIImageOrientationLeft) {
        thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
        CGFloat oldScaledWidth = scaledWidth;
        scaledWidth = scaledHeight;
        scaledHeight = oldScaledWidth;
        CGContextRotateCTM (bitmap, radians(90));
        CGContextTranslateCTM (bitmap, 0, -targetHeight);
    } else if (sourceImage.imageOrientation == UIImageOrientationRight) {
        thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
        CGFloat oldScaledWidth = scaledWidth;
        scaledWidth = scaledHeight;
        scaledHeight = oldScaledWidth;
        CGContextRotateCTM (bitmap, radians(-90));
        CGContextTranslateCTM (bitmap, -targetWidth, 0);
    } else if (sourceImage.imageOrientation == UIImageOrientationUp) {
        // NOTHING
    } else if (sourceImage.imageOrientation == UIImageOrientationDown) {
        CGContextTranslateCTM (bitmap, targetWidth, targetHeight);
        CGContextRotateCTM (bitmap, radians(-180.));
    }

    CGContextDrawImage(bitmap, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage* newImage = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap);
    CGImageRelease(ref);

    return newImage;
}

The method we employ here is to create a bitmap with the desired size, but draw an image that is actually larger, thus maintaining the aspect ratio.

So We've Got Our Scaled Images - How Do I Save Them To Disk?

This is pretty simple. Remember that we want to save a compressed version to disk, and not the uncompressed pixels. Apple provides two functions that help us with this (documentation is here):

NSData* UIImagePNGRepresentation(UIImage *image);
NSData* UIImageJPEGRepresentation(UIImage *image, CGFloat compressionQuality);

And if you want to use them, you'd do something like:

UIImage* myThumbnail = ...; // Get some image
NSData* imageData = UIImagePNGRepresentation(myThumbnail);

Now we're ready to save it to disk, which is the final step (say into the documents directory):

// Give a name to the file
NSString* imageName = @"MyImage.png";
// Now, we have to find the documents directory so we can save it
// Note that you might want to save it elsewhere, like the cache directory,
// or something similar.
NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString* documentsDirectory = [paths objectAtIndex:0];
// Now we get the full path to the file
NSString* fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imageName];
// and then we write it out
[imageData writeToFile:fullPathToFile atomically:NO];

You would repeat this for every version of the image you have.

How Do I Load These Images Back Into Memory?

Just look at the various UIImage initialization methods, such as +imageWithContentsOfFile:, in the Apple documentation.

    Read the article

  • Atmospheric Scattering

    - by Lawrence Kok
    I'm trying to implement atmospheric scattering based on Sean O'Neil's algorithm, as published in GPU Gems 2, but I'm having trouble getting the shader to work. My latest attempts resulted in: http://img253.imageshack.us/g/scattering01.png/ I've downloaded O'Neil's sample code from http://http.download.nvidia.com/developer/GPU_Gems_2/CD/Index.html and made minor adjustments to the 'SkyFromAtmosphere' shader so that it would run in AMD RenderMonkey. In the images you can see that a form of banding occurs, taking on a bluish tone, but it is only applied to one half of the sphere; the other half is completely black. The banding also appears at the zenith instead of the horizon, and for some reason I end up with a Pac-Man shape. I would appreciate it if somebody could show me what I'm doing wrong.

    Vertex Shader:

    uniform mat4 matView;
    uniform vec4 view_position;
    uniform vec3 v3LightPos;

    const int nSamples = 3;
    const float fSamples = 3.0;

    // Rayleigh scattering falls off as 1/wavelength^4
    const vec3 Wavelength = vec3(0.650, 0.570, 0.475);
    const vec3 v3InvWavelength = 1.0 / vec3(
        Wavelength.x * Wavelength.x * Wavelength.x * Wavelength.x,
        Wavelength.y * Wavelength.y * Wavelength.y * Wavelength.y,
        Wavelength.z * Wavelength.z * Wavelength.z * Wavelength.z);

    const float fInnerRadius = 10.0;
    const float fOuterRadius = fInnerRadius * 1.025;
    const float fInnerRadius2 = fInnerRadius * fInnerRadius;
    const float fOuterRadius2 = fOuterRadius * fOuterRadius;

    const float fScale = 1.0 / (fOuterRadius - fInnerRadius);
    const float fScaleDepth = 0.25;
    const float fScaleOverScaleDepth = fScale / fScaleDepth;

    const vec3 v3CameraPos = vec3(0.0, fInnerRadius * 1.015, 0.0);
    const float fCameraHeight = length(v3CameraPos);
    const float fCameraHeight2 = fCameraHeight * fCameraHeight;

    const float fm_ESun = 150.0;
    const float fm_Kr = 0.0025;
    const float fm_Km = 0.0010;
    const float fKrESun = fm_Kr * fm_ESun;
    const float fKmESun = fm_Km * fm_ESun;
    const float fKr4PI = fm_Kr * 4.0 * 3.141592653;
    const float fKm4PI = fm_Km * 4.0 * 3.141592653;

    varying vec3 v3Direction;
    varying vec4 c0, c1;

    // O'Neil's analytic approximation of the optical-depth integral
    float scale(float fCos)
    {
        float x = 1.0 - fCos;
        return fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
    }

    void main(void)
    {
        // Get the ray from the camera to the vertex, and its length (which is the
        // far point of the ray passing through the atmosphere).
        // NOTE: O'Neil's original SkyFromAtmosphere shader computes the ray as
        // v3Pos - v3CameraPos; reversing it as below flips every angle derived
        // from it and is a likely cause of the half-black sphere.
        vec3 v3FrontColor = vec3(0.0, 0.0, 0.0);
        vec3 v3Pos = normalize(gl_Vertex.xyz) * fOuterRadius;
        vec3 v3Ray = v3CameraPos - v3Pos;
        float fFar = length(v3Ray);
        v3Ray = normalize(v3Ray);

        // Calculate the ray's starting position, then calculate its scattering offset
        vec3 v3Start = v3CameraPos;
        float fHeight = length(v3Start);
        float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fCameraHeight));
        float fStartAngle = dot(v3Ray, v3Start) / fHeight;
        float fStartOffset = fDepth * scale(fStartAngle);

        // Initialize the scattering loop variables
        float fSampleLength = fFar / fSamples;
        float fScaledLength = fSampleLength * fScale;
        vec3 v3SampleRay = v3Ray * fSampleLength;
        vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5;

        // Now loop through the sample rays
        for (int i = 0; i < nSamples; i++)
        {
            float fHeight = length(v3SamplePoint);
            float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight));
            float fLightAngle = dot(normalize(v3LightPos), v3SamplePoint) / fHeight;
            float fCameraAngle = dot(normalize(v3Ray), v3SamplePoint) / fHeight;
            float fScatter = (-fStartOffset + fDepth*(scale(fLightAngle) - scale(fCameraAngle)))/* 0.25f*/;
            vec3 v3Attenuate = exp(-fScatter * (v3InvWavelength * fKr4PI + fKm4PI));
            v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
            v3SamplePoint += v3SampleRay;
        }

        // Finally, scale the Mie and Rayleigh colors and set up the varying
        // variables for the pixel shader
        vec4 newPos = vec4(gl_Vertex.xyz + view_position.xyz, 1.0);
        gl_Position = gl_ModelViewProjectionMatrix * vec4(newPos.xyz, 1.0);
        gl_Position.z = gl_Position.w * 0.99999;
        c1 = vec4(v3FrontColor * fKmESun, 1.0);
        c0 = vec4(v3FrontColor * (v3InvWavelength * fKrESun), 1.0);
        v3Direction = v3CameraPos - v3Pos;
    }

    Fragment Shader:

    uniform vec3 v3LightPos;

    varying vec3 v3Direction;
    varying vec4 c0;
    varying vec4 c1;

    const float g = -0.90;
    const float g2 = g * g;
    const float Exposure = 2.0;

    void main(void)
    {
        float fCos = dot(normalize(v3LightPos), v3Direction) / length(v3Direction);
        float fMiePhase = 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos*fCos) / pow(1.0 + g2 - 2.0*g*fCos, 1.5);
        gl_FragColor = c0 + fMiePhase * c1;
        gl_FragColor.a = 1.0;
    }
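    For reference, the fMiePhase term in the fragment shader evaluates the single-constant Mie phase function used in the GPU Gems 2 chapter (a Cornette-Shanks-style variant of Henyey-Greenstein):

    F(θ, g) = (3/2) · (1 − g²)/(2 + g²) · (1 + cos²θ) / (1 + g² − 2g·cosθ)^(3/2)

    where cosθ is taken between the light direction and the view ray, and g = −0.90 makes the scattering strongly directional, which is what produces the bright glow around the light source. If the banding and the black half come from the geometry terms rather than this function, the reversed ray noted in the vertex shader above is the first thing to check.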

    Read the article

  • An issue with tessellating a model with DirectX11

    - by Paul Ske
    I took the hardware tessellation tutorial from Rastertek and implemented texturing instead of color. That worked, so I wanted to apply the same technique to a model inside my game editor, but I noticed that nothing gets drawn. I compared it against the detailed tessellation sample from the DirectX SDK. Inside the shader file, if I replace the HullInputType with PixelInputType, it draws. So I suspect it's because, when I compile the shaders inside the program, they are compiled in the order VertexShader, PixelShader, HullShader, then DomainShader. Isn't it supposed to be VertexShader, HullShader, DomainShader, then PixelShader, or does the order really not matter? I am just curious why the model isn't drawn at all with HullInputType but renders fine with PixelInputType. Shader Code:

    [code]
    cbuffer ConstantBuffer
    {
        float4x4 WVP;
        float4x4 World;    // the rotation matrix
        float3 lightvec;   // the light's vector
        float4 lightcol;   // the light's color
        float4 ambientcol; // the ambient light's color
        bool isSelected;
    }

    cbuffer cameraBuffer
    {
        float3 cameraDirection;
        float padding;
    }

    cbuffer TessellationBuffer
    {
        float tessellationAmount;
        float3 padding2;
    }

    struct ConstantOutputType
    {
        float edges[3] : SV_TessFactor;
        float inside : SV_InsideTessFactor;
    };

    Texture2D Texture;
    Texture2D NormalTexture;

    SamplerState ss
    {
        MinLOD = 5.0f;
        MipLODBias = 0.0f;
    };

    struct HullOutputType
    {
        float3 position : POSITION;
        float2 texcoord : TEXCOORD0;
        float3 normal : NORMAL;
        float3 tangent : TANGENT;
    };

    struct HullInputType
    {
        float4 position : POSITION;
        float2 texcoord : TEXCOORD0;
        float3 normal : NORMAL;
        float3 tangent : TANGENT;
    };

    struct VertexInputType
    {
        float4 position : POSITION;
        float2 texcoord : TEXCOORD;
        float3 normal : NORMAL;
        float3 tangent : TANGENT;
        uint uVertexID : SV_VERTEXID;
    };

    struct PixelInputType
    {
        float4 position : SV_POSITION;
        float2 texcoord : TEXCOORD0; // texture coordinates
        float3 normal : NORMAL;
        float3 tangent : TANGENT;
        float4 color : COLOR;
        float3 viewDirection : TEXCOORD1;
        float4 depthBuffer : TEXTURE0;
    };

    HullInputType VShader(VertexInputType input)
    {
        HullInputType output;

        // NOTE: transforming by WVP here and again in the domain shader applies
        // the projection twice; with tessellation the vertex shader normally
        // passes object-space positions straight through and the domain shader
        // applies WVP once.
        output.position.w = 1.0f;
        output.position = mul(input.position, WVP);
        output.texcoord = input.texcoord;
        output.normal = input.normal;
        output.tangent = input.tangent;
        //output.normal = mul(normal, World);
        //output.tangent = mul(tangent, World);
        //output.color = output.color;
        //output.texcoord = texcoord; // set the texture coordinates, unmodified
        return output;
    }

    ConstantOutputType TexturePatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchID : SV_PrimitiveID)
    {
        ConstantOutputType output;

        output.edges[0] = tessellationAmount;
        output.edges[1] = tessellationAmount;
        output.edges[2] = tessellationAmount;
        output.inside = tessellationAmount;

        return output;
    }

    [domain("tri")]
    [partitioning("integer")]
    [outputtopology("triangle_cw")]
    [outputcontrolpoints(3)]
    [patchconstantfunc("TexturePatchConstantFunction")]
    HullOutputType HShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID)
    {
        HullOutputType output;

        // Set the position for this control point as the output position.
        output.position = patch[pointId].position;

        // Pass the remaining attributes through unchanged.
        output.texcoord = patch[pointId].texcoord;
        output.normal = patch[pointId].normal;
        output.tangent = patch[pointId].tangent;

        return output;
    }

    [domain("tri")]
    PixelInputType DShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch)
    {
        float3 vertexPosition;
        float2 uvPosition;
        float4 worldposition;
        PixelInputType output;

        // Interpolate the world-space position with the barycentric coordinates.
        float3 vWorldPos = uvwCoord.x * patch[0].position +
                           uvwCoord.y * patch[1].position +
                           uvwCoord.z * patch[2].position;

        // Determine the position of the new vertex.
        vertexPosition = vWorldPos;

        // Calculate the position of the new vertex against the world, view, and projection matrices.
        output.position = mul(float4(vertexPosition, 1.0f), WVP);

        // NOTE: the original code interpolated patch[n].position into texcoord,
        // normal and tangent as well; each attribute has to be interpolated from
        // its own control-point values.
        output.texcoord = uvwCoord.x * patch[0].texcoord +
                          uvwCoord.y * patch[1].texcoord +
                          uvwCoord.z * patch[2].texcoord;
        output.normal = uvwCoord.x * patch[0].normal +
                        uvwCoord.y * patch[1].normal +
                        uvwCoord.z * patch[2].normal;
        output.tangent = uvwCoord.x * patch[0].tangent +
                         uvwCoord.y * patch[1].tangent +
                         uvwCoord.z * patch[2].tangent;

        //output.depthBuffer = output.position;
        //output.depthBuffer.w = 1.0f;
        //worldposition = mul(output.position, WVP);
        //output.viewDirection = cameraDirection.xyz - worldposition.xyz;
        //output.viewDirection = normalize(output.viewDirection);

        return output;
    }
    [/code]

    Some things are commented out but will be put back in place once this is fixed. I'm probably not connecting something correctly.
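    For what it's worth, Direct3D 11 binds each programmable stage independently on the device context, so the order in which the shaders are compiled should not matter. What does matter is that the hull and domain shaders are actually bound at draw time and that the input assembler is switched to a control-point patch-list topology. Below is a minimal sketch of the host-side draw path under those assumptions; every identifier in it (context, vs, hs, ds, ps, vertexCount) is a placeholder, not something taken from the post above.

    [code]
    #include <d3d11.h>

    // Sketch: binding a tessellation pipeline in Direct3D 11. The compile order
    // of the shader blobs is irrelevant; each stage is bound separately here.
    void DrawTessellated(ID3D11DeviceContext* context,
                         ID3D11VertexShader*  vs,
                         ID3D11HullShader*    hs,
                         ID3D11DomainShader*  ds,
                         ID3D11PixelShader*   ps,
                         UINT vertexCount)
    {
        // With a hull shader bound, the IA topology must be a patch list whose
        // control-point count matches the hull shader's input (3 for triangles).
        // Leaving it at D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST makes the draw
        // call produce nothing, which matches the symptom described.
        context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);

        context->VSSetShader(vs, NULL, 0);
        context->HSSetShader(hs, NULL, 0); // tessellator stages: skipping these
        context->DSSetShader(ds, NULL, 0); // two calls leaves them unbound
        context->PSSetShader(ps, NULL, 0);

        context->Draw(vertexCount, 0);
    }
    [/code]

    If the editor's draw path never switches to a patch-list topology, that alone would explain why the mesh renders in the PixelInputType configuration (plain triangles, no tessellation) but vanishes once the hull and domain shaders are involved.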

    Read the article

  • Understanding each other in web development

    - by Pete Hotchkin
    During my career I have been lucky enough to work in several different roles within web development with many extremely talented people, from incredible designers who were passionate about the placement of every pixel right through to server administrators and DBAs who measured the improvements to their queries in the smallest possible units. The problem I always faced was that, more often than not, I was stuck in the middle trying to mediate between these different functions and enable each side to understand the other's point of view. In my experience, contention between these functional groups arises at two key points: during the build phase, and when there is a problem post-build. At both of these times it is often easier to pass the buck to someone else than to spend the time understanding the other person's perspective. Below is a quick look at two upcoming tools that will not only speed up the build phase for each function but also help when issues surface once a site has been pushed live.

    In my experience a web project goes through several phases of development. The first of these is design, generally handled as Photoshop files which are then passed on to a front-end developer. This is the first point at which heated discussions can arise. One problem I've seen several times is that the designer doesn't fully understand the platform constraints that need to be considered and, as a result, has designed something that does not translate very well or is simply not possible. Working at Red Gate, I am lucky enough to meet some amazing people, and that happened just the other day when I was introduced to Neil Kinnish and Pete Nelson, the creators of what I believe could be a great asset in this designer-developer relationship: Mixture.

    Mixture allows the front-end developer to quickly prototype a web page with built-in frameworks such as Bootstrap. It's not an IDE, however; it just sits there in the background and monitors the project files, so every time you save a file from your favorite IDE it will compile things like LESS, compact your JavaScript and automatically refresh your test browser so you can see the changes instantly. One of the best parts, I think, is a single button that pushes the changed files up to the web, so the designer can instantly see how far the developer has got and the problem he is facing at that moment, without anyone having to spend time setting up a remote server. I can see this being a real asset to remote teams where a compromise is needed between the designer and the front-end developer, or simply as a way to let the designer see how the build is progressing and suggest small alterations.

    Once the design has been built into the front end, the designer's job is generally done, and there are no further points of contention between the designer and the other functions involved in building these web projects. As the project moves into the stage of integrating the front end into the back end and deploying it to the production server, other functions are pulled in and other issues arise, such as whether the back-end developer understands the frameworks in use: the routes that are in place in an MVC application, for example, or the number of database calls the ORM layer is actually making.
    There are many tools that can help with these problems, such as MiniProfiler, which gives you a quick snapshot of what is going on directly in the browser. For a more in-depth look at what is happening, and to gain a deeper understanding of an application you may be working on, you may want to consider Glimpse. Created by Nik and Anthony, it is an application (installed via NuGet) that sits at the bottom of your browser and shows you how your application is pieced together and how the information on screen is being delivered, as it happens. With a wealth of community-built plugins, such as ones for NHibernate and LINQ to SQL (the full list of plugins is on NuGet), it can be customized to your own setup to truly delve into the code and see what is happening, and it can help reduce those confusing moments when you are unsure whether it is your code that is going wrong or something more sinister happening directly on the server.

    All the tools mentioned in this post help to do one thing above all: break down the barrier of understanding between the different functions involved in building and maintaining a web application. In my experience it is very easy to say "Well, that's not my problem" simply because the two functions involved don't truly understand each other's point of view. Software should not only be seen as a way to streamline our own working processes or as a debugging tool, but also as a communication aid that improves the entire lifecycle of a web project. Glimpse is actually the project I am the designer on, and I would love to get your feedback if you decide to try it out. If you would like to share your own experiences of working on web projects, please fill in your details at https://www.surveymk.com/s/joinGlimpse or add a comment below and I will get in touch with you.

    Read the article

< Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >