Search Results

Search found 10760 results on 431 pages for 'win 16 subsystem'.

Page 90/431 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • TeamViewer 8 beta won't run

    - by Conner Jones
    I installed TeamViewer 7, and then one of my friends using Windows got version 8, so I installed the version 8 beta for Linux. When I try to run it from the terminal I get these errors. I attempted to do as the comment below said, and when trying to run TeamViewer I still got an error: conner@DemonicGrace:~$ teamviewer Init... Checking setup... Launching TeamViewer... wine: cannot find L"C:\windows\system32\winemenubuilder.exe" err:wineboot:ProcessRunKeys Error running cmd L"C:\windows\system32\winemenubuilder.exe -a -r" (2) err:winedevice:ServiceMain driver L"MountMgr" failed to load err:secur32:SECUR32_initSchannelSP libgnutls not found, SSL connections will fail fixme:heap:HeapSetInformation (nil) 1 (nil) 0 fixme:ole:CoInitializeSecurity ((nil),-1,(nil),(nil),0,3,(nil),0,(nil)) - stub! fixme:heap:HeapSetInformation (nil) 1 (nil) 0 fixme:process:SetProcessShutdownParameters (00000100, 00000000): partial stub. fixme:resource:GetGuiResources (0xffffffff,0): stub fixme:win:EnumDisplayDevicesW ((null),0,0x32df64,0x00000000), stub! fixme:win:EnumDisplayDevicesW (L"\\.\DISPLAY1",0,0x32dc1c,0x00000000), stub! fixme:win:EnumDisplayDevicesW ((null),1,0x32df64,0x00000000), stub! Please help me out; if anyone has ideas, I'm more than willing to listen.

    Read the article

  • Ubuntu 12.04 server froze during the first boot after it was installed.

    - by user69021
    I installed Ubuntu Server 12.04 on my new server and it froze on the first boot. It just stopped and I can't proceed any further. Server specifications: Dell PowerEdge T620 CPU : Xeon E5-2665 2.4G x 2 RAM : 8GB RDIMM, 1333MHz x 12 HDD : 3TB Near Line SAS 7.2K x 8 RAID controller : PERC H710 GPU : NVIDIA Tesla C2075 x 4 I have a screenshot of the screen it stopped on, but I cannot attach it because my privilege level is currently too low. Here are the last messages shown while booting. [5.048743] Freeing unused kernel memory : 920k freed [5.049046] Write protecting the kernel read-only data : 12288k [5.052973] Freeing unused kernel memory : 1608k freed [5.056132] Freeing unused kernel memory : 1196k freed Loading, please wait... [5.070236] udevd[218]: starting version 175 Begin: Loading essential drivers ... done. Begin: Running /scripts/init-premount ... done. Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done. [5.089030] megasas: 00.00.06.12-rc1 Wed. Oct. 5 17:00:00 PDT 2011 [5.089518] megasas: 0x1000:0x005b:0x1028:0x1f35: bus 1:slot 0:func 0 [5.089739] megaraid_sas 0000:01:00.0: PCI INT A -> GSI 34 (level, low) -> IRQ 34 [5.089937] megasas: FW now in Ready state [5.090427] dca service started, version 1.12.1 [5.091463] Intel(R) Gigabit Ethernet Network Driver - version 3.2.10-k [5.091578] Copyright (c) 2007-2011 Intel Corporation. [5.091712] igb 0000:06:00.0: PCI INT A -> GSI 16 (level,low) -> IRQ 16 [5.111090] megasas:IOC Init cmd success [5.123124] usb 1-1:new high-speed USB device number 2 using ehci_hcd What can I do about this?

    Read the article

  • Bitmap Font Displays in Center Always Without Coding it Manually (Fix Coordinate Problem on Text)

    - by David Dimalanta
    Is there a way to keep the text centered without coding it manually, especially when it updates? I'm making a display for the highest score. Let's say the score is 9. However, if the score is 9,999,999, the text is still drawn at the same fixed X and Y coordinates. Is there really a way to keep the text centered, especially when it changes because a player beats the world record? Here's my code inside the sprite batch: font.setScale(1.5f); font.draw(batch, "HIGHEST SCORE:", (900/10)*1 + 60, (1280/16)*10); font.draw(batch, "" + 9999999 + "", (900/10)*4, (1280/16)*8); batch.draw(grid_guide, 0, 0, 900, 1280); // --> For testing purposes only. // Where 9999999 is a new record score, for example. Here's an image shown as an example. I added a red grid so that I could check whether the score, when updated, always displays in the center no matter how many digits it has. However, the position is fixed, so I have to figure out how to center it automatically regardless of the number of digits when updating the high score. I have used LibGDX preferences successfully, though, to save and load the high score records.
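    One approach (a hedged sketch, not from the post): measure the rendered width of the string every frame and derive X from it, so any digit count stays centered. This assumes a LibGDX version that provides GlyphLayout; older releases expose the same idea through BitmapFont.getBounds(). The 900-wide virtual resolution and the font/batch names are taken from the question.

      import com.badlogic.gdx.graphics.g2d.BitmapFont;
      import com.badlogic.gdx.graphics.g2d.GlyphLayout;
      import com.badlogic.gdx.graphics.g2d.SpriteBatch;

      // Center a score horizontally by measuring it each frame.
      public static void drawCentered(SpriteBatch batch, BitmapFont font,
                                      long score, float screenWidth, float y) {
          GlyphLayout layout = new GlyphLayout(font, String.valueOf(score));
          float x = (screenWidth - layout.width) / 2f;  // horizontal center
          font.draw(batch, layout, x, y);               // draw at the measured position
      }

      // Usage inside render(): drawCentered(batch, font, 9999999L, 900, (1280 / 16f) * 8);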

    Read the article

  • Java: How to Make a Player Class in a Tile-Based RPG

    - by A.K.
    So I've been following a JavaHub tutorial that basically uses a pixel engine similar to MiniCraft. I've attempted to make a Player Class as such, and I'm basically making a mock Pokemon game for learning's sake: package pokemon.entity; import java.awt.Rectangle; import pokemon.gfx.Screen; import pokemon.levelgen.Tile; import pokemon.entity.SpritesManage;; public class Player { int x, y; int vx, vy; public Rectangle AshRec; public Sprite AshSprite; Screen screen; Sprite[][] AshSheet; public Player() { AshSprite = SpritesManage.AshSheet[1][0]; AshRec = new Rectangle(0, 0, 16, 16); x = 0; y = 0; vx = 1; vy = 1; screen.renderSprite(0, 0, AshSprite); } public void update() { move(); checkCollision(); } private void checkCollision() { } private void move() { AshRec.x += vx; AshRec.y += vy; } public void render(Screen screen, int x, int y) { screen.renderSprite(x, y, AshSprite); } } I guess what I really want to do is have the Player centered in the screen and have the sprite drawn based on an Input Handler. I'm just stumped as to how to sync these together.
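    A minimal sketch of one way to wire this up (the Screen, Sprite, renderSprite and AshRec names come from the question's code; the boolean input flags and the screen width/height fields are hypothetical): keep the player's world position in x/y, let the input handler set the velocity in update(), and always render the sprite at the middle of the screen while the level is drawn offset by -x/-y.

      public void update(boolean up, boolean down, boolean left, boolean right) {
          vx = (left ? -1 : 0) + (right ? 1 : 0);   // velocity from the input state
          vy = (up ? -1 : 0) + (down ? 1 : 0);
          x += vx;                                  // world position, not screen position
          y += vy;
          AshRec.setLocation(x, y);                 // keep the collision box in sync
      }

      public void render(Screen screen) {
          int centerX = screen.width / 2 - 8;       // hypothetical width/height fields;
          int centerY = screen.height / 2 - 8;      // 8 = half of the 16x16 sprite
          screen.renderSprite(centerX, centerY, AshSprite);
      }

    The level renderer would then subtract the player's x/y when drawing tiles, which is what makes the player appear fixed at the screen center.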

    Read the article

  • Processing Kinect v2 Color Streams in Parallel

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/08/20/processing-kinect-v2-color-streams-in-parallel.aspx I've really been enjoying being a part of the Kinect for Windows Developer's Preview. The new hardware has some really impressive capabilities. However, with great power comes great system specs. Unfortunately, my little laptop that could is not 100% up to the task; I've had to get a little creative. The most disappointing thing I've run into is that I can't always cleanly display the color camera stream in managed code. I managed to strip the code down to what I believe is the bare minimum: using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) { if( null == _ColorFrame ) return;   BitmapToDisplay.Lock(); _ColorFrame.CopyConvertedFrameDataToIntPtr( BitmapToDisplay.BackBuffer, Convert.ToUInt32( BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight ), ColorImageFormat.Bgra ); BitmapToDisplay.AddDirtyRect( new Int32Rect( 0, 0, _ColorFrame.FrameDescription.Width, _ColorFrame.FrameDescription.Height ) ); BitmapToDisplay.Unlock(); } With this snippet, I'm placing the converted Bgra32 color stream directly on the BackBuffer of the WriteableBitmap. This gives me pretty smooth playback, but I still get the occasional freeze for half a second. After a bit of profiling, I discovered there were a few problems. The first problem is the size of the buffer along with the conversion on the buffer. At this time, the raw image format of the data from the Kinect is Yuy2. This is great for direct video processing. It would be ideal if I had a WriteableVideo object in WPF. However, this is not the case. Further digging led me to the real problem. It appears that the SDK is converting the input serially. Let's think about this for a second. The color camera is a 1080p camera. As we should all know, this gives us a native resolution of 1920 x 1080. This produces 2,073,600 pixels. Yuy2 uses 4 bytes per 2 pixels, for a buffer size of 4,147,200 bytes. Bgra32 uses 4 bytes per pixel, for a buffer size of 8,294,400 bytes. The SDK appears to be doing this on one thread. I started wondering if I could do this better myself. I mean, I have 8 cores in my system. Why can't I use them all? The first problem is converting a Yuy2 frame into a Bgra32 frame. It is NOT trivial. I spent a day of research on just how to do this. In the end, I didn't even produce the best algorithm possible, but it did work. After I managed to get that to work, I knew my next step was to get the conversion operation off the UI Thread. This was a simple process of throwing the work into a Task. Of course, this meant I had to marshal the final write to the WriteableBitmap back to the UI thread. Finally, I needed to vectorize the operation so I could run it safely in parallel. This was, mercifully, not quite as hard as I thought it would be. I had my loop return an index to a pair of pixels. From there, I had to tell the loop to do everything for this pair of pixels. If you're wondering why I did it for pairs of pixels, look back above at the specification for the Yuy2 format. I won't go into full detail on why each 4 bytes contains 2 pixels of information, but rest assured that there is a reason why the format is described in that way. The first working attempt at this algorithm successfully turned my poor laptop into a space heater. I very quickly brought all 8 cores up to about 97% usage and kept them there.
That's when I remembered that obscure option in the Task Parallel Library where you could limit the amount of parallelism used. After a little trial and error, I discovered 4 parallel tasks were enough for most cases. This yielded the following code: private byte ClipToByte( int p_ValueToClip ) { return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) ); }   private void ColorFrameArrived( object sender, ColorFrameArrivedEventArgs e ) { if( null == e.FrameReference ) return;   // If you do not dispose of the frame, you never get another one... using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) { if( null == _ColorFrame ) return;   byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel]; byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight]; _ColorFrame.CopyRawFrameDataToArray( _InputImage );   Task.Factory.StartNew( () => { ParallelOptions _ParallelOptions = new ParallelOptions(); _ParallelOptions.MaxDegreeOfParallelism = 4;   Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => { // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16; int _U = _InputImage[( _Index << 2 ) + 1] - 128; int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16; int _V = _InputImage[( _Index << 2 ) + 3] - 128;   byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 ); byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 ); byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 0] = _B; _OutputImage[( _Index << 3 ) + 1] = _G; _OutputImage[( _Index << 3 ) + 2] = _R; _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A   _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 ); _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 ); _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 4] = _B; _OutputImage[( _Index << 3 ) + 5] = _G; _OutputImage[( _Index << 3 ) + 6] = _R; _OutputImage[( _Index << 3 ) + 7] = 0xFF; } );   Application.Current.Dispatcher.Invoke( () => { BitmapToDisplay.WritePixels( new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ), _OutputImage, BitmapToDisplay.BackBufferStride, 0 ); } ); } ); } } This seemed to yield the results I wanted, but there was still the occasional stutter. This led to what I realized was the second problem. There is a race condition between the UI Thread and me locking the WriteableBitmap so I can write the next frame. Again, I'm writing approximately 8MB to the back buffer. Then, I started thinking I could cheat. The Kinect is running at 30 frames per second. The WPF UI Thread runs at 60 frames per second. This made me not feel bad about exploiting the Composition Thread. I moved the bulk of the code from the FrameArrived handler into CompositionTarget.Rendering. Once I was in there, I polled for a frame and rendered it if it existed. Since, in theory, I'm only killing the Composition Thread every other hit, I decided I was ok with this for cases where silky smooth video performance REALLY mattered. That code looked like this: private byte ClipToByte( int p_ValueToClip ) { return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ?
byte.MaxValue : p_ValueToClip ) ); }   void CompositionTarget_Rendering( object sender, EventArgs e ) { using( ColorFrame _ColorFrame = FrameReader.AcquireLatestFrame() ) { if( null == _ColorFrame ) return;   byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel]; byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight]; _ColorFrame.CopyRawFrameDataToArray( _InputImage );   ParallelOptions _ParallelOptions = new ParallelOptions(); _ParallelOptions.MaxDegreeOfParallelism = 4;   Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => { // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16; int _U = _InputImage[( _Index << 2 ) + 1] - 128; int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16; int _V = _InputImage[( _Index << 2 ) + 3] - 128;   byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 ); byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 ); byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 0] = _B; _OutputImage[( _Index << 3 ) + 1] = _G; _OutputImage[( _Index << 3 ) + 2] = _R; _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A   _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 ); _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 ); _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 4] = _B; _OutputImage[( _Index << 3 ) + 5] = _G; _OutputImage[( _Index << 3 ) + 6] = _R; _OutputImage[( _Index << 3 ) + 7] = 0xFF; } );   BitmapToDisplay.WritePixels( new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ), _OutputImage, BitmapToDisplay.BackBufferStride, 0 ); } }

    Read the article

  • X-notifier doesn't work in Chromium Browser

    - by cipricus
    It just keeps checking in vain. I also cannot import or export data, but get this error. I use the latest versions of both in Lubuntu 12.04. In Google Chrome it works. What could the problem be? Edit - following vasa1's comment, running sudo aa-status I get: apparmor module is loaded. 16 profiles are loaded. 16 profiles are in enforce mode. /sbin/dhclient /usr/bin/evince /usr/bin/evince-previewer /usr/bin/evince-previewer//launchpad_integration /usr/bin/evince-previewer//sanitized_helper /usr/bin/evince-thumbnailer /usr/bin/evince-thumbnailer//sanitized_helper /usr/bin/evince//launchpad_integration /usr/bin/evince//sanitized_helper /usr/lib/NetworkManager/nm-dhcp-client.action /usr/lib/connman/scripts/dhclient-script /usr/lib/cups/backend/cups-pdf /usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper /usr/sbin/cupsd /usr/sbin/ntpd /usr/sbin/tcpdump 0 profiles are in complain mode. 3 processes have profiles defined. 3 processes are in enforce mode. /sbin/dhclient (1562) /usr/sbin/cupsd (916) /usr/sbin/ntpd (1695) 0 processes are in complain mode. 0 processes are unconfined but have a profile defined.

    Read the article

  • Wireless drivers

    - by Kencer
    The results for my laptop are as below. How will i install wireless drivers and graphics? $ lspci 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04) 00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04) 00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04) 00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4) 00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4) 00:1c.3 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 4 (rev c4) 00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04) 00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04) 00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04) 00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04) 02:00.0 Network controller: Broadcom Corporation Device 4365 (rev 01) 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 07) $ sudo lshw -c network *-network UNCLAIMED description: Network controller product: Broadcom Corporation vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:02:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:f0500000-f0507fff *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: 07 serial: 3c:97:0e:85:c0:0d size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl8168e-3_0.0.4 03/27/12 ip=172.16.96.36 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:43 ioport:2000(size=256) memory:f0404000-f0404fff memory:f0400000-f0403fff $ rfkill list all 0: tpacpi_bluetooth_sw: Bluetooth Soft blocked: no Hard blocked: no
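    The lspci line "Broadcom Corporation Device 4365" usually corresponds to the BCM43142 chip, which is handled by the proprietary Broadcom STA (wl) driver on Ubuntu. A hedged sketch of the usual route, assuming that chip and a working wired connection for downloading packages:

      sudo apt-get update
      sudo apt-get install bcmwl-kernel-source   # builds and installs the proprietary wl module
      sudo modprobe wl                           # load it without rebooting

    The same driver is also offered through the "Additional Drivers" tool. The integrated Intel graphics listed above normally needs no extra driver; it is supported out of the box by the stock kernel and Mesa.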

    Read the article

  • Mplayer not working after update

    - by R. Morgenstern
    After an update with update-manager in Ubuntu 12.04, MPlayer is not working anymore. It needs ffmpeg, but that can't be installed due to unmet dependencies. I added a PPA for ffmpeg, but that did not solve the problem. See the output: Python (v2.7) requires to install .... GStreamer ffmpeg video plugin. Codecs to play mpeg, divx, mpeg4, ac3, wmv and asf files. Using Install, I get an error message that it can't be installed due to unmet dependencies; see the list below. How can I fix this problem? Thanks in advance for guidance. Renate The following packages have unmet dependencies: gstreamer0.10-ffmpeg: Depends: libavcodec-extra-53 (>= 4:0.7.3-1) but 4:0.8.3ubuntu0.12.04.1 is to be installed Depends: libavformat-extra-53 (>= 4:0.7.3-1) but 4:0.8.3ubuntu0.12.04.1 is to be installed Depends: libavutil-extra-51 (>= 4:0.7.3-1) but 6:0.10.4.0ubuntu0jon2.2 is to be installed Depends: libc6 (>= 2.7) but 2.15-0ubuntu10 is to be installed Depends: libglib2.0-0 (>= 2.31.2) but 2.32.3-0ubuntu1 is to be installed Depends: libgstreamer-plugins-base0.10-0 (>= 0.10.31) but 0.10.36-1 is to be installed Depends: libgstreamer0.10-0 (>= 0.10.31) but 0.10.36-1ubuntu1 is to be installed Depends: liborc-0.4-0 (>= 1:0.4.16) but 1:0.4.16-1ubuntu2 is to be installed Depends: libpostproc-extra-52 (>= 4:0.7.3-1) but 4:0.8.3ubuntu0.12.04.1 is to be installed Depends: libswscale-extra-2 (>= 4:0.7.3-1) but 4:0.8.3ubuntu0.12.04.1 is to be installed

    Read the article

  • 2D Camera Acceleration/Lag

    - by Cyral
    I have a nice camera set up for my 2D XNA game. I'm wondering how I should make the camera have 'acceleration' or 'lag' so it smoothly follows the player, instead of following it 'exactly' like mine does now. I'm thinking I somehow need to Lerp the values when I set cameraPosition. Here's my code: private void ScrollCamera(Viewport viewport) { float ViewMargin = .35f; float marginWidth = viewport.Width * ViewMargin; float marginLeft = cameraPosition.X + marginWidth; float marginRight = cameraPosition.X + viewport.Width - marginWidth; float TopMargin = .3f; float BottomMargin = .1f; float marginTop = cameraPosition.Y + viewport.Height * TopMargin; float marginBottom = cameraPosition.Y + viewport.Height - viewport.Height * BottomMargin; Vector2 CameraMovement; Vector2 maxCameraPosition; CameraMovement.X = 0.0f; if (Player.Position.X < marginLeft) CameraMovement.X = Player.Position.X - marginLeft; else if (Player.Position.X > marginRight) CameraMovement.X = Player.Position.X - marginRight; maxCameraPosition.X = 16 * Width - viewport.Width; cameraPosition.X = MathHelper.Clamp(cameraPosition.X + CameraMovement.X, 0.0f, maxCameraPosition.X); CameraMovement.Y = 0.0f; if (Player.Position.Y < marginTop) //above the top margin CameraMovement.Y = Player.Position.Y - marginTop; else if (Player.Position.Y > marginBottom) //below the bottom margin CameraMovement.Y = Player.Position.Y - marginBottom; maxCameraPosition.Y = 16 * Height - viewport.Height; cameraPosition.Y = MathHelper.Clamp(cameraPosition.Y + CameraMovement.Y, 0.0f, maxCameraPosition.Y); }
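    A hedged sketch of the Lerp idea (not the poster's code): keep the clamped value that ScrollCamera currently assigns directly as a target, then move the real camera only a fraction of the way toward it each update. The smoothing constant and frame-rate correction below are assumptions to illustrate the technique.

      // Exponential smoothing toward the clamped target computed by ScrollCamera.
      private Vector2 cameraPosition;
      private const float Smoothing = 0.1f;   // 0 = never moves, 1 = snaps instantly

      private void FollowCamera(Vector2 targetPosition, GameTime gameTime)
      {
          // Scale the factor by elapsed time so the lag feels the same at any frame rate.
          float t = 1f - (float)Math.Pow(1f - Smoothing,
              gameTime.ElapsedGameTime.TotalSeconds * 60.0);
          cameraPosition = Vector2.Lerp(cameraPosition, targetPosition, t);
      }

    The design choice here is that the margin/clamp logic stays untouched; only the final assignment is replaced by the smoothed follow.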

    Read the article

  • Is this the most effective simple way to display a moving image? SDL2

    - by user36324
    I've looked around for tutorials on SDL2, but there aren't many, so I am curious: I was messing around, and is this an effective way to move an image? One problem is that the image drags a trail along behind it as it moves. #include "SDL.h" #include "SDL_image.h" int main(int argc, char* argv[]) { bool exit = false; SDL_Init(SDL_INIT_EVERYTHING); SDL_Window *win = SDL_CreateWindow("Hello World!", 100, 100, 640, 480, SDL_WINDOW_SHOWN); SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC); SDL_Surface *png = IMG_Load("character.png"); SDL_Rect src; src.x = 0; src.y = 0; src.w = 161; src.h = 159; SDL_Rect dest; dest.x = 50; dest.y = 50; dest.w = 161; dest.h = 159; SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, png); SDL_FreeSurface(png); while(exit==false){ dest.x++; SDL_RenderClear(ren); SDL_RenderCopy(ren, tex, &src, &dest); SDL_RenderPresent(ren); } SDL_Delay(5000); SDL_DestroyTexture(tex); SDL_DestroyRenderer(ren); SDL_DestroyWindow(win); SDL_Quit(); }
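    A hedged sketch of the usual main-loop shape (not the poster's exact fix): set an explicit draw color before SDL_RenderClear so the backbuffer really is wiped each frame, and poll events so the loop can actually exit. Window, renderer, texture and rectangle setup are assumed to match the question's code.

      SDL_Event event;
      while (!exit) {
          while (SDL_PollEvent(&event)) {             /* drain pending events */
              if (event.type == SDL_QUIT)
                  exit = true;                        /* close button ends the loop */
          }
          dest.x++;                                   /* move the sprite one pixel right */
          SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);  /* color used by SDL_RenderClear */
          SDL_RenderClear(ren);                       /* wipe the whole backbuffer */
          SDL_RenderCopy(ren, tex, &src, &dest);      /* draw the sprite at its new spot */
          SDL_RenderPresent(ren);                     /* present; vsync paces the loop */
      }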

    Read the article

  • Ubuntu 14.04: Fine tuning Touchpad for ThinkPad S431

    - by ramgorur
    I am using Ubuntu 14.04 on a Lenovo ThinkPad S431. The touchpad is very bumpy and tricky to use: it slides down with a small touch and sometimes jerks erratically. I have tried to modify the settings in the /usr/share/X11/xorg.conf.d/50-synaptics.conf file as below -- Section "InputClass" Identifier "touchpad catchall" Driver "synaptics" MatchIsTouchpad "on" MatchDevicePath "/dev/input/event*" Option "JumpyCursorThreshold" "250" Option "VertResolution" "100" # Option "HorizResolution" "65" # Option "MinSpeed" "1" # Option "MaxSpeed" "1" # Option "AccelerationProfile" "1" # Option "AdaptiveDeceleration" "8" # Option "ConstantDeceleration" "1" # Option "VelocityScale" "128" Option "HorizHysteresis" "150" Option "VertHysteresis" "150" EndSection There are lots of options here; does anyone know how to get fine-tuned values for the above options (for the ThinkPad S431)? The hysteresis values seem to alleviate the problem a little, but I failed to get a perfect result. EDIT: According to this bug report for the ThinkPad X230 (+X230t), I set these values and they are quite good for now -- Option "VertResolution" "100" Option "HorizResolution" "65" Option "MinSpeed" "1" Option "MaxSpeed" "1" Option "AccelerationProfile" "2" Option "AdaptiveDeceleration" "16" Option "ConstantDeceleration" "16" Option "VelocityScale" "32" Option "HorizHysteresis" "50" Option "VertHysteresis" "50" and then you need to increase the cursor speed manually in the Unity mouse settings. But I am still looking for fully functional (possibly with all the gestures), fine-tuned touchpad settings for the S431. Further help is appreciated.
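    For experimenting, a hedged sketch: the synaptics driver exposes the same options at runtime through synclient, so values can be tried instantly and only written back to 50-synaptics.conf once they feel right. The option names below come from the question; the numbers are just starting points, not known-good values for the S431.

      synclient -l | grep -i hysteresis                  # show the values currently in effect
      synclient HorizHysteresis=50 VertHysteresis=50     # try new hysteresis values immediately
      synclient MinSpeed=1 MaxSpeed=1 AccelFactor=0.05   # tweak pointer speed and acceleration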

    Read the article

  • Find the possible variations of picking one item from each of multiple baskets

    - by tugberk
    I have three baskets of balls, and each of them has 10 balls with the following numbers: Basket 1: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 Basket 2: 11, 12, 13, 14, 15, 16, 17, 18, 19, 20 Basket 3: 21, 22, 23, 24, 25, 26, 27, 28, 29, 30 What would be the possible variations if I were to pick one ball from each basket? I guess this is called probability in mathematics, but I'm not sure. How would you write code in C# (or any other programming language) to get the correct results? Edit: Based on @Kilian Foth's comment, here is the solution in C#: class Program { static void Main(string[] args) { IEnumerable<string> basket1 = new List<string> { "1", "2", "3", "4", "5", "6", "7", "8", "9", "10" }; IEnumerable<string> basket2 = new List<string> { "11", "12", "13", "14", "15", "16", "17", "18", "19", "20" }; IEnumerable<string> basket3 = new List<string> { "21", "22", "23", "24", "25", "26", "27", "28", "29", "30" }; foreach (var item1 in basket1) foreach (var item2 in basket2) foreach (var item3 in basket3) { Console.WriteLine("{0}, {1}, {2}", item1, item2, item3); } Console.ReadLine(); } }
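    What is being enumerated here is the Cartesian product of the three baskets, so there are 10 x 10 x 10 = 1,000 combinations. A hedged LINQ sketch equivalent to the nested foreach loops above (same basket1/basket2/basket3 collections assumed; requires using System.Linq):

      // Multiple "from" clauses produce every (item1, item2, item3) combination.
      var combinations =
          from item1 in basket1
          from item2 in basket2
          from item3 in basket3
          select string.Format("{0}, {1}, {2}", item1, item2, item3);

      foreach (var line in combinations)
          Console.WriteLine(line);

      Console.WriteLine(combinations.Count());   // prints 1000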

    Read the article

  • M-Audio Delta 1010LT on 12.04

    - by user74039
    I have 12.04 64-bit installed and my soundcard is a Delta 1010LT. It seems to be only partially detected. I've been following the steps here: https://help.ubuntu.com/community/SoundTroubleshooting/ lspci -v | grep -A7 -i "audio" shows this: 04:07.0 Multimedia audio controller: VIA Technologies Inc. ICE1712 [Envy24] PCI Multi-Channel I/O Controller (rev 02) Subsystem: VIA Technologies Inc. M-Audio Delta 1010LT Flags: bus master, medium devsel, latency 64, IRQ 22 I/O ports at ec00 [size=32] I/O ports at e880 [size=16] I/O ports at e800 [size=16] I/O ports at e480 [size=64] Capabilities: <access denied> Kernel driver in use: snd_ice1712 aplay shows this: **** List of PLAYBACK Hardware Devices **** card 0: M1010LT [M Audio Delta 1010LT], device 0: ICE1712 multi [ICE1712 multi] Subdevices: 1/1 Subdevice #0: subdevice #0 In the sound settings on the desktop all I see is the ICE1712 S/PDIF, which I don't use; I want to use the individual outputs on the card. I'm not so bothered about inputs, I just want playback for now. If I open alsamixer in the console, I see all of the output and input channels. I've raised the volume on them, but I don't get anything in the sound settings on the desktop, and when I play any sound I hear nothing. Can someone help?

    Read the article

  • OSSEC HIDS Notification "Unknown problem somewhere in the system." (seems like an HDD issue)

    - by John
    From what I understand, something is wrong with the HDD. I am trying to find some commands to run tests and check whether the hard disk is OK. I will post a full list of logs after a reboot of the system: "Unknown problem somewhere in the system." kernel: ata2.00: failed command: READ FPDMA QUEUED kernel: res 51/40:c8:38:5c:16/00:00:00:00:00/40 Emask 0x409 (media error) <F> kernel: ata2.00: error: { UNC } kernel: ata2.00: failed command: READ FPDMA QUEUED kernel: res 51/40:78:88:5c:16/00:00:00:00:00/40 Emask 0x409 (media error) <F> kernel: sd 1:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor] kernel: sd 1:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed kernel: md/raid1:md1: read error corrected (8 sectors at 1461400 on sda1) kernel: sd 1:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed kernel: sd 1:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed kernel: md/raid1:md1: read error corrected (8 sectors at 1461672 on sda1) Also, some of these logs appear duplicated, or even more often. Thanks.
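    The usual way to test the disk directly is SMART. A hedged sketch, assuming smartmontools is (or can be) installed and the failing disk is /dev/sda, as the md/raid1 messages suggest:

      sudo apt-get install smartmontools   # provides smartctl
      sudo smartctl -H /dev/sda            # quick overall health verdict
      sudo smartctl -a /dev/sda            # full attributes; watch Reallocated_Sector_Ct and Current_Pending_Sector
      sudo smartctl -t short /dev/sda      # run a short self-test (a couple of minutes)
      sudo smartctl -l selftest /dev/sda   # read the self-test results afterwards

    Rising pending or reallocated sector counts alongside the UNC/media errors above usually mean the drive should be replaced and the RAID rebuilt onto a new disk.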

    Read the article

  • Trying to wrap my head around class structure for domain-specific language

    - by svaha
    My work is mostly in embedded systems programming in C, and the proper class structure to pull this off eludes me. Currently we communicate via C# and Visual Basic with a large collection of servos, pumps, and sensors via a USB-to-CAN hid device. Right now, it is quite cumbersome to communicate with the devices. To read the firmware version of controller number 1 you would use: SendCan(Controller,1,ReadFirmwareVersion) or SendCan(8,1,71) This sends three bytes on the CAN bus: (8,1,71) Connected to controllers are various sensors. SendCan(Controller,1,PassThroughCommand,O2Sensor,2,ReadO2) would tell Controller number 1 to pass a command to O2 Sensor number 2 to read O2 by sending the bytes 8,1,200,16,2,0 I would like to develop a domain-specific language for this setup. Instead of commands issued like they are currently, commands would be written like this: Controller1.SendCommand.O2Sensor2.ReadO2 to send the bytes 8,1,200,16,0 What's the best way to do this? Some machines have 20 O2 Sensors, others have 5 controllers, so the numbers and types of controllers and sensors, pumps, etc. aren't static.
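    One common shape for this kind of internal DSL in C# is a small fluent builder that accumulates the byte sequence and only sends it at the end, which also keeps the variable number of controllers and sensors out of the type system. A hedged sketch follows; the class and method names are invented for illustration, only the byte values 8, 1, 200, 16, 2, 0 come from the post, and the SendCan call is assumed to accept the assembled byte array.

      // Hypothetical fluent wrapper over the existing SendCan transport.
      public class CanCommand
      {
          private readonly List<byte> bytes = new List<byte>();   // needs System.Collections.Generic

          private CanCommand(byte deviceType, byte id)
          {
              bytes.Add(deviceType);
              bytes.Add(id);
          }

          public static CanCommand Controller(byte id)
          {
              return new CanCommand(8, id);        // 8 = Controller device type
          }

          public CanCommand O2Sensor(byte id)
          {
              bytes.Add(200);                      // 200 = PassThroughCommand
              bytes.Add(16);                       // 16  = O2Sensor device type (assumed)
              bytes.Add(id);
              return this;
          }

          public void ReadFirmwareVersion() { Finish(71); }
          public void ReadO2()              { Finish(0); }   // 0 = ReadO2 (assumed)

          private void Finish(byte command)
          {
              bytes.Add(command);
              SendCan(bytes.ToArray());            // existing transport, assumed signature
          }
      }

      // Usage: CanCommand.Controller(1).O2Sensor(2).ReadO2();   // sends 8,1,200,16,2,0

    Because each machine's device inventory differs, the builder stays generic; a configuration file or discovery step can supply which controller and sensor numbers exist on a given machine.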

    Read the article

  • How do I configure an Intel HD Graphics 4000?

    - by derabbink
    First off, please note that last night I already posted this question to a Launchpad mailing list, so this could be considered a cross-post. However, I think this is a better place to ask the same question. The question: How can I configure my Ubuntu 12.04, with an upgraded kernel (3.6), to use the Intel HD Graphics 4000 adapter? (Intel HD 4000 is the standard graphics adapter of 3rd-generation Intel Core i7 (Ivy Bridge) CPUs.) Some output: $ glxinfo name of display: :0 X Error of failed request: BadRequest (invalid request code or no such operation) Major opcode of failed request: 154 (GLX) Minor opcode of failed request: 19 (X_GLXQueryServerString) Serial number of failed request: 12 Current serial number in output stream: 12 $ cat /etc/X11/xorg.conf this is probably the farthest from what it should be Section "Screen" Identifier "Default Screen" DefaultDepth 24 EndSection Section "Module" Load "glx" EndSection $ lspci I only listed the lines I think are relevant. If you want more info in order to help me, please comment :) 00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09) 00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04) 16:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Whistler XT [AMD Radeon HD 6700M Series] 16:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Turks HDMI Audio [Radeon HD 6000 Series]
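    A hedged sketch of a minimal xorg.conf Device section that pins the Intel adapter to the intel DDX driver. It assumes xserver-xorg-video-intel is installed; with the hybrid Intel/AMD setup shown by lspci, the BIOS graphics switch and the AMD driver also matter, so this is only one piece of the configuration, not a complete fix.

      Section "Device"
          Identifier  "Intel HD Graphics 4000"
          Driver      "intel"
          BusID       "PCI:0:2:0"      # matches 00:02.0 from the lspci output
      EndSection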

    Read the article

  • Enable [command] key to register as something other than just [ctrl]?

    - by gojomo
    I'm running 10.04 LTS inside VMware Fusion on a Mac. The [command] key (aka [windows] on many keyboards) is almost always behaving as if it were [ctrl], even though I haven't done anything explicit to request that behavior. In fact, in System > Preferences > Keyboard > Layouts > Options > Alt/Win key behavior, 'default' is chosen (rather than the 'Control is mapped to Win keys' option). However, choosing other options there does not seem to change the handling of [command], at least not as tested in the System > Preferences > Keyboard Shortcuts app. (No matter what I've tried, [command]-x is always detected as [Ctrl]-x in that app.) I've tried: various options under System > Preferences > Keyboard > Layouts > Options > Alt/Win key behavior; toggling the VMware Fusion Preferences > Keyboard & Mouse > Key Mappings setup, which claims to map '[command]' to '[windows]', and restarting the VM in each position; and the xmodmap lines suggested at https://help.ubuntu.com/community/MappingWindowsKey And yet, it's clear that not all Ubuntu apps are merging [ctrl] and [command], because in 'Terminal', [shift]-[ctrl]-c will Copy, but [shift]-[command]-c will not. If the [command]/[windows] key were recognized as anything else ('Super', 'Meta', 'Hyper'? I don't care as long as it's not 'Control'), then I could achieve my real goal (which happens to be enabling CMD-based cut/copy/paste in PyCharm, while leaving CTRL-X/etc available for emacs-like bindings). I think any solution which manages to make [command]-x appear as something other than [ctrl]-x in Preferences > Keyboard Shortcuts will probably do the trick.

    Read the article

  • Matrix loading problems with jbullet and lwjgl

    - by Quintin
    The following code does not load the matrix correctly from jbullet. //box is a RigidBody Transform trans = new Transform(); trans = box.getMotionState().getWorldTransform(trans); float[] matrix = new float[16]; trans.getOpenGLMatrix(matrix); // pass that matrix to OpenGL and render the cube FloatBuffer buffer = ByteBuffer.allocateDirect(4*16).asFloatBuffer().put(matrix); buffer.rewind(); glPushMatrix(); glMultMatrix(buffer); glBegin(GL_POINTS); glVertex3f(0,0,0); glEnd(); glPopMatrix(); the jbullet is configured as so: CollisionConfiguration = new DefaultCollisionConfiguration(); dispatcher = new CollisionDispatcher(collisionConfiguration); Vector3f worldAabbMin = new Vector3f(-10000,-10000,-10000); Vector3f worldAabbMax = new Vector3f(10000,10000,10000); AxisSweep3 overlappingPairCache = new AxisSweep3(worldAabbMin, worldAabbMax); SequentialImpulseConstraintSolver solver = new SequentialImpulseConstraintSolver(); dynamicWorld = new DiscreteDynamicsWorld(dispatcher, overlappingPairCache, solver, collisionConfiguration); dynamicWorld.setGravity(new Vector3f(0,-10,0)); dynamicWorld.getDispatchInfo().allowedCcdPenetration = 0f; CollisionShape groundShape = new BoxShape(new Vector3f(1000.f, 50.f, 1000.f)); Transform groundTransform = new Transform(); groundTransform.setIdentity(); groundTransform.origin.set(new Vector3f(0.f, -60.f, 0.f)); float mass = 0f; Vector3f localInertia = new Vector3f(0, 0, 0); DefaultMotionState myMotionState = new DefaultMotionState(groundTransform); RigidBodyConstructionInfo rbInfo = new RigidBodyConstructionInfo(mass, myMotionState, groundShape, localInertia); RigidBody body = new RigidBody(rbInfo); dynamicWorld.addRigidBody(body); dynamicWorld.clearForces(); Nothing is rendered on the screen. What am I doing wrong?
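    One thing worth checking (a hedged guess, not a confirmed diagnosis): ByteBuffer.allocateDirect gives a big-endian buffer by default, while OpenGL expects native byte order, so glMultMatrix can receive garbage values even when jBullet fills the matrix correctly. A sketch of building the buffer in native order, reusing the same matrix array as above:

      import java.nio.ByteBuffer;
      import java.nio.ByteOrder;
      import java.nio.FloatBuffer;

      // Direct buffer in native byte order; without order(...) the floats are
      // reinterpreted with the wrong endianness on most machines.
      FloatBuffer buffer = ByteBuffer.allocateDirect(16 * 4)
              .order(ByteOrder.nativeOrder())
              .asFloatBuffer();
      buffer.put(matrix);
      buffer.flip();                  // prepare the buffer for reading
      glMultMatrix(buffer);

    LWJGL's BufferUtils.createFloatBuffer(16) is a convenience that does the same allocation with the native order already set.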

    Read the article

  • How to load chunks of 2d map segments when player reaches a certain point?

    - by 2kan
    In my 2d platformer (made with Java and Slick2d), random maps are made by combining different segments together and displaying them one after the other. My problem is that I can't load too many segments or the game will run out of memory, so I want to load n number of segments at a time in chunks, then load the next chunk when the player comes near the end of one. I've attempted to do this for a couple of hours now, but I just can't get it to work at all. This is my chunk generation function where chunkLoad is the number of segments to load and BLOCK_WIDTH is the number of blocks/tiles each segment is across. Chunk1 and map are arrays of segments. Random r = new Random(); for(int i=0; i<chunkLoad; i++) { int id = r.nextInt(4)+2; chunk1[i] = new BlockMap("res/window/map"+id+".tmx", i*BLOCK_WIDTH); } map = chunk1; chunksLoaded++; The map is then drawn on the screen like this. tmap is a TiledMap object and each block/tile is 16 pixels wide for(int i=0; i<chunkLoad; i++) { map[i].tmap.render((i * BLOCK_WIDTH * 16) + (cameraX), 0); } I can successfully load new chunks, but I can't display them in the correct position, nor the hitboxes. Any suggestions? Thanks.
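    A hedged sketch of one way to trigger and place the next chunk. It reuses the names from the question (chunkLoad, BLOCK_WIDTH, chunksLoaded, BlockMap, map) and assumes 16-pixel tiles and a playerX in world pixels; the key detail is that each new segment's offset must continue from everything already loaded, rather than restarting at zero as i*BLOCK_WIDTH does.

      private void loadNextChunkIfNeeded(float playerX) {
          // World-pixel X where the currently loaded map ends.
          int loadedWidthPx = chunksLoaded * chunkLoad * BLOCK_WIDTH * 16;

          // Start loading when the player is within two segments of the edge.
          if (playerX > loadedWidthPx - 2 * BLOCK_WIDTH * 16) {
              BlockMap[] next = new BlockMap[chunkLoad];
              Random r = new Random();
              for (int i = 0; i < chunkLoad; i++) {
                  int id = r.nextInt(4) + 2;
                  // Offset continues from the segments loaded so far.
                  int segmentOffset = (chunksLoaded * chunkLoad + i) * BLOCK_WIDTH;
                  next[i] = new BlockMap("res/window/map" + id + ".tmx", segmentOffset);
              }
              map = next;       // or append to a list and drop segments far behind the player
              chunksLoaded++;
          }
      }

    The render call needs the same treatment: the x passed to tmap.render has to use the segment's global offset, not just i * BLOCK_WIDTH * 16, or every chunk will draw on top of the first one.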

    Read the article

  • Map rendering Libgdx Java

    - by user3165683
    Ok, so I am trying to create a 2D non-movable random tiled map. This is what I have so far: private void generateTile(){ System.out.print("tiletry1"); while(loadedTiles != 8100){ System.out.print("tiletry"); Texture currentTile = null; int tileX = 0; int tileY = 0; if (tileX == 120); tileY = 16; tileX = 0; game.batch.begin(); switch(MathUtils.random(2)){ case 0: //game.batch.draw(tile1, tileX, tileY); System.out.print("tile1"); currentTile = tile1; break; case 1: //game.batch.draw(tile2, tileX, tileY); System.out.print("tile2"); currentTile = tile2; break; case 2: //game.batch.draw(tile3, tileX, tileY); System.out.print("tile3"); currentTile = tile3; break; } tileX+=16; loadedTiles ++; game.batch.draw(currentTile, tileX, tileY); game.batch.end(); } } However, I can't see any of the tiles and the screen just looks green. This method is above my render method which I have: camera.update(); batch.setProjectionMatrix(camera.combined); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); game.batch.begin(); //other render stuff Why am I not able to see the tiles?
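    A hedged sketch of separating generation from drawing (the game.batch, tile1..tile3 and camera names come from the question; the grid size is an assumption): pick the tile indices once into an array, then draw that array inside the render method's own begin()/end() pair, after the clear. This also sidesteps the stray semicolon after if (tileX == 120); and the nested batch.begin() calls in the original loop.

      private Texture[] tileTextures;      // filled once, e.g. { tile1, tile2, tile3 }
      private int[][] tileIds;             // one random index per grid cell

      private void generateTiles(int cols, int rows) {
          tileIds = new int[cols][rows];
          for (int x = 0; x < cols; x++)
              for (int y = 0; y < rows; y++)
                  tileIds[x][y] = MathUtils.random(2);   // choose one of the three tiles once
      }

      // Call inside render(), between game.batch.begin() and game.batch.end():
      private void drawTiles() {
          for (int x = 0; x < tileIds.length; x++)
              for (int y = 0; y < tileIds[x].length; y++)
                  game.batch.draw(tileTextures[tileIds[x][y]], x * 16, y * 16);
      }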

    Read the article

  • libapt-inst1.4:i386 fails to install

    - by user92834
    Today I was notified of updates, some of them being libapt. So I pressed "Install Updates" and got this error: The following packages have unmet dependencies: libapt-inst1.4:i386: Depends: libapt-pkg4.12 (>= 0.8.16~exp12ubuntu10.2) but 0.8.16~exp12ubuntu10.3 is installed Depends: libc6 (>= 2.4) but 2.15-0ubuntu10 is installed Depends: libgcc1 (>= 1:4.1.1) but 1:4.6.3-1ubuntu5 is installed Depends: libstdc++6 (>= 4.4.0) but 4.6.3-1ubuntu5 is installed So I opened the terminal and typed sudo apt-get install -f and was shown this: The following extra packages will be installed: libapt-inst1.4 libapt-inst1.4:i386 libapt-pkg4.12:i386 The following NEW packages will be installed: libapt-pkg4.12:i386 The following packages will be upgraded: libapt-inst1.4 libapt-inst1.4:i386 2 upgraded, 1 newly installed, 0 to remove and 18 not upgraded. 1 not fully installed or removed. Need to get 0 B/1,146 kB of archives. After this operation, 3,031 kB of additional disk space will be used. Do you want to continue [Y/n]? I selected "yes", and then: E: Internal Error, No file name for libapt-pkg4.12 Also, when I open the Software Center, I get a message that the database is broken... I'm using 12.04 64-bit. But why does it want to install the i386 version? I'm using 64-bit Ubuntu...
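    A hedged sketch of the usual first-aid steps for an "Internal Error, No file name for ..." apt failure (standard commands, not a guaranteed fix for this exact breakage): clear the package cache so the missing .deb is re-downloaded, then retry the fix-broken run.

      sudo apt-get clean           # drop possibly corrupt cached .deb files
      sudo apt-get update          # refresh the package lists
      sudo apt-get install -f      # retry resolving the broken dependencies
      sudo dpkg --configure -a     # finish any half-configured packages

    Seeing i386 packages on a 64-bit install is normal: multiarch pulls in 32-bit copies of libraries such as libapt when some 32-bit package depends on them.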

    Read the article

  • Precise exposition of an image for set number of frames (Vsync?)

    - by Istrebitel
    I need to make a simple enough program in C#, but it seems to be impossible via the usual WinForms means. I need to show something (a string of text, an image) on the screen for a very small time interval. Since typical monitors are 60 Hz, this interval would be 1 or 2 frames (16.6 or 33.3 ms). I tried doing this with the usual WinForms approach, and it is not possible because, apparently, there is no way to know how many frames were output to the monitor since some point in time. I can only draw on the controls, and the monitor output is totally independent. So even if I run a timer for, say, 17 ms between showing and hiding the image, it still sometimes manages not to draw a single frame of my image on the screen (even though theoretically it should, because 17 ms > 16.6 ms). Moreover, even 20 ms seems too slow (even though it should be more than enough). I did some game development as a hobby in the past (Delphi X, XNA), and I know that you usually draw the whole screen yourself, each frame. Also, I know that there is an option called VSync in most modern games that allows you to synchronize your framerate to your monitor's refresh rate. So, is it possible? I mean, to actually know how many frames were sent to the monitor with whatever I want to show?
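    One hedged sketch of getting closer than WinForms timers: WPF raises CompositionTarget.Rendering once per composition pass, so an element can be shown for a counted number of passes. This is not a hard vsync guarantee (the compositor can skip or coalesce frames), so a true one-frame exposure ultimately needs exclusive-mode rendering with vsync, for example via Direct3D or XNA. The Window and the flashImage element below are assumptions for illustration.

      // WPF: show 'flashImage' for a fixed number of composition frames.
      using System.Windows;
      using System.Windows.Media;

      public partial class MainWindow : Window
      {
          private int framesLeft;

          public void FlashForFrames(int frames)
          {
              framesLeft = frames;
              flashImage.Visibility = Visibility.Visible;
              CompositionTarget.Rendering += OnRendering;     // fires once per composition pass
          }

          private void OnRendering(object sender, System.EventArgs e)
          {
              if (--framesLeft <= 0)
              {
                  flashImage.Visibility = Visibility.Hidden;  // hide after N passes
                  CompositionTarget.Rendering -= OnRendering;
              }
          }
      }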

    Read the article

  • OTN Seminar Schedule: January-February 2013

    - by OTN-J Master
    Most of this Japanese seminar announcement was lost to character encoding; the details that survive are: the OTN Twitter account announced on December 13; two evening sessions on Wednesday, January 16, 2013, 18:30-20:00 (the 96th seminar in the series, plus an introductory hands-on Oracle Database session); and the "WebLogic Server 12c Forum 2013 ~ Java EE and WebLogic ~" event on Friday, February 1, 2013, 13:30-17:30, covering WebLogic Server 12c, Java EE 6, and the WebLogic announcements from Oracle OpenWorld in October 2012, with registration via the OTN site.

    Read the article

  • Oracle Direct Seminar schedule: March 15-17

    - by Yusuke.Yamamoto
    Most of this Japanese announcement was lost to character encoding; what survives is the schedule of free Oracle Direct Seminar web sessions held March 15-17, 2011, the OTN on-demand materials page at http://www.oracle.com/technetwork/jp/ondemand/db-basic/index.html, and the session list with registration links: 3/15 11:00 - http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=124542&src=7013395&src=7013395&Act=388 ; 3/15 15:00 - http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=124543&src=7013395&src=7013395&Act=389 ; 3/16 11:00 "ORACLE MASTER [Bronze DBA11g]" - http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=124572&src=7013395&src=7013395&Act=391 ; 3/16 15:00 (Oracle Database XML session) - http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=124573&src=7013395&src=7013395&Act=392 ; 3/17 11:00 (GoldenGate session) - http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=124574&src=7013395&src=7013395&Act=393 ; 3/17 15:00 (web application session) - http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=124578&src=7013395&src=7013395&Act=395 . Questions went to the Oracle Direct Seminar desk at [email protected]

    Read the article

  • What approach to take for SIMD optimizations

    - by goldenmean
    Hi, I am trying to optimize the code below for SIMD operations (8-way/4-way/2-way SIMD, whichever is possible and gives performance gains). I am trying to analyze it first on paper to understand the algorithm used. How can I optimize it for SIMD? void idct(uint8_t *dst, int stride, int16_t *input, int type) { int16_t *ip = input; uint8_t *cm = ff_cropTbl + MAX_NEG_CROP; int A, B, C, D, Ad, Bd, Cd, Dd, E, F, G, H; int Ed, Gd, Add, Bdd, Fd, Hd; int i; /* Inverse DCT on the rows now */ for (i = 0; i < 8; i++) { /* Check for non-zero values */ if ( ip[0] | ip[1] | ip[2] | ip[3] | ip[4] | ip[5] | ip[6] | ip[7] ) { A = M(xC1S7, ip[1]) + M(xC7S1, ip[7]); B = M(xC7S1, ip[1]) - M(xC1S7, ip[7]); C = M(xC3S5, ip[3]) + M(xC5S3, ip[5]); D = M(xC3S5, ip[5]) - M(xC5S3, ip[3]); Ad = M(xC4S4, (A - C)); Bd = M(xC4S4, (B - D)); Cd = A + C; Dd = B + D; E = M(xC4S4, (ip[0] + ip[4])); F = M(xC4S4, (ip[0] - ip[4])); G = M(xC2S6, ip[2]) + M(xC6S2, ip[6]); H = M(xC6S2, ip[2]) - M(xC2S6, ip[6]); Ed = E - G; Gd = E + G; Add = F + Ad; Bdd = Bd - H; Fd = F - Ad; Hd = Bd + H; /* Final sequence of operations over-write original inputs. */ ip[0] = (int16_t)(Gd + Cd) ; ip[7] = (int16_t)(Gd - Cd ); ip[1] = (int16_t)(Add + Hd); ip[2] = (int16_t)(Add - Hd); ip[3] = (int16_t)(Ed + Dd) ; ip[4] = (int16_t)(Ed - Dd ); ip[5] = (int16_t)(Fd + Bdd); ip[6] = (int16_t)(Fd - Bdd); } ip += 8; /* next row */ } ip = input; for ( i = 0; i < 8; i++) { /* Check for non-zero values (bitwise or faster than ||) */ if ( ip[1 * 8] | ip[2 * 8] | ip[3 * 8] | ip[4 * 8] | ip[5 * 8] | ip[6 * 8] | ip[7 * 8] ) { A = M(xC1S7, ip[1*8]) + M(xC7S1, ip[7*8]); B = M(xC7S1, ip[1*8]) - M(xC1S7, ip[7*8]); C = M(xC3S5, ip[3*8]) + M(xC5S3, ip[5*8]); D = M(xC3S5, ip[5*8]) - M(xC5S3, ip[3*8]); Ad = M(xC4S4, (A - C)); Bd = M(xC4S4, (B - D)); Cd = A + C; Dd = B + D; E = M(xC4S4, (ip[0*8] + ip[4*8])) + 8; F = M(xC4S4, (ip[0*8] - ip[4*8])) + 8; if(type==1){ //HACK E += 16*128; F += 16*128; } G = M(xC2S6, ip[2*8]) + M(xC6S2, ip[6*8]); H = M(xC6S2, ip[2*8]) - M(xC2S6, ip[6*8]); Ed = E - G; Gd = E + G; Add = F + Ad; Bdd = Bd - H; Fd = F - Ad; Hd = Bd + H; /* Final sequence of operations over-write original inputs. 
*/ if(type==0){ ip[0*8] = (int16_t)((Gd + Cd ) >> 4); ip[7*8] = (int16_t)((Gd - Cd ) >> 4); ip[1*8] = (int16_t)((Add + Hd ) >> 4); ip[2*8] = (int16_t)((Add - Hd ) >> 4); ip[3*8] = (int16_t)((Ed + Dd ) >> 4); ip[4*8] = (int16_t)((Ed - Dd ) >> 4); ip[5*8] = (int16_t)((Fd + Bdd ) >> 4); ip[6*8] = (int16_t)((Fd - Bdd ) >> 4); }else if(type==1){ dst[0*stride] = cm[(Gd + Cd ) >> 4]; dst[7*stride] = cm[(Gd - Cd ) >> 4]; dst[1*stride] = cm[(Add + Hd ) >> 4]; dst[2*stride] = cm[(Add - Hd ) >> 4]; dst[3*stride] = cm[(Ed + Dd ) >> 4]; dst[4*stride] = cm[(Ed - Dd ) >> 4]; dst[5*stride] = cm[(Fd + Bdd ) >> 4]; dst[6*stride] = cm[(Fd - Bdd ) >> 4]; }else{ dst[0*stride] = cm[dst[0*stride] + ((Gd + Cd ) >> 4)]; dst[7*stride] = cm[dst[7*stride] + ((Gd - Cd ) >> 4)]; dst[1*stride] = cm[dst[1*stride] + ((Add + Hd ) >> 4)]; dst[2*stride] = cm[dst[2*stride] + ((Add - Hd ) >> 4)]; dst[3*stride] = cm[dst[3*stride] + ((Ed + Dd ) >> 4)]; dst[4*stride] = cm[dst[4*stride] + ((Ed - Dd ) >> 4)]; dst[5*stride] = cm[dst[5*stride] + ((Fd + Bdd ) >> 4)]; dst[6*stride] = cm[dst[6*stride] + ((Fd - Bdd ) >> 4)]; } } else { if(type==0){ ip[0*8] = ip[1*8] = ip[2*8] = ip[3*8] = ip[4*8] = ip[5*8] = ip[6*8] = ip[7*8] = ((xC4S4 * ip[0*8] + (IdctAdjustBeforeShift<<16))>>20); }else if(type==1){ dst[0*stride]= dst[1*stride]= dst[2*stride]= dst[3*stride]= dst[4*stride]= dst[5*stride]= dst[6*stride]= dst[7*stride]= cm[128 + ((xC4S4 * ip[0*8] + (IdctAdjustBeforeShift<<16))>>20)]; }else{ if(ip[0*8]){ int v= ((xC4S4 * ip[0*8] + (IdctAdjustBeforeShift<<16))>>20); dst[0*stride] = cm[dst[0*stride] + v]; dst[1*stride] = cm[dst[1*stride] + v]; dst[2*stride] = cm[dst[2*stride] + v]; dst[3*stride] = cm[dst[3*stride] + v]; dst[4*stride] = cm[dst[4*stride] + v]; dst[5*stride] = cm[dst[5*stride] + v]; dst[6*stride] = cm[dst[6*stride] + v]; dst[7*stride] = cm[dst[7*stride] + v]; } } } ip++; /* next column */ dst++; } }

    Read the article
