Search Results

Search found 2280 results on 92 pages for 'switching'.


  • Focus follows mouse stops working when opening window from launcher and no click to focus

    - by user97600
    This is the 12.04 default desktop (Unity). I set it to focus follows mouse, and changed the menus to be on the window. This worked for a while, then some unknown event, maybe an upgrade, maybe some other setting change, caused it to stop working. There are many ways for this behavior to start, but one reliable one is to bring a window to the foreground/focus with the launcher. Now the focus is stuck on that window, and not just the window but the regions within the window, so the close, maximize, minimize and menu controls do not work. I have to use mouse middle and then mouse right, and then focus follows mouse is restored for a bit. The exact details of the mouse action aren't clear; sometimes it seems like just mouse middle helps, sometimes just right, sometimes a desperate sequence of clicks :-( I have tried switching to the GNOME desktop, and it seems to occur less there, but it is not eliminated. I have tried switching mice to an old wired USB mouse. I have tried creating a new account, and that has not worked. I have observed "split focus", where the scroll button scrolls one window but the input goes to another. I got trapped recently where my keyboard input went to LibreOffice Calc, but I was selecting the search term in the Chrome address window. The selection "grayed", but the keyboard input for the search went to LibreOffice. Regions in windows have very confused focus. I have to work hard to get focus on, for example, the close glyph (X) or the minimize glyph (_).

    Read the article

  • How to check Early Z efficiency on AMD GPU with Windows 7

    - by Suma
    I have a game using DirectX 9, and a development station using Win 7 x64. I still have access to another station with Vista x64, dual-booted with WinXP x86. I wanted to check early Z efficiency in the game, and to my sadness all the tools I have tried seem to be unable to perform this task:

    AMD PerfStudio
    - AMD GPUPerfStudio 2 does not support DirectX 9 at all.
    - AMD GPUPerfStudio 1.2 does not install correctly on Windows 7. When I tweaked the MSI package (a simple OS version check adjustment was needed), it complained that the drivers I have do not provide the needed instrumentation. Drivers old enough to support GPUPerfStudio would most likely not be able to operate with my Radeon 5750 card (though I am not 100% sure of this; I did not attempt to try any older drivers, not knowing which I should look for).

    PIX
    - PIX does not seem to contain any counters like this. It offers some ATI-specific counters, but when I try to activate them, PIX reports "PIX encountered a problem while attaching to the target program."

    I do not want to upgrade to DX 10/11 just to be able to profile the game, but it seems that without that step I am somewhat locked in with a toolset which is no longer supported. I see only one obvious option which would probably work, and that is using the WinXP (or, with a little bit of luck, even the Vista) station, perhaps with some older AMD card, to make sure GPUPerfStudio 1.2 works. Other than that, can anyone recommend other options for checking GPU HW counters (HiZ/EarlyZ in particular, but if others were enabled as well, it would be a nice bonus) for a DirectX 9 game on Windows 7, preferably on an AMD GPU? (If that is not possible, I would definitely prefer switching GPU to switching the OS, but before I do so I would like to know that I will not hit the same problem with nVidia again.)

    Read the article

  • Bluetooth headset A2DP works, HSP/HFP not (no sound/no mic)

    - by Stefan Armbruster
    My Philips SBH9001 headset pairs fine using Ubuntu 12.04. In the audio settings it's properly detected as an A2DP device and as an HSP/HFP device.

    Hardware: Thinkpad X230, Ubuntu 12.04 64bit, kernel 3.6.0-030600rc3-generic (built from the Ubuntu mainline repo), Bluetooth device is USB id 0a5c:21e6 from Broadcom, headset is a Philips SBH9001. Note: kernel 3.6 rc3 is used because of a fix for audio on the docking station that is not in any previous branch.

    Playing audio in A2DP works just fine out of the box, but when switching the headset to HSP/HFP mode there is no sound, nor does the microphone work. When connecting the headset, /var/log/syslog shows:

        Aug 25 21:32:47 x230 bluetoothd[735]: Badly formated or unrecognized command: AT+CSRSF=1,1,1,1,1,7
        Aug 25 21:32:49 x230 rtkit-daemon[1879]: Successfully made thread 17091 of process 14713 (n/a) owned by '1000' RT at priority 5.
        Aug 25 21:32:49 x230 rtkit-daemon[1879]: Supervising 4 threads of 1 processes of 1 users.
        Aug 25 21:32:50 x230 kernel: [ 4860.627585] input: 00:1E:7C:01:73:E1 as /devices/virtual/input/input17

    When switching from A2DP (standard profile) to HSP/HFP:

        Aug 25 21:34:36 x230 bluetoothd[735]: /org/bluez/735/hci0/dev_00_1E_7C_01_73_E1/fd3: fd(34) ready
        Aug 25 21:34:36 x230 rtkit-daemon[1879]: Successfully made thread 17309 of process 14713 (n/a) owned by '1000' RT at priority 5.
        Aug 25 21:34:36 x230 rtkit-daemon[1879]: Supervising 4 threads of 1 processes of 1 users.
        Aug 25 21:34:41 x230 bluetoothd[735]: Audio connection got disconnected

    Any hints how to get HSP/HFP working here?

    Read the article

  • Gnome 3 - Multiple Video Cards - Xinerama -- Forced Fallback Mode

    - by Alvin
    Just installed a 2nd nvidia video card -- previously had Gnome 3 working perfectly with 2 monitors on a single video card using TwinView. Tried a number of things thus far:

    - TwinView on 1 card + Xinerama
    - no Xinerama, no TwinView
    - various manual xorg.conf hacks based on random forums (a couple of references below)
    - Xinerama, no TwinView, with and without the Extensions/Composite section

    The last one is what I'm using now -- it results in a forced fallback mode, with Composite Disable set at the end of xorg.conf via nvidia-settings:

        Section "Extensions"
            Option "Composite" "Disable"
        EndSection

    When I disable that last snippet, it boots to full Gnome 3 with the left monitor on a black screen and the middle monitor as primary but non-responsive. Switching to console mode with Ctrl+Alt+F1 and then switching back, I get 3 black screens with a mouse that can move around but nothing to interact with. The issue seems related to OpenGL and the multiple video cards -- I can boot into Unity without issue, though my GLX-Dock shows up with a black background, as barely shows in the screenshot below, indicating that OpenGL is not initialized. Has anyone had any luck getting Xinerama to work with multiple NVidia video cards with OpenGL support? Found this in the logs while looking a bit further:

        [ 23.208] (II) NVIDIA(1): Setting mode "nvidia-auto-select+0+0"
        [ 23.254] (WW) NVIDIA(1): The GPU driving screen 1 is incompatible with the rest of the
        [ 23.254] (WW) NVIDIA(1):     GPUs composing the desktop. OpenGL rendering will be
        [ 23.254] (WW) NVIDIA(1):     disabled on screen 1.
        [ 23.277] (==) NVIDIA(1): Disabling shared memory pixmaps
        [ 23.277] (==) NVIDIA(1): Backing store disabled
        [ 23.277] (==) NVIDIA(1): Silken mouse enabled
        [ 23.277] (==) NVIDIA(1): DPMS enabled

    According to this page at the NVidia user docs, http://us.download.nvidia.com/XFree86/Linux-x86/173.14.09/README/chapter-14.html, I may be out of luck =( Starting this question with the hope that others may be able to help debug, and perhaps gain answers over time, as I really want to get full Gnome 3 back.

    Read the article

  • Why is Rhythmbox becoming the default (again)?

    - by Christoph
    So, it seems that with 12.04 they're switching back to Rhythmbox, after switching away from Rhythmbox a year ago. I don't get why. They say that it's because of a blocking bug in GTK3 support (in GTK#, if I understand that correctly), but that's just one bug, and in the same breath they say RB is not well maintained. It seems the Ubuntu guys were dissatisfied with Banshee in some way, but apparently the Banshee guys were never notified of any problems. Also, it can't be to save disc space by dropping Mono, because on the same day it was announced that the install disc will be enlarged by 50MB. Also, isn't it a bit shortsighted to push Banshee for default inclusion, and then drop it again a year later? How is that a sustainable use of dev resources, or consistent? Apparently there was quite some heavy effort by the Banshee devs -- David Nielsen used the term "bending over backwards for Ubuntu", iirc. In summary: can anyone shed more light on this? Related question: Why is Banshee becoming the default? Sources:

    http://www.omgubuntu.co.uk/2011/11/banshee-tomboy-and-mono-dropped-from-ubuntu-12-04-cd/
    http://www.omgubuntu.co.uk/2011/11/rhythmbox-to-return-as-ubuntu-12-04-default-music-app/
    http://www.omgubuntu.co.uk/2011/11/ubuntu-12-04-disc-size-to-be-750mb/
    http://summit.ubuntu.com/uds-p/meeting/19442/desktop-p-default-apps/
    http://banshee-media-player.2283330.n4.nabble.com/banshee-being-dropped-from-ubuntu-because-of-GTK3-support-td3985298.html

    Read the article

  • Flash Player for IPTV

    - by Le Mystique
    Hi All, We have to design a flash player for three video sources of IPTV:

    1. Educational videos on the system that can be played on demand (VOD, or Video on Demand, local).
    2. Streaming videos available on different websites that can be played on demand (VOD remote).
    3. The IPTV live streaming video feed.

    Videos from the three sources have to be played within the single flash player that we need to design. There are 2 basic requirements:

    1. For the two VOD types, we have to provide pause and play functionality.
    2. For IPTV, pause and play are not required; however, channel switching time is critically important here. The user should get behaviour as close to the TV experience as possible, which means channel switching should be fast. To provide this, buffering of video is required for multiple channels.

    Now the question is: is this possible? If yes, how? Secondly, what skill set (e.g. ActionScript 3.0, Flash 10) should the flash programmer have to be able to design this? Thirdly, is it as simple as making a normal FLV player, or is it a more complex job?

    Read the article

  • FTP gives me an error when uploading and deleting files [on hold]

    - by AR Games
    Here's the error I get when trying to delete files...

        Command: DELE index.html
        Response: 550 Delete operation failed.

    Here's the error I get when trying to upload files...

        Command: OPTS UTF8 ON
        Response: 200 Always in UTF8 mode.
        Status: Connected
        Status: Starting upload of C:\wamp\www\.DS_Store
        Command: CWD /var/www/html
        Response: 250 Directory successfully changed.
        Command: TYPE A
        Response: 200 Switching to ASCII mode.
        Command: PASV
        Response: 227 Entering Passive Mode (76,185,76,101,78,222).
        Command: STOR .DS_Store
        Response: 553 Could not create file.
        Error: Critical file transfer error
        Status: Retrieving directory listing...
        Command: TYPE I
        Response: 200 Switching to Binary mode.
        Command: PASV
        Response: 227 Entering Passive Mode (76,185,76,101,23,94).
        Command: LIST
        Response: 150 Here comes the directory listing.
        Response: 226 Directory send OK.
        Status: Directory listing successful
        Response: 421 Timeout.
        Error: Connection closed by server
        Status: Disconnected from server

    I'm running Windows and using the FileZilla FTP client.

    Read the article

  • Ubuntu sound volume is 40% lower than in Windows

    - by ncomx
    I have 2x 2.1 speakers connected to the computer where I have Ubuntu 12.04 installed. On the software side I've set all the volume controls to 100% with the alsamixer program. The speakers have their own volume control; keeping that at the same level and switching between Ubuntu and Windows (XP and 7), the output volume on Windows is at least 40% higher. Even with the Windows volume control at 50% (without touching the speakers' volume control), it's still much higher than the sound on Ubuntu. Why could this be happening? Are there alternative sound drivers (other than the default ones) I could test to see if they make a difference? Some info about the card:

        root:$ cat /proc/asound/cards
        0 [PCH ]: HDA-Intel - HDA Intel PCH
                  HDA Intel PCH at 0xfbff4000 irq 55
        1 [Generic ]: HDA-Intel - HD-Audio Generic
                  HD-Audio Generic at 0xfbcfc000 irq 56

        root:$ lspci | grep -i audio
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
        02:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Cayman/Antilles HDMI Audio [Radeon HD 6900 Series]

    I think the one I am using is the Intel one; the other seems to be from the VGA card, which is an ATI Radeon 6950. Running gstreamer-properties and switching between alsa, oss, ossv4 and pulseaudio doesn't seem to make any difference.

    Read the article

  • dvcs - is "clone to branch" a common workflow?

    - by Tesserex
    I was recently discussing dvcs with a coworker, because our office is beginning to consider switching from TFS (we're an MS shop). In the process, I got very confused, because he said that although he uses Mercurial, he hadn't heard of a "branch" or "checkout" command; these terms were unfamiliar to him. I wondered how it was possible that he didn't know about them, and when I explained how dvcs branches work "in place" on your local files, he was quite confused. He explained that, similar to how TFS works, when he wants to create a "branch" he does it by cloning, so he has an entire copy of his repo. This seemed really strange to me, but the benefit, which I have to concede, is that you can look at or work on two branches simultaneously because the files are separate. In searching this site to see if this had been asked before, I saw a comment that many online resources promote this "clone to branch" methodology, to the poster's dismay. Is this actually common in the dvcs community? And what are some of the pros and cons of going this way? I would never do it, since I have no need to see multiple branches at once, switching is fast, and I don't need all the clones filling up my disk.

    Read the article

  • vim + Ruby on Rails: how do you bounce among those 4-5 files you're currently working on?

    - by glitch
    I'm just starting to get familiar with vim, and I'd like to use it as my primary Rails development tool. As a Visual Studio and RubyMine user, I find a lot of stuff to be missing from the barebones vim installation, and therefore I went ahead and attempted to soup it up with plugins such as: rails.vim tcomment ruby-vim NERDtree and a couple of others. The issue is that I still don't quite get the average work-flow of using vim as one's Rails IDE. In RubyMine (again, similarly to Visual Studio) I have a series of tabs always open, containing the main files I'm switching among, and I additionally use NERDtree to open files from the folder structure. I tried opening them as new tabs, but the tab system in vim is just a lot more awkward than that in real IDEs. (I haven't seen vim pros in action, but I imagine that they'd not be relying on tabs, but using numerous splits instead, keeping at least a couple of files per split and switching between them with CTRL + ^. Is that the case?) So, at the end of the day, how do I really squeeze the most from vim if I want to be able to quickly access several files at once? Thank you!

    Read the article

  • New: Online NetBeans 8 Crash Course

    - by Geertjan
    On Twitter today I came across an announcement for a brand new online course on NetBeans 8. Since NetBeans 8 was released only during the past few months, the course is really very new. Go here to get there directly: https://www.video2brain.com/de/videotraining/netbeans-ide-8-0-crashkurs Here's the general idea. As you can see, the course is in German. With my basic understanding of German, I've had no problem following the course. The trainer speaks clearly and slowly, and everything is very well structured. The course covers all the basics of NetBeans IDE, from getting set up to using all the key features. The quality of the videos is great and the content is clear and informative. Once you've bought the course, all the lessons are unlocked. As you can see, they're all quite short and there's really a lot of content; it didn't all fit into the screenshot. Quite some work must have gone into this. Here's one of the free lessons in the course, to give an idea of what you'll get: https://www.video2brain.com/de/tutorial/texte-internationalisieren This one is also free: https://www.video2brain.com/de/tutorial/eclipse-projekt-importieren I highly recommend this course, especially if you're switching, or thinking about switching, from a different IDE and want to get a thorough overview of all the features that NetBeans IDE provides. Everything in the course is done within NetBeans, which means no slides, just code. You get to see the workflow for all the standard tasks and, for these purposes, the course does a really great job.

    Read the article

  • How to Use Vim-Style Keyboard Shortcuts for OS X Tab Navigation

    - by The Geek
    After switching to OS X when I got a new MacBook Air, one of the first things I needed to duplicate was my extremely customized AutoHotkey setup — the most important of which is using the J and K keys to navigate throughout tabbed windows easily. Yeah, I’m a Vim user. I’ve never been a fan of having to use CTRL + TAB to switch from one tab to the next — to start with, you have to move your hands from the home row, and it’s awkward, and why should I have to do that just because somebody decided that keyboard shortcut before tabs became popular? If you think about it, if tabbed browsers were popular back when keyboard shortcuts were being invented, they would have definitely reserved some of the good shortcuts for switching tabs. On Windows, I’ve always used an AutoHotkey script to make things the way I wanted it:  ALT + J and ALT + K for selecting previous and next tabs. Once you get used to it, it’s extremely awesome, and so much faster than using CTRL + TAB. Of course, I also hacked CTRL + T and CTRL + W into ALT + T and ALT + W so I could open new tabs and close them without moving my hands from the home row. Over on OS X, it turns out that it’s incredibly simple and easy to use CMD + J and CMD + K for next/previous tab navigation, and it works in most applications that support tabs, like Terminal, Safari, or Google Chrome.    

    Read the article

  • How do I turn off PCI devices?

    - by ethana2
    With the purchase of an Intel SSD and an 85WHr Li-ion battery, the linking of wifi and bluetooth to my laptop's wireless switch, extensive Intel PowerTOP usage, switching from Compiz to Metacity, stopping the desktop-couch daemon, removal of Ubuntu One and several other services from my startup, disabling everything possible in my BIOS, and physical removal of my optical drive, I've gotten my battery life up fairly high, but I think there's still more to be done. Specifically, when I'm in class taking notes, I want to temporarily but completely power down:

    - Ethernet
    - Firewire
    - USB ports
    - SD card reader
    - Optical drive
    - Webcam
    - Sound card
    - PCMCIA slot

    ...without turning them off in my BIOS like they are now, if possible, because then I have to restart my computer to use any of them. As it stands, I still haven't managed to power down:

    - Firewire
    - the USB connection to the webcam
    - the sound card

    How do I tell Linux to disable and power down these devices? Is it true that any PCI slot can be physically powered down? My current idle power consumption is 7.9 watts plus the screen (10.0W at minimum brightness). Also, how do I set the screen timeout to ten seconds? gconf-editor isn't honoring it when I set it to that. Will switching from nVidia to Nouveau save any significant amount of power?
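
    One mechanism worth knowing here (generic sysfs behaviour, not something from the original post; the driver name and PCI address below are made-up examples): a PCI device can be detached from its driver at runtime by writing its bus address to the driver's sysfs "unbind" file, and rebound later via "bind", with no reboot needed. A minimal C++ sketch:

        #include <fstream>
        #include <iostream>
        #include <string>

        // Detach a PCI device from its driver via sysfs (needs root).
        // Find real addresses with `lspci -D`, and the bound driver via the
        // /sys/bus/pci/devices/<addr>/driver symlink -- the values in main()
        // are hypothetical examples, not taken from the post.
        static bool write_sysfs(const std::string& path, const std::string& value)
        {
            std::ofstream f(path);
            if (!f) return false;
            f << value << std::flush;
            return f.good();
        }

        int main()
        {
            const std::string driver = "ehci_hcd";     // hypothetical: a USB controller driver
            const std::string addr   = "0000:00:1a.0"; // hypothetical PCI address
            const std::string base   = "/sys/bus/pci/drivers/" + driver + "/";

            if (!write_sysfs(base + "unbind", addr))
                std::cerr << "unbind failed (not root, or wrong driver/address?)\n";
            // Later, to re-enable the device without rebooting:
            // write_sysfs(base + "bind", addr);
            return 0;
        }

    For devices that support runtime power management there is also a softer option: writing "auto" into the device's power/control file, which is what PowerTOP's suggestions do.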

    Read the article

  • Accidentally Changed Dual Monitor Setup and don't know how to reset it

    - by user203783
    I have a dual monitor setup with my laptop and an external Asus monitor running under Ubuntu 11.10. When I first started using Ubuntu, it synced both screens: what showed up on my laptop showed up on my external monitor. Last night, I accidentally knocked the HDMI cable from its port (not that unusual in the tight space I work in). When I plugged it back in, as usual, the external monitor only displayed my wallpaper. Usually, I just restart Ubuntu and it resets, but last night what I now realize was the display console came up, and somehow I changed the setup. Now the two screens show different jobs, so to speak. Also, the external monitor doesn't display the ribbon, making switching between Firefox tabs and windows or apps jarring. I write for a living and need to quickly check bits of research, notes or sources while writing, and the extra switching between mouse pad and external mouse to get one or the other screen to respond interrupts my work flow. The only technical help I can offer is that when the display console popped up, the graphical representation showed the external screen layered over by about one-quarter of the laptop screen. Again, these were icons. I tried to replicate that after I obviously screwed things up. If someone could help, I would appreciate it. If I have to use the terminal, that's fine. I started using computers for writing back when they were green DOS prompts. There's a certain elegance to prompt commands to me. Maybe it's the writer in me. Again, if anyone can help me return my screens to sync under Ubuntu 11.10, I would greatly appreciate it.

    Read the article

  • Visually recognise active window

    - by mcaleaa
    Ubuntu 13.04. I work with two monitors, usually with lots of different windows open. I don't like to use the mouse, so I usually swap between applications using alt-tab. The problem comes when I want to type something into an application: I need the active application to be immediately obvious, usually so that I can tell which monitor to look at next. With default Ubuntu (with appearance = Ambiance), the only real visual indication of a window being active is that the header bar of the application is in a lighter font color. This is too subtle for me, so I find myself alt-tabbing and moving my mouse too much when switching applications, then clicking around with the mouse to give a particular window the focus. I want my switching to be more accurate, and for that I think I need better feedback on which window has the focus. This needs to be more obvious than it is now. I looked at the high contrast appearance, and it helps somewhat, but the inconsistency in the icon sets is far too distracting. I think what I need is something like a bright border right around the window, to make the active window really stand out. Or maybe to have the non-active windows fade into the background a bit. I would appreciate tips on how others overcome this problem, to make the active window stand out visually from the other windows. Thank you!

    Read the article

  • Driver error when using multiple shaders

    - by Jinxi
    I'm using 3 different shaders:

    - a tessellation shader, to use the tessellation feature of DirectX 11 :)
    - a regular shader, to show how it would look without tessellation
    - a text shader, to display debug info such as FPS, model count etc.

    All of these shaders are initialized at the beginning. Using the keyboard, I can switch between the tessellation shader and the regular shader to render the scene. Additionally, I also want to be able to toggle the display of debug info using the text shader. Since implementing the tessellation shader, the text shader doesn't work anymore. When I activate the debug text (rendered using the text shader), my screens go black for a while, and Windows displays the following message: "Display Driver stopped responding and has recovered". This happens with either of the two shaders used to render the scene. Additionally: I can start the application using the regular shader to render the scene and then switch to the tessellation shader; if I then try to switch back to the regular shader, I get the same error as with the text shader. What am I doing wrong when switching between shaders? What am I doing wrong when displaying text at the same time? What file can I post to help you help me? :) thx

    P.S. I already checked whether my key inputs interrupt at the wrong time (during render or so), but that seems to be OK.

    Testing procedure:

    1. Regular shader without text shader.
    2. Add text shader to regular shader by key input (works now; I built the text shader back down to only a vertex and pixel shader) (something with the z-buffer is still wrong...).
    3. Remove text shader, then change shader to tessellation shader by key input.
    4. Then, if I add the text shader or switch back to the regular shader, the error occurs.

    Switching/render shader -- here is the code snippet from Renderer.cpp where I choose the shader according to the boolean "m_useTessellationShader":

        if(m_useTessellationShader)
        {
            // Render the model using the tessellation shader
            ecResult = m_ShaderManager->renderTessellationShader(m_D3D->getDeviceContext(),
                meshes[lod_level]->getIndexCount(), worldMatrix, viewMatrix, projectionMatrix,
                textures, texturecount, m_Light->getDirection(), m_Light->getAmbientColor(),
                m_Light->getDiffuseColor(), (D3DXVECTOR3)m_Camera->getPosition(), TESSELLATION_AMOUNT);
        }
        else
        {
            // todo: loaded model depends on distance to camera
            // Render the model using the light shader.
            ecResult = m_ShaderManager->renderShader(m_D3D->getDeviceContext(),
                meshes[lod_level]->getIndexCount(), lod_level, textures, texturecount,
                m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(),
                worldMatrix, viewMatrix, projectionMatrix);
        }

    And here is the code snippet from Mesh.cpp where I choose the topology according to the boolean "useTessellationShader":

        // renderBuffers is called from the Render function. The purpose of this function is to set
        // the vertex buffer and index buffer as active on the input assembler in the GPU. Once the
        // GPU has an active vertex buffer it can then use the shader to render that buffer.
        void Mesh::renderBuffers(ID3D11DeviceContext* deviceContext, bool useTessellationShader)
        {
            unsigned int stride;
            unsigned int offset;

            // Set vertex buffer stride and offset.
            stride = sizeof(VertexType);
            offset = 0;

            // Set the vertex buffer to active in the input assembler so it can be rendered.
            deviceContext->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset);

            // Set the index buffer to active in the input assembler so it can be rendered.
            deviceContext->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0);

            // Check which shader is used, to set the appropriate topology.
            // Set the type of primitive that should be rendered from this vertex buffer, in this case triangles.
            if(useTessellationShader)
            {
                deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);
            }
            else
            {
                deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
            }

            return;
        }

    RenderShader -- could there be a problem with sometimes using only a vertex and pixel shader, and after switching using vertex, hull, domain and pixel shaders? Here is a little overview of my architecture:

        TextClass: uses font.vs and font.ps
            deviceContext->VSSetShader(m_vertexShader, NULL, 0);
            deviceContext->PSSetShader(m_pixelShader, NULL, 0);
            deviceContext->PSSetSamplers(0, 1, &m_sampleState);

        RegularShader: uses vertex.vs and pixel.ps
            deviceContext->VSSetShader(m_vertexShader, NULL, 0);
            deviceContext->PSSetShader(m_pixelShader, NULL, 0);
            deviceContext->PSSetSamplers(0, 1, &m_sampleState);

        TessellationShader: uses tessellation.vs, tessellation.hs, tessellation.ds, tessellation.ps
            deviceContext->VSSetShader(m_vertexShader, NULL, 0);
            deviceContext->HSSetShader(m_hullShader, NULL, 0);
            deviceContext->DSSetShader(m_domainShader, NULL, 0);
            deviceContext->PSSetShader(m_pixelShader, NULL, 0);
            deviceContext->PSSetSamplers(0, 1, &m_sampleState);

    ClearState -- I'd like to switch between 2 shaders, and it seems they have different context parameters, right? The ClearState method says it resets the following params to NULL. I found the following in my Direct3D class:

    - depth-stencil state: m_deviceContext->OMSetDepthStencilState
    - rasterizer state: m_deviceContext->RSSetState(m_rasterState);
    - blend state: m_device->CreateBlendState
    - viewports: m_deviceContext->RSSetViewports(1, &viewport);

    I found the following in every shader class:

    - input/output resource slots: deviceContext->PSSetShaderResources
    - shaders: deviceContext->VSSetShader through deviceContext->PSSetShader
    - input layouts: device->CreateInputLayout
    - sampler state: device->CreateSamplerState

    These two I didn't understand -- where can I find them?

    - predications: ?
    - scissor rectangles: ?

    Do I need to store them all locally so I can switch between them? It doesn't feel right to reinitialize the Direct3D and the shaders on every switch (key input)?!
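
    A plausible culprit worth checking (my guess, not something confirmed in the post): D3D11 pipeline state is sticky, so after rendering with the tessellation shader the hull and domain stages stay bound unless they are explicitly cleared, and the runtime expects patch-list topology whenever a hull shader is bound -- stale hull/domain bindings are a classic cause of exactly this kind of device hang. A minimal sketch of clearing those stages when activating the non-tessellation path ('deviceContext', 'm_vertexShader' and 'm_pixelShader' as in the snippets above):

        #include <d3d11.h>

        // Sketch: when switching from the tessellation pipeline back to the
        // regular (or text) one, unbind the hull and domain stages explicitly --
        // D3D11 keeps the last-bound shaders active across draw calls otherwise.
        void activateRegularPipeline(ID3D11DeviceContext* deviceContext,
                                     ID3D11VertexShader* vertexShader,
                                     ID3D11PixelShader* pixelShader)
        {
            deviceContext->VSSetShader(vertexShader, NULL, 0);
            deviceContext->HSSetShader(NULL, NULL, 0);   // clear tessellation stages
            deviceContext->DSSetShader(NULL, NULL, 0);
            deviceContext->PSSetShader(pixelShader, NULL, 0);
            deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        }

    Running with the D3D11 debug layer (D3D11_CREATE_DEVICE_DEBUG) would also surface this kind of state mismatch as an explicit error instead of a driver reset.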

    Read the article

  • Pre-built UI components for displaying SSRS Local Mode Parameters

    - by namenlos
    BACKGROUND
    I am writing a WinForms app that uses the SSRS Report Viewer control to render reports in local mode -- there is no SSRS server involved at all. I can successfully create and execute reports in local mode. What I want to do is add parameters to the report.

    THE PROBLEM
    When the Report Viewer control executes a report in local mode, it does not provide any UI that allows the user to enter values for parameters defined in the report. This is in contrast to when the Report Viewer control is running a server report -- there, parameters are shown.

    MY QUESTION
    Are there pre-built components or even sample code I can use that would provide a reasonable parameter user experience for this scenario (scenario = SSRS + Report Viewer control + local mode)? Yes, I could write this UI code myself; what I am looking for is existing code, to avoid having to implement this UI, because I'd rather spend my time making the content of the report work well.

    NOTES
    Switching to another reporting engine besides SSRS is not an option. Switching to server-side reports is not an option.

    Read the article

  • What causes a JRE 6 JVM code cache leak?

    - by Arturo Knight
    Since switching to JRE 6, my server's code cache usage (non-heap) keeps growing indefinitely. My application creates a lot of classes at runtime, BUT these classes are successfully unloaded during the GC process. I can see these classes getting unloaded in the GC logs, and the permGen usage stays constant. I specifically make sure in my code that these classes are orphaned once I am finished with them, so they correctly get garbage collected from permGen. The code cache, however, keeps growing. I only became aware of the code cache after switching to JRE 6. So I guess my questions are: Does GC include the code cache? What could cause a code cache memory leak, specifically? Is there a bug in JDK 6 in this area?

    Read the article

  • How to get default Ctrl+Tab functionality in WinForms MDI app when hosting WPF UserControls

    - by jpierson
    I have a WinForms-based app with a traditional MDI implementation, except that I'm hosting WPF-based UserControls via the ElementHost control as the main content for each of my MDI children. This is the solution recommended by Microsoft for achieving MDI with WPF, although it unfortunately has various side effects. One of them is that my Ctrl+Tab functionality for switching between MDI children is gone, because the tab key seems to be swallowed up by the WPF controls. Is there a simple solution that will let the Ctrl+Tab key sequences reach my WinForms MDI parent, so that I can get the built-in tab-switching functionality?

    Read the article

  • Best practice for Stopwatch on a multiprocessor machine?

    - by Ahmed Said
    I found a good question about measuring function performance, and the answers recommend using Stopwatch, as follows:

        Stopwatch sw = new Stopwatch();
        sw.Start();
        //DoWork
        sw.Stop();
        //take sw.Elapsed

    But is this valid if you are running on a multiprocessor machine? The thread can be switched to another processor, can't it? The same concern applies to Environment.TickCount. If the answer is yes, should I wrap my code inside BeginThreadAffinity, as follows?

        Thread.BeginThreadAffinity();
        Stopwatch sw = new Stopwatch();
        sw.Start();
        //DoWork
        sw.Stop();
        //take sw.Elapsed
        Thread.EndThreadAffinity();

    P.S. The switching can occur at the thread level, not only the processor level; for example, if the function is running in another thread, the system can switch that thread to another processor. If that happens, will the Stopwatch still be valid after the switch? I am not using Stopwatch for performance measurement only, but also to simulate a timer function using Thread.Sleep (to prevent call overlapping).
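
    Worth noting (background knowledge, not from the post): Stopwatch is built on the OS high-resolution counter, and Thread.BeginThreadAffinity only pins the managed thread to its current OS thread; it does not pin that OS thread to a CPU. Actual CPU pinning is an affinity mask on the thread. A sketch of the equivalent Win32 mechanism in C++, shown only to illustrate what pinning before timing looks like:

        #include <windows.h>
        #include <iostream>

        int main()
        {
            // Pin the current thread to CPU 0 for the duration of the measurement,
            // so the high-resolution counter is read on a single core. On most
            // modern systems QueryPerformanceCounter is synchronized across cores
            // anyway, so treat this as belt-and-braces, not a requirement.
            DWORD_PTR oldMask = SetThreadAffinityMask(GetCurrentThread(), 1);

            LARGE_INTEGER freq, start, stop;
            QueryPerformanceFrequency(&freq);
            QueryPerformanceCounter(&start);
            // ... DoWork ...
            QueryPerformanceCounter(&stop);

            std::cout << "elapsed ms: "
                      << 1000.0 * (stop.QuadPart - start.QuadPart) / freq.QuadPart
                      << std::endl;

            SetThreadAffinityMask(GetCurrentThread(), oldMask); // restore
            return 0;
        }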

    Read the article

  • Increase efficiency for an R simulator of the Monty Hall Puzzle

    - by jahan_m
    The Monty Hall Problem is a simple puzzle involving probability that even stumps professionals in careers dealing with some heavy-duty math. Here's the basic problem: Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice? You can find numerous explanations of the solution here: http://en.wikipedia.org/wiki/Monty_Hall_problem

    Goal of my simulation: prove that a switching strategy will win you the car 2/3 of the time. I got curious and wanted to write a little function that simulates the problem many times and returns the proportion of wins if you switched and the proportion of wins if you stayed with your first choice. The function then plots the cumulative wins. First and foremost, I'm interested in hearing if my simulation is indeed replicating the Monty Hall problem, or if some aspect of the code got it wrong. Secondly, this function takes a long time to run once I get to about 10,000 simulations. I know I don't need this many simulations to prove this, but I'd love to hear some ideas on how to make it more efficient. Thanks for your feedback!

        Monty_Hall = function(repetitions) {
          doors = c('A', 'B', 'C')
          stay_wins = 0
          switch_wins = 0
          series = data.frame(sim_num = seq(repetitions),
                              cum_sum_stay = replicate(repetitions, 0),
                              cum_sum_switch = replicate(repetitions, 0))
          for (i in seq(repetitions)) {
            winning_door = sample(doors, 1)
            contestant_chooses = sample(doors, 1)
            if (contestant_chooses == winning_door) {
              stay_wins = stay_wins + 1
            } else {
              switch_wins = switch_wins + 1
            }
            series[i, 'cum_sum_stay'] = stay_wins
            series[i, 'cum_sum_switch'] = switch_wins
          }
          plot(series$sim_num, series$cum_sum_switch, col = 2,
               ylab = 'Cumulative # of wins', xlab = 'Simulation #',
               main = sprintf('%d Simulations of the Monty Hall Paradox', repetitions),
               type = 'l')
          lines(series$sim_num, series$cum_sum_stay, col = 4)
          legend('topleft',
                 legend = c('Cumulative wins from switching', 'Cumulative wins from staying'),
                 col = c(2, 4), lty = 1)
          result = list(series = series, stay_wins = stay_wins, switch_wins = switch_wins,
                        proportion_stay_wins = stay_wins / repetitions,
                        proportion_switch_wins = switch_wins / repetitions)
          return(result)
        }

        # Theory predicts that it is to the contestant's advantage if he switches
        # his choice to the other door. This function simulates the game many
        # times, and shows you the proportion of games in which staying or
        # switching would win the car. It also plots the cumulative wins for
        # each strategy.
        Monty_Hall(100)
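
    Two observations, offered as suggestions rather than tested fixes: in R, the per-iteration writes into the data.frame are the usual bottleneck in loops like this (vectorizing the draws and using cumsum would remove the loop entirely), and if raw speed is the goal, the same experiment is cheap in a compiled language. A minimal C++ sketch (my own naming, not derived from the R code) that reproduces the 2/3 switch-win rate:

        #include <iostream>
        #include <random>

        int main()
        {
            const int repetitions = 1000000;
            std::mt19937 rng(42);                         // fixed seed for reproducibility
            std::uniform_int_distribution<int> door(0, 2);

            int stay_wins = 0;
            for (int i = 0; i < repetitions; ++i) {
                int winning_door = door(rng);
                int contestant_chooses = door(rng);
                // Staying wins exactly when the first pick was right; otherwise
                // the host's goat reveal means switching would have won the car.
                if (contestant_chooses == winning_door) ++stay_wins;
            }
            std::cout << "proportion stay wins:   "
                      << double(stay_wins) / repetitions << "\n"
                      << "proportion switch wins: "
                      << double(repetitions - stay_wins) / repetitions << "\n";
            return 0;
        }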

    Read the article

  • iPhone Debugger Message -- Weird

    - by Bill Shiff
    Hello, I have an iPhone app that I've been working on, and I have recently upgraded my version of Xcode. Since the upgrade, I can build and debug in the iPhone Simulator just fine, but when I try to debug on an attached device I get the following messages from Xcode 4:

        GNU gdb 6.3.50-20050815 (Apple version gdb-1510) (Fri Oct 22 04:12:10 UTC 2010)
        Copyright 2004 Free Software Foundation, Inc.
        GDB is free software, covered by the GNU General Public License, and you are
        welcome to change it and/or distribute copies of it under certain conditions.
        Type "show copying" to see the conditions.
        There is absolutely no warranty for GDB. Type "show warranty" for details.
        This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".
        tty /dev/ttys001
        sharedlibrary apply-load-rules all
        warning: Unable to read symbols from "dyld" (prefix __dyld_) (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/MessageUI.framework/MessageUI (file not found).
        warning: Unable to read symbols from "MessageUI" (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/MapKit.framework/MapKit (file not found).
        warning: Unable to read symbols from "MapKit" (not yet mapped into memory).
        warning: Unable to read symbols from "Foundation" (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/UIKit.framework/UIKit (file not found).
        warning: Unable to read symbols from "UIKit" (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/CoreGraphics.framework/CoreGraphics (file not found).
        warning: Unable to read symbols from "CoreGraphics" (not yet mapped into memory).
        warning: Unable to read symbols from "CoreData" (not yet mapped into memory).
        warning: Unable to read symbols from "QuartzCore" (not yet mapped into memory).
        warning: Unable to read symbols from "libgcc_s.1.dylib" (not yet mapped into memory).
        warning: Unable to read symbols from "libSystem.B.dylib" (not yet mapped into memory).
        warning: Unable to read symbols from "libobjc.A.dylib" (not yet mapped into memory).
        warning: Unable to read symbols from "CoreFoundation" (not yet mapped into memory).
        target remote-mobile /tmp/.XcodeGDBRemote-3836-28
        Switching to remote-macosx protocol
        mem 0x1000 0x3fffffff cache
        mem 0x40000000 0xffffffff none
        mem 0x00000000 0x0fff none
        [Switching to thread 11523]
        [Switching to thread 11523]
        gdb stack crawl at point of internal error:
        0 gdb-arm-apple-darwin 0x0013216e internal_vproblem + 316

    Read the article

  • How to make javascript for font sizes fully language dependent?

    - by toonoid
    Hello All, I have a multilanguage website, English and Arabic. The JavaScript for the font sizes is included in the theme, not in the .css. The two languages have different font sizes, 16 and 12. When switching from English to Arabic, the English letters (such as dates and stuff like that) are shown way too big, and the Arabic letters are too small. I would really like to make this JavaScript fully language dependent, so that the English letters shown in the Arabic version keep the same font size as in the English version, and vice versa for Arabic. The following script is the current JavaScript included in the theme:

        function setstyle(tag) {
            var elements = document.getElementsByTagName(tag);
            for (var i = 0; i ...

    I would really appreciate it if somebody out there could help me with this problem. My knowledge of JavaScript is, to be honest, zero. Thanks, TOONOID

    Read the article

  • CAD/CAM without C++

    - by zaladane
    Hello, is it possible to do CAD/CAM software without having to use C++? My company developed its software in C/C++, but that was more than 10 years ago. Today there is a lot of legacy code that switching would force us to get rid of, but I was wondering what the actual risks are. We have a lot of mathematical algorithms for toolpath calculations, feature recognition and simulation, and 3D rendering, and I was wondering if C# can handle all of that without great performance loss. Is it utopian to rewrite such algorithms in C#, or should that language only deal with the UI? We are not talking about game development here (Halo 3 or Call of Duty), so how much processing does CAD/CAM really need? Can anybody enlighten me on this matter? Most of my colleagues are hardcore C++ programmers, and although I program in C++ I love .NET, but I am having a hard time selling .NET to them for anything other than basic UI. Does it make sense to consider switching to .NET in such a field, or is it just not a wise idea? Thank you

    Read the article

  • Weird networking problem (Linksys, Windows 7)

    - by Rohit Nair
    Okay, it's a bit tough to figure out where to start, but here is the basic summary of the issue: during general internet usage, there are times when any attempt to visit a website stalls at "Waiting for somedomain.com". This problem occurs in Firefox, IE and Chrome. No website will load, INCLUDING the router configuration page at 192.168.1.1. Curiously, ping works fine, and other network apps such as MSN Messenger continue to work, and I can send and receive messages. Disconnecting and reconnecting to the wireless network seems to fix the problem for a bit, but there are times when it relapses into not loading after every 2-3 HTTP requests. Restarting the router seems to fix the issue, but it can crop up hours or days later. I have a CCNA cert and I know my way around the Windows family of operating systems, so I'm going to list all the things I've tried here.

    Other computers on the network seem to suffer the same problem, which makes me think it might be a specific problem with something in Win7. The random nature of this issue makes it a bit difficult to confirm, but I can definitely say that I have experienced this on the following systems:

    - Windows 7 64-bit on my desktop
    - Windows Vista 32-bit on my desktop (the desktop has 2 wireless NICs and the problem existed on both)
    - Windows Vista 32-bit on my laptop (both wireless and wired)
    - Windows XP SP3 on another laptop (both wireless and wired)

    Using Wireshark to sniff packets seemed to indicate that although HTTP requests were being SENT out, no packets were coming in to respond to the HTTP request. However, other network apps continued to work, i.e. I would still receive IMs on Windows Live Messenger. Disabling IPv6 had no effect. Updating the router firmware to the latest stock firmware by Linksys had no effect. Switching to dd-wrt firmware had no effect. By "no effect" I mean that although the restart required by firmware updates fixed the problem at the time, it still came back.

    A couple of weeks back, after a LOT of googling and flipping of various options, I figured it might be a case of router slowdown (http://www.dd-wrt.com/wiki/index.php/Router%5FSlowdown) caused by the fact that I occasionally run a torrent client. I tried changing the configuration as suggested in that router slowdown link, and restarted the router. However, I have not run the torrent client for 12 days now, and yet I still randomly experience this problem. Currently the computer I am using is running Windows 7 64-bit.

    I would just like to reiterate some of the reasons that I was confused by the issue:

    - Even the router config page at 192.168.1.1 would not load, indicating that it's not a problem with the WAN link, but probably a router issue or a local computer issue.
    - For some reason, disconnecting and reconnecting to the wireless network immediately seems to fix the problem.
    - Updating the router firmware, even switching to open source firmware, did nothing. So it seemed to be a computer issue.
    - On the other hand, I have not seen any mass outrage of people having networking problems with Windows 7 and Linksys routers, especially a problem of this sort, and I have tweaked every network setting I could think of.
    - Although HTTP seems to have trouble, ping works fine, DNS lookups work fine, and other networking apps work fine. However, if I disconnect from Windows Live Messenger and try to reconnect, it fails to reconnect. So although it could receive data over the existing TCP/IP connection, trying to start a new one failed?

    Does anyone have any further ideas on debugging or fixing this issue? I am reasonably certain there are no viruses or other malicious apps on my network, and I am also reasonably certain that nobody is accessing my router without my consent.

    Router: Linksys WRT54G2 1.0 running dd-wrt firmware
    Wireless card: Alfa AWUS036H
    OS: Windows 7 64-bit

    EDIT: I tried switching to a clean wireless channel free from interference, but the problem still persisted. I tried connecting directly with a cable, but the problem still persisted.

    Signed,
    A very confused and bewildered geek whose knowledge seems to be useless in the face of this frustrating network issue.

    Read the article
