Search Results

Search found 16598 results on 664 pages for 'device center'.

  • Interaction of a GUI-based App and Windows Service

    - by psubsee2003
    I am working on a personal project designed to help manage my media library, specifically recordings created by Windows Media Center. The application will have the following parts:

    1. A Windows Service that monitors the recording folder. Once a new recording that meets specific criteria is completed, it calls several 3rd-party CLI applications to remove the commercials and re-encode the video into a more hard-drive-friendly format.
    2. A controller GUI to modify settings of the service, specifically to add new shows to watch for and to modify parameters for the CLI applications.
    3. A standalone (GUI-based) desktop application that can perform many of the same functions as the Windows Service, except manually on specific files instead of automatically based on specific criteria.

    (It should be mentioned that I have limited experience with an application of this complexity, and I have absolutely zero experience with Windows Services.) Since the 1st and 3rd parts share similar functionality, my design plan is to pull the common functionality into a separate library shared by both applications; otherwise these two components do not need to interact. The 2nd and 3rd parts also seem to share some common functionality: both will have a GUI, and both will help define similar parameters (one sends them to the service, the other sends them directly to the CLI applications), so I can see some advantage to combining them into the same application. On the other hand, the standalone application (part 3) really does not need to interact with the service at all, except possibly to share a few common default parameters that could easily be put into an XML file in a common location, so it seems to make more sense to keep everything separate.

    The controller GUI (part 2) is where I am stuck. Do I just roll this functionality (letting the user interact with the service to update settings and criteria) into the standalone application, or would it be a better design decision to keep them separate? Specifically, I'm worried about adding the complexity of communicating with the Windows Service to the standalone application when it doesn't need it. Is WCF the right approach to let the controller GUI interact with the Windows Service, or is there a better alternative? At the moment I don't envision a significant amount of interaction, maybe adding a new task once in a while and occasionally tweaking a parameter, but when something is changed I do expect the Windows Service to use the new settings immediately.

    Read the article

  • Showing ZFS some LOVE

    - by Kristin Rose
    L is for the way you look at us, and O because we're Oracle, but V is very, very extraordinary, and E, well that's obvious... E is because Oracle's new Sun ZFS Storage Appliance is Excellent, and here at OPN, we like to spell out the obvious! If you haven't already heard, the Sun ZFS Appliance has "A simple, GUI-driven setup and configuration, solid price-performance and world-class Oracle support behind it. The CRN Test Center recommends the Sun ZFS Storage". Read more about what CRN said here. Oracle's Sun ZFS Appliance family delivers enterprise-class network attached storage (NAS) capabilities with leading Oracle integration, simplicity, efficiency, performance, and TCO. The systems offer an easy way to manage and expand your storage environment at a lower cost, with more efficiency, better data integrity, and higher performance when compared with competitive NAS offerings. Did we mention that setup, including configuration, will take you less than an hour, since it all comes in one box and is so darn simple to use? So if you L-O-V-E what you're hearing about Oracle's Sun Z-F-S, learn more by watching the video below and visiting any of our available resources. It Had to Be You, The OPN Communications Team

    Read the article

  • Ubuntu 12.04.1 failing in VMs (both VirtualBox and VMware) on new HP Envy 4t-1000

    - by Chas
    Brand new to Linux, and getting frustrated trying to get an environment up with Ubuntu. My primary goal is to learn Linux and Apache/PHP development. I need to keep Windows as the main OS on my machine for work, so I'm trying to virtualize Ubuntu 12.04.1, without luck after many attempts. I have a new HP Envy 4t-1000 with 16 GB RAM, a 32 GB SSD for caching, and a 500 GB spindle hard drive. The graphics are an Intel HD 3000 with an AMD Radeon 7670M.

    When installing Ubuntu desktop in VirtualBox, I get this result: https://forums.virtualbox.org/viewtopic.php?f=6&t=51939 With VMware Workstation 7 (patched), I complete the Ubuntu install, it reboots, the purple desktop briefly flashes, then it drops to a command line. I bought a beginning Ubuntu book, and it recommends manually configuring graphics if this happens. So I tried a safe boot holding Shift: the first (GRUB) screen loads fine, and I choose recovery mode. After choosing recovery mode I get the recovery options and can arrow down to what the book suggests, 'Run in failsafe graphic mode.' Once I select this option, I get a black screen with a large white dialogue box; at the top it says "The system is running in low-graphics mode. Your screen, graphics card, and input device settings could not be detected correctly. You will need to configure these yourself," with an OK button way down at the bottom. When I select OK I get a menu of options; the book recommended 'Reconfigure graphics.' When I try this, I get two options: 1) use generic (default) configuration, or 2) use backup. I've tried both options several times; hitting OK just refreshes the screen and nothing more. Rebooting at this point just goes back to the command line as before. When I choose 'Edit the configuration file' and then OK, the screen likewise just refreshes on the same menu options and nothing happens. I explored GRUB some more and tried to look at the startup and X server logs; both are blank, so no help there, I guess.

    I don't know what to do at this point. I've spent too many hours this weekend trying to get Ubuntu going in both VirtualBox and VMware. Isn't there a very basic graphics mode or something I can use to at least get into the desktop? I really need to focus on learning Linux, Apache and PHP, so perhaps Ubuntu just won't work on my hardware? Any other suggestions? I will need to virtualize. Thanks for any help/advice.

    Read the article

  • 13.10 upgrade dropping wifi [on hold]

    - by Daryl
    Almost a complete newb here. After my last upgrade from 12.04 to 13.10 my wifi now randomly drops. The only way I can get a signal back is a shutdown and restart; otherwise it shows no network is even available to connect to. Had no problems until the upgrade. Any help would be appreciated.

    H/W path             Device    Class       Description
    ====================================================
                                   system      h8-1534 (H2N64AA#ABA)
    /0                             bus         2AC8
    /0/0                           memory      64KiB BIOS
    /0/4                           processor   AMD FX(tm)-6200 Six-Core Processor
    /0/4/5                         memory      288KiB L1 cache
    /0/4/6                         memory      6MiB L2 cache
    /0/4/7                         memory      8MiB L3 cache
    /0/d                           memory      10GiB System Memory
    /0/d/0                         memory      DIMM Synchronous [empty]
    /0/d/1                         memory      4GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
    /0/d/2                         memory      2GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
    /0/d/3                         memory      4GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
    /0/100                         bridge      RD890 PCI to PCI bridge (external gfx0 port B)
    /0/100/0.2                     generic     RD990 I/O Memory Management Unit (IOMMU)
    /0/100/2                       bridge      RD890 PCI to PCI bridge (PCI express gpp port B)
    /0/100/2/0                     display     Turks PRO [Radeon HD 7570]
    /0/100/2/0.1                   multimedia  Turks/Whistler HDMI Audio [Radeon HD 6000 Series]
    /0/100/5                       bridge      RD890 PCI to PCI bridge (PCI express gpp port E)
    /0/100/5/0                     bus         TUSB73x0 SuperSpeed USB 3.0 xHCI Host Controller
    /0/100/11                      storage     SB7x0/SB8x0/SB9x0 SATA Controller [RAID5 mode]
    /0/100/12                      bus         SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
    /0/100/12.2                    bus         SB7x0/SB8x0/SB9x0 USB EHCI Controller
    /0/100/13                      bus         SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
    /0/100/13.2                    bus         SB7x0/SB8x0/SB9x0 USB EHCI Controller
    /0/100/14                      bus         SBx00 SMBus Controller
    /0/100/14.2                    multimedia  SBx00 Azalia (Intel HDA)
    /0/100/14.3                    bridge      SB7x0/SB8x0/SB9x0 LPC host controller
    /0/100/14.4                    bridge      SBx00 PCI to PCI Bridge
    /0/100/14.5                    bus         SB7x0/SB8x0/SB9x0 USB OHCI2 Controller
    /0/100/15                      bridge      SB700/SB800/SB900 PCI to PCI bridge (PCIE port 0)
    /0/100/15.1                    bridge      SB700/SB800/SB900 PCI to PCI bridge (PCIE port 1)
    /0/100/15.2                    bridge      SB900 PCI to PCI bridge (PCIE port 2)
    /0/100/15.2/0        wlan0     network     RT3290 Wireless 802.11n 1T/1R PCIe
    /0/100/15.2/0.1                generic     RT3290 Bluetooth
    /0/100/15.3                    bridge      SB900 PCI to PCI bridge (PCIE port 3)
    /0/100/15.3/0        eth0      network     RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
    /0/100/16                      bus         SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
    /0/100/16.2                    bus         SB7x0/SB8x0/SB9x0 USB EHCI Controller
    /0/101                         bridge      Family 15h Processor Function 0
    /0/102                         bridge      Family 15h Processor Function 1
    /0/103                         bridge      Family 15h Processor Function 2
    /0/104                         bridge      Family 15h Processor Function 3
    /0/105                         bridge      Family 15h Processor Function 4
    /0/106                         bridge      Family 15h Processor Function 5
    /0/1                 scsi0     storage
    /0/1/0.0.0           /dev/sda  disk        1TB WDC WD1002FAEX-0
    /0/1/0.0.0/1                   volume      189MiB Windows FAT volume
    /0/1/0.0.0/2         /dev/sda2 volume      244MiB data partition
    /0/1/0.0.0/3         /dev/sda3 volume      931GiB LVM Physical Volume
    /0/2                 scsi2     storage
    /0/2/0.0.0           /dev/cdrom disk       DVD A DH16ACSHR
    /0/3                 scsi6     storage
    /0/3/0.0.0           /dev/sdb  disk        SCSI Disk
    /0/3/0.0.1           /dev/sdc  disk        SCSI Disk
    /0/3/0.0.2           /dev/sdd  disk        SCSI Disk
    /0/3/0.0.3           /dev/sde  disk        MS/MS-Pro
    /0/3/0.0.3/0         /dev/sde  disk
    /1                             power       Standard Efficiency

    I apologize for my newbness. I hope this is enough info for the hardware. Thanks Bruno for pointing out I needed to add more info. If I am lacking anything else please let me know and I'll post it.

    Read the article

  • After startup, display reverts to one desktop duplicated on both screens

    - by Skilldrick
    I have a Samsung R530 laptop. I'm actually running Linux Mint, which is based on Ubuntu (which is why I thought it would be OK to post this here). I'm now getting exactly the same problem on Ubuntu 10.10. I've got it set up with a secondary LCD monitor, and I've changed the display settings so the laptop screen and monitor are at optimal resolution, with the desktop shared between them. After startup though (typically around 10-30 seconds) the display reverts to 1024 x 768 on both screens, and the display is duplicated (i.e. both screens show the same thing). I think this is something to do with my video card, as I had the same problem when I was running Arch Linux. Any ideas?

    Edit:

    00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 09)
    00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 09)
    00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 09)
    00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)
    00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)
    00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)
    00:1a.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)
    00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
    00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03)
    00:1c.3 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 4 (rev 03)
    00:1d.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
    00:1d.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
    00:1d.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
    00:1d.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93)
    00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03)
    00:1f.2 SATA controller: Intel Corporation ICH9M/M-E SATA AHCI Controller (rev 03)
    00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03)
    02:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)
    04:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8040 PCI-E Fast Ethernet Controller

    It seems it's Intel, not Nvidia. I have a current workaround, which is a launcher on my desktop pointing to a script containing

    xrandr --output LVDS1 --mode 1366x768 --output HDMI1 --mode 1280x1024 --left-of LVDS1

    which I run after the displays revert, but it's not ideal.

    Read the article

  • Best system for creating a 2d racing track

    - by tesselode
    I am working on a 2D racing game and I'm trying to figure out the best way to define the track. At the very least, I need to be able to create a closed circuit with any number of turns at any angle, and I need vehicles to collide with the edges of the track. I also want the following things to be true if possible (but they are optional):

    - The code is simple and free of funky workarounds and extras.
    - I can define all of the parts of the track (such as turns) relative to the previous parts.
    - I can predict the exact position of the road at a certain point (so I can easily and cleanly make closed circuits).

    Here are my options:

    - Use a set of points. This is my current system. I have a set of turns and width changes that the track is supposed to make over time. I have a point which I transform according to these instructions, and I place a point every 5 steps or so, depending on how precise I want the track to be. These points make up the track. The main problem with this is the discrepancy between the collisions and the way the track is drawn. I won't get into too much detail, but the picture below shows what is happening (although it is exaggerated a bit): the blue lines are what is drawn, the red lines are what the vehicle collides with. I could work around this, but I'd rather avoid funky workaround code.
    - Bézier curves. These seem cool, but my first impression is that they'll be a little daunting to learn and are probably too complicated for my needs.
    - Some other kind of curve? I have heard of other kinds of curves; maybe those are more applicable.
    - Use Box2D or another physics engine. Instead of defining the center of the track, I could use a physics engine to define shapes that make up the road. The downside, however, is that I have to put in a little more work to place the checkpoints.
    - Something completely different.

    Basically, what is the simplest system for generating a race track that would allow me to create closed circuits cleanly, handle collisions, and not have a ton of weird code?
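    To make the centerline idea concrete, here is a minimal sketch (plain C++, every type and function name invented for illustration, none of them from the post) that generates the left and right track edges from the same list of turn instructions, so drawing and collision always consume identical polylines:

    // Hedged sketch, not from the original post: derive the drawn track and the
    // collision track from the same centerline samples so they cannot disagree.
    #include <cmath>
    #include <vector>

    struct Vec2 { float x, y; };

    struct TrackStep {      // one instruction: turn by 'turn' radians, advance 'length' units,
        float turn;         // and use 'halfWidth' as half the road width at this point
        float length;
        float halfWidth;
    };

    struct TrackEdges {
        std::vector<Vec2> left;   // polylines used for BOTH drawing and collision
        std::vector<Vec2> right;
    };

    TrackEdges buildTrack(const std::vector<TrackStep>& steps, Vec2 start, float startHeading)
    {
        TrackEdges edges;
        Vec2 pos = start;
        float heading = startHeading;

        for (const TrackStep& s : steps) {
            heading += s.turn;
            pos.x += std::cos(heading) * s.length;
            pos.y += std::sin(heading) * s.length;

            // left-hand normal of the current travel direction
            Vec2 normal = { -std::sin(heading), std::cos(heading) };

            edges.left.push_back ({ pos.x + normal.x * s.halfWidth, pos.y + normal.y * s.halfWidth });
            edges.right.push_back({ pos.x - normal.x * s.halfWidth, pos.y - normal.y * s.halfWidth });
        }
        return edges;
    }

    Because the renderer and the collision test would both read edges.left and edges.right, the blue and red lines in the picture can no longer diverge; closing the circuit then reduces to making the accumulated turns sum to a full revolution and re-appending the first edge points at the end.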

    Read the article

  • Are these non-standard applications of rendering practical in games?

    - by maul
    I've recently got into 3D and I came up with a few different "tricky" rendering techniques. Unfortunately I don't have the time to work on this myself, but I'd like to know if these are known methods and if they can be used in practice.

    Hybrid rendering

    Now I know that ray-tracing is still not fast enough for real-time rendering, at least on home computers. I also know that hybrid rendering (a combination of rasterization and ray-tracing) is a well known theory. However I had the following idea: one could separate a scene into "important" and "not important" objects. First you render the "not important" objects using traditional rasterization. In this pass you also render the "important" objects using a special shader that simply marks these parts on the image using a special color, or some stencil/depth buffer trickery. Then in the second pass you read back the results of the first pass and start ray tracing, but only from the pixels that were marked by the "important" object's shader. This would allow you to only ray-trace exactly what you need to. Could this be fast enough for real-time effects?

    Rendered physics

    I'm specifically talking about bullet physics - intersection of a very small object (point/bullet) that travels across a straight line with other, relatively slow-moving, fairly constant objects. More specifically: hit detection. My idea is that you could render the scene from the point of view of the gun (or the bullet). Every object in the scene would draw a different color. You only need to render a 1x1 pixel window - the center of the screen (again, from the gun's point of view). Then you simply check that central pixel and the color tells you what you hit. This is pixel-perfect hit detection based on the graphical representation of objects, which is not common in games. Afaik traditional OpenGL "picking" is a similar method. This could be extended in a few ways:

    - For larger (non-bullet) objects you render a larger portion of the screen.
    - If you put a special-colored plane in the middle of the scene (exactly where the bullet will be after the current frame) you get a method that works as the traditional slow-moving iterative physics test as well.
    - You could simulate objects that the bullet can pass through (with decreased velocity) using alpha blending or some similar trick.

    So are these techniques in use anywhere, and/or are they practical at all?
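    As a rough illustration of the 1x1-pixel hit test described above, the sketch below uses classic OpenGL colour picking; renderSceneWithIdColors() is a hypothetical function (not from the post) assumed to draw every object in a unique flat colour:

    // Hedged sketch of the "rendered physics" hit test, via colour-ID readback.
    #include <GL/gl.h>
    #include <cstdint>

    extern void renderSceneWithIdColors();   // assumed to exist elsewhere

    // Returns the id of the object under the centre pixel, or 0 for "nothing hit".
    uint32_t queryHitAtCenter(int viewportWidth, int viewportHeight)
    {
        renderSceneWithIdColors();           // draw from the gun/bullet point of view
        glFinish();                          // make sure rendering finished before reading

        unsigned char rgb[3] = {0, 0, 0};
        glReadPixels(viewportWidth / 2, viewportHeight / 2, 1, 1,
                     GL_RGB, GL_UNSIGNED_BYTE, rgb);

        // Pack the colour back into the object id it encodes.
        return (uint32_t(rgb[0]) << 16) | (uint32_t(rgb[1]) << 8) | uint32_t(rgb[2]);
    }

    In practice this pass is usually drawn into a small off-screen framebuffer rather than the visible one, and the glReadPixels readback stalls the pipeline, which is the main practical cost of the technique.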

    Read the article

  • How to snap a 2D Quad to the mouse cursor using OpenGL 3.0/WIN32?

    - by NoobScratcher
    I've been having issues trying to snap a 2D quad to the mouse cursor position. I'm able: 1) to get values into posX, posY, posZ; 2) to translate with the values from those 3 variables. But I'm not able to position the quad correctly so that it sits near the mouse cursor using the values from those 3 variables, e.g. "posX, posY, posZ". I need the mouse cursor in the center of the 2D quad. I'm hoping someone can help me achieve this; I've tried searching around to no avail. Here's the function that is meant to do the snapping, but instead it creates a weird flicker or shows nothing at all (only the 3D models show up):

    void display()
    {
        glClearColor(0.0,0.0,0.0,1.0);
        glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);

        for(std::vector<GLuint>::iterator I = cube.begin(); I != cube.end(); ++I)
        {
            glCallList(*I);
        }

        if(DrawArea == true)
        {
            glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
            cerr << winZ << endl;

            glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
            glGetDoublev(GL_PROJECTION_MATRIX, projection);
            glGetIntegerv(GL_VIEWPORT, viewport);

            gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX, &posY, &posZ);

            glBindTexture(GL_TEXTURE_2D, DrawAreaTexture);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, DrawAreaSurface->w, DrawAreaSurface->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, DrawAreaSurface->pixels);

            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, DrawAreaTexture);

            glTranslatef(posX, posY, posZ);

            glBegin(GL_QUADS);
            glTexCoord2f(0.0, 0.0); glVertex3f(0.5, 0.5, 0);
            glTexCoord2f(1.0, 0.0); glVertex3f(0, 0.5, 0);
            glTexCoord2f(1.0, 1.0); glVertex3f(0, 0, 0);
            glTexCoord2f(0.0, 1.0); glVertex3f(0.5, 0, 0);
            glEnd();
        }

        SwapBuffers(hDC);
    }

    I'm using: OpenGL 3.0, WIN32 API, C++, GLSL. If you really want the full source, here it is - http://pastebin.com/1Ncm9HNf - it's pretty messy.
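    One possible cause of the flicker, offered here as a hedged sketch rather than a confirmed fix: the glTranslatef call modifies the current modelview matrix and is never undone, so the translation accumulates every frame, and the quad's vertices run from (0, 0) to (0.5, 0.5), which puts the unprojected cursor point at a corner rather than at the center. A variant of the drawing section that addresses both (names and sizes taken from the snippet above, the rest assumed):

    // Hedged sketch (not the original author's code): wrap the translation in
    // glPushMatrix/glPopMatrix so it does not accumulate across frames, and draw the
    // quad symmetrically around the origin so the unprojected point is its center.
    const float half = 0.25f;   // half the quad size used in the original snippet

    glPushMatrix();
    glTranslatef(posX, posY, posZ);       // move the quad's center to the cursor hit point

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(-half,  half, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex3f( half,  half, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex3f( half, -half, 0.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(-half, -half, 0.0f);
    glEnd();

    glPopMatrix();                        // restore the modelview matrix for the next frame

    A related gotcha worth checking: gluUnProject expects the window Y coordinate measured from the bottom of the viewport, so a Win32 mouse Y usually needs to be flipped (viewport[3] - mouseY) before unprojecting.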

    Read the article

  • WebLogic Partner Community Newsletter October 2012

    - by JuergenKress
    Dear WebLogic partner community member, Oracle OpenWorld and JavaOne are just over, with lots of product updates and highlights. In this newsletter you will find the key information on many new products and launches. Make sure you download the presentations from our WebLogic Community Workspace (WebLogic Community membership required) to train yourself and for your next customer meeting. Thanks for all the tweets #WebLogicCommunity, the pictures on our Facebook page and the nice blog posts from Guido & Lucas & Jan. JavaOne was a super success - JavaOne 2012: Strategy and Technical Keynote - Java 2.5 years after the acquisition - IDC report - make the future Java! If you want to become a Java expert, make sure you attend one of our WebLogic 12c Bootcamps or our first ExaLogic Hackers Night - November 19th, Nürnberg, Germany. All developers can use WebLogic free of charge! For developers, there is lots of ADF news on Oracle ADF Essentials & ADF training material now on the iPad by Grant Ronald & GlassFish Extension for Oracle JDeveloper & Installing, Configuring, and Testing WebLogic Server 12c Developer Zip Distribution in NetBeans. If you want to become a certified WebLogic company, WebLogic Server 12c Specialization is now available for you: just go to the Knowledge Zone section, select the "Specialization" tab and click on "Apply Now". Now available: WebLogic Server 12c Implementation Specialist Boot Camp LVT. Now in production: Oracle WebLogic Server 12c Implementation Specialist certification (1Z0-599). In our specialization benefit series we highlight this month the opportunity to promote your WebLogic services with Google ads. Torsten Winterberg, OFM ACE Director, published Mobile Web Applications - A guide for professional development. Please feel free to let us know if you publish a book or article! Hope to see you at the Middleware Day at the UK Oracle User Group Conference 2012 in Birmingham. Jürgen Kress, Oracle WebLogic Partner Adoption EMEA. To read the newsletter please visit http://tinyurl.com/WebLogicnewsOctober2012 (OPN account required). To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic Community newsletter,newsletter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • iScroll wrapper doesn't get a height

    - by MCSell
    I got the following code:

    <div data-role="content" height="100%" data-iscroll>
        <div class="homebutton_zeile">
            <a id="picture_home" href="#pictrues">
                <div class="homebutton_all">
                    <div class="homebutton_name">Picture</div>
                    <div class="homebutton_picture">
                        <img src="images/picture.png" alt="image" style="position: relative;">
                    </div>
                </div>
            </a>
        </div>
    </div>
    </div>

    My CSS classes:

    .homebutton_zeile {
        width: 100%;
        height: 30%;
    }
    .homebutton_all {
        width: 30%;
        height: 90%;
        float: left;
        margin-left: 2%;
        margin-top: 15px;
    }
    .homebutton_picture {
        position: relative;
        width: 100%;
        height: 85%;
        float: left;
        background-color: #AAC7BD;
        border: 1px solid black;
        border-radius: 15px;
        box-shadow: 8px 8px 8px #666;
    }
    .homebutton_name {
        text-align: center;
        position: relative;
        top: -10px;
        width: 100%;
        height: 15%;
        margin-left: auto;
        text-decoration: none;
        color: black;
    }

    I am using: iscroll.js, jQuery 1.8.2, jQuery Mobile 1.2.0, jQuery Mobile iScrollview, and (if it's needed to know) jstorage.js and fastclick.js. But the div above is not getting a height at the iScroll wrapper. There is also a login before this, and this page is shown automatically after the login via a $.mobile.changePage("#home"); call. I tried making it the first page, before the changePage call, and it gave me the same effect. If I put an &nbsp; after the <div data-role="content" height="100%" data-iscroll>, the wrapper gets a height of 15px for the &nbsp;, but not for the images inside.

    Read the article

  • OSB unit testing, part 1 by Qualogy

    - by JuergenKress
    In my current project, I inherited a lot of OSB components that have been developed by (former) team members, but they all lack unit tests. This is a situation I really dislike, since it makes it much harder to refactor or bug-fix the existing code base. So, for all newly created components (and components I have to bug-fix) I strive to add unit tests. Of course, the unit tests will be created using my favourite testing tool: soapUI!

    Unit of test

    The unit test should be created for the service composition, which in OSB terms is the proxy service in combination with its business service. Now, since you do not want to rely on any other services, you should provide mock services for all services invoked from your Component-Under-Test. In a previous article, I wrote about mocking your services in soapUI. While this approach would also be valid here, creating a mock service (and certainly deploying it on a separate web server) does violate one of the core principles of unit testing: to make your unit tests as self-contained as possible, i.e. not depending on any external components. In this article, I will show you how to achieve this by simply providing a mock response inside your unit test.

    Scenario

    The scenario I implement for testing is a simple currency converter; the external request consists of a from and a to currency, and an amount (in currency from). The service will perform an exchange rate lookup using the WebServiceX CurrencyConverter and return a response to the caller consisting of both the source and target currencies and amounts. For the purpose of unit testing, I will implement a mock response for the exchange rate lookup. Read the complete article here.

    SOA & BPM Partner Community. For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Qualogy,OSB,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • How should game objects be aware of each other?

    - by Jefffrey
    I find it hard to organize game objects so that they are polymorphic but at the same time not polymorphic. Here's an example: assume that we want all our objects to update() and draw(). In order to do that we need to define a base class GameObject which has those two pure virtual methods and let polymorphism kick in:

    class World {
    private:
        std::vector<GameObject*> objects;

    public:
        // ...
        update() {
            for (auto& o : objects) o->update();
            for (auto& o : objects) o->draw(window);
        }
    };

    The update method is supposed to take care of whatever state the specific class object needs to update. The fact is that each object needs to know about the world around it. For example:

    - A mine needs to know if someone is colliding with it
    - A soldier should know if another team's soldier is in proximity
    - A zombie should know where the closest brain, within a radius, is

    For passive interactions (like the first one) I was thinking that the collision detection could delegate what to do in specific cases of collisions to the object itself with an on_collide(GameObject*). Most of the other information (like the other two examples) could just be queried from the game world passed to the update method. Now the world does not distinguish objects based on their type (it stores all objects in a single polymorphic container), so what it will in fact return from an ideal world.entities_in(center, radius) is a container of GameObject*. But of course a soldier does not want to attack other soldiers from his own team, and a zombie doesn't care about other zombies. So we need to distinguish the behavior. A solution could be the following:

    void TeamASoldier::update(const World& world) {
        auto list = world.entities_in(position, eye_sight);
        for (const auto& e : list)
            if (auto enemy = dynamic_cast<TeamBSoldier*>(e))
                // shoot towards enemy
    }

    void Zombie::update(const World& world) {
        auto list = world.entities_in(position, eye_sight);
        for (const auto& e : list)
            if (auto enemy = dynamic_cast<Human*>(e))
                // go and eat brain
    }

    but of course the number of dynamic_cast<> calls per frame could be horribly high, and we all know how slow dynamic_cast can be. The same problem also applies to the on_collide(GameObject*) delegate that we discussed earlier. So what is the ideal way to organize the code so that objects can be aware of other objects and be able to ignore them or take actions based on their type?
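    One common way around the per-frame dynamic_cast cost, sketched below with invented names (Faction, faction()) purely for illustration: store a cheap tag on the base class and filter on that, reserving double dispatch (a visitor) for cases where behaviour really differs per concrete type.

    // Hedged sketch, not from the original post: replace dynamic_cast with a cheap
    // tag stored once on the base class.
    #include <vector>

    enum class Faction { Neutral, TeamA, TeamB, Zombie, Human };

    class GameObject {
    public:
        virtual ~GameObject() = default;
        virtual void update(const class World& world) = 0;
        Faction faction() const { return faction_; }
    protected:
        explicit GameObject(Faction f) : faction_(f) {}
    private:
        Faction faction_;
    };

    // Inside TeamASoldier::update(): filter by tag instead of by dynamic type.
    // for (GameObject* e : world.entities_in(position, eye_sight))
    //     if (e->faction() == Faction::TeamB)
    //         shootAt(e);   // hypothetical helper

    The tag comparison is a single integer test per entity, which for "ignore or attack" style checks is usually all that is needed.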

    Read the article

  • Help recovering broken OS (permissions issue)

    - by Guandalino
    (At the bottom there is an important update.) I was doing experiments in order to back up a remote account to my local system, Ubuntu 12.04 LTS. I'm not confident with duplicity and probably, due to wrong syntax, some local files have been replaced with remote files. This is just a supposition; I'm not sure it is the real cause of the OS corruption. The corruption happened after experimenting with backups, so I think I did something wrong in that regard. I became aware there was a problem when I tried to run a command using sudo:

    $ sudo ls
    sudo: unable to open /etc/sudoers: Permission denied
    sudo: no valid sudoers sources found, quitting
    sudo: unable to initialize policy plugin

    This is how /etc/sudoers looks:

    $ ls -ald /etc/sudoers
    -r--r----- 1 root root 788 Oct 2 18:30 /etc/sudoers

    At this point I tried to reboot, and now this is the message I get: "The system is running in low graphics mode. Your screen, graphics card and input device settings could not be detected correctly. You will need to configure these yourself." I tried to follow the wizard to configure these settings, but without luck (the system prevents me from going on when I press "Next"). The thing that makes me a bit less worried is that all the data on the disk seems readable and I'm able to access it using a live CD. I ran memtest and the RAM seems to be OK. Do you have any idea how to recover my system? I'm very glad to provide further information; just let me know what info could be helpful.

    UPDATE. The issue is about wrong permissions, and this is how I discovered it: I mounted the root partition of the broken OS on /mnt/broken/ (live CD) and did ls /mnt/broken/. I got a permission denied error, while I expected to see the directory listing. I had to do sudo ls /mnt/broken/ and this worked. Thus without root permission via sudo it's impossible to access the root of the broken OS. The current output of ls -ld /mnt/broken/ is:

    drwxr-x--- 29 1000 812 4096 2012-12-08 21:58 /mnt/broken

    Any thoughts on how to restore the old (working) set of permissions?

    Read the article

  • How do I make camera move at same speed when rotating and moving forward

    - by dez
    I made a camera in DX9. To move forward I press the Up arrow; to rotate on the Y axis I use the mouse. When I perform these movements on their own the camera moves at the speed I want. However, if I hold down Up and move the mouse at the same time, the camera moves a lot faster than it should. I want it to move at the same speed as it does when only the Up arrow is pressed. I think I need to normalize something somewhere, but I'm not sure what and not sure where. I have tried various combinations without success, so if anyone can point me in the right direction that would be great. Thanks. I've posted the code below.

    #define KEY_DOWN(vk_code) ((GetAsyncKeyState(vk_code) & 0x8000) ? 1 : 0)

    LRESULT WINAPI MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam )
    {
        if( KEY_DOWN(VK_UP))
            MovePlayer(D3DXVECTOR3(0, 0, -1.0f));
        if( KEY_DOWN(VK_DOWN))
            MovePlayer(D3DXVECTOR3(0, 0, 1.0f));

        switch( msg )
        {
            case WM_MOUSEMOVE:
                ProcessMouseInput();
        }
    }

    void MovePlayer( D3DXVECTOR3 in_vec )
    {
        D3DXMATRIX CameraRot;
        D3DXMatrixRotationY(&CameraRot, D3DXToRadian(AngleY));

        D3DXVECTOR3 CameraRotTarget;
        D3DXVec3TransformNormal(&CameraRotTarget, &in_vec, &CameraRot);

        CameraPos += (m_timeElapsed * CameraRotTarget);
    }

    void ProcessMouseInput()
    {
        GetCursorPos( &CurrentMouseState );

        if ((CurrentMouseState.x != GameMouseState.x) || (CurrentMouseState.y != GameMouseState.y))
        {
            int dx = CurrentMouseState.x - GameMouseState.x;
            int dy = CurrentMouseState.y - GameMouseState.y;

            AngleY += m_timeElapsed * dx * 7.0f;
        }

        GameMouseState = CurrentMouseState; // Set back to window center in Render function
    }

    VOID UpdateCamera()
    {
        D3DXVECTOR3 CameraOrigTarget(0, 0, -1);
        D3DXVECTOR3 CameraOrigUp(0, 1, 0);

        D3DXMATRIX CameraRot;
        D3DXMATRIX CameraRotX;
        D3DXMatrixRotationX(&CameraRotX, D3DXToRadian(AngleX));
        D3DXMATRIX CameraRotY;
        D3DXMatrixRotationY(&CameraRotY, D3DXToRadian(AngleY));
        CameraRot = CameraRotX * CameraRotY;

        D3DXVECTOR3 CameraRotTarget;
        D3DXVec3TransformNormal(&CameraRotTarget, &CameraOrigTarget, &CameraRot);

        D3DXVECTOR3 CameraTarget;
        CameraTarget = CameraPos + CameraRotTarget;

        D3DXVECTOR3 vUpVec( 0.0f, 1.0f, 0.0f );

        D3DXMatrixLookAtLH( &matView, &CameraPos, &CameraTarget, &vUpVec );
        g_pd3dDevice->SetTransform( D3DTS_VIEW, &matView );

        D3DXMatrixPerspectiveFovLH( &matProj, D3DX_PI / 4, 1.0f, 1.0f, 100.0f );
        g_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj );
    }
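    A likely explanation, offered as a sketch rather than a verified fix: MovePlayer is called from MsgProc, so it runs once per Windows message rather than once per frame. Holding Up while moving the mouse floods the queue with WM_MOUSEMOVE messages, so MovePlayer fires many extra times and the camera speeds up. Polling the keyboard once per frame (and normalizing the combined direction) keeps the speed constant:

    // Hedged sketch, not the original code: gather input exactly once per frame.
    void UpdateInput()                              // called from the render/update loop
    {
        D3DXVECTOR3 move(0.0f, 0.0f, 0.0f);

        if (GetAsyncKeyState(VK_UP)   & 0x8000) move.z -= 1.0f;
        if (GetAsyncKeyState(VK_DOWN) & 0x8000) move.z += 1.0f;

        if (move.z != 0.0f)
        {
            D3DXVec3Normalize(&move, &move);        // unit-length direction; MovePlayer
            MovePlayer(move);                       // already scales by m_timeElapsed
        }

        ProcessMouseInput();                        // mouse handled here too, once per frame
    }

    With this arrangement the message loop only dispatches messages, and all movement is scaled by m_timeElapsed exactly once per frame, so adding mouse input no longer changes the forward speed.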

    Read the article

  • Properly create centered vertical navigation list with a hover image on both sides?

    - by Damainman
    I have a navigation list placed inside a <div>, and I am attempting to get an image that appears on both sides of a link when you hover: basically an arrow on each side of the link. I have managed to get the effect I am looking for with:

    <ul>
        <li style=""><a href="#">Services</a></li>
        <li><a href="#">About</a></li>
        <li><a href="#">Media</a></li>
        <li><a href="#">FAQ</a></li>
        <li><a href="#">Portfolio</a></li>
        <li><a href="#">Contact</a></li>
    </ul>

    .nav ul {
        list-style: none;
        text-align: center;
    }
    .nav li:hover a:before,
    .nav li:hover a:after {
        content: url(../images/nav_bullet.png);
    }

    The end result will look like the following, but with the list centered:

    link
    >> Hovered Link <<
    link

    However, I am not sure if that is the best way of going about it, and the image is too close to the text. I tried placing a margin and padding but that didn't work. To top it off, the image is not vertically centered with the link text. Does anyone know of a proper way to do this, as I am just going by trial and error? Thank you in advance!

    Read the article

  • C# Collision Math Help

    - by user36037
    I am making my own collision detection in MonoGame. I have a PolyLine class that has a property to return the normal of that PolyLine instance. I have a ConvexPolySprite class that has a List<PolyLine> LineSegments. I have a CircleSprite class that has a Center property and a Radius property. I am using a static class for the collision detection method. I am testing it on a single line segment, from Vector2(200, 0) to Vector2(300, 200). The problem is that it detects a collision anywhere along the path of the line, out into space, and I cannot figure out why. Thanks in advance.

    public class PolyLine
    {
        //---------------------------------------------------------------------------------------------------------------------------
        // Class Properties

        /// <summary>
        /// Property for the upper left-hand corner of the owner of this instance
        /// </summary>
        public Vector2 ParentPosition { get; set; }

        /// <summary>
        /// Relative start point of the line segment
        /// </summary>
        public Vector2 RelativeStartPoint { get; set; }

        /// <summary>
        /// Relative end point of the line segment
        /// </summary>
        public Vector2 RelativeEndPoint { get; set; }

        /// <summary>
        /// Property that gets the absolute position of the starting point of the line segment
        /// </summary>
        public Vector2 AbsoluteStartPoint
        {
            get { return ParentPosition + RelativeStartPoint; }
        } //end of AbsoluteStartPoint

        /// <summary>
        /// Gets the absolute position of the end point of the line segment
        /// </summary>
        public Vector2 AbsoluteEndPoint
        {
            get { return ParentPosition + RelativeEndPoint; }
        } //end of AbsoluteEndPoint

        public Vector2 NormalizedLeftNormal
        {
            get
            {
                Vector2 P = AbsoluteeEndPoint - AbsoluteStartPoint;
                P.Normalize();
                float x = P.X;
                float y = P.Y;
                return new Vector2(-y, x);
            }
        } //end of NormalizedLeftNormal

        //---------------------------------------------------------------------------------------------------------------------------
        // Class Constructors

        /// <summary>
        /// Sole ctor
        /// </summary>
        /// <param name="parentPosition"></param>
        /// <param name="relStart"></param>
        /// <param name="relEnd"></param>
        public PolyLine(Vector2 parentPosition, Vector2 relStart, Vector2 relEnd)
        {
            ParentPosition = parentPosition;
            RelativeEndPoint = relEnd;
            RelativeStartPoint = relStart;
        } //end of ctor
    } //end of PolyLine class

    public static bool Collided(CircleSprite circle, ConvexPolygonSprite poly)
    {
        var distance = Vector2.Dot(circle.Position - poly.LineSegments[0].AbsoluteEndPoint, poly.LineSegments[0].NormalizedLeftNormal) + circle.Radius;

        if (distance <= 0)
        {
            return false;
        }
        else
        {
            return true;
        }
    } //end of collided
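    For comparison, here is a hedged sketch of a segment-aware test (written in C++; the same math maps directly onto MonoGame's Vector2). The usual reason a circle-versus-line test fires anywhere along the line's path is that the projection onto the segment is never clamped, so the infinite line through the segment is what actually gets tested:

    // Hedged sketch, not the original code: clamp the projection to the segment.
    #include <algorithm>
    #include <cmath>

    struct Vec2 { float x, y; };

    static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

    bool circleIntersectsSegment(Vec2 center, float radius, Vec2 a, Vec2 b)
    {
        const Vec2 ab = { b.x - a.x, b.y - a.y };
        const Vec2 ac = { center.x - a.x, center.y - a.y };

        // Parameter of the closest point on the infinite line, clamped to the segment [0, 1].
        float t = dot(ac, ab) / dot(ab, ab);
        t = std::max(0.0f, std::min(1.0f, t));

        const Vec2 closest = { a.x + ab.x * t, a.y + ab.y * t };
        const float dx = center.x - closest.x;
        const float dy = center.y - closest.y;

        return dx * dx + dy * dy <= radius * radius;   // compare squared distances
    }

    Clamping t to [0, 1] is the step the single dot-product test above is missing; the signed-distance-plus-radius comparison only bounds the circle against the infinite line, not against the finite segment.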

    Read the article

  • No Sound in 12.04 on Acer TimelineX 3830TG

    - by Fawkes5
    When I first installed Ubuntu 12.04 everything worked fine. All of a sudden my sound has stopped working! This has happened before (about 2 days ago); however, the next day it started working again. That may happen again... (I've already rebooted 5 times.) My sound card is an HDA Intel PCH, my sound chip is Intel CougarPoint HDMI, and my machine is an Acer TimelineX 3830TG. Here's my lspci output:

    00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
    00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
    00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
    00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
    00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 04)
    00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
    00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b4)
    00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b4)
    00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b4)
    00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b4)
    00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 04)
    00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 04)
    00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 04)
    00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 04)
    01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 540M] (rev ff)
    02:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0)
    03:00.0 Network controller: Atheros Communications Inc. AR9287 Wireless Network Adapter (PCI-Express) (rev 01)
    04:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5116 PCI Express Card Reader (rev 01)
    05:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04)

    I'm running a dual boot with Windows 7; sound always works there. I've tried loading earlier kernels (3.2.0), but that hasn't helped. I've tried playing with alsamixer, unmuting and raising the volume: doesn't work. I've tried playing with pavucontrol: doesn't work. Although, strangely, when I play a song, under output I can see a bar moving, implying that something is working; the sound just isn't getting through my speakers. Thus it might be a hardware problem. I've tried uninstalling and reinstalling pulseaudio and alsamixer: doesn't work (maybe made it worse...). I've heard reverting back to kernel 3.0 works, but I'm reluctant to try that (I don't know much about kernels). Thank you. Please tell me what other information you need.

    EDIT: My audio has started working again after another reboot. However, this time I'm not connected to my external audio on startup. I will see if this is the issue next time I'm at home.

    EDIT: Connecting to external audio is not the problem.

    Read the article

  • OpenGL textures in bitmap mode

    - by evenex_code
    For reasons detailed here I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so:

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0);

    The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..." Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised:

    1) When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars with 31-wide spaces between them. However, it appears blank.

    2) Next, I write with 0x000000FF. By my apparently flawed understanding of the bitmap mode, I would expect that this should produce 8-wide bars with 24-wide spaces between them. Instead, it produces a white 1-px-wide bar.

    3) 0x55555555 = 1010101010101010101010101010101, therefore writing this value ought to create 1-wide vertical stripes with 1-pixel spacing. However, it creates a solid gray color.

    4) Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation.

    I have reached the conclusion that, even in GL_BITMAP mode, the texturer is still interpreting 8 bits as 1 element, despite what the spec seems to suggest. The fact that I can generate a gray color (while I was expecting that I was working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, support this conclusion.

    Questions:

    1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturer to treat each byte as 8 elements, as it suggests in the spec?

    2) Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3; I am however forcing a 3.0 context.)

    3) Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for, but I suppose there is no such thing as a free lunch.
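    If the GL_BITMAP path turns out to be unsupported in practice, one workaround (a sketch under stated assumptions, not a drop-in replacement for the code above) is to expand the 1-bpp buffer to 8 bits per pixel on the CPU and upload it as an ordinary single-channel texture:

    // Hedged sketch: expand a 1-bit-per-pixel bitmap into an 8-bit buffer, then upload
    // it as a plain texture. 'bits' is assumed to be row-major, MSB-first, rows padded
    // to a whole byte; these layout assumptions may not match the original buffer.
    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> expandBitmap(const uint8_t* bits, int width, int height)
    {
        const int bytesPerRow = (width + 7) / 8;
        std::vector<uint8_t> pixels(static_cast<size_t>(width) * height);

        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
            {
                const uint8_t byte = bits[y * bytesPerRow + x / 8];
                const bool set = (byte >> (7 - (x % 8))) & 1;      // MSB-first bit order
                pixels[static_cast<size_t>(y) * width + x] = set ? 255 : 0;
            }
        return pixels;
    }

    // Upload (assumes a GL context and a bound texture):
    //   auto pixels = expandBitmap(srcBits, W, H);
    //   glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    //   glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, W, H, 0,
    //                GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels.data());

    Doing the same unpack in a shader (question 3) is the GPU-side equivalent of this idea, trading the CPU copy for a little shader logic.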

    Read the article

  • Misused mke2fs and cannot boot into system

    - by surlogics
    I installed Ubuntu with Wubi in Windows 7 64-bit, and I had installed Mandriva 2011 from a disk. I tried to learn Linux with Ubuntu and misused mke2fs; after I rebooted my computer, Windows 7 and Ubuntu had crashed. As I have Mandriva, I booted into Mandriva and found:

    # df -h
    /dev/sda7   12G  9.8G  1.5G  88%  /
    /dev/sda2   15G  165M   14G   2%  /media/logical
    /dev/sda6  119G   88G   32G  74%  /media/2C9E85319E84F51C
    /dev/sda5  118G   59G   60G  50%  /media/D25A6DDE5A6DBFB9
    /dev/sda9  100G  188M  100G   1%  /media/ae69134a-a65e-488f-ae7f-150d1b5e36a6
    /dev/sda1  100M  122K  100M   1%  /media/DELLUTILITY
    /dev/sda3   98G   81G   17G  83%  /media/OS

    # fdisk /dev/sda
    Command (m for help): p

    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xd24f801e

    Device Boot      Start        End     Blocks  Id  System
    /dev/sda1         2048     206847     102400   6  FAT16
    /dev/sda2 *     206848   30926847   15360000   7  HPFS/NTFS/exFAT
    /dev/sda3     30926848  235726847  102400000   7  HPFS/NTFS/exFAT
    /dev/sda4    235728864  976771071  370521104   f  W95 Ext'd (LBA)
    /dev/sda5    235728896  481488895  122880000   7  HPFS/NTFS/exFAT
    /dev/sda6    727252992  976771071  124759040   7  HPFS/NTFS/exFAT
    /dev/sda7    481500243  506674034   12586896  83  Linux
    /dev/sda8    506674098  514851119    4088511  82  Linux swap / Solaris
    /dev/sda9    514851183  727246484  106197651  83  Linux

    Partition table entries are not in disk order

    I think I may have used the following command:

    mke2fs -j -L "logical" /dev/sda2

    but I had forgotten what kind of partition it was (perhaps NTFS) before I turned it into ext3. Data was not lost, and I can view my files as I could in Windows. In Mandriva, there are the following disks: a 117.2 GB hard disk whose files are the same as my Windows D:, with Ubuntu installed on it; a 119.0 GB hard disk which is my G:, with my personal files on it; a 12.0 GB disk which is the Mandriva / (that is, root); a 101.3 GB hard disk with nothing but lost+found; DELLUTILITY, which should be the Dell utilities pre-installed on my computer; "logical", the disk which I had spoiled, where I can view nothing but lost+found; and OS, which is the C: in my Windows. After I boot, GRUB lets me choose Mandriva or Windows. I chose Windows and it tells me:

    FILE system type unknown, partition type 0x7
    Error 13: Invalid or unsupported executable format

    I suspect something is wrong with the Windows MBR or something similar.

    # cat /boot/grub/menu.lst
    timeout 5
    color black/cyan yellow/cyan
    gfxmenu (hd0,6)/boot/gfxmenu
    default 0

    title linux
    kernel (hd0,6)/boot/vmlinuz BOOT_IMAGE=linux root=UUID=199581b7-ac7e-4c5f-9888-24c4f213cad8 nokmsboot logo.nologo quiet resume=UUID=34c546e4-9c42-4526-aa64-bbdc0e9d64fd splash=silent vga=788
    initrd (hd0,6)/boot/initrd.img

    title linux-nonfb
    kernel (hd0,6)/boot/vmlinuz BOOT_IMAGE=linux-nonfb root=UUID=199581b7-ac7e-4c5f-9888-24c4f213cad8 nokmsboot resume=UUID=34c546e4-9c42-4526-aa64-bbdc0e9d64fd
    initrd (hd0,6)/boot/initrd.img

    title failsafe
    kernel (hd0,6)/boot/vmlinuz BOOT_IMAGE=failsafe root=UUID=199581b7-ac7e-4c5f-9888-24c4f213cad8 nokmsboot failsafe
    initrd (hd0,6)/boot/initrd.img

    title windows
    root (hd0,1)
    makeactive
    chainloader +1

    I can boot into Linux, but not Ubuntu; it boots into Mandriva. I don't have a boot disk. Help me find a way to make it work again.

    Read the article

  • Wifi problems with Ubuntu 13.10

    - by Sparrowhawk4
    I should probably say straight off that I am brand new to Ubuntu. I installed 13.10 alongside Windows 7 a couple of days ago and I've managed to clear up all the little problems, apart from the wifi. My internet is fine using an ethernet cable. I can connect to the wifi, and it tells me that the signal is good, but for the vast majority of the time I can't load anything, and System Monitor tells me that I am sending and receiving nothing. The problem is somewhere in Ubuntu: it works fine on Windows 7, my Android phone and all of my flatmates' laptops. I know I'm not the only one with this problem, but the fixes I've seen either don't work or I don't understand them enough to carry them out. Help?

    EDIT: output of the lspci command, as requested:

    00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07)
    00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
    00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
    00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)
    00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)
    00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)
    00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
    00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03)
    00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 03)
    00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
    00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
    00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
    00:1d.3 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)
    00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93)
    00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03)
    00:1f.2 SATA controller: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode] (rev 03)
    00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03)
    00:1f.6 Signal processing controller: Intel Corporation 82801I (ICH9 Family) Thermal Subsystem (rev 03)
    02:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8191SEvA Wireless LAN Controller (rev 10)
    03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02)

    Read the article

  • Translating multiple objects in GUI based on average position?

    - by user1423893
    I use this method to move a single object in 3D space; it accounts for a local offset based on where the cursor ray hits the widget relative to the center of the widget:

    var cursorRay = cursor.Ray;
    Vector3 goalPosition = translationWidget.GoalPosition;
    Vector3 position = cursorRay.Origin + cursorRay.Direction * grabDistance;

    // Constrain object movement based on selected axis
    switch (translationWidget.AxisSelected)
    {
        case AxisSelected.All:
            goalPosition = position;
            break;
        case AxisSelected.None:
            break;
        case AxisSelected.X:
            goalPosition.X = position.X;
            break;
        case AxisSelected.Y:
            goalPosition.Y = position.Y;
            break;
        case AxisSelected.Z:
            goalPosition.Z = position.Z;
            break;
    }

    translationWidget.GoalPosition = goalPosition;

    Vector3 p = goalPosition - translationWidget.LocalOffset;
    objectSelected.Position = p;

    I would like to move multiple objects based on the same principle, using a widget located at the average position of all the objects currently selected. I thought that I would have to translate each object based on its offset from the average point and then include the local offset:

    var cursorRay = cursor.Ray;
    Vector3 goalPosition = translationWidget.GoalPosition;
    Vector3 position = cursorRay.Origin + cursorRay.Direction * grabDistance;

    // Constrain object movement based on selected axis
    switch (translationWidget.AxisSelected)
    {
        case AxisSelected.All:
            goalPosition = position;
            break;
        case AxisSelected.None:
            break;
        case AxisSelected.X:
            goalPosition.X = position.X;
            break;
        case AxisSelected.Y:
            goalPosition.Y = position.Y;
            break;
        case AxisSelected.Z:
            goalPosition.Z = position.Z;
            break;
    }

    translationWidget.GoalPosition = goalPosition;

    Vector3 p = goalPosition - translationWidget.LocalOffset;

    int numSelectedObjects = objectSelectedList.Count;
    for (int i = 0; i < numSelectedObjects; ++i)
    {
        objectSelectedList[i].Position = (objectSelectedList[i].Position - translationWidget.Position) + p;
    }

    This doesn't work; the objects start shaking, which I think is because I haven't accounted for the new offset correctly. Where have I gone wrong?
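    A hedged sketch of one way to remove the feedback (written in C++ style with invented names; the structure carries straight over): capture each object's offset from the widget once when the drag begins, and reuse that fixed offset every frame instead of recomputing it from the object's already-moved position.

    // Hedged sketch, not the original code: per-object offsets are frozen at grab time.
    #include <vector>

    struct Vec3 { float x, y, z; };

    inline Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    inline Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

    struct Selection {
        std::vector<Vec3*> positions;   // positions of the selected objects
        std::vector<Vec3>  grabOffsets; // object position minus widget position at grab time
    };

    void beginDrag(Selection& sel, Vec3 widgetPosition)
    {
        sel.grabOffsets.clear();
        for (Vec3* p : sel.positions)
            sel.grabOffsets.push_back(*p - widgetPosition);
    }

    void updateDrag(Selection& sel, Vec3 widgetGoalPosition)   // "p" in the code above
    {
        for (size_t i = 0; i < sel.positions.size(); ++i)
            *sel.positions[i] = widgetGoalPosition + sel.grabOffsets[i];
    }

    Because the offsets no longer depend on positions that the previous frame already changed, the per-frame update becomes a pure assignment from the widget's goal position and cannot oscillate.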

    Read the article

  • Is Ubuntu recognizing and/or using my NVIDIA graphics card?

    - by user212860
    This is my first post here, and I'm pretty new to Ubuntu/Linux. I currently have no other OS except Ubuntu 13.10 (I used to have Win7 until I got a new terabyte hard drive). My current PC build, if any of this helps: CPU: Intel i5 quad-core; Graphics: NVIDIA GeForce GTX 650; RAM: 8 GB; HDD: 1 TB SATA 3; Motherboard: MSI Z77A-G41; OS: Ubuntu 13.10.

    So I recently installed Ubuntu 13.10 and put Steam on it, and I'm seeing that my games run a lot slower than they did when I had Win7. I figured it was a graphics problem, so I checked System Settings > Details > Overview. It says in "Graphics" that I have "Gallium 0.4 on NVE7" (don't really know what that is). Does this mean that Ubuntu is not using my graphics card? In System Settings > Software & Updates > Additional Drivers, it clearly shows this: "NVIDIA Corporation: GK107 [GeForce GTX 650] - This device is using an alternative driver" (and then it shows a list of drivers that I can switch back and forth between).

    So this is a bit confusing. Software & Updates clearly shows that my NVIDIA card is installed and that I have a driver selected for it, but System Settings shows some Gallium 0.4 thing. I did a bit of research and ended up typing the command "lspci | grep VGA" in the terminal. It showed this in response:

    VGA compatible controller: NVIDIA Corporation GK107 [GeForce GTX 650] (rev a1)

    The terminal seems to recognize my graphics card. What it looks like to me is that I don't have the proper driver, and I might be using my CPU's integrated graphics. When I switch around which driver I am using in that list, System Settings still does not show my card. Some of the drivers in the list give me some sort of OpenGL error when I try to run a game. It might just be that my games run slowly because the developers have not optimized them that well for Ubuntu, but that still doesn't take away from the fact that System Settings is not showing my NVIDIA card.

    TL;DR version: How do I know if my video card is being recognized and/or used? If my video card is not being used, what is the best way to fix that? Please make your answers easy to understand; I do not mind wordy responses, as long as I can follow what you're saying. Any help would be greatly appreciated! Thanks, Jabber5

    Read the article

  • A Windows Update Prevented Live Discs From Working? [migrated]

    - by user88311
    First off, I'll state my system specs: Acer Aspire M1100, Windows Vista Home Basic 32-bit OEM, 2 GB of DDR2 RAM, 160 GB hard drive, 2.7 GHz AMD Athlon 64 processor, ATI Radeon X1250 graphics card.

    A few days ago my computer ran automatic updates and updated Windows Defender to KB915597 (Definition 1.135.415.0). After that, when shutting down and starting up I would receive a BSOD with the information BUGCODE_USB_DRIVER and 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x877330000), upon which my computer would not start up with any USB devices plugged in, and it always required me to run startup repair before it started. When I first started it up and was able to fully boot Windows, I had no use of the mouse, so I was unable to install the fix that the Windows solutions center brought up on my screen. I restarted and installed the fix, hoping it would stop the problem; it did not. Upon starting up after installing the fix and restarting, I was confronted with the BSOD 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x83291000), at which point I found that startup repair could not fix the problem, and I restored, as I most likely should have done in the first place.

    After going through that, I read that simply installing the latest Defender version from the Microsoft site had fixed this problem for others, so I did that, only to find I still received the BSODs. So, in an attempt to find a fix, I went to the Microsoft Answers site, where I was told to simply disable Defender and reboot to see if that fixed the problem. Upon doing this my computer would no longer even start up: when I boot normally I get to just when the loading screen finishes and then my computer restarts, and when I run startup repair, it runs for about 15 seconds and then my computer restarts as well.

    I have tried running Ubuntu live discs in order to simply access the drive and copy the two-month-old physical backup I have of my C drive back onto it, but whenever I run the live OS, when it gets to the end of the load screen and is about to boot, the computer again restarts. Yet if I put in a GParted disc, I am able to boot it fully, although it does not give me access to the file system, just partition managing, and when I attempt to access the internet through it, the computer once again restarts.

    So my question is: how could the update, and what has happened since, prevent me from running the live OSs properly?

    Read the article

  • LWJGL: Camera distance from image plane?

    - by Rogem
    Let me paste some code before I ask the question...

    public static void createWindow(int[] args) {
        try {
            Display.setFullscreen(false);
            DisplayMode d[] = Display.getAvailableDisplayModes();
            for (int i = 0; i < d.length; i++) {
                if (d[i].getWidth() == args[0]
                        && d[i].getHeight() == args[1]
                        && d[i].getBitsPerPixel() == 32) {
                    displayMode = d[i];
                    break;
                }
            }
            Display.setDisplayMode(displayMode);
            Display.create();
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(0);
        }
    }

    public static void initGL() {
        GL11.glEnable(GL11.GL_TEXTURE_2D);
        GL11.glShadeModel(GL11.GL_SMOOTH);
        GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        GL11.glClearDepth(1.0);
        GL11.glEnable(GL11.GL_DEPTH_TEST);
        GL11.glDepthFunc(GL11.GL_LEQUAL);
        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GL11.glLoadIdentity();
        GLU.gluPerspective(45.0f, (float) displayMode.getWidth() / (float) displayMode.getHeight(), 0.1f, 100.0f);
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
        GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);
    }

    So, with the camera and screen setup out of the way, I can now ask the actual question: how do I know what the camera's distance is from the image plane? I would also like to know the angle between the image plane's center normal and a line drawn from the middle of one of its edges to the camera position. This will be used to draw a vector from the camera's position through the player's click coordinates to determine the world coordinates they clicked (or could have clicked). Also, when I set the camera coordinates, do I set the coordinates of the camera or the coordinates of the image plane? Thank you for your help.

    EDIT: So, I managed to work out how to calculate the distance of the camera... Here's the relevant code:

    private static float getScreenFOV(int dim) {
        if (dim == 0) {
            float dist = (float) Math.tan((Math.PI / 2 - Math.toRadians(FOV_Y)) / 2) * 0.5f;
            float FOV_X = 2 * (float) Math.atan(getScreenRatio() * 0.5f / dist);
            return FOV_X;
        } else if (dim == 1) {
            return FOV_Y;
        }
        return 0;
    }

    FOV_Y is the field of view that one defines in gluPerspective (float fovy in the javadoc). This seems to be (and would logically be) for the height of the screen. Now I just need to figure out how to calculate that vector.
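    For the distance question itself, here is a worked form of the relations involved (added for clarity, not taken from the post): with gluPerspective, the vertical field of view fovy fixes everything once the image plane is given a height.

    \[
    d = \frac{h/2}{\tan(\mathrm{fov}_y/2)}, \qquad
    \mathrm{fov}_x = 2\arctan\!\left(a \,\tan\frac{\mathrm{fov}_y}{2}\right),
    \]

    where h is the chosen image-plane height, a = width/height is the aspect ratio passed to gluPerspective, and d is the camera-to-plane distance. A click at normalized screen coordinates (x, y) in [-1, 1] then corresponds to the camera-space ray direction (x a tan(fovy/2), y tan(fovy/2), -1), which is the pick vector the edit above is building towards.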

    Read the article
