Search Results

Search found 10209 results on 409 pages for 'multi monitor'.

Page 30/409 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • A Multi-Channel Contact Center Can Reduce Total Cost of Ownership

    - by Tom Floodeen
    In order to remain competitive in today’s market, CRM customers need to provide a feature-rich, superior call center experience to their customers across all communication channels while improving their service agent productivity. They also require their call center to be deeply integrated with their CRM system; and they need to implement all this quickly, seamlessly, and without breaking the bank. Oracle’s Siebel Customer Relationship Management (CRM) is the world’s leading application suite for automated customer-facing operations for Sales and Marketing and for managing all aspects of providing service to customers. Oracle’s Contact On Demand (COD) is a world-class, carrier-grade, hosted multi-channel contact center solution that can be deployed in days without up-front capital expenditures or integration costs. Agents can work efficiently from anywhere in the world with 360-degree views into customer interactions and real-time business intelligence. Customers gain from rapid and personalized sales and service, while organizations can dramatically reduce costs and increase revenues. Oracle’s latest update of Siebel CRM now comes pre-integrated with Oracle’s Contact On Demand. This solution seamlessly runs a fully-functional contact center provided by a single vendor, significantly reducing your total cost of ownership. This solution supports Siebel 7.8 and higher for Voice, and Siebel 8.1 and higher for Voice and Siebel CRM Chat. The impressive feature list of Oracle’s COD solution includes a full-control CTI toolbar with Voice, Chat, and Click-to-Dial features. It also includes context-sensitive screens, automated desktops, built-in IVR, multidimensional routing, supervisor and quality monitoring, and instant provisioning. The solution also ships with an extensible Web Services interface for implementing more complex business processes. Click here to learn how to reduce complexity and total cost of ownership of your contact center. Contact Ann Singh at [email protected] for additional information.

    Read the article

  • Test a simple multi-player (upto four players) Android game in single developer machine

    - by Kush
    I'm working on a multi-player Android game (it is so simple that it doesn't use any game engine). The game is based on Java sockets. Four devices will connect to the game server and a new thread will manage their session. The game server will serve many such sessions (having 4 players each). What I'm worried about is the testing of this game. I know it is possible to run multiple Android emulators, but my development laptop is very limited in capabilities (3 GB RAM, 2 GHz Intel Core2Duo and on-board graphics). And I'm already using Ubuntu to develop the game so that I have more user memory available than I'd have with Windows. Hence, the laptop will burn to death running 4 emulator instances. I don't have access to any Android device, nor do I have another machine with a higher configuration. And I still have to develop and test this game. P.S.: I'm a CS student, and currently don't work anywhere, and this game is a college project, so if there are any paid solutions, I cannot afford them. What can I do to test the app seamlessly? The ability to test even only 4 clients (i.e. only 1 session) would suffice; it's alright if I can't simulate a real environment with some 10-20 active game sessions (having 4 players each).
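
    Since the clients only speak plain Java sockets, one option is to exercise a whole session without any emulator at all: a small script that opens four TCP connections to the server and plays scripted moves. The sketch below is written in Python purely as a stand-in for the real Android client; the host, port, and newline-delimited message format are assumptions and would need to match the actual game protocol.

    ```python
    # Simulate one 4-player session against the game server without emulators.
    # Host, port, and the newline-delimited message format are assumptions;
    # adapt them to the real protocol.
    import socket
    import threading

    HOST, PORT, PLAYERS = "127.0.0.1", 5000, 4

    def fake_player(player_id):
        with socket.create_connection((HOST, PORT)) as sock:
            f = sock.makefile("rw", encoding="utf-8", newline="\n")
            f.write("JOIN player%d\n" % player_id)
            f.flush()
            for turn in range(5):                    # play a few scripted turns
                f.write("MOVE player%d %d\n" % (player_id, turn))
                f.flush()
                print("player%d got: %s" % (player_id, f.readline().strip()))

    threads = [threading.Thread(target=fake_player, args=(i,)) for i in range(PLAYERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    ```

    The same idea could of course be scripted in Java with plain Socket objects; the point is that session logic on the server can be tested with cheap headless clients, leaving the emulator for UI testing only.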

    Read the article

  • A Cautionary Tale About Multi-Source JNDI Configuration

    - by scott.s.nelson(at)oracle.com
    Here's a bit of fun with WebLogic JDBC configurations.  I ran into this issue after reading that p13nDataSource and cgDataSource-NonXA should not be configured as multi-source. There were some issues changing them to use the basic JDBC connection string and when rolling back to the bad configuration the server went "Boom".  Since one purpose behind this blog is to share lessons learned, I just had to post this. If you write your descriptors manually (as opposed to generating them using the WLS console) and put a comma-separated list of JNDI addresses like this: <jdbc-data-source-params> <jndi-name>weblogic.jdbc.jts.commercePool,contentDataSource, contentVersioningDataSource,portalFrameworkPool</jndi-name> <algorithm-type>Load-Balancing</algorithm-type> <data-source-list>portalDataSource-rac0,portalDataSource-rac1</data-source-list> <failover-request-if-busy>false</failover-request-if-busy> </jdbc-data-source-params> so long as the first address resolves, it will still work. Sort of.  If you call this connection to do an update, only one node of the RAC instance is updated. Other wonderful side-effects include the server refusing to start sometimes. The proper way to list the JNDI sources is one per node, like this: <jdbc-data-source-params> <jndi-name>weblogic.jdbc.jts.commercePool</jndi-name> <jndi-name>contentDataSource</jndi-name> <jndi-name>contentVersioningDataSource</jndi-name> <jndi-name>portalFrameworkPool</jndi-name> <algorithm-type>Load-Balancing</algorithm-type> <data-source-list>portalDataSource-rac0, portalDataSource-rac1, portalDataSource-rac2 </data-source-list> <failover-request-if-busy>false</failover-request-if-busy> </jdbc-data-source-params>(Props to Sandeep Seshan for locating the root cause)
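
    If you hand-edit these descriptors, a tiny pre-deployment check can catch the comma-separated pattern described above before the server goes "Boom". The script below is a hypothetical helper, not a WebLogic tool, and the descriptor file name is just an example:

    ```python
    # Flag <jndi-name> elements that contain a comma-separated list instead of
    # one JNDI name per element (the misconfiguration described above).
    import xml.etree.ElementTree as ET

    def find_suspect_jndi_names(descriptor_path):
        tree = ET.parse(descriptor_path)
        suspects = []
        for elem in tree.iter():
            # Match on the local tag name so the check works regardless of the
            # descriptor's XML namespace.
            if elem.tag.split('}')[-1] == 'jndi-name' and elem.text and ',' in elem.text:
                suspects.append(elem.text.strip())
        return suspects

    if __name__ == '__main__':
        for name in find_suspect_jndi_names('portalDataSource-jdbc.xml'):
            print('Comma-separated JNDI entry (split into separate elements):', name)
    ```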

    Read the article

  • Duplicate content appearing for multi lingual sites

    - by Rocky Singh
    I have a site with a default URL, say "http://www.blahblah.com/", which defaults to English. My site supports multiple languages. I have a few links on my home page, say "English", "French", "Spanish", etc., and on clicking these links the user is redirected to: http://www.blahblah.com/en-us/ (English) http://www.blahblah.com/fr-ca/ (French) http://www.blahblah.com/spanish-culture/ (Spanish) and based on the culture in the URL I show the content to end users in their desired language. That is how my site is set up. The issue I am having is with SEO. I noticed (I checked via Google Webmaster Tools) that Google is considering my site's pages as duplicates, like: 1. http://www.blahblah.com/documents/ and http://www.blahblah.com/en-us/documents/ 2. http://www.blahblah.com/news/ and http://www.blahblah.com/en-us/news and similarly all the pages are considered duplicate content in Google Webmaster Tools. I am worried about this, since I think my site is being penalized in ranking because of it. Could you suggest some ideas for how to overcome this situation?
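
    The usual remedy for this pattern is to give each page a single canonical URL and annotate the language variants with rel="alternate" hreflang links, so the crawler treats them as translations rather than duplicates. Below is a minimal sketch that generates those tags; the domain and locale segments are taken from the question, and whether "/" or "/en-us/" should be the canonical target is a site decision, not something the sketch decides for you.

    ```python
    # Sketch: emit canonical + hreflang <link> tags for one page path.
    # The locale-to-path mapping follows the URLs in the question.
    LOCALES = {
        "en-us": "en-us",
        "fr-ca": "fr-ca",
        "es": "spanish-culture",   # segment used by the site in the question
    }
    BASE = "http://www.blahblah.com"

    def link_tags(path, canonical_locale="en-us"):
        tags = ['<link rel="canonical" href="%s/%s%s" />'
                % (BASE, LOCALES[canonical_locale], path)]
        for lang, segment in LOCALES.items():
            tags.append('<link rel="alternate" hreflang="%s" href="%s/%s%s" />'
                        % (lang, BASE, segment, path))
        # x-default points search engines at the fallback (language-less) URL.
        tags.append('<link rel="alternate" hreflang="x-default" href="%s%s" />' % (BASE, path))
        return "\n".join(tags)

    print(link_tags("/documents/"))
    ```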

    Read the article

  • Cannot set monitor to native resolution

    - by S B
    My problem is similar to that of many other users, but the solutions I have found do not work. Background: fresh install of 12.04 (completely updated) on a Fit-PC2 (specs). I read in several places that the new 3.x kernel that 12.04 runs on has a new psb_gfx driver which supports the gma500 graphics card (poulsbo chipset). All is pretty much working (there are some glitches which are documented, so I won't raise them here), except for the screen resolution. My native monitor resolution is 1920x1080, but all I get is 1024x768. Output from running xrandr: xrandr: Failed to get size of gamma for output default Screen 0: minimum 1024 x 768, current 1024 x 768, maximum 1024 x 768 default connected 1024x768+0+0 0mm x 0mm 1024x768 0.0* Although I read that Ubuntu does not come with an xorg.conf file anymore, I also tried running sudo X :1 -configure, and here's the end of the output: Number of created screens does not match number of detected devices. Configuration failed. When I look in the xorg.conf.new file created in my home directory, it seems that for some reason X thinks I have two screens. I don't know what to do with that. Ideas, anyone? Thanks for your time.

    Read the article

  • Multi-Resolution Mobile Development

    - by user2186302
    I'm about to start development on my first game for mobile phones (I already have a Flash prototype completed, so it's just a matter of "porting" it to mobile and fixing up the code) and plan on hopefully being able to get the game working on iPhones and most Android devices. I am using Haxe along with OpenFL and HaxeFlixel for development. My question is: what resolution should I design the game in initially, and/or what is the best way to develop a game for multiple resolutions? I have found multiple different methods, the best, in my opinion, being strategy 3 on this page: http://wiki.starling-framework.org/manual/multi-resolution_development. However, I have some questions about this. First, what would be the best base resolution to use? The guide suggests 240x320, which seems alright to me, although if I choose to use pixel graphics, as I most probably will given I'm using HaxeFlixel, I'm not sure whether they'll look too blocky on larger screens; then again, that might not even be a problem, as it might still look alright. (Honestly, I'm not sure about that, and I'd welcome examples of games that use this method and look nice.) Finally, please feel free to share whatever methods you use and think are best. For example, HaxeFlixel has a scaling feature that scales the game to fit the exact screen size, but I'm afraid that would lead to blurry and improperly scaled graphics since it would scale by non-integers. But I'm not sure how noticeable a problem that may or may not be, although from experience I'm pretty sure it won't look nice, and currently I do not think I'm going to go for this option. So, I would really appreciate any help on this subject. Thank you in advance.
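
    To make the scaling trade-off concrete, here is a small sketch of integer-only scaling with letterboxing: pick the largest whole-number multiple of the 240x320 base that fits the device screen and center the result. It is written in Python just to illustrate the arithmetic (the real project is Haxe/HaxeFlixel); non-integer scale factors are what tend to blur or unevenly stretch pixel art.

    ```python
    # Integer-scale a base resolution to a device screen and letterbox the rest.
    # Illustrative only; a HaxeFlixel project would do the equivalent when
    # configuring its game/window size.
    def fit_integer_scale(base_w, base_h, screen_w, screen_h):
        scale = max(1, min(screen_w // base_w, screen_h // base_h))
        view_w, view_h = base_w * scale, base_h * scale
        # Offsets center the scaled game area; the remaining border is letterbox.
        off_x, off_y = (screen_w - view_w) // 2, (screen_h - view_h) // 2
        return scale, (off_x, off_y, view_w, view_h)

    for screen in [(480, 800), (720, 1280), (1080, 1920)]:
        print(screen, fit_integer_scale(240, 320, *screen))
    ```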

    Read the article

  • Software Design Idea for multi tier architecture

    - by Preyash
    I am currently investigating a multi-tier architecture design for a web-based application in MVC3. I already have an architecture, but I am not sure if it's the best I can do in terms of extensibility and performance. The current architecture has the following components: DataTier (contains EF POCO objects) DomainModel (contains domain-related objects) Global (among other common things, it contains Repository objects for CRUD to the DB) Business Layer (business logic and interaction between data and client, and CRUD using the repository) Web (Client) (which talks to the DomainModel and Business layers but also has its own ViewModels, e.g. for Create and Edit views). Note: I am using ValueInjector for converting one type of entity to another (which is proving to be an overhead in this design; I really don't like overdoing this). My question is: do I have too many tiers in the above architecture? Do I really need the domain model? (I think I do when I expose my business logic via WCF to external clients.) What is happening is that for a simple database insert it (1) creates a ViewModel, (2) converts the ViewModel to a DomainModel for the business layer to understand, (3) the business layer converts it to a DataModel for the repository, and then data comes back in the same order. A few things to consider: I am not looking for a perfect architecture solution, as that does not exist. I am looking for something that is scalable. It should be reusable (e.g. using design patterns, interfaces, inheritance, etc.). Each layer should be easily testable. Any suggestions or comments are much appreciated. Thanks,
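
    The conversion chain described in the question can be pictured with a tiny sketch. It is written in Python purely for brevity (the real project is C#/MVC3), and the three classes plus the name-based copier are hypothetical stand-ins for the ViewModel, DomainModel, EF POCO and the ValueInjector-style mapping; the point is simply how many times one insert gets copied between shapes.

    ```python
    # Hypothetical stand-ins for the three models a single insert passes through.
    from dataclasses import dataclass, fields

    @dataclass
    class CustomerViewModel:   # shaped for the Create/Edit view
        name: str
        email: str

    @dataclass
    class CustomerDomain:      # shaped for business rules / WCF contracts
        name: str
        email: str

    @dataclass
    class CustomerData:        # shaped for the EF/repository layer
        name: str
        email: str

    def map_by_name(src, dst_type):
        """Copy same-named fields, roughly what ValueInjector-style mapping does."""
        values = {f.name: getattr(src, f.name) for f in fields(dst_type) if hasattr(src, f.name)}
        return dst_type(**values)

    vm = CustomerViewModel("Ada", "ada@example.com")
    domain = map_by_name(vm, CustomerDomain)   # step (2) in the question
    data = map_by_name(domain, CustomerData)   # step (3), before the repository insert
    print(data)
    ```

    Whether the middle hop is worth keeping usually comes down to whether the domain shape ever diverges from the data shape; when the classes stay field-for-field identical, the mapping layer is pure overhead.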

    Read the article

  • Is this a dual monitor reset bug?

    - by Tentresh
    My two displays are: Intel GMA x4500 Laptop (1280x800 native resolution of the built-in display) External display (1920x1080) A few minutes after I login to my dual monitor setup, it gets reset to mirror screens. If I restore the settings via the displays application, everything is fine. On each reset, the following messages are written into /var/log/Xorg.0.log: [ 60.852] (II) PM Event received: Capability Changed [ 60.852] I830PMEvent: Capability change [ 132.920] (II) intel(0): EDID vendor "SEC", prod id 12869 [ 132.920] (II) intel(0): Printing DDC gathered Modelines: [ 132.920] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz) [ 134.228] (II) intel(0): Allocated new frame buffer 1280x800 stride 5120, tiled Whereas right on startup or manual resolution reset, /var/log/Xorg.0.log reports the expected frame buffer allocation: [ 1562.382] (II) intel(0): EDID vendor "SEC", prod id 12869 [ 1562.382] (II) intel(0): Printing DDC gathered Modelines: [ 1562.382] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz) [ 1576.740] (II) intel(0): Allocated new frame buffer 3200x1080 stride 12800, tiled Is Ubuntu 12.04 not compatible with my video card? Can this be solved within Ubuntu? I like its interface, but manually fiddling with resolution on every login is not bearable.
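
    Until the underlying driver issue is fixed, one stop-gap is to re-assert the desired layout with xrandr whenever the outputs fall back to mirroring. The sketch below polls xrandr and reapplies the layout; the output names (LVDS1, HDMI1), the modes, and the offset check are assumptions and must be replaced with whatever xrandr actually reports on this machine.

    ```python
    # Watchdog sketch: if the external output stops running at its own offset
    # (i.e. the setup collapsed back to mirroring), reapply the layout.
    # Output names and modes below are examples; check `xrandr` for the real ones.
    import subprocess, time

    INTERNAL, EXTERNAL = "LVDS1", "HDMI1"

    def external_is_placed():
        out = subprocess.check_output(["xrandr", "--query"]).decode()
        for line in out.splitlines():
            if line.startswith(EXTERNAL) and " connected" in line:
                # When extended to the right of a 1280x800 panel, the external
                # mode is reported at a non-zero x offset such as "1920x1080+1280+0".
                return "+1280+" in line
        return False

    while True:
        if not external_is_placed():
            subprocess.call(["xrandr",
                             "--output", INTERNAL, "--mode", "1280x800", "--pos", "0x0",
                             "--output", EXTERNAL, "--mode", "1920x1080", "--right-of", INTERNAL])
        time.sleep(10)
    ```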

    Read the article

  • Another good free utility - Campwood Software Source Monitor

    - by TATWORTH
    The Campwood SourceMonitor at http://www.campwoodsw.com/sourcemonitor.html says in its introduction: "The freeware program SourceMonitor lets you see inside your software source code to find out how much code you have and to identify the relative complexity of your modules. For example, you can use SourceMonitor to identify the code that is most likely to contain defects and thus warrants formal review. SourceMonitor, written in C++, runs through your code at high speed, typically at least 10,000 lines of code per second." It is indeed very high-speed and is useful as it: Collects metrics in a fast, single pass through source files. Measures metrics for source code written in C++, C, C#, VB.NET, Java, Delphi, Visual Basic (VB6) or HTML. Includes method and function level metrics for C++, C, C#, VB.NET, Java, and Delphi. Offers a Modified Complexity metric option. Saves metrics in checkpoints for comparison during software development projects. Displays and prints metrics in tables and charts, including Kiviat diagrams. Operates within a standard Windows GUI or inside your scripts using XML command files. Exports metrics to XML or CSV (comma-separated-value) files for further processing with other tools.

    Read the article

  • Simple multi-seat

    - by Oli
    I've asked about multiseat before. The answer (for 10.04) involved doing it the proper way (e.g. through gdm, multiple server layouts). The problem was that gdm needs to be patched or reverted to 2.20 for multiseat. It's an ugly hack that, worse than anything, will hold up future updates. As a result, I didn't do anything. I still have a spare video card. I still have the monitor, keyboard and mouse all sitting waiting to jump into action. And I still want to be able to turn that into a simple desktop. My needs don't seem complicated. I have a second video card, a USB hub and anything connected to that USB hub that I want to be dedicated to another X server. I don't need a login screen (I'm happy hard-coding in an auto-login, and I'd be happy with the user starting the X server if that's possible). This is so simple in my head that I only need two questions answered: How can I explicitly start an X server from the command line on an unused video adapter (by passing it whatever configuration I need to)? Can I have this new X session load a desktop environment on load? This seems like something you should be able to write in a little upstart script within 10 minutes. That would be perfect for me, as then I'd have nice start/stop control over the secondary desktop from the main desktop (which I want to leave unscathed!). I'm thinking of something as simple as this for the payload: su -u other_user -c "startx -- localhost hardware-information" And use .xinitrc to load openbox or something...

    Read the article

  • Sharding / indexing strategy for multi-faceted search

    - by Graham
    I'm currently thinking about our database structure and how we modify it for scale. Specifically, we're thinking about using ElasticSearch to provide our search functionality. One common pattern with ElasticSearch seems to be the 'user-routing' pattern; that is, using routing to ensure that any one user's data resides on the same shard. This is great for client-specific search, e.g. Gmail. Our application has a constraint such that any user will have a maximum of a few thousand documents, so this pattern seems like a good candidate. However, our search needs to work across all users, as well as targeting a specific user (so I might search my content, Alice's content, or all content). Similarly, we need to provide full-text search across any timeframe; recent months to several years ago. I'm thinking of combining the 'user-routing' and 'index-per-time-interval' patterns: I create an index for each month By default, searches are aliased against the most recent X months If no results are found, we can search against previous X months As we grow, we can reduce the interval X Each document is routed by the user ID So, this should let us do the following: search by user. This will search all indices across 1 shard search by time. This will search ~2 indices (by default) across all shards Is this a reasonable approach, considering we may scale to multi-million+ documents? Or should I be denormalizing the data somehow, so that user searches are performed on a totally separate index from date searches? Thanks for any pros and cons of the above scenario.
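
    A rough sketch of the combined pattern, talking to the plain REST API (the index naming, alias name, document type, and mapping fields are illustrative assumptions, and exact endpoints vary a little between Elasticsearch versions):

    ```python
    # Sketch: monthly indices + per-user routing + an alias over the recent months.
    # Note that routing only narrows the shards searched; the term filter on
    # user_id is still needed for correctness.
    import json, requests

    ES = "http://localhost:9200"
    HEADERS = {"Content-Type": "application/json"}

    def index_doc(month, user_id, doc_id, doc):
        # Routing by user keeps one user's documents on a single shard of that index.
        url = "%s/content-%s/doc/%s?routing=%s" % (ES, month, doc_id, user_id)
        return requests.put(url, data=json.dumps(doc), headers=HEADERS).json()

    def alias_recent(months):
        actions = [{"add": {"index": "content-%s" % m, "alias": "content-recent"}} for m in months]
        return requests.post(ES + "/_aliases", data=json.dumps({"actions": actions}), headers=HEADERS).json()

    def search_user(user_id, query):
        # Search only the shards holding this user's data, across the aliased months.
        url = "%s/content-recent/_search?routing=%s" % (ES, user_id)
        body = {"query": {"bool": {"must": [{"match": {"text": query}},
                                            {"term": {"user_id": user_id}}]}}}
        return requests.post(url, data=json.dumps(body), headers=HEADERS).json()

    index_doc("2014-06", "alice", "1", {"user_id": "alice", "text": "quarterly report"})
    alias_recent(["2014-05", "2014-06"])
    print(search_user("alice", "report"))
    ```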

    Read the article

  • Ubuntu 12.04 dual monitor reset bug

    - by Tentresh
    My two displays are: Intel GMA x4500 Laptop (1280x800 native resolution of the built-in display) External display (1920x1080) A few minutes after I log in to my dual monitor setup, it gets reset to mirror screens. If I restore the settings via the displays application, everything is fine. On each reset, the following messages are written into /var/log/Xorg.0.log: [ 60.852] (II) PM Event received: Capability Changed [ 60.852] I830PMEvent: Capability change [ 132.920] (II) intel(0): EDID vendor "SEC", prod id 12869 [ 132.920] (II) intel(0): Printing DDC gathered Modelines: [ 132.920] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz) [ 134.228] (II) intel(0): Allocated new frame buffer 1280x800 stride 5120, tiled Whereas right on startup or after a manual resolution reset, /var/log/Xorg.0.log reports the expected frame buffer allocation: [ 1562.382] (II) intel(0): EDID vendor "SEC", prod id 12869 [ 1562.382] (II) intel(0): Printing DDC gathered Modelines: [ 1562.382] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz) [ 1576.740] (II) intel(0): Allocated new frame buffer 3200x1080 stride 12800, tiled Is Ubuntu 12.04 not compatible with my video card? Can this be solved within Ubuntu? I like its interface, but manually fiddling with the resolution on every login is not bearable.

    Read the article

  • Dual monitor, permission issue

    - by cenna75
    I had a dual monitor configuration going for quite some time. One day, after moving the computer to another location and reconnecting everything, it changed such that I saw everything in double (while being very much sober); I think it's called the 'mirror' config. Anyway, from there on, there was nothing to be done through the System Settings GUI to change it back, as it wouldn't allow me to save any modification. The error I get when clicking 'save' is: "Failed to create file /home/me/monitors.xml.xxxxxx. Permission denied", xxxx being a random code, changing every time. However, I can save all the configurations I want just fine by using the terminal, in my case: xrandr --output DVI-I-1 --right-of VGA-1 So I do have a workaround, and this is therefore a question more out of curiosity. What could possibly have changed to make it impossible to do this through the GUI while still letting me change the config using xrandr without being root? I'm having a hard time believing it could have anything to do with disconnecting/reconnecting the monitors... Any idea? Thanks!
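
    Since the error is a plain "Permission denied" on creating a temporary file in the home directory, checking who owns that path (and any existing monitors.xml) usually narrows it down, e.g. a root-owned leftover from a sudo-launched session. Below is a small diagnostic sketch; the paths come straight from the error message.

    ```python
    # Check ownership and write permission on the path the settings dialog
    # complains about. Purely a diagnostic sketch for the error quoted above.
    import os, pwd, stat

    def describe(path):
        st = os.stat(path)
        owner = pwd.getpwuid(st.st_uid).pw_name
        mode = stat.filemode(st.st_mode)
        writable = os.access(path, os.W_OK)
        print("%-40s owner=%-10s mode=%s writable-by-me=%s" % (path, owner, mode, writable))

    home = os.path.expanduser("~")
    describe(home)                              # the dialog writes monitors.xml.XXXXXX here
    existing = os.path.join(home, "monitors.xml")
    if os.path.exists(existing):
        describe(existing)                      # a root-owned leftover file would explain it
    ```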

    Read the article

  • Lock statement vs Monitor.Enter method.

    - by Vokinneberg
    I suppose it is an interesting code example. We have a class, let's call it Test, with a Finalize method. In the Main method there are two code blocks where I am using the lock statement and a Monitor.Enter call. I also have two instances of the class Test here. The experiment is pretty simple - nulling the Test variable within the locking block and trying to collect it manually with a GC.Collect method call. Then, to see the Finalize call, I am calling the GC.WaitForPendingFinalizers method. Everything is very simple, as you can see. By definition, the lock statement is expanded by the compiler into a try{...}finally{...} block with a Monitor.Enter call inside the try block and Monitor.Exit in the finally block. I tried to implement the try-finally block manually and expected the same behaviour in both cases: using lock and using Monitor.Enter. But, surprise, surprise - it is different, as you can see below. public class Test : IDisposable { private string name; public Test(string name) { this.name = name; } ~Test() { Console.WriteLine(string.Format("Finalizing class name {0}.", name)); } } class Program { static void Main(string[] args) { var test1 = new Test("Test1"); var test2 = new Test("Tesst2"); lock (test1) { test1 = null; Console.WriteLine("Manual collect 1."); GC.Collect(); GC.WaitForPendingFinalizers(); Console.WriteLine("Manual collect 2."); GC.Collect(); } var lockTaken = false; System.Threading.Monitor.Enter(test2, ref lockTaken); try { test2 = null; Console.WriteLine("Manual collect 3."); GC.Collect(); GC.WaitForPendingFinalizers(); Console.WriteLine("Manual collect 4."); GC.Collect(); } finally { System.Threading.Monitor.Exit(test2); } Console.ReadLine(); } } The output of this example is Manual collect 1. Manual collect 2. Manual collect 3. Finalizing class name Test2. Manual collect 4. And a null reference exception in the last finally block, because test2 is a null reference. I was surprised and disassembled my code into IL. So, here is the IL dump of the Main method. .entrypoint .maxstack 2 .locals init ( [0] class ConsoleApplication2.Test test1, [1] class ConsoleApplication2.Test test2, [2] bool lockTaken, [3] bool <>s__LockTaken0, [4] class ConsoleApplication2.Test CS$2$0000, [5] bool CS$4$0001) L_0000: nop L_0001: ldstr "Test1" L_0006: newobj instance void ConsoleApplication2.Test::.ctor(string) L_000b: stloc.0 L_000c: ldstr "Tesst2" L_0011: newobj instance void ConsoleApplication2.Test::.ctor(string) L_0016: stloc.1 L_0017: ldc.i4.0 L_0018: stloc.3 L_0019: ldloc.0 L_001a: dup L_001b: stloc.s CS$2$0000 L_001d: ldloca.s <>s__LockTaken0 L_001f: call void [mscorlib]System.Threading.Monitor::Enter(object, bool&) L_0024: nop L_0025: nop L_0026: ldnull L_0027: stloc.0 L_0028: ldstr "Manual collect." L_002d: call void [mscorlib]System.Console::WriteLine(string) L_0032: nop L_0033: call void [mscorlib]System.GC::Collect() L_0038: nop L_0039: call void [mscorlib]System.GC::WaitForPendingFinalizers() L_003e: nop L_003f: ldstr "Manual collect." L_0044: call void [mscorlib]System.Console::WriteLine(string) L_0049: nop L_004a: call void [mscorlib]System.GC::Collect() L_004f: nop L_0050: nop L_0051: leave.s L_0066 L_0053: ldloc.3 L_0054: ldc.i4.0 L_0055: ceq L_0057: stloc.s CS$4$0001 L_0059: ldloc.s CS$4$0001 L_005b: brtrue.s L_0065 L_005d: ldloc.s CS$2$0000 L_005f: call void [mscorlib]System.Threading.Monitor::Exit(object) L_0064: nop L_0065: endfinally L_0066: nop L_0067: ldc.i4.0 L_0068: stloc.2 L_0069: ldloc.1 L_006a: ldloca.s lockTaken L_006c: call void [mscorlib]System.Threading.Monitor::Enter(object, bool&) L_0071: nop L_0072: nop L_0073: ldnull L_0074: stloc.1 L_0075: ldstr "Manual collect." L_007a: call void [mscorlib]System.Console::WriteLine(string) L_007f: nop L_0080: call void [mscorlib]System.GC::Collect() L_0085: nop L_0086: call void [mscorlib]System.GC::WaitForPendingFinalizers() L_008b: nop L_008c: ldstr "Manual collect." L_0091: call void [mscorlib]System.Console::WriteLine(string) L_0096: nop L_0097: call void [mscorlib]System.GC::Collect() L_009c: nop L_009d: nop L_009e: leave.s L_00aa L_00a0: nop L_00a1: ldloc.1 L_00a2: call void [mscorlib]System.Threading.Monitor::Exit(object) L_00a7: nop L_00a8: nop L_00a9: endfinally L_00aa: nop L_00ab: call string [mscorlib]System.Console::ReadLine() L_00b0: pop L_00b1: ret .try L_0019 to L_0053 finally handler L_0053 to L_0066 .try L_0072 to L_00a0 finally handler L_00a0 to L_00aa I do not see any difference between the lock statement and the Monitor.Enter call. So, why do I still have a reference to the instance of test1 in the case of lock, so that the object is not collected by the GC, while in the case of using Monitor.Enter it is collected and finalized?

    Read the article

  • Can I use all my RAM for application data?

    - by gsedej
    Hi! I have yet another question about "where is my Linux memory". The question is: can I use cache for application data? On my laptop I have 1GB RAM. Situation after some time of work: the browser takes 400MB and all other apps ca. 300MB (quickly summed in System Monitor). System Monitor says I use 90% of RAM and I already have 200MB in swap. The laptop is getting slower when I start new things (e.g. open a new tab in the browser or open a new Nautilus window), probably because memory is being put on swap. So there should be 1200MB (RAM + swap) used, but all the apps I see use only 600MB. Where are the other 600MB? Out of this 600MB, 400MB is real RAM. I am not copying files or doing any other massive IO activity. I have read that Linux smartly uses all the RAM it has, using buffers and cache. So, the kernel (cache) uses 300MB. What if I don't want to have the disk mirrored and instead want to use the memory for application data (e.g. a new browser tab)? I don't need 200MB of mirrored disk data, because I (for example) won't open the same photos on the data partition that I have just seen. So can I use all my RAM for application data (including browser, desktop, Xorg, other services)? How?

    Read the article

  • how to profile multi threaded c++ app on linux?

    - by anon
    I used to do all my Linux profiling with gprof. However, with my multi-threaded app, its output appears to be inconsistent. Now, I dug this up: http://sam.zoy.org/writings/programming/gprof.html; however, it's from a long time ago -- and in my gprof output, it appears my gprof is listing functions used by non-main threads. So, my questions are: 1) in 2010, can I easily use gprof to profile multi-threaded Linux C++ apps? (Ubuntu 9.10) 2) what other tools should I look into for profiling? thanks!

    Read the article

  • nvidia-settings error: nvidia driver not in use

    - by Bastarasov
    I have an Asus with a GeForce GT520M CUDA (Optimus) and I am running Kubuntu 12.04, 64-bit. I am trying to connect an external monitor through DVI and the monitor is not detected. Nvidia settings don't show properly and each time I fire them up there is a warning message: "You do not appear to be using the NVidia X driver. Please, edit...." (you probably know it and have heard of this before). I have googled a lot and I have tried some things out but no luck so far. Is there a solution which has worked for someone out there? If so, please be very specific about what I need to do, since I am really not good at using the terminal and am generally new to Ubuntu. I can use the terminal only to copy-paste things. :) Thanks in advance to everyone! P.S. It seems like some people dealt with this by fixing the Nvidia settings problem, but the instructions have never been clear enough for me to be able to understand them.

    Read the article

  • Connecting / disconnecting DisplayPort causes crash

    - by iGadget
    I wanted to file a bug about this using ubuntu-bug xserver-xorg-video-intel, but the system prompted me to try posting here first. So here goes :-) While the situation in Ubuntu 11.10 was still somewhat workable (see UI freezes when disconnecting DisplayPort), in 12.04 (using Unity 3D) it has gotten worse. The weird part is that during the 12.04 betas, the situation was actually improving! I was able to successfully connect and disconnect a DisplayPort monitor without the system breaking down on me. But now with 12.04 final (with all updates), it's just plain terrible. When I now connect an external monitor using the DisplayPort connector on my HP ProBook 6550b, it only works sometimes. Most times (but not always!) the screen just goes blank and the system seems to crash (not even CTRL+ALT+F1 works anymore). Only a hard shutdown, by keeping the power button pressed for several seconds, and then a restart gets me out of this. I suspect the chances of the system crashing become higher as the system's uptime increases, especially when there have been one or more suspend-resume cycles (although I have also experienced this bug once from a cold boot). Disconnecting is roughly the same as with 11.10 (see the issue mentioned above), with the difference that if I resume from suspend, I no longer have to do a CTRL+ALT+F1, ALT+F7 cycle to get my screen back. So what more can I try? Or should I just go ahead and file the bug anyway?

    Read the article

  • Where is my ram?

    - by gsedej
    I have 2GB installed on my machine running Ubuntu 12.04. After some time of use, I see much of my RAM used. The RAM is not freed even though I have closed all my programs. I included 2 screenshots: the first is "Gnome System Monitor" (all processes) and the second is "htop" (with sudo), both sorted by memory usage. From both you can see that it's not possible that all running apps together take 1GB of memory. The 7 biggest programs sum to 250MB, but the others are much smaller (all of them together can't even sum to 100MB). The cache is 300MB (yellow ||| in htop) and is not included in the 1GB used. Also, 260MB is already in swap (which actually makes 1.3GB of used memory). If I start Firefox (or Chrome) with many tabs, it has only 1GB available and not potentially 1.5GB (assuming 0.5GB is for the system). When I need more RAM, swapping happens. So where is my RAM? Which program is using it? How can I free it, to be available for e.g. Firefox? (Screenshots: Gnome System Monitor, htop)
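
    One way to see where the memory actually goes is to add up the kernel's own accounting from /proc/meminfo: application memory (roughly MemTotal minus free, buffers and cache, which also includes tmpfs and shared memory), reclaimable page cache, and swap. A minimal sketch, reading only standard /proc/meminfo fields:

    ```python
    # Rough memory accounting from /proc/meminfo (values are in kB).
    def meminfo():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":")
                info[key] = int(value.split()[0])
        return info

    m = meminfo()
    apps = m["MemTotal"] - m["MemFree"] - m["Buffers"] - m["Cached"]
    swap_used = m["SwapTotal"] - m["SwapFree"]
    print("Applications (incl. shared/tmpfs):  %6d MB" % (apps // 1024))
    print("Buffers + page cache (reclaimable): %6d MB" % ((m["Buffers"] + m["Cached"]) // 1024))
    print("Swap in use:                        %6d MB" % (swap_used // 1024))
    ```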

    Read the article

  • Nvidia Fullscreen Metanode "Sliding" Issue

    - by user68202
    I have 2 monitors; the left one is my "main" monitor with 1920x1080_120, the right one my second with 1680x1050_60. (I have an Nvidia card set up with TwinView.) When I play a game or something in fullscreen mode, the full resolution is used in fullscreen (monitor 1 + monitor 2). I read something about the metamodes I can use to shut down the one monitor that I don't need during a "fullscreen session". I used the following: Option "metamodes" "DFP-0: 1920x1080_120 +0+0, DFP-2: 1680x1050_60 +1920+0; DFP: 1920x1080_120 +0+0, NULL" It's working great: the second (right) monitor shuts down when I press "CTRL ALT +" and starts again when I press the same keystroke. But in the second mode, when the second monitor is "down", I get the full "monitor 1 + monitor 2" resolution on my first (left) monitor; I can move my mouse to the right to see the contents of the second monitor and move it back to the left to see what is normally shown on the first monitor. It's as if something is sliding between the 2 monitors on one display. How can I avoid this?

    Read the article

  • How to fix bluescreen in windows 7 with multi-boot?

    - by Ismail Sensei
    I have an HP 6730s laptop with two operating systems: Windows 7 Ultimate 32-bit and CentOS 6.4 64-bit. GRUB2 is not installed in the MBR; I use Windows' bootloader. After I choose Windows at startup, a bluescreen appears with an unmountable_boot_volume problem, so I tried some advice from similar questions here (use the Command Prompt and enter the following command: chkdsk /R C:). But the problem is, I can't get to Repair My Computer: it took so long and nothing happened after I waited more than 2 hours, and when I boot from my Windows 7 DVD it loads the files, but then the same thing happens, nothing shows up, so I couldn't use the Command Prompt. But when I use CentOS everything works just fine: I can mount the D partition normally, but for the C partition it shows me an error and tells me to go to Windows and repair it with the chkdsk command, and this is where I am stuck.

    Read the article

  • Will Windows repair my multi-boot when I format the 1st physical partition with boot sector?

    - by user2353806
    Due to historical reasons I got a laptop with Vista, Windows 7 and Windows Server 2008R2 partitions. (Booting from an external drive wasn't that viable.) Nothing (Windows Repair, bootrec /whateveroption) worked when I restored only the Windows 7 and WS2k8 partitions with Acronis TrueImage. Don't ask me through what idiotic error messages I went during repair attempts. (Wrong Windows version,...) So I grudgingly restored all three - with the little additional excursion that I thought changing the active partition to the Windows 7 partition would move the boot sector and let me format the Vista part... Oh no. Seems too logical for MS. (Dunno what I changed, but today it will let me format!) So the real question is: Will formatting the Vista part trash things again beyond comprehension, or will Windows Repair bring back the boot rec and remove Vista from the boot options? Or should I just erase all the files to avoid trashing the boot? Where will the boot rec be (after repair) when I format the Vista partition? On the 1st or 2nd partition? And if I get drunk and install Windows 8.1 on the 1st, will anything work? ;-) Thanks

    Read the article

  • How to chain GRUB2 for Ubuntu 10.04 from Truecrypt & its bootloader (multi boot alongside Windows XP partition)?

    - by Rob
    I want TrueCrypt to ask for the password for Windows XP as usual, but with the standard [ESC] option; on selecting that, i.e. via the Escape key, I want it to find the GRUB for the (unencrypted) Ubuntu install. I've installed Windows XP on the 120GB hard drive of a Toshiba NB100 netbook, then partitioned to make room for Ubuntu 10.04 and installed that after the Windows XP install. When I encrypt Windows XP, TrueCrypt will overwrite the GRUB entry in the master boot record (MBR), I believe (?), and I won't be able to choose between XP and Ubuntu anymore. So I need to restore it. I've searched fairly extensively for answers on Ubuntu forums and elsewhere but have not yet found a complete answer that covers all eventualities, scenarios and error messages, or otherwise they talk of legacy GRUB and not GRUB2. Ubuntu 10.04 uses GRUB2. My setup: Partitions: Windows XP, NTFS (to be encrypted with TrueCrypt), 40GB; /boot (ext4), 1GB; Ubuntu swap, 4GB; Ubuntu / (root) - main filesystem, 20GB; NTFS share, 55GB. I know that the TrueCrypt boot loader replaces GRUB at boot because I've already tried it on another laptop. I want the boot loader screen to look something like the usual: TrueCrypt Enter password: (or [ESC] to skip) where the password is for Windows XP, and on pressing [ESC] I want it to find the Ubuntu GRUB to boot from. Thanks in advance for your help. The key part of the problem is how to instruct TrueCrypt what to do when the Escape key is pressed, and how GRUB/Ubuntu can be made visible to the TrueCrypt bootloader so it can find it when the Esc key is pressed. This is also known as chainloading.

    Read the article

  • Multi Pass Blend

    - by Kirk Patrick
    I am seeking the simplest working example of a two pass HLSL pixel shader. It can do anything really, but the main idea is to perform "ping ponging" to take the output of the first pass and then send it for the second pass. In my example I want to draw to the R channel and then draw to the G channel and produce a simple Venn Diagram in the shader, but need to detect overlap. I can currently detect one or the other but not overlap. There are a red and green circle overlapping, and I want to put a dynamic texture map in the overlap region. I can currently put it in either or. Below is how it looks in the shader. -------------------------------- Texture2D shaderTexture; SamplerState SampleType; ////////////// // TYPEDEFS // ////////////// struct PixelInputType { float4 position : SV_POSITION; float2 tex0 : TEXCOORD0; float2 tex1 : TEXCOORD1; float4 color : COLOR; }; //////////////////////////////////////////////////////////////////////////////// // Pixel Shader //////////////////////////////////////////////////////////////////////////////// float4 main(PixelInputType input) : SV_TARGET { float4 textureColor0; float4 textureColor1; // Sample the pixel color from the texture using the sampler at this texture coordinate location. textureColor0 = shaderTexture.Sample(SampleType, input.tex0); textureColor1 = shaderTexture.Sample(SampleType, input.tex1); if (input.color[0]==1.0f && input.color[1]==1.0f) // Requires multi-pass textureColor0 = textureColor1; return textureColor0; } Here is the calling code (that needs to be modified) m_d3dContext->IASetVertexBuffers(0, 2, vbs, strides, offsets); m_d3dContext->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R32_UINT,0); m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST); m_d3dContext->IASetInputLayout(m_inputLayout.Get()); m_d3dContext->VSSetShader(m_vertexShader.Get(), nullptr, 0); m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf()); m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0); m_d3dContext->PSSetShaderResources(0, 1, m_SRV.GetAddressOf()); m_d3dContext->PSSetSamplers(0, 1, m_QuadsTexSamplerState.GetAddressOf());

    Read the article

  • Class instance clustering in object reference graph for multi-entries serialization

    - by Juh_
    My question is about the best way to cluster a graph of class instances (i.e. objects, the graph nodes) linked by object references (the -directed- edges of the graph) around specifically marked objects. To explain my question better, let me explain my motivation: I currently use a moderately complex system to serialize the data used in my projects: "marked" objects have a specific attribute which stores a "saving entry": the path to an associated file on disk (but it could be done for any storage type providing the suitable interface). Those objects can then be serialized automatically (e.g. obj.save()). The serialization of a marked object 'a' implicitly contains all objects 'b' to which 'a' has a reference, directly (s.t. a.b = b) or indirectly (s.t. a.c.b = b for some object 'c'). This is very simple and basically defines specific storage entries for specific objects. I then have "container" type objects that: can be serialized similarly (in fact they are, or can be, "marked"); they don't serialize in their storage entries the "marked" objects (with a direct reference): if a and a.b are both marked, a.save() calls b.save() and stores a.b = storage_entry(b). So, if I serialize 'a', it will automatically serialize all objects that can be reached from 'a' through the object reference graph, possibly in multiple entries. That is what I want, and it usually provides the functionality I need. However, it is very ad hoc and there are some structural limitations to this approach: the multi-entry saving can only work through direct connections in "container" objects, and there are situations with undefined behavior, such as when two "marked" objects 'a' and 'b' both have a reference to an unmarked object 'c'. In this case my system will store 'c' in both 'a' and 'b', making an implicit copy which not only doubles the storage size, but also changes the object reference graph after re-loading. I am thinking of generalizing the process. Apart from the practical questions on implementation (I am coding in Python, and use pickle to serialize my objects), there is a general question on the way to attach (cluster) unmarked objects to marked ones. So, my questions are: What are the important issues that should be considered? Basically, why not just use any graph traversal algorithm with the "attach to the last marked node" behavior? Is there any work done on this problem, practical or theoretical, that I should be aware of? Note: I added the tag graph-database because I think the answer might come from that field, even if the question is not about one.
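
    A minimal sketch of the "attach to the owning marked object" idea: traverse the reference graph from each marked object, stop at other marked objects, and report any unmarked object reachable from more than one marked owner (the case that currently produces implicit copies). The `is_marked` and `references` callables are hypothetical placeholders for however the real system exposes marking and edges, e.g. walking obj.__dict__ or using gc.get_referents.

    ```python
    # Sketch: cluster unmarked objects around marked ones and detect objects
    # reachable from more than one marked owner. `is_marked(obj)` and
    # `references(obj)` are placeholders for the real system's marking test
    # and outgoing-reference enumeration.
    def cluster(marked_objects, references, is_marked):
        owner_of = {}      # id(obj) -> owning marked object
        shared = set()     # ids of unmarked objects claimed by several owners
        for root in marked_objects:
            stack = [root]
            seen = {id(root)}
            while stack:
                for child in references(stack.pop()):
                    if is_marked(child) and child is not root:
                        continue                  # another marked object: its own cluster
                    if id(child) in seen:
                        continue
                    seen.add(id(child))
                    if id(child) in owner_of and owner_of[id(child)] is not root:
                        shared.add(id(child))     # would be serialized twice today
                    else:
                        owner_of[id(child)] = root
                    stack.append(child)
        return owner_of, shared
    ```

    The `shared` set is exactly the set of objects for which a policy is needed (pick one owner, promote the object to "marked", or store it once and reference it from both entries); everything else clusters unambiguously with a single marked owner.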

    Read the article
