Search Results

Search found 33587 results on 1344 pages for 'case management'.


  • How can I create a solid business case for upgrading our programmers to 256 GB SSD and 16 GB of RAM?

    - by Alex. S.
    We have an environment based on the Microsoft stack (VS2010, SQL Server, etc.), and I firmly believe that we could improve productivity a little with more RAM and a faster secondary SSD. What data would you advise gathering so I can solidify my request in such a way that the advantages can be demonstrated without bias? Currently we have only 6 GB of RAM and slower HD drives, while at home I have a 128 GB SSD in my desktop and 16 GB of RAM (which I believe is also the maximum amount of memory our workstations support; if we could go bigger, even better), so I can feel the difference and it's real. I should also add that we are in an industry with plenty of money, so the issue is really how to get budget approval from management and spend it wisely to increase productivity.

    Read the article

  • WPF: Editable ComboBox; how to make search/auto-fill functionality case sensitive?

    - by unforgiven3
    Say I have a ComboBox, like so: <ComboBox IsEditable="True" Height="30"> <ComboBoxItem>robot</ComboBoxItem> <ComboBoxItem>Robot</ComboBoxItem> </ComboBox> If a user comes along and starts by typing a lower-case 'r' into that ComboBox when it is empty, the ComboBox predictably auto-fills itself with the word 'robot'. Great. Now the same user comes along and starts typing an upper-case 'R' into that ComboBox when it is again empty. Unpredictably, the ComboBox auto-fills itself with the lower-case word 'robot'. Not great. I desperately want it to auto-fill itself with 'Robot', but WPF does not seem to want to smile down upon me. No matter what you do (Caps Lock, Shift+key), the ComboBox will always auto-fill with the lower-case 'robot', provided that the lower-case 'robot' precedes the upper-case 'Robot' in the ComboBox's items collection. Is there any way to prevent this? This behavior is maddening and makes for an absolutely abysmal user experience.
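
    One possible workaround, sketched below in C#: disable WPF's built-in TextSearch (which compares case-insensitively) with IsTextSearchEnabled="False" and do an ordinal prefix match by hand. The attached TextBoxBase.TextChanged event and the PART_EditableTextBox template part are standard WPF pieces; the rest is an untested sketch, not a confirmed fix.

        // Assumed XAML (code-behind needs System, System.Windows.Controls):
        //   <ComboBox x:Name="Combo" IsEditable="True" IsTextSearchEnabled="False"
        //             TextBoxBase.TextChanged="Combo_TextChanged"> ... </ComboBox>
        private bool suppressReentry; // setting the text below re-fires TextChanged

        private void Combo_TextChanged(object sender, TextChangedEventArgs e)
        {
            if (suppressReentry) return;
            string typed = Combo.Text;
            if (string.IsNullOrEmpty(typed)) return;

            foreach (object item in Combo.Items)
            {
                ComboBoxItem cbi = item as ComboBoxItem;
                string candidate = cbi == null ? null : cbi.Content as string;
                if (candidate == null) continue;
                // Ordinal comparison makes the prefix match case-sensitive
                if (!candidate.StartsWith(typed, StringComparison.Ordinal)) continue;

                var editor = (TextBox)Combo.Template.FindName("PART_EditableTextBox", Combo);
                suppressReentry = true;
                editor.Text = candidate;                      // fill in the exact-case match
                editor.SelectionStart = typed.Length;         // select just the completed tail
                editor.SelectionLength = candidate.Length - typed.Length;
                suppressReentry = false;
                break;
            }
        }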

    Read the article

  • What are the worst examples of moral failure in the history of software engineering?

    - by Amanda S
    Many computer science curricula include a class, or at least a lecture, on disasters caused by software bugs, such as the Therac-25 incidents or Ariane 5 Flight 501. Indeed, Wikipedia has a list of software bugs with serious consequences, and a question on StackOverflow addresses some of them too. We study the failures of the past so that we don't repeat them, and I believe that rather than ignoring them or excusing them, it's important to look at these failures squarely and remind ourselves exactly how the mistakes made by people in our profession cost real money and real lives. By studying failures caused by uncaught bugs and bad process, we learn certain lessons about rigorous testing and accountability, and we make sure that our innocent mistakes are caught before they cause major problems. There are less innocent kinds of failure in software engineering, however, and I think it's just as important to study the serious consequences caused by programmers motivated by malice, greed, or just plain amorality. Thus we can learn about the ethical questions that arise in our profession, and how to respond when we are faced with them ourselves. Unfortunately, it's much harder to find lists of these failures--the only one I can come up with is that apocryphal "DOS ain't done 'til Lotus won't run" story. What are the worst examples of moral failure in the history of software engineering?

    Read the article

  • Restore Emacs Session/Desktop

    - by Patrick McLaren
    I've been searching for how to restore an Emacs session, with no luck. I'm looking to restore all previously open buffers, some of which might contain ERC, shells, directory listings, files, etc. Every time I open Emacs, I spend a considerable amount of time arranging my buffers: splitting them into rows and columns, opening a shell, arranging IRC channels. It takes a while to get down to work. I've tried adding the following to my init.el: (desktop-save-mode 1) and then using M-x desktop-save. This only seems to restore files that are open, not shells or anything else running within buffers. I've also checked the following questions (sorry, not able to post links yet): Session management in emacs using Desktop library; Emacs session / projects / window management; Emacs: reopen buffers from last session on startup? And I've read through DeskTop and EmacsSession at emacswiki.org/emacs/SessionManagement. Here's a screenshot example of my Emacs session. A simple answer would be to just focus on real work :P

    Read the article

  • Wowhead.com Site Framework

    - by Byran
    I'm building a community website (not WoW-related) and am curious what framework(s), if any, Wowhead may use. The general, non-WoW-specific functions of the site are nearly identical to what I need. A few of the features I'm interested in are: item page comments, user/account management, forums, a blog, content management, and search-box suggestions. I'm sure a lot of their site is custom-built, but I assume that some portions may be third-party solutions, like the forums and blog. I just don't want to reinvent the wheel if it's out there ready for me to make use of.

    Read the article

  • Apache mod_rewrite: RewriteRule path and query string

    - by 1ace
    I currently have a website with a standard web interface in index.php, and I made an iPhone-friendly version in iphone.php. Both pages handle the same arguments. It works fine when I manually go to .../iphone.php, but I'd like to rewrite anything on .../path/ and .../path/index.php to iphone.php if %{HTTP_USER_AGENT} contains "mobile", and optionally add the query string (not sure if/when I'd need to add it). So far, this is what I have in my .../path/.htaccess: RewriteEngine On RewriteCond %{HTTP_USER_AGENT} ^.+mobile.+$ [NC] RewriteRule index.php?(.*) iphone.php?$1 [L] RewriteRule index.php iphone.php [L] The problems are that it matches index.php in any subfolder, and that it won't match .../path/?args… Special thanks to anyone who can correct/simplify/optimize anything =)
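
    For what it's worth, a RewriteRule pattern never sees the query string, and in a per-directory .htaccess the pattern is matched against the path relative to that directory. A sketch of a fix along those lines (untested):

        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} mobile [NC]
        # In per-directory context the leading ".../path/" is stripped, so this
        # matches both ".../path/" (empty remainder) and ".../path/index.php",
        # but not subfolders.
        RewriteRule ^(index\.php)?$ iphone.php [L,QSA]

    Note that when the substitution itself contains no query string, Apache passes the original query string through automatically; [QSA] only matters once you start adding your own query parameters to the substitution.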

    Read the article

  • ARC, worth it or not?

    - by MSK
    When I moved to Objective-C (iOS) from C++ (and a little Java), I had a hard time understanding memory management in iOS. But now all this seems natural, and I know the retain, autorelease, copy and release stuff. After reading about ARC, I am wondering whether there are more benefits to using ARC, or whether it is just that you don't have to worry about memory management. Before moving to ARC, I want to know how worthwhile the move is. Xcode has a "Convert to Objective-C ARC" menu. Is the conversion really that simple (nothing to worry about)? Does it help me in reducing my app's memory footprint, memory leaks, etc. (somehow)? Does it have much testing impact on my apps? What are the non-obvious advantages? Any disadvantages of moving to it?

    Read the article

  • PostgreSQL + Rails citext

    - by glebm
    I am trying to move to Heroku, which uses PostgreSQL 8.4. PostgreSQL has a citext (case-insensitive text) column type, which is nice since the app was written for MySQL, where string comparisons are case-insensitive by default. Is there any way to use :citext with Rails (so that if the migrations are run on MySQL, citext would just fall back to string/text)? I found this ticket, but it seems like it isn't going to be a part of Rails for a while: https://rails.lighthouseapp.com/projects/8994/tickets/3174-add-support-for-postgresql-citext-column-type
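
    A minimal migration sketch (assumptions: a Rails 3-era migration, a hypothetical users.email column, and a citext type already installed in the PostgreSQL database; on 8.4 citext is a contrib module, since CREATE EXTENSION only exists from 9.1). It branches on the adapter so the same migration degrades to a plain string on MySQL:

        class AddEmailToUsers < ActiveRecord::Migration
          def self.up
            if ActiveRecord::Base.connection.adapter_name =~ /postgres/i
              execute "ALTER TABLE users ADD COLUMN email citext"
            else
              add_column :users, :email, :string  # MySQL: no citext, fall back
            end
          end

          def self.down
            remove_column :users, :email
          end
        end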

    Read the article

  • SQL count conditions

    - by user1311030
    Hi there! I have this question; hope you guys can help me out. So I have this table with two fields: type and authorization. In type I have 2 different values: Raid and Hold. In authorization I have 2 different values: Accepted and Denied. I need to make a view that returns values like this: TYPE: RAID, ACCEPTED: 5, DENIED: 7. Basically I need to know how many of the values in TYPE are Raid, and then how many of them are Accepted and Denied. Thank you in advance!!
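
    This is the classic conditional-aggregation pattern. A sketch, with the table name (requests) assumed since the question doesn't give one; note that authorization is a reserved word in several SQL dialects and may need quoting:

        CREATE VIEW type_authorization_counts AS
        SELECT
            type,
            COUNT(CASE WHEN authorization = 'Accepted' THEN 1 END) AS accepted,
            COUNT(CASE WHEN authorization = 'Denied'   THEN 1 END) AS denied
        FROM requests
        GROUP BY type;
        -- COUNT ignores NULLs, so each CASE counts only its matching rows.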

    Read the article

  • Determine an element's position in a variable-length grid of elements

    - by gaoshan88
    I have a grid of a variable number of elements, say 5 images per row and a variable number of images. I need to determine which column (for lack of a better word) each image is in, i.e. 1, 2, 3, 4 or 5. In this grid, images 1, 6, 11 and 16 would be in column 1, while 4, 9 and 14 would be in column 4.
    1 2 3 4 5
    6 7 8 9 10
    11 12 13 14 15
    16 17 18 19
    What I am trying to do is apply a background image to each element based on its column position. Here is an example of this, hard-coded and inflexible (and if I'm barking up the wrong tree here, by all means tell me how you'd ideally accomplish this; it always bugs me when I see someone ask "How do I build a gold-plated, solar-powered jet pack to get to the top of this building?" when they really should be asking "Where's the elevator?"): switch (imgnum){ case "1" : case "6" : case "11" : value = "1"; break; case "2" : case "7" : case "12" : value = "2"; break; case "3" : case "8" : case "13" : value = "3"; break; case "4" : case "9" : case "14" : value = "4"; break; case "5" : case "10" : case "15" : value = "5"; break; default : value = ""; } $('.someclass > ul').css('background','url("/img/'+value+'.png") no-repeat');
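
    The switch can be replaced by modular arithmetic, which also removes the hard-coded limit of 15 items. A sketch in the question's own jQuery style:

        // column = ((n - 1) mod columnsPerRow) + 1, for 1-based element numbers
        var cols = 5;
        var value = ((parseInt(imgnum, 10) - 1) % cols) + 1;
        $('.someclass > ul').css('background',
            'url("/img/' + value + '.png") no-repeat');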

    Read the article

  • Is there a means to specify specialization-generalization (inheritance) of actors in UML?

    - by Ivan
    I am just starting to use UML and have come to the following question: some actors are clearly specialized versions of a more general entity. For example, I've got Administrator and User actors, which are clearly nothing but different roles of the same person, while Authorizer and Dispatcher are services (and are going to be implemented that way). Should I just ignore these facts while modelling actors and use cases, or specify them in some way? I think I could make good use of such a specification to facilitate code generation.
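
    UML does support generalization between actors, drawn with the same hollow-triangle arrow as class inheritance. As one illustration (the PlantUML notation here is my choice, not something from the question):

        @startuml
        actor User
        actor Administrator
        Administrator --|> User
        ' the same arrow works for service actors, e.g. Dispatcher --|> Service
        @enduml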

    Read the article

  • iOS SDK: first advice for beginners

    - by VdesmedT
    I too often answer the same kinds of questions regarding basic topics like memory management, UITableView, interface orientation, MVC, etc. I understand very well that everyone starting with the SDK is too excited about getting their hands on it, but a little reading would save them hours of debugging and the frustration that comes along with the feeling that "we're missing something here". I'd like experienced users to share the few small articles, white papers, docs, and book chapters that help others save time and avoid frustration. My first vote would be for: the iOS Memory Management Guide, the View Controller Programming Guide, and the Table View Guide. And as a general recommendation, read the Overview section that comes along with every class documentation in the reference library; those sections contain most of what you need to know to avoid big traps!

    Read the article

  • Use Java and RegEx to convert casing in a string

    - by Andreas
    Problem: turn "my testtext TARGETSTRING my testtext" into "my testtext targetstring my testtext". Perl supports the "\L" operation, which can be used in the replacement string. Java's Pattern class does not support this operation: "Perl constructs not supported by this class: [...] The preprocessing operations \l \u, \L, and \U." http://java.sun.com/j2se/1.4.2/docs/api/java/util/regex/Pattern.html
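
    The usual Java substitute for Perl's \L is to walk the matches with Matcher.appendReplacement and lower-case each match by hand. A self-contained sketch using the example string from the question:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class LowercaseMatches {
            public static void main(String[] args) {
                String input = "my testtext TARGETSTRING my testtext";
                Matcher m = Pattern.compile("TARGETSTRING").matcher(input);
                StringBuffer sb = new StringBuffer();
                while (m.find()) {
                    // quoteReplacement guards against '$' and '\' in the match
                    m.appendReplacement(sb,
                            Matcher.quoteReplacement(m.group().toLowerCase()));
                }
                m.appendTail(sb);
                System.out.println(sb); // my testtext targetstring my testtext
            }
        }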

    Read the article

  • Why do they initialize pointers this way?

    - by Rob
    In almost all of the books I read and examples I go through I see pointers initialized this way. Say that I have a class variable NSString *myString that I want to initialize. I will almost always see that done this way: -(id)init { if (self = [super init]) { NSString *tempString = [[NSString alloc] init]; myString = tempString; [tempString release]; } return self; } Why can't I just do the following? -(id)init { if (self = [super init]) { myString = [[NSString alloc] init]; } return self; } I don't see why the extra tempString is ever needed in the first place, but I could be missing something here with memory management. Is the way I want to do things acceptable or will it cause some kind of leak? I have read the Memory Management Guide on developer.apple.com and unless I am just missing something, I don't see the difference.

    Read the article

  • Django image management

    - by Andrey
    I'm new to Django. In my project I'm working with images, and I don't know how to organize the management of images. I need to dynamically upload pictures and resize them. First question: what is the best way to dynamically upload images with a progress bar and without Flash? I found this and this, but I believe there is a better way. Second question: I have to save one image in different sizes. I won't use these thumbnails on my pages, but another application will. Many clients could upload images at the same time, which means that I cannot resize all the images at the same time. How should I organize this process? Is there a better ready-to-use solution for this image management issue?
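
    For the resizing half, a minimal sketch with PIL (the library choice and the sizes are assumptions; for many simultaneous uploads, the common approach is to push calls like this onto a task queue rather than doing them in the request):

        from PIL import Image

        THUMB_SIZES = [(800, 600), (200, 150), (64, 48)]  # illustrative sizes

        def make_thumbnails(path):
            base = path.rsplit('.', 1)[0]
            for w, h in THUMB_SIZES:
                img = Image.open(path)                  # re-open: thumbnail() mutates
                img.thumbnail((w, h), Image.ANTIALIAS)  # preserves aspect ratio
                img.convert('RGB').save('%s_%dx%d.jpg' % (base, w, h))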

    Read the article

  • C++ corrupted my thinking; how do I trust the automatic garbage collector?

    - by SnirD
    I used to program mainly in C/C++, which had me dealing with pointers and memory management daily. These days I'm trying to develop using other tools, such as Java, Python and Ruby. The problem is that I keep thinking in C++ style: I write code the way C++ is usually written in almost every programming language, and the biggest problem is memory management; I keep writing bad code using references in Java and just get as close as I can to the C++ style. So I need two things here. One is to learn to trust the garbage collector, say by seeing benchmarks and proof that it really works in Java, and to know what I should never do in order to make my code the best it can be. The second is knowing how to write code in other languages: I mean, I know what to do, I just never write code the way most Java or Python programmers usually do. Are there any books for C++ programmers that introduce these writing conventions? (By the way, forgive me for my English mistakes.)

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC, which is very technical, but I understand the basic concept. Instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that). Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented on Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve; there were never two independent service curves for either one as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below. Here are some questions I cannot answer because the tutorials contradict each other:
    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve of 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper, I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both automatically have twice the priority of A2 and B2 (while still only getting 128 kbit/s on average).
    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (the current link-share requirement equals the specified link-share requirement minus the real-time bandwidth already provided to this class)?
    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth plus link-share bandwidth. What is the truth?
    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference whether A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?
    5. If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why is real-time not a flat value, and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not the average bandwidth, but their slopes) to be different for real-time and link-share traffic?
    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (classes A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)
    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed; others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?
    8. One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even be offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?
    9. And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed), and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope, or each a different one, and how do I find out the right slope?
    I still haven't lost hope that there exists at least a handful of people in this world who really understood HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
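
    To make the sample setup concrete, here is how it might look in Linux tc syntax (the interface name and class handles are my assumptions; the rates are the ones from the example). rt, ls and ul are the real-time, link-share and upper-limit service curves the questions above refer to, each given as an optional initial slope m1 lasting for time d, then a long-term slope m2:

        tc qdisc add dev eth0 root handle 1: hfsc default 12
        # root: 512 kbit/s total
        tc class add dev eth0 parent 1:  classid 1:1  hfsc ls m2 512kbit ul m2 512kbit
        # inner classes A and B: 256 kbit/s each
        tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls m2 256kbit
        tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 256kbit
        # leaves: A1/B1 with a two-piece real-time curve (256 kbit/s for the
        # first 20 ms of backlog, then 128 kbit/s) plus link-share;
        # A2/B2 link-share only
        tc class add dev eth0 parent 1:10 classid 1:11 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
        tc class add dev eth0 parent 1:10 classid 1:12 hfsc ls m2 128kbit
        tc class add dev eth0 parent 1:20 classid 1:21 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
        tc class add dev eth0 parent 1:20 classid 1:22 hfsc ls m2 128kbit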

    Read the article

  • Outlook 2007 / 2010 Calendar: hide meetings in specific category

    - by Jeroen
    Question: Is there any easy way in Outlook 2007/2010 to show/hide meetings in a specific category, preferably only for a specific view (the Month view, in this case)? Note: I was almost done writing this question, adding just one more "What I've tried" option, when I found an acceptable (though imperfect) solution. Remembering this SE blog post, I figured I might as well post it after all and answer it myself. And who knows, perhaps someone else has a more elegant solution. The reason, for me personally, is that I'd like to hide the small, recurring meetings (like our daily stand-up) in the month view. I'd prefer an Outlook feature that is meant for this (there must be one, right?), but I'm open to workarounds or plugin suggestions as well. What I expected to find somewhere was a list of categories (with an added option "No category") where you could select/deselect the categories whose meetings you'd see. Something like this mock-up. What I've tried:
    1. Edit "View Settings" and use a "Filter..." on categories. This has several disadvantages, the major one being that the filter only allows me to choose what I want to show, not what I want to hide. Even if I tick all categories but one for the filter, it would still hide any uncategorized meeting.
    2. Similar to 1, but using Advanced filters. Still a bit clumsy, as changing views can take up to three clicks, but this is the best solution so far (see the corresponding answer below).
    3. Creating a sub-calendar for these "small" meetings that I wish to hide. This felt a bit clumsy and like overkill, but did provide an easy select/deselect option to show/hide these meetings.
    4. Searching for plug-ins that do this. Couldn't find one (yet).

    Read the article

  • Low framerate on background apps

    - by user1698923
    My problem is that when a game is running in the foreground in full-screen mode, any applications on my second monitor (such as YouTube videos or other videos; it's not app-specific) drop their frame rate to about 2-3 FPS. It seems like some sort of power-management option that I can't track down. As far as I can tell, it's not due to the GPU not being able to keep up. For instance, my PC can play League of Legends at about 280 FPS when the frame rate is uncapped. If I cap it at 60 FPS using the in-game option, it has no effect on the performance of the background app.
    Summary: Windows 8 Pro 64-bit (desktop); Intel Core i7 3820 @ 3.60GHz (Sandy Bridge-E, 32nm, 42 °C); 12.0GB triple-channel DDR3 @ 533MHz (7-7-7-20); Gigabyte X79-UD3 motherboard (SOCKET 0, 37 °C); two DELL U2713HM monitors (2560x1440 @ 59Hz); 1280MB NVIDIA GeForce GTX 570 (Gigabyte, 58 °C); 212GB Volume0 (RAID) plus two 1863GB Western Digital WD20EARS-00MVWB0 SATA drives (36 °C / 34 °C); no optical drives; ASUS Xonar Essence STX audio.
    Monitors: both DELL U2713HM on the GTX 570, current resolution 2560x1440, work resolution 2560x1400, 32 bits per pixel, 59 Hz, multiple displays extended (primary on \\.\DISPLAY5\Monitor0, secondary on \\.\DISPLAY4\Monitor0), both enabled.
    GTX 570 details: GF110 GPU, device ID 10DE-1086, revision A2, subvendor Gigabyte (1458), GeForce 500 series; 40nm, die size 520 mm², released Dec 07, 2010; DirectX 11.0 and OpenGL 5.0 support; PCI Express x16; temperature 57 °C; driver version 9.18.13.2018, BIOS 70.10.55.00.01; 40 ROPs, 512 unified shaders; 1280 MB GDDR5 on a 64x5 (320-bit) bus; 16x anisotropic filtering; moderate noise level; max power draw 219 Watts; voltage 0.988 V. Three performance levels: Level 1 "Default" (GPU 50 MHz, memory 135 MHz, shader 101 MHz), Level 2 "2D Desktop" (GPU 405 MHz, memory 324 MHz, shader 810 MHz), Level 3 "3D Applications" (GPU 845 MHz, memory 1900 MHz, shader 1690 MHz); currently at Level 3.
    Things I've tried: 1) updating the graphics driver; 2) setting the Windows power mode to High Performance; 3) resetting the NVIDIA global performance settings to default.

    Read the article

  • What web-based tool would allow a non-technical user to manage authorized_keys files on a Linux (Fedora/CentOS/Ubuntu/Debian) server?

    - by Tom H
    (Edit: clarification below.) We have a number of groups of developers that change frequently, and a security policy requiring individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding each id_dsa.pub to the user's authorized_keys file. I am using chef to sync the user accounts across machines; however, our previous method of using webmin to manage the user passwords is not designed for key-based auth, and hence is not easy for non-technical users. The developers log in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud, and we have a single server available to host the master set of accounts. Obviously I could deploy LDAP or another centralised authentication system, but that seems a bit overblown when webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using webmin's clustered users and groups. However, looking at the currently installed webmin, it is not as easy to create the authorized keys as it is to create user accounts and passwords (it's possible, but not easy: some functionality is in the usermin module, or would require some tedious steps). Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to the user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use chef to do that part if the accounts are created correctly on the "master" server.
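
    Since chef is already syncing the accounts, the key files themselves could ride along in the same recipe; a rough sketch (the node['dev_users'] attribute layout is hypothetical, just to show the shape):

        # node['dev_users'] assumed to map usernames to arrays of public key strings
        node['dev_users'].each do |name, keys|
          user name do
            home "/home/#{name}"
            shell "/bin/bash"
            supports :manage_home => true
          end

          directory "/home/#{name}/.ssh" do
            owner name
            mode "0700"
          end

          file "/home/#{name}/.ssh/authorized_keys" do
            owner name
            mode "0600"
            content keys.join("\n") + "\n"
          end
        end

    A web UI would then only need to edit that one data source, which is a much smaller tool to find or build than a full user manager.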

    Read the article

  • Different approaches to sharing files over a local network & collaborating on playlists

    - by exTyn
    I know that I can use Google to find methods to share files over a local network [1]. But I have never shared files over a local network, and I want to do this in a good, professional way. Also, this could be a good community wiki, I think. Well, what I am asking is: what are the pros and cons of different methods of sharing files over a local network? In my case, I need to share files between Linux & Win 7, and I want it to be secure (= without access for anyone but me & the people in my room). Another question (connected with the above topic) is about playing music over the local network. Let's say I live with 2 other guys in a room, one of us has speakers, and we want to collaborate on creating playlists (e.g. everyone chooses 3 songs to be played). Is it possible? How would we do this? I am asking this question on SuperUser because it is connected with hardware & software (network, connecting computers, software for managing playlists on a network, etc.). I think it is the most accurate place for such a question (I have considered SO and SF). [1] And I have already done this! But I do not have experience in this field (sharing files over a local network), so I am asking about pros and cons.

    Read the article
