Search Results

Search found 3094 results on 124 pages for 'fit'.


  • Switching from onboard Intel graphics to the Nvidia dedicated GPU

    - by Anarkie
    How can I switch from the Intel onboard graphics to the Nvidia dedicated GPU? When I go to the Windows screen resolution dialog I see Intel, and I can't change it. In Device Manager both adapters are listed and the Nvidia one is recognized. I saw no option to set one of them as primary, so I disabled the Intel adapter: black screen! I rebooted and re-enabled Intel. Then I right-clicked the desktop, opened the "Nvidia Control Panel", and in the 3D settings assigned the game I want to play to "High-performance Nvidia", but it didn't switch when I started the game. I then set the preferred GPU in the global settings to "High-performance Nvidia" for everything, and it still didn't change. I understand there is a switching mechanism between the two GPUs to save battery and so on, but I never see this switch happen when it should, and I can't trigger it manually either. Is there a manual Fn-key switch? I looked but couldn't find one.

    Why I want to do this: 1) better game performance; 2) I want to play an old game from 2002 (Diablo 2 LOD), and when I start it there are black bars on the sides, so the picture just becomes smaller, which I dislike. I've heard this centering of the display is Intel behaviour; I would rather scale or stretch the picture to fill the widescreen (fullscreen), which should be possible with Nvidia.

    My notebook specs: Fujitsu Lifebook AH531, Windows 7 64-bit, Core i5, onboard Intel HD Graphics, Nvidia GT 525. I didn't install the Nvidia card or drivers later; they were installed and ready from the moment I first turned the computer on.

    How I determined that the cards hadn't switched while playing: I left the game with the Windows key and looked at the screen resolution menu, which still showed Intel, and the game still had its black bars. I know the Intel GPU should be enough for Diablo 2, but I'm interested in the answer for future games: what if I install an up-to-date game? Then the Intel GPU will not be sufficient. I would like to learn how this switching works.

    Read the article

  • Which revision control system for a single user

    - by G. Bach
    I'm looking to set up a revision control system with me as the single user. I'd like access (read and write) to be protected by SSL, little overhead, and preferably a simple setup. I want to do this on my own server, so registering with a professional provider of such a service is not an option (I like having direct control over my data; also, I'd like to know how to set something like that up). As far as I'm aware, the kind of project I want to put under revision control doesn't really matter, but just for completeness' sake: I'm planning on using this for Java projects, some HTML/CSS/PHP stuff, and in the future possibly as a synchronizing tool for small databases (ignore that last one if it doesn't fit the paradigm of revision control). My questions arise primarily from the fact that I've only ever used Subversion from Eclipse, so I don't have thorough knowledge of what's out there or what fits which needs. So far I've heard of Subversion, Git, and Mercurial, but I'm open to any system that's widely used and well supported. My server is running Ubuntu 11.10. Which system should I choose, what are the advantages of the respective systems, and are there any tutorials you could recommend for setting up whichever system I should choose?
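
    To make the "simple setup" requirement concrete, here is a minimal sketch of the sort of workflow I'm picturing with Git over SSH; the hostname, user and paths are placeholders, and I realize this covers SSH rather than SSL:

        # on the server: create a bare repository to push to
        ssh me@myserver.example 'git init --bare ~/repos/myproject.git'

        # on the laptop: clone it and start working
        git clone me@myserver.example:repos/myproject.git
        cd myproject
        git add . && git commit -m "initial import"
        git push origin master

    If one of the suggested systems needs noticeably more machinery than this, that would be useful to know.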

    Read the article

  • Create a PDF that defaults to flip on short edge when printed double-sided

    - by user568458
    We're creating a 2-page PDF brochure for a target audience who will print it on their regular office or home printers. If it is printed on a double-sided printer (common in offices), it'll come out correctly if the user manually selects "Flip on short edge", but the second page will be upside down if the default settings are used (flip on long edge). Our target audience isn't very tech-literate, and we've found that even within our own office network the location of the "Flip on short edge" setting varies, so it isn't realistic to give everyone who downloads the PDF instructions on how to change this setting, or to expect everyone to find it on their own. So, when creating a PDF (ideally using Adobe InDesign or Acrobat, but other software or hacking is fine), is there a way to configure the PDF file itself so that when printed double-sided with default settings, it flips on the short edge? If possible, it would be useful supplementary info to know how reliable any such method is across different PDF readers (e.g. Adobe Reader, Acrobat, Mac Preview, built-in browser readers such as Chrome's, Foxit, etc.). If questions about content creation like this aren't a great fit here, feel free to migrate it to the Graphic Design Stack Exchange site; this question seems to fall halfway between the two sites.
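
    The closest lead we've found so far is the PDF ViewerPreferences dictionary, which defines a /Duplex key whose /DuplexFlipShortEdge value is supposed to preselect exactly this. As a sketch of the idea (assuming Ghostscript is available; whether individual readers and printer drivers honour the key is precisely what we don't know), something like this should stamp the preference into an existing file:

        gs -o brochure-duplex.pdf -sDEVICE=pdfwrite \
           -c "[ {Catalog} << /ViewerPreferences << /Duplex /DuplexFlipShortEdge >> >> /PUT pdfmark" \
           -f brochure.pdf

    First-hand reports on which readers respect this would be exactly the kind of answer we're after.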

    Read the article

  • Remote Desktop Zooming

    - by codeulike
    Using Remote Desktop from a device with a hi-res screen (say, a Surface Pro) is decidedly tricky: everything displays at 1:1 scale and so looks tiny. If the machine you are remoting into runs Server 2008 R2 or later, you can change the DPI zooming setting (see here). But for older hosts, that doesn't work. Using normal Remote Desktop, you can connect with a lower resolution, say 1280x768, and turn on smart sizing. However, smart sizing can scale down (to display a huge desktop in a small area) but does not seem to scale up (to display a small desktop in a big area). Using the Windows 8 Remote Desktop app, you can zoom, but you cannot set the default resolution of the host. What I want is a lower resolution on the host, scaled up to fit my screen. So both of those are close to what I want, but don't quite work. The question is: does the Remote Desktop app allow the screen resolution to be set somehow? Is there some other Remote Desktop client that handles zooming better?
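
    For reference, with the classic client the resolution settings can at least be pinned in a saved .rdp file; a sketch of the relevant lines (the hostname is a placeholder), though in my experience the smart sizing here still only shrinks, it doesn't enlarge:

        full address:s:myhost.example.com
        screen mode id:i:1
        desktopwidth:i:1280
        desktopheight:i:768
        smart sizing:i:1

    An answer that achieves the equivalent, plus upscaling, in any client would do.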

    Read the article

  • Why can't I debug my ASP project through a remote desktop connection?

    - by Anthony Benavente
    I just asked this question on Stack Overflow, but I figured this Stack Exchange site is a better fit. It's been about a month of trying to figure out this problem and we've still not found a solution. We have about seven virtual machines on a server, all running Windows XP Professional with SP3 and all with Visual InterDev and IIS 5.1 installed. Running the programs works fine, but we just can't debug through Remote Desktop. When we are logged into the server console (through vSphere) and log into one of the virtual machines from there, we are able to debug properly. We figure the issue lies with some kind of permissions for Remote Desktop users. We've tried nearly every article on the internet (exaggerating, of course) and are about to give up hope. One more thing: when we are logged into the virtual machine through the server console and then remote in, the user that was logged into the console is kicked off, but debugging works! Does remoting in trick the computer into giving us the correct permissions? I'm really not sure how it works. I know this technology predates human history, but we are in the process of migrating from Classic ASP to ASP.NET. Specs:

        - Windows XP Professional with SP3
        - IIS 5.1
        - Visual InterDev (Visual Studio 6)

    EDIT: By "debug" I mean running the project with breakpoints. InterDev doesn't stop at breakpoints.

    Read the article

  • Looking for issue tracker software for residential property management

    - by Rob
    This question is about a computer software application (as per the SU guidelines) for centrally tracking issues concerning the management of a residential block of flats (apartments, as they say in the US and France). Issues are incidents and the unplanned maintenance that results from them, plus planned one-off maintenance and regular routine maintenance. I live in a block of flats and, along with other residents, am looking to watch more closely over issues with the communal, shared areas of the premises (corridors, courtyards, stairs, lifts, lights, the bin shed, bike stands, parking areas, etc.) and their maintenance, currently handled by a property management company. Our own homes are our own affair internally; it's the outside communal areas that I'm interested in. The aim is to control and possibly reduce costs, by proactively managing the property using historical data to predict issues, and by scrutinising maintenance charges against that data to ensure costs are as expected. Trends could also be established, so that recurring problems can be detected and pre-empted to reduce costs. As a software professional, I'm aware of Bugzilla and Eventum as free issue trackers for software, which could be customised to fit this application, but I wondered if there was something more appropriate. It would be useful for such software to run on a web server, with secure access, so that residents can log in and view the issues.

    Read the article

  • Is it possible to use rsync over sftp (without an ssh shell)?

    - by Tom Feiner
    Rsync over ssh works great every time. However, trying to rsync to a host which allows only sftp logins, not ssh logins, produces the following error:

        rsync -av -e ssh /source user@remotehost:/target/
        protocol version mismatch -- is your shell clean?
        (see the rsync man page for an explanation)
        rsync error: protocol incompatibility (code 2) at compat.c(171) [sender=3.0.6]

    Here's the relevant section from the rsync man page:

        This message is usually caused by your startup scripts or remote shell facility producing unwanted garbage on the stream that rsync is using for its transport. The way to diagnose this problem is to run your remote shell like this:

            ssh remotehost /bin/true > out.dat

        then look at out.dat. If everything is working correctly then out.dat should be a zero length file. If you are getting the above error from rsync then you will probably find that out.dat contains some text or data. Look at the contents and try to work out what is producing it. The most common cause is incorrectly configured shell startup scripts (such as .cshrc or .profile) that contain output statements for non-interactive logins.

    Trying this on my system produced the following in out.dat:

        ssh-dummy-shell: Command not allowed.

    As I thought, the host is not allowing ssh logins. The following link shows that it is possible to accomplish this task using FUSE with sshfs; however, it is extremely slow and not fit for production use. Is there any chance of getting rsync over sftp to work?
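
    One workaround I'm evaluating, sketched here on the assumption that lftp is installed on the local side (it speaks sftp natively, so nothing is needed on the remote host beyond the sftp login itself):

        # push the local directory to the sftp-only host;
        # --only-newer skips files that haven't changed, --delete removes files that vanished locally
        lftp -u user -e "mirror -R --only-newer --delete /source /target; quit" sftp://remotehost

    It isn't rsync's delta-transfer algorithm (changed files are re-sent whole), but for this kind of host it may be the practical ceiling.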

    Read the article

  • Installing Windows 7 from a USB Hard Drive

    - by Mark Tomlin
    I have a Western Digital Passport external hard drive (320 GB) that I want to partition so I can keep the existing data on it but use some of the free space to install Windows 7 onto my desktop computer. Microsoft has given me the Windows 7 Enterprise Edition ISO to download. I would like to partition the external HD so I can fit the ISO image onto it. How would I go about doing this? Trying to use GParted to partition the external drive has caused a chicken-or-egg problem: GParted can't see the drive unless it's mounted, but when it is mounted it won't allow me to do anything to the partition. When it's not mounted, GParted can't see the drive at all, and as such can't do anything to it. Once the drive is correctly partitioned, how do I go about moving the ISO image Microsoft gave me onto the USB external hard drive? Are there any special steps I need to take? I am using Ubuntu 11.04 and GParted 0.7.0 on my Chromebook to do this. Any support would be appreciated.
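
    To frame what I'm after, here is roughly what I imagine the command-line version looks like once the partitioning is sorted; this is a sketch only, and the device names (/dev/sdb, /dev/sdb2) are assumptions that would need checking first with something like sudo fdisk -l:

        # unmount whatever Ubuntu auto-mounted from the Passport
        sudo umount /dev/sdb1

        # assuming /dev/sdb2 is the new partition set aside for the installer
        sudo mkfs.ntfs -f -L WIN7 /dev/sdb2

        # copy the contents of the ISO onto it
        sudo mkdir -p /mnt/iso /mnt/usb
        sudo mount -o loop windows7-enterprise.iso /mnt/iso
        sudo mount /dev/sdb2 /mnt/usb
        sudo cp -r /mnt/iso/. /mnt/usb/
        sudo umount /mnt/iso /mnt/usb

    I gather the partition also needs to be marked bootable and given a proper boot sector for the Windows installer to actually start, which is part of what I'm unsure about.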

    Read the article

  • Low-profile, PCI Express, x1 video card with VGA-out?

    - by Dandy
    I just bought an Acer Aspire EasyStore H340 system (http://us.acer.com/acer/productv.do?LanguageISOCtxParam=en&kcond61e.c2att101=54825&sp=page16e&ctx2.c2att1=25&link=ln438e&CountryISOCtxParam=US&ctx1g.c2att92=450&ctx1.att21k=1&CRC=936243954). It comes with Windows Home Server, which I don't particularly care for (too dumbed down for my liking); I bought the box mainly for the form factor, and intend to install Server 2008 on it and run it as a small domain controller. The geniuses at Acer, however, went out of their way to ensure you can only run Home Server: you can only connect to it via the Home Server Connector software, as it has no video out whatsoever (so essentially there's no way to even get into the BIOS). It only has a low-profile, PCI Express x1 slot. It turns out to be way harder than I thought to locate a video card that both has an x1 connector and is low profile (I'd really rather not snip the bracket at the back just so it'll fit the case). I know they're out there, and I've seen one with DisplayPort outputs, but I don't have a monitor with that connection. So, to reiterate, it needs to be:

        - x1 connector
        - low profile
        - VGA out (though DVI would be okay; I have some spare adapters)

    Can anyone recommend anything at all?

    Read the article

  • nginx giving 404 when using set in an if-block

    - by ba
    I've just started using nginx, and I'm now trying to make it play nice with the WordPress plugin WP-SuperCache, which creates static files of my blog posts. To serve the static file I need to make sure that certain cookies aren't set, that it's not a POST request, and that the cached/static file exists. I found this guide and it seems like a good fit. But I've noticed that as soon as I try to set something inside an if, my site starts giving 404s on URLs that aren't rewritten. The location block of the configuration:

        location /blog {
            index index.php;
            set $supercache_file '';
            set $supercache_ok 1;
            if ($request_method = POST) {
                set $supercache_ok 0;
            }
            if ($http_cookie ~* "(comment_author_|wordpress|wp-postpass_)") {
                set $supercache_ok '0';
            }
            if ($supercache_ok = '1') {
                set $supercache_file '$document_root/blog/wp-content/cache/supercache/$http_host/$1/index.html.gz';
            }
            if (-f $supercache_file) {
                rewrite ^(.*)$ $supercache_file break;
            }
            try_files $uri $uri/ @wordpress;
        }

    The above doesn't work. And if I remove all the ifs above and add just

        if ($http_host = 'mydomain.tld') {
            set $supercache_ok = 1;
        }

    then I get the exact same message in errors.log, namely:

        2010/05/12 19:53:39 [error] 15977#0: *84 "/home/ba/www/domain.tld/blog/2010/05/blogpost/index.php" is not found (2: No such file or directory), client: <ip>, server: domain.tld, request: "GET /blog/2010/05/blogpost/ HTTP/1.1", host: "domain.tld", referrer: "http://domain.tld/blog/"

    Remove the if and everything works as it should. I'm stymied; no idea at all where I should start searching. =/

        ba@cell: ~> nginx -v
        nginx version: nginx/0.7.65

    Read the article

  • Adding more drives to a drive array

    - by Mystere Man
    I have a friend who has two servers, a Dell 1800 and an HP ML350 G5, both with SAS drive arrays. The Dell takes 3.5" drives and the HP takes 2.5". They currently only have three drives in each array. We want to add additional drives, but the servers do not appear to have caddies, just blank covers. I haven't been able to take a good look at them, so I'm not sure what I need to do here. Are the sockets just there, so that I can buy additional caddies and simply stick them in? Or do I have to buy some kind of caddy adapter? Also, I'm thinking of going 2.5" in the new server, so is there a 2.5"-to-3.5" adapter caddy that will fit the Dell's chassis, letting me use 2.5" drives in the 3.5" bays? Can I buy 6 Gb/s drives and add them to the 3 Gb/s controller? The reason is that we're going to replace both computers in a year or so, and we want to bring the drives along. So rather than buy 3 Gb/s drives, we want to buy 6 Gb/s drives now so they can be reused in the new server.

    Read the article

  • How to reinstall bootloader after migration to SSD

    - by hijarian
    I must say, it was difficult to name this question. Basically, I need to properly reinstall the bootloader on my system, because I already have working system disks for my OSes. The long story is this: I had a large, slow HDD with a perfectly bootable Windows 7 / Debian Wheezy dual boot on it. Then I ordered an SSD and prepared my system partitions to fit onto the much smaller drive. I wanted the following scheme:

        128 GB  Windows
        24 GB   Debian /
        86 GB   Debian /home

    (An odd size for /home because there's no such thing as a true 256 GB disk drive.) So I prepared such partitions on the original HDD, installed the new SSD, booted a GParted live USB (can't remember now what it was really called), and simply copy-pasted the partitions from the HDD to the SSD. So now I have the following partitions across the physical disks:

        SSD
        - 128 GB copy of the original Windows partition
        - 24 GB copy of, presumably, Debian /
        - 86 GB copy of, presumably, Debian /home

        HDD
        - 128 GB Windows
        - 24 GB Debian /
        - 86 GB Debian /home
        - several other partitions with non-system data

    The behavior of the system right after the Ctrl+C, Ctrl+V in GParted was as follows: no GRUB; the system boots right into the Windows on the HDD. The BIOS is set to boot from the SSD first. I made a Debian Testing installation USB, loaded it in rescue mode, found that it identified my SSD as /dev/sda, and installed GRUB to /dev/sda. Now my system loads a GRUB which lists both Windows and Debian, from the HDD. So I am back in the initial position. Please, how should I set up GRUB so it loads the OSes correctly from the SSD? Should I fire up my Debian, fiddle with the GRUB config, and reinstall it again to the same place (on the SSD)?
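
    To make the question concrete, this is the sequence I'm guessing at; a sketch only, assuming the SSD really is /dev/sda (that would need verifying with lsblk or blkid first):

        # from a Debian booted or chrooted on the SSD copies:
        sudo grub-install /dev/sda   # put GRUB in the SSD's MBR
        sudo update-grub             # regenerate grub.cfg, re-probing all OSes

        # list filesystem UUIDs; GParted's copy-paste leaves the SSD partitions
        # with the same UUIDs as the HDD originals, which would explain why
        # the HDD copies keep getting booted
        sudo blkid

    If duplicate UUIDs are indeed the problem, I assume giving the SSD's ext partitions fresh ones (tune2fs -U random) and updating /etc/fstab to match would have to come before the grub-install. Is that the right order of operations?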

    Read the article

  • Linux file server for an inexperienced admin

    - by Pat
    A charity I volunteer for wants a file server for their mostly Windows machines (about five XP and Windows 7 machines, with some Mac laptops every now and then). For the server, I have a PC with an Intel Core 2 Duo 3 GHz processor, 4 GB of DDR2 400 MHz RAM, and a 500 GB HDD. (I should point out that they do not currently have any server; they are just using Windows to share a folder on one of the PCs.) What is a Linux distro that is easy to configure for Windows file serving, yet stable and secure enough to protect sensitive data without an expert sysadmin? I'm guessing that a Debian-based distro would probably fit the security bill, but I don't know of any tailored to novice sysadmins. Also, are there any killer apps that make this easy to set up and administer (as a Windows file server in particular; this answer is a good example)? Would FreeNAS be sufficient? Once it's all set up, what are the minimum measures I need to take to keep the data secure? I found this somewhat helpful answer, but it's not specific to my question of just getting a secure file server up, running, and maintained.
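
    For scale, my current understanding is that on a Debian-ish distro the whole job can come down to installing Samba and adding one share definition, something like the sketch below (the share path, name and group are made up, and each user still has to be created with smbpasswd); what I can't judge is what this leaves out security-wise:

        sudo apt-get install samba
        sudo tee -a /etc/samba/smb.conf >/dev/null <<'EOF'
        [charity]
            path = /srv/share/charity
            browseable = yes
            read only = no
            valid users = @staff
            create mask = 0660
            directory mask = 0770
        EOF
        sudo smbpasswd -a pat        # repeat per user
        sudo service smbd restart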

    Read the article

  • What LPR arguments do I need to print a 1400x800 pixel image on a 4x6 label?

    - by Nick
    This is driving me nuts. UPS sends our system a 1400x800 GIF image of a shipping label, which is supposed to fit nicely on a 4x6 page. Unfortunately, I can't seem to get the command-line options right to make it happen. We're using an Eltron/Zebra 2844 with a network adapter, printing from our Ubuntu 8.04 server using CUPS. We're using the correct drivers, and test pages print correctly. No matter what I try, though, it insists on printing the UPS labels across six pages, with a little bit of the label on each page, or else way too small. I've tried a bazillion different lpr settings, most of them producing garbage. The closest I've gotten is this:

        lpr -P Eltron2844 -o natural-scaling=55 -o page-right=0 -o page-left=0 \
            -o landscape -o media="4x6" ./1ZY437560399620027.gif

    but it causes the image to be too small on the page. It's about an inch too short, and there's a 1/2" margin on both sides. If I bump the scale up to 56, it explodes the image onto two pages and squashes it. Any ideas?
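
    One idea I haven't been able to verify: pre-scale the image to the printer's native pixel grid before handing it to lpr, so CUPS has no scaling decisions left to make. A sketch, assuming the 2844 is a 203 dpi printer (making a 4x6 label 812x1218 dots) and that ImageMagick is installed:

        # rotate the landscape label to portrait and force it to exactly 812x1218 pixels
        convert ./1ZY437560399620027.gif -rotate 90 -resize '812x1218!' label.png

        # print it at a fixed 203 pixels per inch on the 4x6 media
        lpr -P Eltron2844 -o media=4x6 -o ppi=203 label.png

    If the CUPS image filter really treats ppi=203 as a true 1:1 mapping on this printer, that would settle half the problem.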

    Read the article

  • Recommendation for a simple no-frills Windows PDF printer driver?

    - by Scott Bussinger
    I'm looking for an extremely simple Windows PDF printer driver that I can recommend to clients. Ideally it would have these characteristics: when you print something, it just creates a temporary file and displays it in the default PDF viewer, with no prompting; if the user wants to save it, they can do so manually from inside the viewer. This workflow should need no special post-install configuration. Installation should be very simple: a double-click on the installer and a click on "Finish", no complicated multi-step installation, no questions your grandmother wouldn't know the answer to (preferably no questions at all), and no extra junk installed. An option for a completely silent installation would be nice, but isn't necessary. Ideally it would be free, to simplify installing it across a small network, but low-cost is an option. I've tried quite a few, but none really fit the bill: some can achieve the first goal, but only after careful configuration; some try to install extra toolbars; some have other installation complexities that would trip up extremely novice users. Any suggestions? Thanks!

    Read the article

  • Simple, distributed, disconnected way to sync a directory

    - by Rory
    I want to start regularly backing up the home directory on my Ubuntu laptop, machine X. Suppose I have access to two different remote (Linux) servers that I can back up to, machines A and B. Machine X will be the master and should be synced to A and B. I could just regularly run rsync from X to A and then from X to B; that's all I need. However, I'm curious whether there's a more bandwidth-efficient, and hence faster, way to do it. X will be on residential-style broadband, and since I don't want to soak up its bandwidth, I would limit the transfer rate from X. A and B will be on all the time, but X will not be, so I'd also like to reduce the amount of time X spends transferring, potentially letting A and B spend more time transferring between themselves. Also, X won't be connected all the time. What's the best way to do this? Rsync from X to A, then from A to B? Timing that right could be troublesome. I don't want to keep old files around, so if I were to rsync I'd use the --del option. Could that mean something might get transferred from A to B, then deleted from B, then transferred from A to B again? That's suboptimal. I know there are fancy distributed filesystems like Gluster, but I think that's overkill in this case, and it might not fit the disconnected nature of the setup.
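
    For comparison, the straightforward chained version I have in mind; a sketch assuming SSH keys are set up between all three machines, with made-up hostnames, paths and bandwidth cap:

        # X -> A, throttled (KB/s) so it doesn't soak the home connection
        rsync -az --delete --bwlimit=100 ~/ backup@hostA:backups/x-home/

        # only after that exits cleanly, A -> B over the server-side links
        ssh backup@hostA 'rsync -az --delete backups/x-home/ backup@hostB:backups/x-home/'

    Chaining the two with && would at least serialize them, which I think avoids the delete-then-retransfer scenario above, but it still keeps X online for the whole first leg.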

    Read the article

  • What is the max connections via remote desktop for a small server?

    - by Jay Wen
    I have a small server running MS Server 2012. The CPU is a Xeon E3-1230 V2 @ 3.30 GHz (4 cores, 8 logical processors) with 8 GB RAM. The main drive is a Samsung 840 SSD, and the bulk storage is a four-disk WD Black RAID 10 array in a Synology NAS enclosure. My question is: given this hardware, approximately how many users can the system support via Remote Desktop Connection? Assume there are no licensing limits; these are not admin users (I know there is a two-admin limit). This boils down to: what resources does one remote connection require? RAM? A percentage of the CPU? Network bandwidth? I guess the base case would be a connection where the user is inactive or simply browsing CNN; once you know that, you know how many you could fit on the machine before something maxes out. In reality, users would mostly be in Excel (multi-MB spreadsheets), and I know the approximate resources currently required by each copy of Excel.

    Read the article

  • Any ideas why Ettercap filters aren't seeing packet data?

    - by Bryan
    I'm using an Ettercap filter to detect a query response coming back from a particular service on a remote machine. When I see a response from the service, I search through the data in the packet to check whether one offset holds a specific value, and if so I change the value at another offset. Trouble is, when I try this on a new virtual machine I built, my Ettercap filter no longer gets any data in the DATA.data variable available to it.

        if (ip.proto == TCP && tcp.src == 17867) {
            msg("Response seen!\n");
            if (DATA.data + 2 == "\0x01") {
                msg("Flag detected!\n");
                DATA.data + 5 = 0x09;
            }
        }

    The filter is getting applied to the traffic, because the "Response seen!" messages get printed out by Ettercap. However, the "Flag detected!" messages do not. I think DATA.data is indeed empty, because if I change my second if statement to check for DATA.data == "", then the "Flag detected!" message gets printed. Any ideas why this may be happening? Also, if this is the wrong site to be asking questions like this, please let me know; I wasn't sure whether it fit better here or somewhere like Super User or Server Fault. By the way, this is a cross-post from Stack Overflow... I should have posted on this forum instead, I think. :)

    Read the article

  • Write stderr to a file using PowerShell

    - by Zian Choy
    How do I capture error messages from a PowerShell-launched command in a text file? I searched the Internet for a while and found that, supposedly, I should be able to do something like

        cmd /c "big blob of text > C:\output.txt 2> C:\errors.txt"

    to direct the output to output.txt and the errors to errors.txt, but when I try to run the command, I get the following error:

        cmd.exe : The filename, directory name, or volume label syntax is incorrect.
        At C:\Users\Zian\Desktop\Untitled1.ps1:27 char:4
        +    cmd <<<< /c $command
            + CategoryInfo          : NotSpecified: (The filename, d...x is incorrect.:String) [], RemoteException
            + FullyQualifiedErrorId : NativeCommandError

    Furthermore, if I try to run the command without everything starting at "2", the command executes correctly and output.txt catches the right output. I looked at "Redirect stderr to variable in powershell", but it wasn't helpful, because the answer to that question suggests capturing the entire output and filtering it in memory. In my case, I am backing up every database on a computer, and since the databases won't fit in my laptop's RAM, I cannot use that question's solution. I also found tantalizing suggestions about using $err = @(command goes here), but with no information on what to do other than simply inserting that line of text. I tried the search function on Server Fault with the string "@()", but it did not return any results. What can I do to get the error messages into errors.txt?

    Read the article

  • One-Way Sync with Dropbox?

    - by user244805
    Is there any way I can mirror a Dropbox folder to my C: drive by just running a portable file? Extra background information, because I know you guys hate it when you don't get the entire situation: I go back to university in the fall and I need a new storage solution. I decided to use Dropbox to sync my tiny university files (< 5 MB). I need to access these files from four machines:

        1. Windows 7 home machine
        2. Windows 7 university machine A
        3. Windows 7 university machine B
        4. Android tablet

    1 and 4 are a non-issue. The problem lies with 2 and 3. I want to be able to edit my files on 2 and 3, but those machines are not mine. There is an easy fix: run a portable version of the Dropbox syncer from a USB drive. But the problem is that I don't want to carry a USB drive around with me all the time. In that case, I can just run the small portable Dropbox syncer off the internet. But where will it store the files? In a temporary directory on the C: drive. There is only one issue left: there are hundreds of machines that I will randomly use that fit in categories 2 and 3. My portable Dropbox syncer will notice that the temporary directory is empty on each new PC I use, and instead of downloading my Dropbox folder to the machine, it syncs the other way around, i.e. it deletes my entire Dropbox. The solution is to mirror my Dropbox onto the temporary directory before running the syncer.

    Read the article

  • Save and restore multiple layers within a Photoshop action that flattens

    - by SuitCase
    I'm editing comic pages with layers: "background", "foreground", "lineart" and "over lineart". I have a Photoshop action that includes a Mode > Bitmap command, which requires the image to be flattened. I need this part of the action because I use the Halftone Screen method of reducing the greyscale image to bitmap on the "background" layer, creating a certain effect; I am pretty sure there is no filter or anything else that gives the same result. After the mode is changed to bitmap, my action changes things back to greyscale for further work. This poses a problem: I only want to apply the bitmap mode change to the background layer, and afterwards I want to restore the layer structure as it was, with the foreground, lineart and over-lineart layers back above the now-halftoned background. My current method of saving and restoring these layers is clumsy. My action can automatically save the "foreground" layer by selecting it, cutting it, then pasting it back in after the mode changing is over. But for the "lineart" and "over lineart" layers, I have to manually cut them, paste them into a new document, and later re-cut and re-paste after running my action. This is so clunky! What I would like to know is whether it's possible to set aside my layers in an automated way, and then bring them back in, also automatically. An ugly (but functional) solution would be to replicate my manual steps of creating new documents and pasting the layers there temporarily, but I don't think Photoshop allows an action to do things outside the current document. It seems the only way to do what I want is the clipboard hack, but that leaves me stuck, as I have two more layers that can't fit onto that same clipboard. Help or suggestions would be appreciated. I can keep doing it manually, but a comprehensive action would save me a ton of time.

    Read the article

  • AMD processors with bundled graphics card [closed]

    - by shybovycha
    Sorry for posting this question here; I just don't know which Stack Exchange site I should be writing to. I've heard AMD has created processors with a video card bundled in. These processors should work about as fast as a regular processor with a discrete video card, but use less power and give off less heat. Some googling around gave me results like "AMD A-series processors", which were mentioned as being built with the technology I described above. On the other hand, AMD driver releases are infrequent and their quality is not very good. I am a game developer and web developer, so I need a powerful processor, a capable graphics card and a lot of RAM on board (to make it possible to create a sample Grails application, for example, or some 3D models in Maya/Cinema 4D). Still, I want my battery to be long-living, so power usage is a bit critical for me. So, my questions are: is there a processor technology like the one I've described, and which series use it (if it exists)? And which would suit a laptop better for the purposes mentioned above: an AMD processor, or an i5/i7 with an Nvidia graphics card?

    Read the article

  • Expire Files In A Folder: Delete Files After x Days

    - by Brett G
    I'm looking to make a "drop folder" on a Windows shared drive that is accessible to everyone. I'd like files to be deleted automagically if they sit in the folder for more than X days. However, it seems that all the methods I've found to do this use the last modified date, last access time, or creation date of a file. I'm trying to make this a folder where a user can drop files to share with somebody; if someone copies or moves files into it, I'd like the clock to start ticking at that point. However, the last modified date and creation date of a file are not updated unless someone actually modifies the file, and the last access time is updated too frequently; it seems that just opening a directory in Windows Explorer will update the last access time. Does anyone know of a solution to this? I'm thinking that cataloguing file hashes on a daily basis and then expiring files whose hashes are older than a certain date might work, but taking hashes of files can be time-consuming. Any ideas would be greatly appreciated! Note: I've already looked at quite a lot of answers on here, and have looked into File Server Resource Manager, PowerShell scripts, batch scripts, etc. They still use the last access time, last modified time or creation time, which, as described, do not fit the above needs.

    Read the article

  • How to disable my laptop's defective keyboard?

    - by Kolink
    I have an Alienware M17x R2, and after a couple of years of use I started having problems with the keyboard: intermittently, and without warning or apparent cause (even if I'm not touching the computer), the S key (and more recently the D key) will start firing signals incessantly until I hit the offending key. I've been using a USB keyboard, which is conveniently exactly the right size to sit over the built-in keyboard without touching any buttons. However, even as I type, the defective keyboard keeps firing its stream of signals. I have contacted Dell about this, twice in fact. The first time, I was bluntly told that they don't ship replacement keyboards. The second time, I was told that they do, but that they were out of stock and would contact me when they were back in stock; they haven't contacted me since. I can't seem to work out which device to disable in Device Manager. Here's a screenshot of what I have: none of the keyboard devices have "Disable" in their context menus, only "Uninstall"... so you'll understand if I'm hesitant to try anything myself. Any advice on how to fix this?

    Read the article

  • Identifying Exchange 2010 regular process that is walking the mailbox database

    - by toongeneral
    I have an Exchange 2010 server running on a SAN-backed platform. The platform does block-level backups on a snapshot/incremental basis, capturing only changed data. I was surprised to see a regular period of time in which data changes were happening at a high, sustained rate. Due to the way this system works, that can lead to 1.2 TB of stored data per month. The regularity implied a scheduled task, but it is not on a fixed interval: it recurs approximately every 26-32 hours. The disks were performing read operations at ~5 MB/s and write operations at ~4.5 MB/s, for a period of 3-4 hours, and the total written data was ~55-60 GB. Reading on TechNet, I am wondering if the following is the cause: http://blogs.technet.com/b/exchange/archive/2011/12/14/database-maintenance-in-exchange-2010.aspx#checksumming. The somewhat restrictive thing is that the process described there only happens at most once every 24 hours. I was able to investigate while it was running, and found the following:

        - the process is store.exe
        - it is working on the mailbox database files
        - while running, it is generating .log files (in the mailbox database folder) consistent with database changes
        - the mailbox database is ~60 GB in size, which fits with the total data changed on each iteration

    I have currently switched to a fixed maintenance window as a test. It's not clear whether this is the cause; the symptoms fit, but are not conclusive. Does anyone have any suggestions for additional troubleshooting?

    Read the article
