Search Results

Search found 17944 results on 718 pages for 'size'.


  • Windows 7 - Ubuntu 10.10 Dual Boot Partitioning Recommendation for HP Laptop OEM

    - by Denja
    Hi Linux community, after being temporarily impressed with the new Windows 7 and using it intensely, I find myself struggling with a slow and buggy Windows OS once again. It's time to go the Ubuntu/Linux way for a better and faster tomorrow. Unfortunately, in my country most users and businesses run Windows-based systems. As a computer technician I want to learn and use both systems, and possibly introduce new users to more affordable Linux-based systems. For now I want to create a dual-boot (or even triple-boot) layout on my laptop. Here's the layout in use now:
    * (C:) Windows 7 system partition, NTFS - 284.89 GB (Primary, Boot, Pagefile, Dump)
    * HP_TOOLS system partition, FAT32 - 99 MB (Primary)
    * (D:) RECOVERY partition, NTFS - 12.90 GB (Primary)
    * SYSTEM partition, NTFS - 199 MB (Primary)
    Here's the layout I want to make:
    * (C:) Windows 7 system partition, NTFS - 60 GB (Primary) (sda1)
    * (D:) Windows data partition (user files), NTFS - 60 GB (Extended or Primary) (sda2); to be shared with Linux
    * Linux root, ext4 - 10 GB (Primary) (sda3)
    * Linux swap - RAM size, 3 GB (sda4)
    * Linux home, ext4 - 164.9 GB (Extended) (sda5)
    Question 1: Is the layout I want to make correct as far as primary and extended partitions are concerned?
    Question 2: Can I definitely get rid of the Windows SYSTEM (boot loader) partition?
    Question 3: Will it be a problem if I get rid of the HP_TOOLS and RECOVERY partitions?
    Question 4: Based on my layout, what would you suggest for a triple-boot layout with OS X or Puppy Linux?
    Thank you in advance for your advice and suggestions.
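
    For reference, a rough sketch of how a layout like this could be created with parted once the disk has been reorganized down to the two Windows partitions. The device name, offsets and sizes are assumptions for illustration; on an MBR (msdos) disk only four primary partitions fit, so swap and home would sit inside an extended partition as logical partitions:

        # Hedged sketch only - /dev/sda and all offsets are examples; adjust to the
        # real disk before running anything. Assumes sda1/sda2 are the two Windows
        # partitions and the rest of the disk is free.
        sudo parted /dev/sda -- mkpart primary ext4 120GiB 130GiB          # sda3: Linux /
        sudo parted /dev/sda -- mkpart extended 130GiB 100%                # sda4: extended container
        sudo parted /dev/sda -- mkpart logical linux-swap 131GiB 134GiB    # sda5: swap (~RAM size)
        sudo parted /dev/sda -- mkpart logical ext4 134GiB 100%            # sda6: /home
        sudo mkfs.ext4 /dev/sda3 && sudo mkswap /dev/sda5 && sudo mkfs.ext4 /dev/sda6
        sudo parted /dev/sda -- print                                      # verify the table

    The Ubuntu installer's manual partitioning step can build the same layout interactively, which is usually the safer route for a first dual-boot setup.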

    Read the article

  • Mysqldump causes "Too many connections"

    - by vbachev
    A scheduled backup using mysqldump on one of our databases is causing "Too many connections" errors. The database has both InnoDB and MyISAM tables and is around 500 MB in size. The "Too many connections" error appears for about 2-3 minutes. We understand that mysqldump locks the tables, causing all other queries and connections to pile up and jam the MySQL server. We need frequent backups, and we cannot afford server downtime or putting the websites into maintenance mode while backing up. Our websites are global and traffic is high all the time, so it's hard to find a quiet moment for backups. How can we avoid downtime during backups? Is there a way to use mysqldump so that it does not lock all tables at the same time? Is there an alternative to backing up with mysqldump?
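
    One direction commonly suggested for this situation, sketched below with placeholder credentials: for the InnoDB tables, --single-transaction dumps from a consistent snapshot without taking a global lock, while MyISAM tables still need a read lock for the duration of their dump (the usual workarounds there are converting them to InnoDB, or dumping from a replication slave so the primary never sees the backup load at all).

        # Hedged sketch - DBUSER and DBNAME are placeholders.
        # --single-transaction: consistent InnoDB snapshot, no global read lock.
        # --quick: stream rows instead of buffering whole tables in memory.
        mysqldump -u DBUSER -p --single-transaction --quick --routines --triggers \
          DBNAME | gzip > /backups/DBNAME-$(date +%F).sql.gz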

    Read the article

  • Document-oriented vs Column-oriented database fit

    - by user1007922
    I have a data-intensive application that desperately needs a database make-over. The general data model: There are records with RIDs, grouped together by group IDs (GID). The records have arbitrary data fields (maybe 5-15), with a few of them mandatory and the rest optional, and thus sparse. The general use model: There are LOTS and LOTS of writes. Millions to billions of records are stored. Very often they are associated with new GIDs, but sometimes they are associated with existing GIDs. There aren't as many reads, but when they happen, they need to be pretty fast, or at least constant speed regardless of the database size. And when the reads happen, they will need to retrieve all the records/RIDs with a certain GID. I don't have a need to search by the record field values. Primarily, I will need to query by the GID and maybe the RID. What database implementation should I use? I did some initial research between document-oriented and column-oriented databases and it seems the document-oriented ones are a good fit, model-wise. I could store all the records together under the same document key using the GID. But I don't really have any use for their ability to search the document contents itself. I like the simplicity and scalability of column-oriented databases like Cassandra, but how should I model my data in this paradigm for optimal performance? Should my key be the GID, and should I create a column for each record/RID? (There may be thousands or hundreds of thousands of records in a group/GID.) Or should my key be the RID and ensure each row has a column for the GID value? What results in faster writes and reads under this model?
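
    For the Cassandra option, a hedged sketch of the wide-partition shape described above (keyspace, table and column names are invented for illustration): keying on GID and clustering on RID keeps all of a group's records in one partition, so "fetch everything for this GID" is a single-partition read, whereas keying on RID would force a secondary index or a scan to answer that same query.

        # Hedged sketch (names invented): one partition per GID, one row per RID.
        cqlsh -e "
          CREATE KEYSPACE IF NOT EXISTS demo
            WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
          CREATE TABLE IF NOT EXISTS demo.records (
            gid    text,
            rid    text,
            fields map<text, text>,   -- sparse optional fields
            PRIMARY KEY (gid, rid)
          );
          INSERT INTO demo.records (gid, rid, fields)
          VALUES ('group-1', 'rec-42', {'mandatory1': 'x', 'optional7': 'y'});
          SELECT * FROM demo.records WHERE gid = 'group-1';"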

    Read the article

  • Add unallocated space to lvm

    - by Newbie
    I shrunk my windows partition and now have 10 GB of unallocated space that I now want to use to grow my / partition which is an ext4 in an lvm. I'm running Fedora 12. I ran system-config-lvm but the "Initialize Entry" button is greyed out. The unallocated space is not adjacent to the lvm but I cannot move the partitions in GParted like I was able to with ext3 in the past. I cannot create a new partition either as it says it cannot have more than 4 primary partitions. I don't see any option to create an extended partition. So my question is, how do I add that unallocated space to the lvm so I can grow the size of the / partition? I don't want to reinstall Fedora.
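
    For reference, a hedged sketch of the usual LVM steps once the freed space has been turned into a partition of some kind (the volume group and logical volume names below are typical Fedora defaults and may differ on this system; LVM itself does not care that the new physical volume is not adjacent to the existing one):

        # Hedged sketch - check the real names first with "sudo vgs" and "sudo lvs".
        sudo pvcreate /dev/sda5                         # the new ~10 GB partition as a PV
        sudo vgextend VolGroup00 /dev/sda5              # add it to the volume group
        sudo lvextend -L +10G /dev/VolGroup00/LogVol00  # grow the / logical volume
        sudo resize2fs /dev/VolGroup00/LogVol00         # grow the ext4 filesystem online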

    Read the article

  • What groupware/project-management apps (preferably self-hosted webapp) do you recommend for a small dev shop?

    - by HedgeMage
    I run a small Drupal consulting shop and we've been trying different groupware solutions for what seems like ages, yet nothing we've found seems to be a good fit. We don't need CRM-overkill such as SugarCRM offers -- it's just too much for our small size. We do need:
    * git integration (at a minimum, an easy way to associate commits with issues)
    * time tracking in configurable or 15m increments
    * per-project issue tracking
    * billing (incl. recurring billing for support contracts, etc.)
    * some sort of per-project notes/wiki for things like login credentials, client contact info, etc.
    * contact logging (Client foo called at 2:20pm and asked to add bar to the spec, signed addendum with pricing due to client NLT CoB today, to be returned by CoB tomorrow)
    Open source solutions are greatly preferred to closed ones. Most of all, it should be very efficient to use. Several solutions just fell out of use here because they required too many clicks for simple, frequent tasks like logging time spent on an issue or noting a call from a client. It shouldn't take 20 minutes to make a note. Edit: I almost forgot to mention: we're a mixed Linux/Mac shop with no Windows users.

    Read the article

  • Where is the Mac Divx Web Player 7 cache folder?

    - by user30710
    Until recently, I was using DivX Web Player 1.4.2 because it seemed to be the least buggy. It saved files in users/xxxxxx/movies/divx movies/temporary added files and deleted them when the cache limit was reached. Now with version 7, it's still saving them somewhere, because I can watch my HD space go down, but I can't find the files. And it's not respecting the cache size limit (mine is 4 GB). The only way to free up this space is to restart the Mac. I'm running 10.6.8 and Chrome. I've looked everywhere for the folder manually. Where is it?
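
    Since the space only comes back after a restart, one hedged way to investigate is to look for files that have already been deleted but are still held open by the plugin or browser process, and to size up the likely cache locations (the paths below are guesses, not a documented DivX location):

        # Hedged sketch - deleted-but-open files are a classic cause of disk space
        # that only frees up after a reboot.
        sudo lsof +L1 | grep -i -e divx -e chrome                   # open files with link count 0
        du -sm ~/Library/Caches/* ~/Movies/* 2>/dev/null | sort -n | tail -20   # sizes in MB
        find ~/Library -maxdepth 4 -iname '*divx*' 2>/dev/null      # any DivX support folders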

    Read the article

  • Oracle TechCast Live: "MySQL 5.5 Does Windows"

    - by bertrand.matthelie(at)oracle.com
    @font-face { font-family: "Arial"; }@font-face { font-family: "Cambria"; }p.MsoNormal, li.MsoNormal, div.MsoNormal { margin: 0cm 0cm 0.0001pt; font-size: 12pt; font-family: "Times New Roman"; }a:link, span.MsoHyperlink { color: blue; text-decoration: underline; }a:visited, span.MsoHyperlinkFollowed { color: purple; text-decoration: underline; }div.Section1 { page: Section1; } Interested in MySQL on Windows? Join our next Oracle TechCast Live on Tuesday January 11th at 10.00 am PT! MySQL Product Manager Mike Frank will then tell you all about the major MySQL 5.5 performance gains on Windows.   In case you're not familiar with the Oracle TechCast Live events, they're akin to online "fireside chats" with experts about new tools, technologies and trends in application development. They also include live Q&A sessions, and you can ask questions via Twitter & Facebook. You can check out a few archived sessions here.   Get ready to ask your questions to Mike!   We hope many of you will join.

    Read the article

  • Good software to take a blog and format it for printing

    - by vaccano
    I have much of my family's doings on a Blogspot blog. I would like to print this out as a nice book. I plan to just send the actual printing to Costco as photo prints, but I need some kind of software to reformat the posts into printable, paper-size sheets. I would like it to retain my blog's background and let me adjust how the pictures fit on the page. Now, I could do all of this with MS Publisher or Word, but I am curious whether there is other software out there that does this nicely and easily. Anyone know of some cool software that will do this for me? Free is nice, but I am not above paying a modest fee for cool software. I would prefer to avoid another website that will charge for the printing as well as the converting.

    Read the article

  • trouble backing up large mysql database

    - by Patrick
    I have a WordPress MU database with something like 10,000+ tables for the various users' blogs. I need to upgrade WordPress MU to the newest version, but want to back up the DB beforehand. phpMyAdmin fails to even load the page when I click export. I've tried going onto the server (Windows) and using the DOS command line: mysqldump -u USERNAME -p PASSWORD > BACKUP.sql but it hangs for a minute and gives me the error: error 23: out of resources when opening file '.\USERNAME\wp_1037_links.MYD' (Errorcode: 24) when using LOCK TABLES. What am I doing wrong, or what should I be doing? Is phpMyAdmin right for something this size? Is there a better way of doing this than the two methods I tried? Note that this is not my site, so any suggestions as to the setup of the DB I'll have to run by the owner. I'm just here for WP-related crap; this is kind of out of scope for what I was brought on to do.
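
    Errorcode 24 is the operating system telling mysqld it has run out of open file descriptors, which is easy to hit when LOCK TABLES touches 10,000+ MyISAM tables at once. A hedged sketch of the usual workarounds (option names are standard MySQL settings; user and database names are placeholders reused from the question):

        # Hedged sketch - check the limit mysqld was started with:
        mysql -u USERNAME -p -e "SHOW VARIABLES LIKE 'open_files_limit'"
        # Either raise open_files_limit under [mysqld] in my.ini/my.cnf and restart,
        # or avoid the big locking pass entirely (at the cost of a less consistent dump):
        mysqldump -u USERNAME -p --skip-lock-tables --quick DATABASENAME > BACKUP.sql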

    Read the article

  • Visual Studio 2010 Winform Application &ndash; Unable to resolve custom assemblies?

    - by Harish Ranganathan
    Recently I came across a problem where one of my friends had a tough time getting rid of an assembly reference error. Despite adding a reference to the assembly, referencing it in code kept producing the "The type or namespace name 'ASSEMBLYNAME' could not be found" error. This was a migration project, and owing to the above error it was throwing another 100 errors. We tried adding a reference to the assembly in other projects, and it would not even resolve the namespace while typing it out in the using section. Upon digging further into the warnings, they indicated something to do with the .NET Framework being targeted, i.e. 4.0. My suspicion grew, since the target framework was 4.0 and the assembly should have been recognized. Then, when we checked "Project - <APPNAME> Properties...", the issue was with the default target framework, which is ".NET Framework 4 Client Profile". By default, Visual Studio 2010 creates Windows Forms/WPF apps with the target framework set to .NET Framework 4 Client Profile. This is to minimize the framework size required to be bundled along with the app. The Client Profile is a feature introduced in .NET 3.5 SP1 that allows users to package a minified version of the .NET Framework that doesn't include stuff such as ASP.NET, server programming assemblies and a few other assemblies which are typically never used in desktop applications. Since the .NET Framework Client Profile is a minified version, it doesn't contain all the assemblies related to web services and other deprecated assemblies. However, this application is a migration app and needed some of those service references, and hence couldn't compile. Once we changed the target framework to .NET Framework 4 instead of the default Client Profile, the application compiled. Here is a link to a very nice article that explains the features of the .NET Framework 4 Client Profile, the assemblies supported by default, etc.: http://blogs.msdn.com/b/jgoldb/archive/2010/04/12/what-s-new-in-net-framework-4-client-profile-rtm.aspx Cheers !!

    Read the article

  • Finder Sidebar Icons - How do I duplicate?

    - by Wilco
    I've noticed that some system directories, when dragged to the Finder's sidebar, utilize special small-scale icons not visible in any other place. Even when looking at one of these folders in a Finder window using the smallest possible icon size, these "special" icons don't appear (so it's not just the small version of the folder's icon). So my question is, where is this information stored? If I wanted to duplicate this behavior for an arbitrary folder, where would I need to look? I like to replace my home directory with a symlink to a location on another partition, but when I do this, I lose this sidebar icon behavior. I would love to get this back if I can.

    Read the article

  • C string question

    - by user208454
    I am writing a simple C program which reverses a string, taking the string from argv[1]. Here is the code:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        char* flip_string(char *string){
            int i = strlen(string);
            int j = 0;
            // Doesn't really matter, all I wanted was the same size string for temp.
            char* temp = string;
            puts("This is the original string");
            puts(string);
            puts("This is the \"temp\" string");
            puts(temp);
            for(i; i>=0; i--){
                temp[j] = string[i];
                if (j <= strlen(string)) {
                    j++;
                }
            }
            return(temp);
        }

        int main(int argc, char *argv[]){
            puts(flip_string(argv[1]));
            printf("This is the end of the program\n");
        }

    That's basically it. The program compiles and everything, but does not return the temp string in the end (just blank space). In the beginning it prints temp fine, when it's equal to string. Furthermore, if I do a character-by-character printf of temp in the for loop, the correct temp string is printed, i.e. the string reversed. But when I try to print it to standard out (after the for loop, or in main) nothing happens; only blank space is printed.

    Read the article

  • I can't run uwsgi as normal user

    - by atomAltera
    I want to run the uwsgi server as the www user, but if I write:

        uwsgi --socket $SOCKET --chmod-socket 666 --pidfile $PIDFILE --daemonize $LOGFILE --chdir $CHDIR --pp $PYTHONPATH --module main --post-buffering 8192 --workers 1 --threads 10 --uid www --gid www

    a socket creation error occurs. Log:

        *** Starting uWSGI 1.4.1 (64bit) on [Mon Dec 10 22:15:23 2012] ***
        compiled with version: 4.4.5 on 17 November 2012 23:31:14
        os: Linux-2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012
        nodename: autoblog
        machine: x86_64
        clock source: unix
        pcre jit disabled
        detected number of CPU cores: 2
        current working directory: /
        writing pidfile to /tmp/uwsgi_mysite.pid
        detected binary path: /usr/local/bin/uwsgi
        setgid() to 1002
        set additional group 1004 (files)
        setuid() to 1002
        *** WARNING: you are running uWSGI without its master process manager ***
        your memory page size is 4096 bytes
        detected max file descriptor number: 1024
        lock engine: pthread robust mutexes
        unlink(): Operation not permitted [core/socket.c line 109]
        bind(): Address already in use [core/socket.c line 141]
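
    The last two log lines suggest a stale socket file left behind by an earlier run as root: after setuid() to www the process can no longer unlink() it, and bind() then fails because the path is still taken. A hedged sketch of the usual cleanup (paths are examples for whatever $SOCKET expands to on this system):

        # Hedged sketch - remove the root-owned socket and give www a directory it owns.
        sudo ls -l "$SOCKET"                        # confirm who owns the stale socket
        sudo rm -f "$SOCKET"                        # remove it once the old instance is stopped
        sudo install -d -o www -g www /var/run/uwsgi
        uwsgi --socket /var/run/uwsgi/mysite.sock --chmod-socket 666 --uid www --gid www \
              --chdir "$CHDIR" --pp "$PYTHONPATH" --module main --post-buffering 8192 \
              --workers 1 --threads 10 --daemonize "$LOGFILE" --pidfile "$PIDFILE" --vacuum

    The --vacuum option asks uWSGI to remove its own socket and pidfile on exit, which helps avoid the same situation on the next restart.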

    Read the article

  • Reading from a staging 2D texture array in DirectX10

    - by Don Reba
    I have a DX10 program, where I create an array of 3 16x16 textures, then map, read, and unmap each subresource in turn. I use a single mip level, set resource usage to staging and CPU access to read. Now, here is the problem:

    * Subresource 0 contains 1024 bytes, pitch 64, as expected.
    * Subresource 1 contains 512 bytes, pitch 64.
    * Subresource 2 contains 256 bytes, pitch 64.

    I expect all three to be the same size. Debugging output is enabled, but not reporting any warnings or errors. Am I missing something, or might this be some sort of driver issue? Here is the code. The language is Nemerle, but C# and C++ would look almost the same. I have looked through the generated code, and am fairly confident the problem is not language-related.

        def cpuTexture = Texture2D
            ( device
            , Texture2DDescription() <-
              {
                Width             = 16;
                Height            = 16;
                MipLevels         = 1;
                ArraySize         = 3;
                Format            = Format.R32_Float;
                Usage             = ResourceUsage.Staging;
                CpuAccessFlags    = CpuAccessFlags.Read;
                SampleDescription = SampleDescription(count = 1, quality = 0);
              }
            );

        foreach (subresource in [0 .. 2])
        {
            def data = cpuTexture.Map(subresource, MapMode.Read, MapFlags.None);
            Console.WriteLine($"subresource $subresource");
            Console.WriteLine($"length = $(data.Data.Length)");
            Console.WriteLine($"pitch = $(data.Pitch)");
            cpuTexture.Unmap(subresource);
        }

    Read the article

  • SQL User Group Events coming - Cambridge, Leeds, Manchester and Edinburgh

    - by tonyrogerson
    Neil Hambly and I are presenting next week in Cambridge: Neil will be showing us how to use the tools at hand to determine the current activity on your database servers, and I'll be doing a talk around disaster recovery and high availability and the options we have at hand. The user group is growing in size and spread; there is a Southampton event planned for the 9th Dec - make sure you keep your eyes peeled for more details - the best place is the UK SQL Server User Group LinkedIn area. Want removing from this email list? Then just reply with "remove please" on the subject line.
    * Cambridge SQL UG - 25th Nov, evening meeting (more info and register): Neil Hambly on determining the current activity of your database servers, a product demo from Red-Gate, and Tony Rogerson on HA/DR/scalability (backup/recovery options - clustering, mirroring, log shipping; scaling considerations etc.).
    * Leeds SQL UG - 8th Dec, evening meeting (more info and register): Neil Hambly will be talking about indexed views and computed columns for performance; Tony Rogerson will be showing some advanced T-SQL techniques.
    * Manchester SQL UG - 9th Dec, evening meeting (more info and register): end of year wrap up, networking, drinks, some discussions - more info to follow soon.
    * Edinburgh SQL UG - 9th Dec, evening meeting (more info and register): Satya Jayanty will give an X factor for a DBA's life and Tony Rogerson will talk about SQL Server internals.
    Many thanks,
    Tony Rogerson, SQL Server MVP
    UK SQL Server User Group
    http://sqlserverfaq.com

    Read the article

  • How can I tell if my Amazon Windows instance was an SQL Server AMI?

    - by Aligma
    I want to purchase some reserved instances, because I have several instances already created and running 24 hours a day. When I go to purchase a Windows instance, I can see 3 options:
    * Windows
    * Windows with SQL Server Standard
    * Windows with SQL Server Web
    I don't know which of these was used to create the original instance. Is there a way I can find out? My assumptions: the instance type is important because, as far as I understand, the way to purchase a reserved instance is to first have a running instance and then purchase a matching reserved instance. The reserved instance is not itself a new machine, but a kind of contract between you and Amazon to pay for an instance for 1 or 3 years at a discounted rate. The contracted, reserved instance will "offset" one matching running instance where they have the same size and platform. Please feel free to correct me if these assumptions are incorrect.
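
    One hedged way to check from inside the instance itself is the EC2 instance identity document: its billingProducts field reflects the license the instance was launched with, and the AMI ID shows which image it came from (interpreting the product codes still means checking Amazon's documentation or asking support):

        # Hedged sketch - run from inside the instance; curl, wget or PowerShell all
        # work against the metadata service.
        curl -s http://169.254.169.254/latest/dynamic/instance-identity/document
        curl -s http://169.254.169.254/latest/meta-data/ami-id      # the AMI it was launched from

    Looking that AMI ID up in the console (or with ec2-describe-images) usually shows whether it was a plain Windows image or one of the Windows-with-SQL-Server images.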

    Read the article

  • Game Changer Appliance for SMBs Powered by Oracle Linux

    - by Zeynep Koch
    In the November 28th CRN article "Review: Thumbs-Up On Oracle Database Appliance", Edward F. Moltzen mentions that "The Test Center likes this appliance (Oracle Database Appliance), for the performance and for the strong security offered by the underlying Oracle Linux in the box. It's more than a solid offering for the SMB space; it's potentially a game-changer as data and security needs race to keep up with the oncoming generations of technology." The Oracle Database Appliance is a new way to take advantage of the world's most popular database—Oracle Database 11g—in a single, easy-to-deploy and manage system. It's a complete package of software, server, storage, and network that's engineered for simplicity; saving time and money by simplifying deployment, maintenance, and support of database workloads. All hardware and software components are supported by a single vendor—Oracle—and offer customers unique pay-as-you-grow software licensing to quickly scale from 2 processor cores to 24 processor cores without incurring the costs and downtime usually associated with hardware upgrades. It is:
    * Simple—Complete plug-and-go hardware and software
    * Reliable—Advanced management features and single-vendor support
    * Affordable—Pay-as-you-grow platform for small database consolidation
    The Oracle Database Appliance is a 4U rack-mountable system pre-installed with Oracle Linux and Oracle appliance manager software. Redundancy is built into all components and the Oracle appliance manager software reduces the risk and complexity of deploying highly available databases. It's perfect for consolidating OLTP and data warehousing databases up to 4 terabytes in size, making it ideal for midsize companies or departmental systems.
    Read more about Oracle's Database Appliance
    Read more about Oracle Linux

    Read the article

  • Internal only DNS?

    - by ethrbunny
    We are running a research project with hundreds (becoming thousands) of remote hosts. Each host is running OpenVPN so we can find them regardless of what their 'assigned' IP is. We have been using DynDNS to manage this, but we're running into some issues with them (the API is weak/nonexistent, size constraints, etc.). I'm looking into setting up an internal-only domain (e.g. "our.stuff", so a host would be "site1.our.stuff" or "site3.net4.our.stuff") that I can configure with the info from the OpenVPN server. Since we'd have to point our internal DNS to this machine, it would have to be able to route/cache requests for 'external' machines as well. I've been trying to read about 'internal DNS', 'private', 'non-routable', but I'm not having much success. Summary: I need info on an internal, caching DNS server. Something open source would be ideal. If not, I can script out changes to .conf files, etc.
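
    A hedged sketch of the lightweight option: dnsmasq answering the internal zone from a hosts-style file that a script regenerates from the OpenVPN server's client list, while everything else is forwarded upstream and cached. The domain, file path and upstream resolvers below are placeholders:

        # Hedged sketch - answer *.our.stuff locally, forward and cache everything else.
        # /etc/hosts.openvpn is an invented file a script would rebuild from the
        # OpenVPN status file, e.g. lines like "10.8.0.6  site1.our.stuff".
        dnsmasq --no-daemon \
                --local=/our.stuff/ \
                --addn-hosts=/etc/hosts.openvpn \
                --server=8.8.8.8 --server=8.8.4.4 \
                --cache-size=10000

    Sending dnsmasq a HUP signal makes it re-read the addn-hosts file after each update, so the script only has to rewrite the file and signal the daemon. A full BIND or PowerDNS setup does the same job with more moving parts.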

    Read the article

  • partition alignment on fresh windows 2003 ent server

    - by Datapimp23
    Hi, I have this server which has its physical disks in RAID 5, controlled by a 3com RAID controller. The size of the stripe unit is unknown for the moment (I can check tomorrow in the office). I need to install Windows Server 2003 Enterprise and create 2 partitions (OS, data). I'd like to create the partitions, properly aligned, before installing Windows Server. I have the newest version of GParted on a disc, but I have no clue if this is the right tool. Can someone point me in the right direction? Thanks
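
    Windows Server 2003 setup itself creates partitions aligned to legacy cylinder boundaries, so the usual advice is to pre-create them with diskpart (the version from 2003 SP1 or later, or run from a WinPE/Server 2008 disc) using an align value that matches the controller's stripe unit. A hedged diskpart sketch with example numbers:

        rem Hedged sketch - align= is in KB; use the real stripe size once known
        rem (64 and 128 are common defaults). Sizes are examples only.
        select disk 0
        create partition primary size=40960 align=64
        create partition primary align=64
        list partition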

    Read the article

  • java.lang.OutOfMemoryError on ec2 machine

    - by vinchan
    I have a Java app on a large instance that will spawn up to 800 threads. I can run the application fine as user "root" but not as another user which I created. I get the deadly

        java.lang.OutOfMemoryError: unable to create new native thread
            at java.lang.Thread.start0(Native Method)
            at java.lang.Thread.start(Thread.java:657)
            at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:943)
            at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1325)

    nightmare. I have already tried increasing the stack size in limits.conf, to no avail. Please help me out. What is different here between root and the other user?
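
    "Unable to create new native thread" with ~800 threads usually points at a per-user process/thread cap or at thread stacks eating address space, not at the Java heap; root is typically exempt from the per-user cap, which would explain the difference between the two accounts. A hedged sketch of what to compare and adjust (user name and numbers are examples; some distributions also ship a limits.d file that caps nproc at 1024 for everyone but root):

        # Hedged sketch - compare the limits the JVM actually runs under.
        sudo -u appuser bash -c 'ulimit -u; ulimit -s; ulimit -v'   # max user processes, stack KB, vmem
        ulimit -u                                                   # same value in the root shell

        # /etc/security/limits.conf (or a file under limits.d/):
        #   appuser  soft  nproc  4096
        #   appuser  hard  nproc  4096

        # Smaller per-thread stacks also buy headroom for many threads:
        java -Xss256k -jar app.jar    # app.jar and 256k are placeholders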

    Read the article

  • Unable to install updates on 14.04 LTS

    - by Mike
    I have been getting update notifications for a few weeks now, but whenever I attempt to install them I get this message:

        The upgrade needs a total of 74.6 M free space on disk '/boot'. Please free at least an additional 29.8 M of disk space on '/boot'. Empty your trash and remove temporary packages of former installations using 'sudo apt-get clean'.

    First of all, I don't have permission to access /boot (I don't know why, as it's a standalone machine and I'm the only user). Secondly, I emptied the trash. Thirdly, I launched a terminal and entered sudo apt-get clean. I was asked for a sudo password and entered my system password, then re-entered sudo apt-get clean. The cursor stopped blinking - I assumed it was doing its "thing". I let it go for about 10 minutes, then exited the terminal. I tried to install the updates but just got the same message. Is there something I'm ignorant of?

    This is the output I get from the command df -h, and I have no idea what it all means! (@Tim: what's bash, and why am I denied access to fstab and /boot?)

        mike@mike-MS-7800:~$ /etc/fstab
        bash: /etc/fstab: Permission denied
        mike@mike-MS-7800:~$ df -h
        Filesystem                   Size  Used Avail Use% Mounted on
        /dev/mapper/ubuntu--vg-root  913G   11G  856G   2% /
        none                         4.0K     0  4.0K   0% /sys/fs/cgroup
        udev                         1.7G  4.0K  1.7G   1% /dev
        tmpfs                        335M  1.6M  333M   1% /run
        none                         5.0M  4.0K  5.0M   1% /run/lock
        none                         1.7G   14M  1.7G   1% /run/shm
        none                         100M   52K  100M   1% /run/user
        /dev/sda2                    237M  182M   43M  81% /boot
        /dev/sda1                    487M  3.4M  483M   1% /boot/efi
        /dev/sr1                      31M   31M     0 100% /media/mike/Optus Mobile
        mike@mike-MS-7800:~$

    I ran this from the terminal and all is now working:

        dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge
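
    For anyone hitting the same wall, a less dense route to the same result is sketched below; the point is that /boot fills up with old kernel images, and removing the ones no longer in use frees the space the updater is asking for (double-check that the running kernel is not in the removal list before purging anything):

        # Hedged sketch
        uname -r                              # the kernel currently running - keep this one
        dpkg -l 'linux-image-*' | grep ^ii    # every installed kernel image
        sudo apt-get autoremove --purge       # on 14.04 this usually removes superseded kernels
        df -h /boot                           # confirm the space came back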

    Read the article

  • INNODB mysql. Plugin disabled

    - by alexcunn
    When I start up MySQL on my Ubuntu server I get this message:

        121122 17:39:37 [Note] Plugin 'FEDERATED' is disabled.
        121122 17:39:37 InnoDB: The InnoDB memory heap is disabled
        121122 17:39:37 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121122 17:39:37 InnoDB: Compressed tables use zlib 1.2.3.4
        121122 17:39:37 InnoDB: Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        121122 17:39:37 InnoDB: Completed initialization of buffer pool
        121122 17:39:37 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        121122 17:39:37 [ERROR] Plugin 'InnoDB' init function returned error.
        121122 17:39:37 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        121122 17:39:37 [ERROR] Unknown/unsupported storage engine: InnoDB
        121122 17:39:37 [ERROR] Aborting
        121122 17:39:37 [Note] mysqld: Shutdown complete

    A few times I have gotten a message saying that the plugin is disabled. I use Webmin to configure it. Could that be a problem?
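
    The key line is the failed mmap of about 137 MB: the default 128 MB InnoDB buffer pool does not fit in the memory the server (often a small VPS) has left, so the InnoDB plugin refuses to start and every InnoDB table becomes an "unknown storage engine". A hedged sketch of the usual checks and the my.cnf change (64M is an example value, not a recommendation):

        # Hedged sketch - confirm free memory, then shrink the buffer pool or add RAM/swap.
        free -m                                                   # how much RAM and swap the box really has
        grep -r innodb_buffer_pool_size /etc/mysql/ 2>/dev/null   # where the current value lives, if set

        # in /etc/mysql/my.cnf under [mysqld]:
        #   innodb_buffer_pool_size = 64M
        sudo service mysql restart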

    Read the article

  • Machine freezes when configuring dual display on Ubuntu 9.10 (karmic)

    - by sa125
    Hi - I'm trying to configure dual displays on an Ubuntu 9.10 machine. When I connect the two screens (one VGA input, the other DVI), I see a mirrored display. I opened Display Settings and unchecked the 'mirror screens' box, and when I clicked apply the machine froze and I had to force-restart it. This happened repeatedly, about 6 times, until I gave up. How do I set it up to boot normally with dual displays working? Thanks.

    Edit: I thought it might be related to the virtual screen size, so I tried to edit /etc/X11/xorg.conf to add:

        SubSection "Display"
            Virtual 2560 1024
        EndSubSection

    But that didn't do much. Each screen works fine on its own, and together with a mirrored display.
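
    As a hedged alternative to hand-editing xorg.conf, the same side-by-side arrangement can usually be attempted from a terminal with xrandr, which reports an error instead of freezing the session if the combined size exceeds the driver's maximum. The output names below are examples; running xrandr with no arguments prints the real ones:

        # Hedged sketch - VGA1/DVI1 are placeholders; yours may be VGA-0, DVI-0, TMDS-1, etc.
        xrandr                                        # list outputs and supported modes
        xrandr --output VGA1 --auto \
               --output DVI1 --auto --right-of VGA1   # extend instead of mirroring
        # If xrandr reports the screen size is too large, the Virtual line above
        # (inside Section "Screen" -> SubSection "Display") needs to be at least the
        # combined width by the tallest height, e.g. Virtual 2560 1024.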

    Read the article

  • Why does 1080p through a VGA cable fit my HDTV but is oversized when through an HDMI cable?

    - by GraemeF
    I have put together a new PC with a XFX GeForce GTX 260 graphics card and have it connected to my HDTV. First, I used an old VGA cable with a DVI to VGA adapter and plugged it in to my HDTV's VGA port. Running at 1920x1080 it fit the screen perfectly. Now, to avoid running another cable across the room, I have connected it with a DVI to HDMI cable to my TV's HDMI port, and the desktop at 1920x1080 is cropped by the edge of the screen. I have "fixed" the cropping by using NVIDIA's "Adjust desktop size and position" tool, which created a screen resolution of 1814x1022 to fit the screen, but this is no longer the TV's native resolution and confuses some software (e.g. WoW). Why does VGA work as expected, but HDMI is scaled up? Can it be avoided?

    Read the article
