Search Results

Search found 7606 results on 305 pages for 'raam dev'.

  • What are working xorg.conf settings for using a Matrox TripleHead2Go @ 5040x1050?

    - by Brendan Abel
    I'm trying to configure xorg.conf to correctly set the resolution of my screens. I'm using a Matrox TripleHead2Go, so the monitor is a single 5040x1050 screen. Unfortunately, it's being incorrectly set to 3840x1024. Here is my xorg.conf:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 260.19.06 (buildd@yellow) Mon Oct 4 15:59:51 UTC 2010

        Section "ServerLayout"
            Identifier  "Layout0"
            Screen 0    "Screen0" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option      "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier  "Mouse0"
            Driver      "mouse"
            Option      "Protocol" "auto"
            Option      "Device" "/dev/psaux"
            Option      "Emulate3Buttons" "no"
            Option      "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier  "Keyboard0"
            Driver      "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier  "Monitor0"
            VendorName  "Unknown"
            ModelName   "Matrox"
            HorizSync   31.5 - 80.0
            VertRefresh 57.0 - 75.0
            #Option     "DPMS"
            Modeline "5040x1050@60" 451.27 5040 5072 6784 6816 1050 1071 1081 1103
            #Modeline "5040x1050@59" 441.28 5040 5072 6744 6776 1050 1071 1081 1103
            #Modeline "5040x1050@57" 421.62 5040 5072 6672 6704 1050 1071 1081 1103
        EndSection

        Section "Device"
            Identifier  "Device0"
            Driver      "nvidia"
            VendorName  "NVIDIA Corporation"
            BoardName   "GeForce 9800 GTX+"
        EndSection

        Section "Screen"
            Identifier   "Screen0"
            Device       "Device0"
            Monitor      "Monitor0"
            DefaultDepth 24
            Option       "TwinView" "0"
            Option       "metamodes" "5040x1050"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

  • How do you set up DNS in Window Server 2008 in a Hyper-V environment?

    - by Nathan DeWitt
    I have a laptop running Server 2008 and Hyper-V. I have created a virtual machine that is also running Server 2008, which I promoted to a domain controller with dcpromo. I disabled IPv6 because I had no idea how to enter a default address, and I just wanted to make a standalone MOSS dev environment. I have tried every combination of creating a virtual network on the host and then connecting to it from the VM, but I can't get the VM and the host to communicate with each other. No pinging, no copy and paste, nothing. Thanks. Update: my VM (which is its own DC) currently does not have a static IP. When I set the IP to static, I could not find any setting that would let it talk to the host machine.

  • Why does a zip file appear larger than the source file especially when it is text?

    - by PeanutsMonkey
    I have a text file that is 19 bytes in size, and after compressing it with both zip and 7zip the archive is larger than the original. I had a read of the questions Why is a 7zipped file larger than the raw file? and Why doesn't ZIP Compression compress anything?, but since the file is not already compressed I would have expected it to shrink further. Attached is a screenshot. Edit: I took the example further by creating a file of random data with dd if=/dev/urandom of=sample.log bs=1G count=1 and attempted to compress the file using both zip and 7zip, but there were no compression gains. Why is that?
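
    Both effects are easy to reproduce at a shell; here is a small sketch (the file names are purely illustrative):

        # a tiny file: the zip container's own headers dwarf the 19-byte payload
        printf 'hello world 123\n' > tiny.txt
        zip tiny.zip tiny.txt
        ls -l tiny.txt tiny.zip              # tiny.zip ends up larger than tiny.txt

        # random data: no redundancy for the compressor to exploit
        dd if=/dev/urandom of=sample.log bs=1M count=16
        gzip -c sample.log > sample.log.gz
        ls -l sample.log sample.log.gz       # roughly the same size as the input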

  • Subversion for web designer: repository on a network share and ftp to the live server?

    - by ceatus
    My configuration:

      - htdocs on a Windows network share (Z:)
      - web developers check out with Dreamweaver, modify, and check back in to the Z: drive
      - LAMP running on an Ubuntu server virtualized on Hyper-V, with Apache pointing at the Z: drive for dev, in order to test the websites
      - upload by FTP to the live server

    Now: I need multiple people to access the repository, I want to keep it on a network share, and we manage about 200 websites. All the web developers, administrators and IT need access to the share. I found out that creating an SVN server is the best way for me, so I created it on an Ubuntu Server which is virtualized on Hyper-V. Right now I have the repos local on the Ubuntu Server, but I'd like them on my network drive, and I'd like to have a post-commit hook, if possible, in order to FTP directly to my live server. Do you guys think that a WebDAV solution would be better? Thanks in advance, Angelo
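
    For the post-commit idea, one possible shape of the hook (an untested sketch; the hostname, credentials and paths are placeholders, and lftp is assumed to be installed):

        #!/bin/sh
        # hooks/post-commit -- export the committed tree and mirror it to the live server over FTP
        REPOS="$1"
        REV="$2"
        EXPORT_DIR=/tmp/site-export
        svn export --force "file://$REPOS" "$EXPORT_DIR"
        lftp -u ftpuser,ftppass ftp.example.com -e "mirror -R $EXPORT_DIR /public_html; quit"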

  • Mirroring a Linux server to an external USB hard drive

    - by DuPie
    My google-fu must be failing me; I haven't been able to find a good solution for the following:

      - numerous Linux servers on commodity hardware
      - trying to make a recovery mirror copy to external hard drives
      - the external hard drives are smaller than the source drives, but larger than the data
      - the external drives are connected via USB 2 (slow)
      - the servers hold from 20GB to 400GB of data
      - the servers are remote, so hands-on access is a pain
      - the boot files need to be copied too
      - the external drives are currently empty

    Basically, I'm looking for a way to use a ghosting-style solution from INSIDE a running Linux server to an external hard drive, without booting a CD etc. The rsync/cpio solutions I've looked at don't work great with grub, /dev, /proc etc. I understand that since the system isn't offline it won't be an exact mirror image as files change, but that's OK. Are there any free/commercial products that would work?
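
    For reference, this is how it is often done with plain rsync (a rough sketch, not a ghosting product; /mnt/backup is a placeholder for the mounted external drive):

        # mirror the live root filesystem, skipping pseudo-filesystems and the backup mount itself
        rsync -aAXH --numeric-ids --delete \
            --exclude=/dev/* --exclude=/proc/* --exclude=/sys/* \
            --exclude=/tmp/* --exclude=/run/* --exclude=/mnt/* --exclude=/media/* \
            --exclude=/lost+found \
            / /mnt/backup/
        # grub still has to be reinstalled on the external disk before it can boot on its own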

  • Ubuntu server hangs on reboot "could not stat resume device file"

    - by matnagel
    Instead of booting into the running system, this machine stops, and on the terminal I can see a message:

        could not stat resume device file /dev/sdb5

    When I attach a keyboard and press Enter, the boot continues and the machine comes up like normal. But it's essential that this machine comes up on its own under most circumstances. There never was anything like a "resume" set up on this machine. I have rebooted several times, but this does not happen on every boot, and I cannot find a pattern. There is a software RAID running on the box. This is the syslog during boot: http://privatepaste.com/ff0fd0a51c/sdabfjahfgasjkgfu4gfsdzjcgfafasdjfhgasdcjfgauzfgafasgdufzg How can I get rid of this boot failure? We do not need resume, and the best answer would be something that works here and now.
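
    A possible fix, assuming a standard Ubuntu initramfs-tools setup (a sketch only, not verified on this box):

        # tell the initramfs not to look for a resume (swap) device at all
        echo "RESUME=none" | sudo tee /etc/initramfs-tools/conf.d/resume
        sudo update-initramfs -u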

  • Adding a route entry to the Linux routing table

    - by netg
    Hi, I have two systems with IP addresses, say, 64.103.56.1 (A, device name wlan0) and 64.103.225.18 (B). What I want is that every time I ping B from my system A, the traffic is routed via a router, say, with address 10.0.0.251 (C); I want C to be my next hop to reach B. However, this router is on a different subnetwork than the two systems. How do I do this?

    Things I tried: I used route add -host B gw C wlan0 and got an error saying "no such process exists or no such device found". I pinged C, ran traceroute, and found that the gateway address on my side is some 63.103.236.3 (D), so I added another entry, route add -host C gw D wlan0, and I was able to do that without any error!
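
    One approach that often gets around the "no such process" error (a sketch using the addresses from the question): first give the kernel an on-link route to the router C, then route B through it.

        route add -host 10.0.0.251 dev wlan0           # C becomes reachable directly on wlan0
        route add -host 64.103.225.18 gw 10.0.0.251    # B is then routed via C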

  • Arch Linux shows a blinking cursor instead of a booting installer

    - by fakedrake
    I ran sudo dd if=archlinux-2010.05-core-i686.iso of=/dev/sdb1 and checked the MD5 sums. I tried to boot it on two different PCs and got the same result: instead of booting GRUB, or anything useful for that matter, it just showed a blinking cursor at the top left corner of the screen. The machines became unresponsive to any kind of input, the flash drive LED didn't seem to blink or shiver at all, and there seemed to be no other activity whatsoever. I tried using another flash drive, but the machine completely ignored it and booted Windows normally.
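
    A likely culprit, for what it's worth: the image was written to a partition (/dev/sdb1) rather than to the whole device, so no boot sector ends up on the stick. A sketch of the usual approach, assuming the image is meant to be written raw and the stick really is /dev/sdb (double-check before running dd):

        sudo dd if=archlinux-2010.05-core-i686.iso of=/dev/sdb bs=4M
        sync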

  • fstab line for auto mount drive that all users can read/write

    - by evilblender
    I have installed a cable that connects the CPU's SATA motherboard connection to a removable drive's eSATA connection. I would like to be able to swap drives on the eSATA connection and have all users be able to read and write to these drives. I have created the directory /archive/ where I would like the drive(s) to mount. The drives are all formatted FAT32, but in the future I may use HFS for formatting. When I used the command (as root) mount /dev/sdc1 /archive, the drive was mounted, but read-only. What can I put in my /etc/fstab file that will allow drives to be mounted and unmounted by all users on the system, for both reading and writing? Also, will I be able to mount and unmount these drives without shutting down, or will I need to reboot every time I want to change drives? Thank you. Jeff
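
    A sketch of an fstab line for the FAT32 case (the device name /dev/sdc1 and the uid/gid/umask values are assumptions to adapt):

        /dev/sdc1   /archive   vfat   users,noauto,rw,uid=1000,gid=100,umask=000   0   0
        # "users" lets any user run `mount /archive` and unmount it again, no reboot needed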

  • No virtual console on Ubuntu 12.10

    - by Buzzzz
    When I press Ctrl-Alt-F(1-6) in Ubuntu 12.10, I only get a black screen with a blinking cursor but no login prompt. Any ideas on what could be wrong? It is a fresh install of 12.10 using an AMD Radeon 5850 graphics card. I have tried different things in my /etc/default/grub, but at the moment I use the following:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'
        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash vga=normal"
        #GRUB_CMDLINE_LINUX="vga=0x0376"
        #RUB_CMDLINE_LINUX_DEFAULT="vga=0x014c"
        #GRUB_CMDLINE_LINUX="vga=0x014c"
        #GRUB_GFXPAYLOAD_LINUX=1600x1200x24

        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console

  • How can I disable logging in Tomcat 7?

    - by WilliamMayor
    I have a Tomcat 7 server running in a VM that has very little disk space (20G). Over the course of a few days, Tomcat fills the space with logging info (usually about 15G before it runs out). I've tried turning down the log level (from INFO to SEVERE) in the logging.properties file, and I've also tried sending the log info to /dev/null. It doesn't seem to work; I still get a full log directory in no time at all. Can I put a file size limit on the log files? Is something overriding the properties I'm setting? Where can I find this information? My Google-fu just returns information about logging from within an application using JULI.
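
    If it helps, here is a sketch of a cut-down conf/logging.properties that stops the file handlers entirely (these are standard java.util.logging/JULI property names; adjust to taste):

        # route everything through the console handler only and raise the threshold,
        # so the per-day catalina.*, localhost.* etc. files are never written
        handlers = java.util.logging.ConsoleHandler
        .handlers = java.util.logging.ConsoleHandler
        java.util.logging.ConsoleHandler.level = SEVERE
        java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter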

  • rsync - How to exclude one .htaccess but not all of them

    - by Cory Gagliardi
    I have an rsync command for copying my files from dev to production. I don't want to copy the .htaccess file that's in the root of the HTML directory, but I do want to copy the few .htaccess files that are in its subdirectories. I'm using the argument --exclude .htaccess, which is stopping all of the .htaccess files from getting copied. The other arguments I'm including are -a --recursive --times --perms. Is it possible to configure rsync to do this? Edit: Here is my full command:

        rsync -a --recursive --times --perms \
            --exclude prop_images --exclude tracking --exclude vtours \
            --exclude .htaccess --exclude .htaccess_backup --exclude "*~" \
            /home/user/dev_html/* /home/user/public_html/
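
    In rsync's filter syntax, a pattern that starts with a slash is anchored to the root of the transfer, so something along these lines should skip only the top-level file (a sketch; note the source is given with a trailing slash instead of a glob so the anchoring refers to dev_html itself):

        rsync -a --times --perms \
            --exclude "/.htaccess" \
            --exclude prop_images --exclude tracking --exclude vtours \
            --exclude .htaccess_backup --exclude "*~" \
            /home/user/dev_html/ /home/user/public_html/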

  • recommended way to collect email notifications from crond in Arch Linux

    - by nponeccop
    Arch Linux doesn't have sendmail installed by default, so I get the following messages in my syslog:

        Sep 15 13:16:01 zorro crond[18497]: mailing cron output for user collectors sh cronjob.sh
        Sep 15 13:16:01 zorro crond[18497]: unable to exec /usr/sbin/sendmail: cron output for user collectors sh cronjob.sh to /dev/null

    What is the recommended way to fix this default behaviour so actual messages are sent? heirloom-mailx is installed and capable of sending email messages using SMTP. Is it possible for crond to use mailx to send notifications? Is there any drop-in replacement for sendmail that sends using mailx? Sendmail is not even in the repositories.
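
    One rough idea (a sketch, not a packaged solution): give cron the /usr/sbin/sendmail it is looking for as a tiny wrapper that hands the message to heirloom mailx, which can pick the recipients out of the headers via -t.

        #!/bin/sh
        # /usr/sbin/sendmail stand-in: ignore the MTA-specific flags cron passes
        # and let mailx read the recipients from the headers of the piped message
        exec /usr/bin/mailx -t

    Packages such as msmtp also ship a sendmail-compatible interface, which may be a cleaner drop-in.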

  • Password for IIS after installing

    - by zapping
    After installing IIS on my dev system (Windows XP Professional), it's asking for a username and password when I try to access http://localhost. Can you please help me out? I tried Googling and tried many things but could not resolve the issue. Anonymous access is enabled, IUSR_ is given full access to the wwwroot folder, ASP.NET 2.0 has been registered, etc. But it's still not working. :( EDIT: Now the password prompt has gone away and it shows this:

        Error Type:
        Microsoft VBScript runtime (0x800A0046)
        Permission denied: 'GetObject'
        /localstart.asp, line 40

        Browser Type:
        Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) ; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)

        Page:
        GET /localstart.asp

  • Moving a Drupal site between Linux servers: best practice to avoid file-ownership problems

    - by zero
    I want to port a Drupal Commons 6x24 site from a local LAMP stack to a production webserver. Both systems run openSUSE Linux. How do I do this, and what are the most important steps? How should I handle file ownership? It's important for me to have full control of the file ownership. If I use the wwwrun account, I frequently run into problems, due to a very strict webserver admin. For the long history of looking for fixes and solutions, see this thread, and even more interesting, see this very long and impressive thread here. All the trouble I run into has to do with file ownership and permissions. This is my current setup (note: this was just a quick, hacked-together installation, quick and dirty). My main interest is in the general options I have when porting a Drupal site from Linux to Linux:

        linux-vi17:/srv/www/htdocs/com624 # ls -l
        insgesamt 224
        -rwxrwxrwx  1 root www 45285 19. Jan 00:54 CHANGELOG.txt
        -rwxrwxrwx  1 root www   925 19. Jan 00:54 COPYRIGHT.txt
        -rwxrwxrwx  1 root www   206 19. Jan 00:54 cron.php
        drwxrwxrwx  2 root www  4096 19. Jan 00:54 includes
        -rwxrwxrwx  1 root www   923 19. Jan 00:54 index.php
        -rwxrwxrwx  1 root www  1244 19. Jan 00:54 INSTALL.mysql.txt
        -rwxrwxrwx  1 root www  1011 19. Jan 00:54 INSTALL.pgsql.txt
        -rwxrwxrwx  1 root www 47073 19. Jan 00:54 install.php
        -rwxrwxrwx  1 root www 15572 19. Jan 00:54 INSTALL.txt
        -rwxrwxrwx  1 root www 14940 19. Jan 00:54 LICENSE.txt
        -rwxrwxrwx  1 root www  1858 19. Jan 00:54 MAINTAINERS.txt
        drwxrwxrwx  3 root www  4096 19. Jan 00:54 misc
        drwxrwxrwx 35 root www  4096 19. Jan 00:54 modules
        drwxrwxrwx  4 root www  4096 19. Jan 00:54 profiles
        -rwxrwxrwx  1 root www  1470 19. Jan 00:54 robots.txt
        drwxrwxrwx  2 root www  4096 19. Jan 00:54 scripts
        drwxrwxrwx  4 root www  4096 19. Jan 00:54 sites
        drwxrwxrwx  7 root www  4096 19. Jan 00:54 themes
        -rwxrwxrwx  1 root www 26250 19. Jan 00:54 update.php
        -rwxrwxrwx  1 root www  4864 19. Jan 00:54 UPGRADE.txt
        -rwxrwxrwx  1 root www   294 19. Jan 00:54 xmlrpc.php
        linux-vi17:/srv/www/htdocs/com624 #

    Thanks to BetaRides' answer here, a quick overview of the drush rsync functionality (http://drush.ws/):

        core-rsync   Rsync the Drupal tree to/from another server using ssh.

        Examples:
          drush rsync @dev @stage            Rsync Drupal root from dev to stage (one of which must be local).
          drush rsync ./ @stage:%files/img   Rsync all files in the current directory to the 'img' directory in the file storage folder on stage.

        Arguments:
          source        May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.
          destination   May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.

        Options:
          --mode                  The unary flags to pass to rsync; --mode=rultz implies rsync -rultz. Default is -az.
          --RSYNC-FLAG            Most rsync flags passed to drush sync will be passed on to rsync. See rsync documentation.
          --exclude-conf          Excludes settings.php from being rsynced. Default.
          --include-conf          Allow settings.php to be rsynced.
          --exclude-files         Exclude the files directory.
          --exclude-sites         Exclude all directories in "sites/" except for "sites/all".
          --exclude-other-sites   Exclude all directories in "sites/" except for "sites/all" and the site directory for the site being synced. Note: if the site directory is different between the source and destination, use --exclude-sites followed by "drush rsync @from:%site @to:%site".
          --exclude-paths         List of paths to exclude, separated by : (Unix-based systems) or ; (Windows).
          --include-paths         List of paths to include, separated by : (Unix-based systems) or ; (Windows).

        Topics:  docs-aliases   Site aliases overview with examples
        Aliases: rsync
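
    A rough sketch of how the actual move could look with drush site aliases (the alias names, database credentials, paths and the target user/group below are all assumptions):

        drush rsync @local @prod                                 # push the code tree to production
        drush @local sql-dump --result-file=/tmp/com624.sql      # dump the source database
        # on the production box: load the dump, then hand the tree to your own user
        mysql -u drupaluser -p drupaldb < /tmp/com624.sql
        chown -R youruser:www /srv/www/htdocs/com624
        chmod -R 775 /srv/www/htdocs/com624/sites/default/files  # only files/ needs to be writable by Apache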

  • Redirect Web Subfolder to Root (/folder to /)

    - by manyxcxi
    I am trying to redirect /folder to / using .htaccess, but all I am getting is the Apache HTTP Server Test Page. My root directory looks like this:

        /
            .htaccess
            /folder
            /folder2
            /folder3

    My .htaccess looks like this:

        RewriteEngine On
        RewriteCond %{REQUEST_URI} !^/folder/
        RewriteRule (.*) /folder/$1

    What am I doing wrong? I checked my httpd.conf (I'm running CentOS) and the mod_rewrite module is being loaded. As a side note, my server is not a www server; it's simply a virtual machine, so its hostname is centosvm. Addition: my httpd.conf looks like so:

        <VirtualHost *:80>
            ServerName taa.local
            DocumentRoot /var/www/html
            SetEnv APPLICATION_ENV "dev"
            Alias /taa /var/www/html/taa/public
            <Directory /var/www/html/taa/public>
                DirectoryIndex index.php
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
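
    For reference, the rules above rewrite / into /folder, which is the opposite of a /folder-to-/ redirect; a sketch of rules that send /folder requests back to the root would look something like this (the 301 flag is an assumption):

        RewriteEngine On
        RewriteRule ^folder/(.*)$ /$1 [R=301,L]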

  • Web-based source control management software [closed]

    - by tom smith
    Hi. Not sure if this is the right place, but hopefully someone might have thoughts on a solution/vendor. I'm starting to spec out a project that will require multiple (50-100) developers to be able to manipulate source files/scripts for a large-scale project. The idea is to have each app go through a dev/review/test process, where the users can select (or be assigned) the role they're going to have for the given app. I'm looking for web-based version control, issue tracking, user roles/access, workflow functionality, etc. Ideally, the process will also allow the reviewed/validated app to then be exported to a separate system for testing on the test server/environment. This can be hosted on our servers, or we can go the colo route. I've checked out Atlassian/CollabNet, but any thoughts you can provide would be appreciated as well. Thanks.

  • Mount an encrypted HFS+ partition in Ubuntu

    - by pagid
    I'm trying to mount an encrypted HFS+ partition in Ubuntu. An older post described quite well how to do it, but lacks the information on how to handle encrypted partitions. What I have found so far is:

        # install required packages
        sudo apt-get install hfsprogs hfsutils hfsplus loop-aes-utils
        # try to mount it
        mount -t hfsplus -o encryption=aes-256 /dev/xyz /mount/xyz

    But once I run this I get the following error:

        Error: Password must be at least 20 characters.

    So I tried to type it in twice, but that results in this:

        ioctl: LOOP_SET_STATUS: Invalid argument, requested cipher or key (256 bits) not supported by kernel

    Any suggestions? Thanks. Edit: One thing I'm not sure about is whether I'm using the right password. My assumption is that it is my default one for these situations, but I'm not sure whether Mac OS X chose another password (internally) for that.

  • Ubuntu hangs and cannot be soft-reset after resuming from standby mode

    - by Phuong Nguyen
    I downgraded my xorg drivers so that I can hibernate and suspend my Ubuntu machine smoothly. However, in some cases there's a problem: Ubuntu hangs. When I try to switch to console mode (Ctrl+Alt+F1), I cannot log in; the system replies with an error whenever I press a key. When I press Ctrl+Alt+Del to perform a soft reset, here is what it says:

        [80141.320122] end_request: I/O error, dev sda, sector 193687181
        init: control-alt-delete main process (5660) terminated with status 2.

    This error is not even recorded in syslog. I guess this is a problem with my hard disk, since it says something about a bad sector. What kind of error is this, exactly?
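
    Since the message points at the disk, it may be worth checking the drive's own health counters; a quick sketch (assumes the smartmontools package is installed):

        sudo smartctl -H /dev/sda                      # overall health verdict
        sudo smartctl -a /dev/sda | grep -i -E 'reallocated|pending|uncorrectable'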

  • Ubuntu: make a symbolic link from a new folder in Home to an existing folder

    - by Fath
    Hello, to the point: I have Ubuntu Maverick running on my Lenovo G450. Before, it was Windows 7. All my data is on another partition, which is NTFS. The fstab line to mount that partition:

        /dev/sda5 /data ntfs auto,users,uid=1000,gid=1000,utf8,dmask=027,fmask=137 0 0

    Inside /data there are the folders Musics, Graphics, Tools, Cores, etc. If I want to create a new folder GFX at /home/apronouva/GFX and make it a link pointing to /data/Graphics, how do I do that? So when I open /home/apronouva/GFX the content will be the same as inside /data/Graphics, and whatever changes I make inside GFX will also affect /data/Graphics. I tried:

        $ ln -s /data/Graphics /home/apronouva/GFX

    It resulted in an error: cannot make a symbolic link between folders. Thanks in advance, Fath
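
    That ln invocation has the right shape; the error usually just means /home/apronouva/GFX already exists as a real directory. A sketch of the fix, assuming the pre-created GFX folder is empty and can be removed:

        rmdir /home/apronouva/GFX           # remove the empty placeholder directory first
        ln -s /data/Graphics /home/apronouva/GFX
        ls -ld /home/apronouva/GFX          # should show GFX -> /data/Graphics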

  • Best memory-efficient web browser for Ubuntu?

    - by Steve K
    I've installed Ubuntu 10.10 on an old laptop with only 756 MB of RAM, Pentium M 1.6 processor. I'm using Google Chrome 11.0 (dev channel) for web browsing, and it appears to be using up most of my memory and processor time. Does anyone know of a better browser than Chrome on Ubuntu, for an older computer like mine? I'm new to Ubuntu, so there may also be tweaks I can make to my existing system to have it perform better. But right now it's pretty slow when I've got ~5-10 tabs open. Related question: memory-efficient web-browser

  • Can't figure out how to make Slitaz USB persistent

    - by Dennis Hodapp
    I installed SliTaz on my USB stick. However, I can't figure out how to make it persistent automatically. Different sources tell me different ways to make it persistent. One told me to add "slitaz home=usb" to the syslinux.cfg file, like this:

        append initrd=/boot/rootfs.gz rw root=/dev/null vga=normal autologin slitaz home=usb

    but it didn't work for me. http://www.slitaz.org/en/doc/handbook/liveusb.html gives an example of how to do it manually, but I haven't tried it and I also want it to happen automatically. custompc.co.uk/features/602451/make-any-pc-your-own-with-linux-on-a-usb-key.html is an older article that also explains how to make the USB persistent, but I don't want to try it because it looks outdated (from 2008). Does anyone know the best way to make the USB automatically persistent?

  • Articles on x386 and later CPU-based systems

    - by user32569
    Hi there. I know this is a hard question, and possibly not one to be answered here, but if there is some article, or more that you know about, please post a link. About books: it's sad, but many great computer books cannot be bought in my country. You can find many articles online that describe how memory was mapped back in the days of pre-386 CPUs: how there were explicit holes reserved for the MMIO BIOS, the video BIOS, etc., and how the A20 line allowed access to higher memory. The problem is, times have changed. Today's BIOSes are many times larger, and pure 16-bit x86 mode is used only for booting and ROM flashing. Operating systems ignore the BIOS, since they access everything using drivers. I just want to know how it works today. I know this is not a very specific question, but I have read the OSDev wiki and many articles, and they all refer to the days before the massive adoption of pure 32-bit CPUs.

  • Permissions to run a SharePoint 2010 Application Pool?

    - by Michael Stum
    I'm currently in the process of setting up a SharePoint 2010 farm. In my dev environments, I have one account that is a local admin, is the farm administrator, and runs all application pools. For the production environment, I would like to follow security best practices and run the web applications (at least two: the main portal and My Sites) under separate domain accounts. It's been some time since I worked with IIS, and I remember there were issues with non-admin users accessing files in c:\inetpub. On the other hand, SharePoint "automagically" sets most permissions anyway. Does anyone have experience with which permissions I need to give the domain accounts, at a minimum, for this to work?

  • Disk full, how to move mysql database files?

    - by kopeklan
    My database files are located in /var/lib/mysql, which is on the partition /dev/sda5. This partition is full (refer here for details), so I'm going to move the database files from /var/lib/mysql to /home/lib/mysql. What is the right way to move these database files? I'm going to do these steps (see the sketch below):

        1. Stop the HTTP server and PHP.
        2. Change datadir=/var/lib/mysql to datadir=/home/lib/mysql in /etc/my.cnf.
        3. Move all database files to the new location.
        4. Run killall -9 mysql, then /etc/init.d/mysqld start.
        5. Start the HTTP server and PHP.

    Is this right? Correct me if I'm wrong. Added: currently, mysql won't stop; refer here: mysql wont stop, mysqld_safe appeared in top
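
    For what it's worth, a sketch of the move using the paths from the question (cp -a preserves ownership and permissions; on distributions with AppArmor or SELinux the new datadir usually needs a policy update as well):

        /etc/init.d/mysqld stop                                  # stop mysqld cleanly rather than killall -9
        cp -a /var/lib/mysql /home/lib/mysql                     # copy with ownership/permissions intact
        sed -i 's|/var/lib/mysql|/home/lib/mysql|g' /etc/my.cnf
        /etc/init.d/mysqld start
        # once everything checks out, remove the old /var/lib/mysql copy to free the space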
