Search Results

Search found 25551 results on 1023 pages for 'linux validated rpm oracl'.

Page 393 of 1023

  • What are working xorg.conf settings for using a Matrox TripleHead2Go @ 5040x1050?

    - by Brendan Abel
    I'm trying to configure xorg.conf to correctly set the resolution of my screens. I'm using a Matrox TripleHead, so the monitor is a single 5040x1050 screen. Unfortunately, it's being incorrectly set to 3840x1024. Here is my xorg.conf:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 260.19.06 (buildd@yellow) Mon Oct 4 15:59:51 UTC 2010

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Matrox"
            HorizSync       31.5 - 80.0
            VertRefresh     57.0 - 75.0
            #Option        "DPMS"
            Modeline       "5040x1050@60" 451.27 5040 5072 6784 6816 1050 1071 1081 1103
            #Modeline      "5040x1050@59" 441.28 5040 5072 6744 6776 1050 1071 1081 1103
            #Modeline      "5040x1050@57" 421.62 5040 5072 6672 6704 1050 1071 1081 1103
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce 9800 GTX+"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "TwinView" "0"
            Option         "metamodes" "5040x1050"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection
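
    A debugging sketch, under the assumption that the driver is rejecting the custom modeline and falling back to 3840x1024. The X log usually says why a mode was refused, and the nvidia driver has options to relax its mode validation (option names are taken from the driver README; verify them against your driver version before relying on them):

        # see which modes the nvidia driver validated or rejected, and why
        grep -iE "mode|5040" /var/log/Xorg.0.log | less

        # in Section "Screen", refer to the custom modeline by its exact name and,
        # if it is still rejected, loosen the driver's mode validation:
        #     Option "metamodes" "5040x1050@60"
        #     Option "ModeValidation" "NoEdidModes, NoMaxPClkCheck"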

    Read the article

  • Descending list ordered by file modification time

    - by user62367
    How can I generate a list of files in a directory [e.g. "/mnt/hdd/PUB/"] ordered by the files' modification time, in descending order, with the oldest modified file at the end of the list? ls -A -lRt would be great: https://pastebin.com/raw.php?i=AzuSVmrJ but it groups the output by directory, so the pastebinned result isn't what I need [I don't want a list ordered per directory, I need a per-file ordered list]. OS: OpenWrt [no perl - not enough space for it :( and no "stat" or "file" command]. Thank you!
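
    A minimal sketch, assuming GNU findutils is available (the stock BusyBox find on OpenWrt lacks -printf, so this may require installing the findutils package):

        # newest first: print epoch mtime + path, sort numerically descending, drop the timestamp
        find /mnt/hdd/PUB/ -type f -printf '%T@ %p\n' | sort -rn | cut -d' ' -f2-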

    Read the article

  • Encrypt tar file asymmetrically

    - by DerMike
    I want to achieve something like

        tar -c directory | openssl foo > encrypted_tarfile.dat

    and I need the openssl tool to use public-key encryption. I found an earlier question about symmetric encryption at the command prompt, which does not suffice. I took a look at the openssl(1) man page and only found symmetric encryption. Does openssl really not support asymmetric encryption? Basically, many users are supposed to create their encrypted tar files and store them in a central location, but only a few are allowed to read them.
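
    A sketch of the usual workaround: openssl's S/MIME mode does hybrid encryption (a random symmetric key wrapped with the recipient's public key), so it works on a stream. It assumes the readers' public keys are available as X.509 certificates; recipient.crt and recipient.key below are placeholders:

        # encrypt: anyone with the certificate can do this
        tar -c directory | openssl smime -encrypt -binary -aes256 \
            -outform DER -out encrypted_tarfile.dat recipient.crt

        # decrypt: only the holder of the matching private key can do this
        openssl smime -decrypt -binary -inform DER -in encrypted_tarfile.dat \
            -inkey recipient.key | tar -x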

    Read the article

  • Adding user to chroot environment

    - by Neo
    I've created a chroot system on my Ubuntu machine using schroot and debootstrap, based on a minimal Ubuntu. However, I can't seem to add a new user to this chroot environment. Here is what happens: I enter the schroot as root and add a new user (I tried both the adduser and useradd commands). The username shows up in the /etc/passwd file and I can 'su' to the new user. So far so good. But when I log out of the schroot and re-enter it, the user I created has vanished! There is no mention of that user in /etc/passwd either. How do I make the new user permanent?
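
    A sketch of one likely explanation: if the schroot entry is session-based (a file/tarball chroot or one with a union overlay), every automatic session starts from a pristine copy and throws changes away when it ends. Checking the entry's definition is the first step; chroot name below is a placeholder:

        # check how the chroot is defined; "type=file" or a "union-type=" overlay means each
        # automatic session starts from a pristine copy and discards changes on exit
        schroot --info --chroot mychroot
        cat /etc/schroot/schroot.conf

        # a plain directory chroot with no union overlay keeps changes made inside it:
        #     [mychroot]
        #     type=directory
        #     directory=/srv/chroot/mychroot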

    Read the article

  • Raspberry Pi can't see external hard drive

    - by user265818
    My Raspberry Pi (Model B) can't see my external hard drive. It was working before without a problem, until I disconnected and reconnected the drive. It is a self-powered hard drive. When I put another image on a different SD card, the Raspberry Pi sees the hard drive without a problem, so there must be some sort of configuration issue in the current image on the SD card. Any advice will be gratefully received.
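
    A short diagnostic sketch for narrowing this down on the suspect image (commands are the usual Raspbian tools and may differ on other images):

        # does the kernel enumerate the USB device and create a block device at all?
        dmesg | tail -n 30
        lsusb
        lsblk
        sudo blkid
        # a stale entry here (e.g. an old UUID or device name) can also break the mount
        cat /etc/fstab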

    Read the article

  • How to set TV-out options under Linux on a GeForce 9600 GT video card

    - by polemon
    I'm using the TV-out connector of my GeForce 9600 GT to connect it to an old TV set. It's obviously in composite mode: the other two cables of the component video lead are dead, and only the one labeled Pb/VIDEO gives me a signal. The picture appears black and white on the TV; I presume that's because the video card outputs an NTSC signal, but it's a PAL TV set. How do I change the TV-out from NTSC to PAL? My component-to-SCART adapter hasn't arrived yet, but I think I should be able to set manually whether the signal should be composite or component. How do I switch the TV-out between component and composite modes? I'm running Linux, so it's probably a setting I need to make in xorg.conf.

    Edit: I got this far: I need to set the following in the "Device" section of my xorg.conf:

        Option "TVStandard" "PAL-B"
        Option "TVOutFormat" "COMPOSITE"

    The whole section looks like this now:

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce 9600 GT"
            Option         "AddARGBGLXVisuals" "True"
            Option         "TVStandard" "PAL-B"
            Option         "TVOutFormat" "COMPOSITE"
        EndSection

    How can I list all available settings for "TVStandard" and "TVOutFormat"?
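
    A sketch for finding the accepted values, assuming the proprietary driver's documentation is installed (the README path is an assumption and varies by distribution and driver version):

        # the list of valid TVStandard / TVOutFormat strings is documented in the
        # driver's README, in the appendix on X configuration options
        zless /usr/share/doc/nvidia-*/README*
        # the running driver can also be queried for TV-related attributes
        nvidia-settings -q all | grep -i tv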

    Read the article

  • Apache returning "The connection was reset"

    - by usjes
    One of my dedicated servers had a network issue today and the data center had to replace a router. Since then, the sites on that server return a "The connection was reset" error most of the time. I tried installing nginx and pages load better, but the error still shows up sometimes. Everything in the config seems normal; what could be causing this error? UPDATE: I just noticed that in the WHM Apache status there is always only 1 request currently being processed, with 8 idle workers. I know for sure the server receives thousands of requests per minute. What could be limiting this to such a low number?
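
    A diagnostic sketch, assuming a cPanel/WHM-style Apache layout (the config paths below are assumptions and differ per distribution):

        # which MPM is Apache running, and with what limits?
        apachectl -V | grep -i mpm
        grep -Ri "MaxClients\|ServerLimit\|MaxRequestWorkers" /usr/local/apache/conf* /etc/httpd 2>/dev/null
        # resets or half-open connections piling up would point back at the new router (MTU/firewall)
        netstat -an | grep -c SYN_RECV
        ss -s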

    Read the article

  • How can I change the default domain in an OTRS installation?

    - by Jamie
    I used a turnkeylinux.org OTRS installation and I'm trying to change the default domain of 'yourhost.example.com'. I tried the following:

        sed -ri 's/yourhost.example.com/mydomain.com/' /usr/share/otrs/Kernel/Config/Defaults.pm
        sudo shutdown -r now

    The next time I logged in and tried to create a user, the default domain was still there. How can I change the default domain in an OTRS installation?
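
    A sketch of the usual approach: OTRS keeps local overrides in Kernel/Config.pm, and values already stored by SysConfig override anything edited in Defaults.pm, which would explain why the sed had no visible effect. Paths follow the layout in the question; the exact setting that feeds the user form may differ by OTRS version:

        # put the override in the local config instead of Defaults.pm
        # (inside the Load() sub of /usr/share/otrs/Kernel/Config.pm):
        #     $Self->{FQDN} = 'mydomain.com';
        # then fix any remaining occurrences via Admin -> SysConfig in the web UI
        # and restart the web server so mod_perl picks up the change
        sudo /etc/init.d/apache2 restart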

    Read the article

  • How to add a delay to autologin

    - by raj
    I enabled autologin on my system (CentOS 6.2). To do that I edited the file /etc/gdm/custom.conf and entered this:

        [daemon]
        AutomaticLoginEnable=true
        AutomaticLogin=test

    Here test is an account name, and autologin works for that account. The problem is that it's not possible to log out: every time I log out, control goes back to GDM (the graphical display manager), which again checks for the account test, finds it available, and immediately logs back into the same account. I want to add a delay, meaning GDM should wait for some time, and only if nobody logs in to another account should the test account be logged in. How do I add this delay?
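
    A sketch using GDM's timed-login keys, which are meant for exactly this case (wait N seconds at the greeter, then log the default account in if nobody else does):

        # /etc/gdm/custom.conf
        [daemon]
        AutomaticLoginEnable=false
        TimedLoginEnable=true
        TimedLogin=test
        TimedLoginDelay=30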

    Read the article

  • How can I download a package and all of its dependencies with apt-get

    - by Velislav Marinov
    I'm using Ubuntu 12.04 and I would like to use apt-get to download a package and all of its dependencies. Those packages will have to be installed on computers with no internet connection, so in addition to the base package I also need all of the package's dependencies. Is there an easy way to do this (like in the Muon package manager)? I know that I can use the apt-get download command for this, but I don't want to manually specify each package that Muon recommends installing or upgrading.
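
    A sketch of the common one-liner for this: let apt-cache resolve the dependency closure and feed it to apt-get download. It should be run on a machine with the same release and architecture as the offline targets; package-name is a placeholder:

        # download package-name plus its whole dependency chain into the current directory
        apt-get download $(apt-cache depends --recurse --no-recommends --no-suggests \
            --no-conflicts --no-breaks --no-replaces --no-enhances package-name \
            | grep "^\w" | sort -u)

        # on the offline machine: copy the .deb files over and install them together
        sudo dpkg -i *.deb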

    Read the article

  • Dovecot/Postfix-mysql e-mail Aliases are not correctly forwarded

    - by jo_fryli
    I recently set up Dovecot/Postfix-mysql on my Debian Squeeze server and I have a bit of a problem. Whenever I send an email to an alias ([email protected] forwarded to [email protected], for example), Postfix (or Dovecot, I'm not quite sure) puts this email into a mailbox rather than forwarding it to the real mail address. I have tested all the MySQL queries and they all behave the way I intend them to. The log shows:

        foobar dovecot: deliver([email protected]): msgid=<000001385b464c9a-e40af11e-3bf4-49f6-903d-1d2369f6bfb6-000000@barfoo: saved mail to INBOX

    master.cf main.cf

    Keep in mind that normal e-mail sending and receiving works just fine! I have set up my MySQL tables with Postfixadmin. Thanks for your help!
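
    A diagnostic sketch, assuming the usual Postfixadmin-style map files (the map file names and the alias address below are placeholders):

        # how does Postfix resolve the alias address?
        postmap -q alias@example.com mysql:/etc/postfix/mysql_virtual_alias_maps.cf
        postmap -q alias@example.com mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf
        # if the address also matches the mailbox map, Postfix treats it as a local
        # mailbox and hands it to dovecot's deliver instead of expanding the alias
        postconf -n | grep virtual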

    Read the article

  • Sendmail: external alias not receiving relayed mail under certain circumstances.

    - by ben
    I have set up an alias in /etc/mail/aliases like this:

        user: [email protected]

    This relay DOES work when I telnet to example.com 25 and send mail to [email protected] (where example.com is my domain); it indeed turns up in the [email protected] inbox. Also, mail sent from my server at example.com is generally deliverable to this same email address, [email protected]. HOWEVER, the relay DOES NOT work when I send mail from [email protected] to [email protected], expecting it to be relayed back to [email protected]. The mail.log shows it being received and sent just fine, so I guess it is being blocked by Gmail for some reason. Why, though? As I said, Gmail generally does accept mail from this server.
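
    A diagnostic sketch for the Gmail-to-Gmail-via-relay case. One plausible explanation (an assumption, not confirmed by the log excerpt) is that Gmail quietly suppresses a message that comes back carrying its own Message-ID, or one that fails SPF for the gmail.com envelope sender; both are worth checking:

        # what did Gmail's MX actually answer for that delivery attempt?
        grep "stat=" /var/log/mail.log | tail -n 20
        # what SPF policies are in play for the sender and the relaying domain?
        dig +short TXT gmail.com
        dig +short TXT example.com      # the relaying domain (placeholder)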

    Read the article

  • SSL Handshake negotiation on Nginx terribly slow

    - by Paras Chopra
    I am using Nginx as a proxy to 4 Apache instances. My problem is that SSL negotiation takes a lot of time (600 ms). See this as an example: http://www.webpagetest.org/result/101020_8JXS/1/details/ Here is my Nginx conf:

        user www-data;
        worker_processes 4;

        events {
            worker_connections 2048;
            use epoll;
        }

        http {
            include       /etc/nginx/mime.types;
            default_type  application/octet-stream;
            access_log    /var/log/nginx/access.log;
            sendfile on;
            keepalive_timeout 0;
            tcp_nodelay on;
            gzip on;
            gzip_proxied any;
            server_names_hash_bucket_size 128;
        }

        upstream abc {
            server 1.1.1.1 weight=1;
            server 1.1.1.2 weight=1;
            server 1.1.1.3 weight=1;
        }

        server {
            listen 443;
            server_name blah;
            keepalive_timeout 5;

            ssl on;
            ssl_certificate /blah.crt;
            ssl_certificate_key /blah.key;
            ssl_session_cache shared:SSL:10m;
            ssl_session_timeout 5m;
            ssl_protocols SSLv2 SSLv3 TLSv1;
            ssl_ciphers RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
            ssl_prefer_server_ciphers on;

            location / {
                proxy_pass http://abc;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    The machine is a VPS on Linode with 1 GB of RAM. Can anyone please tell me why the SSL handshake is taking ages?
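
    A measurement sketch to separate network round-trip time from server-side work (the hostname is a placeholder):

        # time the full TLS handshake from a shell and see which protocol/cipher gets negotiated
        time openssl s_client -connect www.example.com:443 < /dev/null
        # the handshake costs two extra round trips plus an RSA private-key operation; from a
        # distant client most of the 600 ms is usually latency, and the ssl_session_cache in
        # the config above only helps clients that reconnect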

    Read the article

  • SFTP through proxy

    - by aerodynamic_props
    I have a large amount of data on scratch space at computer b that I want to get. In my network I cannot directly connect to computer b (ssh exits with "No route to host"); I must first connect to computer a, and then connect to computer b. I cannot move the data from the scratch space on computer b to computer a because of a disk quota that is imposed on me at computer a. How can I move the data from computer b to my computer in this situation?
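
    A sketch using OpenSSH's ProxyCommand, so the data streams through computer a without ever being stored there (hostnames and paths are placeholders; OpenSSH 7.3+ can use -J/ProxyJump instead):

        # one-off copy, tunnelled through computer a
        scp -o ProxyCommand="ssh user@computer-a -W %h:%p" -r user@computer-b:/scratch/data ./
        # or make it permanent in ~/.ssh/config so sftp/scp/rsync just work:
        #     Host computer-b
        #         ProxyCommand ssh user@computer-a -W %h:%p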

    Read the article

  • Recovering a broken GNOME desktop in Debian wheezy

    - by morgon
    An hour ago I had a working GNOME desktop on my Debian system (ThinkPad X121e). Then I installed Compiz, which crashed. After a reboot the GNOME desktop no longer started. I then did some upgrades with aptitude; all the GNOME packages seem to be there, but it is still not working. On startup I get a login dialog, but when I log in there is no desktop, only some window manager running that allows me to start a terminal. When I run "gnome-session" I get the error message "failed to load session "gnome"". So how do I get back to a working desktop? I have tried "tasksel install gnome-desktop --new-install", but that just displays a progress window that after half an hour still shows 0%. Can someone help me please?
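
    A recovery sketch for wheezy, assuming the packages themselves are intact and only the session setup is broken (package names are the standard Debian metapackages):

        # see what actually failed during the last login attempt
        tail -n 50 ~/.xsession-errors
        # make sure the desktop metapackages and display manager are fully installed
        sudo apt-get update
        sudo apt-get install --reinstall gnome-core gdm3
        sudo apt-get -f install        # pick up anything left half-configured by the crash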

    Read the article

  • How to determine the best byte size for the dd command

    - by James
    I know that running dd if=/dev/hda of=/dev/hdb does a raw, sector-by-sector hard drive copy. I've heard that people have been able to speed up the process by increasing the number of bytes that are read and written at a time (512 by default) with the "bs" option. People have suggested that the optimal byte size is determined by the sector size; I personally think it has something to do with the amount of cache the hard drive has. My questions are: what determines the ideal byte size for copying from a hard drive, and why does it determine the ideal byte size?
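
    In practice the throughput curve flattens once bs is a few times the drive's request size, so measuring is the most reliable way to settle it for a given drive and controller. A benchmarking sketch (the device name is copied from the question; iflag=direct is a GNU dd flag that bypasses the page cache so the runs stay comparable):

        # read the same 1 GiB at different block sizes and compare the reported throughput
        dd if=/dev/hda of=/dev/null bs=64k count=16384 iflag=direct
        dd if=/dev/hda of=/dev/null bs=1M  count=1024  iflag=direct
        dd if=/dev/hda of=/dev/null bs=4M  count=256   iflag=direct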

    Read the article

  • How to change the Nginx default folder?

    - by Ido Bukin
    I set up a server with Nginx and put my public_html in /home/user/public_html/website.com/public, but requests always resolve to /usr/local/nginx/html/. How can I change this?

    nginx.conf:

        user www-data www-data;
        worker_processes 4;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay off;
            keepalive_timeout 5;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /usr/local/nginx/sites-enabled/*;
        }

    /usr/local/nginx/sites-enabled/default:

        server {
            listen 80;
            server_name localhost;

            location / {
                root  html;
                index index.php index.html index.htm;
            }

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    /usr/local/nginx/sites-available/website.com:

        server {
            listen 80;
            server_name website.com;
            rewrite ^/(.*) http://www.website.com/$1 permanent;
        }

        server {
            listen 80;
            server_name www.website.com;

            access_log /home/user/public_html/website.com/log/access.log;
            error_log  /home/user/public_html/website.com/log/error.log;

            location / {
                root  /home/user/public_html/website.com/public/;
                index index.php index.html;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/user/public_html/website.com/public/$fastcgi_script_name;
            }
        }

    The error message I get is:

        Fatal error: require_once() [function.require]: Failed opening required '/usr/local/nginx/html/202-config/functions.php'

    So the server tries to find the file in the Nginx folder and not in my public_html.
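
    A diagnostic sketch: the /usr/local/nginx/html/ path in the error suggests the request is being answered by the default "localhost" server block (root html;) rather than the website.com vhost, which usually means the vhost file was never linked into sites-enabled or the Host header doesn't match. File names follow the layout in the question; the nginx binary path is the default for a source install and may differ:

        # is the vhost actually enabled?
        ls -l /usr/local/nginx/sites-enabled/
        ln -s /usr/local/nginx/sites-available/website.com /usr/local/nginx/sites-enabled/website.com
        # validate and reload
        /usr/local/nginx/sbin/nginx -t && /usr/local/nginx/sbin/nginx -s reload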

    Read the article

  • How to mount a remote Samba share from the local host with multiple groups?

    - by Dragos
    I am using mount.cifs to mount a remote Samba share (both client and server are Ubuntu Server 8.04) like this:

        mount.cifs //sambaserver/samba /mountpath -o credentials=/path/.credentials,uid=someuser,gid=1000

        $ cat .credentials
        username=user
        password=password

    I mounted it as a local user with a username and password via mount.cifs, but the problem is that the user is part of multiple groups on the remote system, and with mount.cifs I can only specify one gid. Is there a way to specify all the gids that the remote user has? In other words, is there a way to:

    1) Mount the remote Samba share with multiple groups on the local system?
    2) Browse the mount from 1) in the terminal, since I want to pass some files from the Samba share as arguments to local programs?

    Other solutions would be nautilus with sftp://, which runs through gvfs, but newer GNOME no longer writes ~/.gvfs to disk, so I can't browse it in a terminal. And the last solution would be NFS, but that means I have to synchronize the uids and gids on the local system with the ones from the server.
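
    A sketch of a common workaround: the cifs module only accepts a single uid=/gid= to stamp onto files, but the noperm option (documented in mount.cifs(8)) disables client-side permission checks and lets the server enforce its own ACLs for the authenticated user, which effectively honours all of that user's remote groups:

        mount.cifs //sambaserver/samba /mountpath \
            -o credentials=/path/.credentials,uid=someuser,gid=1000,noperm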

    Read the article

  • How to analyse logs after the site was hacked

    - by Vasiliy Toporov
    One of our web projects was hacked. The attacker changed some template files in the project and one core file of the web framework (it's one of the well-known PHP frameworks). We found all the corrupted files with git and reverted them. Now I need to find the weak point. We can say with high probability that it was not FTP or SSH password theft. The hosting provider's support specialist (after analyzing the logs) said it was a security hole in our code. My questions:

    1) What tools should I use to review Apache's access and error logs? (Our server distro is Debian.)
    2) Can you give tips for spotting suspicious lines in logs? Maybe tutorials or primers on useful regexps or techniques?
    3) How do I separate "normal user behavior" from suspicious behavior in the logs?
    4) Is there any way of preventing such attacks in Apache?

    Thanks for your help.
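
    A starting-point sketch for questions 1)-3): grep the access logs for the request patterns that most PHP compromises leave behind, then pivot around the timestamps of the files git showed as modified. Log paths are the Debian defaults and the patterns are illustrative, not exhaustive:

        # common injection / remote-file-inclusion / traversal signatures in the access logs
        zgrep -Ei "union.+select|base64_decode|eval\(|\.\./\.\.|=http://|/etc/passwd" \
            /var/log/apache2/access.log* | less
        # POST requests are where most upload/backdoor traffic hides
        zgrep '"POST ' /var/log/apache2/access.log* | less
        # anything on disk modified around the time of the known-bad changes
        find /var/www -name "*.php" -newermt "2013-06-01"   # date is a placeholder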

    Read the article
