Search Results

Search found 1586 results on 64 pages for 'restarting'.

Page 59/64

  • Win-XP Browsers Hang on page load - (waiting for...)

    - by CHarmon
    Hello, I’m having problems with my browsers hanging while loading pages on my desktop machine. I’m using Windows XP Pro with SP3, fully updated except for IE 8. All three of my browsers (IE 7, Chrome and Firefox) have the same problem: pages are not being loaded and hang on “waiting for …”, either for the page being loaded or for ad servers. Sometimes a page will load but the loading graphic keeps going as if the page were still loading, even when it appears fully loaded. The problem is bad enough that I can’t really use any of my browsers, though I can eventually get most pages to load by stopping and restarting the page load.

    I have a DSL modem with a wireless router, and I have been able to eliminate the modem and router as the source of my problem: my laptop doesn’t have any problems, even when hardwired to the router with the wireless connection disabled. I deleted the NIC and let XP re-install it, tried a different network cable, and tried the same router port used in the laptop test.

    One clue that may be important: I can’t connect to my router from the desktop machine (the page hangs while trying to connect), yet I can ping the router, and I can quickly connect to it from the laptop. I also can’t use the Windows Update process; the page never fully loads. The problem affects other user accounts and even happens in safe mode. I am convinced the problem is with part of the OS, some layer able to affect all of the browsers. The purpose of this post is to see if anyone has some ideas before I do an XP repair. I have done quite a bit of troubleshooting:

    - Ran a full anti-virus scan with AVG: no problems
    - Ran full scans with Spybot, MalwareBytes and Sophos anti-rootkit: no problems
    - Ran Chkdsk with both options checked
    - Ran Disk Cleanup
    - Defragged
    - Re-installed IE 7
    - Cleared all the browser caches
    - Ran CCleaner (registry tool)
    - Ran HijackThis: nothing unusual (problem happens in safe mode too)
    - Ran Process Explorer: no unusual processes
    - Used System Restore and fell back several days: no change in the problem
    - Booted to last known good configuration: no change in the problem
    - Ran MicrosoftFixit50199.msi: no change in the problem

    Any ideas or suggestions would be appreciated; I’m not looking forward to doing a repair on XP. Thanks in advance for any help.
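    Since the symptoms point at a network layer below the browsers, one commonly suggested diagnostic on XP (a hedged suggestion, not something the poster reports trying) is to reset the Winsock catalog and TCP/IP stack, since a corrupt layered service provider can hang every browser at once while ping still works:

        REM Reset the Winsock catalog (undoes LSP corruption that can affect all browsers)
        netsh winsock reset
        REM Reset the TCP/IP stack, logging the changes to resetlog.txt
        netsh int ip reset resetlog.txt
        REM Reboot afterwards for both resets to take effect

    If pages load normally after the reboot, the problem was in the Winsock/LSP layer rather than in the browsers themselves.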

    Read the article

  • Internet Explorer cannot display page from apache with single SSL virtual host

    - by P.scheit
    I have a question that has come up somehow in different questions, but I still can't find the solution yet. My problem is that I'm hosting a site on Apache 2.4 on Debian with SSL, and Internet Explorer 7 on Windows XP shows "Internet Explorer cannot display the webpage". I have only ONE virtual host that uses SSL, but DIFFERENT virtual hosts that use http. Here is my config for the site with SSL enabled (/etc/apache2/sites-available/default-ssl is NOT linked):

        <VirtualHost xx.yyy.86.193:443>
            ServerName www.my-certified-domain.de
            ServerAlias my-certified-domain.de
            DocumentRoot "/var/local/www/my-certified-domain.de/current/www"
            Alias /files "/var/local/www/my-certified-domain.de/current/files"
            CustomLog /var/log/apache2/access.my-certified-domain.de.log combined
            <Directory "/var/local/www/my-certified-domain.de/current/www">
                AllowOverride All
            </Directory>
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/www.my-certified-domain.de.crt
            SSLCertificateKeyFile /etc/ssl/private/www.my-certified-domain.de.key
            SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
            SSLCertificateChainFile /etc/apache2/ssl.crt/www.my-certified-domain.de.ca
            BrowserMatch "MSIE [2-8]" nokeepalive downgrade-1.0 force-response-1.0
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.my-certified-domain.de
            ServerAlias my-certified-domain.de
            CustomLog /var/log/apache2/access.my-certified-domain.de.log combined
            Redirect permanent / https://www.my-certified-domain.de/
        </VirtualHost>

    My ports.conf looks like this:

        NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    The output from apache2ctl -S is like this:

        xx.yyy.86.193:443      www.my-certified-domain.de (/etc/apache2/sites-enabled/020-my-certified-domain.de:1)
        wildcard NameVirtualHosts and _default_ servers:
        *:80                   is a NameVirtualHost
                 default server phpmyadmin.my-certified-domain.de (/etc/apache2/conf.d/phpmyadmin.conf:3)
                 port 80 namevhost phpmyadmin.my-certified-domain.de (/etc/apache2/conf.d/phpmyadmin.conf:3)
                 port 80 namevhost staging.my-certified-domain.de (/etc/apache2/sites-enabled/010-staging.my-certified-domain.de:1)
                 port 80 namevhost testing.my-certified-domain.de (/etc/apache2/sites-enabled/015-testing.my-certified-domain.de:1)
                 port 80 namevhost www.my-certified-domain.de (/etc/apache2/sites-enabled/020-my-certified-domain.de:31)

    I included the solution for this question: "Internet explorer cannot display the page, other browsers can, possibly htaccess / server error". And I understand the answer from this question: "How to setup Apache NameVirtualHost on SSL?". In fact, I only have one SSL certificate for the domain, and I only want to run ONE virtual host with SSL, so I just want to use the one IP for the SSL virtual host. But still (after rebooting / restarting / testing) Internet Explorer will not show the page. When I interpret the apache2ctl -S output, I already have only one SSL host, and it should respond to the initial SSL handshake, shouldn't it? What is wrong in this setup? Thank you so much. Philipp
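    One browser-independent way to see what the server actually presents during the handshake is to probe port 443 with the openssl client; a hedged diagnostic sketch (the domain is the poster's placeholder):

        # Show the certificate chain and negotiated parameters without SNI,
        # which matches what IE7 on Windows XP sends
        openssl s_client -connect www.my-certified-domain.de:443
        # And with SNI, for comparison with modern browsers
        openssl s_client -connect www.my-certified-domain.de:443 -servername www.my-certified-domain.de

    If the no-SNI probe fails or returns a different certificate while the SNI probe succeeds, the problem is IE7/XP's lack of SNI support rather than the virtual host logic.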

    Read the article

  • Nagios core Event Handler not working

    - by sivashanmugam
    The Nagios event handler is not triggering when the service is taking more time to respond or is down. My configuration is below.

    nagios.cfg:

        enable_event_handlers=1

    localhost.cfg:

        define service {
            use                     generic-service
            host_name               Server
            service_description     test-server
            servicegroups           test-service
            check_command           check-service
            is_volatile             0
            check_period            24x7
            max_check_attempts      4
            normal_check_interval   2
            retry_check_interval    2
            contact_groups          testcontacts
            notification_period     24x7
            notification_options    w,u,c,r
            notifications_enabled   1
            event_handler_enabled   1
            event_handler           recheck-service
        }

    command.cfg:

        define command{
            command_name recheck-service
            command_line /usr/local/nagios/libexec/alert.sh $SERVICESTATE$ $SERVICESTATETYPE$ $SERVICEATTEMPT$
        }

    alert.sh file:

        #!/bin/sh
        set -x
        case "$1" in
        OK)
            # The service just came back up, so don't do anything...
            ;;
        WARNING)
            # We don't really care about warning states, since the service is probably still running...
            ;;
        UNKNOWN)
            # We don't know what might be causing an unknown error, so don't do anything...
            ;;
        CRITICAL)
            # Aha! The HTTP service appears to have a problem - perhaps we should restart the server...
            # Is this a "soft" or a "hard" state?
            case "$2" in
            # We're in a "soft" state, meaning that Nagios is in the middle of retrying the
            # check before it turns into a "hard" state and contacts get notified...
            SOFT)
                # What check attempt are we on? We don't want to restart the web server on the
                # first check, because it may just be a fluke!
                case "$3" in
                # Wait until the check has been tried 3 times before restarting the web server.
                # If the check fails on the 4th time (after we restart the web server), the
                # state type will turn to "hard" and contacts will be notified of the problem.
                # Hopefully this will restart the web server successfully, so the 4th check
                # will result in a "soft" recovery. If that happens no one gets notified
                # because we fixed the problem!
                3)
                    echo -n "Going To Ping the Virtual Machine (3rd soft critical state)..."
                    # Call the init script to restart the HTTPD server
                    myresult=`/usr/local/nagios/libexec/check_http xyz.com -t 100 | grep 'time' | awk '{print $10}'`
                    echo "Your Service Is taking the following time Delay" "$myresult Seconds" | mail -s "WARNING : Service Taken More Time To Response" [email protected]
                    ;;
                esac
                ;;
            # The HTTP service somehow managed to turn into a hard error without getting fixed.
            # It should have been restarted by the code above, but for some reason it didn't.
            # Let's give it one last try, shall we?
            # Note: Contacts have already been notified of a problem with the service at this

    Read the article

  • System failure - need diagnostic recommendation

    - by Ladislav Mrnka
    I have a big problem with my computer. The configuration is:

    - Intel i7 + 6x2GB OCZ DDR3
    - Motherboard: Asus P6T Deluxe V2, HDD controller configured to AHCI
    - Main drive: OCZ Vertex 2 (SSD), contains all installed programs and the system
    - Second drive: Samsung SpinPoint, contains user profiles, ProgramData, virtual machines and databases
    - Third drive: Samsung SpinPoint, data drive + backups
    - OS: Windows 7 Ultimate x64

    I have never had any problem with this computer until now. During the weekend my computer completely crashed without any reason. Each time I tried to boot into Windows I got a BSOD with the message BAD_SYSTEM_CONFIG_INFO and an automatic restart (I didn't install any new SW or HW). But after the restart the main OCZ drive was disconnected (not detected by the BIOS). When I turned the computer off and on, the drive was connected again. This also happened every single time I tried to repair the installation somehow: it ended with some error, and after the restart the drive was disconnected. The only thing that worked was a format + fresh install.

    After installing almost everything, I wanted to install Visual Studio 2010 Ultimate (complete installation without SQL Server Express). During the installation of VS itself I always get a BSOD; it is too fast, so I'm not able to read the description. After the restart it searches all disk drives for a really long time, and sometimes it changes the boot drive so that the system is not able to start (Bootmgr not found). After reconfiguring the BIOS the system starts. There is no event describing the failure in Event Viewer.

    Installing VS 2010 is absolutely necessary for me, and I need help with diagnostics. I need to find where the problem is. I expect that the problem is in the OCZ drive or in the HDD controller on the motherboard, but I don't know how to find it. All components still have a valid warranty. Can you recommend some approach or tools to find the problem?

    Edit: I'm still looking for the source of the problem. New information is that Windows is not able to perform a check disk (Chkdsk) on the SSD system drive. After restarting it always starts checking the drive, and in the part where files are checked it fails with the BSOD BAD_SYSTEM_CONFIG_INFO. After the next restart, skipping the check disk tests, the system runs.

    Read the article

  • DNS Issue Windows 2003 AD-The server holding the PDC role is down

    - by Dave M
    Our network of Windows 2003 and Windows 2008 servers suddenly has DNS issues. There are 7 DCs: two at our main office and one at each branch site (one branch has two, a 2008 R2 and a WIN2K3); only two are WIN2008R2. Running DCDIAG on the WIN2K3 at the main site (DC1) reports no issues. Running it at any branch site reports two issues; all other tests pass. The server DC1 can be pinged by name from any site.

        Starting test: frsevent
            There are warning or error events within the last 24 hours after the SYSVOL has been shared.
            Failing SYSVOL replication problems may cause Group Policy problems.
        Starting test: FsmoCheck
            Warning: DcGetDcName(PDC_REQUIRED) call failed, error 1355
            A Primary Domain Controller could not be located.
            The server holding the PDC role is down.

    Netdom.exe /query DC reports the expected servers. netdom query fsmo reports that the server at the main office holds the following roles: Schema owner, Domain role owner, PDC role, RID pool manager, Infrastructure owner.

    In the DNS management snap-in, DC1 appears as a DNS server but does not appear in _msdcs-dc-_sites-Default-First-Site-Name-_TCP; there is no _ldap or _kerberos record pointing to DC1. Same issue under msdcs-dc-_sites- -_TCP: again there is no _ldap or _kerberos record pointing to DC1. Under Domain DNS Zones there is no entry for the server, and this is the case for any _tcp folder in the DNS. The server DC1 appears correctly as a name server in the Reverse Lookup Zone. There is a Host (A) record for DC1, but in the Forward Lookup Zone there is no (same as parent folder) Host (A) record for DC1, though such an entry exists for the other DCs at branch sites and the other DC at the main office.

    We have tried stopping and starting the Netlogon service, restarting DNS, and also dcdiag /fix. Netdiag reports errors:

        Trust relationship test. . . . . . : Failed
        [FATAL] Secure channel to domain 'XXX' is broken. [ERROR_NO_LOGON_SERVERS]
        [WARNING] Failed to query SPN registration on DC-   (one entry for each branch DC)

    All branches list the problem server, and it can be pinged by name from any branch. Fixing this is the number one priority, but we would also like to determine the cause.
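    Since the _ldap/_kerberos SRV records for DC1 are the pieces that are missing, one common approach (a hedged suggestion, not confirmed as the fix here) is to force DC1 to re-register its DNS records; nltest is built into 2008 and ships with the Windows Server 2003 Support Tools:

        REM Run on DC1: re-register the host (A) record and the DC's SRV records
        ipconfig /registerdns
        nltest /dsregdns
        REM Restarting Netlogon also rewrites the records listed in netlogon.dns
        net stop netlogon && net start netlogon

    If the records still do not appear, comparing %SystemRoot%\System32\config\netlogon.dns on DC1 against the zone contents would show exactly which registrations are failing.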

    Read the article

  • Galera install failure on Fedora 18

    - by ehime
    I've been trying to reinstall MariaDB and have been encountering multiple issues:

        $ yum install Mariadb-Galera-server
        Error: Package: MariaDB-Galera-server-5.5.29-1.i386 (mariadb)
                   Requires: galera
                   Available: galera-23.2.4-1.rhel5.i386 (mariadb)
                       galera = 23.2.4-1.rhel5
         You could try using --skip-broken to work around the problem
         You could try running: rpm -Va --nofiles --nodigest

    There is a requirement that libssl.so.6 and libcrypto.so.6 are installed; these DO show up in my /lib64 and /lib, though, as linked items:

        /usr/lib
        -rwxr-xr-x  1 root root 1356700 Nov 23  2010 libcrypto.so.0.9.8e
        lrwxrwxrwx  1 root root      19 Jun 28 12:03 libcrypto.so.6 -> libcrypto.so.0.9.8e
        -rwxr-xr-x. 1 root root  394272 Mar 18 14:22 libssl.so.1.0.1e
        lrwxrwxrwx  1 root root      16 Jun 28 12:03 libssl.so.6 -> libssl.so.0.9.8e

        /usr/lib64
        -rwxr-xr-x  1 root root 1849680 Mar 18 14:21 libcrypto.so.1.0.1e
        lrwxrwxrwx  1 root root      26 Jun 28 11:54 libcrypto.so.6 -> /lib64/libcrypto.so.1.0.1e
        -rwxr-xr-x  1 root root  421712 Mar 18 14:21 libssl.so.1.0.1e
        lrwxrwxrwx  1 root root      23 Jun 28 11:54 libssl.so.6 -> /lib64/libssl.so.1.0.1e

    So the deps SHOULD be met. Trying to yum install galera returns this:

        Resolving Dependencies
        --> Running transaction check
        ---> Package galera.i386 0:23.2.4-1.rhel5 will be installed
        --> Restarting Dependency Resolution with new changes.
        --> Running transaction check
        ---> Package galera.i386 0:23.2.4-1.rhel5 will be installed
        --> Finished Dependency Resolution

    No errors? But no install either.... ? Let's try wget and rpm'ing the package instead, I guess:

        $ wget https://launchpad.net/galera/2.x/23.2.4/+download/galera-23.2.4-1.rhel5.x86_64.rpm
        $ rpm -ivh galera-23.2.4-1.rhel5.x86_64.rpm

    This issues the dreaded error:

        Failed dependencies:
            libcrypto.so.6()(64bit) is needed by galera-23.2.4-1.rhel5.x86_64
            libssl.so.6()(64bit) is needed by galera-23.2.4-1.rhel5.x86_64

    But we saw above these packages are here =( What's going on?? Is openssl not installed?

        $ yum install openssl
        Loaded plugins: langpacks, presto, refresh-packagekit
        Package 1:openssl-1.0.1e-4.fc18.x86_64 already installed and latest version
        Nothing to do

    It's there.... ??? wth Fedora?
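    A note on why the symlinks don't help: rpm resolves a dependency like libssl.so.6()(64bit) against the provides recorded in the rpm database, not against files on disk, so a hand-made symlink satisfies the runtime linker but not the package manager. A hedged way to check what, if anything, provides those SONAMEs (the compat package name is an assumption about what the Fedora repos carry):

        # Ask the rpm database who claims to provide the required SONAMEs
        rpm -q --whatprovides 'libssl.so.6()(64bit)'
        rpm -q --whatprovides 'libcrypto.so.6()(64bit)'
        # If nothing does, the OpenSSL 0.9.8 compatibility package (if available) would
        yum install openssl098e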

    Read the article

  • How to make Windows 7 use the internet connection that I specify

    - by user138957
    I have a LAN adapter and a USB wireless internet connection. When both are connected, Windows 7 always uses the USB. I tried changing the metric values, but no luck. Let me explain the steps I took:

    - Currently automatic metric on all adapters. LAN connected. ipconfig shows it is connected with the correct IP/DNS/gateway etc. The IPv4 route table shows metric 24.
    - Then connected USB. ipconfig shows USB connectivity, then LAN, in that order. Internet is now through USB. The IPv4 route table shows metric 4249 for LAN; USB is 41. The gateway for USB shows "on-link". netstat -rn shows the USB device on top.
    - Changed the LAN metric to 5, and now the route table shows LAN as 9 (not sure why it added 4) and USB as 41. netstat shows LAN then USB. ipconfig shows LAN then USB. But the connection is still through USB. How do I know? Task Manager shows utilization only through USB, and the speed shows around 1 Mbps rather than the LAN's 10 Mbps.

    How can I get Windows 7 to use the LAN while the USB is connected? I am just trying to use the USB as a backup in case I lose the LAN connection. Please help!

    I thought I would set the USB metric manually to, say, 10, but it says I have to reconnect for it to take effect. Currently the USB still shows below the LAN and still has 9 and 41 in the table. Disconnected USB: the table shows the LAN metric as 24 (not sure why it got changed from 9, and the setting reverted to automatic). Reconnected USB: the setting still shows 10, and the route table shows 11 for USB while LAN shows 4249 (settings show 4245, 4 less). For some reason, reconnecting the USB resets the LAN setting. Thanks
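    Interface metrics can also be inspected and set from an elevated prompt with netsh, which is sometimes more predictable than the adapter-properties dialog; a hedged sketch (the interface names are assumptions and must match those listed by the first command):

        REM List interfaces and their current metrics
        netsh interface ipv4 show interfaces
        REM Give the LAN a lower (preferred) metric than the USB adapter
        netsh interface ipv4 set interface "Local Area Connection" metric=5
        netsh interface ipv4 set interface "Wireless USB" metric=50

    A lower metric wins, so as long as the LAN's effective metric stays below the USB adapter's, traffic should prefer the LAN and fall back to USB when the LAN route disappears.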

    Read the article

  • linux disk usage report inconsistency after removing file. cpanel inaccurate disk usage report

    - by brando
    relevant software: Red Hat Enterprise Linux Server release 6.3 (Santiago), cpanel installed 11.34.0 (build 7)

    background and problem: I was getting a disk usage warning (via cpanel) because /var seemed to be filling up on my server. The assumption would be that there was a log file growing too large and filling up the partition. I recently removed a large log file and changed my syslog config to rotate the log files more regularly; I removed something like /var/log/somefile and edited /etc/rsyslog.conf. This is why I was suspicious of the disk usage warning issued by cpanel: it didn't seem right. This is what df was reporting for the partitions:

        $ [/var]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda2             9.9G  511M  8.9G   6% /
        tmpfs                 5.9G     0  5.9G   0% /dev/shm
        /dev/sda1              99M   53M   42M  56% /boot
        /dev/sda8             883G  384G  455G  46% /home
        /dev/sdb1             9.9G  151M  9.3G   2% /tmp
        /dev/sda3             9.9G  7.8G  1.6G  84% /usr
        /dev/sda5             9.9G  9.3G  108M  99% /var

    And this is what du was reporting for the /var mount point:

        $ [/var]# du -sh
        528M    .

    Clearly something funky was going on. I had a similar kind of reporting inconsistency in the past, and df reporting seemed to be correct after I restarted the server, so I decided to reboot the server to see if the same thing would happen. This is what df reports now:

        $ [~]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda2             9.9G  511M  8.9G   6% /
        tmpfs                 5.9G     0  5.9G   0% /dev/shm
        /dev/sda1              99M   53M   42M  56% /boot
        /dev/sda8             883G  384G  455G  46% /home
        /dev/sdb1             9.9G  151M  9.3G   2% /tmp
        /dev/sda3             9.9G  7.8G  1.6G  84% /usr
        /dev/sda5             9.9G  697M  8.7G   8% /var

    This looks more like what I'd expect. For consistency, this is what du reports for /var now:

        $ [/var]# du -sh
        638M    .

    question: This is a nuisance. I'm not sure where the disk usage reports issued by cpanel get their info, but it clearly isn't correct. How can I avoid this inaccurate reporting in the future? df reporting the wrong disk usage seems like a strong indicator of the source problem, but I'm not sure. Is there a way to 'refresh' the filesystem somehow so that the df report is accurate without restarting the server? Any other ideas for resolving this issue?
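    The classic cause of df and du disagreeing after a log file is removed is a process (often the syslog daemon) still holding the deleted file open: du no longer sees the directory entry, but the blocks are not released until the last open handle closes. A hedged diagnostic sketch:

        # List open files whose on-disk link count is below 1, i.e. deleted but still held open
        lsof +L1
        # If rsyslog is the holder, restarting it releases the space without a reboot
        service rsyslog restart

    This also explains why the reboot "fixed" it: rebooting closed every handle, so df and du converged again.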

    Read the article

  • Linux service --status-all shows "Firewall is stopped." what service does firewall refer to?

    - by codewaggle
    I have a development server with the lamp stack running CentOS:

        [Prompt]# cat /etc/redhat-release
        CentOS release 5.8 (Final)
        [Prompt]# cat /proc/version
        Linux version 2.6.18-308.16.1.el5xen ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-52)) #1 SMP Tue Oct 2 22:50:05 EDT 2012
        [Prompt]# yum info iptables
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: mirror.anl.gov
         * extras: centos.mirrors.tds.net
         * rpmfusion-free-updates: mirror.us.leaseweb.net
         * rpmfusion-nonfree-updates: mirror.us.leaseweb.net
         * updates: mirror.steadfast.net
        Installed Packages
        Name    : iptables
        Arch    : x86_64
        Version : 1.3.5
        Release : 9.1.el5
        Size    : 661 k
        Repo    : installed
        ....Snip....

    When I run service --status-all, part of the output looks like this:

        ....Snip....
        httpd (pid xxxxx) is running...
        Firewall is stopped.
        Table: filter
        Chain INPUT (policy DROP)
        num  target               prot opt source      destination
        1    RH-Firewall-1-INPUT  all  --  0.0.0.0/0   0.0.0.0/0
        Chain FORWARD (policy DROP)
        num  target               prot opt source      destination
        1    RH-Firewall-1-INPUT  all  --  0.0.0.0/0   0.0.0.0/0
        Chain OUTPUT (policy ACCEPT)
        num  target               prot opt source      destination
        Chain RH-Firewall-1-INPUT (2 references)
        ....Snip....

    iptables has been loaded to the kernel and is active, as represented by the rules being displayed. Checking just the iptables service returns the rules, just like --status-all does:

        [Prompt]# service iptables status
        Table: filter
        Chain INPUT (policy DROP)
        num  target               prot opt source      destination
        1    RH-Firewall-1-INPUT  all  --  0.0.0.0/0   0.0.0.0/0
        Chain FORWARD (policy DROP)
        num  target               prot opt source      destination
        1    RH-Firewall-1-INPUT  all  --  0.0.0.0/0   0.0.0.0/0
        Chain OUTPUT (policy ACCEPT)
        num  target               prot opt source      destination
        Chain RH-Firewall-1-INPUT (2 references)
        ....Snip....

    Starting or restarting iptables indicates that the rules have been loaded to the kernel successfully:

        [Prompt]# service iptables restart
        Flushing firewall rules:                                   [  OK  ]
        Setting chains to policy ACCEPT: filter                    [  OK  ]
        Unloading iptables modules:                                [  OK  ]
        Applying iptables firewall rules:                          [  OK  ]
        Loading additional iptables modules: ip_conntrack_netbios_n[  OK  ]
        [Prompt]# service iptables start
        Flushing firewall rules:                                   [  OK  ]
        Setting chains to policy ACCEPT: filter                    [  OK  ]
        Unloading iptables modules:                                [  OK  ]
        Applying iptables firewall rules:                          [  OK  ]
        Loading additional iptables modules: ip_conntrack_netbios_n[  OK  ]

    I've googled "Firewall is stopped." and read a number of iptables guides as well as the RHEL documentation, but no luck. As far as I can tell, there isn't a "Firewall" service, so what is the line "Firewall is stopped." referring to?
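    For context on where such a line can come from: service --status-all simply walks the init scripts and calls each one with the status argument, so "Firewall is stopped." is printed by whichever script chooses those words. A hedged sketch of the idea, plus a way to track the string down if it lives in a script rather than a binary:

        # Roughly what 'service --status-all' does on RHEL/CentOS 5 (simplified)
        for s in /etc/init.d/*; do
            "$s" status 2>/dev/null
        done
        # Find which init script (if any) emits the mystery message
        grep -rl "Firewall is stopped" /etc/init.d /etc/rc.d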

    Read the article

  • Windows server detected error with hard disk

    - by user53864
    We are hosting on Windows Server 2008 R2, and I am working as the admin in a small company. The server is hanging and restarting, as the hard disk seems to have been damaged by power fluctuation (though we have an inverter); it shows the below error message on server reboot:

        Problem detected with the hard disk
        Press any key to continue

    It's a Seagate 1TB SATA hard disk, and it boots after pressing Enter, so it's clear the disk is dying. Yes, it's under warranty, but the warranty won't recover the licensed Windows Server 2008 and its data. As it still boots, I backed up the required things, and I am thinking of cloning the entire hard disk. The first thing that struck me was to check the Seagate site for a cloning tool, and I found Seagate DiskWizard, but it is not specified for Windows Server 2008. Please could anybody help me with your best ideas for the following:

    1. Urgently, what's the best way (free of cost) for me to clone the disk in my case onto a new disk of the same size?
    2. It's a one-time license, and I cannot use the same key again if I reinstall the server. Will the license carry over to the new disk if cloned? Or is there a way to contact Microsoft, explain the problem that occurred, and obtain a new key at no charge?
    3. I want to take measures for the future. How do I keep two disks in continuous sync? Are mirroring and RAID (converting the disks to dynamic) the only options, or is there a better way I could do this with no additional charge?

    Any help is greatly appreciated. Thank you!

    EDIT:1 I started cloning the disk with Clonezilla, and it was going properly, showing progress in the GUI. But after some time there was no GUI, just a black screen with some codes (they look like disk location numbers) going page by page (I have attached the screenshots below, captured from my phone). Do you people think it's actually cloning? I started in the morning and it's evening now. I have left the office to let it finish what it's trying to do, and I'll go and check it tomorrow. I have slowly lost hope and don't know what face it's going to show tomorrow. Any ideas?
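    For reference, a raw disk-to-disk copy can also be done from any Linux live CD with dd; a hedged sketch, assuming /dev/sda is the failing source and /dev/sdb the new, same-sized disk (verify the device names with fdisk -l first, because reversing them destroys the source):

        # Confirm which disk is which before copying
        fdisk -l
        # Bit-for-bit copy; conv=noerror,sync keeps going past read errors on a dying disk
        dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync

    Because the copy is bit-for-bit, the partitions, boot sector and Windows activation data come across unchanged, which is why a clone onto an otherwise identical machine usually boots and stays activated, though that is not a licensing guarantee.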

    Read the article

  • Why would a process monitoring script use exit 1; on finding no problems?

    - by user568458
    General question: On a Linux (CentOS) server, if a process monitoring script run by cron is set to close with exit 1; rather than exit 0; on finding that everything is okay and that no action is needed, is that a mistake? Or are there legitimate reasons for calling exit 1; instead of exit 0; on the "everything's fine, no action needed" condition? exit 0; on finding no problems seems to me to be more appropriate, but maybe there's something I'm not aware of. For example, maybe there's something specific to cron? Or maybe there's a convention in process monitoring scripts that 'failure' means 'this script failed to need to fix a problem' (rather than what I would expect, which is that exit 1; would mean 'the process being monitored has failed')?

    My specific case: I'm looking at a process monitoring script written by my web hosting company. By process monitoring script, I mean a script executed by cron on a regular basis that checks if an important system process is running and, if it isn't running, takes actions such as mailing an administrator or restarting the process. Here's the (generalised) structure of their script, for a service running on port 8080 (in this case, Apache Tomcat):

        SERVICE=$(/usr/sbin/lsof -i tcp:8080 | wc -l);
        if [ $SERVICE != 0 ]; then
            exit 1;
        else
            #take action
        fi

    Seems simple enough, even for someone with limited knowledge like me, except the exit 1; part seems odd. As I understand it, exit 0; closes a program and signifies to the parent that executed it that everything is fine, while exit n; where n > 0 and n < 127 signifies that there has been some kind of error or problem. Here, their script seems to go against that rule: it calls exit 1; in the condition where everything is fine, and doesn't exit after taking remedial action in the problem condition. To me, this looks like a mistake, but my experience in this area is limited. Are there cases where calling exit 1; in the "everything's fine, no action needed" condition is more appropriate than calling exit 0;? Or is it a mistake?

    The wider context is pretty simple. It's a CentOS VPS, running Plesk. The script is being called by cron via Plesk's "Scheduled tasks" cron manager. There's no custom layer between cron and this script that would respond in an unusual way to the exit call. It's a fairly average, almost out-of-the-box Plesk-managed CentOS VPS (in so far as there is such a thing). The process being monitored by this script is Apache Tomcat.
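    For comparison, the conventional exit semantics the poster describes would look like the following sketch (an illustration of the convention, not the host's actual script); lsof itself exits 0 when it finds a listener and non-zero otherwise, which makes the test simpler:

        #!/bin/sh
        # Convention: exit 0 = all fine; non-zero = something went wrong.
        if /usr/sbin/lsof -i tcp:8080 >/dev/null 2>&1; then
            exit 0   # the service is listening: report success
        else
            # take remedial action here (restart, mail the admin, ...),
            # then exit non-zero so cron or a wrapper can notice the event
            exit 1
        fi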

    Read the article

  • debugging JBoss 100% CPU usage

    - by Nate
    We are using JBoss to run two of our WARs. One is our web app, the other is our web service. The web app accesses a database on another machine and makes requests to the web service. The web service makes JMS requests to other machines, aggregates the data, and returns it.

    At our biggest client, about once a month the JBoss Java process takes 100% of all CPUs. The machine running JBoss has 8 CPUs. Our web app is still accessible during this time; however, pages take about 3 minutes to load. Restarting JBoss restores everything to normal. The database machine and all the other machines are fine; only the machine running JBoss is affected. Memory usage is normal, network utilization is normal, and there are no suspect error messages in the JBoss logs.

    I have set up a test environment as close as possible to the client's production environment, and I've done load testing with as much as 2x the number of concurrent users, but I have not gotten my test environment to replicate the problem. Where do we go from here? How can we narrow down the problem?

    Currently the only plan we have is to wait until the problem occurs in production on its own, then do some debugging to determine the cause. So far people have just restarted JBoss when the problem occurred, to minimize downtime. Next time it happens they will get a developer to take a look. The question is: next time it happens, what can be done to determine the cause?

    We could set up a separate JBoss instance on the same box and install the web app separately from the web service. This way, when the problem next occurs, we will know which WAR has the problem (assuming it is our code). This doesn't narrow it down much, though.

    Should I enable JMX remote? That way, the next time the problem occurs, I can connect with VisualVM and see which threads are taking the CPU and what the hell they are doing. However, is there a significant downside to enabling JMX remote in a production environment? Is there another way to see what threads are eating the CPU and to get a stacktrace to see what they are doing? Any other ideas? Thanks!
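    For the "which threads are eating the CPU" part, there is a standard Linux approach that needs no remote JMX; a hedged sketch (the pgrep pattern is an assumption about how the JBoss process is named on the box):

        # Find the JBoss java process
        PID=$(pgrep -f org.jboss.Main)
        # Show per-thread CPU usage inside that process
        top -H -p "$PID"
        # Take a thread dump; jstack ships with the JDK
        jstack "$PID" > threads.txt
        # Or send SIGQUIT: the dump goes to JBoss's stdout log and the JVM keeps running
        kill -3 "$PID"
        # Convert a hot thread id from top (e.g. 12345) into hex to match the nid= field in the dump
        printf '0x%x\n' 12345

    Since kill -3 only requests a thread dump and does not stop the JVM, this is safe to script ahead of time and hand to whoever is on call for the next occurrence.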

    Read the article

  • A Windows Update Prevented Live Discs From Working?

    - by user88311
    First off, I'll state my system specs:

    - Acer Aspire M1100
    - Windows Vista Home Basic 32-bit OEM
    - 2 GB of DDR2 RAM
    - 160 GB hard drive
    - 2.7 GHz AMD Athlon 64 processor
    - ATI Radeon X1250 graphics card

    A few days ago my computer ran automatic updates and updated Windows Defender to KB915597 (Definition 1.135.415.0). After that, when shutting down and starting up, I would receive a BSOD with the information BUGCODE_USB_DRIVER and 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x877330000), after which my computer would not start up with any USB devices plugged in, and it always required me to run Startup Repair before it started.

    When I first started it up and was able to fully boot Windows, I had no use of the mouse, so I was unable to install the fix that the Windows Solutions Center brought up on my screen. So I restarted again and installed the fix, hoping it would stop the problem; it did not. After installing the fix and restarting, I was confronted with the BSOD 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x83291000), at which point I found that Startup Repair could not fix the problem, and I restored, as I most likely should have done in the first place.

    After going through that, I read that simply installing the latest Defender version from the Microsoft site had fixed this problem for others, so I did that, only to find I still received the BSODs. So, in an attempt to find a fix, I went to the Microsoft Answers site, where I was told to simply disable Defender and reboot to see if that fixed the problem. Upon doing this, my computer would no longer even start up: when I boot normally, it restarts just as the loading screen finishes, and when I run Startup Repair, it runs for about 15 seconds and then the computer restarts as well.

    I have tried running Ubuntu live discs in order to simply access the drive and copy over the two-month-old physical backup I have of my C drive, but whenever I run the live OS, the computer again restarts just as the load screen finishes and it is about to boot. If I put in a GParted disc, I am able to boot it fully, although it does not give me access to the file system, just partition managing, and when I attempt to access the internet through it, the computer once again restarts.

    So my question is: how could the update, and whatever has happened since, prevent me from running live OSes properly?

    Read the article

  • Remote Debian System Preventing Logon

    - by choobablue
    I have a dozen or so single-board computers on a network running Debian (squeeze), and I access them via ssh (the ssh server is dropbear). To give an idea of the hardware: they're 1.2 GHz x86 processors, 1 GB of RAM and 4 GB flash drives formatted as ext2 (I avoided ext3 to prevent the added flash write stress from journaling); there is also a swap partition on the drive.

    Normally this setup works great and I can access all the computers. Every once in a while, one will prevent access. What happens is: I try to connect via ssh (PuTTY) and it gives me the login prompt; I enter the username and password and it responds 'Access Denied'. It will also refuse any public key in ~/.ssh/authorized_keys. The credentials are correct, as they worked previously. The computer responds to pings, and PuTTY recognizes the server's public key, which implies to me that the system is still running. Restarting the server fixes the problem and I can log in again. (I tried a temporary fix of putting shutdown -r now in the root crontab, but this doesn't seem to run reliably once the hang happens.) Once I restart, there doesn't seem to be any information in any of the system logs to indicate what happened; the logs are simply empty for that time period, as if the system had crashed.

    There is some custom software running on the system which appears to stop working (which is why I wanted to ssh to begin with). I'm assuming that this program is the source of the problems, but I'm unsure of how it would cause them and how to debug what is happening. The most likely explanation I can think of is a memory leak in the other program that then prevents dropbear from spawning a new login shell (and crontab from executing shutdown) because there is not enough free memory. But looking at the memory usage of the other (working) computers, there doesn't seem to be any meaningful increase to indicate a leak (unless it's a very big, fast-acting and rare leak). I would think that when the OS ran out of memory it would restart the system or kill processes (the Linux kernel restarts, right?).

    The other thing I wonder about is whether running off a flash drive could have some effect, especially the swap partition (which I think I should remove to prevent wear on the flash), but the flash drives are young (~1 month) and I don't think wear would be a factor yet.

    Does anybody have an idea of what could cause these symptoms, whether it could be a memory leak, or something else I haven't thought of? And does anybody know of a method to debug the problem and find out more information about what's going wrong?
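    One low-tech way to test the memory-leak hypothesis on machines that can't be inspected after the hang is to have cron append memory statistics to a log, so the state just before the failure is preserved on disk; a hedged sketch (the log path and interval are assumptions):

        # /etc/cron.d/memlog: record free memory and the biggest processes every 5 minutes
        */5 * * * * root { date; free -m; ps axo pid,rss,comm --sort=-rss | head -5; } >> /var/log/memlog 2>&1

    After the next forced reboot, the tail of /var/log/memlog would show whether memory was exhausted, and grepping the kernel log for oom-killer entries (dmesg, /var/log/kern.log) would confirm or rule out the OOM killer.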

    Read the article

  • Windows 7 immediately disconnects a USB drive

    - by Daniel Saner
    I am having a problem with Windows 7 x64 consistently disconnecting one specific USB mass-storage drive immediately after it is connected. The drive in question is a Cowon C2 digital music player, which works in standard mass-storage controller mode (i.e. no device-specific drivers are needed or available).

    When I connect the player, Windows plays the "USB connect" sound and the device appears (under its correct name) in Device Manager, but it never appears as a drive. The player itself displays "USB Connected" for a split second before reporting that it has been disconnected again. Since the player, by design, reboots after it has been disconnected, Windows plays the "USB disconnect" sound before the whole cycle restarts once the player has powered back on.

    I am connecting the player through an Intel X79 chipset motherboard (Gigabyte GA-X79-UD3) to Windows 7 Pro 64-bit. The player used to work fine the first few times I connected it, showing up as an external drive; it only recently stopped working. It is not a problem with the player, since it works fine when connected to another computer, even one running the exact same operating system. It is also not a problem with the USB controller, since the issue is the same on both the Intel USB 2.0 and the Fresco Logic FL1009 USB 3.0 controller ports. I have also not had the problem with any other drive so far.

    Among the things I have tried so far:

    - Disabling USB legacy mode in the BIOS
    - Disabling energy-saving power-down for all USB controllers in Windows' Device Manager
    - Removing and reinstalling Windows' USB mass-storage driver
    - Removing and reinstalling the Intel and Fresco Logic USB controller drivers
    - Restoring the player to factory defaults

    None of these made a difference. Again, the player used to work fine on the exact same system just days ago, and I didn't install any new hardware or drivers on it since then. I would be very grateful for any hints on what else to try.

    Edit: Here is another new hint. I found out that when I connect the drive before booting Windows, it is available in Windows Explorer as it should be, and does not automatically disconnect. If I remove and reconnect it, though, the infinite connect/disconnect loop starts anew.

    Read the article

  • Diagnose PC Hardware Problems with an Ubuntu Live CD

    - by Trevor Bekolay
    So your PC randomly shuts down or gives you the blue screen of death, but you can't figure out what's wrong. The problem could be bad memory or hardware related, and thankfully the Ubuntu Live CD has some tools to help you figure it out.

    Test your RAM with memtest86+

    RAM problems are difficult to diagnose: they can range from annoying program crashes to crippling reboot loops. Even if you're not having problems, it's a good idea to thoroughly test new RAM when you install it. The Ubuntu Live CD includes a tool called Memtest86+ that will do just that: test your computer's RAM! Unlike many of the Live CD tools that we've looked at so far, Memtest86+ has to be run outside of a graphical Ubuntu session. Fortunately, it only takes a few keystrokes.

    Note: If you used UNetbootin to create an Ubuntu flash drive, then memtest86+ will not be available. We recommend using the Universal USB Installer from Pendrivelinux instead (persistence is possible with Universal USB Installer, but not mandatory).

    Boot up your computer with an Ubuntu Live CD or USB drive. At the boot menu, use the down arrow key to select the Test memory option and hit Enter. Memtest86+ will immediately start testing your RAM. If you suspect that a certain part of memory is the problem, you can select certain portions of memory by pressing "c" and changing that option. You can also select specific tests to run. However, the default settings of Memtest86+ will exhaustively test your memory, so we recommend leaving the settings alone. Memtest86+ will run a variety of tests that can take some time to complete, so start it running before you go to bed to give it adequate time.

    Test your CPU with cpuburn

    Random shutdowns, especially when doing computationally intensive tasks, can be a sign of a faulty CPU, power supply, or cooling system. A utility called cpuburn can help you determine if one of these pieces of hardware is the problem.

    Note: cpuburn is designed to stress test your computer. It will run it hard and cause the CPU to heat up, which may exacerbate small problems that otherwise would be minor. It is a powerful diagnostic tool, but should be used with caution.

    Boot up your computer with an Ubuntu Live CD or USB drive, and choose to run Ubuntu from the CD or USB drive. When the desktop environment loads, open the Synaptic Package Manager by clicking on the System menu in the top-left of the screen, then selecting Administration, and then Synaptic Package Manager. Cpuburn is in the universe repository. To enable the universe repository, click on Settings in the menu at the top, and then Repositories; add a checkmark in the box labeled "Community-maintained Open Source software (universe)" and click Close. In the main Synaptic window, click the Reload button. After the package list has reloaded and the search index has been rebuilt, enter "cpuburn" in the Quick search text box, click the checkbox in the left column, select Mark for Installation, and click the Apply button near the top of the window.

    As cpuburn installs, it will caution you about the possible dangers of its use. Assuming you wish to take the risk (and if your computer is randomly restarting constantly, it's probably worth it), open a terminal window by clicking on the Applications menu in the top-left of the screen and then selecting Accessories > Terminal. Cpuburn includes a number of tools to test different types of CPUs.
    If your CPU is more than six years old, see the full list; for modern AMD CPUs, use the terminal command burnK7, and for modern Intel processors, use the terminal command burnP6. Our processor is an Intel, so we ran burnP6. Once it started, it immediately pushed the CPU up to 99.7% total usage, according to the Linux utility "top". If your computer has a CPU, power supply, or cooling problem, then it is likely to shut down within ten or fifteen minutes. Because of the strain this program puts on your computer, we don't recommend leaving it running overnight; if there's a problem, it should crop up relatively quickly. Cpuburn's tools, including burnP6, have no interface; once they start running, they will keep driving your CPU until you stop them. To stop a program like burnP6, press Ctrl+C in the terminal window that is running it.

    Conclusion

    The Ubuntu Live CD provides two great testing tools to diagnose a tricky computer problem, or to stress test a new computer. While they are advanced tools that should be used with caution, they're extremely useful and easy enough that anyone can use them.
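    If you want to load every core rather than just one (the article runs a single instance), a hedged sketch; lm-sensors is an assumption and may need to be installed on the live session first:

        # One burner per core, e.g. two for a dual-core Intel machine
        burnP6 & burnP6 &
        # Watch temperatures update every 5 seconds while the burners run
        watch -n 5 sensors
        # Stop all burners when done
        killall burnP6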

    Read the article

  • How To Replace Notepad in Windows 7

    - by Trevor Bekolay
    It used to be that Notepad was a necessary evil because it started up quickly and let us catch a quick glimpse of plain text files. Now, there are a bevy of capable Notepad replacements that are just as fast but also have great feature sets. Before following the rest of this how-to, ensure that you're logged into an account with Administrator access.

    Note: The following instructions involve modifying some Windows system folders. Don't mess anything up while you're in there! If you follow our instructions closely, you'll be fine.

    Choose your replacement

    There are a ton of great Notepad replacements, including Notepad2, Metapad, and Notepad++. The best one for you will depend on what types of text files you open and what you do with them. We're going to use Notepad++ in this how-to. The first step is to find the executable file that you'll replace Notepad with. Usually this will be the only file with the .exe extension in the folder where you installed your text editor. Copy the executable file to your desktop and try to open it, to make sure that it works when opened from a different folder. In the Notepad++ case, a special little .exe file is available for the explicit purpose of replacing Notepad; if we run it from the desktop, it opens up Notepad++ in all its glory.

    Back up Notepad

    You will probably never go back once you switch, but you never know. You can back up Notepad to a special location if you'd like, but we find it easiest to just keep a backed-up copy of Notepad in the folders it was originally located in. In Windows 7, Notepad resides in:

    - C:\Windows
    - C:\Windows\System32
    - C:\Windows\SysWOW64 (in 64-bit versions only)

    Navigate to each of those directories and copy Notepad, then paste it into the same folder. If prompted, choose to Copy, but keep both files. You can keep your backup as "notepad (2).exe", but we prefer to rename it to "notepad.exe.bak". Do this for all of the folders that have Notepad (2 total for 32-bit Windows 7, 3 total for 64-bit).

    Take control of Notepad and delete it

    Even if you're on an administrator account, you can't just delete Notepad; Microsoft has made some security gains in this respect. Fortunately for us, it's still possible to take control of a file and delete it without resorting to nasty hacks like disabling UAC.

    Navigate to one of the directories that contain Notepad. Right-click on it and select Properties. Switch to the Security tab, then click on the Advanced button. Note that the owner of the file is a user called "TrustedInstaller". You can't do much with files owned by TrustedInstaller, so let's take control of it. Click the Edit... button, select the desired owner (you could choose your own account, but we're going to give any Administrator control) and click OK. You'll get a message that you need to close and reopen the Properties window to edit permissions; before doing that, confirm that the owner has changed to what you selected. Click OK, then OK again to close the Properties window.

    Right-click on Notepad and click on Properties again. Switch to the Security tab and click on Edit.... Select the appropriate group or user name in the list at the top, then add a checkmark in the checkbox beside Full control in the Allow column. Click OK, then Yes to the dialog box that pops up, then OK again to close the Properties window. Now you can delete Notepad, by either selecting it and pressing Delete on the keyboard, or right-clicking it and clicking Delete. You're now free from Notepad's foul clutches!
    Repeat this procedure for the remaining folders (or folder, on 32-bit Windows 7).

    Drop in your replacement

    Copy your Notepad replacement's executable, which should still be on your desktop. Browse to the two or three folders listed above and copy your .exe to those locations. If prompted for Administrator permission, click Continue. If your executable file was named something other than "notepad.exe", rename it to "notepad.exe". Don't be alarmed if the thumbnail still shows the old Notepad icon. Double-click on Notepad and your replacement should open. To make doubly sure that it works, press Win+R to bring up the Run dialog box, enter "notepad" into the text field, and press Enter or click OK. Now you can allow Windows to open files with Notepad by default with little to no shame! All without restarting or having to disable UAC!
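    The ownership-and-permissions dance above can also be done from an elevated command prompt; a hedged command-line equivalent (not from the article itself, shown for the System32 copy only):

        REM Take ownership of Notepad, grant Administrators full control, then replace it
        takeown /f C:\Windows\System32\notepad.exe
        icacls C:\Windows\System32\notepad.exe /grant Administrators:F
        copy /y %USERPROFILE%\Desktop\notepad.exe C:\Windows\System32\notepad.exe

    The same three commands would be repeated for C:\Windows and, on 64-bit systems, C:\Windows\SysWOW64.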

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read 2005 and 2008 Profiler traces stored as .trc files into the Data Flow. From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; the adapter just uses the trace file.

    Example usages:

    - Cache warming for SQL Server Analysis Services
    - Reading the flight recorder
    - Finding the longest-running queries on a server
    - Analyzing statements for CPU or memory by user, or some other criteria you choose

    Properties

    The Trace File Source adapter has two properties, which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported, for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio.

    - AccessMode (Enumeration): determines how the Filename property is interpreted. The available values are DirectInput and Variable.
    - Filename (String): holds the path of the trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property.

    Trace Column Definition

    Hopefully the majority of you can skip this section entirely, but if you encounter problems processing a trace file, this may explain why and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately, the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files that enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string.

    - TraceDataColumnsSQL.xml: SQL Server Database Engine trace columns
    - TraceDataColumnsAS.xml: SQL Server Analysis Services trace columns

    The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g.

        "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml"
        "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml"

    If at runtime the component encounters a type conversion or sizing error, it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. While most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files, enabling you to fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below:

        Buffer exception writing value to column 'Column Name'. The string value is 999 characters
        in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML
        files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".

    Installation

    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache, as per Microsoft's recommendations.
    You may need to restart the SQL Server Integration Services service, as this caches information about which components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally, you will have to add the transformation to the Visual Studio toolbox manually: right-click the toolbox and select Choose Items..., select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry "How do I install a task or transform component?"

    We recommend you follow best practice and apply the current Microsoft SQL Server service pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host, you need to ensure you have the 32-bit (x86) tools available and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details.

    Downloads

    Trace Sources for SQL Server 2005
    Trace Sources for SQL Server 2008

    Version History

    SQL Server 2008: Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009)
    SQL Server 2005: Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008)

    Read the article

  • Help Protect Your Children with the CEOP Enhanced Internet Explorer 8

    - by Asian Angel
    Do you want to make Internet Explorer safer and more helpful for you and your family? Then join us as we look at the CEOP (Child Exploitation and Online Protection Centre) enhanced version of Internet Explorer 8.

    Setting CEOP Up

    We chose to install the whole CEOP pack in order to have access to the complete set of CEOP tools. The install process is comprised of two parts: it begins with CEOP-branded windows showing the components being installed, then moves to the traditional Microsoft Internet Explorer 8 install windows.

    Note: The components can be downloaded separately for those who only want certain CEOP components added to their browser.

    One thing we did notice is that here you are told you will need to restart your computer, while other windows mention a log-off/log-on process. Just to make certain that everything goes smoothly, we recommend restarting your computer when the installation process is complete. In the EULA section you can see the versions of Windows that the CEOP pack works with. Once you get past the traditional Microsoft install windows, you will be dropped back into the CEOP-branded windows.

    CEOP in Action

    After you have restarted your computer and opened Internet Explorer, you will notice that your homepage has been changed. When it comes to your children, that is not a bad thing in this instance, and it will give you an opportunity to look through the CEOP online resources.

    For the moment you may be wondering where everything is, but do not worry. First, you can find the two new search providers in the drop-down menu for your Search Bar, and select a new default if desired. The second thing to look for are the new links that have been added to your Favorites menu; these links can definitely be helpful for you and your family. The third part requires your Favorites Bar to be visible in order to see the "Click CEOP" button. If you have not previously done so, you will need to turn on subscribing for Web Slices; click on "Yes" to finish the subscription process.

    Clicking on the CEOP button will show all kinds of links providing information for you and your children. Notice that the top part is broken down into topic categories while the bottom part is set up for age brackets: very nice for helping you focus on the information that you want and/or need. Looking for information and help on a particular topic? Clicking on the "Cyberbullying" link, for example, opens a webpage with information about cyberbullying and a link to get help with the problem. Need something focused on your child's age group? Clicking on the "8-10?" link, as an example, opens a page for that bracket. Want information focused on you? The "Parent?" link leads to its own page. The topic categories and age brackets make the CEOP button a very helpful and family-friendly addition to Internet Explorer.

    Perhaps you (or your child) want to conduct a search for something that is affecting your child. As you type in a search term, both of the search providers will offer helpful suggestions for dealing with the problem. We felt that these were very nice suggestions in both instances.

    Conclusion

    We have been able to give you a good peek at what the CEOP tools can do, but the best way to see how helpful they can be for you and your family is to try them for yourself. Your children's safety and happiness is worth it.
    Links

    Download the Internet Explorer CEOP Pack (link at bottom of webpage)

    Note: If you are interested in a singular component or only some, use these links:

    Download the Click CEOP Button
    Download Search CEOP
    Download Internet Safety and Security Search

    Read the article

  • Debugging Tips for Skinning

    - by Christian David Straub
    Another guest post by Jeanne Waldman. If you are developing a skin for your Fusion Application in JDeveloper you should know these tips:
    1. Firebug is your friend
    2. Uncompress the css style classes
    3. CHECK_FILE_MODIFICATION so that you see your skinning changes right away
    4. View the generated CSS file

    1. Firebug is your friend
    Install Firebug (http://getfirebug.com/layout) into Firefox and use it to view your rendered jspx page in the browser. You can select the HTML dom nodes on your page and see the css styles applied to each dom node.

    2. Uncompress the css style classes
    By default the styleclasses that are rendered are compressed. You may see style classes like class="x10" and class="x2e", but in your skin css file you have styleclasses like af|inputText::content or af|panelBox::header (a short illustrative rule appears at the end of this entry). It is easier for you to develop and debug a skin with Firebug if you see the uncompressed styleclasses. To do this:
    a. open web.xml
    b. add
    <context-param>
      <param-name>org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION</param-name>
      <param-value>true</param-value>
    </context-param>
    c. save
    d. restart the server and re-run your page.

    3. CHECK_FILE_MODIFICATION so that you see your skinning changes right away
    For performance's sake the ADF Faces framework does not check whether your skin .css file has changed on every render. But this is exactly what you want to happen when you are developing or debugging a skin: you want your changes to get noticed right away, without restarting the server. To do this:
    a. open web.xml
    b. add
    <context-param>
      <description>If this parameter is true, there will be an automatic check of the modification date of your JSPs, and saved state will be discarded when JSP's change. It will also automatically check if your skinning css files have changed without you having to restart the server. This makes development easier, but adds overhead. For this reason this parameter should be set to false when your application is deployed.</description>
      <param-name>org.apache.myfaces.trinidad.CHECK_FILE_MODIFICATION</param-name>
      <param-value>true</param-value>
    </context-param>
    Note that the value must be true for the check to happen; set it back to false when you deploy.
    c. save
    d. restart the server and re-run your page.
    e. from then on, you can change your skin's .css file, save it, refresh your page and see the changes right away.

    4. View the generated CSS file
    There are different ways to view the generated CSS file, which is your skin's css file merged with all the skins it extends, processed, generated to the filesystem and linked to your generated html page. One way is to view it with Firebug. The problem with this approach is that you might see something a little different from the actual css file, because Firebug may do some extra processing. I like to view the generated css file by: Right click on your page in the browser
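    To make tip 2 concrete, here is a minimal sketch of what rules in a skin .css file can look like, using the two selectors named above; the property values are arbitrary placeholders, not taken from the article:

    /* skin the content area of af:inputText - illustrative values only */
    af|inputText::content { background-color: #FFFFE0; border: 1px solid #CCCCCC; }
    /* skin the header of af:panelBox - illustrative values only */
    af|panelBox::header { font-weight: bold; }

    With content compression disabled (tip 2) you will see recognizable style class names rather than compressed ones like x10 in Firebug, which makes it much easier to trace a rendered style back to the skin rule that produced it.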

    Read the article

  • Windows Phone 7 development: first impressions

    - by DigiMortal
    After a hard week at work I got some free time to play with the Windows Phone 7 CTP developer tools. Although my first test application is still unfinished, I think it is a good moment to share my first experiences with you. In this posting I will give you a quick overview of the Windows Phone 7 developer tools from a developer's perspective. If you are familiar with Visual Studio 2010 then you will feel comfortable, because the Windows Phone 7 CTP developer tools are based on Visual Studio 2010 Express.

    Project templates
    There are five project templates available. Three of them are based on Silverlight and two on XNA Game Studio:
    - Windows Phone Application (Silverlight)
    - Windows Phone List Application (Silverlight)
    - Windows Phone Class Library (Silverlight)
    - Windows Phone Game (XNA Game Studio)
    - Windows Phone Game Library (XNA Game Studio)
    Currently I am writing two test applications. One of them is based on the Windows Phone Application project template and the other on the Windows Phone List Application project template. I suggest you use one of these templates to get started more easily.

    Windows Phone 7 emulator
    You can run your Windows Phone 7 applications on the Windows Phone 7 emulator that comes with the developer tools CTP. If you run your application, the emulator is started automatically and you can try out how your application works in a phone-like environment. The emulator is a little slow and uncomfortable, but it works pretty well. So far I have caused only a couple of crashes during my experiments. In these cases the emulator kept working, but Visual Studio got stuck because it could not communicate with the emulator. One important note: the emulator is based on a virtual machine, although you can see only the phone screen and an options toolbar. If you want to run the emulator you must close all virtual machines running on your machine and run Visual Studio 2010 as administrator. Once you run the emulator you can keep it open, because you can stop your application in Visual Studio, modify, compile and re-deploy it without restarting the emulator.

    Designing user interfaces
    You can design the user interface of your application in Visual Studio. When you open a XAML file it is displayed in a window with two panels. The left panel shows you the device screen and works as a visual design environment, while the right panel shows you the XAML mark-up and lets you modify the XAML if you need to. As this is one of my very first Silverlight applications, I felt more comfortable with the XAML editor, because the property names in the property boxes of the visual designer confused me a little. The designer panel is not very good because it is visually hard to follow. It has a black background that makes the dark borders of controls very hard to see. If you have a monitor with very high contrast then it may not be a real problem; I have a usual monitor and I have the problem. :) Putting controls on the design surface, dragging and resizing them is also pretty painful. Some controls are drawn correctly, but for some controls you have to set width and height in the XAML before they can be resized. After some practicing it is not so annoying anymore. You also get a toolbox with some controls. This is all you get out of the box, but it is sufficient to get started.
    After getting some experience you can create your own controls or use existing ones from other vendors or developers. If it is your first time doing stuff with Silverlight, then keep Google open - you will need it a lot. After getting over the first shock you get the point very quickly and start developing at normal speed. :)

    Writing source code
    Writing source code is the most familiar part of the process: the good old Visual Studio code editor with all the nice features it has. But here you also get some surprises:
    - The anatomy of Silverlight controls is a little different from that of user controls in web and forms projects.
    - Windows Phone 7 doesn't run on a full version of Windows (I bet it is some version of Windows CE or something like this), so there are fewer system classes you can use.
    - Some familiar classes have fewer methods than in the full version of the .NET Framework, and in these cases you have to write all the code yourself or find libraries or source code somewhere.
    These are really not so much problems as limitations, and you get over them easily.

    Conclusion
    The Windows Phone 7 CTP developer tools help you do a lot of things on Windows Phone 7. Although I expected better performance from the tools, I think the current performance is not a problem. So far my first test project is going very well and Google has an answer for almost every question. Windows Phone 7 is a mobile device and therefore it has fewer hardware resources than desktop computers. This is why the toolset is so limited. The more memory you need, the slower the device gets and, as you may guess, the more battery it uses. If you are writing apps for mobile devices, do your best to make your application use as few resources as possible and act as fast as possible.

    Read the article

  • Trash Destination Adapter

    The Trash Destination and this article came from early experiences of using SSIS and community feedback at the time. When developing a package it is very useful to have a destination adapter that does nothing but consume rows, with no setup required. You often want to run a package part way through development, or just add a path so you can set a Data Viewer. There are stock tasks that can be used, but with the Trash Destination all columns are treated as selected automatically (usage type of read-only), so the pipeline knows they are required. It is also obvious that this is for development or diagnostic purposes, and is clearly not a part of the functional design of the package. It is also ideal for just playing around and exploring concepts in SSIS, and is often used in conjunction with the Data Generator Source. Using these two components it is easy to set up a test of an expression in the Derived Column Transformation, for example. The Data Generator Source provides some dummy data, and the Trash Destination allows you to anchor the output path and set a Data Viewer to examine the results. It can also be used when performance tuning packages. It is a consistent and known quantity that has no external influences, so it is ideal as a destination when breaking the data flow into sections to isolate a bottleneck. The adapter is really simple to use and requires no setup. Simply drop it onto the pipeline designer and use it to terminate your data flow path.

    Installation
    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally, for 2005/2008, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trash Destination transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service pack to your SQL Server servers and workstations.

    Downloads
    The Trash Destination is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.
    Trash Destination for SQL Server 2005
    Trash Destination for SQL Server 2008
    Trash Destination for SQL Server 2012

    Version History
    SQL Server 2012
    Version 3.0.0.34 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
    SQL Server 2008
    Version 2.0.0.33 - SQL Server 2008 release. Includes support for upgrade of 2005 packages. RTM compatible, previously February 2008 CTP. (4 Mar 2008)
    Version 2.0.0.31 - SQL Server 2008 November 2007 CTP. (14 Feb 2008)
    SQL Server 2005
    Version 1.0.2.18 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006)
    Version 1.0.1.1 - SQL Server 2005 IDW 15 June CTP. Minor enhancements over v1.0.1.0. (11 Jun 2005)
    Version 1.0.1.0 - SQL Server 2005 IDW 14 April CTP. First Public Release. (30 May 2005)

    Troubleshooting
    Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try to use the component along the lines of The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object "Konesans ..." is not installed correctly on this computer, this usually indicates that the internal cache of SSIS components needs to be updated. This cache is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows, or you can restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. The full error message is shown below for reference:

    TITLE: Microsoft Visual Studio
    ------------------------------
    The component could not be added to the Data Flow task.
    Please verify that this component is properly installed.
    ------------------------------
    ADDITIONAL INFORMATION:
    The data flow object "Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=1.0.1.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc" is not installed correctly on this computer. (Microsoft.DataTransformationServices.Design)

    For 2005/2008, once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component? This is not necessary for SQL Server 2012 as the new SSIS toolbox automatically detects components. If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the task you are using and details of the SSIS tools installed.
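    If you prefer the command line, the same restart can be done from an elevated command prompt; a quick sketch, assuming the SQL Server 2008 service name (the name varies by version - MsDtsServer for 2005, MsDtsServer100 for 2008 - so check the Services applet for yours):

    rem Restart the SSIS service so it refreshes its cache of installed components
    net stop MsDtsServer100
    net start MsDtsServer100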

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read 2005 and 2008 Profiler traces stored as .trc files into the Data Flow. From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; this just uses the trace file.

    Example Usages
    - Cache warming for SQL Server Analysis Services
    - Reading the flight recorder
    - Finding the longest running queries on a server
    - Analyzing statements for CPU or memory, by user or some other criteria you choose

    Properties
    The Trace File Source adapter has two properties, which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio.
    - AccessMode (Enumeration): Determines how the Filename property is interpreted. The values available are DirectInput and Variable.
    - Filename (String): Holds the path of the trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property.

    Trace Column Definition
    Hopefully the majority of you can skip this section entirely, but if you encounter problems processing a trace file this may explain and allow you to fix them. The component is built upon the trace management API provided by Microsoft. Unfortunately the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string.
    TraceDataColumnsSQL.xml - SQL Server Database Engine Trace Columns
    TraceDataColumnsAS.xml - SQL Server Analysis Services Trace Columns
    The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g.
    "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml"
    "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml"
    If at runtime the component encounters a type conversion or sizing error, it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files, enabling you to fix any further issues arising from usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below.

    Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".

    Installation
    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations.
    You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details, and the example command at the end of this entry.

    Downloads
    Trace Sources for SQL Server 2005
    Trace Sources for SQL Server 2008

    Version History
    SQL Server 2008
    Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009)
    SQL Server 2005
    Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008)

    Screenshots
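    As a sketch of the 64-bit note above: on an x64 machine the 32-bit tools live under Program Files (x86), so one way to force a package to run in 32-bit mode is to invoke the 32-bit dtexec directly. The package path here is a made-up placeholder:

    rem Run the package with the 32-bit dtexec so 32-bit-only components such as this one can load
    "C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\DTExec.exe" /File "C:\SSIS\MyTracePackage.dtsx"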

    Read the article

  • Row Count Plus Transformation

    As the name suggests we have taken the current Row Count Transform that is provided by Microsoft in the Integration Services toolbox, recreated the functionality and extended upon it. There are two things about the current version that we thought could do with cleaning up:
    - Lack of a custom UI
    - You have to type the variable name yourself
    In the Row Count Plus Transformation we solve these issues for you. Another thing we thought was missing is the ability to calculate the time taken between components in the pipeline. An example usage would be that you want to know how many rows flowed between Component A and Component B and how long it took. Again we have solved this issue. Credit must go to Erik Veerman of Solid Quality Learning for the idea behind noting the duration. We were looking at one of his packages and saw that he was doing something very similar, but he was using a Script Component as a transformation. Our philosophy is that if you have to write or copy and paste the same piece of code more than once then you should be thinking about a custom component, and here it is.

    The Row Count Plus Transformation populates variables with the values returned from:
    - Counting the rows that have flowed through the path
    - Returning the time in seconds between when it first saw a row come down this path and when it saw the final row
    It is possible to leave both these boxes blank and the component will still work. All input columns are passed through the transformation unaltered; you are not permitted to change or add to the inputs or outputs of this component. Optionally you can set the component to fire an event, which happens during the PostExecute phase of the execution. This can be useful to improve visibility of this information, such that it is captured in package logging, or it can be used to drive workflow in the case of an error event.

    Properties
    - OutputRowCountVariable (String): The name of the variable into which the number of rows read will be passed (optional).
    - OutputDurationVariable (String): The name of the variable into which the duration in seconds will be passed (optional).
    - EventType (RowCountPlusTransform.EventType): The type of event to fire during post execute, included in which are the row count and duration values.

    RowCountPlusTransform.EventType Enumeration
    - None (0): Do not fire any event.
    - Information (1): Fire an Information event.
    - Warning (2): Fire a Warning event.
    - Error (3): Fire an Error event.

    Installation
    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. For 2005/2008 only - finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Row Count Plus Transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component?
    We recommend you follow best practice and apply the current Microsoft SQL Server Service pack to your SQL Server servers and workstations; this component requires a minimum of SQL Server 2005 Service Pack 1.

    Downloads
    The Row Count Plus Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.
    Row Count Plus Transformation for SQL Server 2005
    Row Count Plus Transformation for SQL Server 2008
    Row Count Plus Transformation for SQL Server 2012

    Version History
    SQL Server 2012
    Version 3.0.0.6 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
    SQL Server 2008
    Version 2.0.0.5 - SQL Server 2008 release. (15 Oct 2008)
    SQL Server 2005
    Version 1.1.0.43 - Bug fix for duration. For long running processes the duration second count may have been incorrect. (8 Sep 2006)
    Version 1.1.0.42 - SP1 Compatibility Testing. Added the ability to raise an event with the count and duration data for easier logging or workflow. (18 Jun 2006)
    Version 1.0.0.1 - SQL Server 2005 RTM. Made available as general public release. (20 Mar 2006)

    Screenshot

    Troubleshooting
    Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try to use the component along the lines of The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object "Konesans ..." is not installed correctly on this computer, this usually indicates that the internal cache of SSIS components needs to be updated. This cache is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows, or you can restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component?

    Read the article

  • Ubuntu 14.04 triple screen, third screen black and X cursor

    - by Horse
    I am having some issues getting my third screen working properly. I had triple screens working fine on 12.04, using 2 nvidia cards. Did a fresh install of 14.04 and am having no end of problems getting it working. The third screen either will just be disabled, or the screen is black with the cursor as an X. I can only enable it from the nvidia server settings tool. The Ubuntu native display settings won't even show the 3rd screen. I tried copying the xorg.conf from my old install, which upon restarting X worked fine on the login screen, but then it just sat there after I logged in and didn't do anything (the mouse was still working). I am using gnome-session-fallback instead of unity, if that makes any difference. I am still having these issues if I try unity though. How do I get my 3rd screen working and displaying a desktop?

    Here is my current xorg.conf:

    # nvidia-settings: X configuration file generated by nvidia-settings
    # nvidia-settings: version 331.20 (buildd@roseapple) Mon Feb 3 15:07:22 UTC 2014

    Section "ServerLayout"
        Identifier "Layout0"
        Screen 0 "Screen0" 0 0
        Screen 1 "Screen1" RightOf "Screen0"
        InputDevice "Keyboard0" "CoreKeyboard"
        InputDevice "Mouse0" "CorePointer"
        Option "Xinerama" "0"
    EndSection

    Section "Files"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier "Mouse0"
        Driver "mouse"
        Option "Protocol" "auto"
        Option "Device" "/dev/psaux"
        Option "Emulate3Buttons" "no"
        Option "ZAxisMapping" "4 5"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier "Keyboard0"
        Driver "kbd"
    EndSection

    Section "Monitor"
        # HorizSync source: edid, VertRefresh source: edid
        Identifier "Monitor0"
        VendorName "Unknown"
        ModelName "DELL 1907FP"
        HorizSync 30.0 - 81.0
        VertRefresh 56.0 - 76.0
        Option "DPMS"
    EndSection

    Section "Monitor"
        # HorizSync source: unknown, VertRefresh source: unknown
        Identifier "Monitor1"
        VendorName "Unknown"
        ModelName "DELL 1907FP"
        HorizSync 0.0 - 0.0
        VertRefresh 0.0
        Option "DPMS"
    EndSection

    Section "Monitor"
        Identifier "Monitor2"
        VendorName "Unknown"
        ModelName "DELL 1907FP"
        HorizSync 30.0 - 81.0
        VertRefresh 56.0 - 76.0
    EndSection

    Section "Device"
        Identifier "Device0"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "GeForce GTX 580"
        BusID "PCI:1:0:0"
    EndSection

    Section "Device"
        Identifier "Device1"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "GeForce GT 520"
        BusID "PCI:3:0:0"
    EndSection

    Section "Device"
        Identifier "Device2"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "GeForce GT 520"
        BusID "PCI:3:0:0"
    EndSection

    Section "Screen"
        # Removed Option "metamodes" "DVI-I-2: nvidia-auto-select +0+0, DVI-I-3: nvidia-auto-select +1280+0"
        # Removed Option "metamodes" "DVI-I-2: nvidia-auto-select +0+0"
        # Removed Option "SLI" "Off"
        # Removed Option "BaseMosaic" "off"
        # Removed Option "metamodes" "GPU-109d4eb8-b40b-87d7-3fd6-95830d1d5215.DVI-I-2: nvidia-auto-select +0+0, GPU-109d4eb8-b40b-87d7-3fd6-95830d1d5215.DVI-I-3: nvidia-auto-select +1280+0, GPU-82e96214-175e-5e6a-218c-5bdbc948daf2.DVI-I-1: nvidia-auto-select +3200+0"
        # Removed Option "SLI" "off"
        # Removed Option "BaseMosaic" "on"
        Identifier "Screen0"
        Device "Device0"
        Monitor "Monitor0"
        DefaultDepth 24
        Option "Stereo" "0"
        Option "nvidiaXineramaInfoOrder" "DFP-0"
        Option "metamodes" "DVI-I-2: nvidia-auto-select +0+0, DVI-I-3: nvidia-auto-select +1280+0"
        Option "SLI" "Off"
        Option "MultiGPU" "Off"
        Option "BaseMosaic" "off"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    Section "Screen"
        # Removed Option "metamodes" "nvidia-auto-select +0+0"
        # Removed Option "metamodes" "DVI-I-3: nvidia-auto-select +0+0"
        Identifier "Screen1"
        Device "Device1"
        Monitor "Monitor1"
        DefaultDepth 24
        Option "Stereo" "0"
        Option "metamodes" "nvidia-auto-select +0+0"
        Option "SLI" "Off"
        Option "MultiGPU" "Off"
        Option "BaseMosaic" "off"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    Section "Screen"
        Identifier "Screen2"
        Device "Device2"
        Monitor "Monitor2"
        DefaultDepth 24
        Option "Stereo" "0"
        Option "metamodes" "nvidia-auto-select +0+0"
        Option "SLI" "Off"
        Option "MultiGPU" "Off"
        Option "BaseMosaic" "off"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    Here is my old 'working in 12.04' xorg.conf:

    # nvidia-settings: X configuration file generated by nvidia-settings
    # nvidia-settings: version 310.19 ([email protected]) Thu Nov 8 02:08:55 PST 2012

    Section "ServerLayout"
        # Removed Option "Xinerama" "0"
        Identifier "Layout0"
        Screen 0 "Screen0" 0 0
        Screen 1 "Screen1" RightOf "Screen2"
        Screen 2 "Screen2" RightOf "Screen0"
        InputDevice "Keyboard0" "CoreKeyboard"
        InputDevice "Mouse0" "CorePointer"
        Option "Xinerama" "1"
    EndSection

    Section "Files"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier "Mouse0"
        Driver "mouse"
        Option "Protocol" "auto"
        Option "Device" "/dev/psaux"
        Option "Emulate3Buttons" "no"
        Option "ZAxisMapping" "4 5"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier "Keyboard0"
        Driver "kbd"
    EndSection

    Section "Monitor"
        # HorizSync source: edid, VertRefresh source: edid
        Identifier "Monitor0"
        VendorName "Unknown"
        ModelName "DELL 1907FP"
        HorizSync 30.0 - 81.0
        VertRefresh 56.0 - 76.0
        Option "DPMS"
    EndSection

    Section "Monitor"
        # HorizSync source: unknown, VertRefresh source: unknown
        Identifier "Monitor1"
        VendorName "Unknown"
        ModelName "DELL 1907FP"
        HorizSync 30.0 - 81.0
        VertRefresh 56.0 - 76.0
        Option "DPMS"
    EndSection

    Section "Monitor"
        Identifier "Monitor2"
        VendorName "Unknown"
        ModelName "Apple Cinema HD"
        HorizSync 74.0 - 74.6
        VertRefresh 59.9 - 60.0
    EndSection

    Section "Device"
        Identifier "Device0"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "GeForce GTX 580"
        BusID "PCI:1:0:0"
        Screen 0
    EndSection

    Section "Device"
        Identifier "Device1"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "GeForce GT 520"
        BusID "PCI:3:0:0"
    EndSection

    Section "Device"
        Identifier "Device2"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "GeForce GTX 580"
        BusID "PCI:1:0:0"
        Screen 1
    EndSection

    Section "Screen"
        # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+0, DFP-2: nvidia-auto-select +1280+0"
        Identifier "Screen0"
        Device "Device0"
        Monitor "Monitor0"
        DefaultDepth 24
        Option "Stereo" "0"
        Option "metamodes" "DFP-0: nvidia-auto-select +0+0; DFP-0: nvidia-auto-select +0+0; DFP-0: 1280x1024_75 +0+0; DFP-0: 1152x864 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_60 +0+0; DFP-0: nvidia-auto-select +0+0 {viewportout=1280x720+0+152}"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    Section "Screen"
        Identifier "Screen1"
        Device "Device1"
        Monitor "Monitor1"
        DefaultDepth 24
        Option "Stereo" "0"
        Option "metamodes" "nvidia-auto-select +0+0"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    Section "Screen"
        Identifier "Screen2"
        Device "Device2"
        Monitor "Monitor2"
        DefaultDepth 24
        Option "Stereo" "0"
        Option "metamodes" "DFP-2: nvidia-auto-select +0+0"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    Section "Extensions"
        Option "Composite" "Disable"
    EndSection

    Read the article

< Previous Page | 55 56 57 58 59 60 61 62 63 64  | Next Page >