Search Results

Search found 8589 results on 344 pages for 'pre production'.


  • Blank Screen After Login on Windows 7

    - by Leigh Riffel
    When unlocking a Windows 7 computer, the screen briefly (less than a second) goes blank before showing the desktop. About once a month (but sometimes within a few days), when I unlock my computer the screen doesn't come back from this brief blackout and stays black. Sometimes the display comes back after five minutes or so. Other times it has stayed blank for over 20 minutes, at which point I give up and restart the computer. It seems to happen more often when the computer has been locked for a longer period of time - I lock my computer several times a day, but the problem happens most often when I come in at the start of the day.

    Things I have checked or tried:

    - I have updated my video card and monitor drivers.
    - I have two monitors driven by the same graphics card. The computer is a Dell Optiplex 740.
    - When the problem occurs, both monitors have a green light on to indicate that they are receiving a signal.
    - I've tried unplugging the monitors from the video card and turning the monitors on and off.
    - The screen saver is set to one that is not a blank screen, and the Windows power settings are set to never turn off the display.
    - When the problem occurs, there is no significant disk activity occurring, and I can connect over the network to the hard drive on my computer.
    - I can't connect over the network with a VNC connection, though. The VNC client doesn't give an error, but also won't show the screen; the task actually seems to hang, as I can see the task but there is no window for it.
    - This problem occurred when I was running a pre-release version of Windows 7. The hard drive was reformatted for the release version and the problem still occurs.
    - I've been stopping some of my always-running programs to see if one of them may be the culprit, but given the span between failures, that will take some time - if it even helps at all. Programs I always have running include IE, Firefox, Outlook, Evernote, Kana Reminder, Ditto, Macro Express, TimeSnapper, and DisplayFusion.

    Any ideas appreciated.


  • How to connect SAN from CentOS through two iSCSI Targets

    - by garconcn
    I asked a similar question before; this time I want to use a separate subnet for each of the two iSCSI targets, hence the new question. I have an old Promise VTrak M500i SAN server that comes with 2 iSCSI ports. I want to connect to two LUNs on the SAN server through two separate targets from a CentOS 5.7 64-bit server. My network setup is as follows:

    CentOS server:
        Management network - 192.168.1.1
        Storage network 1  - 192.168.5.2
        Storage network 2  - 192.168.6.2

    Promise SAN server:
        Management network - 192.168.1.2
        iSCSI Port 1       - 192.168.5.1
        iSCSI Port 2       - 192.168.6.1

    I have two logical drives on this SAN, and they are mapped as follows:

        Index  Initiator Name        LUN Mapping
        0      iqn.2011-11:backup    (LD0,0)
        1      iqn.2011-11:template  (LD1,1)

    Basically, I want the traffic to iqn.2011-11:backup (LUN 0) to go through the 192.168.5.1 network, and the traffic to iqn.2011-11:template (LUN 1) to go through the 192.168.6.1 network. I don't use MPIO; I just want to separate the traffic to avoid a traffic jam. How do I achieve this? I am new to SAN stuff, so please explain in as much detail as you can. Thank you.

    The following is what I am doing now. After mapping the LUNs to my pre-defined initiators, the CentOS server can discover both targets:

        [root@centos ~]# iscsiadm -m discovery -t sendtargets -p 192.168.5.1
        192.168.5.1:3260,1 iscsi-1
        192.168.6.1:3260,2 iscsi-1
        [root@centos ~]# iscsiadm -m discovery -t sendtargets -p 192.168.6.1
        192.168.6.1:3260,2 iscsi-1
        192.168.5.1:3260,1 iscsi-1
        [root@centos ~]# /etc/init.d/iscsi start
        iscsid is stopped
        Starting iSCSI daemon:       [  OK  ]
                                     [  OK  ]
        Setting up iSCSI targets:
        Logging in to [iface: default, target: iscsi-1, portal: 192.168.6.1,3260]
        Logging in to [iface: default, target: iscsi-1, portal: 192.168.5.1,3260]
        Login to [iface: default, target: iscsi-1, portal: 192.168.6.1,3260] successful.
        Login to [iface: default, target: iscsi-1, portal: 192.168.5.1,3260] successful.
                                     [  OK  ]
        [root@centos ~]# iscsiadm -m session
        tcp: [1] 192.168.6.1:3260,2 iscsi-1
        tcp: [2] 192.168.5.1:3260,1 iscsi-1

    When I check the LUN mapping on the SAN server for the two logical drives, both LUNs are connected through Port0-192.168.5.2 with the initiator defined in CentOS:

        Assigned Initiator List:
        Initiator Name      Alias                IP Address         LUN
        iqn.2011-11.centos  centos.mydomain.com  Port0-192.168.5.2  0
        iqn.2011-11.centos  centos.mydomain.com  Port1-192.168.5.2  1

    I assume the following is what I want:

        Initiator Name        Alias                IP Address         LUN
        iqn.2011-11.backup    centos.mydomain.com  Port0-192.168.5.2  0
        iqn.2011-11.template  centos.mydomain.com  Port0-192.168.6.2  1
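
    A sketch of one way to pin each target to its own NIC with open-iscsi on CentOS 5, using bound iface records (the interface names eth1/eth2 are assumptions - substitute whichever devices carry 192.168.5.2 and 192.168.6.2):

        # create an iface record per storage NIC and bind it to the device
        iscsiadm -m iface -I iface5 --op=new
        iscsiadm -m iface -I iface5 --op=update -n iface.net_ifacename -v eth1
        iscsiadm -m iface -I iface6 --op=new
        iscsiadm -m iface -I iface6 --op=update -n iface.net_ifacename -v eth2

        # discover each portal only through its own iface
        iscsiadm -m discovery -t sendtargets -p 192.168.5.1 -I iface5
        iscsiadm -m discovery -t sendtargets -p 192.168.6.1 -I iface6

        # log in to each target through the matching portal/iface
        iscsiadm -m node -T iqn.2011-11:backup -p 192.168.5.1:3260 -I iface5 --login
        iscsiadm -m node -T iqn.2011-11:template -p 192.168.6.1:3260 -I iface6 --login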


  • Unable to start SQL Server Instance 2008 R2 - DB file corrupt

    - by Velu
    I was not able to start the SQL Server 2008 R2 production DB instance. After reading the log file, the error message is: "The log scan number passed to log scan in database 'master' is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication." After reading several posts I realized that my master DB file is corrupted, so I followed these steps:

    Copy the Master.mdf and Masterlog.ldf files from the template location to my database data folder:

        C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\Templates
        to
        D:\MSSQL\MSSQL10_50.MSSQLSERVER\MSSQL\DATA

    Note: the same error occurs when I copy all the DB files (Master, MasterLog, MSDBData, MSDBLog, Model and ModelLog).

    When I run my MSSQLSERVER instance, a different problem occurs. My server has only C: and D: drives - there is no E: drive. How can I override the paths in the errors below?

    Error log:

        2012-10-24 02:51:12.79 spid5s Error: 17204, Severity: 16, State: 1.
        2012-10-24 02:51:12.79 spid5s FCB::Open failed: Could not open file e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBData.mdf for file number 1. OS error: 3(The system cannot find the path specified.).
        2012-10-24 02:51:12.79 spid5s Error: 5120, Severity: 16, State: 101.
        2012-10-24 02:51:12.79 spid5s Unable to open the physical file "e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBData.mdf". Operating system error 3: "3(The system cannot find the path specified.)".
        2012-10-24 02:51:12.79 spid5s Error: 17207, Severity: 16, State: 1.
        2012-10-24 02:51:12.79 spid5s FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf'. Diagnose and correct the operating system error, and retry the operation.
        2012-10-24 02:51:12.79 spid5s File activation failure. The physical file name "e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf" may be incorrect.
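
    For what it's worth, those e:\sql10_main_t.obj.x86fre\... paths are the build-machine paths baked into the template master database, so a template master will always go looking for msdb and model there. One commonly documented approach - a sketch, not verified against this exact box; the data-folder path is the one quoted above - is to start the instance minimally with trace flag 3608 and repoint the system databases:

        REM start minimally; -T3608 skips recovery of everything but master
        net stop MSSQLSERVER
        sqlservr.exe -c -f -T3608

        REM from a second console, repoint msdb at its real files (repeat for model)
        sqlcmd -E -Q "ALTER DATABASE msdb MODIFY FILE (NAME = MSDBData, FILENAME = 'D:\MSSQL\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\MSDBData.mdf');"
        sqlcmd -E -Q "ALTER DATABASE msdb MODIFY FILE (NAME = MSDBLog, FILENAME = 'D:\MSSQL\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\MSDBLog.ldf');"

    If the instance still won't come up, rebuilding the system databases with setup.exe /ACTION=REBUILDDATABASE is the heavier, supported fallback.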


  • Outlook Anywhere inconsistencies with authentication methods

    - by gravyface
    So I've read this question and attempted just about every other workaround I've found online. The problem seems completely illogical to me. Anyways: SBS 2011, vanilla install; I haven't touched anything in IIS or Exchange outside of what's been done through the checklist (brand new domain, completely new customer), except to import an existing wildcard certificate for *.example.com (which is valid; Remote Web Workplace and Outlook Web Access work fine).

    On the two test machines and one production machine, running a mixture of Windows XP Pro, Windows 7 and Outlook 2003 through to 2010, I've had no problem saving the password after configuring Outlook Anywhere using the wrong authentication method. I repeat, I have had no issues using the wrong authentication method on these test machines; the password saves the first time, no problem. I can verify it exists in the credentials manager (Start > Run > control userpasswords2), close Outlook, reboot, go make a sammie, come back - credentials are still saved.

    When I say wrong, it's because I was choosing NTLM, and Exchange (under Exchange Console > Server Configuration > Client Access) was set by default to use Basic.

    On two completely different machines set up by a co-worker, they had (under my guidance) used NTLM as well... except that, frustratingly, Outlook would always ask for a password. One machine was Windows XP with Outlook 2010; the other was Windows 7 with Outlook 2003. When these two machines were set to use Basic - the correct setting - the option to save was there, and it now works without issue.

    Puzzled by how my machines could possibly work with the wrong authentication, I then went into one of them and changed the authentication method to Basic. Now here's where it gets a little crazy: if I go into Outlook and change the authentication to the correct setting (Basic), it fails to save the password, and Outlook prompts every time (without a "remember me" checkbox). I have not had a chance to change it to Basic on the other two machines to see if this is just a fluke or not, but something just isn't right here.

    My two hunches are either a missing/installed KB update or perhaps a local security policy. I should add that none of the 5 test machines in the equation here have ever been joined to the domain.
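
    For anyone comparing client and server settings side by side, the server's view is quick to pull from the Exchange Management Shell on SBS 2011 / Exchange 2010 (a sketch; the Identity string below is the usual default but is an assumption here):

        Get-OutlookAnywhere | fl Identity,ClientAuthenticationMethod,IISAuthenticationMethods

        Set-OutlookAnywhere -Identity "SERVER\Rpc (Default Web Site)" -ClientAuthenticationMethod Basic

    Mismatches between ClientAuthenticationMethod and what IIS actually offers (IISAuthenticationMethods) are a classic source of exactly this kind of inconsistent password prompting.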


  • APC (PHP Cache) Uptime 0 minutes, not caching

    - by Jussi
    My goal is to implement APC as the opcode cache for a Drupal 6 production site. So far I have tested APC with several PHP files, with and without including other PHP files via include_once. I have also tried tweaking the apc.ini values for shm_size, apc.include_once_override and apc.stat, restarting Apache every time. The result is that apc.php shows no changes in any values (except, of course, the changed apc.ini values are shown as they should be). Every time I refresh the apc.php test page, the start time resets to the current time, showing an uptime of 0 minutes.

    The apc.php test page shows:

        General Cache Information
        APC Version          3.1.9
        PHP Version          5.2.10
        APC Host             xxxx.xx.xx
        Server Software      Apache/2.2.3 (CentOS)
        Shared Memory        1 Segment(s) with 128.0 MBytes (mmap memory, pthread mutex Locks locking)
        Start Time           2011/07/26 11:53:56
        Uptime               0 minutes
        File Upload Support  1

        Cached Files         0 ( 0.0 Bytes)
        Hits                 1
        Misses               1
        Request Rate (hits, misses)  2.00 cache requests/second
        Hit Rate             1.00 cache requests/second
        Miss Rate            1.00 cache requests/second
        Insert Rate          0.00 cache requests/second
        Cache full count     0

        Cached Variables     0 ( 0.0 Bytes)
        Hits                 0
        Misses               0
        Request Rate (hits, misses)  0.00 cache requests/second
        Hit Rate             0.00 cache requests/second
        Miss Rate            0.00 cache requests/second
        Insert Rate          0.00 cache requests/second
        Cache full count     0

        apc.cache_by_default        1
        apc.canonicalize            1
        apc.coredump_unmap          0
        apc.enable_cli              0
        apc.enabled                 1
        apc.file_md5                0
        apc.file_update_protection  2
        apc.filters
        apc.gc_ttl                  3600
        apc.include_once_override   0
        apc.lazy_classes            0
        apc.lazy_functions          0
        apc.max_file_size           16
        apc.mmap_file_mask          /tmp/apcphp5.095eRm
        apc.num_files_hint          1024
        apc.preload_path
        apc.report_autofilter       0
        apc.rfc1867                 0
        apc.rfc1867_freq            0
        apc.rfc1867_name            APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix          upload_
        apc.rfc1867_ttl             3600
        apc.serializer              default
        apc.shm_segments            1
        apc.shm_size                128M
        apc.slam_defense            0
        apc.stat                    0
        apc.stat_ctime              0
        apc.ttl                     7200
        apc.use_request_time        1
        apc.user_entries_hint       4096
        apc.user_ttl                7200
        apc.write_lock              1

        Host Status Diagrams:
        Free: 128.0 MBytes (100.0%)    Hits:   1 (50.0%)
        Used: 20.3 KBytes  (0.0%)      Misses: 1 (50.0%)
        Detailed Memory Usage and Fragmentation:
        Fragmentation: 0%

    phpinfo shows:

        Server API             CGI/FastCGI
        APC:
        Version                3.1.9
        APC Debugging          Enabled
        MMAP Support           Enabled
        MMAP File Mask         /tmp/apcphp5.JkKDk7
        Locking type           pthread mutex Locks
        Serialization Support  php
        Revision               $Revision: 308812 $
        Build Date             Jul 21 2011 14:31:12

    I followed the steps at http://www.litespeedtech.com/support/forum/showthread.php?t=4189 to find out whether suexec settings would prevent caching:

        [root@host /]# ps -ef|grep lsphp
        root 20402 17833 0 11:21 pts/0 00:00:00 grep lsphp
        [root@host /]# ps -waux
        root 17833 0.0 0.1 5004 1484 pts/0 S 10:39 0:00 bash

    ...which indicates that there is no lsphp running on the host. I also read the following article and its comments, concluding that in my case the problem is not suexec, as the user apache is the httpd process owner: http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/

    The suexec command is also not recognized when logged in and launched as root on the host, and I'm almost confident there is no cPanel running on the host, so no setting there should be resetting the running cache process at some interval.

    This leaves me with few clues as to where to head next. I tried to set apache as the owner (with chown and chgrp) of the apc.php file and some test PHP files, resulting in a 500 server error. Is there a way to check whether the file permissions prevent APC from staying running? I'm tremendously grateful for any suggestions or help.
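
    Two things stand out in the dump above - offered as observations to check, not a confirmed diagnosis. First, the Server API is CGI/FastCGI: under plain (non-persistent) CGI, every request spawns a fresh PHP process with its own shared-memory segment, which would produce exactly this "Uptime 0 minutes, Cached Files 0, Hits 1 / Misses 1" picture on every load of apc.php. Second, apc.max_file_size is 16, and without a unit suffix that means 16 bytes, so no real script would ever qualify for the cache. A sketch of the apc.ini values to try first:

        ; 16 megabytes, not 16 bytes
        apc.max_file_size = 16M
        apc.shm_size = 128M
        apc.stat = 0

    If the SAPI really is non-persistent CGI, though, no ini tweak will make the cache survive between requests; mod_php or a persistent FastCGI/php-fpm pool is what makes APC shared and long-lived.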


  • Magento - Users unable to login from corporate networks with Bluecoat / F5 Load balancers

    - by user1330440
    Hoping someone has come across this issue before with Magento and corporate clients. We have two clients for our Magento site who both have their internal networks set up using Bluecoat security devices and F5 load balancers. Some users within these networks are unable to log in to Magento - Magento eventually sends a 302 redirect to /index.php/ when these users attempt to log in.

    Through our testing, the problem appears to be isolated to this setup: we can log into the accounts in question from anywhere outside these networks without issue, and if the client tries to access the site without going through the F5 load balancer, they are able to log in successfully.

    Strangely enough, the issue only started occurring for the two sites the day after we introduced a system upgrade which added a new site to the Magento installation. The system upgrade should not have affected any standard login functionality, and as said, the problem does not appear to be with the users in question, but with where the users are accessing the site from.

    Initially we thought the issue might have something to do with communications between the clients' networks and the network the server is hosted on, so we've tried moving the server to different hosts, but this has not helped. I'm currently waiting for more info from the clients on the exact devices / models used in their network setup, and I will update this post if more information becomes available. The Magento version is Enterprise Edition 1.9.0.0.

    Does anyone know of any tucked-away Magento settings that might be able to cause this kind of behavior? Any experience with this kind of setup, or ideas for things to look at? All help and ideas for things to follow up on would be appreciated, as this is a current production issue for a large number of users. I will respond ASAP to any requests for additional information, but currently I am not able to disclose any identifying information on the project and/or the clients experiencing issues. Thanks in advance for any assistance offered :)

    Note: This question has also been posted on the Magento forums: http://www.magentocommerce.com/boards/viewthread/277917/ and also on Stack Overflow (moved here as a commenter thought this site may be better suited): http://stackoverflow.com/questions/10133978/magento-users-unable-to-login-from-corporate-networks-with-bluecoat-f5-load
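
    One tucked-away setting that matches these symptoms (a hunch to verify, not a confirmed fix): Magento 1.x can validate sessions against REMOTE_ADDR, HTTP_VIA and HTTP_X_FORWARDED_FOR, and proxy chains like Bluecoat + F5 can rewrite or rotate those values mid-session, making Magento silently drop the session and 302 back to /index.php/. The switches live under System > Configuration > Web > Session Validation Settings; the direct-SQL equivalent (config paths as used in CE 1.4-era code, so treat them as assumptions) would be:

        UPDATE core_config_data
           SET value = 0
         WHERE path IN ('web/session/use_remote_addr',
                        'web/session/use_http_via',
                        'web/session/use_http_x_forwarded_for');

    Flush the configuration cache afterwards so the new values take effect.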


  • Cannot run a VM with more than three network interfaces with KVM

    - by Bostonvaulter
    I'm running KVM on top of Ubuntu 10.10 Server. I can create VMs (virtual machines) and network interfaces fine, but I cannot seem to add more than three network interfaces. As soon as I have a VM with four network interfaces, it gets stuck on startup at the SeaBIOS page with this message:

        Starting SeaBIOS (version pre-0.6.1-20100702_143500-palmer)

    So far I've verified this with two VMs: an Ubuntu 10.10 desktop and a Vyatta router. The specific network hardware I assign to the VMs doesn't seem to matter. I'm trying to have one bridged interface and three private networks, using Vyatta to route between them. Does anyone know why I can't run a VM with more than three network interfaces?

    Edit: Additionally, the KVM thread responsible for the specific VM hangs using ~100% CPU (i.e. one core). Here's the command for the process that is hanging:

        /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 \
          -name vyatta -uuid 6dff7c94-6810-423e-5fea-fec10da0e9b7 -nodefaults \
          -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/vyatta.monitor,server,nowait \
          -mon chardev=monitor,mode=readline -rtc base=utc -boot c \
          -drive file=/home/rams/virtual-machines/vyatta.img,if=none,id=drive-ide0-0-0,boot=on,format=raw \
          -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
          -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw \
          -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 \
          -device rtl8139,vlan=0,id=net0,mac=00:54:00:be:cc:4b,bus=pci.0,addr=0x3 \
          -net tap,fd=97,vlan=0,name=hostnet0 \
          -device rtl8139,vlan=1,id=net1,mac=52:54:00:da:59:ed,bus=pci.0,addr=0x5 \
          -net tap,fd=98,vlan=1,name=hostnet1 \
          -device rtl8139,vlan=2,id=net2,mac=52:54:00:ce:22:b6,bus=pci.0,addr=0x6 \
          -net tap,fd=99,vlan=2,name=hostnet2 \
          -device rtl8139,vlan=3,id=net3,mac=52:54:00:1e:bc:46,bus=pci.0,addr=0x7 \
          -net tap,fd=101,vlan=3,name=hostnet3 \
          -chardev pty,id=serial0 -device isa-serial,chardev=serial0 \
          -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus \
          -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4

    Edit: I've also found an error in dmesg that might be related (it also shows up when running virtd in verbose mode):

        14:47:24.399: warning : qemudParsePCIDeviceStrs:1422 : Unexpected exit status '1', qemu probably failed

    I've also tried disabling AppArmor, but that doesn't seem to make a difference.
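
    A sketch of one experiment that may be worth running, on the unconfirmed assumption that the emulated rtl8139 is what SeaBIOS chokes on once a fourth NIC appears: switch the guest NICs to virtio, which is the usual recommendation for KVM guests anyway:

        # edit the libvirt domain definition
        virsh edit vyatta

        # then, inside each <interface> element, replace the model:
        #   <model type='virtio'/>

    If the Vyatta image lacks virtio drivers, <model type='e1000'/> is the fallback to try.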


  • Virtual Network Interface and NAT disables localhost access for MySQL and Apache

    - by Interarticle
    I'm running an Ubuntu Server 12.04, and recently I configured it to do NAT for my laptop. Since the server has only one NIC, I followed instructions online to create a virtual network device (eth0:0) that has a LAN IP address, then further configured iptables and UFW to allow internet sharing.

    However, just a few days ago, I discovered that one of the PHP pages hosted on the server failed for no apparent reason. A little digging revealed that the MySQL server had started refusing connections from localhost. The same happened with a page (phpMyAdmin) that was configured to be accessible only from localhost (in Apache2). The error, as shown by $ mysql --protocol=tcp -u root -p, looks like:

        ERROR 1130 (HY000): Host '<host name of eth0>' is not allowed to connect to this MySQL server

    However, the funny thing is, I configured the MySQL server to allow root access from localhost (only). Moreover, the MySQL server listens only on 127.0.0.1:3306, as shown by:

        sudo netstat -npa | head
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address     Foreign Address   State    PID/Program name
        tcp        0      0 127.0.0.1:3306    0.0.0.0:*         LISTEN   1029/mysqld

    ...which means that the connection could have only come from 127.0.0.1. (Note that MySQL is working, because I can still connect to it via Unix domain sockets.)

    In effect, it seems that all TCP connections originating from 127.0.0.1 to 127.0.0.1 appear to any local daemon to come from the eth0 IP address. Indeed, Apache2 allowed me to access phpMyAdmin after I added allow <eth0 IP address>.

    The following are my network configurations (redacted):

    /etc/hosts:

        127.0.0.1 localhost
        211.x.x.x <host name of eth0> <server name>
        #IPv6 Defaults follows
        ....

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 211.x.x.x
            netmask 255.255.255.0
            gateway 211.x.x.x
            dns-nameservers 8.8.8.8
            # dns-* options are implemented by the resolvconf package, if installed
            dns-search xxxxxxx.com
            hwaddress ether xx:xx:xx:xx:xx:xx

        auto eth0:0
        iface eth0:0 inet static
            address 192.168.57.254
            netmask 255.255.254.0
            broadcast 192.168.57.255
            network 192.168.57.0

    /etc/ufw/sysctl.conf:

        # Uncommented the following lines
        net/ipv4/ip_forward=1
        net/ipv6/conf/default/forwarding=1

    /etc/default/ufw:

        DEFAULT_FORWARD_POLICY="ACCEPT"   # Changed DROP to ACCEPT

    /etc/init/internet-sharing.conf (an upstart script I wrote), section pre-start script:

        iptables -A FORWARD -o eth0 -i eth0:0 -s 192.168.57.22 -m conntrack --ctstate NEW -j ACCEPT
        iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        iptables -A POSTROUTING -t nat -j MASQUERADE

    Note again that my problem here is that programs cannot access localhost TCP services from the server itself, and that access is blocked because the services have access control allowing only 127.0.0.1. I have no problem making TCP connections to the services, even though they listen only on 127.0.0.1. I do NOT want to connect to the services from another computer.
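
    For anyone hitting the same symptom, the unqualified MASQUERADE rule in the upstart script above is the prime suspect (a diagnosis to test, not a certainty): with no -o or -s match, it also rewrites locally generated connections to 127.0.0.1, so daemons see them arriving from the eth0 address. Restricting the rule to LAN traffic actually leaving eth0 should leave loopback connections untouched (the /24 below is chosen to match the addresses shown; adjust to the real subnet):

        # remove the over-broad rule, then masquerade only LAN traffic leaving eth0
        iptables -t nat -D POSTROUTING -j MASQUERADE
        iptables -t nat -A POSTROUTING -s 192.168.57.0/24 -o eth0 -j MASQUERADE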


  • Rails 3 shows 404 error instead of index.html (nginx + unicorn)

    - by Miko
    I have an index.html in public/ that should load by default, but instead I get a 404 error when I try to access http://example.com/:

        The page you were looking for doesn't exist. You may have mistyped the address or the page may have moved.

    This has something to do with nginx and Unicorn, which I am using to power Rails 3. When I take Unicorn out of the nginx configuration file, the problem goes away and index.html loads just fine. Here is my nginx configuration file:

        upstream unicorn {
          server unix:/tmp/.sock fail_timeout=0;
        }

        server {
          server_name example.com;
          root /www/example.com/current/public;
          index index.html;
          keepalive_timeout 5;

          location / {
            try_files $uri @unicorn;
          }

          location @unicorn {
            proxy_pass http://unicorn;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_redirect off;
          }
        }

    My config/routes.rb is pretty much empty:

        Advertise::Application.routes.draw do |map|
          resources :users
        end

    The index.html file is located in public/index.html, and it loads fine if I request it directly: http://example.com/index.html

    To reiterate, when I remove all references to Unicorn from the nginx conf, index.html loads without any problems. I have a hard time understanding why this occurs, because nginx should be trying to load that file on its own by default.

    Here is the error stack from production.log:

        Started GET "/" for 68.107.80.21 at 2010-08-08 12:06:29 -0700
        Processing by HomeController#index as HTML
        Completed in 1ms

        ActionView::MissingTemplate (Missing template home/index with {:handlers=>[:erb, :rjs, :builder, :rhtml, :rxml, :haml], :formats=>[:html], :locale=>[:en, :en]} in view paths "/www/example.com/releases/20100808170224/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/paperclip/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/haml/app/views"):
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/paths.rb:14:in `find'
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/lookup_context.rb:79:in `find'
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/base.rb:186:in `find_template'
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:45:in `_determine_template'
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:23:in `render'
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/haml-3.0.15/lib/haml/helpers/action_view_mods.rb:13:in `render_with_haml'
        etc...

    The nginx error log for this virtual host comes up empty:

        2010/08/08 12:40:22 [info] 3118#0: *1 client 68.107.80.21 closed keepalive connection

    My guess is Unicorn is intercepting the request to index.html before nginx gets to process it.
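
    A note on why nginx never serves the file here: try_files tests each argument as a literal file, and for a request to / the first argument expands to the URI "/", which is a directory, so try_files falls straight through to the @unicorn fallback - the index directive never gets a chance to run inside a try_files location. Spelling the index file out makes nginx serve it before handing off to the app (a sketch against the config above):

        location / {
          # exact file, then the directory index, then the Rails app
          try_files $uri $uri/index.html @unicorn;
        }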


  • Determining the health of a Cisco switch port?

    - by ewwhite
    I've been chasing a packet-loss and network stability issue for a handful of end-users on an internal network for the past few days. These issues surfaced recently; however, the location was struck by lightning six weeks ago. I was seeing 5-10% packet loss between a stack of four Cisco 2960s and several PCs and phones on the other side of a 77-meter run. The PCs were run inline with the phones over a trunked link. We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity.

    I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity:

    - change cables between the wall jack and device.
    - change patch cables between the patch panel and switch port(s).
    - try different switch ports within the 2960 stack.
    - change end-user devices with known-good equipment (new phones, different PCs).
    - clear switch port interface counters and monitor incrementing errors closely. (Pastebin output of sh int)
    - pore over the device logs and Observium RRD graphs. No link up/down issues from the switch side.
    - change power strips on the end-user side.
    - test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9 (clean)
    - test cable runs with a Tripp-Lite cable tester. (clean)
    - run diagnostics on the switch stack members. (clean)

    In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky... not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner. What else can I test or check to determine whether these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports?

    BTW - show cable-diagnostics tdr int Gi4/0/14 is very cool:

        Interface Speed Local pair Pair length        Remote pair Pair status
        --------- ----- ---------- ------------------ ----------- --------------------
        Gi4/0/14  1000M Pair A     79  +/- 0  meters  Pair B      Normal
                        Pair B     75  +/- 0  meters  Pair A      Normal
                        Pair C     77  +/- 0  meters  Pair D      Normal
                        Pair D     79  +/- 0  meters  Pair C      Normal
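
    Two more IOS views that can help separate a flaky PHY from everything else (commands from the Catalyst 2960 feature set; output format varies by IOS release):

        ! all error counters for the port in one table
        show interfaces Gi4/0/9 counters errors

        ! low-level per-port ASIC/PHY statistics
        show controllers ethernet-controller Gi4/0/9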


  • order of operations for environment variables

    - by alyda
    I want to understand how environment variables are set and reset (overridden). I'm running Apache/2.2.24 (Unix) with PHP/5.4.14 on a Mac.

    My theory is this: environment variables can be set in bash, then overwritten by httpd.conf, which precedes a VirtualHost directive, which precedes php.ini, which can then be overwritten by .htaccess (if allowable) and finally by PHP.

    I tried the following:

    - Setting the environment variable in bash: I added export ENVIRONMENT='local' to my ~/.bashrc file, restarted Apache, and did not get any output from print_r($_ENV); (in a simple index.php file at the root of my web server). I also tried putting ENVIRONMENT='local' into /etc/environment and restarting Apache: nothing. Same with /etc/bashrc plus an Apache restart: still nothing.
    - Setting the environment variable in httpd.conf: I added SetEnv ENVIRONMENT 'local-httpd' to the end of my /etc/apache2/httpd.conf file (but before I load other conf files, such as virtual hosts [Include /private/etc/apache2/other/*.conf]). I now see the variable in the array print_r($_SERVER); but not in print_r($_ENV);.
    - Setting the environment variable in httpd-vhosts.conf: I added SetEnv ENVIRONMENT 'local-vhost' to my /etc/apache2/extra/httpd-vhosts.conf file, in my generic directive that points to my default document root. I now see the variable has been overwritten (to local-vhost from local-httpd, so I know where the variable is getting set).
    - Setting the environment variable in php.ini: while searching for a proper place to put my environment variable, I noticed that variables_order = "GPCS" was set to the production value rather than EGPCS. I changed it, restarted my server, and found that I was now getting output for print_r($_ENV); but not my expected custom variable. It also appears that I am not able to set a custom variable in this file. Please tell me if I am wrong.
    - Setting the environment variable in .htaccess: I added SetEnv ENVIRONMENT 'local-htaccess'. This worked as expected, overwriting all other values that were set.
    - Setting / overwriting the environment variable in PHP: if (...) { putenv('ENVIRONMENT=local'); }

    I'm asking this question because I have a lot of local and remote testing servers, some of which may or may not allow me access to modify httpd.conf, httpd-vhosts.conf, php.ini or environment variables. I want to understand what is best for those different scenarios (shared hosting, Heroku, local servers, etc.).

    I obviously don't know how to properly set the environment variable in bash in a way that PHP can use it, and I'd like to know how to do that (as I think Heroku does something similar with heroku config set...).
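
    On the bash piece specifically, a sketch of why ~/.bashrc never shows up in PHP: Apache isn't started from a login shell, so it never sources your rc files. What it does inherit is whatever its own startup script exports - on many Apache 2.2 layouts that's an envvars file sourced by apachectl (/usr/sbin/envvars on OS X, /etc/apache2/envvars on Debian-style layouts; the exact path on any given build is an assumption) - and even then the variable has to be handed to requests with PassEnv:

        # envvars (sourced by apachectl when Apache starts)
        export ENVIRONMENT='local'

        # httpd.conf - pass the already-exported variable through to requests
        PassEnv ENVIRONMENT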


  • OS X Snow Leopard 10.6 Refuses to Load Websites the first time intermittently

    - by Brandon
    Many times when I am browsing the web, Snow Leopard will sit and load a site for 20 seconds or more, until it times out and says it cannot be displayed. If I refresh, it loads RIGHT away, every time. The issue is intermittent but happens from once every couple of days to a few times a day.

    So the long and short of it is this:

    - Aluminum MacBook (non-Pro), 2.4GHz Core2Duo, 4GB DDR3
    - I am using 10.6.6, but I have had this issue since 10.6.0
    - It happens in Firefox, Chrome, and Safari
    - I have flushed my DNS (using the command 'blablabla flush')
    - I am using custom DNS servers, which I hoped would fix it, but it had no effect*
    - I am running Apache currently but haven't been for most of the time
    - I've reformatted multiple times, always experiencing the issue
    - I am on Cox cable internet, with a Motorola Surfboard & a Belkin F6D4230-4 v1 (Pre?) N wireless router. I've put the router in G only & N only & G+N to no effect
    - It seems to be domain dependent, as I can sometimes load the Google cache right away, and sometimes other sites will load but Google will refuse
    - My PowerBook G4 with Leopard, other Windows XP laptops, & my wired Win7 desktop do not suffer from the issue

    *I recently started using these to escape the awful Cox redirect page on timeouts.

    I'm almost positive the issue has happened on other networks, but I can't recall a specific instance (I have a terrible memory). The problem is intermittent and fixable enough (I just have to wait until it times out and hit refresh one time) but incredibly annoying, since I'm constantly reading documentation from a large variety of sites.

    EDIT: To clarify, this happens with ALL sites, not only specific sites. I haven't been able to detect any pattern to the failures, but one day Google.com will refuse to load while reddit.com will, and the next day vice versa. Keep in mind that waiting for a timeout and hitting refresh loads the page right away, every time. If I don't wait for the timeout, opening more links, hitting refresh, and clicking the link a billion times have no effect. It seems to be domain neutral, affecting sites seemingly at random. It doesn't seem to have anything to do with connection inactivity either, because I will be SSHed into different servers, uploading files, browsing, downloading, etc., and it will just quit loading jquery.com (for example) until I sit and wait for a timeout. /EDIT

    This is my last resort. Please, someone, tell me what is happening. Thank you.


  • Why do I need to set up Autologon values in registry twice in before it works and can I fix this?

    - by jJack
    Background: As part of an automated testing suite I am building, I need to set up Autologon on my virtual machines 'on demand'. By on demand, I mean that I don't want to pre-configure my VM or any snapshot to have Autologon set up already, for security reasons and also a huge business case.

    My solution so far: I'm copying a script to the guest machine and then using Sysinternals PsExec to execute it. The script is:

        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v DefaultUserName /t REG_SZ /d myusername
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v DefaultPassword /t REG_SZ /d myfakepassword
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v DefaultDomainName /t REG_SZ /d mydomain
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v ForceAutoLogon /t REG_SZ /d 1
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v AutoAdminLogon /t REG_SZ /d 1
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\AutoLogonChecked" /f /ve /d 1

    Note: I don't believe AutoLogonChecked is required for machines post Windows 2000, but I'm doing it just in case for now. Maybe ForceAutoLogon isn't required either; I'm not sure yet.

    The problem: I see PsExec executes this properly and all the values are in the registry. However, when I restart the machine, the user isn't automatically logged on. When I run this a second time and then restart the machine, the user is finally logged on. A diff between the registry states shows that the first time I run this, it is missing both the "1" for AutoAdminLogon and also the DefaultPassword key. The second time I execute it, these values are correctly intact, as I intended.

    So, what is going on here? Is this expected? This post claims in the end that it really all just works (the problem there was that a logoff script was resetting the values), but that doesn't seem to work for me. Note this seems unique to Windows 7 - it does not occur in Windows XP. Also note that you don't need PsExec to recreate the issue; just modify the registry yourself.

    EDIT/update:
    - Log in interactively and run the script (so, not executing it remotely): logging off automatically logs me back in (so, it works).
    - Remotely execute the script in the guest when I'm interactively logged in: logging off automatically logs me back in (so, it works).
    - Remotely execute the script in the guest with a non-interactive session: if I log in afterwards (so, interactive now) and then log back off, it logs me back in (so, it then works).

    EDIT/update 2: This only occurs for Win7 x86, Win7 x64, and Win8 x64. It does not occur for Windows XP.
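
    An alternative worth sketching for the on-demand case: Sysinternals' own Autologon utility sets the same Winlogon values but stores the password as an LSA secret instead of a plain-text REG_SZ, and it can be driven from a script. The switch order below is from memory of the Sysinternals docs, so treat it as an assumption (account values are the placeholders from the script above):

        autologon.exe myusername mydomain myfakepassword /accepteula

    Because the password never lands in DefaultPassword, it also sidesteps whatever is eating that value on the first run.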


  • JavaMail application won't send email to external SMTP server

    - by Luiz Cruz
    This is actually a question from an exam, but I believe it could help others troubleshooting a similar situation.

    In a system, an e-mail needs to be sent to a certain mailbox. The following Java code, which is part of a larger system, was developed for that. Assume that "example.com" corresponds to a valid registered internet domain.

        public void sendEmail() {
            String s1 = "Warning";
            String b1 = "Contact IT support.";
            String r1 = "[email protected]";
            String d1 = "[email protected]";
            String h1 = "mx.intranet";
            Properties p1 = new Properties();
            p1.put("mail.host", h1);
            Session session = Session.getDefaultInstance(p1, null);
            MimeMessage message = new MimeMessage(session);
            try {
                message.setFrom(new InternetAddress(r1));
                message.addRecipient(Message.RecipientType.TO, new InternetAddress(d1));
                message.setSubject(s1);
                message.setText(b1);
                Transport.send(message);
            } catch (MessagingException e) {
                System.err.println(e);
            }
        }

    The execution of this code, within the testing environment of an application server, does NOT work as expected. The mailbox of the "example.com" server never receives the email, even though all string values in the code are correctly assigned. The output of the command "netstat -np TCP" on the application server during execution is shown below:

        Src Add      Src Port  Dest Add         Dest Port  State
        192.168.5.5  54395     192.168.7.1      25         SYN_SENT
        192.168.5.5  54390     192.168.7.1      110        TIME_WAIT
        192.168.5.5  52001     200.218.208.118  80         CLOSE_WAIT
        192.168.5.5  52050     200.218.208.118  80         ESTABLISHED
        192.168.5.5  50001     200.255.94.202   25         TIME_WAIT
        192.168.5.5  50000     200.255.94.202   25         ESTABLISHED

    With the exception of the lines that were NAT'd, all others are associated with the Java application server, which created them after the execution of the code above. The e-mail server used in this environment is the production server, which is online and does not require any authentication for internal connections.

    Based on this situation, point out three possible causes for the problem.
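
    For anyone debugging this live rather than answering the exam: JavaMail will print the whole SMTP dialogue, which shows immediately whether that SYN_SENT to port 25 ever completes. A sketch of the relevant changes (property names per the JavaMail API; the timeout value is an arbitrary choice):

        Properties p1 = new Properties();
        p1.put("mail.smtp.host", h1);                   // protocol-specific form of mail.host
        p1.put("mail.smtp.connectiontimeout", "5000");  // fail fast instead of hanging
        Session session = Session.getInstance(p1, null);
        session.setDebug(true);                         // dump the SMTP conversation to the console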


  • Windows 7 x64 "upgrade" repair fails

    - by Polynomial
    I've been running into issues with Windows Update, which I can't seem to fix. The hotfixes don't work, nor does the Windows Update readiness tool, or the manual SP1 upgrade. I get various esoteric errors which nobody seems to have a fix for. It looks like some of the update cache is corrupt and digital signatures seem to be broken on some packages / Windows Update components.

    Long story short, I have discovered the only option is to do a repair operation on the OS, to repair everything. It's so corrupt that only a complete replacement will fix it. According to various sources (including MSKB), one can perform a repair by running an in-place upgrade.

    I've got the Windows 7 Ultimate retail disc, which I've inserted into my machine. I ran setup.exe and went through in the following order:

    1. Install now
    2. Go online to get latest updates (I've also tried not getting updates)
    3. Wait for updates to be downloaded
    4. Select Windows 7 Ultimate (x64 architecture) and click next
    5. Accept the T&Cs, click next
    6. Click Upgrade

    At this point it spends a minute on the "checking compatibility" screen, after which I get the following error:

        The following issues are preventing Windows from upgrading. Cancel the upgrade, complete each task, and then restart the upgrade to continue.
        You can't upgrade 64-bit Windows to a 32-bit version of Windows. To upgrade, obtain a 64-bit version of the installation disc, or go online to see how to install Windows 7 and keep your files and settings.
        32-bit Windows cannot be upgraded to a 64-bit version of Windows. To upgrade, obtain a 32-bit version of the Windows installation disc.

    It also mentions a warning about potential conflicts with a storage driver and VS2010, but that doesn't seem to be the blocking issue.

    My currently installed version of Windows is Ultimate 64-bit (absolutely sure of this), and the disc is definitely an x86 / x64 combined Ultimate retail disc. There seem to be a few people who have run into this (e.g. this question), but I've not seen any answers. I've checked the event viewer but can't spot anything in there that's related.

    Any idea how I can get this working?

    P.S: Just to pre-empt the inevitable "are you suuuuuuuuuuuuure it's x64 Ultimate?" questions:


  • How to reference a Domain Controller out of the Local Network?

    - by Adrian
    We have multiple servers scattered over different hosting providers. For learning, experimenting and, ultimately, production purposes, I set one of them up as a Domain Controller. That went well; most of our services are now authenticating via AD, which helps us a lot.

    What I want to do now is to simplify authentication for the multiple servers by making each of them look at the Domain Controller. This way, our devs can log into (Remote Desktop) the multiple servers with the same credentials from AD. I know I have to configure each server to look at the Domain Controller. But when I try to join a server to the domain, it cannot find the Domain Controller, although the Domain Controller address is a valid, reachable internet sub-domain (as in "ad.ourcompany.com"). This is the detailed error message:

        Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.

        The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller for domain ad.ourcompany.com:

        The error was: "DNS name does not exist." (error code 0x0000232B RCODE_NAME_ERROR)

        The query was for the SRV record for _ldap._tcp.dc._msdcs.ad.ourcompany.com

        Common causes of this error include the following:

        - The DNS SRV records required to locate a AD DC for the domain are not registered in DNS. These records are registered with a DNS server automatically when a AD DC is added to a domain. They are updated by the AD DC at set intervals. This computer is configured to use DNS servers with the following IP addresses:
          109.188.207.9
          109.188.207.10

        - One or more of the following zones do not include delegation to its child zone:
          ad.ourcompany.com
          ourcompany.com
          com
          . (the root zone)

        For information about correcting this problem, click Help.

    What am I missing? I'm an experienced dev, but a newbie sysadmin experimenting with new stuff.

    Disclaimer: All IP addresses and domains/subdomains were changed to preserve security. If by any chance you can still see private information, please let me know so that I can change it.
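
    The giveaway in that error text is the DNS servers being queried (109.188.207.x, the hosting provider's resolvers): they know nothing about the _msdcs SRV records, which only live in the AD-integrated DNS. A sketch of the usual fix, assuming the DC itself runs DNS and is reachable at, say, 203.0.113.10 (a placeholder address):

        REM point the member server's resolver at the DC's DNS, not the provider's
        netsh interface ip set dns "Local Area Connection" static 203.0.113.10

        REM verify the SRV record now resolves
        nslookup -type=SRV _ldap._tcp.dc._msdcs.ad.ourcompany.com

    Bear in mind that domain traffic over the open internet exposes a lot of ports; a VPN between the servers is the usual wrapper for this kind of setup.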


  • Cant access a remote server due mistake by setting firewall rule

    - by LMIT
    I need help due to a silly mistake! For a long time I have had a dedicated server hosted by register.it. I usually access this server (Windows Server 2008) remotely via Terminal Server.

    Today I wanted to block one site that continually sends requests to my server, so I was adding a new rule in the firewall (the native firewall on Windows Server 2008), as I have done many times. But this time, probably because my brain was asleep, I added a general rule that stops everything! So I can't access the server anymore, no users can browse the sites, and nothing is working, because this rule blocks everything.

    I know that it was a silly mistake, no need to tell me :) - so please, what can I do? The only thing my provider lets me do is reboot the server from their control panel, but this doesn't help me in any way because the firewall blocks me again. I have an administrator username and password, so what can I really do? Is there some trick, some technique, some expert guru who can help me in this very bad situation?

    UPDATE: I followed Tony's suggestion and ran nmap to check whether some ports are open, but it looks like all are closed:

        Starting Nmap 6.00 ( http://nmap.org ) at 2012-05-29 22:32 W. Europe Daylight Time
        NSE: Loaded 93 scripts for scanning.
        NSE: Script Pre-scanning.
        Initiating Parallel DNS resolution of 1 host. at 22:32
        Completed Parallel DNS resolution of 1 host. at 22:33, 13.00s elapsed
        Initiating SYN Stealth Scan at 22:33
        Scanning xxx.xxx.xxx.xxx [1000 ports]
        SYN Stealth Scan Timing: About 29.00% done; ETC: 22:34 (0:01:16 remaining)
        SYN Stealth Scan Timing: About 58.00% done; ETC: 22:34 (0:00:44 remaining)
        Completed SYN Stealth Scan at 22:34, 104.39s elapsed (1000 total ports)
        Initiating Service scan at 22:34
        Initiating OS detection (try #1) against xxx.xxx.xxx.xxx
        Retrying OS detection (try #2) against xxx.xxx.xxx.xxx
        Initiating Traceroute at 22:34
        Completed Traceroute at 22:35, 6.27s elapsed
        Initiating Parallel DNS resolution of 11 hosts. at 22:35
        Completed Parallel DNS resolution of 11 hosts. at 22:35, 13.00s elapsed
        NSE: Script scanning xxx.xxx.xxx.xxx.
        Initiating NSE at 22:35
        Completed NSE at 22:35, 0.00s elapsed
        Nmap scan report for xxx.xxx.xxx.xxx
        Host is up.
        All 1000 scanned ports on xxx.xxx.xxx.xxx are filtered
        Too many fingerprints match this host to give specific OS details

        TRACEROUTE (using proto 1/icmp)
        HOP RTT ADDRESS
        1 ... ... ... 13 ... 30

        NSE: Script Post-scanning.
        Read data files from: D:\Program Files\Nmap
        OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 145.08 seconds
        Raw packets sent: 2116 (96.576KB) | Rcvd: 61 (4.082KB)

    Question: Can the provider log in locally with a username and password?
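
    If the provider can offer any out-of-band access at all (KVM-over-IP, a recovery console, or a technician at the keyboard), the lockout can be undone locally in one line - nothing network-based will reach the box, by design of the mistake. A sketch (the rule name is a placeholder):

        netsh advfirewall set allprofiles state off

        REM or, less drastically, delete just the offending rule:
        netsh advfirewall firewall delete rule name="My blocking rule"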


  • Server load spikes several times a day, load average for the past month is 5 times the load average all year

    - by AMF
    My Munin notifications, set up for our (Debian) LAMP cluster, have been warning me continuously that the load on our production machine is at dangerous levels. While the average load all year typically runs between 2 and 8, the load in the past month - and only the past month - has been skyrocketing to 10, 18, and occasionally even 50-60. The spikes last only 5-10 minutes at a time and occur about every 2-3 hours. The spikes do not affect performance only because I have a script that sends traffic off our server to a mirror CDN when the load goes above 10.

    I've looked for cron jobs that correlate with this timeframe, but there is nothing I can see that would cause this. Site traffic is also normal (we receive about 200K visits per day). I'm also trying to think of anything I've changed around the time this problem began, and I really cannot think of anything.

    This is probably not much to go on. Maybe there is a clue in the top printout (below) that I'm not seeing. How do I proceed to find the cause?

    Typical top when the load is NOT spiking:

        top - 11:13:09 up 472 days, 25 min, 1 user, load average: 6.08, 4.29, 3.80
        Tasks: 105 total, 1 running, 104 sleeping, 0 stopped, 0 zombie
        Cpu(s): 41.2%us, 5.8%sy, 0.0%ni, 49.5%id, 2.7%wa, 0.1%hi, 0.7%si, 0.0%st
        Mem: 3369592k total, 2166980k used, 1202612k free, 559504k buffers
        Swap: 2650684k total, 1892k used, 2648792k free, 1129116k cached

        PID   USER   PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
        32046 apache 15 0 36300 12m  9828 S 20   0.4  0:01.97 apache2
        32679 apache 15 0 36568 13m  10m  S 19   0.4  0:01.69 apache2
        31441 apache 15 0 36616 13m  10m  S 19   0.4  0:04.13 apache2
        31477 apache 15 0 36596 13m  9.8m S 15   0.4  0:01.99 apache2
        31993 apache 15 0 36876 16m  12m  S 12   0.5  0:02.01 apache2
        31782 apache 15 0 36836 14m  10m  S  8   0.4  0:02.17 apache2
        32198 apache 15 0 36536 13m  10m  S  7   0.4  0:01.59 apache2
        880   apache 15 0 36508 9708 6236 S  7   0.3  0:00.42 apache2
        31945 apache 17 0 36876 16m  13m  S  5   0.5  0:03.17 apache2
        32197 apache 16 0 36636 10m  7504 S  5   0.3  0:02.70 apache2
        32326 apache 15 0 37024 11m  7632 S  5   0.3  0:02.15 apache2
        32565 apache 15 0 37280 13m  9.8m S  5   0.4  0:03.75 apache2
        32676 apache 15 0 36896 16m  12m  S  4   0.5  0:00.95 apache2
        32678 apache 15 0 36536 12m  9692 S  4   0.4  0:02.27 apache2
        974   apache 16 0 37064 9888 6016 D  4   0.3  0:00.13 apache2
        32150 apache 16 0 36832 13m  10m  S  3   0.4  0:01.74 apache2
        31780 apache 16 0 36848 11m  7660 S  3   0.3  0:02.87 apache2
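
    Since the spikes last only 5-10 minutes, the most productive next step is usually to capture state during one rather than between them. A minimal cron sketch (the threshold and paths are arbitrary choices) that snapshots top and the process tree whenever the 1-minute load crosses 10:

        #!/bin/sh
        # /usr/local/bin/loadsnap.sh - run from cron every minute
        LOAD=$(cut -d. -f1 /proc/loadavg)
        if [ "$LOAD" -ge 10 ]; then
            STAMP=$(date +%Y%m%d-%H%M%S)
            top -b -n1 > "/var/log/loadsnap-$STAMP.txt"
            ps auxf >> "/var/log/loadsnap-$STAMP.txt"
        fi

    Comparing a few of those snapshots against the quiet-time top above should show whether the spikes are extra apache2 children, a runaway cron child, or I/O wait.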


  • KeePass justification

    - by Jeff Walker
    I work at a place that tries to take security seriously, but sadly, it often fails. Currently, one of the major ways it fails is password management.

    I personally have about 20 accounts (my personal user ID on lots of machines). For shared "system" accounts, there are about 45 per environment: development, test, and production. I have access to two of those environments, so my personal total is somewhere around 115 accounts. Passwords have to be at least 15 characters with some extensive but standard complexity constraints, and have to be changed every 60 days or so (system accounts every year). They also should not be the same for different accounts, but that isn't enforced. Think DoD-type standards.

    There is no way to remember and keep up with this. It just isn't humanly possible, as far as I'm concerned. This might be a good justification for a centralized account management system, a la LDAP or Active Directory, but that is a totally different battle. Currently the solution is an Excel spreadsheet. They use Excel to put a password on it, and then most people make a copy and remove the password. This makes my stomach turn.

    I use KeePass for this problem and it manages all of my accounts very well. I like the features such as auto-typing, grouping, plugins, password generation, etc. It uses AES-256 encryption via the .NET framework, and while not FIPS compliant, it has a very good reputation.

    The only problem is that they are also very careful about using randomly downloaded software, so we have to justify every piece of software on our workstations. I have been told that they really don't want me to use this, because of the "sensitive nature" of storing passwords. sigh

    My justification has to be "VERY VERY strong". I have been tasked with writing a justification for KeePass, but as I am lazy, I would like any input I can get from the community. What do you recommend? Is there something out there that is better or more respected than KeePass? Are there any security experts saying interesting things on this topic? Anything will help at this point. Thanks.


  • where on disk is space allocated for new files inside LVM lv with ext4 file system?

    - by Jost
    I run a multi-disk server with LVM2. Several large disks serve as LVM2 physical volumes for one volume group, containing one logical volume formatted with ext4. Nothing fancy, just your standard linear setup.

    Recently an additional, very small disk was added as a physical volume to that volume group, and I expanded both the logical volume and the ext4 file system therein onto that disk. This LV is used to store incremental backups using rsync and is only about 30% full; there have rarely been any files deleted from it, only incremental writes.

    Now this new HDD I added to the pre-existing volume group has unexpectedly died on me, and the volume group won't come up because it is missing one physical volume. As fate would have it, this WAS the "in the event of catastrophic failure on the primary server" backup, the event happened, the boss is not happy, so this kinda has to work...

    According to this (Part 3): http://www.novell.com/coolsolutions/appnote/19386.html it is possible to trick LVM into starting anyway by creating a new PV with identical metadata to the failed disk, which will make the volume accessible, but of course leave giant holes in the file system. I haven't tried it yet, because it involves repairing (writing to) the file system, which eliminates the possibility of trying other things if it fails.

    Now my questions are:

    - How does this setup actually allocate disk space for new data? Is it allocated linearly from beginning to end of the PVs, in the order they were added to the VG? Is it striped somehow in order to increase performance/balance load?
    - Since this defective disk was added only later to an existing LVM2 VG and LV containing a half-empty ext4, what are the chances that there was never any data written to the defective disk? In other words: what are the chances of recovering all my data, even without the defective disk, by just starting the volume group as-is? Am I about to spend $1500 on having 250GB of empty space recovered when I send the defective disk in for repair?
    - Is there a way to check without mounting the file system and opening the files, hoping they contain something other than zeros? (Comparing addresses of used data blocks inside ext4 to the address ranges that were on the missing PV, something like that, preferably easy to automate.)

    I know bitwise-copying the entire LV into an image file before trying to repair the ext4 would probably be a good idea, but since this LV is very large and I just suffered major file system failure on several systems, it is probably a luxury I don't have... Any suggestions?
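
    On the question of which extents actually live on the dead disk: LVM can report the exact segment-to-PV mapping, which answers directly whether the LV ever grew onto the new PV. With a plain linear LV, extents are allocated from the start of the first PV onward, so a 30%-full LV that was extended late may well have nothing on the newcomer. A sketch (the vg/lv names are placeholders):

        # per-PV view: which physical extents back which LV
        pvdisplay -m

        # per-LV view: each segment with its backing device and extent range
        lvdisplay -m /dev/myvg/mylv

        # compact table: one row per segment, with devices and start PE
        lvs -o +devices,seg_start_pe myvg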


  • Using smartctl to get vendor specific Attributes from ssd drive behind a SmartArray P410 controller

    - by Lairsdragon
    Hi! Recently I deployed some HP servers with SSDs behind a SmartArray P410 controller. While not officially supported by HP, the servers work well so far. Now I'd like to get wear-level info, error statistics, etc. from the drives. While the SA P410 supports a passthrough of the SMART command to a single drive in the array, I was not able to get the interesting values out of the drive. In this case, the wear-level indicator (attribute ID 233) is of particular interest to me, but it is only present when the drive is directly attached to a SATA controller.

    smartctl on a directly connected SSD:

        # smartctl -A /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 5
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          3 Spin_Up_Time            0x0000 100   000   000    Old_age  Offline In_the_past 0
          4 Start_Stop_Count        0x0000 100   000   000    Old_age  Offline In_the_past 0
          5 Reallocated_Sector_Ct   0x0002 100   100   000    Old_age  Always  -           0
          9 Power_On_Hours          0x0002 100   100   000    Old_age  Always  -           8561
         12 Power_Cycle_Count       0x0002 100   100   000    Old_age  Always  -           55
        192 Power-Off_Retract_Count 0x0002 100   100   000    Old_age  Always  -           29
        232 Unknown_Attribute       0x0003 100   100   010    Pre-fail Always  -           0
        233 Unknown_Attribute       0x0002 088   088   000    Old_age  Always  -           0
        225 Load_Cycle_Count        0x0000 198   198   000    Old_age  Offline -           508509
        226 Load-in_Time            0x0002 255   000   000    Old_age  Always  In_the_past 0
        227 Torq-amp_Count          0x0002 000   000   000    Old_age  Always  FAILING_NOW 0
        228 Power-off_Retract_Count 0x0002 000   000   000    Old_age  Always  FAILING_NOW 0

    smartctl on a P410-connected SSD (right, the output is completely empty):

        # ./smartctl -A -d cciss,0 /dev/cciss/c1d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

    smartctl on a P410-connected HDD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c0d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Current Drive Temperature: 27 C
        Drive Trip Temperature: 68 C
        Vendor (Seagate) cache information
          Blocks sent to initiator = 1871654030
          Blocks received from initiator = 1360012929
          Blocks read from cache and sent to initiator = 2178203797
          Number of read and write commands whose size <= segment size = 46052239
          Number of read and write commands whose size > segment size = 0
        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 3363.25
          number of minutes until next internal SMART test = 12

    Am I hunting a bug here, or is this a limitation of the P410's SMART command passthrough?


  • How to make a 100% UNIQUE dvd? [closed]

    - by sajawikio
    Is it possible to make a DVD which would be next to impossible to replicate exactly, even with special equipment? To be clear, not in the sense of making the data itself resistant to piracy - nothing like that. More like an analogy to antiques: a reproduction of a piece of furniture can be spotted by a trained eye as not being the original.

    Is it possible to do this for a DVD? Say a DVD is copied by someone using special equipment who is hypothetically dying to copy the DVD exactly, for whatever reason. Even such a copied DVD should somehow be detectable as not exactly the original (as if it were a reproduction of an original antique, but not the original), and it would preferably be impossible to ever copy the DVD 100% exactly again.

    Below are some ideas on how I might go about doing this, but I am really not sure which programs, media, hardware, etc. would do the trick:

    - Do there exist any blank DVDs that come pre-recorded with some sort of serial number, bar code, other metadata, or an encrypted hash? Maybe any blank DVD will do, but I should get special software to extract hardcoded metadata? If so, which software? Or maybe even special hardware?
    - The "secret" on such a DVD should be readable by some software, or maybe particular hardware (preferably only if some sort of key is known and input into the software; otherwise nothing shows up and it looks like just a regular disk with no secret on it), and it should be impossible to actually replicate, especially not with regular burning hardware and preferably not at all.
    - Other idea: is there any special software that can direct the laser's write head to physically "mar" the DVD in such a way that, when played in a DVD player, it makes a particular visual pattern or something like that, with the mar itself showing up as a faint scratch on the disk, but which would be impossible for someone to reproduce exactly?

    EDIT: To clarify, suppose the DVD contains video and music that should be playable on DVD players, maybe a menu too (i.e. not a DVD containing software). Also to clarify: the question is about how to make the DVD 100% unique, not how to protect the actual content of the DVD from "piracy".

    Read the article

  • Deploying Django App with Nginx, Apache, mod_wsgi

    - by JCWong
    I have a Django app that runs locally using the standard development environment. I now want to move it to EC2 for production. The Django documentation suggests running with Apache and mod_wsgi, and using nginx for serving static files. I am running Ubuntu 12.04 on an EC2 box. My Django app, "ddt", contains a subdirectory "apache" with ddt.wsgi:
    import os, sys
    apache_configuration = os.path.dirname(__file__)
    project = os.path.dirname(apache_configuration)
    workspace = os.path.dirname(project)
    sys.path.append(workspace)
    sys.path.append('/usr/lib/python2.7/site-packages/django/')
    sys.path.append('/home/jeffrey/www/ddt/')
    os.environ['DJANGO_SETTINGS_MODULE'] = 'ddt.settings'
    import django.core.handlers.wsgi
    application = django.core.handlers.wsgi.WSGIHandler()
    I have mod_wsgi installed from apt. My apache/httpd.conf contains:
    NameVirtualHost *:8080
    WSGIScriptAlias / /home/jeffrey/www/ddt/apache/ddt.wsgi
    WSGIPythonPath /home/jeffrey/www/ddt
    <Directory /home/jeffrey/www/ddt/apache/>
        <Files ddt.wsgi>
            Order deny,allow
            Allow from all
        </Files>
    </Directory>
    Under apache2/sites-enabled:
    <VirtualHost *:8080>
        ServerName www.mysite.com
        ServerAlias mysite.com
        <Directory /home/jeffrey/www/ddt/apache/>
            Order deny,allow
            Allow from all
        </Directory>
        LogLevel warn
        ErrorLog /home/jeffrey/www/ddt/logs/apache_error.log
        CustomLog /home/jeffrey/www/ddt/logs/apache_access.log combined
        WSGIDaemonProcess datadriventrading.com user=www-data group=www-data threads=25
        WSGIProcessGroup datadriventrading.com
        WSGIScriptAlias / /home/jeffrey/www/ddt/apache/ddt.wsgi
    </VirtualHost>
    If I am correct, these three files should allow my Django app to run on port 8080. I have the following nginx/proxy.conf file:
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    Under nginx/sites-enabled:
    server {
        listen 80;
        server_name www.mysite.com mysite.com;
        access_log /home/jeffrey/www/ddt/logs/nginx_access.log;
        error_log /home/jeffrey/www/ddt/logs/nginx_error.log;
        location / {
            proxy_pass http://127.0.0.1:8080;
            include /etc/nginx/proxy.conf;
        }
        location /media/ {
            root /home/jeffrey/www/ddt/;
        }
    }
    If I am correct, these two files should set nginx up to accept requests on HTTP port 80 and forward them to Apache, which runs the Django app on port 8080. If I go to mysite.com, all I see is "Welcome to nginx!" Any advice on how to debug this?
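    A minimal debugging sketch for this setup (hostname and port taken from the question; the rest is generic): first confirm Apache is actually listening on 8080 - note the configs above never add a Listen 8080 directive, which on Ubuntu lives in /etc/apache2/ports.conf - then hit Apache directly, bypassing nginx. Seeing the nginx welcome page usually means the default site is still enabled or server_name is not matching:

    # Is anything listening on 8080? Apache needs Listen 8080 in ports.conf.
    sudo netstat -tlnp | grep ':8080'

    # Hit Apache directly with the expected Host header, bypassing nginx.
    curl -v -H 'Host: www.mysite.com' http://127.0.0.1:8080/

    # If the nginx default site is shadowing the new server block, disabling
    # it is a common fix:
    sudo rm /etc/nginx/sites-enabled/default && sudo service nginx reload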

    Read the article

  • VMWare tools not installing with an error

    - by JDS
    VMware Tools is not installing on Ubuntu 12.04. I'm using Chef to manage the installation, but the apt commands fail when run manually too. I'm using the VMware Tools Debian repo. Example:
    $ cat /etc/apt/sources.list.d/vmware-tools-source.list
    deb http://packages.vmware.com/tools/esx/5.0u2/ubuntu precise main
    When trying to install, most packages seem to go in OK, but one, "vmware-tools-foundation", does not. Example:
    $ apt-get -q -y install vmware-tools-esx-nox=8.6.10-1.precise
    Reading package lists...
    Building dependency tree...
    Reading state information...
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
     vmware-tools-esx-kmods-3.2.0-23-generic : Depends: vmware-tools-foundation (>= 8.6.10) but it is not going to be installed
     vmware-tools-esx-nox : Depends: ...snip list of deps...
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    $ apt-get -f install
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following extra packages will be installed:
     vmware-tools-foundation
    The following NEW packages will be installed:
     vmware-tools-foundation
    0 upgraded, 1 newly installed, 0 to remove and 118 not upgraded.
    7 not fully installed or removed.
    Need to get 0 B/5,886 B of archives.
    After this operation, 86.0 kB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    (Reading database ... 103499 files and directories currently installed.)
    Unpacking vmware-tools-foundation (from .../vmware-tools-foundation_8.6.10-1.precise_all.deb) ...
    VMware Tools cannot install because it appears that another installation of VMware Tools is already present. Please remove the previous installation and then attempt to install this copy of VMware Tools again.
    dpkg: error processing /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb (--unpack):
     subprocess new pre-installation script returned error exit status 1
    Errors were encountered while processing:
     /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    The key seems to be this error: "VMware Tools cannot install because it appears that another installation of VMware Tools is already present. Please remove the previous installation and then attempt to install this copy of VMware Tools again." However, I've tried removing and purging, and I can't seem to trick VMware Tools into thinking the packages are gone. Apt thinks they are gone. Is there some service, file, cache, or lock left behind that makes VMware Tools think it is still installed? I've googled and googled, but there is no answer for my particular circumstances on the interwebs. VMware's documentation of this error is minimal.
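    A hedged guess that matches the preinst message: the packaged tools refuse to install when an earlier tarball install of VMware Tools is still on disk - something apt knows nothing about, which would explain why purging the packages changes nothing. A minimal sketch for finding and clearing such leftovers (the paths are the classic tarball installer's defaults; verify before deleting anything):

    # The tarball installer ships its own uninstaller; if it exists, use it.
    ls -l /usr/bin/vmware-uninstall-tools.pl && sudo /usr/bin/vmware-uninstall-tools.pl

    # Leftover directories the pre-installation script may be detecting.
    ls -ld /etc/vmware-tools /usr/lib/vmware-tools 2>/dev/null

    # After clearing them, retry the dependency fix-up:
    sudo apt-get -f install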

    Read the article

  • Problem with authentication of users via IE when using "host header value"

    - by Richard
    Hi, I'm trying to set up multiple web sites in IIS 6. I've got a working virtual site residing under the default web site, but if I create a new web site in IIS, assign it a host header value, point it at the very same file structure as the previously mentioned site, and finally assign Windows Integrated security only - I still cannot log in to the new site using MSIE 6 or 8, while FF 3.5 works fine. In the web log I get these entries if I access the localhost site:
    2009-11-19 09:15:59 W3SVC1 127.0.0.1 GET /client/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 2 2148074254
    2009-11-19 09:15:59 W3SVC1 127.0.0.1 GET /client/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 0
    2009-11-19 09:15:59 W3SVC1 127.0.0.1 GET /client/Default.asp - 80 xxx\Administrator 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 200 0 0
    If I instead access the site via its host header value, I get prompted to log in, but the login fails. I also get an error "401 1 2148074252" which is not present when login succeeds. Can this be the issue?
    Pre-login screen:
    2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 2 2148074254
    2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 2148074252
    2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 0
    Post-login screen (note that Windows credentials have not been submitted):
    2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 0
    2009-11-19 09:15:59 W3SVC1793297778 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+2.0.50727) 401 1 2148074252
    Firefox will try anonymous access first and then prompt for login; after I submit Windows credentials it all works fine. For what reason is IE so stubbornly refusing to submit credentials to the "host header value" site? The site is in the Local intranet zone and automatic logon is ticked for that zone. No teamed NICs, no firewall, no nothing - I'm clueless :( /Richard
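    A hedged pointer for anyone hitting the same symptom: IE prefers Kerberos for Integrated Windows Authentication, and Kerberos fails when the host header has no matching Service Principal Name, while Firefox falls back to NTLM - which fits IE failing only on the host-header site. Two standard IIS 6-era checks (the host name and machine account below are placeholders; the site ID 1793297778 is taken from the log above):

    REM Register an SPN for the host header so Kerberos can succeed
    setspn -A HTTP/www.yoursite.example WEBSERVER01

    REM Or restrict the site to NTLM (run from C:\Inetpub\AdminScripts)
    cscript adsutil.vbs set w3svc/1793297778/root/NTAuthenticationProviders "NTLM"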

    Read the article

< Previous Page | 293 294 295 296 297 298 299 300 301 302 303 304  | Next Page >