Search Results

Search found 17958 results on 719 pages for 'local delivery'.


  • Confirm disk is broken when it passes all diagnostics

    - by Halfgaar
    I have a system with a potentially broken disk, but the disk passes all manner of diagnostics. I have been unable to confirm that the disk is broken. What are my options? I could just replace the disk, but because this situation is very similar to another more severe situation I have (long story), I'd like to actually make a proper diagnosis as opposed to randomly binning hardware. The issue and history is this: I had a Debian Linux PC (500 MHz P3) acting as router and running Nagios and Munin. It crashed every couple of weeks. No logs or dmesg could be obtained (because it's an old Compaq that only boots when you configure it as keyboardless, making it impossible to connect a keyboard later, once it's booted). At the time, I just replaced the computer with another Compaq (P4 2.4 GHz) because I thought the hardware was faulty. However, it still crashed every couple of weeks. The difference is that on this computer I can still SSH into it. It gives all kinds of errors on hda. I'd like to confirm that the disk is broken, but nothing I do confirms this:
    - The SMART error log shows no errors. Normally when a disk starts acting up, SMART may still pass, but it records a read error in the error log.
    - A SMART self-test (smartctl -t long /dev/sda) completes without errors.
    - The re-allocated sector count (a tell-tale parameter) has been 31 all its life, even when the disk was still in use in my desktop PC years ago, and it still is. The figure never changed.
    - dd if=/dev/sda of=/dev/null bs=4096 passes with flying colors.
    What else can I do to assess the health of the drive? Again, this is not about making this router fully functional again; this is a disk forensics question, because it just so happens that I have another server that potentially has the same problem, and knowing the answer to this will possibly help me greatly. For the record, below are the logs. This is the smartctl -a output:
    smartctl 5.40 2010-07-12 r3124 [i686-pc-linux-gnu] (local build)
    Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
    === START OF INFORMATION SECTION ===
    Model Family: Seagate Barracuda 7200.7 and 7200.7 Plus family
    Device Model: ST3120026A
    Serial Number: 5JT1CLQM
    Firmware Version: 3.06
    User Capacity: 120,034,123,776 bytes
    Device is: In smartctl database [for details use: -P show]
    ATA Version is: 6
    ATA Standard is: ATA/ATAPI-6 T13 1410D revision 2
    Local Time is: Mon Jul 1 21:18:33 2013 CEST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    General SMART Values:
    Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled.
    Self-test execution status: ( 24) The self-test routine was aborted by the host.
    Total time to complete Offline data collection: ( 430) seconds.
    Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported.
    SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
    Error logging capability: (0x01) Error logging supported. No General Purpose Logging support.
    Short self-test routine recommended polling time: ( 1) minutes.
    Extended self-test routine recommended polling time: ( 85) minutes.
    SMART Attributes Data Structure revision number: 10
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x000f 050   046   006    Pre-fail Always  -           47766662
      3 Spin_Up_Time            0x0003 097   096   000    Pre-fail Always  -           0
      4 Start_Stop_Count        0x0032 100   100   020    Old_age  Always  -           10
      5 Reallocated_Sector_Ct   0x0033 100   100   036    Pre-fail Always  -           31
      7 Seek_Error_Rate         0x000f 084   060   030    Pre-fail Always  -           820305
      9 Power_On_Hours          0x0032 048   048   000    Old_age  Always  -           46373
     10 Spin_Retry_Count        0x0013 100   100   097    Pre-fail Always  -           0
     12 Power_Cycle_Count       0x0032 100   100   020    Old_age  Always  -           605
    194 Temperature_Celsius     0x0022 036   065   000    Old_age  Always  -           36
    195 Hardware_ECC_Recovered  0x001a 050   046   000    Old_age  Always  -           47766662
    197 Current_Pending_Sector  0x0012 100   100   000    Old_age  Always  -           0
    198 Offline_Uncorrectable   0x0010 100   100   000    Old_age  Offline -           0
    199 UDMA_CRC_Error_Count    0x003e 200   196   000    Old_age  Always  -           6
    200 Multi_Zone_Error_Rate   0x0000 100   253   000    Old_age  Offline -           0
    202 Data_Address_Mark_Errs  0x0032 100   253   000    Old_age  Always  -           0
    SMART Error Log Version: 1
    No Errors Logged
    SMART Self-test log structure revision number 1
    Num Test_Description  Status                        Remaining LifeTime(hours) LBA_of_first_error
    # 1 Extended offline  Aborted by host               80%       46361           -
    # 2 Extended offline  Completed without error       00%       46358           -
    # 3 Short offline     Completed without error       00%       12046           -
    # 4 Extended offline  Completed without error       00%       10472           -
    # 5 Short offline     Completed without error       00%       10471           -
    # 6 Short offline     Completed without error       00%       10471           -
    # 7 Short offline     Completed without error       00%       6770            -
    # 8 Extended offline  Aborted by host               90%       5958            -
    # 9 Extended offline  Aborted by host               90%       5951            -
    #10 Short offline     Completed without error       00%       5024            -
    #11 Extended offline  Aborted by host               80%       5024            -
    #12 Short offline     Completed without error       00%       3697            -
    #13 Short offline     Completed without error       00%       237             -
    #14 Short offline     Completed without error       00%       145             -
    #15 Short offline     Completed without error       00%       69              -
    #16 Extended offline  Completed without error       00%       68              -
    #17 Short offline     Completed without error       00%       66              -
    #18 Short offline     Completed without error       00%       49              -
    #19 Short offline     Completed without error       00%       29              -
    #20 Short offline     Completed without error       00%       29              -
    SMART Selective self-test log data structure revision number 1
    SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
    1    0       0       Not_testing
    2    0       0       Not_testing
    3    0       0       Not_testing
    4    0       0       Not_testing
    5    0       0       Not_testing
    Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay.
    And this is the dmesg error when it has crashed (which repeats for a bunch of different sectors):
    [1755091.211136] sd 0:0:0:0: [sda] Unhandled error code
    [1755091.211144] sd 0:0:0:0: [sda] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
    [1755091.211151] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 08 fe ad 38 00 00 08 00
    [1755091.211166] end_request: I/O error, dev sda, sector 150908216
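    One possible next step (a sketch, not a confirmed diagnosis; it assumes the disk is /dev/sda and can be taken offline): DID_BAD_TARGET is reported by the host side when the device stops responding sensibly, so it often implicates the drive electronics, cable or controller rather than the platters. Stressing the whole path while watching the SMART counters may shake something loose:

    ```
    # Non-destructive read-write surface scan (reads each block, writes a test pattern,
    # then restores the original data); run it only with the filesystem unmounted.
    badblocks -nsv /dev/sda

    # Let an extended self-test actually finish this time -- several earlier ones were
    # "Aborted by host" -- and then re-read the logs.
    smartctl -t long /dev/sda
    smartctl -l selftest /dev/sda
    smartctl -l error /dev/sda

    # Watch the kernel side for link resets or CRC errors while the tests run.
    dmesg | grep -iE 'ata|sda'
    ```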

    Read the article

  • Exchange Disconnecting on EHLO with remote telnet

    - by Timothy Baldridge
    When I go to the local terminal on my Exchange box (SBS 2008) I can do this:
    telnet 127.0.0.1 25
    220 Exchange banner here
    EHLO example.com
    250 Server name
    However, when I go from another box, or from the actual IP of the server, I get this:
    telnet 192.168.21.20 25
    220 Exchange banner here
    EHLO example.com
    421 4.4.1 Connection timed out
    Connection to host lost.
    The odd thing is, this server is currently in production and working fine (receiving mail for our entire domain). But my C# programs can't send mail to it (they get this same error). Any ideas?

    Read the article

  • Jenkins swarm-plugin jar file, won't run in background

    - by JeanMertz
    We're working on an automation script for our Jenkins slaves on a local Unix server. To connect the slaves to the Jenkins master, we use the swarm plugin. Setting up the master was easy, and connecting clients is also easy with a single command. However, when I try to get the slave command (a Java application) to run in the background without stalling the current process, it doesn't seem to work. I've created an init.d file and added it to update-rc.d, but that doesn't work:
    #!/bin/bash
    /usr/bin/java -jar /root/swarm-client-1.7-jar-with-dependencies.jar -executors 4
    I've also tried to run it with an ampersand & to start the process in the background, but that doesn't work either because - from looking at the source - the jar file actually boots another process that is then started in the foreground. Any ideas on how to make this jar file start without stopping the bootstrap script?
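    A minimal sketch of one way to detach the client, assuming the jar path from the question; the pidfile and log locations are illustrative, not part of the original setup:

    ```
    #!/bin/bash
    # Sketch of an init.d-style wrapper: nohup plus backgrounding detaches the swarm
    # client from the boot script, and stdout/stderr go to a log file instead of the
    # console. Note that swarm-client may spawn a child JVM, so the recorded pid is
    # the launcher rather than the worker process.
    case "$1" in
      start)
        nohup /usr/bin/java -jar /root/swarm-client-1.7-jar-with-dependencies.jar \
            -executors 4 >> /var/log/jenkins-swarm.log 2>&1 &
        echo $! > /var/run/jenkins-swarm.pid
        ;;
      stop)
        kill "$(cat /var/run/jenkins-swarm.pid)" 2>/dev/null
        ;;
    esac
    ```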

    Read the article

  • Oracle VM VirtualBox 4.0 Now Available

    - by Paulo Folgado
    Delivering on Oracle's commitment to open source, Oracle VM VirtualBox 4.0 is now available, further enhancing the popular, open source, cross-platform virtualization software. "Oracle VM VirtualBox 4.0 is the third major product release in just over a year, and adds to the many new product releases across the Oracle Virtualization product line, illustrating the investment and importance that Oracle places on providing a comprehensive desktop to datacenter virtualization solution," says Wim Coekaerts, senior vice president, Linux and Virtualization Engineering, Oracle. "With an improved user interface and added virtual hardware support, customers will find Oracle VM VirtualBox 4.0 provides a richer user experience." Part of Oracle's comprehensive portfolio of virtualization solutions, Oracle VM VirtualBox enables desktop or laptop computers to run multiple guest operating systems simultaneously, allowing users to get the most flexibility and utilization out of their PCs, and supports a variety of host operating systems, including Windows, Mac OS X, most popular flavors of Linux (including Oracle Linux), and Oracle Solaris. Oracle VM VirtualBox 4.0 delivers increased capacity and throughput to handle greater workloads, enhanced virtual appliance capabilities, and significant usability improvements. Support for the latest in virtual hardware, including chipsets supporting PCI Express, further extends the value delivered to customers, partners, and developers. Highlights of Oracle VM VirtualBox 4.0 include:
    - New Open Architecture - Oracle and community developers can now create extensions that customize Oracle VM VirtualBox and add features not previously available.
    - Enhanced Usability - A new scalable display mode enables users to view more virtual displays on their existing monitors. Improvements to VM management, including visual VM previews, an optional attributes display, and easy launch shortcut creation, enable administrators and power users to customize the interface to make it as simple or as comprehensive as required.
    - Increased Capacity and Throughput - A new asynchronous I/O model for networked (iSCSI) and local storage delivers significant storage-related performance improvements, while new optimizations allow larger datacenter-class workloads, such as Oracle's middleware, to be run on 32-bit Windows hosts for testing and demo purposes.
    - Powerful Virtual Appliance Sharing Capabilities - Enhanced support for standards-compliant OVF appliances and added support for OVA format descriptors. All information about a VM may be stored in a single folder to facilitate easier direct sharing among VMs.
    - Support for Latest Virtual Hardware - A new, modern virtual chipset supporting PCI Express and other hardware enhancements, including high-definition audio devices, helps ensure support for the most demanding virtual workloads.

    Read the article

  • Apache: Virtual Host and .htaccess for URL Rewriting not working

    - by parth
    I have configured a virtual host on my local machine and everything is working fine. Now I want to use SEO-friendly URLs. To achieve this I have used the .htaccess file. My virtual host configuration is:
    <VirtualHost *:80>
    DocumentRoot "C:/xampp/htdocs/ypp"
    ServerName ypp.com
    ServerAlias www.ypp.com
    ##ErrorLog "logs/dummy-host2.localhost-error.log"
    ##CustomLog "logs/dummy-host2.localhost-access.log" combined
    </VirtualHost>
    and my .htaccess file has:
    AllowOverride All
    RewriteEngine On
    RewriteBase /ypp/
    RewriteRule ^/browse$ /browse.php
    RewriteRule ^/browse/([a-z]+)$ /browse.php?cat=$1
    RewriteRule ^/browse/([a-z]+)/([a-z]+)$ /browse.php?cat=$1&subcat=$2
    The above .htaccess setting is not working. After that I modified my virtual host setting and it is working. The new virtual host setting is:
    <VirtualHost *:80>
    RewriteEngine On
    RewriteRule ^/browse$ /browse.php
    RewriteRule ^/browse/([a-z]+)$ /browse.php?cat=$1
    RewriteRule ^/browse/([a-z]+)/([a-z]+)$ /browse.php?cat=$1&subcat=$2
    ServerAdmin [email protected]
    DocumentRoot "C:/xampp/htdocs/ypp"
    ServerName ypp.com
    ServerAlias www.ypp.com
    ##ErrorLog "logs/dummy-host2.localhost-error.log"
    ##CustomLog "logs/dummy-host2.localhost-access.log" combined
    <Directory "C:/xampp/htdocs/ypp">
    AllowOverride All
    </Directory>
    </VirtualHost>
    Please let me know where I am going wrong in the .htaccess file for URL rewriting. I do not want to use the settings in the virtual host, since for every change I have to restart Apache.
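    Two details are worth checking here (a sketch of the usual fixes, not a tested answer for this exact setup): AllowOverride is only valid in the server configuration, inside a <Directory> block, so putting it in .htaccess normally causes a 500 error or leaves the file ignored; and in per-directory (.htaccess) context mod_rewrite strips the directory prefix before matching, so patterns anchored as ^/browse never match. Something along these lines, assuming the site is served from the vhost's document root:

    ```
    # httpd.conf / vhost: keep AllowOverride here so the .htaccess file is honoured.
    <VirtualHost *:80>
        DocumentRoot "C:/xampp/htdocs/ypp"
        ServerName ypp.com
        ServerAlias www.ypp.com
        <Directory "C:/xampp/htdocs/ypp">
            AllowOverride All
        </Directory>
    </VirtualHost>

    # C:/xampp/htdocs/ypp/.htaccess: no AllowOverride, no leading slash in the patterns.
    RewriteEngine On
    RewriteBase /
    RewriteRule ^browse$ browse.php [L]
    RewriteRule ^browse/([a-z]+)$ browse.php?cat=$1 [L]
    RewriteRule ^browse/([a-z]+)/([a-z]+)$ browse.php?cat=$1&subcat=$2 [L]
    ```

    (RewriteBase / assumes the application is reached at the root of ypp.com; if it actually lives under /ypp/, keep RewriteBase /ypp/ as in the question.)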

    Read the article

  • How would you shorten 5,000+ URLs? [closed]

    - by Tyler J Fisher
    How would you go about shortening approximately 5,000 permalinks? The links point to a remote media archiving server, and are unlikely to change. Example URLs:
    rtsp://foo-1.bar.com/xx/xx/xx/xx.rm
    http://media.foo.org/xx/xx/xx.mp4
    The URLs are going to be stored in a local MySQL database, so it's crucial that the URLs are in a manageable form (e.g. bit.ly or ow.ly). There are bulk URL shortening services, but those only allow shortening of 100 links/day, which isn't feasible, so I need to think of something else.
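    Since the links already live in one local MySQL database and never leave it, one self-hosted approach (a sketch; the database, table and host names below are made up) is to let an AUTO_INCREMENT id double as the short key -- 5,000 rows never need more than a few characters, and no external service or daily limit is involved:

    ```
    # Create the lookup table once.
    mysql -e "CREATE TABLE IF NOT EXISTS links (
                id  INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
                url TEXT NOT NULL);" mydb

    # Insert a long URL and print its short form; LAST_INSERT_ID() is per-connection,
    # so both statements run in the same mysql invocation.
    id=$(mysql -N -e "INSERT INTO links (url) VALUES ('rtsp://foo-1.bar.com/xx/xx/xx/xx.rm');
                      SELECT LAST_INSERT_ID();" mydb)
    printf 'http://media.example/%x\n' "$id"    # hex-encoding the id keeps the key short
    ```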

    Read the article

  • Surgemail DNS lookup failure

    - by Spencer Ruport
    Just curious if anyone has any experience with Surgemail. I've set it up a couple of times and never had an issue, but my latest install keeps leaving outgoing messages in the queue with the error "DNS Lookup Failed". I double-checked that the local DNS server is running and even tried switching the IPs to my ISP's DNS servers, but still no go.
    [DNS] Ok(avge) Bad(avge)
    76.227.63.137: 0(0.0s) 5(31.0s)
    76.227.63.254: 0(0.0s) 1(0.0s)
    Anyone have any ideas why this might be happening? Thanks.
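    The status block above shows failures and no successes against both resolvers, so before digging into Surgemail itself it may be worth confirming that MX lookups from that machine actually succeed against each configured server. A quick sketch (the domain is a placeholder):

    ```
    # Query each resolver the mail server is pointed at; an empty or slow answer here
    # means the problem is between this host and the DNS servers, not in the mail queue.
    for ns in 76.227.63.137 76.227.63.254; do
        echo "== $ns =="
        dig @"$ns" example.com MX +time=3 +tries=1
    done
    ```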

    Read the article

  • Coders For Charities

    - by Robz / Fervent Coder
    Last weekend I had the opportunity to give back to the community doing what I love. As geeks we don't usually have this opportunity. The event is called Coders 4 Charities (C4C), and it's a grueling weekend of nearly 30 hours of coding. When you finish, you get to present what you have completed to the charity and all of the other groups. From the site: Coders For Charities is a 3-day charity event that pairs charities and local software developers. Charities often do not have the funds to implement a new website or intranet or database solution. Software developers often do not volunteer for charities because their skills do not apply. This event is the perfect marriage of these two needs; software developers volunteering their time to help charities better serve their community through the latest technology! The actual event was lined with multiple charities and about 50 developers, designers, business analysts, etc., each working with a different charity to come up with a solution that they could implement in less than 3 days. C4C provided a place and food for us so that we wouldn't have to leave much during the time we had to implement our solution. They also provided games like Rock Band so we could get away and clear our minds for a few moments if necessary. I don't think we made it down there to play, but the food and drinks were a huge help for us. The charity we picked was Harvest Home. They had a need for an online intranet site where they could track membership and gardening. Over the next few days we worked on a site we could give them. Below is a screenshot with private data marked out. It was an awesome and humbling experience to be able to give back to a charity, and I'm happy I was a part of it. I would definitely do it again. How often do we get to use our abilities to volunteer our time for a charity?

    Read the article

  • Postfix on Snow Leopard unable to send MIME emails, including header contents in message body

    - by devvy
    I configured postfix on snow leopard by adding the following line to /etc/hostconfig: MAILSERVER=-YES- I then configured postfix to relay through my ISP's SMTP server. I added the following two lines in their respective places within /etc/postfix/main.cf: myhostname = 1and1.com relayhost = shawmail.vc.shawcable.net I then have a simple PHP mail function wrapper as follows: send_email("[email protected]", "[email protected]", "Test Email", "<p>This is a simple HTML email</p>"); echo "Done"; function send_email($from,$to,$subject,$message){ $header="From: <".$from."> "; $header.= 'MIME-Version: 1.0' . " "; $header.= 'Content-type: text/html; charset=iso-8859-1' . " "; $send_mail=mail($to,$subject,$message,$header); if(!$send_mail){ echo "ERROR"; } } With this, I am receiving an e-mail that appears to be improperly formatted. The message header is showing up in the body of the e-mail. The raw message content is as follows: Return-Path: <[email protected]> Delivery-Date: Tue, 27 Apr 2010 18:12:48 -0400 Received: from idcmail-mo2no.shaw.ca (idcmail-mo2no.shaw.ca [64.59.134.9]) by mx.perfora.net (node=mxus2) with ESMTP (Nemesis) id 0M4XlU-1NCtC81GVY-00z5UN for [email protected]; Tue, 27 Apr 2010 18:12:48 -0400 Message-Id: <[email protected]> Received: from pd6ml3no-ssvc.prod.shaw.ca ([10.0.153.149]) by pd6mo1no-svcs.prod.shaw.ca with ESMTP; 27 Apr 2010 16:12:47 -0600 X-Cloudmark-SP-Filtered: true X-Cloudmark-SP-Result: v=1.0 c=1 a=VphdPIyG4kEA:10 a=hATtCjKilyj9ZF5m5A62ag==:17 a=mC_jT1gcAAAA:8 a=QLyc3QejAAAA:8 a=DGW4GvdtALggLTu6w9AA:9 a=KbDtEDGyCi7QHcNhDYYwsF92SU8A:4 a=uch7kV7NfGgA:10 a=5ZEL1eDBWGAA:10 Received: from unknown (HELO 1and1.com) ([24.84.196.104]) by pd6ml3no-dmz.prod.shaw.ca with ESMTP; 27 Apr 2010 16:12:48 -0600 Received: by 1and1.com (Postfix, from userid 70) id BB08D14ECFC; Tue, 27 Apr 2010 15:12:47 -0700 (PDT) To: [email protected] Subject: Test Email X-PHP-Originating-Script: 501:test.php Date: Tue, 27 Apr 2010 18:12:48 -0400 X-UI-Junk: AutoMaybeJunk +30 (SPA); V01:LYI2BGRt:7TwGx5jxe8cylj5nOTae9JQXYqoWvG2w4ZSfwYCXmHCH/5vVNCE fRD7wNNM86txwLDTO522ZNxyNHhvJUK9d2buMQuAUCMoea2jJHaDdtRgkGxNSkO2 v6svm0LsZikLMqRErHtBCYEWIgxp2bl0W3oA3nIbtfp3li0kta27g/ZjoXcgz5Sw B8lEqWBqKWMSta1mCM+XD/RbWVsjr+LqTKg== Envelope-To: [email protected] From: <[email protected]> MIME-Version: 1.0 Content-type: text/html; charset=iso-8859-1 Message-Id: <[email protected]> Date: Tue, 27 Apr 2010 15:12:47 -0700 (PDT) <p>This is a simple HTML email</p> And here are the contents of my /var/log/mail.log file after sending the email: Apr 27 15:29:01 User-iMac postfix/qmgr[705]: 74B1514EDDF: removed Apr 27 15:29:30 User-iMac postfix/pickup[704]: 25FBC14EDF0: uid=70 from=<_www> Apr 27 15:29:30 User-iMac postfix/master[758]: fatal: open lock file pid/master.pid: unable to set exclusive lock: Resource temporarily unavailable Apr 27 15:29:30 User-iMac postfix/cleanup[745]: 25FBC14EDF0: message-id=<[email protected]> Apr 27 15:29:30 User-iMac postfix/qmgr[705]: 25FBC14EDF0: from=<[email protected]>, size=423, nrcpt=1 (queue active) Apr 27 15:29:30 User-iMac postfix/smtp[747]: 25FBC14EDF0: to=<[email protected]>, relay=shawmail.vc.shawcable.net[64.59.128.135]:25, delay=0.21, delays=0.01/0/0.1/0.1, dsn=2.0.0, status=sent (250 ok: Message 25784419 accepted) Apr 27 15:29:30 User-iMac postfix/qmgr[705]: 25FBC14EDF0: removed Two other people in the office have followed the exact same process and are running the exact same script, version of snow leopard, php, etc. and everything is working fine for them. 
I've even copied their config files to my machine, restarted postfix, restarted apache, all to no avail. Does anyone know what steps I could take to resolve the issue? This is boggling my mind... Thanks
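    One way to narrow this down (a sketch, with placeholder addresses) is to take PHP out of the picture and hand a correctly formed message straight to the local MTA. If that copy arrives with its headers intact, the problem is in how the PHP wrapper joins its header lines (each header must end with a newline/CRLF); if it is also mangled, the Postfix relay path deserves the attention instead:

    ```
    # Feed a complete message to the local sendmail binary; -t takes the recipients
    # from the headers themselves.
    printf '%s\n' \
        'From: sender@example.com' \
        'To: recipient@example.com' \
        'Subject: Test Email' \
        'MIME-Version: 1.0' \
        'Content-Type: text/html; charset=iso-8859-1' \
        '' \
        '<p>This is a simple HTML email</p>' |
    /usr/sbin/sendmail -t
    ```

    The "fatal: open lock file pid/master.pid: unable to set exclusive lock" line in mail.log is also worth a look; it usually means a second postfix master was started while one was already running, which is a separate issue from the header formatting.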

    Read the article

  • No External Network Access Through Ubuntu VPN

    - by trobrock
    I have set up pptpd as my VPN server on Ubuntu Server 9.04. I am able to connect to the VPN from the client and can access the server's local network, but I am unable to connect to the external network via the VPN. If I log in to the server via SSH:
    $ ping google.com
    PING google.com (74.125.67.100) 56(84) bytes of data.
    64 bytes from gw-in-f100.google.com (74.125.67.100): icmp_seq=1 ttl=49 time=65.9 ms
    64 bytes from gw-in-f100.google.com (74.125.67.100): icmp_seq=2 ttl=49 time=63.2 ms
    64 bytes from gw-in-f100.google.com (74.125.67.100): icmp_seq=3 ttl=49 time=63.9 ms
    64 bytes from gw-in-f100.google.com (74.125.67.100): icmp_seq=4 ttl=49 time=66.0 ms
    If I connect to the VPN and ping locally:
    $ ping google.com
    ping: cannot resolve google.com: Unknown host
    I have a feeling it is some routing issue on the server, but I am unsure.
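    The failing command is actually a DNS failure ("cannot resolve ... Unknown host") rather than a routing one, so two server-side things are worth checking: whether pptpd hands clients a usable resolver, and whether the server forwards and NATs the tunnel traffic at all. A sketch of the usual pieces, assuming eth0 is the external interface and that the LAN's own resolver (or a public one) should be pushed to clients:

    ```
    # 1. Allow the kernel to forward packets arriving from the ppp interfaces.
    echo 1 > /proc/sys/net/ipv4/ip_forward    # persist with net.ipv4.ip_forward=1 in /etc/sysctl.conf

    # 2. NAT the VPN clients out of the external interface.
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    # 3. Push DNS servers to PPTP clients in /etc/ppp/pptpd-options:
    #      ms-dns 192.168.1.1
    #      ms-dns 8.8.8.8
    ```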

    Read the article

  • Auto-mount in fstab no longer working until manually running 'sudo mount -a'

    - by Brett Alton
    I have 3 SMB shared drives I need to connect to for work purposes. I had Ubuntu 10.10 Maverick and had all my drives loaded into fstab to be auto-mounted. Everything worked fine for a while, but just before I upgraded to 11.04 Natty, the fstab auto-mount stopped working. Unfortunately I don't know what change I made to my machine or which update caused this to occur.
    /etc/fstab
    {snip}
    //192.168.7.3/apache_proj/ /home/brett/Desktop/apache smbfs guest,rw,iocharset=utf8,uid=1000,gid=1000 0 0
    //192.168.7.3/apache_54321/ /home/brett/Desktop/54321 smbfs guest,rw,iocharset=utf8,uid=1000,gid=1000 0 0
    //freenas.local/shared/ /home/brett/Desktop/shared smbfs guest,rw,iocharset=utf8,uid=1000,gid=1000 0 0
    //lamp/www/ /home/brett/Desktop/lamp smbfs username={snip},password={snip},rw,iocharset=utf8,uid=1000,gid=1000 0 0
    When the machine boots, I run this command to get them to mount:
    $ sudo umount /home/brett/Desktop/54321 /home/brett/Desktop/shared /home/brett/Desktop/apache; sudo mount -a
    [sudo] password for brett:
    umount: /home/brett/Desktop/54321: not mounted
    umount: /home/brett/Desktop/shared: not mounted
    umount: /home/brett/Desktop/apache: not mounted
    Warning: mapping 'guest' to 'guest,sec=none'
    Warning: mapping 'guest' to 'guest,sec=none'
    Warning: mapping 'guest' to 'guest,sec=none'
    mount error: could not resolve address for lamp: No address associated with hostname
    (I run that umount just in case.) I looked through dmesg and some error logs and couldn't see why fstab was failing on my mounts. I see that my 'lamp' entry is failing, but that's because that machine is currently down.
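    The pattern of "fails at boot, works later with mount -a" usually means the entries are tried before the network (or mDNS names like freenas.local) is available. A sketch of one fstab line rewritten accordingly; cifs and _netdev are the relevant changes, the rest is copied from the question, and whether this cures this particular upgrade regression is an assumption:

    ```
    # smbfs is deprecated on recent Ubuntu in favour of cifs, and _netdev marks the
    # entry as network-dependent so the boot sequence waits for networking first.
    //192.168.7.3/apache_proj/ /home/brett/Desktop/apache cifs guest,rw,iocharset=utf8,uid=1000,gid=1000,_netdev 0 0
    ```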

    Read the article

  • Does Test Driven Development (TDD) improve Quality and Correctness? (Part 1)

    - by David V. Corbin
    Since the dawn of the computer age, various methodologies have been introduced to improve quality and reduce cost. In this posting, I will be sharing my experiences with Test Driven Development; both its benefits and limitations. To start this topic, we need to agree on what TDD is. The first step is to define each of the three words as used in this context.
    Test - An item or action which measures something in some quantifiable form.
    Driven - The primary motivation or focus of a series of activities (process).
    Development - All phases of a software project/product from concept through delivery.
    The above are very simple definitions that result in the following: "TDD is a process where the primary focus is on measuring and quantifying all aspects of the creation of a (software) product." There are many places where TDD is used outside of software development, even though it is not known by this name. Consider the (conventional) education process that most of us grew up on. The focus was to get the best grades as measured by different tests. Many of these tests measured rote memorization and not understanding of the subject matter. The result of this is that many people graduated with high scores but without "quality and correctness" in their ability to utilize the subject matter (of course, the flip side is true where certain people DID understand the material but were not very good at taking this type of test). Returning to software development, let us look at some common scenarios. While these items are generally applicable regardless of platform, language and tools, the remainder of this post will utilize Microsoft Visual Studio and Team Foundation Server (TFS) for examples. It should be realized that everyone does at least some aspect of TDD. At the most rudimentary level, getting a program to compile involves a "pass/fail" measurement (is the syntax valid) that drives their ability to proceed further (run the program). Other developers may create "Unit Tests" in the belief that having a test for every method/property of a class and good code coverage is the goal of TDD. These items may be helpful and even important, but really only address a small aspect of the overall effort. To see TDD in a bigger view, let's identify the various activities that are part of the Software Development LifeCycle. These are going to be presented in a Waterfall style for simplicity, but each item also occurs within Iterative methodologies such as Agile/Scrum. The key ones here are:
    Requirements Gathering
    Architecture
    Design
    Implementation
    Quality Assurance
    Can each of these items be subjected to a process which establishes metrics (quantified metrics) that reflect both the quality and correctness of each item? It should be clear that conventional Unit Tests do not apply to all of these items; at best they can verify that a local aspect (e.g. a Class/Method) of implementation matches the (test writer's perspective of) the appropriate design document. So what can we do? For each of these areas, the goal is to create tests that are quantifiable and durable. The ability to quantify the measurements (beyond a simple pass/fail) is critical to tracking progress (eventually measuring the level of success that has been achieved) and for providing clear information on what items need to be addressed (along with the appropriate time to address them - in varying levels of detail). Durability is important so that the test can be reapplied (ideally in an automated fashion) over the entire cycle.
    Returning for a moment to our "education example", one must also be careful of how the tests are organized and how the measurements are taken. If a test is in a multiple choice format, there is a significant statistical probability that a correct answer might be the result of a random guess. Also, in many situations, having the student simply provide a final answer can obscure many important elements. For example, on a math test, having the student simply provide a numeric answer (rather than showing the methodology) may result in a complete mismatch between the process and the result. It is hard to determine which is worse: the student who makes a simple arithmetic error at one step of a long process (resulting in a wrong answer), or the student who (without providing the "workflow") uses a completely invalid approach, yet still comes up with the right number. The "Wrong Process"/"Right Answer" is probably the single biggest problem in software development. Even very simple items can suffer from this. As an example, consider the following code for a "straight line" calculation... Is it correct (for integral points)?
    int Solve(int m, int b, int x) { return m * x + b; }
    Most people would respond "Yes". But let's take the question one step further... Is it correct for all possible values of m, b, x??? (No fair if you cheated by being focused on the bolded text!) Without additional information regarding constraints on "the possible values of m, b, x", the answer must be NO; there is the risk of overflow/wraparound that will produce an incorrect result! To properly answer this question (i.e. test the code), one MUST be able to backtrack from the implementation through the design and architecture all the way back to the requirements. And the requirement itself must be tested against the stakeholder(s). It is only when the bounding conditions are defined that it is possible to determine if the code is "Correct" and has "Quality". Yet, how many of us (myself included) have written such code without even thinking about it? In many cases we (think we) "know" what the bounds are, and that the code will be correct. As we all know, requirements change, "code reuse" causes implementations to be applied to different scenarios, etc. This leads directly to the types of system failures that plague so many projects. This approach to TDD is much more holistic than ones which start by focusing on the details. The fundamental concepts still apply: Each item should be tested. The test should be defined/implemented before (or concurrent with) the definition/implementation of the actual item. We also add concepts that expand the scope and alter the style by recognizing:
    - There are many things besides "lines of code" that benefit from testing (measuring/evaluating in a formal way).
    - Correctness and Quality cannot be solely measured by "correct results".
    In future parts, we will examine in greater detail some of the techniques that can be applied to each of these areas...

    Read the article

  • Emacs: methods for debugging python

    - by Tom Willis
    I use Emacs for all my code editing needs. Typically I will use M-x compile to run my test runner, which I would say gets me about 70% of what I need to do to keep the code on track. However, lately I've been wondering how it might be possible to use M-x pdb on occasions where it would be nice to hit a breakpoint and inspect things. In my googling I've found some things that suggest that this is useful/possible. However, I have not managed to get it working in a way that I fully understand. I don't know if it's the combination of buildout + App Engine that might be making it more difficult, but when I try to do something like M-x pdb, Run pdb (like this):
    /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/
    where .../bin/python is the interpreter buildout makes with the path set for all the eggs, and ~/bin/pdb is a simple script to call into pdb.main using the current Python interpreter:
    HellooKitty:hydrant twillis$ cat ~/bin/pdb
    #! /usr/bin/env python
    if __name__ == "__main__":
        import sys
        sys.version_info
        import pdb
        pdb.main()
    HellooKitty:hydrant twillis$
    .../bin/devappserver is the dev_appserver script that the buildout recipe makes for the GAE project, and .../parts/hydrant-app is the path to the app.yaml. I am first presented with a prompt:
    Current directory is /Users/twillis/bin/
    C-c C-f Nothing happens, but
    HellooKitty:hydrant twillis$ ps aux | grep pdb
    twillis 469 100.0 1.6 168488 67188 s002 Rs+ 1:03PM 0:52.19 /usr/local/bin/python2.5 /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/
    twillis 477 0.0 0.0 2435120 420 s000 R+ 1:05PM 0:00.00 grep pdb
    HellooKitty:hydrant twillis$
    so something is happening. C-x [space] will report that a breakpoint has been set, but I can't manage to get things going. It feels like I am missing something obvious here. Am I? So, is interactive debugging in Emacs worthwhile? Is interactive debugging of a Google App Engine app possible? Any suggestions on how I might get this working?
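    One simplification worth trying (a sketch; the paths are taken from the question, and whether it plays well with dev_appserver is an assumption): skip the ~/bin/pdb wrapper and let the buildout interpreter load pdb as a module, so gud/pdb in Emacs talks to a single process and the egg paths baked into bin/python are preserved. At the "Run pdb (like this):" prompt after M-x pdb, give:

    ```
    /Users/twillis/projects/hydrant/bin/python -m pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/
    ```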

    Read the article

  • Detecting man-in-the-middle attacks?

    - by Ilari Kajaste
    There seem to be many possible ways to create man-in-the-middle attacks on public access points, by stealing the access point's local IP address with ARP spoofing. The possible attacks range from forging password request fields, to changing HTTPS connections to HTTP, and even the recently discovered possibility of injecting malicious headers at the beginning of secure TLS connections. However, it seems to be claimed that these attacks are not very common. It would be interesting to see for myself. What ways are there to detect if such an attack is being attempted by someone on the network? I guess getting served a plain HTTP login page would be an obvious clue, and of course you could run Wireshark and keep reading all the interesting ARP traffic... But an automated solution would be a tiny bit more handy. Something that analyzes stuff in the background and alerts if an attack is detected on the network. It would be interesting to see for myself if these attacks are actually going on somewhere.
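    For the automated, run-in-the-background part, one long-standing option is arpwatch, which records IP-to-MAC pairings on an interface and logs (or mails) a "changed ethernet address" event when, say, the gateway's MAC suddenly moves -- which is what ARP-spoofing a hotspot looks like from a client. A sketch, with the interface name and file paths as assumptions:

    ```
    # Track ARP activity on the wireless interface and keep the pairings in a file.
    sudo arpwatch -i wlan0 -f /var/lib/arpwatch/wlan0.dat

    # Events arrive via syslog; a "changed ethernet address" line for the router's IP
    # is the one to alert on.
    tail -f /var/log/syslog | grep -i arpwatch
    ```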

    Read the article

  • Setting alias for DynDNS domain

    - by metalball
    Hey all, I've created a DynDNS domain for testing my local sites, and I'm having trouble with pointing the root domain. From my registrar (GoDaddy) I've created a CNAME for www pointing to example.dyndns.com, so going to the URL www.example.com reaches my site. But if I go to example.com I reach the IP of the A record. I can't set the IP of the A record to my own IP because I have a dynamic IP that changes constantly, and I can't point the A record to a domain, only to an IP. When trying to create a CNAME record for @ pointing to example.dyndns.com, I get the error "A record of a different type exists for the hostname @, could not create CNAME". The only records using the '@' host are NS records, which I can't delete, and when I tried to set another NS record with @ pointing to example.dyndns.com, I lost the connection to my site :) So what can I do to get the example.com URL to reach my site? Thanks!

    Read the article

  • Proxification rule for System process

    - by kseen
    I'm trying to configure Microsoft Visual Studio 2010 remote debugging and ran into an issue: while connecting to the remote computer running MSVSMON, the client computer sends a SYN request for the connection. It does so under the System process (as I see it in TCPView). As every network app in our network should be configured to use the proxy, I'm trying to add devenv.exe to the proxification rules to make its traffic go through the LAN's proxy server. It doesn't help. So my question is: how can I make that low-level system traffic go through the local area network proxy server?

    Read the article

  • Improper output in SSH session on OSX using FreeSSHd on Windows with cygwin bash/sh shell

    - by Tyler Clendenin
    I am testing out running an SSH server on a local Windows VM. I have installed FreeSSHd and set the command shell to "c:\cygwin\bin\sh --login -i" (bash as well) with "Use new console engine" unchecked. (When it was enabled, no output would show through the SSH connection anyway.) The shell seems to work, but when connecting from my OS X terminal using ssh, all of the shell output comes out ill-formatted:
    $ ls -al
    total 17
    drwxr-xr-x+ 1 SYSTEM Administrators 4096 Feb 2 01:00 .
    drwxrwxrwt+ 1 Administrator Administrators 0 Feb 2 01:01 ..
    -rw------- 1 SYSTEM Administrators 128 Feb 2 01:30 .bash_history
    -rwxr-xr-x 1 SYSTEM Administrators 1150 Feb 2 00:55 .bash_profile
    -rwxr-xr-x 1 SYSTEM Administrators 3754 Feb 2 00:55 .bashrc
    -rwxr-xr-x 1 SYSTEM Administrators 1461 Feb 2 00:55 .inputrc
    Any ideas on why this is happening and how I can fix it?

    Read the article

  • Can connect to DNS addresses typed in the URL but not by IP addresses

    - by Ben
    I just changed my modem over to bridged mode, and changed my wireless router to PPPoE. My PC's IP address is reserved, and port 80 is forwarded to my computer's IP address based on my MAC address. I have a problem, however. I cannot access my local webserver by public IP address, or my router at 192.168.0.1, wirelessly from any other computer or iPad. I can, however, connect from this PC, which is connected to the wireless router via Ethernet. Via wireless it says it cannot connect, but DNS names (e.g. google.com, etc.) work. Any ideas?

    Read the article

  • Can't make SELinux context types permanent with semanage

    - by Safado
    I created a new folder at /modevasive to hold my mod_evasive scripts and its log directory. I'm trying to change the context type to httpd_sys_content_t so Apache can write to the folder. I ran
    semanage fcontext -a -t "httpd_sys_content_t" /modevasive
    to change the context and then
    restorecon -v /modevasive
    to apply the change, but restorecon didn't do anything. So I used chcon to change it manually, ran restorecon to see what would happen, and it changed it back to default_t. semanage fcontext -l gives:
    /modevasive/ all files system_u:object_r:httpd_sys_content_t:s0
    And looking at /etc/selinux/targeted/contexts/files/file_contexts.local gives:
    /modevasive/ system_u:object_r:httpd_sys_content_t:s0
    So why does restorecon keep setting it back to default_t?
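    One thing that commonly bites here (a sketch of the usual fix, not a verified diagnosis of this box): the stored file-context spec must actually match the path restorecon is asked to relabel, and a spec for the bare directory does not cover anything created inside it. The listing shows the spec stored as "/modevasive/" with a trailing slash, which a relabel of "/modevasive" will not match. Registering the rule as a regex and relabelling recursively sidesteps both problems:

    ```
    # Remove the old rule (use the spec exactly as it appears in "semanage fcontext -l"),
    # add one that covers the directory and its contents, then relabel recursively.
    semanage fcontext -d "/modevasive/"
    semanage fcontext -a -t httpd_sys_content_t "/modevasive(/.*)?"
    restorecon -Rv /modevasive

    # Note: httpd_sys_content_t is read-only to Apache; if httpd really has to write
    # logs there, httpd_sys_rw_content_t (or a dedicated log type) is the type for that.
    ```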

    Read the article

  • Slower than expected 802.11n wireless network speeds

    - by Ian
    I have two ASUS laptops running Windows 7 connected wirelessly via 802.11n at 150 Mbit, as reported by Task Manager. The router is a Netgear WNDR3700. When testing the wireless connection speed using iperf, I'm not getting nearly 150 Mbit:
    C:\>iperf -c 10.0.0.123 -t 30
    ------------------------------------------------------------
    Client connecting to 10.0.0.123, TCP port 5001
    TCP window size: 8.00 KByte (default)
    ------------------------------------------------------------
    [148] local 10.0.0.116 port 53819 connected with 10.0.0.123 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [148]  0.0-30.0 sec  41.2 MBytes  11.5 Mbits/sec
    That's a typical result. Running parallel client threads does not increase the overall total speed. Why would I only be getting 11.5 Mbit on a 150 Mbit connection?
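    Two things in that output are worth noting before blaming the radio link (a sketch of a follow-up test; the window size is an arbitrary larger value, not a tuned one): the run used iperf's default 8 KByte TCP window, which caps a single stream at roughly window size divided by round-trip time and can land in the low tens of Mbit/s with a few milliseconds of Wi-Fi latency, and 150 Mbit is the 802.11n PHY rate, of which real TCP goodput is normally well below even under good conditions.

    ```
    REM Re-run with a much larger TCP window, and test each direction separately.
    iperf -c 10.0.0.123 -t 30 -w 256k
    iperf -c 10.0.0.123 -t 30 -w 256k -r
    ```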

    Read the article

  • Mexico leading in Business Transformation Strategies:

    - by [email protected]
    By John Burke, Group Vice President, Oracle Applications Business Unit. I recently completed a business tour in Mexico, and was surprised by both the economic vibrancy of the country and the thought leadership expressed by many of the customers I met. An example of the economic vibrancy of the country: across the street from my hotel were the local Bentley dealership, the Coach store, Yves Saint Laurent and, of course, a Starbucks. I only made it to Starbucks. Both the Coach store and YSL had a line of folks waiting to get in... As for thought leadership, there were several illustrations on the first day alone. I had the opportunity to meet with a branch of the Mexican Federal Government. Their questions were not about clerical task automation, far from it! We discussed citizen online access to fees and services - for example, looking up the duty on an international goods shipment, tracking that my taxes have been received, or the status of my request for a certain service. Eligibility, policies and status. Having an integrated rules or policy automation system would allow businesses and citizens to access accurate information and ensure the proper collection of fees and payment for third-party provided services. Then in the afternoon, I met with the owner of a roofing company (note: most roofs in Mexico are flat and made of cement). This CEO started discussing how he wanted to transform his business from a cement products company into a service company and market 5-10-15 year service contracts which would guarantee the structural integrity of the roof and, of course, that the roof would remain waterproof. Although his products were guaranteed, they required an annual inspection, and most home owners never schedule that inspection until it is too late and water damage has occurred. These emergency calls reduce his margin and reduce customer satisfaction. This led to a discussion of business models in general and why long-term differentiation can only come from service - not just for the music or news industries, but also for roofing companies! I completely agreed with the transformational concepts described in both meetings and quickly understood why there is a Bentley dealership near my hotel.

    Read the article

  • Subdomains for different applications on Windows Server 2008 R2 with Apache and IIS 7 installed

    - by Yusuf
    I have a home server on which I have installed Apache and several other applications that have a web GUI (JDownloader, Free Download Manager). In order to access each of these apps (whether from the local network or the Internet), I have to enter a different port, e.g.:
    http://server:8085 or http://xxxx.dyndns.org:8085 for Apache
    http://server:90 or http://xxxx.dyndns.org:90 for FDM
    http://server:8081 or http://xxxx.dyndns.org:8081 for JDownloader
    I would like to be able to access them using subdomains, e.g.:
    http://apache.server or http://apache.xxxx.dyndns.org for Apache
    http://fdm.server or http://fdm.xxxx.dyndns.org for FDM
    http://jdownloader.server or http://jdownloader.xxxx.dyndns.org for JDownloader
    First of all, would it be possible the way I want it, i.e. both from the LAN and from the Internet, and if so, how? Even if it's possible only from the Internet, I would like to know how to do it, if there's a way.
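    The usual way to do this with the software already installed is to make Apache the single entry point on port 80 and let it reverse-proxy each host name to the matching local port. A sketch (mod_proxy and mod_proxy_http must be enabled; the host names are the dyndns examples from the question and must actually resolve - publicly for the Internet side, and via hosts-file or local DNS entries on the LAN - and it assumes Apache itself listens on port 80):

    ```
    <VirtualHost *:80>
        ServerName fdm.xxxx.dyndns.org
        ProxyPass        / http://localhost:90/
        ProxyPassReverse / http://localhost:90/
    </VirtualHost>

    <VirtualHost *:80>
        ServerName jdownloader.xxxx.dyndns.org
        ProxyPass        / http://localhost:8081/
        ProxyPassReverse / http://localhost:8081/
    </VirtualHost>
    ```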

    Read the article

  • Block diagram containing computer buses,motherboard..

    - by learnerforever
    Hi all, I am trying to understand computer architecture. In particular:
    - The physical view, i.e. what is packed inside the motherboard and what sits outside it.
    - The conceptual view: how the processor, memory and peripherals are connected.
    I am getting confused by the various buses, like the local bus, PCI bus, SCSI bus, ISA bus and USB bus. I am looking for block diagrams. How is the USB port ultimately connected to the processor? Through the PCI bus? Why do we have so many buses? What was it like before SCSI/IDE came along? Does the diagram at http://www.vaughns-1-pagers.com/computer/pc-block-diagram.htm look correct? It shows no connection between the PCI bus and the SCSI bus. Is that correct? I would greatly appreciate any other links, especially block diagrams/anatomy rather than just textual write-ups. Thanks.

    Read the article

  • How to enable mod_info in Apache?

    - by Amit Nagar
    I have gone through the Apache guide to enable mod_info. As per the doc: to configure mod_info, add the following to your httpd.conf file:
    <Location /server-info>
    SetHandler server-info
    </Location>
    You may wish to use mod_access inside the directive to limit access to your server configuration information:
    <Location /server-info>
    SetHandler server-info
    Order deny,allow
    Deny from all
    Allow from yourcompany.com
    </Location>
    Once configured, the server information is obtained by accessing http://your.host.dom/server-info. In my case this link is not giving any info; I get an HTTP 404 NOT FOUND error. Is there anything I need to install, such as mod_info.c? Is there anything I need to add, such as an AddModule line? In the error log:
    File does not exist: /usr/local/apache2/htdocs/example1/server-info
    I have 3 virtual hosts, one of which is the default and uses example1 as its DocumentRoot. I am not sure where this page (server-info) should be. In the case of server-status, it's working fine.
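    Since server-status works but server-info falls through to the filesystem ("File does not exist: .../server-info"), the handler itself is most likely not registered, which usually means mod_info is not compiled in or not loaded. A sketch of what to check, with the binary and module paths as assumptions (they vary by build):

    ```
    # Which modules are compiled in / loaded?
    /usr/local/apache2/bin/httpd -l
    /usr/local/apache2/bin/httpd -M    # on builds that support it; also lists DSOs

    # If mod_info is missing, load it in httpd.conf (the .so path depends on the build),
    # keep the <Location /server-info> block exactly as quoted above, and restart:
    # LoadModule info_module modules/mod_info.so
    ```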

    Read the article

  • Can't RDP into new (or old) Azure VM

    - by Raif
    I have an Azure account with a VM on it. I haven't used it in about 8 months. I tried to connect today but it won't take my credentials. Now, I'm not entirely sure that I have my password correct - pretty sure, but not entirely. So I created a new VM and set the password, clicked the Connect button in the portal window, tried to connect, and was rejected using the password I know to be correct. I have disabled my local machine's firewall and antivirus.

    Read the article
