Search Results

Search found 5908 results on 237 pages for 'cody short'.


  • IP Micro-outages, telephone micro-outages, and CATV micro-outages

    - by Michael Graff
    This is a long and complicated question, mostly because it has been going on for 2.5 years without a solution in sight. It is also only one-third computer related; the other two-thirds are cable TV and cable-phone related.
    Background: I have COX Communications for a cable provider, and we get Internet, digital cable TV, and digital phone service through them. The Internet is an SB5101 right now, and has been a DPC2100 and SB5120 in the past. Same results. The phone service is provided through a telephone interface mounted on the outside of the house (not classic VoIP) and the CATV is through a Scientific Atlanta receiver without DVR. I do have a TiVo connected to the CATV box.
    Symptoms: The CATV shows "blocking" -- sometimes of very short duration, where a few blocks appear on the screen. Sometimes it lasts long enough that the video "pauses" for 2-5 seconds, and rarely, though not never, the audio also fails. The CATV decoder box shows no correctable (FEC) or uncorrectable errors. That is, all BER counters are zero for the video stream. The Internet shows "micro-outages" where it appears that sent packets are not making it out, but I continue to receive packets from local modems. That is, pings stop coming back, but I continue to see modems broadcast for DHCP, and sometimes they ask more than once. The cable modem shows no errors during this time, but cable modems lie like you would not believe. It is actually possible to unplug the coax from the modem for 20 seconds and it reports NO ERRORS to the provider's tools. The phone service cuts out for 1-3 seconds, infrequently. When this happens, I hear NOTHING (not even comfort noise) and the remote side hears a "click" as if I were getting a call waiting message. However, there is no call incoming, other than the one I'm currently on of course. Things SEEM to happen more frequently when the temperature outside swings from cold to warm, so fall/spring seems worse than summer/winter. All micro-outages occur between once or twice a day (which I could ignore) and 10 times per hour. All SNR, signal levels, noise levels, etc. show very close to optimal when measured.
    COX's diagnosis: This is a continual pain for me. Over the last 2.5 years, they have opened tickets, "fixed" something, and closed them. They close a ticket without confirming that things are indeed better, and when I ask to reopen it they cannot; instead they open a new ticket and send yet another low-level tech out to do the same signal tests and report that all is OK. I've finally gotten a line tech who has a clue and is motivated enough to pursue this with me. We have tried things like switching the local nodes over to UPS and generator power, but this does not trigger the noise. We have tried replacing all cabling, the tap outside my house, the modem, the CATV decoder -- all without resolution. Recently they have decided it is my computer or switch, my TiVo, and my phone that are all broken and causing this issue.
    My debugging steps: I spent the worst day of my TV-watching life yesterday and part of today. I watched live TV without the TiVo. I witnessed blocking, but it did "feel different" and was actually more severe. Some days it is better, some days it is worse, so perhaps this was just a very bad day. Today, I connected the TiVo to my DVD player, and ran two very long movies through it. I saw no blocking at all during nearly 6 hours of video.
    Suggestions? Does anyone have any suggestions on what to do next?
I understand perhaps only the IP side can be addressed here, but it is one of the more limiting debugging options.
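    For the IP side, the kind of thing I have in mind is a simple outage logger, so the timestamps can be lined up against the CATV blocking and the phone clicks. This is only a rough sketch (Linux ping syntax; the 8.8.8.8 target is a placeholder, and the first hop past the modem would be a better choice):
      #!/bin/sh
      # Log only the seconds where a one-second ping gets no reply, so the log
      # can be correlated with TV/phone dropouts later.
      TARGET=8.8.8.8    # placeholder; substitute the CMTS or first upstream hop
      while true; do
          if ! ping -c 1 -W 1 "$TARGET" > /dev/null 2>&1; then
              echo "$(date '+%Y-%m-%d %H:%M:%S') no reply from $TARGET" >> "$HOME/outages.log"
          fi
          sleep 1
      done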

    Read the article

  • Issues configuring Exchange 2010 as well as SSL problems.

    - by Eric Smith
    Possibly-Relevant Background Info: I've recently moved up from icky shared hosting to a glorious, Remote Desktop-administrated VPS server running Windows Server 2008 R2. Even though I'm only 21 now and a computer science major, I've tried to play with every Windows Server release since '03, just to learn new things. What usually happens is that I'll inevitably do something wrong and pretty much ruin the install. You're dealing with an amateur here :) Through the past few months of working with my new server, I've mastered DNS, IIS, got Team Foundation Server running (yay!), and can install all of the other basics like SQL Server and Active Directory.
    The Problem: Now, these last few weeks I've been trying to install Exchange Server 2010 (SP1). To make a long story short, it took me several attempts, and I even had to get my server wiped just so I could start fresh since Exchange decided uninstalling properly was for sissies (cost me $20, bah). Today, at long last, I got Exchange mostly working. There were two main problems left, however, that left me unsatisfied:
    1. Exchange installed itself and all of its child sites into Default Web Site. I wanted to access Exchange via mail.domain.com, but instead everything was configured to domain.com. My limited server admin knowledge was not enough to configure IIS or Exchange to move itself over to the website I had set up for it, appropriately titled 'mail.domain.com', which I had bound to a dedicated IP address (I was told this was necessary, but that may have been wrong).
    2. I have two SSL certificates: one for my main domain and one for my mail subdomain. For whatever reason, I had issues getting Exchange to use my mail certificate, even though I had assigned the proper roles in the MMC. I did, at one point, get it to work (or mostly work, anyways. Frankly, my memory of today is clouded by intense frustration). Additionally, I was confused about which type of SSL certificate I should be using for Exchange. My SSL provider, GoDaddy, allows me to request a new certificate whenever, so I can use either the certificate request provided by IIS or the more complicated and specific request you can create with Exchange. Which type should I be using, the IIS or Exchange certificate? If I must use the Exchange certificate, will that 1) cause issues when I bind that certificate to my mail.domain.com subdomain or 2) is that an unnecessary step?
    The SSL Certificate Strikes Back: When I thought I had the proper SSL certificate assigned for those brief, sweet moments, Google Chrome reported the correct mail.domain.com certificate when browsing https://mail.domain.com. However, Outlook 2010 threw up an error when trying to configure my email account, claiming that the certificate didn't match the domain of "mail.domain.com". Is this an issue that will be resolved by problem #2 or is it a separate one entirely?
    Apologies for the massive wall of text, but I wanted to provide as much info as I possibly could. Exchange is the last thing I'd like installed on my server, and naturally it's turning out to be the hardest. Thanks for any info at all. Even a point in a vague direction would be a huge help at this point. Thanks! -Eric
    P.S.: The reason I keep ruining my install is that when I attempt to uninstall Exchange, something invariably goes wrong. The last time the uninstaller complained that there was still a mailbox active and it couldn't proceed until I deleted it. ... The only mailbox left was the Administrator account, the built-in one I couldn't delete.
    So I attempted to manually uninstall it following several guides online, only to end up stuck, unable to launch the installer, and having to get my system wiped AGAIN, for the second time today ($40 down the drain, bah!). I do not understand at all why "uninstall" just can't mean "hey, you, delete everything and go away". There's not even a force uninstall option, only a "recover system" option that just fails to fix anything and makes it so I can't even use the GUI uninstaller. </rant>
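    As a sanity check on the certificate mismatch Outlook reports, a rough check along these lines (run from any machine with OpenSSL; mail.domain.com is my placeholder name from above) should show exactly which certificate, subject, and SANs the server is actually presenting on port 443:
      # Capture the certificate actually served, then print its subject, dates and SANs
      openssl s_client -connect mail.domain.com:443 -servername mail.domain.com < /dev/null 2>/dev/null > /tmp/mailcert.txt
      openssl x509 -in /tmp/mailcert.txt -noout -subject -dates
      openssl x509 -in /tmp/mailcert.txt -noout -text | grep -A1 "Subject Alternative Name"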

    Read the article

  • All client browsers repeatedly asking for NTLM authentication when running through local proxy server

    - by Marko
    When pointing browsers through the local proxy to the internet, some but not all clients are being repeatedly prompted to authenticate to the proxy server. I have inspected the headers using Firefox's Live HTTP Headers as well as Fiddler, and in all cases the authentication prompts happen when requesting SSL resources. An example of this would be as follows:
      GET http://gmail.google.com/mail/ HTTP/1.1
      Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/x-ms-application, application/x-ms-xbap, application/vnd.ms-xpsdocument, application/xaml+xml, */*
      Accept-Language: en-gb
      User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
      Accept-Encoding: gzip, deflate
      Proxy-Connection: Keep-Alive
      Host: gmail.google.com

      GET http://gmail.google.com/mail/ HTTP/1.1
      Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/x-ms-application, application/x-ms-xbap, application/vnd.ms-xpsdocument, application/xaml+xml, */*
      Accept-Language: en-gb
      User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
      Accept-Encoding: gzip, deflate
      Proxy-Connection: Keep-Alive
      Host: gmail.google.com
      Proxy-Authorization: NTLM TlRMTVNTUAABAAAAB7IIogkACQAvAAAABwAHACgAAAAFASgKAAAAD1dJTlhQMUdGTEFHU0hJUDc=

      GET http://gmail.google.com/mail/ HTTP/1.1
      Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/x-ms-application, application/x-ms-xbap, application/vnd.ms-xpsdocument, application/xaml+xml, */*
      Accept-Language: en-gb
      User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
      Accept-Encoding: gzip, deflate
      Proxy-Connection: Keep-Alive
      Proxy-Authorization: NTLM TlRMTVNTUAADA (more stuff goes here I cut it short)
      Host: gmail.google.com
    At this point the username and password prompt has appeared in the browser. It does not matter what is typed into this box; whether correct credentials or random nonsense, the browser does not accept anything, and the prompt just keeps popping up. If I press cancel, I sometimes get an HTTP 407 error, but on other occasions when I click cancel the website proceeds to load and display normally. This is repeatable with some clients running through my proxy server, but in other cases it does not happen at all. In the cases where a client computer works normally, the only difference I can see is that the 3rd request for the SSL resource comes back with a 200 response, see below:
      CONNECT gmail.google.com:443 HTTP/1.0
      User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0; MALC)
      Proxy-Connection: Keep-Alive
      Content-Length: 0
      Host: gmail.google.com
      Pragma: no-cache
      Proxy-Authorization: NTLM TlRMTVNTUAADAAAAGAAYAIAAAA
      A SSLv3-compatible ClientHello handshake was found.
    I have tried resetting user accounts as well as computer accounts in Active Directory. User accounts and passwords that are being used are correct and the passwords have been reset so they are not out of sync. I have removed the clients and even the proxy server from the domain, and rejoined them.
    I have installed a completely separate proxy server and get exactly the same problem when I point clients to it on a different IP address.
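    To take the browsers out of the equation, a rough reproduction from the command line would look something like this (the proxy host/port and DOMAIN\user are placeholders, and it needs a curl build with NTLM/SSPI support); the idea is to watch the 401/407 exchange verbosely for the same HTTPS resource:
      # Force an NTLM proxy handshake for the same resource and watch the responses
      curl -v --proxy http://proxy.example.local:8080 \
           --proxy-ntlm --proxy-user 'DOMAIN\user' \
           -o /dev/null https://gmail.google.com/mail/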

    Read the article

  • Inconsistent file downloads of (what should be) the same file

    - by Austin A.
    I'm working on a system that archives large collections of timestamped images. Part of the system deals with saving an image to a growing .zip file. This morning I noticed that the log system said that an image was successfully downloaded and placed in the zip file, but when I downloaded the .zip (from an apache alias running on our server), the images didn't match the log. For example, although the log said that camera 3484 captured images on January 17, 2011, when I download from the apache alias, the downloaded zip file only contains images up to January 14. So, I sshed onto the server, and unzipped the file in its own directory, and that zip file has images from January 14 to today (January 17). What strikes me as odd is that this should be the exact same file as the one I downloaded from the apache alias.
    Other experiments: I scp-ed the file from the server to my local machine, and the zip file has the newer images. But when I use an SCP client (in this case, Fugu for OSX), I get the zip file for the older images. In short: unzipping a file on the server, or after downloading through scp, or after downloading through wget gives one zip file, but unzipping a file from Chrome, Firefox, or an SCP client gives a different zip file, when they should be exactly the same.
    Unzipping on the server...
      [user@server ~]$ cd /export1/amos/images/2011/84/3484/00003484/
      [user@server 00003484]$ ls -la
      total 6180
      drwxr-sr-x 2 user groupname 24 Jan 17 11:20 .
      drwxr-sr-x 4 user groupname 36 Jan 11 19:58 ..
      -rw-r--r-- 1 user groupname 6309980 Jan 17 12:05 2011.01.zip
      [user@server 00003484]$ unzip 2011.01.zip
      Archive: 2011.01.zip
      extracting: 20110114_140547.jpg
      extracting: 20110114_143554.jpg
      replace 20110114_143554.jpg? [y]es, [n]o, [A]ll, [N]one, [r]ename: y
      extracting: 20110114_143554.jpg
      extracting: 20110114_153458.jpg
      (...bunch of files...)
      extracting: 20110117_170459.jpg
      extracting: 20110117_173458.jpg
      extracting: 20110117_180501.jpg
    Using wget through the apache alias:
      local:~ user$ wget http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip
      --12:38:13-- http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip
      => `2011.01.zip'
      Resolving example.com... ip.ip.ip.ip
      Connecting to example.com|ip.ip.ip.ip|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 6,327,747 (6.0M) [application/zip]
      100% [=====================================================================================================>] 6,327,747 1.03M/s ETA 00:00
      12:38:56 (143.23 KB/s) - `2011.01.zip' saved [6327747/6327747]
      local:~ user$ unzip 2011.01.zip
      Archive: 2011.01.zip
      extracting: 20110114_140547.jpg
      (... same as before...)
      extracting: 20110117_183459.jpg
    Using scp to grab the zip:
      local:~ user$ scp user@server:/export1/amos/images/2011/84/3484/00003484/2011.01.zip .
      2011.01.zip 100% 6179KB 475.3KB/s 00:13
      local:~ user$ unzip 2011.01.zip
      Archive: 2011.01.zip
      extracting: 20110114_140547.jpg
      (...same as before...)
      extracting: 20110117_183459.jpg
    Using Fugu to download 2011.01.zip from /export1/amos/images/2011/84/3484/00003484/ gives images 20110113_090457.jpg through 201100114_010554.jpg
    Using Firefox to download 2011.01.zip from http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip gives images 20110113_090457.jpg through 201100114_010554.jpg
    Using Chrome gives same results as Firefox.
    Relevant section from apache httpd.conf:
      # ScriptAlias: This controls which directories contain server scripts.
      # ScriptAliases are essentially the same as Aliases, except that
      # documents in the realname directory are treated as applications and
      # run by the server when requested rather than as documents sent to the client.
      # The same rules about trailing "/" apply to ScriptAlias directives as to
      # Alias.
      # ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
      Alias /zipfiles/ /export1/amos/images/
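    To pin down which copy is stale, the first comparison worth making is a checksum of the file along each path (the paths, host and URL below are the same placeholders used above):
      # Checksum the copy on disk, the copy apache serves, and the copy fetched over scp
      ssh user@server md5sum /export1/amos/images/2011/84/3484/00003484/2011.01.zip
      wget -qO- http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip | md5sum
      scp -q user@server:/export1/amos/images/2011/84/3484/00003484/2011.01.zip /tmp/2011.01.zip && md5sum /tmp/2011.01.zip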

    Read the article

  • e2fsck / resize2fs problems

    - by BlakBat
    I've got 6 drives (each 1.5T, all same model and firmware revision) that are part of a RAID5 array. The RAID5 makes up an LVM volume group and a logical volume. The latter contains only one ext3 partition. I've recently run:
      e2fsck -f /dev/vg03/lv01 && resize2fs -M /dev/vg03/lv01
    which exited without an error. Now when I try to mount /dev/vg03/lv01 I get:
      EXT3-fs error (device dm-0): ext3_check_descriptors: Block bitmap for group 30533 not in group (block 1000532368)!
      EXT3-fs: group descriptors corrupted!
    How do I get out of this predicament? This is all the info I can currently give you: fdisk -l /dev/sd[cdefgh] shows (correctly) that they are "Linux raid autodetect", but fdisk now shows:
      fdisk -l /dev/md0
      Disk /dev/md0: 7501.5 GB, 7501495664640 bytes ... Disk identifier: 0x00000000
      Disk /dev/md0 doesn't contain a valid partition table (instead of a LVM type partition)
      fdisk -l /dev/vg03/lv01
      Disk /dev/vg03/lv01: 7501.5 GB, 7501491732480 bytes ... Disk identifier: 0x00000000
      Disk /dev/vg03/lv01 doesn't contain a valid partition table (instead of a ext3 type partition)
    I've tried:
      e2fsck -fy /dev/vg03/lv01
      e2fsck 1.41.12 (17-May-2010)
      e2fsck: Group descriptors look bad... trying backup blocks...
      Block bitmap for group 30533 is not in group. (block 1000532368)
      Relocate? yes
      Inode bitmap for group 30533 is not in group. (block 1000532369)
      Relocate? yes
      Pass 1: Checking inodes, blocks, and sizes
      Relocating group 30533's block bitmap to 1000524246...
      Error allocating 1 contiguous block(s) in block group 30533 for inode bitmap: Could not allocate block in ext2 filesystem
      e2fsck: aborted
    Extra information I can give you:
      cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active (auto-read-only) raid5 sdg1[0] sdh1[5] sdf1[4] sde1[3] sdc1[2] sdd1[1]
      7325679360 blocks level 5, 128k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 1/175 pages [4KB], 4096KB chunk
      unused devices:
    Lastly, all smartctl tests (short and extended) showed no errors on any of the disks. Should I try to resize2fs to grow /dev/vg03/lv01 and redo an e2fsck? Should I cfdisk /dev/md0 and /dev/vg03/lv01 back to their real types? Thanks in advance for all and any help.
    2011-09-20 UPDATE: I issued the following commands and was able to remount the partition, but by comparing the size (df) before and after, it seems that 1 TB of data has gone missing. By checking the MD5SUMS (from an old backup) of some files against the "same" files from the remounted partition, some errors have been detected. Commands issued to remount the partition were:
      dumpe2fs /dev/vg03/lv01
      Block count: 1000491435
      Block size: 4096
      tune2fs -O ^has_journal /dev/vg03/lv01
      resize2fs -p /dev/vg03/lv01
      dumpe2fs /dev/vg03/lv01
      Block count: 1831418880
      Block size: 4096
      mount -o ro,noatime /dev/vg03/lv01 /mnt/raid
    OK... but files have been damaged / gone missing.
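    For what it's worth, the step worth trying before anything else destructive is pointing e2fsck at one of the backup superblocks instead of the primary. This is only a sketch: the 32768 location is just the usual first backup for a 4k-block filesystem, the real list comes from the dry run, and on data this fragile it is safer to work on an image or have backups first.
      # Print the backup superblock locations without writing anything (-n = dry run)
      mke2fs -n /dev/vg03/lv01
      # Then retry the check using one of the listed backups and the 4096-byte block size
      e2fsck -b 32768 -B 4096 /dev/vg03/lv01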

    Read the article

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that are failing to read due to potential mechanical issues, such as a bad disk head, bad motor, or some other issue that is causing the hard drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time—from 2-10 minutes or more (when using rsync, before it times out). Speed seems to vary wildly, and it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before it gets stuck again. The drive is also very quiet with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds that are normally associated with HDD death. Someone suggested that the problems sounded like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data with success. Sounds plausible, but I digress... Anyway, the problem with rsync is that it seems to have no decent error handling support. Obviously, it wasn't meant for use in recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovery of deleted files or messed up partitions, rather than copying files off dying hard drives. Deleted file recovery is not what I need, obviously, so perhaps you can understand my disappointment in not being able to find what I'm after yet. Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does. I am only interested in rescuing just specific files and directories. Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and then allows me to attempt recovery of those files with errors in a later run (with a slightly altered command, of course), perhaps even offering an option to specify the number of retry attempts ...just like how ddrescue works with blocks, only I want a utility that works with specific files/directories like rsync does. So am I daydreaming here, or does something out there exist that can do this? Or, maybe even a way to make rsync or ddrescue work in such a way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", and can skip files with errors in the initial run, and try/retry those errors again later. So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible though. Anyway, here's what I've been using up till now: rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/
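    Since GNU ddrescue will also accept ordinary files as its input and output, the kind of wrapper I have in mind is a per-file loop that keeps a separate map (log) file for each file, so files that error out can be retried later (e.g. with -r3) without re-reading the good parts. This is only a rough sketch; the source, destination and map directories are placeholders, and it assumes filenames without embedded newlines:
      #!/bin/sh
      # Copy each file with ddrescue, skipping the slow scraping phase on the first pass (-n)
      SRC=/mnt/dying; DST=/mnt/rescue; MAP=/mnt/rescue-maps
      cd "$SRC" || exit 1
      find . -type d -exec mkdir -p "$DST/{}" "$MAP/{}" \;
      find . -type f | while read -r f; do
          ddrescue -n "$SRC/$f" "$DST/$f" "$MAP/$f.map"
      done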

    Read the article

  • What is the best server or IP address to use for prolonged testing?

    - by eldorel
    I usually run uptime/latency tests against (and from) two servers that we own at different sites, and until recently I've used the Google DNS servers as a control group. However, I've realized there is a potential problem with monitoring latency over extended periods of time. Almost all of the major service providers are using ANYCAST. For short tests this doesn't matter, but I need to run a set of tests for at least a week to try and catch an intermittent problem, and a change in the anycast priority while trying to test latency will cause the latency values for that server to change accordingly. Since I'm submitting graphs of this data to the ISP, I need to avoid/account for as many variables as possible. Spikes in the data for only one of the tested servers will only cause headaches. So can anyone recommend servers that:
    1. are not using anycast
    2. are owned by an entity that has a good uptime reputation (so they can't claim that the problem is server-side)
    3. will respond to ICMP requests
    4. have an available service that runs on TCP/UDP (HTTP or DNS preferably)
    5. won't consider an automated request every 10 minutes to be abuse
    6. are accessible from anywhere in the world
    7. are not local to the ISP (consider this an investigation of a hostile party)
    Thanks in advance. Edit: added #6 and #7 above.
    More info: I am attempting to demonstrate a network problem for an entire node of our local ISP's network. They are actively blaming the issue on the equipment installed at the customer sites (our backup site is one of these), and refuse to escalate the problem (even though 2 of these businesses have ISP-provided modems, and all of us have completely different routers/services running). I am already quite familiar with the need to test an ISP-controlled IP, but they are actively dropping all packets targeted at gateway IP addresses and are only passing traffic addressed beyond the gateways. So to demonstrate the issue, I am sending packets to other systems in the same node, systems one hop away from the affected node, and systems completely outside the network. Unfortunately, all of the systems I have currently are either administered directly by myself, or by people who are biased enough to assist me. I need to have several systems included in the trace/log/graphs that are 100% not in the control of either myself or the ISP so that the graphs have a stable/unbiased control group. These requirements are straight from legal; I'm just trying to make sure that everything that could be argued to invalidate the data is already covered.
    In summary: I need to be able to show TCP/UDP/ICMP as 3 separate data points, and I need to be able to show the connections inside the local node, from the local node to another nearby node, from those 2 nodes to the internet, and through the internet to both verifiable servers and a control group that I have no control over whatsoever. Again, Google/OpenDNS/Yahoo/MSN/Facebook/etc. all use anycast, which throws the numbers off every time the anycast caches expire, so I need suggestions of an IP or server that is available for this type of testing. I was hoping someone knew of a system run by someone such as ISC or ICANN, or perhaps even a .gov server (FCC or NSA maybe?) set up for this type of testing. Thanks again.
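    For reference, the probe I intend to run from cron every 10 minutes looks roughly like this (a sketch only: the target IPs are placeholders for whatever non-anycast hosts get suggested, and each target needs to answer HTTP and DNS for the TCP and UDP columns to mean anything):
      #!/bin/sh
      # One log line per target per run: ICMP avg RTT, TCP connect time, UDP DNS query time
      TARGETS="192.0.2.10 198.51.100.20"   # placeholder unicast addresses
      for t in $TARGETS; do
          ts=$(date '+%F %T')
          icmp=$(ping -c 3 -q "$t" | awk -F/ '/rtt|round-trip/ {print $5}')
          tcp=$(curl -s -o /dev/null -w '%{time_connect}' "http://$t/")
          udp=$(dig +tries=1 +time=2 @"$t" example.com | awk '/Query time/ {print $4}')
          echo "$ts $t icmp=${icmp}ms tcp=${tcp}s udp=${udp}ms" >> "$HOME/latency.log"
      done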

    Read the article

  • Alias wordpress folder from within another website

    - by Bretticus
    I have a little dilemma. I wrote a custom PHP MVC framework and built a CMS on top of it. I decided to give nginx+fpm a whirl too. Which is the root of my dilemma. I was asked to incorporate a wordpress blog into my website (yah.) It has much content and it's not feasible in the short amount of time I have to bring all the content into my CMS. Because of using Apache for years, I'm, admittedly, a little lost using nginx. My website has the file path: /opt/directories/mysite/public/ The wordpress files are located at: /opt/directories/mysite/news/ I know I just need to setup location(s) that targets /news[/*] and then forces all matching URI's to the index.php within. Can someone point me in the right direction perhaps? My configuration is below: server { listen 80; server_name staging.mysite.com index index.php; root /opt/directories/mysite/public; access_log /var/log/nginx/mysite/access.log; error_log /var/log/nginx/mysite/error.log; add_header X-NodeName directory01; location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location / { try_files $uri $uri/ /index.php?route=$uri&$args; } location ~ /news { try_files $uri $uri/ @news; } location @news { fastcgi_pass unix:/tmp/php-fpm.sock; fastcgi_split_path_info ^(/news)(/.*)$; fastcgi_param SCRIPT_FILENAME /opt/directories/mysite/news/index.php; fastcgi_param PATH_INFO $fastcgi_path_info; } include fastcgi_params; include php.conf; location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { access_log off; expires 30d; } ## Disable viewing .htaccess & .htpassword location ~ /\.ht { deny all; } } My php.conf file: location ~ \.php { fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_pass unix:/tmp/php-fpm.sock; # If you must use PATH_INFO and PATH_TRANSLATED then add # the following within your location block above # (make sure $ does not exist after \.php or /index.php/some/path/ will not match): #fastcgi_split_path_info ^(.+\.php)(/.+)$; #fastcgi_param PATH_INFO $fastcgi_path_info; #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; } fastcgi_params file: fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; Thanks, in large part, to @Kromey, I have adjusted my location /news/ but I am still not getting the desired result. I was able to learn to tack a ~ my /news location as I discovered that my php location was being matched first. With this setup, I now get a 200 status, but the page is blank. Any ideas?
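    In the meantime, a rough check that might narrow down the blank page is to hit the blog URL from the shell and watch what nginx/php-fpm log at the same moment; a blank 200 from WordPress behind fastcgi is often (though not always) a sign the request reached PHP without the fastcgi params I expected. The host name and log path below are the ones from my config above:
      # What does /news/ actually return, and what gets logged while it does?
      curl -s -o /dev/null -w 'status=%{http_code} bytes=%{size_download}\n' http://staging.mysite.com/news/
      tail -n 20 /var/log/nginx/mysite/error.log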

    Read the article

  • Capistrano + Nginx + Passenger = 403

    - by slimchrisp
    I asked this over at stackoverflow as well, but still haven't received any answers that have helped me to solve this problem. I have spent almost a week at this point trying to solve the issue, and I'm just not making any headway. It seems that this issue is pretty common, but none of the solutions I found online work for me. A buddy of mine is actually creating the same setup, and he is having the same issue. After a few days stuck with the 403 error I started over using this tutorial: http://blog.ninjahideout.com/posts/a-guide-to-a-nginx-passenger-and-rvm-server I had hoped starting from scratch using this tutorial would work, but no dice. Either way, if you view the tutorial you can see what steps I have taken. Here is essentially what I have going on. I have a VPS account on linode.com Server OS is Ubuntu 10.04 Local OS (shouldn't matter, but just so you know) used to deploy with Capistrano is Snow Leopard 10.6.6 I use RVM on the server. Version is 1.2.2 I was previously on ruby-1.9.2-p0 [ i386 ], but per the tutorial listed above I switched to ree-1.8.7-2010.02 [ i386 ]. Running 'which ruby' from the command line verifies that I am using 1.8.7 with the following output: /usr/local/rvm/rubies/ree-1.8.7-2010.02/bin/ruby passenger -v prints the following: Phusion Passenger version 3.0.2 Running 'nginx -v' gives me a message that the command nginx could not be found. The server is definitely there and running as I can use nginx to serve static files, but this could have something to do with my problem. I have two users dealing with the install. root which I used to install everything, and deployer which is a user I created specifically to for deploying my applications My web app directory is in the deployer user's home directory as follows: /home/deployer/webapps/mysite.com/public Per Capistrano default deploy, a symbolic link called current is created in the public folder, and points to /home/deployer/webapps/mysite.com/public/releases/most_current_release I have chmodded the deployer directory recursively to 777 /opt/nginx permissions: rwxr-xr-x /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2 permissions: rwxrwsrwx My nginx config file has gone through just short of eternity variations, but currently looks like this: ================================================================================== worker_processes 1; events { worker_connections 1024; } http { passenger_root /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2; passenger_ruby /usr/local/rvm/bin/passenger_ruby; include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { # listen *:80; server_name mysite.com www.mysite.com; root /home/deployer/webapps/mysite.com/public/current; passenger_enabled on; passenger_friendly_error_pages on; access_log logs/mysite.com/server.log; error_log logs/mysite.com/error.log info; error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } } ================================================================================== I bounce nginx, hit the site, and boom. 403, and logs say directory index of /home/deployer... is forbidden As others with a similar problem have said, you can drop an index.html into the public/releases/current_release and it will render. But rails no worky. That's basically it. At this point I have just about completely exhausted every possible solution attempt I can think of. 
    I am a programmer and definitely not a sysadmin, so I am 99% sure this has something to do with permissions that I have hosed, but for the life of me I just can't figure out where. If anyone can help I would really, really appreciate it. If there are any specific permission things you want me to check (i.e. groups/permissions), can you please include the commands to do so as well? Hopefully this will help others in the future who read this post. Let me know if there is any other information I can provide, and thanks in advance!!!
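    In case it helps anyone spot the hosed permission, here is a sketch of the kind of check I can run and post results from ("nobody" is just a guess at the user the nginx/Passenger workers run as; substitute whatever ps shows):
      # Every directory from / down to the docroot needs the execute (x) bit for the worker user
      namei -l /home/deployer/webapps/mysite.com/public/current
      # Try reading the docroot as the worker user, to reproduce the 403 outside nginx
      sudo -u nobody ls -l /home/deployer/webapps/mysite.com/public/current/ | head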

    Read the article

  • Vim: Context sensitive code completion for PHP

    - by eddy147
    Vim gives me too many options when I use code completion. In a class, when I type $class- it gives me about a zillion options: not only from the class itself but also from PHP and all globals ever created; in short, a mess. I only want to have the options from the class itself (or the parent class it extends from), i.e. context- or scope-sensitive code completion, just like NetBeans for example. How can I do that? My current configuration is this: I am using ctags, and created 1 ctags file for our (big) application in the root. This is the .ctags file I used to create the ctags file:
      -R
      -h ".php"
      --exclude=.svn
      --languages=+PHP,-JavaScript
      --tag-relative=yes
      --regex-PHP=/abstract\s+class\s+([^ ]+)/\1/c/
      --regex-PHP=/interface\s+([^ ]+)/\1/c/
      --regex-PHP=/(public\s+|static\s+|protected\s+|private\s+)\$([^ \t=]+)/\2/p/
      --regex-PHP=/const\s+([^ \t=]+)/\1/d/
      --regex-PHP=/final\s+(public\s+|static\s+|abstract\s+|protected\s+|private\s+)function\s+\&?\s*([^ (]+)/\2/f/
      --PHP-kinds=+cdf
      --fields=+iaS
    This is the .vimrc file:
      " autocomplete funcs and identifiers for languages
      autocmd FileType php set omnifunc=phpcomplete#CompletePHP
      autocmd FileType python set omnifunc=pythoncomplete#Complete
      autocmd FileType javascript set omnifunc=javascriptcomplete#CompleteJS
      autocmd FileType html set omnifunc=htmlcomplete#CompleteTags
      autocmd FileType css set omnifunc=csscomplete#CompleteCSS
      autocmd FileType xml set omnifunc=xmlcomplete#CompleteTags
      autocmd FileType php set omnifunc=phpcomplete#CompletePHP
      autocmd FileType c set omnifunc=ccomplete#Complete
      " exuberant ctags
      " the magic is the ';' at end. it will make vim tags file search go up from current directory until it finds one.
      set tags=projectrootdir/tags;
      map <F8> :!ctags
      " TagList
      " :tag getUser => Jump to getUser method
      " :tn (or tnext) => go to next search result
      " :tp (or tprev) => go to previous search result
      " :ts (or tselect) => List the current tags
      " => Go back to last tag location
      " +Left click => Go to definition of a method
      " More info:
      " http://vimdoc.sourceforge.net/htmldoc/tagsrch.html (official documentation)
      " http://www.vim.org/tips/tip.php?tip_id=94 (a vim tip)
      let Tlist_Ctags_Cmd = "~/bin/ctags"
      let Tlist_WinWidth = 50
      map <F4> :TlistToggle<cr>
      "see http://vim.wikia.com/wiki/Make_Vim_completion_popup_menu_work_just_like_in_an_IDE
      " will change the 'completeopt' option so that Vim's popup menu doesn't select the first completion item, but rather just inserts the longest common text of all matches
      :set completeopt=longest,menuone
      " will change the behavior of the <Enter> key when the popup menu is visible. In that case the Enter key will simply select the highlighted menu item, just as <C-Y> does
      :inoremap <expr> <CR> pumvisible() ? "\<C-y>" : "\<C-g>u\<CR>"
      " inoremap <expr> <C-n> pumvisible() ? '<C-n>' :
      \ '<C-n><C-r>=pumvisible() ? "\<lt>Down>" : ""<CR>'
      inoremap <expr> <M-,> pumvisible() ? '<C-n>' :
      \ '<C-x><C-o><C-n><C-p><C-r>=pumvisible() ? "\<lt>Down>" : ""<CR>'

    Read the article

  • WNDR3700 Router + Cisco SG200-08 + LACP + Dual Uplink

    - by kobaltz
    Background: I have a storage server that has several virtual machine images stored on it. I would store them locally, but I have limited space on my desktop (using SSD storage). I would like to increase the bandwidth between the desktop and the storage server by using two NICs on each computer. My original configuration allowed about 55MBps between the desktop and storage server. This storage server also has several TBs of documents, pictures, movies, vms, and ISO/programs. The storage server has 8 1.5TB hard drives in a RAID 10 configuration with a hardware RAID controller. The benchmarks on the RAID 10 are about 300MBps.
    Configuration: In short, I am trying to bridge my switch and router. The switch is a small 8 port Cisco smart switch that supports 802.3ad LACP. I have two computers plugged into the switch, each with 2 Intel Gigabit NICs. The first computer is a Windows 7 machine that has the Intel ANS software installed. I have LACP configured with the computer and now show 3 NICs (2 Physical + 1 TEAM Virtual @ 2Gbps). It looks like this computer is configured correctly. I trunked the two ports that this computer is plugged into with the switch's web interface. The second computer is a homebrew storage box running debian. I also have the bonding enabled on this machine and the switch configured with LACP. Without having the WNDR3700 router in the picture yet, I am able to communicate between the Windows 7 machine and the debian box since they both have static IP addresses. With LACP enabled on both machines I am getting about 106-108MBps speeds.
    Issue: I plug in a network cable from the switch into the router and enable DHCP on the desktop. I saw no need to have a static address on the desktop. My transfer rates are still 106MBps-108MBps. While this is still a boost, I am trying to figure out how to get about 140-180MBps. I am thinking that I need to increase the bandwidth from the router to the switch. My switch allows 4 groups for port trunking. I plugged in a second network cable from the router to the switch. My question is: what is the proper way to fix this issue? Should I port trunk the two ports that are going from the switch to the router? Keep in mind that the router is a WNDR3700 and I am unsure whether or not it supports LACP. I do have OpenWRT installed on the router, but it still wasn't clear in any documentation that I found if it supported 802.3ad LACP standards. I am also wondering if there needs to be anything changed within the Cisco settings.
    [Edit] - Corrected some numbers, wasn't really paying attention. It looks like the speed, even though two NICs are bonded with LACP, is still capped at the max bandwidth of one port. Is there a way to configure the switch so that I can increase this bandwidth? Also, on the storage server, I had a couple of extra NICs laying around and threw them on there as well.
    Another EDIT and More Findings: I happened to look at the traffic of each individual NIC and think that I see the problem. I tested with a simple transfer of a 4GB file. I noticed that only one of the NICs was taking the load of the traffic. I then copied the file back to the Storage Server and noticed that the other NIC was sending out the traffic. I have 802.3ad LACP enabled on the two NICs and I see that it gets enabled dynamically on the switch's interface. Should I be using Static Link Aggregation?
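    On the Debian side, the check I still need to post is the bond mode and transmit hash policy, since with 802.3ad the hash decides whether different connections can spread across the links (a single TCP stream normally cannot). This is just a sketch, and bond0 is an assumed interface name:
      # Confirm the bond is in 802.3ad mode and see which transmit hash policy is active
      grep -E 'Bonding Mode|Transmit Hash|MII Status' /proc/net/bonding/bond0
      # layer3+4 hashes on ports as well as IPs, so separate connections can use different
      # NICs, but one stream still stays on one link; this may need to be set in
      # /etc/network/interfaces (bond-xmit-hash-policy) rather than changed live
      echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy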

    Read the article

  • My 2009 MacBook Logic board failed - options to proceed and how difficult?

    - by user181061
    Scannerz just gave my MacBook logic board a big fat F! I upgraded from Snow Leopard to Mountain Lion about 3 weeks ago. The system was running short of memory so I upgraded it. The system was running fine for about 2 weeks. Yesterday the thing started acting erratically. A lot of spinning beach balls, delays, and then some errors saying files couldn't be read from or written to the drive. I figured the drive was going because the system is over 3 years old. I ran Scannerz on it and it indicated a lot of errors and irregularities. I rescanned it in cursory mode, and none of them were repeatable, just showing up all over the place in different regions of the scan. I went through the docs and they implied either an I/O cable was bad, a connection was damaged, or the logic board was bad. I tossed on my backup of Snow Leopard that I cloned from the original hard drive because I figured Mountain Lion was to blame and booted from the USB drive with the clone on it. It wasn't. I performed scans on every single port, and errors and irregularities that couldn't be repeated were showing up on every single one of them. I then, for kicks, put a CD into the CD player. Scannerz doesn't test optical drives but I figured surely that will work. No it won't. More spinning beach balls and messages telling me it can't be read. It was working fine 3 days ago. I know a lot of people don't like MacBooks, but mine's been great, at least until now. It was working great even with Mountain Lion after the upgrade. The system is a mid-2009 MacBook. In my opinion, it's a complete waste to toss this system. The display is too good, the keyboard works great, and it still looks good, plus this type of MacBook still uses the FireWire 400 port and I use that for Time Machine backups. I've tried reseating the RAM; it didn't do anything. I shut the system down and put in the old RAM, booted to Snow Leopard, and the problems persist.
    Here are my questions:
    1. The Scannerz documentation somewhere said something about the Airport card not being seated properly, but when I go to iFixit, it's apparent, at least I think it's apparent, that this isn't a slot type Airport card that the user can easily install or remove. If the cables or connections to the Airport card are bad, could they be causing this problem? How about any other connections that can be intermittent, failing or erratic?
    2. Any type of resets that I could possibly do to get rid of this?
    3. For any of those that have replaced a logic board on a MacBook, if this really is the culprit, are there any "gotchas" I need to be aware of?
    As an FYI, I replaced the hard drive on an old iBook @500MHz that I had a long time ago, and I replaced the drive on a 1.33GHz PowerBook about 6 years ago. You have to be careful, but using some of the info on web sites like iFixit it's not that hard. Time consuming, but not that hard. The Intel-based MacBooks to me look like they're easier to service than either of those. I'm thinking about getting a unit off of eBay that matches mine but has something else wrong with it, like a busted display. I REFUSE to buy a new system. A guy at my office has a 2007 Mac Pro and he can't upgrade to Mountain Lion because his system is "obsoleted." That's ridiculous. If you pay nearly $7,500 for a system it shouldn't be trash just because Apple decides they don't have enough money (sorry for the soap box, but it's true, IMO!) Any input is appreciated.

    Read the article

  • Optimise Apache for EC2 micro instance

    - by Shiyu Sekam
    I'm running apache2 on an EC2 micro instance with ~600 MB RAM. The instance was running for almost a year without problems, but in the last few weeks it just keeps crashing because the server reaches MaxClients. The server basically runs a few websites: one WordPress blog (not often used), the company website (most used) and 2 small sites, which are just internal. The database for the blog runs on RDS, so there's no MySQL running on this web server. When I came to the company, the server was already set up and is running apache + mod_php + prefork. We want to migrate that in the future to nginx + php-fpm, but it still needs further testing. So for now I have to stick with the old setup. I also use CloudFlare DDOS protection in front of the server, because it was attacked a couple of times in the last few weeks. My company doesn't want to pay money for a better web server at this point, so I have to stick with the micro instance also. Additionally the code for the website we run is really bad and slow, and sometimes a single page load can take up to 15 seconds. The whole website is dynamic and written in PHP, so caching isn't really an option here. It's a customized search for users. I've already turned off KeepAlive, which improved the performance a little bit. My prefork config looks like the following:
      StartServers 2
      MinSpareServers 2
      MaxSpareServers 5
      ServerLimit 10
      MaxClients 10
      MaxRequestsPerChild 100
    The server just becomes unresponsive after running for a while, and I've run the following command to see how many connections there are:
      netstat | grep http | wc -l
      75
    Trying to restart apache helps for a short moment, but after a while the apache process(es) become unresponsive again. I have the following modules enabled (output of apache2ctl -M): Loaded Modules: core_module (static) log_config_module (static) logio_module (static) version_module (static) mpm_prefork_module (static) http_module (static) so_module (static) alias_module (shared) authz_host_module (shared) deflate_module (shared) dir_module (shared) expires_module (shared) mime_module (shared) negotiation_module (shared) php5_module (shared) rewrite_module (shared) setenvif_module (shared) ssl_module (shared) status_module (shared) Syntax OK
    apache2.conf:
      # Security
      ServerTokens OS
      ServerSignature On
      TraceEnable On
      ServerName "web.example.com"
      ServerRoot "/etc/apache2"
      PidFile ${APACHE_PID_FILE}
      Timeout 30
      KeepAlive off
      User www-data
      Group www-data
      AccessFileName .htaccess
      <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files>
      <Directory /> Options FollowSymLinks AllowOverride None </Directory>
      DefaultType none
      HostnameLookups Off
      ErrorLog /var/log/apache2/error.log
      LogLevel warn
      EnableSendfile On
      #Listen 80
      Include /etc/apache2/mods-enabled/*.load
      Include /etc/apache2/mods-enabled/*.conf
      Include /etc/apache2/ports.conf
      LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
      LogFormat "%h %l %u %t \"%r\" %>s %b" common
      LogFormat "%{Referer}i -> %U" referer
      LogFormat "%{User-agent}i" agent
      Include /etc/apache2/conf.d/*.conf
      Include /etc/apache2/sites-enabled/*.conf
    Vhost of main site:
      <VirtualHost *:80>
      ServerName www.example.com
      ## Vhost docroot
      DocumentRoot /srv/www/jenkins/Web
      ## Directories, there should at least be a declaration for /srv/www/jenkins/Web
      <Directory /srv/www/jenkins/Web> AllowOverride All Order allow,deny Allow from all </Directory>
      ## Load additional static includes
      ## Logging
      ErrorLog /var/log/apache2/www.example.com.error.log
      LogLevel warn
      ServerSignature Off
      CustomLog /var/log/apache2/www.example.com.access.log combined
      ## Rewrite rules
      RewriteEngine On
      RewriteCond %{HTTP_HOST} !^www.example.com$
      RewriteRule ^.*$ http://www.example.com%{REQUEST_URI} [R=301,L]
      ## Server aliases
      ServerAlias www.example.invalid
      ServerAlias example.com
      ## Custom fragment
      <Location /srv/www/jenkins/Web/library> Order Deny,Allow Deny from all </Location>
      <Files ~ "^\.(.+)"> Order deny,allow deny from all </Files>
      </VirtualHost>
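    Before tuning further, the number I still need is how big an average apache child actually is, since MaxClients has to fit into whatever is left of the ~600 MB after everything else. A rough check along these lines (assuming the processes are named apache2 on this install):
      # Average resident size of the apache2 children, and current free memory
      ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {if (n) printf "%d processes, avg %.1f MB each\n", n, sum/n/1024}'
      free -m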

    Read the article

  • Mod_Rewrite w Apache mod_jrun22.so & ColdFusion 9 on cPanel

    - by Eddie B
    How can I utilize mod_rewrite at either the httpd.conf level or per-directory level when mod_jrun22 seems to have short-stopped the rewrite process for ColdFusion pages? I have a ColdFusion 9 based site running on Centos 5.8 w cPanel. cPanel uses EasyApache 3 to manage virtual host containers and as such the conf for mod_jrun22.so, /usr/local/apache/conf/includes/pre_main_global.conf, is loaded prior to the main httpd.conf with the domain specific rules for the container. My assertion is that .cfm pages are failing to be rewritten due to the mod_jk22.so module having priority in the directive chain. To note, I also have a WordPress blog in the site where the rewrites appear to be working fine. For example the following code to remove the index file works fine for php and fails with cfm ... .htaccess under /blog/ : This works Options -Indexes -Multiviews <IfModule mod_rewrite.c> RewriteEngine On RewriteBase /blog/ RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /blog/index.php [L] </IfModule> .htaccess under / : This does not work as expected. Apache serves the page. ASSERT: This would redirect to domain.com/ without index.cfm Options -Indexes -Multiviews <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.cfm$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.cfm [L] </IfModule> .htaccess under / : This works I'm presuming this is working because the redirect is to another .cfm page and a 404 handler in Application.cfc ... Options -Indexes -Multiviews <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^.*\.cfm$ - [L] RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{ENV:REDIRECT_STATUS} =404 RewriteRule . /404.cfm$ [L] </IfModule> I've attempted numerous different methods to rewrite .cfm urls ... Adding [PT], [L], [R], [NS], Moving the script to Directory blocks under httpd.conf --- all with the same results ... either the rewrite doesn't work or Apache crashes in an endless loop ... Any help would be greatly appreciated. Below is a single-visit rewrite log snippet for a request to /index.cfm ... the pass-through is taking effect before the rewrite ... cat rewrite_dump_mod | grep index.cfm [perdir /home/foo/public_html/] strip per-dir prefix: /home/foo/public_html/index.cfm -> index.cfm [perdir /home/foo/public_html/] applying pattern '^.*\.cfm$' to uri 'index.cfm' [perdir /home/foo/public_html/] pass through /home/foo/public_html/index.cfm [perdir /home/foo/public_html/] strip per-dir prefix: /home/foo/public_html/index.cfm -> index.cfm [perdir /home/foo/public_html/] applying pattern '^.*\.cfm$' to uri 'index.cfm' [perdir /home/foo/public_html/] pass through /home/foo/public_html/index.cfm * UPDATE * I've managed to figure this out ... it took a while ... Options -Indexes -Multiviews +FollowSymLinks <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteCond %{HTTP_HOST} !^www\. RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] RewriteCond %{THE_REQUEST} ^.*/index\.cfm RewriteRule ^(.*)index.cfm http://%{HTTP_HOST}/$1 [R=301,L] </IfModule>
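    For anyone else landing here, a quick way to verify a rule set like the final one above from the shell would be something like this (www.domain.com is only a placeholder for the real host); each request should show a single 301 with the expected Location, not a redirect loop:
      # Expect a 301 to the www host, and a 301 stripping /index.cfm back to /
      curl -sI http://domain.com/ | grep -iE '^(HTTP|Location)'
      curl -sI http://www.domain.com/index.cfm | grep -iE '^(HTTP|Location)'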

    Read the article

  • USB connection is unstable with Nexus S 2.3.4 on AMD 64 running 64-bit Windows 7, but works with 32-bit Windows Vista

    - by Mike
    The USB connection is unstable with Nexus S (Android 2.3.4) on AMD 64 running 64-bit Windows 7, but it works with 32-bit Windows Vista.
    Problem Description: On the 64-bit Windows 7 machine my Nexus S appears to connect, but then it disconnects moments later. Neither accessing USB storage nor loading an Android application package file (APK) using the Android Debug Bridge (ADB) works. On 32-bit Windows Vista using the same USB cable, USB storage works. I haven't tried the ADB on 32-bit Windows Vista.
    Reproduction steps for USB storage: (I have provided the reproduction steps for USB storage and not ADB, because if one isn't working, then the other isn't working either, and the USB storage reproduction steps are shorter to document.)
    1. Connect the USB cable to the Nexus S and my Windows 7 machine. Effect: The "USB Mass Storage, USB Connected" dialog appears with the button "Turn on USB storage."
    2. Click "Turn on USB Storage". Effect: The "working circle" appears. A dialog briefly appears saying "USB storage in use," then it either returns me to Step 1 (now that I am running 2.3.4) or is replaced with the Nexus S's application homepage (while I was running 2.3.3). I'm not sure if the version matters, but I mention it for completeness.
    On the 32-bit Windows Vista machine the connection is stable. I am able to navigate through the Nexus S file system and create, read, update, and delete files, etc. I haven't tried connecting with the ADB.
    Troubleshooting summary. Tried and failed:
    - Uninstalling and reinstalling the Android USB drivers, including removing the files.
    - Uninstalling my custom software
    - Pulling the Nexus S's battery
    - Restarting the Nexus S
    - Restarting 64-bit Windows 7
    - Changing USB ports on the 64-bit Windows 7 box
    - Compared the dates and file size on the DLLs in my google-usb_driver\amd64 directory and the windows\System32 directory. They match. The sizes for the google-usb_driver\i386 directory do not match (expected).
    - Turning off Debugging mode on the Nexus S does not resolve the problem.
    - Searching Google.
    Tried and succeeded:
    - Connecting to another machine (Windows Vista) using the same USB cable and Nexus S phone.
    Troubleshooting observations: I notice that uninstalling the device drivers and deleting the files, then reinstalling the drivers, then rebooting 64-bit Windows 7, then unplugging the Nexus S, then plugging it back in occasionally helps for a short amount of time (minutes to hours, not days). When it is working, I can both access the Nexus S's drive and load/test applications using the ADB. I have observed some wonky behavior in the Device Manager that I haven't tracked down. Sometimes the black Nexus S image appears in the list of devices. Sometimes the image displays as a computer with a green ISA card. Sometimes it neither appears on the top level of devices nor under "other devices," but it does appear under "disk drives" as "Android UMS Composite USB Device."
    System configuration: The Nexus S is running Android OS 2.3.4; "Settings\about phone\System updates" indicates that it is up to date as of May 21st 2011. Both 32-bit Windows Vista and 64-bit Windows 7 are up to date. The Windows Vista system is running on an Intel 32-bit processor. Windows 7 is running on an AMD 64-bit processor. I have done Android development on both systems, but I usually develop on the 64-bit Windows 7 machine.
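    One more data point worth collecting is what ADB itself sees while the connection flaps. A rough sketch (adb is the binary from the SDK platform-tools; "sleep" is the *nix spelling, "timeout /t 5" is the Windows equivalent):
      # Restart the adb server, then list devices twice a few seconds apart;
      # if the entry disappears or flips to "offline", the link itself is dropping
      adb kill-server
      adb start-server
      adb devices
      sleep 5
      adb devices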

    Read the article

  • How broken is routing strategy that causes a martian packet (so far only) during tracepath?

    - by lkraav
    I believe I've achieved a table that routes packets from and to eth1/192.168.3.x through 192.168.3.1, and packets from and to eth0/192.168.1.x through 192.168.1.1 (helpful source). Question: when doing tracepath from 192.168.3.20 (from within vserver), I'm getting kernel: [318535.927489] martian source 192.168.3.20 from 212.47.223.33, on dev eth0 at or near the target IP, while intermediary hops go without (log below). I don't understand why this packet is arriving on eth0, instead of eth1, even after reading this: Note that you may see packets from non-routable IP addresses when running the traceroute or tracepath commands. While packets cannot be routed to these routers, packets sent between 2 routers only need to know the address of the next hop within the local networks, which could be a non-routable address. Can someone explain that paragraph in human language? Based on short initial trials so far, everything else seems to work without causing martians. Is this contained to the nature of tracepath operation or do I have some other bigger routing problem that will cause work traffic breakage? Side note: is it possible to inspect martian packet with tcpdump or wireshark or anything of the sort? I'm have not been able to get it to show up on my own. vserver-20 / # tracepath -n 212.47.223.33 1: 192.168.3.2 0.064ms pmtu 1500 1: 192.168.3.1 1.076ms 1: 192.168.3.1 1.259ms 2: 90.191.8.2 1.908ms 3: 90.190.134.194 2.595ms 4: 194.126.123.94 2.136ms asymm 5 5: 195.250.170.22 2.266ms asymm 6 6: 212.47.201.86 2.390ms asymm 7 7: no reply 8: no reply 9: no reply ^C Host routing: $ sudo ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo 2: sit0: <NOARP> mtu 1480 qdisc noop state DOWN link/sit 0.0.0.0 brd 0.0.0.0 3: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:24:1d:de:b3:5d brd ff:ff:ff:ff:ff:ff inet 192.168.1.2/24 scope global eth0 4: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:46:46:a3:6a brd ff:ff:ff:ff:ff:ff inet 192.168.3.2/27 scope global eth1 inet 192.168.3.20/27 brd 192.168.3.31 scope global secondary eth1 # linux-vserver instance $ sudo ip route default via 192.168.1.1 dev eth0 metric 3 unreachable 127.0.0.0/8 scope host 192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.2 192.168.3.0/27 dev eth1 proto kernel scope link src 192.168.3.2 $ sudo ip rule 0: from all lookup local 32764: from all to 192.168.3.0/27 lookup dmz 32765: from 192.168.3.0/27 lookup dmz 32766: from all lookup main 32767: from all lookup default $ sudo ip route show table dmz default via 192.168.3.1 dev eth1 metric 4 192.168.3.0/27 dev eth1 scope link metric 4 Gateway routing # ip route 10.24.0.2 dev tun0 proto kernel scope link src 10.24.0.1 10.24.0.0/24 via 10.24.0.2 dev tun0 192.168.3.0/24 dev br-dmz proto kernel scope link src 192.168.3.1 192.168.1.0/24 dev br-lan proto kernel scope link src 192.168.1.1 $ISP_NET/23 dev eth0.1 proto kernel scope link src $WAN_IP default via $ISP_GW dev eth0.1 Additional background Options for non-virtualized network interface isolation?
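    Regarding the side note about inspecting the martian with tcpdump: the sketch I plan to try is to have the kernel log every martian it sees while capturing the suspect traffic on eth0 during a tracepath run (the two addresses below are the ones from the log above):
      # Log martians on all interfaces (shows up in dmesg/kernel log with interface and source)
      sysctl -w net.ipv4.conf.all.log_martians=1
      # Capture the offending replies arriving on eth0 while tracepath runs from the vserver
      tcpdump -ni eth0 host 212.47.223.33 and host 192.168.3.20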

    Read the article

  • MSBuild: convert relative path in imported project to absolute path.

    - by Ergwun
    Short version: I have an MSBuild project that imports another project. A property in the imported project holds a relative path that is relative to the location of the imported project. How do I convert this relative path to an absolute one? I've tried the ConvertToAbsolutePath task, but that makes it relative to the importing project's location.

    Long version: I'm trying out Robert Koritnik's MSBuild task for integrating NUnit output into Visual Studio (see this other SO question for a link). Since I like to have all my tools under version control, I want the target file with the custom task in it to point to the NUnit console application using a relative path. My problem is that this relative path ends up being resolved relative to the importing project. E.g. (in ... MyRepository\Third Party\NUnit\MSBuild.NUnit.Task.Source\bin\Release\MSBuild.NUnit.Task.Targets):

        ...
        <PropertyGroup Condition="'$(NUnitConsoleToolPath)' == ''">
          <NUnitConsoleToolPath>..\..\..\NUnit 2.5.5\bin\net-2.0</NUnitConsoleToolPath>
        </PropertyGroup>
        ...
        <Target Name="IntegratedTest">
          <NUnitIntegrated
            TreatFailedTestsAsErrors="$(NUnitTreatFailedTestsAsErrors)"
            AssemblyName="$(AssemblyName)"
            OutputPath="$(OutputPath)"
            ConsoleToolPath="$(NUnitConsoleToolPath)"
            ConsoleTool="$(NUnitConsoleTool)" />
        </Target>
        ...

    The above target fails with the error that the file cannot be found (that is, the nunit-console.exe file). Inside the NUnitIntegrated MSBuild task, when the Execute() method is called, the current directory is the directory of the importing project, so relative paths point to the wrong location. I tried to convert the relative path to an absolute one by adding these tasks to the IntegratedTest target:

        <ConvertToAbsolutePath Paths="$(NUnitConsoleToolPath)">
          <Output TaskParameter="AbsolutePaths" PropertyName="AbsoluteNUnitConsoleToolPath"/>
        </ConvertToAbsolutePath>

    but this just converted it to be relative to the directory of the project file that imports the target file. I know I can use the property $(MSBuildProjectDirectory) to get the directory of the importing project, but I can't find any equivalent for the directory of the imported target file. Can anyone tell me how a path in an imported file, intended to be relative to the directory the imported file is in, can be made absolute? Thanks!
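    One avenue worth checking, assuming the build runs on MSBuild 4.0 (Visual Studio 2010) or later: the reserved property $(MSBuildThisFileDirectory) expands to the directory of the file currently being evaluated, i.e. the imported .targets file rather than the importing project. A minimal sketch along those lines, reusing the property name from the excerpt above, is:

        <!-- Sketch only: assumes MSBuild 4.0+, where the $(MSBuildThisFile*) reserved
             properties exist; on older MSBuild versions this property is empty. -->
        <PropertyGroup Condition="'$(NUnitConsoleToolPath)' == ''">
          <!-- Anchor the relative tool path to the directory of this imported .targets file.
               $(MSBuildThisFileDirectory) already ends with a backslash. -->
          <NUnitConsoleToolPath>$(MSBuildThisFileDirectory)..\..\..\NUnit 2.5.5\bin\net-2.0</NUnitConsoleToolPath>
        </PropertyGroup>

    Whether this matches the MSBuild version in use here is an assumption; on MSBuild 3.5 some other anchor for the imported file's location would be needed.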

    Read the article

  • Winforms TabControl causing spurious Paint events for UserControl

    - by Tom Bushell
    For our project, we've written a WinForms UserControl for graphing. We're seeing some strange behavior when our control is sited in a TabControl: our control continuously fires Paint events, even when there is absolutely no activity by the user. We only see this in the TabControl. When we site our control in other containers such as Forms or Splitters, Paint is only fired when you'd expect, e.g. when the control is first displayed, etc. Can anyone suggest why this might be happening? Here's a stack trace from a breakpoint in our control's Paint handler, if that's any help.

        OverlordFrontEnd.exe!OverlordFrontEnd.MainForm.graphControl_Paint(object sender = {BI_BaseGraphXY.BaseGraphXY}, System.Windows.Forms.PaintEventArgs e = {ClipRectangle = {X=0,Y=0,Width=1031,Height=408}}) Line 422  C#
        System.Windows.Forms.dll!System.Windows.Forms.Control.OnPaint(System.Windows.Forms.PaintEventArgs e) + 0x73 bytes
        BI_AppCore.dll!BI_BaseGraphXY.BaseGraphXY.OnPaint(System.Windows.Forms.PaintEventArgs e = {ClipRectangle = {X=0,Y=0,Width=1031,Height=408}}) Line 377 + 0xb bytes  C#
        System.Windows.Forms.dll!System.Windows.Forms.Control.PaintTransparentBackground(System.Windows.Forms.PaintEventArgs e, System.Drawing.Rectangle rectangle, System.Drawing.Region transparentRegion = null) + 0x16c bytes
        System.Windows.Forms.dll!System.Windows.Forms.Control.PaintBackground(System.Windows.Forms.PaintEventArgs e = {ClipRectangle = {X=0,Y=0,Width=1029,Height=406}}, System.Drawing.Rectangle rectangle, System.Drawing.Color backColor, System.Drawing.Point scrollOffset) + 0xbc bytes
        System.Windows.Forms.dll!System.Windows.Forms.Control.PaintBackground(System.Windows.Forms.PaintEventArgs e, System.Drawing.Rectangle rectangle) + 0x63 bytes
        System.Windows.Forms.dll!System.Windows.Forms.Control.OnPaintBackground(System.Windows.Forms.PaintEventArgs pevent) + 0x59 bytes
        System.Windows.Forms.dll!System.Windows.Forms.Control.PaintWithErrorHandling(System.Windows.Forms.PaintEventArgs e = {ClipRectangle = {X=0,Y=0,Width=1029,Height=406}}, short layer, bool disposeEventArgs = false) + 0x74 bytes
        System.Windows.Forms.dll!System.Windows.Forms.Control.WmPaint(ref System.Windows.Forms.Message m) + 0x1ba bytes
        System.Windows.Forms.dll!System.Windows.Forms.Control.WndProc(ref System.Windows.Forms.Message m) + 0x33e bytes
        System.Windows.Forms.dll!System.Windows.Forms.Control.ControlNativeWindow.OnMessage(ref System.Windows.Forms.Message m) + 0x10 bytes
        System.Windows.Forms.dll!System.Windows.Forms.Control.ControlNativeWindow.WndProc(ref System.Windows.Forms.Message m) + 0x31 bytes
        System.Windows.Forms.dll!System.Windows.Forms.NativeWindow.Callback(System.IntPtr hWnd, int msg = 15, System.IntPtr wparam, System.IntPtr lparam) + 0x5a bytes
        [Native to Managed Transition]
        [Managed to Native Transition]
        System.Windows.Forms.dll!System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(int dwComponentID, int reason = -1, int pvLoopData = 0) + 0x24e bytes
        System.Windows.Forms.dll!System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(int reason = -1, System.Windows.Forms.ApplicationContext context = {Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.WinFormsAppContext}) + 0x177 bytes
        System.Windows.Forms.dll!System.Windows.Forms.Application.ThreadContext.RunMessageLoop(int reason, System.Windows.Forms.ApplicationContext context) + 0x61 bytes
        System.Windows.Forms.dll!System.Windows.Forms.Application.Run(System.Windows.Forms.ApplicationContext context) + 0x18 bytes
        Microsoft.VisualBasic.dll!Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.OnRun() + 0x81 bytes
        Microsoft.VisualBasic.dll!Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.DoApplicationModel() + 0xef bytes
        Microsoft.VisualBasic.dll!Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.Run(string[] commandLine) + 0x2c0 bytes
        OverlordFrontEnd.exe!OverlordFrontEnd.Program.Main() Line 36 + 0x10 bytes  C#
        [Native to Managed Transition]
        [Managed to Native Transition]
        mscorlib.dll!System.AppDomain.ExecuteAssembly(string assemblyFile, System.Security.Policy.Evidence assemblySecurity, string[] args) + 0x3a bytes
        Microsoft.VisualStudio.HostingProcess.Utilities.dll!Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() + 0x2b bytes
        mscorlib.dll!System.Threading.ThreadHelper.ThreadStart_Context(object state) + 0x66 bytes
        mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state) + 0x6f bytes
        mscorlib.dll!System.Threading.ThreadHelper.ThreadStart() + 0x44 bytes
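    A hedged guess, since PaintTransparentBackground shows up in the trace: when a control with a transparent background sits on a TabControl, WinForms repaints the parent to synthesize the transparent background, which can bounce invalidations back and forth between parent and child. The C# sketch below shows a common mitigation (double buffering plus owning all painting) together with a tiny diagnostic; the control name is made up, the APIs are standard WinForms, and whether this actually stops the loop in this particular project is an assumption, not a confirmed fix.

        using System;
        using System.Windows.Forms;

        // Sketch: a graphing UserControl that opts into double buffering and folds background
        // painting into WM_PAINT, a common way to damp repeated repaints inside a TabControl.
        public class BaseGraphControl : UserControl
        {
            public BaseGraphControl()
            {
                SetStyle(ControlStyles.UserPaint |
                         ControlStyles.AllPaintingInWmPaint |      // skip separate WM_ERASEBKGND passes
                         ControlStyles.OptimizedDoubleBuffer, true);
                UpdateStyles();
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                base.OnPaint(e);
                // Diagnostic: log how often Paint fires to confirm (or rule out) a repaint loop.
                System.Diagnostics.Debug.WriteLine(
                    string.Format("Paint at {0:HH:mm:ss.fff}, clip={1}", DateTime.Now, e.ClipRectangle));
            }
        }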

    Read the article

  • Integrating POP3 client functionality into a C# application?

    - by flesh
    I have a web application that requires a server-based component to periodically access POP3 mailboxes and retrieve emails. The service then needs to process the emails, which involves:
    - Validating the email against some business rules (does it contain a valid reference in the subject line, which user sent the mail, etc.)
    - Analysing and saving any attachments to disk
    - Taking the email body and attachment details and creating a new item in the database, or updating an existing item where the reference matches the incoming email subject line

    What is the best way to approach this? I really don't want to have to write a POP3 client from scratch, but I need to be able to customize the processing of emails. Ideally I would be able to plug in some component that does the access and retrieval for me, returning arrays of attachments, body text, subject line, etc. ready for my processing...

    [UPDATE: Reviews] OK, so I have spent a fair amount of time looking into (mainly free) .NET POP3 libraries, so I thought I'd provide a short review of some of those mentioned below and a few others:
    - Pop3.net - free - works OK, very basic in terms of functionality provided. This is pretty much just the POP3 commands and some base64 encoding, but it's very straightforward - probably a good introduction.
    - Pop3 Wizard - commercial / some open source code - couldn't get this to build, missing DLLs; I wouldn't bother with this.
    - C#Mail - free - works well, comes with a MIME parser and SMTP client; however, the comments are in Japanese (not a big deal) and it didn't work with SSL 'out of the box' - I had to change the SslStream constructor, after which it worked no problem.
    - OpenPOP - free - hasn't been updated for about 5 years, so its current state is .NET 1.0. It doesn't support SSL, but that was no problem to resolve - I just replaced the existing stream with an SslStream and it worked. Comes with a MIME parser.

    Of the free libraries, I'd go for C#Mail or OpenPOP. I also looked at a few commercial libraries: Chilkat, Rebex, RemObjects, JMail.net. Based on features, price, and impression of the company, I would probably go for Rebex, and may do so in the future if my requirements change or I run into production issues with either C#Mail or OpenPOP. In case anyone needs it, this is the replacement SslStream constructor that I used to enable SSL with C#Mail and OpenPOP:

        SslStream stream = new SslStream(
            clientSocket.GetStream(),
            false,
            delegate(object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors errors) { return true; });
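    Since the question is about plugging POP3 retrieval into custom processing, here is a hedged C# sketch of the raw protocol that the libraries above wrap: connect over SSL, authenticate, retrieve one message. The host, port, and credentials are placeholders, there is no MIME parsing and almost no error handling, and in practice one of the reviewed libraries would sit in place of this.

        using System;
        using System.IO;
        using System.Net.Security;
        using System.Net.Sockets;
        using System.Text;

        // Minimal POP3-over-SSL sketch (plain RFC 1939 commands). Host and credentials are
        // placeholders; the multi-line reply handling is deliberately simplistic.
        class Pop3Sketch
        {
            static void Main()
            {
                using (var client = new TcpClient("pop.example.com", 995))        // assumed host/port
                using (var ssl = new SslStream(client.GetStream(), false))
                {
                    ssl.AuthenticateAsClient("pop.example.com");
                    var reader = new StreamReader(ssl, Encoding.ASCII);
                    var writer = new StreamWriter(ssl, Encoding.ASCII) { AutoFlush = true, NewLine = "\r\n" };

                    Console.WriteLine(reader.ReadLine());                          // server greeting
                    Send(writer, reader, "USER someuser");                         // assumed credentials
                    Send(writer, reader, "PASS somepassword");
                    Send(writer, reader, "STAT");                                  // message count and total size

                    Send(writer, reader, "RETR 1");                                // +OK status line for message 1
                    string line;
                    var raw = new StringBuilder();
                    while ((line = reader.ReadLine()) != null && line != ".")      // multi-line reply ends with "."
                        raw.AppendLine(line);
                    Console.WriteLine(raw.ToString());                             // raw headers + body, still MIME-encoded

                    Send(writer, reader, "QUIT");
                }
            }

            static void Send(StreamWriter w, StreamReader r, string command)
            {
                w.WriteLine(command);
                Console.WriteLine(r.ReadLine());                                   // single-line +OK / -ERR response
            }
        }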

    Read the article

  • Why do many software projects fail today?

    - by TomTom
    As long as there have been software projects, the world has wondered why they fail so often. I would like to know if there is a list or something equivalent which shows how many software projects fail today. It would be nice if there were a comparison over the last 20-30 years. You can also add your top reason why a software project fails. Mine is "Requirements are poor or don't even exist," which also includes "No (real) customer / user involved."

    EDIT: It is nearly impossible to clearly define the term "fail." Let's say that fail means: the project was more than 10% over budget and time. In my opinion, +/-10% is a good range for an offer / tender.

    EDIT: Until now (Feb 11) it seems that most posters agree that a failure of the project is basically a failure of the project management (whatever "fail" means). But IMHO it comes out that most developers are not happy with this situation. Perhaps because it is not the manager who gets penalized when a project is not successful, but the supposedly lazy, incompetent developer teams? When I read the posts I can also sense that there is a big gap between the developer side and the management side. The expectations (perhaps also the requirements) seem to be so different that a project cannot be successful in the end (over time / budget; users are not happy; not all first-priority features implemented; too many bugs because developers were forced to implement in too-short timeframes ...). I'm asking myself: How can we improve it? Or do we even have the possibility to improve it? Everybody seems to be unsatisfied with the way it goes now. Can we close the gap between these two worlds? Should we (the developers) go on strike and fight for "high quality requirements" and "realistic / iteration-based time schedules"?

    EDIT: Ralph Westphal and Stefan Lieser have founded a new "community" called Clean Code Developer. The aim of the group is to bring more professionalism into software engineering. Independently of whether a developer has a degree or tons of years of experience, you can be part of this movement. Clean Code Developers live principles like SOLID every day. A professional developer is the biggest reviewer of his own work, and he has an internal value system which helps him to improve and become better. Check it out at: Clean Code Developer

    EDIT: Our company is currently doing something called "Application Development and Maintenance Benchmarking." This is a service offered by IBM to get feedback from someone external on your software engineering process quality, etc. When we get the results, I will tell you more about it.

    Read the article

  • Debugging XSLT with extension objects in Visual Studio 2010

    - by Alex Ciminian
    I'm currently working on a project that involves a lot of XSLT transformations and I really need a debugger (I have XSLTs that are 1000+ lines long and I didn't write them :-). The project is written in C# and makes use of extension objects:

        xslArg.AddExtensionObject("urn:<obj>", new <Obj>());

    From my knowledge, in this situation Visual Studio is the only tool that can help me debug the transformations step by step. The static debugger is no use because of the extension objects (it throws an error when it reaches elements that reference their namespace). Fortunately, I've found this thread, which gave me a starting point (at least I know it can be done). After searching MSDN, I found the criteria that make stepping into the transform possible. They are listed here. In short:
    - the XML and the XSLT must be loaded via a class that implements the IXmlLineInfo interface (XmlReader & co.)
    - the XML resolver used in the XslCompiledTransform constructor is file-based (XmlUriResolver should work)
    - the stylesheet should be on the local machine or on the intranet (?)

    From what I can tell, I meet all these criteria, but it still doesn't work. The relevant code samples are posted below:

        // [...]
        xslTransform = new XslCompiledTransform(true);
        xslTransform.Load(XmlReader.Create(new StringReader(contents)), null, new BaseUriXmlResolver(xslLocalPath));
        // [...]
        // I already had the XML loaded in an XmlDocument,
        // so I have to convert it to an XmlReader
        XmlTextReader r = new XmlTextReader(new StringReader(xmlDoc.OuterXml));
        XsltArgumentList xslArg = new XsltArgumentList();
        xslArg.AddExtensionObject("urn:[...]", new [...]());
        xslTransform.Transform(r, xslArg, context.Response.Output);

    I really don't get what I'm doing wrong. I've checked the interfaces on both XmlReader objects and they implement the required one. Also, BaseUriXmlResolver inherits from XmlUriResolver and the stylesheet is stored locally. When stepping into the Transform function I can first see the stylesheet code; after stepping through the parameters (on template-match), the debugger no longer shows anything useful (the original post included screenshots of both states, not reproduced here). If anyone has any idea why it doesn't work, or has an alternative way of getting it to work, I'd be much obliged :). Thanks, Alex
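    For comparison, here is a hedged C# sketch of the loading pattern the MSDN debugging criteria point at: both the stylesheet and the input are loaded straight from file paths (so the readers carry line info and a file base URI) and debugging is enabled in the XslCompiledTransform constructor. The file paths and the extension object namespace are placeholders, and whether this alone makes stepping work in the original project is an assumption.

        using System.IO;
        using System.Xml;
        using System.Xml.Xsl;

        // Sketch: load stylesheet and input XML directly from files so the debugger can map
        // execution back to on-disk documents. Paths and the extension object are placeholders.
        class XsltDebugSketch
        {
            static void Main()
            {
                var xslPath = @"C:\work\transform.xslt";    // assumed local path
                var xmlPath = @"C:\work\input.xml";         // assumed local path

                // 'true' enables debugging in the generated transform.
                var transform = new XslCompiledTransform(true);
                transform.Load(xslPath, XsltSettings.TrustedXslt, new XmlUrlResolver());

                var args = new XsltArgumentList();
                args.AddExtensionObject("urn:my-extension", new object());   // placeholder extension object

                using (var input = XmlReader.Create(xmlPath))
                using (var output = new StreamWriter(@"C:\work\output.html"))
                {
                    transform.Transform(input, args, output);   // step into here (F11) to enter the XSLT
                }
            }
        }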

    Read the article

  • What is in your Mathematica tool bag?

    - by Timo
    We all know that Mathematica is great, but it also often lacks critical functionality. What kind of external packages / tools / resources do you use with Mathematica? I'll edit (and invite anyone else to do so too) this main post to include resources which are focused on general applicability in scientific research and which as many people as possible will find useful. Feel free to contribute anything, even small code snippets (as I did below for a timing routine). Also, undocumented and useful features in Mathematica 7 and beyond you found yourself, or dug up from some paper/site, are most welcome. Please include a short description or comment on why something is great or what utility it provides. If you link to books on Amazon with affiliate links, please mention it, e.g., by putting your name after the link.

    Packages:
    - LevelScheme is a package that greatly expands Mathematica's capability to produce good looking plots. I use it if not for anything else then for the much, much improved control over frame/axes ticks.
    - David Park's Presentation Package ($50 - no charge for updates)

    Tools:
    - MASH is Daniel Reeves's excellent Perl script essentially providing scripting support for Mathematica 7. (This is finally built in as of Mathematica 8 with the -script option.)

    Resources:
    - Wolfram's own repository MathSource has a lot of useful, if narrow, notebooks for various applications. Also check out the other sections such as Current Documentation, Courseware for lectures, and Demos for, well, demos.

    Books:
    - Mathematica Programming: An Advanced Introduction by Leonid Shifrin (web, pdf) is a must read if you want to do anything more than For loops in Mathematica.
    - Quantum Methods with Mathematica by James F. Feagin (amazon)
    - The Mathematica Book by Stephen Wolfram (amazon) (web)
    - Schaum's Outline (amazon)
    - Mathematica in Action by Stan Wagon (amazon) - 600 pages of neat examples, going up to Mathematica version 7. The visualization techniques are especially good; you can see some of them on the author's Demonstrations page.
    - Mathematica Programming Fundamentals by Richard Gaylord (pdf) - a good, concise introduction to most of what you need to know about Mathematica programming.

    Undocumented (or scarcely documented) features:
    - How to customize Mathematica keyboard shortcuts. See this question.
    - How to inspect patterns and functions used by Mathematica's own functions. See this answer.
    - How to achieve consistent sizes for GraphPlots in Mathematica. See this question.

    Read the article

  • Question About Example In Robert C Martin's _Clean Code_

    - by Jonah
    This is a question about the concept of a function doing only one thing. It won't make sense without some relevant passages for context, so I'll quote them here. They appear on pgs 37-38:

    "To say this differently, we want to be able to read the program as though it were a set of TO paragraphs, each of which is describing the current level of abstraction and referencing subsequent TO paragraphs at the next level down. To include the setups and teardowns, we include setups, then we include the test page content, and then we include the teardowns. To include the setups, we include the suite setup if this is a suite, then we include the regular setup. It turns out to be very difficult for programmers to learn to follow this rule and write functions that stay at a single level of abstraction. But learning this trick is also very important. It is the key to keeping functions short and making sure they do 'one thing.' Making the code read like a top-down set of TO paragraphs is an effective technique for keeping the abstraction level consistent."

    He then gives the following example of poor code:

        public Money calculatePay(Employee e) throws InvalidEmployeeType {
            switch (e.type) {
                case COMMISSIONED:
                    return calculateCommissionedPay(e);
                case HOURLY:
                    return calculateHourlyPay(e);
                case SALARIED:
                    return calculateSalariedPay(e);
                default:
                    throw new InvalidEmployeeType(e.type);
            }
        }

    and explains the problems with it as follows: "There are several problems with this function. First, it's large, and when new employee types are added, it will grow. Second, it very clearly does more than one thing. Third, it violates the Single Responsibility Principle (SRP) because there is more than one reason for it to change. Fourth, it violates the Open Closed Principle (OCP) because it must change whenever new types are added."

    Now my questions. To begin, it's clear to me how it violates the OCP, and it's clear to me that this alone makes it poor design. However, I am trying to understand each principle, and it's not clear to me how the SRP applies. Specifically, the only reason I can imagine for this method to change is the addition of new employee types; there is only one "axis of change." If details of the calculation needed to change, this would only affect the submethods like calculateHourlyPay(). Also, while in one sense it is obviously doing three things, those three things are all at the same level of abstraction and can all be put into a TO paragraph no different from the example one: TO calculate pay for an employee, we calculate commissioned pay if the employee is commissioned, hourly pay if he is hourly, etc. So aside from its violation of the OCP, this code seems to conform to Martin's other requirements of clean code, even though he's arguing it does not. Can someone please explain what I am missing? Thanks.

    Read the article

  • Attempting to extract a pattern within a string

    - by Brian
    I'm attempting to extract a given pattern from a text file; however, the results are not 100% what I want. Here's my code:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class ParseText1 {
            public static void main(String[] args) {
                String content = "<p>Yada yada yada <code> foo ddd</code>yada yada ...\n"
                        + "more here <2004-08-24> bar<Bob Joe> etc etc\n"
                        + "more here again <2004-09-24> bar<Bob Joe> <Fred Kej> etc etc\n"
                        + "more here again <2004-08-24> bar<Bob Joe><Fred Kej> etc etc\n"
                        + "and still more <2004-08-21><2004-08-21> baz <John Doe> and now <code>the end</code> </p>\n";

                Pattern p = Pattern.compile(
                        "<[1234567890]{4}-[1234567890]{2}-[1234567890]{2}>.*?<[^%0-9/]*>",
                        Pattern.MULTILINE);
                Matcher m = p.matcher(content);

                // print all the matches that we find
                while (m.find()) {
                    System.out.println(m.group());
                }
            }
        }

    The output I'm getting is:

        <2004-08-24> bar<Bob Joe>
        <2004-09-24> bar<Bob Joe> <Fred Kej>
        <2004-08-24> bar<Bob Joe><Fred Kej>
        <2004-08-21><2004-08-21> baz <John Doe> and now <code>

    The output I want is:

        <2004-08-24> bar<Bob Joe>
        <2004-08-24> bar<Bob Joe>
        <2004-08-24> bar<Bob Joe>
        <2004-08-21> baz <John Doe>

    In short, the sequence of "date", "text (or blank)", and "name" must be extracted; everything else should be avoided. For example, the tag "Fred Kej" did not have any "date" tag before it; therefore, it should be flagged as invalid. Also, as a side question, is there a way to store or track the text snippets that were skipped/rejected, just as the valid ones are? Thanks, Brian

    Read the article

  • realtime logging

    - by Ion Todirel
    I have an application which has a loop, part of a "Scheduler," which runs at all times and is the heart of the application. It is much like a game loop, except that my application is a WPF application and not a game. Naturally the application does logging at many points, but the Scheduler does some sensitive monitoring, and sometimes it's impossible to tell from the logs alone what may have gone wrong (and by wrong I don't mean exceptions) or what the current status is. Because the Scheduler's inner loop runs at short intervals, you can't do file-I/O-based logging (or use the Event Viewer) in there: first, you need to watch it in real time, and secondly the log file would grow in size very fast. So I was thinking of ways to show this data to the user in real time. Some things I considered:
    1. Display the data in real time in the UI
    2. Use AllocConsole/WriteConsole to display this information in a console
    3. Use a different console application which would display this information, communicating between the Scheduler and the console app using pipes or other IPC techniques
    4. Use Windows' Performance Monitor and somehow feed it with this information
    5. ETW

    Displaying in the UI would have its issues. First, it doesn't integrate with the UI I had in mind for my application, and I don't want to complicate the UI just for this; these diagnostics would only be needed rarely. Secondly, there is going to be some non-trivial data protection, as the Scheduler has its own thread. A separate console window would probably work, but I'm still worried it may be too much of a hurdle. Allocating my own console, as this is a Windows app, would probably be better than a separate console application (3), since I wouldn't need to worry about IPC and non-blocking communication; however, a user could close the console I allocated, which would be problematic. With a separate process you don't have to worry about that. Assuming there is an API for Performance Monitor, it wouldn't be integrated too well with my app or be apparent to the users. Using ETW also doesn't solve anything by itself (just a random idea); I still need to display this information somehow. What do others think? Are there other approaches I've missed?
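    For option 2, a hedged C# sketch of the AllocConsole approach is below. AllocConsole is a real Win32 API exposed via P/Invoke; the idea of funneling messages through a thread-safe queue so the Scheduler loop never blocks on console I/O is an illustration, not the application's actual design.

        using System;
        using System.Collections.Concurrent;
        using System.Runtime.InteropServices;
        using System.Threading;

        // Sketch of option 2: attach a console to the WPF process and drain diagnostic
        // messages on a background thread; the Scheduler loop only enqueues strings.
        static class RealtimeDiagnostics
        {
            [DllImport("kernel32.dll", SetLastError = true)]
            private static extern bool AllocConsole();

            private static readonly BlockingCollection<string> Queue = new BlockingCollection<string>();

            public static void Start()
            {
                // Note: the user can still close this console; if nothing appears, the standard
                // output stream may need to be re-bound to the new console's stdout handle.
                AllocConsole();
                var pump = new Thread(() =>
                {
                    foreach (var line in Queue.GetConsumingEnumerable())
                        Console.WriteLine(line);
                }) { IsBackground = true };
                pump.Start();
            }

            // Called from the Scheduler's loop; cheap and non-blocking apart from the enqueue.
            public static void Log(string message)
            {
                Queue.Add(string.Format("{0:HH:mm:ss.fff}  {1}", DateTime.Now, message));
            }
        }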

    Read the article

< Previous Page | 206 207 208 209 210 211 212 213 214 215 216 217  | Next Page >