Search Results

Search found 2954 results on 119 pages for 'rick green turbo'.


  • Find the product key I entered for MS Office on Mac

    - by Rick Reynolds
    I have several legal license keys for Office:mac 2008. I want to do a quick audit of the two machines I've installed Office on and verify which license keys are being used where. But I don't see the license key anywhere in the About dialog (or elsewhere). I've seen other postings on the 'net directing me to look at various .plist files, but those only give me the "Product ID", which is different from the license key (which MS calls the "Product Key" on the little sticker). Is there a way, outside of calling MS, to correlate the Product Key (which is required for installation and is the real license key) to the Product ID I see in the app itself?
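    A minimal sketch for at least pulling the Product ID out of Office 2008's preference files on each machine, so the values can be tabulated side by side (the plist names and paths are assumptions and may vary by Office build):

        # Dump any Microsoft plist entries mentioning a product ID (paths assumed)
        for f in ~/Library/Preferences/com.microsoft.*.plist; do
          echo "== $f"
          plutil -convert xml1 -o - "$f" | grep -i "productid"
        done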

    Read the article

  • What can cause SQL 2008 Transaction Log Shipping to stop functioning?

    - by Rick
    I read somewhere that taking a manual backup, or having a Maintenance Plan run, can cause Log Shipping to stop functioning. Is this true? What should we watch out for, once our Transaction Log Shipping is in place, that could stop it? A Log Shipping test we were doing between two databases on the same SQL 2008 server appeared to stop working without any error. When we checked the history of the LSRestore_* job, it was always ignoring the new *.trn files. Any suggestions? Thanks.
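    For monitoring, a quick health check can be run against the standard msdb tables on the secondary; a sketch (database names are whatever your setup uses):

        -- Summarises copy/restore status and flags stalled jobs
        EXEC master.dbo.sp_help_log_shipping_monitor;

        -- Last file actually restored on the secondary
        SELECT secondary_database, last_restored_file, last_restored_date
        FROM msdb.dbo.log_shipping_monitor_secondary;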

    Read the article

  • Slow File Copy observed copying 40GB files across network to iSCSI device

    - by Rick
    Here's a curious one for the gurus. The setup:

    Source machine: Windows Server 2003 R2 with a local hard drive holding a 40 GB VHD file; 1 x 1 Gbps network card, Cat6 cable, into a switch.

    Target machine: Windows Server 2008 R2 with an iSCSI connection to an iSCSI target on a separate machine (1 TB, RAID5); 1 x 1 Gbps network card, Cat6 cable, into the same switch as the source machine; plus a second 1 Gbps network card, Cat6 cable, connected via an isolated switch to the iSCSI target. The switches are Netgear JGS524 models (web managed).

    If I copy from the Win2003R2 machine to the Win2008R2 machine's local drive, I get 40 GB in 45 minutes, 36 seconds. If I copy from the Win2008R2 machine to the iSCSI target (local drive to iSCSI target), I get 40 GB in 37 minutes, 56 seconds. But if I copy from the Win2003R2 machine straight through to the iSCSI target via the Win2008R2 machine, I get 40 GB in 3 hours, 50 minutes, 24 seconds. All copies were done via the following command issued on the Win2008R2 box:

        XCOPY <source> <target> /J

    (/J copies using unbuffered I/O; recommended for very large files.) So, what's the bit I'm missing here? Why does a back-to-back copy take 1 hour, 23 minutes, 32 seconds in total, while a "straight through" copy takes almost three times as long? The switches show no errors, and the network hovers around the 3% utilisation mark for the duration of the straight-through copy (whereas the back-to-back copies run around the 25% utilisation mark). What have I missed?
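    One way to narrow this down is to time each leg of the straight-through path separately, with and without unbuffered I/O, since /J can hurt when reads and writes cross two network hops at once. A sketch (server and path names are hypothetical):

        :: Network leg alone, unbuffered vs. buffered
        XCOPY \\SRC2003\share\image.vhd C:\staging\ /J
        XCOPY \\SRC2003\share\image.vhd C:\staging\

        :: Then the iSCSI leg from the already-local copy
        XCOPY C:\staging\image.vhd E:\iscsi\ /J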

    Read the article

  • Generic PostScript driver for Windows Vista x64?

    - by Rick
    I have an old HP parallel port printer that is not supported by Vista. No drivers I've found online work with it. As a last ditch effort, I was hoping to find some generic postscript drivers for Vista x64 in the hopes that the printer will accept those commands. Does anyone know where I could come by such drivers?
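    As a first check, you can test whether the printer understands PostScript at all by pushing a raw file straight to the parallel port, bypassing any driver; a sketch, assuming the printer is on LPT1:

        copy /B sample.ps LPT1: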

    Read the article

  • How do I prevent Outlook users from voting multiple times?

    - by Rick
    Using the voting buttons in Outlook 2007, users can apparently submit votes as many times as they like. (I have just now verified this behaviour.) Is there a way to restrict users to just one vote each? I've found a script online which claims to do this, but there's no way I can use a script like that across our company. I'm hoping it's just a configuration setting, either in the outbound email or in the user's Outlook client.

    Read the article

  • Problem with Graphics Card, Power Supply or Mother Board?

    - by Rick Siegert
    I have a problem that is driving me to the edge. My graphics card periodically loses power for a moment, then comes back. Once in a while the outage lasts much longer, like 5 minutes; since I don't know at the time whether it will come back, I have usually just rebooted during that period. The screen goes black, with a no-power message across my monitor. All the equipment is only a few months old: the motherboard is an MSI N9A2 Platinum Revision 1 (AMD), the video card is a Gigabyte Radeon HD 4850 1GB, and the power supply is an Ultra 700W. My OS is XP Pro SP3. Any ideas or suggestions on how to solve this?

    Read the article

  • VFS: file-max limit 1231582 reached

    - by Rick Koshi
    I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like:

        ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23

    Yes, my system can't consistently run an 'ls' command. :( I note several errors in my dmesg output:

        # dmesg | tail
        [2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null)
        [2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000]
        [4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null)
        [4982666.631444] VFS: file-max limit 1231582 reached
        [4982666.764240] VFS: file-max limit 1231582 reached
        [4982767.360574] VFS: file-max limit 1231582 reached
        [4982901.904628] VFS: file-max limit 1231582 reached
        [4982964.930556] VFS: file-max limit 1231582 reached
        [4982966.352170] VFS: file-max limit 1231582 reached
        [4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000]

    Obviously, the file-max errors look suspicious, being clustered together and recent.

        # cat /proc/sys/fs/file-max
        1231582
        # cat /proc/sys/fs/file-nr
        1231712   0   1231582

    That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network.

        # lsof | wc
          16046  148253 1882901
        # ps -ef | wc
            574    6104   44260

    I saw some documentation saying:

        file-max & file-nr: The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file-handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit.

        Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles.

        Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached".

    My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal to me to have Linux systems stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open.

    Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back.) Thanks in advance for any help.

    And another update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. I found that, predictably, the number of open files gradually went down as I closed things down. Unfortunately, it wasn't a large effect. Yes, I was able to clear up 5000-10000 open files, but there were still over 1.2 million left. I shut down just about everything: all interactive shells except for the one ssh I left open to finish closing down, httpd, even nfs service. Basically everything in the process table that wasn't a kernel process, and there was still an appalling number of files apparently left open. After the reboot, I found that /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up 2 Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause. I'm still looking for ideas, since I definitely expect it to happen again.

    And another update, the next day: I watched the system carefully, and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further. If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.
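    Since the growth rate is slow, logging the counter over time and correlating it with activity may help pin down the culprit; a minimal sketch using the same procfs interface as above:

        # Append a timestamped reading of file-nr every 10 minutes
        while sleep 600; do
          echo "$(date '+%F %T') $(cat /proc/sys/fs/file-nr)"
        done >> /var/log/file-nr-growth.log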

    Read the article

  • IIS7 301 permanent redirect from billarga.com.au to billarga.com

    - by Rick
    Using the IIS7 GUI, I have placed a 301 permanent redirect from billarga.com.au to billarga.com and left the other behaviours unchecked (I want a relative redirect). As soon as I apply the redirect on the .com.au site, the same redirect appears for the .com domain. I don't understand why changing one also changes the other. Has it got something to do with both using the same bindings? Each domain uses two bindings, one with www and the other without. My aim is to condense all traffic and Google listings onto the one domain, but still be able to use the .com.au for appearance purposes in the URL (for Aussies). Any help with this is appreciated!
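    For reference, the GUI stores HTTP redirects in the web.config of the site's physical folder, so two sites that point at the same folder end up sharing the rule; a sketch of what the setting above generates (standard httpRedirect schema):

        <configuration>
          <system.webServer>
            <httpRedirect enabled="true" destination="http://billarga.com"
                          httpResponseStatus="Permanent" />
          </system.webServer>
        </configuration>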

    Read the article

  • Linux Software RAID: How to fsck on hard drive?

    - by Rick-Rainer Ludwig
    We have a Linux server running with software RAID1. We see some issues in /var/log/messages, like: unreadable sector. I want to perform a complete fsck on the drive to get some more information, but an fsck /dev/md0 comes back clean because of the software RAID layer in between. How can I check the real hard drive? Do I need to disassemble the whole RAID? How do I deal with the inconsistency in the partition due to the additional software RAID header? Does anyone have a good idea for this?
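    A sketch of checks that work below the md layer without breaking the array (device names are assumptions; badblocks is read-only by default):

        # SMART status of the physical member
        smartctl -a /dev/sda | grep -i -e reallocated -e pending

        # Read-only surface scan of the member disk
        badblocks -sv /dev/sda

        # Let md verify mirror consistency itself
        echo check > /sys/block/md0/md/sync_action
        cat /proc/mdstat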

    Read the article

  • Handling site not found and page not found with dynamic mass virtual hosting

    - by Rick Moynihan
    I have recently set up mass virtual hosting in Apache so that all we need to do is create a directory to create a new vhost. We're then also using wildcard DNS to map all subdomains to the server running our Apache instance. This works excellently, however I'm now having trouble configuring it to fail over to an appropriate default/error page when the vhost directory does not exist. The problem appears to be conflated by my desire to handle two error conditions:

    1. vhost not found, i.e. there was no directory found matching the host supplied in the HTTP Host header. I'd like this to display an appropriate site-not-found error page.

    2. The 404 page-not-found condition within an existing vhost.

    Additionally I have a specialised "api" vhost in its own vhost block. I've tried a number of variations and none seem to exhibit the behaviour I want. Here's what I'm working with right now:

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot /var/www/site-not-found
            ServerName sitenotfound.mydomain.org
            ErrorDocument 500 /500.html
            ErrorDocument 404 /500.html
        </VirtualHost>

        <VirtualHost *:80>
            ServerName api.mydomain.org
            DocumentRoot /var/www/vhosts/api.mydomain.org/current
            # other directives, e.g. setting up passenger/rails etc...
        </VirtualHost>

        <VirtualHost *:80>
            # get the server name from the Host: header
            UseCanonicalName Off
            VirtualDocumentRoot /var/www/vhosts/%0/current
            # other directives ... e.g. proxy passing to api etc...
            ErrorDocument 404 /404.html
        </VirtualHost>

    My understanding is that the first vhost block is used as the default, so I have it here as my catch-all site. Next I have my API vhost, and then finally my mass vhost block. So for a domain that doesn't match the first two ServerNames and has no corresponding directory in /var/www/vhosts/, I'd expect it to fall over to the first vhost; however, with this setup all domains resolve to my default site-not-found. Why is this? By putting the mass-vhost block first, I can get the mass vhosts to resolve properly, but not my site-not-found vhost... and in this case I can't seem to find a way to distinguish between a page-level 404 in a vhost and the case where the VirtualDocumentRoot fails to find a vhost directory (this appears to use the 404 also). Any help out of this bind is much appreciated!
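    One approach worth sketching: keep the mass-vhost block as written, but test inside it for the per-host directory and redirect to the not-found site only when that directory is missing (mod_rewrite's -d test; hostnames follow the question's layout):

        <VirtualHost *:80>
            UseCanonicalName Off
            VirtualDocumentRoot /var/www/vhosts/%0/current
            RewriteEngine On
            # No directory for this Host header => send to the site-not-found vhost
            RewriteCond /var/www/vhosts/%{HTTP_HOST}/current !-d
            RewriteRule ^ http://sitenotfound.mydomain.org/ [R=302,L]
            # A real 404 inside an existing vhost still uses this
            ErrorDocument 404 /404.html
        </VirtualHost>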

    Read the article

  • How do you import Firefox/Chrome bookmarks into Google Bookmarks?

    - by Rick
    How do you import Firefox/Chrome bookmarks into Google Bookmarks? It looks like Google Bookmarks has some wonderful features, but it doesn't let people import their existing bookmarks from their browser, be it Firefox, Chrome, or Internet Explorer. There used to be workarounds for this, but no more: http://googlesystem.blogspot.com/2011/01/google-bookmarks-import-without-google.html Can anyone think of a good way to pull this off?

    Read the article

  • can't connect to vsftpd from outside network

    - by rick
    I know this has been asked many times before, but nothing seems to resolve my issue. I have vsftpd running on Ubuntu 10.04. I can connect with ftp localhost on the machine, and I can connect from another machine in my network. I just cannot connect from outside. The machine is behind an AirPort Extreme managed by AirPort Utility on a Mac. Port 21 is open as per nmap:

        macmini:~$ nmap localhost

        Starting Nmap 5.21 ( http://nmap.org ) at 2011-04-10 23:49 EDT
        Nmap scan report for localhost (127.0.0.1)
        Host is up (0.00045s latency).
        Hostname localhost resolves to 2 IPs. Only scanned 127.0.0.1
        rDNS record for 127.0.0.1: localhost.localdomain
        Not shown: 997 closed ports
        PORT    STATE SERVICE
        21/tcp  open  ftp
        22/tcp  open  ssh
        631/tcp open  ipp

    netstat says 21 is listening:

        macmini:~$ netstat -lep --tcp | grep ftp
        (Not all processes could be identified, non-owned process info
         will not be shown, you would have to be root to see it all.)
        tcp        0      0 *:ftp      *:*      LISTEN

    iptables:

        macmini:~$ sudo iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    When I try to connect from my external IP (or a dyndns name which resolves there), it times out ("control connection timed out"). As I know very little about networking, I feel like something may jump out as clearly wrong?
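    If the control connection does start getting through (e.g. once port 21 is actually forwarded on the AirPort), FTP behind NAT usually also needs passive mode pinned down in /etc/vsftpd.conf; a sketch with placeholder values:

        pasv_enable=YES
        pasv_min_port=50000
        pasv_max_port=50100
        # The router's public address, so PASV replies don't hand out the LAN IP
        pasv_address=203.0.113.10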

    Read the article

  • Cloud based backup solutions based on open standards?

    - by Rick
    I am looking for a solution to back up and consolidate important media from a couple of Windows laptops and a Mac laptop. I would like a solution that is based on open standards, so my data isn't trapped in proprietary formats or behind proprietary protocols. I would like the ability to switch clients or change providers in the future. For example, something like Jungle Disk plus S3 sounds like a great option. However, I am having trouble confirming how, or if, this can be set up to meet these criteria. Are there any real or de facto standards for treating S3 as a filesystem? If so, what Windows and Mac clients support these standards?
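    As one data point, duplicity stores backups as GPG-encrypted tar volumes (both open formats) and can push them to S3, which keeps the client and the provider decoupled; a minimal sketch, with the bucket name and credentials as placeholders:

        export AWS_ACCESS_KEY_ID=AKIA...
        export AWS_SECRET_ACCESS_KEY=...
        # Incremental, encrypted backup of a folder to an S3 bucket
        duplicity ~/Documents s3+http://my-backup-bucket/laptop1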

    Read the article

  • SQL Server 2000 + ASP.NET: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

    - by Rick
    I just migrated a development workstation from Windows XP Pro SP3 with IIS 6 to Vista Enterprise 64-bit with IIS 7. Since the move, one of my pages that accesses a SQL Server 2000 database receives the following error from my ASP.NET 2.0 web page: "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'." I have:

    - enabled Windows Authentication in IIS and web.config
    - disabled Anonymous Authentication in IIS
    - set up impersonation to run as the authenticated user
    - verified that the logged-in user (in this case, me) has access to the appropriate database on the SQL Server
    - verified that my login and impersonation information is correct in the ASP.NET page by checking User.Identity.Name and System.Security.Principal.WindowsIdentity.GetCurrent().Name (both display my username)

    My connection string using SqlConnection is:

        Server={SERVER_NAME};Database={DB_NAME};Integrated Security=SSPI;Trusted_Connection=True;

    Why is it trying to log in with NT AUTHORITY\ANONYMOUS LOGON? I have to assume it's some setting or web.config entry specific to IIS7, since it worked fine before the migration. NOTE: The SQL Server is Windows authentication only -- no mixed mode or SQL-only.
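    For reference, the impersonation pieces described above map to these web.config entries (standard ASP.NET schema, shown as a sketch):

        <system.web>
          <authentication mode="Windows" />
          <identity impersonate="true" />
        </system.web>

    Note that when the page then connects to a SQL Server on a different machine, the impersonated credentials have to survive a second hop, which is where NTLM typically falls back to an anonymous logon.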

    Read the article

  • Can you change the icon of a pinned IE 9 web application? And how do you do it?

    - by Rick Roth
    In IE 9 you have the ability to click and drag an open browser tab to the Windows 7 taskbar and pin the shortcut there. This has the effect of creating a pseudo-application experience, where the shortcut can have its own custom jumplist and is not grouped with other IE 9 browser tabs on the taskbar. Windows uses the "shortcut icon" or "favicon" defined in the HTML for the icon on the taskbar. If no shortcut icon is defined, the generic IE shortcut icon is used. If you have a bunch of these shortcuts pinned to the taskbar without distinct icons, it can be confusing which is which. Can you change the icon of a pinned IE 9 web application? And how do you do it?
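    For reference, the icon comes from the page markup, so on sites you control the pinned icon can be set there; a sketch of the standard tags IE 9 reads (names and URLs are examples):

        <!-- The favicon used for the pinned shortcut -->
        <link rel="shortcut icon" href="/favicon.ico" />
        <!-- Optional IE 9 pinned-site metadata -->
        <meta name="application-name" content="My App" />
        <meta name="msapplication-starturl" content="http://example.com/" />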

    Read the article

  • Only half of RAM is recognized by BIOS

    - by Rick Crawford
    I have a Gigabyte GA-P35-DS3 mainboard. Some time ago I noticed that Windows only showed 2 GB of RAM instead of 4 GB; I don't know exactly what caused it. I tried putting in each of the 4 x 1 GB RAM modules one by one, and tried every slot one by one, until I had confirmed that every stick and every slot works. However, when I then added one module at a time, it kept showing 1 GB until I put in all 4, at which point it showed only 2 GB instead of 4 (in the BIOS and in Windows 7 64-bit). I tried replacing the BIOS battery, since I've read that a low battery can cause this, but it didn't help. I also bought 4 GB of new RAM (yes, it's supported, I checked), and it's still the same: it only shows 2 GB (or 3 GB when I put in 4 of the new sticks and 2 of the old). I also did the latest BIOS update and used default BIOS settings, but none of that helped. When my PC boots it shows "RAM modules used 2 and 3" when 4 sticks are in, or "0 and 1" when only 2 are in.

    Read the article

  • ssh_exchange_identification: Connection closed by remote host

    - by rick
    Firstly, I know that this question has been asked a million times, and I have read everything I can find and still cannot fix the problem. I am encountering this issue when SSHing in from my Mac to my Ubuntu server on a fresh install of Ubuntu (I reinstalled because of this issue). I have SSH mapped to port 7070 because my ISP is blocking 22. On the client:

        bash: ssh -p 7070 -v [email protected]
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to address.org port 7070.
        debug1: Connection established.
        debug1: identity file /home/me/.ssh/identity type -1
        debug1: identity file /home/me/.ssh/id_rsa type 1
        debug1: identity file /home/me/.ssh/id_dsa type -1
        ssh_exchange_identification: Connection closed by remote host

    Here's what I have done to try to resolve the issue:

    - Made sure my MaxStartups setting is OK:

        bash: grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10:30:60

    - Made sure hosts.deny is clear of denials
    - Made sure hosts.allow has my client IP
    - Cleared out known_hosts on the client
    - Changed ownership of /var/run to root
    - Made sure /var/run/sshd exists
    - Made sure /var/empty exists
    - Reinstalled openssh-server
    - Reinstalled Ubuntu

    When I run telnet localhost, I get this:

        telnet localhost
        Trying ::1...
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused

    When I run /usr/sbin/sshd -t:

        Could not load host key: /etc/ssh/ssh_host_rsa_key
        Could not load host key: /etc/ssh/ssh_host_dsa_key

    When I regenerate the keys with

        ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
        ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key

    I get the same error. I am pretty sure this is the issue. Can anyone help?
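    If the host keys really aren't loading, a sketch of regenerating them non-interactively and re-testing (standard OpenSSH flags; run as root):

        ssh-keygen -t rsa -N "" -f /etc/ssh/ssh_host_rsa_key
        ssh-keygen -t dsa -N "" -f /etc/ssh/ssh_host_dsa_key
        chmod 600 /etc/ssh/ssh_host_*_key
        # Re-validate the config and key paths
        /usr/sbin/sshd -t && echo "sshd config OK"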

    Read the article

  • Is there a convenient way to manually copy the Log Shipping *.trn files from one SQL 2008 server to another?

    - by Rick
    We have a remote SQL 2008 server (ServerB) that needs to keep a warm copy of production data (a 15-minute interval is OK) from ServerA, which is also SQL 2008. Log Shipping looks like it will do the job. We can only get to the destination, ServerB, with Remote Desktop. Is there a way to set this up when we can't get to both servers from one Management Studio? We want to be able, temporarily (until a VPN is set up between our network and ServerB's network), to manually export a small .trn file, copy it via Remote Desktop to ServerB, and then manually import those transactions from the .trn file. My supervisor says he saw a post saying this is possible. We were just trying to avoid doing a full database backup and copying that every time. Thanks in advance for any suggestions on this.
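    For reference, the manual version of each leg is plain T-SQL; a sketch with hypothetical names (the restores must stay in NORECOVERY so later files still apply):

        -- On ServerA: take a transaction log backup
        BACKUP LOG ProdDB TO DISK = N'C:\LogShip\ProdDB_001.trn';

        -- Copy the .trn file to ServerB via Remote Desktop, then on ServerB:
        RESTORE LOG ProdDB FROM DISK = N'C:\LogShip\ProdDB_001.trn'
        WITH NORECOVERY;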

    Read the article

  • Reg Expression htaccess RewriteRule

    - by Rick
    I am new to using regular expressions for rewriting URLs in htaccess. I need to redirect mysite.com/123 to mysite.com/, IF a cookie named 'ref' is set. My current htaccess is:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_COOKIE} ref=true [NC]
        RewriteRule ^/([0-9]+)/$ http://www.mysite.com
        </IfModule>

    The goal is that when someone enters the site with mysite.com/111 (some number), they are redirected to the home page of the site after the cookie is set. Be nice... I'm new! ;o)
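    For comparison, a sketch of a pattern that can actually match in .htaccess context, where the leading slash is stripped before the rule sees the path (the cookie test is kept as in the question):

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_COOKIE} ref=true [NC]
        # No leading slash in per-directory context; R makes the redirect external
        RewriteRule ^([0-9]+)/?$ http://www.mysite.com/ [R=302,L]
        </IfModule>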

    Read the article

  • Convert partition to Virtual Disk image

    - by Rick
    I have a 250 GB hard drive with an XP partition. I shrank the XP partition to 112 GB, since the maximum disk Virtual PC can load is 127 GB. I have a new motherboard and can't boot into the partition, so I am using Windows 7. I have tried using WinImage to create the image, but it creates an image of the whole disk (250 GB) and will not load in Virtual PC because of the size limit. What would be the best way to convert the partition to a VHD correctly?
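    One tool worth sketching here is Sysinternals Disk2vhd, which captures selected volumes rather than the whole physical disk, so the resulting VHD reflects only the 112 GB partition (drive letter and output path are placeholders):

        :: Capture only the XP volume into a VHD
        disk2vhd C: D:\images\xp.vhd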

    Read the article

  • EC2 instances keep becoming inaccessible via SSH, can I use elastic loadbalancer to check SSH connectivity?

    - by Rick
    This is mainly an issue for my development EC2 server, as my instance keeps becoming inaccessible via SSH. It happened yesterday, so I killed that one and started a new one, and it happened again later today. The server still works and my web application is accessible in a browser, but whenever I try to connect via SSH I get a "Permission denied (publickey)" error message in my terminal. I am 100% sure I am doing nothing wrong, as I can create a new instance of the exact same AMI (a personal custom AMI), change absolutely nothing, including using the same .pem key, and then am able to SSH into that new instance using the exact same command as before (just changing the IP address). I understand that EC2 can have issues, but having this happen every day seems a bit odd. I am using an m2.xlarge instance, so I don't know if these tend to be unstable; in the past I have used a small instance and had it running for months with no problems, which is why I find this so odd. I am looking into load balancing, but it seems the only "health" checks offered are for HTTP or TCP, so I'm not sure if I can make it monitor SSH connectivity. This is important for development, as I may make 1-2 new pushes of an application a day and use SSH to do this. I have a designer that needs the app always accessible, as he works with the front-end files to test output against the live application. Anyway, any advice / info is appreciated.
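    For what it's worth, a plain TCP health check pointed at port 22 does verify that sshd is accepting connections (though not that key authentication works); a sketch using the classic ELB command-line tools, with the load balancer name as a placeholder:

        elb-configure-healthcheck my-dev-lb --target "TCP:22" \
            --interval 30 --timeout 5 \
            --healthy-threshold 2 --unhealthy-threshold 2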

    Read the article

  • DNS problems on CentOS fresh install

    - by Rick Koshi
    I'm having some DNS issues on a new box I'm installing with CentOS 6.2. I am able to look up names using nslookup, dig, or host. I am able to ping machines by name or by IP address. However, when I try other tools, such as ssh, wget, or yum, they are unable to resolve names. For example:

        # wget http://www.google.com
        --2012-03-08 14:48:06-- http://www.google.com/
        Resolving www.google.com... failed: Name or service not known.
        wget: unable to resolve host address `www.google.com'

        # ssh www.google.com
        ssh: Could not resolve hostname www.google.com: Name or service not known

        # ping -c 1 www.google.com
        PING www.l.google.com (74.125.113.106) 56(84) bytes of data.
        64 bytes from vw-in-f106.1e100.net (74.125.113.106): icmp_seq=1 ttl=46 time=43.6 ms

        --- www.l.google.com ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 59ms
        rtt min/avg/max/mdev = 43.665/43.665/43.665/0.000 ms

        # host www.google.com
        www.google.com is an alias for www.l.google.com.
        www.l.google.com has address 74.125.113.99
        www.l.google.com has address 74.125.113.103
        www.l.google.com has address 74.125.113.104
        www.l.google.com has address 74.125.113.105
        www.l.google.com has address 74.125.113.106
        www.l.google.com has address 74.125.113.147

    My /etc/nsswitch.conf file is the default, including this (standard) line:

        hosts: files dns

    /etc/resolv.conf is as set up by DHCP:

        ; generated by /sbin/dhclient-script
        nameserver 192.168.1.254

    192.168.1.254 is a working DNS server (my DSL modem, working for years with other machines). Anyone know why ping would work, but ssh/wget would fail?

    Per NcA's suggestion, I tried changing /etc/resolv.conf to point to 8.8.8.8. Oddly enough, this does make it work. Obviously, my DSL modem is responding to DNS requests in some way that some parts of Linux's resolution system don't like. Looking at the tcpdump, I am unable to see what the difference is. Certainly, both servers are sending the same addresses. Here's the output from tcpdump -nn -X with the server set to the DNS server on the DSL modem. It's clearly replying with the correct addresses, but ssh/wget don't seem happy with it for some reason:

        15:53:52.133580 IP 192.168.1.254.53 > 192.168.1.2.54836: 33157 7/0/0 CNAME www.l.google.com., A 74.125.115.105, A 74.125.115.106, A 74.125.115.147, A 74.125.115.99, A 74.125.115.103, A 74.125.115.104 (148)
        0x0000: 4500 00b0 e33a 0000 ff11 53b1 c0a8 01fe E....:....S.....
        0x0010: c0a8 0102 0035 d634 009c 7528 8185 8180 .....5.4..u(....
        0x0020: 0001 0007 0000 0000 0377 7777 0667 6f6f .........www.goo
        0x0030: 676c 6503 636f 6d00 0001 0001 c00c 0005 gle.com.........
        0x0040: 0001 0007 acd0 0008 0377 7777 016c c010 .........www.l..
        0x0050: c02c 0001 0001 0000 0001 0004 4a7d 7369 .,..........J}si
        0x0060: c02c 0001 0001 0000 0001 0004 4a7d 736a .,..........J}sj
        0x0070: c02c 0001 0001 0000 0001 0004 4a7d 7393 .,..........J}s.
        0x0080: c02c 0001 0001 0000 0001 0004 4a7d 7363 .,..........J}sc
        0x0090: c02c 0001 0001 0000 0001 0004 4a7d 7367 .,..........J}sg
        0x00a0: c02c 0001 0001 0000 0001 0004 4a7d 7368 .,..........J}sh

        15:53:52.135669 IP 192.168.1.254.53 > 192.168.1.2.54836: 65062- 0/0/0 (32)
        0x0000: 4500 003c e33b 0000 ff11 5424 c0a8 01fe E..<.;....T$....
        0x0010: c0a8 0102 0035 d634 0028 98f9 fe26 8000 .....5.4.(...&..
        0x0020: 0001 0000 0000 0000 0377 7777 0667 6f6f .........www.goo
        0x0030: 676c 6503 636f 6d00 001c 0001 gle.com.....

    I'm not enough of an expert to know if this is malformed in some way, but ping seems to do the right thing with it. For comparison, here's the same thing when querying 8.8.8.8:

        15:57:27.990270 IP 8.8.8.8.53 > 192.168.1.2.49028: 59114 7/0/0 CNAME www.l.google.com., A 74.125.113.105, A 74.125.113.103, A 74.125.113.106, A 74.125.113.147, A 74.125.113.104, A 74.125.113.99 (148)
        0x0000: 4500 00b0 5530 0000 2f11 6453 0808 0808 E...U0../.dS....
        0x0010: c0a8 0102 0035 bf84 009c 39f8 e6ea 8180 .....5....9.....
        0x0020: 0001 0007 0000 0000 0377 7777 0667 6f6f .........www.goo
        0x0030: 676c 6503 636f 6d00 0001 0001 c00c 0005 gle.com.........
        0x0040: 0001 0001 516a 0008 0377 7777 016c c010 ....Qj...www.l..
        0x0050: c02c 0001 0001 0000 0116 0004 4a7d 7169 .,..........J}qi
        0x0060: c02c 0001 0001 0000 0116 0004 4a7d 7167 .,..........J}qg
        0x0070: c02c 0001 0001 0000 0116 0004 4a7d 716a .,..........J}qj
        0x0080: c02c 0001 0001 0000 0116 0004 4a7d 7193 .,..........J}q.
        0x0090: c02c 0001 0001 0000 0116 0004 4a7d 7168 .,..........J}qh
        0x00a0: c02c 0001 0001 0000 0116 0004 4a7d 7163 .,..........J}qc

        15:57:28.018909 IP 8.8.8.8.53 > 192.168.1.2.49028: 31984 1/1/0 CNAME www.l.google.com. (102)
        0x0000: 4500 0082 7b1b 0000 2f11 3e96 0808 0808 E...{.../.>.....
        0x0010: c0a8 0102 0035 bf84 006e c67e 7cf0 8180 .....5...n.~|...
        0x0020: 0001 0001 0001 0000 0377 7777 0667 6f6f .........www.goo
        0x0030: 676c 6503 636f 6d00 001c 0001 c00c 0005 gle.com.........
        0x0040: 0001 0001 517f 0008 0377 7777 016c c010 ....Q....www.l..
        0x0050: c030 0006 0001 0000 0258 0026 036e 7334 .0.......X.&.ns4
        0x0060: c010 0964 6e73 2d61 646d 696e c010 0016 ...dns-admin....
        0x0070: 91f3 0000 0384 0000 0384 0000 0708 0000 ................
        0x0080: 003c .<

    I still don't know why the server's reply is adequate for ping but not for ssh/wget. If anyone has ideas, I'd be happy to hear them. For now, though, I can either refer to an outside DNS server or set up my own server on the new box. It's a workaround that seems like it should be unnecessary, but will allow me to proceed.
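    One theory worth testing: ping resolves via an A lookup only, while getaddrinfo-based tools like ssh and wget also issue an AAAA (IPv6) query, so a modem that mangles the AAAA reply can produce exactly this split. A sketch of comparing the two servers' IPv6 answers directly (standard dig usage):

        # Compare how each server answers the AAAA query that ssh/wget also make
        dig @192.168.1.254 www.google.com AAAA
        dig @8.8.8.8 www.google.com AAAA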

    Read the article

  • Mass editing videos on Ubuntu?

    - by rick
    Hi, I'm trying to add a watermark and a credits image to all of my old videos. I downloaded them off YouTube, so they are all FLV (H.264?). Is there some software that will allow me to do simple edits in batches? I know a little bit of Python and tried looking at some of the libraries, but they all seem like overkill (and way above my head). So is there a solution besides getting some software and going through all my videos manually? They are all mostly the same length, but it would be nice to specify a relative position for my credits, e.g. show a static image for 10 seconds when the video is at 95%.
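    A minimal sketch of the batch part with a recent ffmpeg build (a shell loop; filenames and the watermark position are placeholders, and appending the credits would build on this with ffmpeg's concat support):

        # Stamp watermark.png into the bottom-right corner of every FLV
        for f in *.flv; do
          ffmpeg -i "$f" -i watermark.png \
            -filter_complex "overlay=main_w-overlay_w-10:main_h-overlay_h-10" \
            "out_${f%.flv}.mp4"
        done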

    Read the article
