Search Results

Search found 37260 results on 1491 pages for 'command query responsibil'.

Page 528/1491 | < Previous Page | 524 525 526 527 528 529 530 531 532 533 534 535  | Next Page >

  • Imagemagick convert with resample option

    - by coneybeare
    I am creating thumbnails from much larger images and have been using this command successfully for some time: convert FILE -resize "64x" -crop "64x64+0+16" +repage -strip OUTFILE I also do some other processing that is not relevant to the question. I realized that this does not adjust the resolution at all, so if I use a 300dpi image, it ends up displaying really small on some devices. I want to resample it to 72x72 so I have been trying with this command: convert FILE -resize "64x" -crop "64x64+0+16" +repage -strip -resample 72x72 OUTFILE And expected the 64x64 image at 300dpi to be resampled to a 64x64 image at 72dpi, but instead, I am getting a very funny size and density. Here is "identify" output for the original and post-processed file WITHOUT the resample: coneybeare $ convert "aa.jpg" -crop "64x64+0+16" +repage -strip "aa.png" coneybeare $ for image in `find . -type f`; do identify $image; identify -verbose $image | egrep "^ Resolution"; done ./aa.jpg JPEG 1130x1695 1130x1695+0+0 8-bit DirectClass 1.492MiB 0.000u 0:00.000 Resolution: 300x300 ./aa.png PNG 64x64 64x64+0+0 8-bit DirectClass 7.46KiB 0.000u 0:00.000 Resolution: 118.11x118.11 And here is the "identify output for the command WITH the resample: coneybeare $ convert "aa.jpg" -crop "64x64+0+16" +repage -strip -resample 72x72 "aa.png" coneybeare $ for image in `find . -type f`; do identify $image; identify -verbose $image | egrep "^ Resolution"; done ./aa.jpg JPEG 1130x1695 1130x1695+0+0 8-bit DirectClass 1.492MiB 0.000u 0:00.000 Resolution: 300x300 ./aa.png PNG 15x15 15x15+0+0 8-bit DirectClass 901b 0.000u 0:00.000 Resolution: 28.34x28.34 So, the question is: What am I doing wrong and how can I fix it so the end result is a 64x64 cropped thumbnail image at 72dpi?
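
    For reference, -resample rescales the pixel dimensions according to the image's current density (which is why the 118 dpi intermediate collapses to 15x15), whereas -density only rewrites the resolution metadata. A minimal sketch that keeps the 64x64 pixels and just stamps them as 72 dpi, assuming an ImageMagick 6-style convert:

      # Resize/crop as before, then set the stored resolution without resampling
      convert FILE -resize "64x" -crop "64x64+0+16" +repage -strip \
              -units PixelsPerInch -density 72 OUTFILE

      # Verify the result
      identify -verbose OUTFILE | grep -E "Geometry|Resolution"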

    Read the article

  • ipv6 reverse DNS delegation

    - by user1709492
    I currently have 2001:1973:2303::/48 assigned to me and I'll be assigning /64s to customers. I'd like to have 1 zonefile for the /48 where I can essentially point / redirect queries to different nameservers. Example ( Desired effect ) 2001:1973:2303:1234::/64 -> ns1.example.com, ns2.example.com 2001:1973:2303:2345::/64 -> ns99.example2.com, ns100.example2.com 2001:1973:2303:4321::/64 -> ns1.cust1.com, ns2.cust1.com Current /48 zonefile $TTL 3h $ORIGIN 3.0.3.2.3.7.9.1.1.0.0.2.ip6.arpa. @ IN SOA ns3.example.ca. ns4.example.ca. ( 2011071030 ; serial 3h ; refresh after 3 hours 1h ; retry after 1 hour 1w ; expire after 1 week 1h ) ; negative caching TTL of 1 hour IN NS ns3.example.ca. IN NS ns4.example.ca. 1234 IN NS ns1.example.com. NS ns2.example.com. 2345 IN NS ns99.example2.com. NS ns100.example2.com. 4321 IN NS ns1.cust1.com. NS ns2.cust1.com. Where am I going wrong? My request seems simple, to me at least. To put it in terms of firewalling, I want to redirect traffic: the client queries 2001:1973:2303:4321::1 - ns3.example.ca sees the request and redirects the query to ns1.cust1.com - ns1.cust1.com answers the query with omg.itworks.ca (provided ns1.cust1.com is properly configured).
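
    For reference, ip6.arpa delegations are written nibble-reversed, so under this $ORIGIN the /64 2001:1973:2303:1234::/64 is the label 4.3.2.1 rather than 1234. A quick way to see where resolution actually stops is to walk the delegation with dig; a sketch, assuming the /48 reverse zone itself is already delegated to ns3/ns4.example.ca by the upstream/RIR:

      # Follow the whole delegation chain for one customer address
      dig +trace -x 2001:1973:2303:1234::1

      # Ask your own server directly for the /64 delegation record
      dig @ns3.example.ca NS 4.3.2.1.3.0.3.2.3.7.9.1.1.0.0.2.ip6.arpa.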

    Read the article

  • Nagios send mail when server is down

    - by tzulberti
    I am using Nagios 3.06 to monitor the servers. When a service is critical, it sends a mail, but when a server is down no mail is sent. Even if all the services go to critical state, no mail is sent. I have the following configuration: define command {     command_name notify-host-by-email     command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$" "******** Nagios ****\n\n Host: $HOSTNAME$\n Description: the server is down" } define command{     command_name notify-service-by-email     command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$: $SERVICEDESC$ ($NOTIFICATIONTYPE$)" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\nDate/Time: $LONGDATETIME$\nAdditional Info:$SERVICEOUTPUT$" } The Python script just sends a mail. It works if I execute it from the command line, but no mail is sent from Nagios. What am I doing wrong? UPDATE: The contact data is: define contact{     contact_name root     alias Root     service_notification_period 24x7     host_notification_period 24x7     service_notification_options w,u,c,r     host_notification_options d,r     service_notification_commands notify-service-by-email     host_notification_commands notify-host-by-email     email [email protected] } define contactgroup{     contactgroup_name admins     alias Nagios Administrators     members root }
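
    Since the service notifications work, it may help to exercise the host-notification path by hand and re-validate the configuration. A sketch, assuming the standard Ubuntu/Debian nagios3 layout and that the daemon runs as the nagios user (both assumptions):

      # Check the object configuration, including contact/host notification wiring
      nagios3 -v /etc/nagios3/nagios.cfg

      # Run the host notification command as the nagios user to rule out permission/environment problems
      sudo -u nagios python /etc/nagios3/send_mail.py "[Nagios] test-host" "test body"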

    Read the article

  • certutil -ping fails with 30 seconds timeout - what to do?

    - by mark
    Dear ladies and sirs. The certificate store on my Win7 box is constantly hanging. Observe: C:\>1.cmd C:\>certutil -? | findstr /i ping -ping -- Ping Active Directory Certificate Services Request interface -pingadmin -- Ping Active Directory Certificate Services Admin interface C:\>set PROMPT=$P($t)$G C:\(13:04:28.57)>certutil -ping CertUtil: -ping command FAILED: 0x80070002 (WIN32: 2) CertUtil: The system cannot find the file specified. C:\(13:04:58.68)>certutil -pingadmin CertUtil: -pingadmin command FAILED: 0x80070002 (WIN32: 2) CertUtil: The system cannot find the file specified. C:\(13:05:28.79)>set PROMPT=$P$G C:\> Explanations: The first command shows you that there are -ping and -pingadmin parameters to certutil. Trying either ping parameter fails with a 30-second timeout (the current time is seen in the prompt). This is a serious problem. It breaks all the secure communication in my app. If anyone knows how this can be fixed, please share. Thanks. P.S. 1.cmd is simply a batch of these commands: certutil -? | findstr /i ping set PROMPT=$P($t)$G certutil -ping certutil -pingadmin set PROMPT=$P$G

    Read the article

  • Ubuntu 9.10 RSA authentication: ssh fails, filezilla runs fine

    - by MariusPontmercy
    This is quite a mistery for me. I usually use passwordless RSA authentication to login into my remote *nix servers with ssh and sftp. Never had any problem until now. I cannot connect to an Ubuntu 9.10 machine: user@myclient$ ssh -i .ssh/Ganymede_key [email protected] [...] debug1: Host 'ganymede.server.com' is known and matches the RSA host key. debug1: Found key in /home/user/.ssh/known_hosts:14 debug2: bits set: 494/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: .ssh/Ganymede_key (0xb96a0ef8) debug2: key: .ssh/Ganymede_key ((nil)) debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Next authentication method: publickey debug1: Offering public key: .ssh/Ganymede_key debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Trying private key: .ssh/Ganymede_key debug1: read PEM private key done: type RSA debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password,keyboard-interactive debug2: we did not send a packet, disable method debug1: Next authentication method: keyboard-interactive debug2: userauth_kbdint debug2: we sent a keyboard-interactive packet, wait for reply debug2: input_userauth_info_req debug2: input_userauth_info_req: num_prompts 1 Then it falls back to password authentication. If I disable password authentication on the remote machine my connection attempt just fails with a "Permission denied (publickey)." state. Same thing for sftp from command line. The "funny" thing is that the exact same RSA key works like a charm with a Filezilla sftp session instead: 12:08:00 Trace: Offered public key from "/home/user/.filezilla/keys/Ganymede_key" 12:08:00 Trace: Offer of public key accepted, trying to authenticate using it. 12:08:01 Trace: Access granted 12:08:01 Trace: Opened channel for session 12:08:01 Trace: Started a shell/command 12:08:01 Status: Connected to ganymede.server.com 12:08:02 Trace: CSftpControlSocket::ConnectParseResponse() 12:08:02 Trace: CSftpControlSocket::ResetOperation(0) 12:08:02 Trace: CControlSocket::ResetOperation(0) 12:08:02 Status: Retrieving directory listing... 12:08:02 Trace: CSftpControlSocket::SendNextCommand() 12:08:02 Trace: CSftpControlSocket::ChangeDirSend() 12:08:02 Command: pwd 12:08:02 Response: Current directory is: "/root" 12:08:02 Trace: CSftpControlSocket::ResetOperation(0) 12:08:02 Trace: CControlSocket::ResetOperation(0) 12:08:02 Trace: CSftpControlSocket::ParseSubcommandResult(0) 12:08:02 Trace: CSftpControlSocket::ListSubcommandResult() 12:08:02 Trace: CSftpControlSocket::ResetOperation(0) 12:08:02 Trace: CControlSocket::ResetOperation(0) 12:08:02 Status: Directory listing successful Any thoughts? M
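
    Since the same key pair works when FileZilla offers it but not when OpenSSH does, two things are usually worth checking: the client-side verbose log, and the server-side permissions that sshd enforces strictly. A sketch (the root login and the Ubuntu log path are assumptions based on the FileZilla session landing in /root):

      # Client side: -vvv shows exactly why the offered key is rejected
      ssh -vvv -i ~/.ssh/Ganymede_key root@ganymede.server.com

      # Server side: sshd ignores authorized_keys with loose permissions
      chmod 700 /root/.ssh
      chmod 600 /root/.ssh/authorized_keys
      tail -f /var/log/auth.log   # look for "Authentication refused: bad ownership or modes"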

    Read the article

  • How do I get ftype & assoc to match Windows Explorer?

    - by Gauthier
    I changed the association to use upon launching a .py file, via Windows Explorer: Tools - Folders - File types. Then browse to .py. Change the association to Wordpad. Now when I type the name of a py file in the command line, Wordpad opens it. But assoc and ftype in the command line still return the following: C:\> assoc .py .py = Python.File C:\> ftype Python.File Python.File = "C:\Program\Python27\python.exe" "%1" %* How come the association is working, but assoc and ftype are not aware of it? I did restart the prompt. More info from my registry: HKEY_CLASSES_ROOT\.py = Python.File HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.py\Application = wordpad.exe HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.py\OpenWithProgids\Python.File = HKEY_LOCAL_MACHINE\SOFTWARE\Classes\.py\(Standard) = Python.File More registry: HKEY_CLASSES_ROOT\Applications\python.exe\shell\open\command\(Standard) = "C:\Program\Python27\python.exe" "%1" %*` I suppose this is what is showing up in ftype Python.File. But it does not seem to get used. (I am doing this for testing, so I can eventually choose my default version of Python easily).

    Read the article

  • What's throttling the database?

    - by Troels Arvin
    Hardware: Intel x86_64 with 192GB of RAM. OS: CentOS 5.4 x86_64. DBMS: DB2 v. 9.7.1 64 bit. During certain special workloads (e.g. parallel REORGs/RUNSTATs), I've seen the server transporting 450MB/s with 25000 IO/s (yes, there is probably some storage system caching happening here) while all CPU cores were happily working in an even mix of usermode/wait. And disk benchmark tools can also bring some very satisfying bandwidth and IO/s numbers to the table. On the other hand, we also have another scenario: a single rather complex query with at least one large table scan. db2's "list applications" reports that the query is Executing (not locked). IO: at most 10MB/s, 500 IO/s; CPU: two cores in 99.9% wait state, all other cores 100% idle. The tables which the query reads from have been altered to have LOCKSIZE=TABLE, so I would think that lock list work is zero. What's going on in such a situation? What tools/snapshots/... can I use to gain better insight into such a case?
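
    Two cores stuck in iowait while very little I/O is issued usually calls for a closer look at what the agents serving that application are waiting on. A rough sketch with standard DB2 9.7 tooling (the database name MYDB is a placeholder; double-check the option names against your build):

      # Live view of sessions, bufferpool and sort activity
      db2top -d MYDB

      # What each engine dispatchable unit is doing, and any lock waits
      db2pd -d MYDB -edus
      db2pd -d MYDB -wlocks

      # Point-in-time snapshot of the running applications
      db2 "GET SNAPSHOT FOR ALL APPLICATIONS" > app_snapshot.txt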

    Read the article

  • Transferring Files to server IP and port

    - by Mason
    I need to transfer files from my local computer on Windows 7 to a server running Linux. I access the server with PuTTY through SSH at a specific IPv4 address and port number. I've attempted using the pscp command from my local computer but was denied access by the server. "Fatal: Network error: Connection refused" c:>pscp test.csv userid@**IPv4_Addres***:Port# /path/destination_file_name. Either the server blocks all pscp attempts from unauthorized users (most likely my laptop included) or I used the command incorrectly. If you have experience using this command, where exactly will the file get transferred to? I'm assuming that the destination path starts at my home directory on the server. Also, if you have any other alternative methods of transferring the files, let me know. Update 1: I have also tried using WinSCP; however, I got permission denied for that as well. It looks like the server will not let me upload or save files. Solved: I had a complete lapse of memory and forgot about sudo (spent too much time with scripts the last 2 months), so I was able to change the permissions to allow external editing. Thanks for all the help guys!
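
    One note on the command itself: as far as I know, pscp takes a non-standard SSH port via -P, and everything after the colon in user@host: is treated as the remote path, not a port. A sketch with placeholder host, port and paths:

      # -P selects the SSH port; the part after ':' is the remote path
      pscp -P 2222 test.csv userid@203.0.113.10:/home/userid/test.csv

      # Equivalent with OpenSSH's scp (e.g. from Cygwin or Git Bash), which also uses capital -P for the port
      scp -P 2222 test.csv userid@203.0.113.10:/home/userid/test.csv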

    Read the article

  • no A record show in the answer section in dig results

    - by eric low
    To check the record for the domain, I run dig with the domain name as the parameter: dig example.com any I get the result below. Why is there no A record shown in the result? What did I do wrong during the setup? Please advise what I should look into; I hope someone can help me resolve this as soon as possible. ; <<>> DiG 9.9.3-P2 <<>> example.com any ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44674 ;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 4, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;example.com. IN ANY ;; ANSWER SECTION: example.com. 3489 IN MX 100 biz.mail.com. example.com. 3482 IN NS ns1.domain.com. example.com. 3482 IN NS ns2.domain.com. ;; AUTHORITY SECTION: example.com. 3482 IN NS ns2.domain.com. example.com. 3482 IN NS ns1.domain.com. ;; Query time: 0 msec ;; SERVER: xxx.252.xxx.xxx#53(xxx.252.xxx.xxx) ;; WHEN: Wed Oct 30 04:48:34 CDT 2013 ;; MSG SIZE rcvd: 349
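
    Worth noting: this ANY query went to a recursive resolver (flags qr rd ra), and such a response only reflects whatever that resolver happens to have cached, so a missing A record here does not by itself prove the record is absent from the zone. Asking the zone's own name servers directly is more conclusive; a sketch (server/domain names taken from the output above):

      # Ask a listed authoritative server for the bare domain's A record explicitly
      dig @ns1.domain.com example.com A +norecurse +noall +answer +authority

      # Compare with a host name that you expect to work
      dig @ns1.domain.com www.example.com A +norecurse +noall +answer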

    Read the article

  • Computing Number of Bits in Public Key

    - by eb80
    I am working with DKIM and trying to compute the public key size of some DKIM signatures. I know from tools that Gmail's is now 2048, but how could I have figured this out myself (i.e., what exact Linux commands and why)? user@host$ dig txt 20120113._domainkey.gmail.com ; <<>> DiG 9.8.3-P1 <<>> txt 20120113._domainkey.gmail.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 52228 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;20120113._domainkey.gmail.com. IN TXT ;; ANSWER SECTION: 20120113._domainkey.gmail.com. 300 IN TXT "k=rsa\; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1Kd87/UeJjenpabgbFwh+eBCsSTrqmwIYYvywlbhbqoo2DymndFkbjOVIPIldNs/m40KF+yzMn1skyoxcTUGCQs8g3FgD2Ap3ZB5DekAo5wMmk4wimDO+U8QzI3SD0" "7y2+07wlNWwIt8svnxgdxGkVbbhzY8i+RQ9DpSVpPbF7ykQxtKXkv/ahW3KjViiAH+ghvvIhkx4xYSIc9oSwVmAl5OctMEeWUwg8Istjqz8BZeTWbf41fbNhte7Y+YqZOwq1Sd0DbvYAD9NOZK9vlfuac0598HY+vtSBczUiKERHv1yRbcaQtZFh5wtiRrN04BLUTD21MycBX5jYchHjPY/wIDAQAB" ;; Query time: 262 msec ;; SERVER: 8.8.8.8#53(8.8.8.8) ;; WHEN: Mon Nov 19 10:52:06 2012 ;; MSG SIZE rcvd: 462
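
    One self-contained way to get the key size is to pull the p= value out of the TXT record, base64-decode it, and let OpenSSL report the modulus length; the decoded blob is a DER-encoded SubjectPublicKeyInfo, which is what openssl rsa -pubin expects. A sketch assembled from memory, so treat it as a starting point (the exact wording of the last line differs between OpenSSL versions):

      # Fetch the record, strip quotes/whitespace, keep everything after "p=",
      # decode it and print the key details
      dig +short txt 20120113._domainkey.gmail.com \
        | tr -d '" ' \
        | sed 's/.*p=//' \
        | base64 -d \
        | openssl rsa -pubin -inform DER -noout -text \
        | head -n 1          # e.g. "Public-Key: (2048 bit)"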

    Read the article

  • .htaccess modify rules and redirect if there's .php in the url

    - by Ron
    Hello everyone. I got the following code in my .htaccess: Options +FollowSymlinks RewriteBase /temp/test/ RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME}\.php -f RewriteRule ^about/(.*)/$ $1.php [L] RewriteRule ^(.*)/download/(.*)/(.*)/(.*)/downloadfile/$ file-download.php?product=$1&version=$2&os=$3&method=$4 [L] RewriteRule ^(.*)/download/(.*)/(.*)/(.*)/$ download-donate.php?product=$1&version=$2&os=$3&method=$4 [L] RewriteRule ^(.*)/download/(.*)/$ download.php?product=$1&version=$2 [L] RewriteRule ^newsletter-confirm/(.*)/$ newsletter-confirm.php?email=$1 [L] RewriteRule ^newsletter-remove/(.*)/$ newsletter-remove.php?email=$1 [L] RewriteRule ^(.*)/screenshots/$ screenshots.php?product=$1 [L] RewriteRule ^(.*)/(.*)/$ products.php?product=$1&page=$2 [L] RewriteRule ^schedule-manager/$ products.php?product=schedule-manager&page=view [L] RewriteRule ^visual-command-line/$ products.php?product=visual-command-line&page=view [L] RewriteRule ^windows-hider/$ products.php?product=windows-hider&page=view [L] RewriteRule ^(.*)/$ $1.php [L] RewriteRule ^products/$ products.php [L] everything work perfect. I would like to know how can I modify it so it will be less lines. I am pretty sure I can atleast remove 4-5 lines, but I dont know how. (merge the schedule-manager, visual-command-line and windows-hider, and some more). I know that the order of the rules is important, this order works - although I have no idea why, I just played with the rules until it worked. If you think that there'll be a bug with the following order please tell me where. Another thing - I would like to redirect for example www.myweb.com/products.php to www.myweb.com/products/ (I mean that the URL in the address bar will change). I dont know if the redirect can go along with my rewrite rules. Thank you.

    Read the article

  • How to connect to DB2 when the password ends with '!' in Windows

    - by AngocA
    I am facing a problem using the DB2 tools when connecting to a DB2 database with a generic account whose generated password ends with the bang sign '!'. I am not allowed to change the password because it is already used by other processes. I know the user is valid and I can connect to the database with its credentials, but not from all DB2 tools. When using the Control Center it is okay. When using the Command Editor (GUI) or the Command Window, I get this error message: connect to WAREHOUS user administrator using ! SQL0104N An unexpected token "!" was found following "<identifier>". Expected tokens may include: "NEW". SQLSTATE=42601 Let's say that my password is: pass@! I am trying to use c:\>db2 connect to sample user administrator using "pass@!" or c:\>db2 connect to sample user administrator using pass@! and in both cases I get the same error message. I could change the way I connect, but it is not useful for me, for example: c:\>db2 connect to sample user administrator Enter current password for administrator: But I cannot use it from a batch file easily. I would like to know how I can connect from the Command Editor, in order to use this user from the Graphical Tools. BTW, I know that the Control Center is deprecated.

    Read the article

  • Running batch file through a service.

    - by wallz
    I'm trying to schedule a batch file to run through a third party application, however the output file doesn't get created in the directory. If I run the .BAT file from the command line, it works and the file gets created. Also using the Windows Schedule will also succeed. Basically, the 3rd party software will schedule the .BAT file and it shows success within the 3rd party user interface. The difference between running from the command prompt and the software, is that the software will use its Windows service to launch the batch. The 3rd party software will show success since it was able to successfully call the .BAT file to run, however it has no control of the other EXE's that's being called within the script. I'm able to run a simple .BAT file in the 3rd party software, for example a copy command. The .BAT I'm having problems with calls a compiled EXE which launches Excel to create a file to a location. The .bat file calls something.exe, which then calls Excel.exe: C:\something.exe -o D:\filename.xlsm C:\filename.xlsm refresh_pivot Do you think it's a permissions issue? I used Process Monitor to verify any Access Denied errors but everything seems to be working according to the trace. It worked on a non-64-bit OS, I'm currently using Win2008 64-bit.

    Read the article

  • Server slowdown

    - by Clinton Bosch
    I have a GWT application running on Tomcat on a cloud linux(Ubuntu) server, recently I released a new version of the application and suddenly my server response times have gone from 500ms average to 15s average. I have run every monitoring tool I know. iostat says my disks are 0.03% utilised mysqltuner.pl says I am OK other see below top says my processor is 99% idle and load average: 0.20, 0.31, 0.33 memory usage is 50% (-/+ buffers/cache: 3997 3974) mysqltuner output [OK] Logged in using credentials from debian maintenance account. -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.63-0ubuntu0.10.04.1-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 370M (Tables: 52) [--] Data in InnoDB tables: 697M (Tables: 1749) [!!] Total fragmented tables: 1754 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 19h 25m 41s (1M q [28.122 qps], 1K conn, TX: 2B, RX: 1B) [--] Reads / Writes: 98% / 2% [--] Total buffers: 1.0G global + 2.7M per thread (500 max threads) [OK] Maximum possible memory usage: 2.4G (30% of installed RAM) [OK] Slow queries: 0% (1/1M) [OK] Highest usage of available connections: 34% (173/500) [OK] Key buffer size / total MyISAM indexes: 16.0M/279.0K [OK] Key buffer hit rate: 99.9% (50K cached / 40 reads) [OK] Query cache efficiency: 61.4% (844K cached / 1M selects) [!!] Query cache prunes per day: 553779 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 34K sorts) [OK] Temporary tables created on disk: 4% (4K on disk / 102K total) [OK] Thread cache hit rate: 84% (185 created / 1K connections) [!!] Table cache hit rate: 0% (256 open / 27K opened) [OK] Open file limit used: 0% (20/2K) [OK] Table locks acquired immediately: 100% (692K immediate / 692K locks) [OK] InnoDB data size / buffer pool: 697.2M/1.0G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: query_cache_size (> 16M) table_cache (> 256)
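
    Since the box itself looks idle, it may help to watch what the new release actually does inside MySQL while a slow request runs; the mysqltuner output above also points at query-cache pruning and a cold table cache. A sketch of runtime checks that need no restart (credentials omitted, and the sizes are illustrative guesses rather than recommendations):

      # What the application's queries are doing during a 15 s request
      mysql -e "SHOW FULL PROCESSLIST\G"

      # Log anything slower than 1 second to identify the offending statements
      mysql -e "SET GLOBAL slow_query_log = ON; SET GLOBAL long_query_time = 1;"

      # Follow mysqltuner's hints (table_cache is table_open_cache on 5.1)
      mysql -e "SET GLOBAL query_cache_size = 64*1024*1024; SET GLOBAL table_open_cache = 1024;"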

    Read the article

  • ping incorrectly pinging 127.0.0.1

    - by AlexW
    I've got an odd DNS issue. I'm running a dual ipv4/ipv6 environment on Linux. Pinging some sites results in ping pinging 127.0.0.1. e.g. #> ping authserver.mojang.com PING authserver.mojang.com (127.0.0.1) 56(84) bytes of data. 64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.045 ms 64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=2 ttl=64 time=0.043 ms 64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=3 ttl=64 time=0.058 ms --- authserver.mojang.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2000ms rtt min/avg/max/mdev = 0.043/0.048/0.058/0.010 ms Dig, however correctly returns the following: # dig authserver.mojang.com ; <<>> DiG 9.9.3-P2 <<>> authserver.mojang.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15800 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 512 ;; QUESTION SECTION: ;authserver.mojang.com. IN A ;; ANSWER SECTION: authserver.mojang.com. 5 IN A 54.235.119.47 ;; Query time: 14 msec ;; SERVER: 2001:4860:4860::8888#53(2001:4860:4860::8888) ;; WHEN: Sat Nov 09 15:34:40 GMT 2013 ;; MSG SIZE rcvd: 66 I'm confused! My web browser returns the correct website, and the same computer booted into Windows also works correctly.
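
    ping resolves names through the normal libc path (nsswitch: /etc/hosts first, then DNS, possibly via a local caching daemon), while dig talks straight to a name server, so comparing the two usually exposes a local override. A few quick checks (a sketch; adjust paths to the distro):

      # What the libc resolver returns - this is what ping and most applications use
      getent ahosts authserver.mojang.com

      # Any local override?
      grep -i mojang /etc/hosts

      # Lookup order, and which server is actually being asked
      cat /etc/nsswitch.conf
      cat /etc/resolv.conf   # a nameserver 127.0.0.1 line would mean a local dnsmasq/unbound answers first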

    Read the article

  • Having two IP Routes/Gateways of last Resort on an HP Switch

    - by SteadH
    We have an HP Layer 3 Switch that is doing IP routing between vlans. The general set up is that the switch has an IP address on each VLAN and IP routing is enabled. On our servers VLAN, we have a firewall that has a connection to the outside world. To set a IP route on the HP router, we use IOS command ip route 0.0.0.0 0.0.0.0 192.168.2.1 where 192.168.2.1 is the address of our firewall, and the zeros essentially mean to route all traffic that the switch doesn't know what to do with out the firewall as a gateway. We're in the middle of an ISP and firewall change. I set up the new firewall and ran the IOS command ip route 0.0.0.0 0.0.0.0 192.168.2.254 (the address of the new firewall). Things started working nicely. When I reviewed the configuration of the switch though, I noticed that it did not replace the previous ip route command, but just added another route. Now, I know how to remove the old firewall route (no ip route 0.0.0.0 0.0.0.0 192.168.2.1), but what is the effect of having these two 0.0.0.0 routes? Is it switch implosion? Will a server just respond back over the route it receives the request from? I've read elsewhere that having two default gateways is an impossibility by definition, but I'm curious about this situation that our switch allowed. Thanks!

    Read the article

  • xrandr fails when 3rd monitor has higher resolution

    - by Pi3cH
    I tried many combinations with xrandr command under lubuntu 12.04 to setup my three monitors DVI (DELL 1) left detected as HDMI1 HDMI (LG E2290) middle detected as HDMI2 VGA (DELL 2) right detected as VGA1 I can get the display with fix 1280x1024 on all the monitors. But once I setup 1280x1024 + 1920x1080 + 1280x1024, I get blank screen on all the monitors. Sometimes it throws crts fail error instead of blanking out. Anyone have similar issues? any solutions/workarounds? P.S. I can setup two monitors using 1280x1280 and 1920x1080 P.S.S. HDMI2 required at least 1920x1080 to display sharp picture. Outputs (it seems graphic card supports up to 8192x8192): xrandr Screen 0: minimum 320 x 200, current 3840 x 1024, maximum 8192 x 8192 VGA1 connected 1280x1024+2560+0 (normal left inverted right x axis y axis) 338mm x 270mm 1280x1024 60.0*+ 75.0 1152x864 75.0 1024x768 75.1 60.0 800x600 75.0 60.3 640x480 75.0 60.0 720x400 70.1 HDMI1 connected 1280x1024+0+0 (normal left inverted right x axis y axis) 376mm x 301mm 1280x1024 60.0*+ 75.0 1152x864 75.0 1024x768 75.1 60.0 800x600 75.0 60.3 640x480 75.0 60.0 720x400 70.1 HDMI2 connected 1280x1024+1280+0 (normal left inverted right x axis y axis) 477mm x 268mm 1920x1080 60.0 + 1680x1050 60.0 1280x1024 75.0 60.0* 1152x864 75.0 1024x768 75.1 60.0 800x600 75.0 60.3 640x480 75.0 60.0 720x400 70.1 DP1 disconnected (normal left inverted right x axis y axis) DP2 disconnected (normal left inverted right x axis y axis) 3 CRTs (0,1 VGA, 0,1,2 for other HDMI) xrandr --verbose VGA1 connected 1280x1024+2560+0 (0x47) normal (normal left inverted right x axis y axis) 338mm x 270mm Identifier: 0x42 Timestamp: 51324 Subpixel: unknown Gamma: 1.0:1.0:1.0 Brightness: 1.0 Clones: CRTC: 0 CRTCs: 0 1 ... HDMI1 connected 1280x1024+0+0 (0x47) normal (normal left inverted right x axis y axis) 376mm x 301mm Identifier: 0x43 Timestamp: 51324 Subpixel: unknown Gamma: 1.0:1.0:1.0 Brightness: 1.0 Clones: CRTC: 1 CRTCs: 0 1 2 ... HDMI2 connected 1280x1024+1280+0 (0x47) normal (normal left inverted right x axis y axis) 477mm x 268mm Identifier: 0x44 Timestamp: 51324 Subpixel: unknown Gamma: 1.0:1.0:1.0 Brightness: 1.0 Clones: CRTC: 2 CRTCs: 0 1 2 Below command fails: xrandr --output VGA1 --auto --output HDMI2 --auto --left-of VGA1 --output HDMI1 --auto --left-of HDMI2 Below command passes: xrandr --output VGA1 --auto --output HDMI2 --mode 1280x1024 --rate 60.0 --left-of VGA1 --output HDMI1 --auto --left-of HDMI2 Graphic card VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
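
    When --auto picks 1920x1080 for HDMI2 the combined layout becomes 4480x1080, so one thing worth trying is to spell out the framebuffer size and the position of each output instead of relying on --left-of. This is only a sketch (mode names and offsets are taken from the listing above); if the CRTC allocation still fails it may simply be a limit of the integrated GPU:

      # 1280 + 1920 + 1280 = 4480 wide, 1080 tall
      xrandr --fb 4480x1080 \
             --output HDMI1 --mode 1280x1024 --pos 0x0 \
             --output HDMI2 --mode 1920x1080 --pos 1280x0 \
             --output VGA1  --mode 1280x1024 --pos 3200x0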

    Read the article

  • SQL Server Unattended Install through SSH

    - by Samuel
    I'm trying to install SQL Server from the command line through Cygwin open-ssh. The install works when I log onto the server as Administrator and execute the script through a Cygwin shell, but the install doesn't work when I SSH into the machine using Administrator's credentials and run the exact same command. I've already verified that the SSHD process is running as the Admistrator, and I've verified that the install script is indeed starting under Administrator. Is there something different with the terminal in SSH vs. the Cygwin terminal on the machine that would cause this problem? Specifically what's failing is Sql Server install runs for a while then hangs with a MSI error 1622. "Error opening installation log file. Verify that the specified log file location exists and is writable." If I run both installs, I've noticed that they have different authentication id's in ProcMon, but they have the exact same command line parameters. There has to be something in SSH that is causing permissions issues... Any ideas?
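
    MSI error 1622 is the "error opening installation log file" condition quoted above, which often comes down to the environment the setup process inherits; a Cygwin SSH session does not necessarily carry the same TEMP/TMP (or profile) as an interactive logon. A guess worth testing in the SSH session before launching setup (paths are placeholders):

      # Point the Windows installer at a folder that definitely exists and is writable
      mkdir -p /cygdrive/c/temp
      export TEMP='C:\temp'
      export TMP='C:\temp'

      # then run the same unattended install command that works locally, unchanged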

    Read the article

  • Adding file to /etc/cron.d doesn't make it run (ubuntu 10.04)

    - by tom
    If I scp a cron file into /etc/cron.d it doesn't run unless I edit the file and change the command. Then crond seems to pick up the cron file. How can I make cron reload its cron files in ubuntu 10.04? 'touch'ing the file doesn't work nor does 'restart cron' or 'reload cron'. My cron file is set to run every minute and logs to a file. Nothing ends up in the log file until I edit the command, and there's no entry for it in /var/log/syslog I'm stumped. Here's my cron file saved to /etc/cron.d/runscript # Runs the script every minute. This is safe because it will exit with success if it's already running * * * * * www-data if [ -f /usr/local/bin/thing ]; then exec /usr/bin/php /usr/local/bin/thing mode:prod -a 14 -d >> /var/log/thing/mything.log 2>&1; else echo `date +'[%D %T]'` "Thing not deployed. Command not run\n" >> /var/log/thing/mything.log; fi &
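
    Debian/Ubuntu cron tends to be picky about /etc/cron.d entries: the file name must contain only letters, digits, underscores and hyphens, the file should be owned by root and not group/world-writable, and the last line must end with a newline. A quick sanity pass after copying the file in (a sketch):

      # Ownership and permissions cron will accept
      chown root:root /etc/cron.d/runscript
      chmod 644 /etc/cron.d/runscript

      # Confirm the file ends with a newline (should print \n as the last byte)
      tail -c 1 /etc/cron.d/runscript | od -c

      # Force a reload and watch for parse errors or skipped files
      service cron restart
      grep CRON /var/log/syslog | tail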

    Read the article

  • Is there a local yubnub.org replacement?

    - by Justin Keogh
    I use yubnub very often... every Google search I do by just (in Firefox) "ctrl-t" - (now in the URL bar) "y g searchterms" [Enter] "y" in this case is a search keyword I added by right-clicking in the yubnub.org command box. It's really fast, and I just do it automatically now... but the problem is that now I am stuck with whatever the yubnub command I am so used to using does. I can't change it... for example, what if I don't want to use Google... but I still want to use the "g" command to search? Or say I want to use Google's HTTPS search... etc... I suppose this would be kinda trivial to implement locally... but I would hate to re-invent the code if it's already done and in use... ideas? Also, a local yubnub.org replacement would save me the DNS lookup and traffic to yubnub.org. I don't expect to be able to import all commands from yubnub.org but that would be cool if possible.

    Read the article

  • What is wrong with my DNS entries?

    - by matheus
    I have some problems with a domain not working as expected. My registrar's control panel shows these records for mydomain.eu: www A 111.222.333.444 * A 111.222.333.444 I use the nameservers of my registrar. I get a correct answer if I do dig www.mydomain.eu dig whatever.mydomain.eu I can also ping/visit the website etc. with those addresses. But dig mydomain.eu won't resolve to anything. I just get this: ; <<>> DiG 9.6-ESV-R1 <<>> mydomain.eu ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46837 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;mydomain.eu. IN A ;; AUTHORITY SECTION: mydomain.eu. 1799 IN SOA ns1.binero.se. registry.binero.se. 1281647822 3600 240 1209600 3600 ;; Query time: 77 msec ;; SERVER: 8.8.8.8#53(8.8.8.8) ;; WHEN: Thu Jan 6 01:36:31 2011 ;; MSG SIZE rcvd: 83 The same A-record setup works for another domain/server IP, but that domain has other nameservers. What am I missing here?
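
    The NOERROR/ANSWER: 0 reply means the zone simply has no A record at the apex (the bare mydomain.eu): a wildcard entry does not cover the zone apex itself, so most control panels need an extra A record with an empty or "@" host name for that. To confirm against the authoritative server rather than a cache, a sketch (server name taken from the SOA above):

      # Ask the authoritative server for the bare domain explicitly
      dig @ns1.binero.se mydomain.eu A +norecurse +noall +answer +authority

      # Compare with a record that does work
      dig @ns1.binero.se www.mydomain.eu A +norecurse +noall +answer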

    Read the article

  • Shortcut To Full Screen App In Lion

    - by omghai2u
    I postponed getting OSX Lion for as long as I possibly could. Now that I have it, I'm having lots of difficulties getting it to perform how I want. On Snow Leopard my typical setup for working was 4 spaces. I'd keep a Windows VM open on Space #4 full-screened, a Linux open on space #3, and I'd do other stuff on spaces #1 and #2. My keyboard shortcut allowed me to switch between my Windows work (Command + 4) to my Linux work (Command + 3) very quickly, and without the need for my hands to leave the keyboard (or effectively to even quit typing). Productivity was good. I see that on Lion a full-screened VM (and yes, they need to be full screened, Fusion's Unity won't cut it for what I need to do) is its own separate Desktop. I have set up 4 desktops and made my keyboard shortcuts to move between them Command + # just as before. But how do I get my full-screened VM to be one of those already existing desktops? Or, rather, how do I make a short-cut for the full-screened app?

    Read the article

  • Port to use CentOS init.d functions

    - by jcalfee314
    What are good equivalent CentOS commands using functions in /etc/init.d/functions such as daemon to perform the following tasks? STARTCMD='start-stop-daemon --start --exec /usr/sbin/swapspace --quiet --pidfile /var/run/swapspace.pid -- -d -p' STOPCMD='start-stop-daemon --stop --oknodo --quiet --pidfile /var/run/swapspace.pid' It looks like daemon will work for the start command and killproc is used for the stop command. . /etc/init.d/functions pushd /usr/sbin daemon --pidfile /var/run/swapspace.pid /usr/sbin/swapspace . /etc/init.d/functions killproc -p $(cat /var/run/swapspace.pid) Would the --oknodo be needed in the CentOS env (the swap file is really only boot-time)? "oknodo - Return exit status 0 instead of 1 if no actions are (would be) taken." I don't see a quiet option in daemon or killproc; I can't imagine that it would matter, though. The original start-stop-daemon for swapspace seems to have both -p and --pidfile (the same command). That must be an error. Did I miss anything? Any idea why daemon does not create the pid file?
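
    A rough translation using the daemon and killproc helpers; treat it as a sketch. Note that neither helper creates the pid file for you - swapspace's own -d/-p flags (kept from the original command) are assumed to daemonize and write /var/run/swapspace.pid, and killproc expects the pidfile path rather than the pid itself:

      #!/bin/bash
      # Hypothetical /etc/init.d/swapspace sketch for CentOS
      . /etc/init.d/functions

      PIDFILE=/var/run/swapspace.pid
      PROG=/usr/sbin/swapspace

      start() {
          echo -n "Starting swapspace: "
          daemon --pidfile "$PIDFILE" "$PROG" -d -p
          echo
      }

      stop() {
          echo -n "Stopping swapspace: "
          # stop the process recorded in the pidfile
          killproc -p "$PIDFILE" "$PROG"
          echo
      }

      case "$1" in
          start) start ;;
          stop)  stop ;;
          *) echo "Usage: $0 {start|stop}"; exit 2 ;;
      esac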

    Read the article

  • Recovering data from an external hard drive

    - by CCallaghan
    I have a WD Elements 2GB hard drive (formatted NTFS). I accidentally kicked out the USB cable while writing data to the disk, and now I can't access most of the data. Although this was ostensibly my backup drive, there is a great deal of important material on there which was only on there. I realise how idiotic this makes me. (So, formatting is not an option.) Things I've tried/information I've gathered: Windows Explorer will recognise the drive itself. However, it will not access most directories therein (and will sometimes crash when exploring). I can access all of the directories through the command line, but the dir command will often report that it can't read any files in most of the directories. The situation was similar when I hooked it up to an Ubuntu machine: the file explorer crashed, but I could access directories - but not files in those directories - via terminal commands. Several files I tried to copy out either resulted in an I/O error being reported or resulted in the command line crashing. The Disk Management utility on Windows reports a healthy disk formatted as NTFS and not RAW. It also indicates the correct amount of space used up and its capacity (so it seems that the files are not deleted). I've tried to run chkdsk, but that hangs on Step 2 (checking indexes) at 74%. Step 1 reported no bad sectors. I tried Recuva, but that didn't seem to work (stalled at 0% for half an hour). I should also note that the disk doesn't seem to be spinning smoothly; it seems to be chopping back, like it's reading the same sector over and over again. I noticed this after I kicked out the cable. Any help would be greatly appreciated. Update: It would seem the problem has taken a turn for the worse. The external hard drive now shows up on my computer as a local disk and is not mountable by Linux.

    Read the article

  • named responding recursive on norecurse queries

    - by Keks
    I have a server on which named is running. It is intercepted with another named server which it is not aware of. Querying the first named server results in timeouts. The server tries to resolve the query recursively. During that the firewall redirects the DNS Request from the first named server to the second one (the query from the first one is addressed to a e.g. a root server and has its "Recursion desired" bit set to 0). Despite that the second named responds to this request with a entirely or at least 1 level more resolved response than the first named server expects. So it ends up with a timeout even though it got a correct name server or even the full IP for the queried domain. In the first case the first name server tries to follow the authority domain ignoring the coresponding glue record and ends up in a loop it aborts: queried: google.com -> got from named#2: ns1.google.com -> ignore glue record and query: ns1.google.com -> got authority from named#2: google.com In the second case it ignores the answer section with the correct IP and instead tries to follow the name servers from the authority section, which ends up in the same dead end as case 1. So how can it be that the second named responds with recursive results even though the bit was explicitly set to 0 in the request from the first named?

    Read the article
