Search Results

Search found 3252 results on 131 pages for 'agent smith'.


  • Increase backup speed, Backup Exec 2010 - QNAP TS419U+ NAS

    - by user99912
    We have a QNAP NAS whose network shares are being backed up by Backup Exec 2010 over SMB. We can't install the remote agent on the NAS as it has an ARM processor and, as far as I am aware, there is no compatible agent. Do you have any suggestions for a faster method of backing up these shares than the current scenario? Network bandwidth is not the issue; this access method simply doesn't seem able to go any quicker. We've also added the NAS shares to the start of the selection list, but we're still running into 18 hours total backup time (the total amount of data on the NAS is roughly 650GB). Any comments and/or suggestions welcome. EDIT: Data is being pulled from the NAS by Backup Exec to an LTO4 tape drive.

    Read the article

  • Fingerprint of PEM ssh key

    - by Unknown
    I have a PEM file which I add to a running ssh-agent:

        $ file query.pem
        query.pem: PEM RSA private key
        $ ssh-add ./query.pem
        Identity added: ./query.pem (./query.pem)
        $ ssh-add -l | grep query
        2048 ef:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX ./query.pem (RSA)

    My question is how I can get the key fingerprint I see in ssh-agent directly from the file. I know ssh-keygen -l -f some_key works for "normal" ssh keys, but not for PEM files. If I try ssh-keygen on the .pem file, I get:

        $ ssh-keygen -l -f ./query.pem
        key_read: uudecode PRIVATE KEY----- failed
        key_read: uudecode PRIVATE KEY----- failed
        ./query.pem is not a public key file.

    This key starts with:

        -----BEGIN RSA PRIVATE KEY-----
        MIIEp.... etc.

    as opposed to a "regular" private key, which looks like:

        -----BEGIN RSA PRIVATE KEY-----
        Proc-Type: 4,ENCRYPTED
        DEK-Info: AES-128-CBC,E15F2.... etc.
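
    A minimal sketch of one workaround, assuming stock OpenSSH: ssh-keygen -y reconstructs the public key from a private key file (prompting for the passphrase if the key is encrypted), and ssh-keygen -l will happily fingerprint the result:

        $ ssh-keygen -y -f query.pem > query.pem.pub
        $ ssh-keygen -l -f query.pem.pub
        2048 ef:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX query.pem.pub (RSA)

    The fingerprint is computed from the public half only, so it matches what ssh-add -l reports for the same key.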

    Read the article

  • SQL Server Agent jobs and turning off the server

    - by Tim Joseph
    I'm really new to SQL Agent jobs, but I am attempting to build up a maintenance regime for a server that will be turned off and on again at unknown intervals. It may run without being shut down for a month, or it might only be turned on 9-5... we don't know, and the client can't tell us because they don't know. So what I'm wondering is: what do I need to do to get SQL Server to run monthly and daily jobs either when they are due or, if the due date is missed, when the server is next powered on? I could come up with a mish-mash of periodic jobs and 'on-power-up' jobs, but if there is something more elegant that would be wonderful. Obviously I'll need to ensure the SQL Server Agent service is configured to start when the computer is powered up, but what else?
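
    For the 'on-power-up' half, SQL Server Agent schedules have a built-in type that fires when the Agent service starts, so a catch-up job can be attached to that directly. Alternatively, a hedged sketch of kicking a catch-up job (the job name here is hypothetical) from a startup task via msdb's sp_start_job:

        sqlcmd -S . -E -Q "EXEC msdb.dbo.sp_start_job @job_name = N'CatchUpMaintenance';"

    The catch-up job itself would then have to compare a last-run timestamp (e.g. stored in a small admin table) against the schedule and decide whether the daily or monthly work was missed.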

    Read the article

  • multiple puppet masters set up using inventory

    - by Oli
    I have managed to set up multiple puppet masters, with one puppet master acting as a CA; clients are able to get a certificate from this CA server but use their designated puppet master to get their manifests. See this question for more info: multiple puppet masters. However, there are a couple of things I have had to do to get this working correctly, and I have an error which I'll get to. First of all, to get inventory working for a puppet client (PC) connecting to its designated puppet master (PM), I had to copy the CA certs on PM1 to the PM2 ca directory. I ran this command:

        scp [email protected]:/var/lib/puppet/ssl/ca/* [email protected]:/var/lib/puppet/ssl/ca/.

    Once I had done that, I was able to uncomment the SSLCertificateChainFile, SSLCACertificateFile & SSLCARevocationFile sections of my rack.conf VH file on PM2. Once I had done this, inventory started to work. Does this sound like an acceptable way to do things? Secondly, in the puppet.conf file I am setting the designated PM server for that client. Unless there is a better way, this is how it'll work in my production setup: PC1 will talk to PM1 and PC2 will talk to PM2. This is where I have an error. When PC2 first requests a cert from the CA on PM1, the cert appears and then I sign it on the CA on PM1. When I then do a puppet agent --test on PC2 (which has server = PM2 in puppet.conf), I get this error:

        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: puppet-master2.test.net(10.1.1.161) access to /certificate_revocation_list/ca [find] at :112

    However, if I change the PC2 puppet.conf file to specify server = PM1 and then rerun puppet agent --test, I do not get any errors. I can then revert the change in the puppet.conf file back to server = PM2 and everything seems to run normally. Do I have to set up some kind of ProxyPassMatch on PM2 for requests made from clients to /certificate_revocation_list/* and redirect them to PM1? Or how can I fix this error? Cheers, Oli
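
    A hedged alternative to proxying: Puppet's agent-side ca_server setting routes all certificate traffic, including the CRL fetch, to a different master than the one serving catalogs. A sketch for PC2's puppet.conf (PM2's hostname is taken from the error above; PM1's name is assumed to follow the same pattern):

        [agent]
        server    = puppet-master2.test.net
        ca_server = puppet-master1.test.net

    With that in place, catalog and inventory requests go to PM2 while /certificate_revocation_list and the other CA endpoints are asked of PM1 directly, which would make the cert-copying step unnecessary.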

    Read the article

  • Exchange Server 2007 Transport Errors -- SMTP Session Hangs

    - by devviedev
    Hello, I was making some changes to the server (writing a Transport Agent). After trying to install it, I started to get some errors. Now, when connecting to the SMTP server, the session hangs just after finishing the DATA section. I'm not sure what happened; I disabled my transport agent and uninstalled it, then restarted the server. The problem persists. In the Event Viewer, four instances of the same error show up:

        Source: FSCTransportScanner
        Category: Scan Error
        Event ID: 5021
        Description: Unable to retrieve internet monitor interface.

    What could have happened?

    Read the article

  • OSSEC is not running

    - by batman
    I have two EC2 instances. On one I have installed the OSSEC server, and on the other the OSSEC agent. Here is my server's INBOUND config (security group/firewall):

        port:514  source:0.0.0.0/0
        port:1514 source:0.0.0.0/0

    But it seems not to be working. In my agent log file I keep getting:

        2012/08/28 06:52:52 ossec-agentd: INFO: Using IPv4 for: x.x.x.x.x.x .
        2012/08/28 06:53:13 ossec-agentd(4101): WARN: Waiting for server reply (not started). Tried: 'x.x.x.x.x'.

    Edit: Running sudo netstat --inet -nlp | grep ossec, I'm getting:

        udp 0 0 0.0.0.0:1514 0.0.0.0:* 26027/ossec-remoted

    Where am I making the mistake?
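
    Two hedged things to check, assuming a default OSSEC install under /var/ossec: the netstat output shows ossec-remoted listening on UDP 1514, so the EC2 security-group rule must be a UDP rule (a TCP-only rule for port 1514 produces exactly this "Waiting for server reply" loop), and the agent's key must have been imported on both sides. A sketch:

        # on the server: list registered agents, extract the agent's key, restart
        sudo /var/ossec/bin/manage_agents     # option L lists agents, E extracts a key
        sudo /var/ossec/bin/ossec-control restart

        # on the agent: import the key extracted above, then restart
        sudo /var/ossec/bin/manage_agents     # option I imports the key
        sudo /var/ossec/bin/ossec-control restart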

    Read the article

  • Do these 3 crashes have something in common?

    - by David U
    I'm running OS X 10.6.8 on a Mac Mini. I tried to install 3 applications today and all 3 installations failed. I am wondering if the failures have something in common. First I installed GraphViz. The installation succeeded, but when I try to open any .dot file, I get a dialog that says GraphViz has quit unexpectedly. Next I installed Doxygen. It installed, but when I try to launch it I get a dialog that tells me Doxywizard quit unexpectedly. After some googling I thought perhaps my system lacked Qt, and that was the problem. I downloaded the Qt 4.8.4 packages and installed them. But when I try to launch qtdemo.app, or any of the other apps that came with the Qt installation, I get a dialog that says I can't open the app because it's not supported on this type of Mac. I have crash logs from GraphViz and Doxygen. They're long and I think it unnecessary to post them unless they would help someone determine my problem. Thanks
    Excerpt from System Log, added later:

        12/13/12 5:26:21 PM [0x0-0x4f04f].com.apple.DiskImageMounter[1322] 2012-12-13 17:26:21.927 DiskImages UI Agent[1333:903] *** -[NSMachPort handlePortMessage:]: dropping incoming DO message because the connection or ports are invalid
        12/13/12 5:30:31 PM [0x0-0x1a01a].org.mozilla.firefox[824] [ConvConfHandler] isPreferred contentType: application/x-apple-diskimage
        12/13/12 5:35:32 PM DiskImages UI Agent[1384] *** -[NSMachPort handlePortMessage:]: dropping incoming DO message because the connection or ports are invalid
        12/13/12 5:35:32 PM [0x0-0x5a05a].com.apple.DiskImageMounter[1376] 2012-12-13 17:35:32.988 DiskImages UI Agent[1384:903] *** -[NSMachPort handlePortMessage:]: dropping incoming DO message because the connection or ports are invalid
        12/13/12 6:07:33 PM DisplayLinkUserAgent[772] (00116500.405)-[DLDistributedNotificationCenter stream:handleEvent:] reconnected.
        12/13/12 6:07:33 PM [0x0-0x6c06c].backupd-helper[1446] Not starting Time Machine backup after wake - less than 60 minutes since last backup completed.
        12/13/12 6:08:43 PM Installer[1403] PackageKit: *** Missing bundle identifier: /Library/Receipts/BrotherPPD.pkg
        12/13/12 6:08:48 PM Installer[1403] PackageKit: *** Missing bundle identifier: /Library/Receipts/NeoOffice-2.2.3-Intel.pkg
        12/13/12 6:08:48 PM Installer[1403] PackageKit: *** Missing bundle identifier: /Library/Receipts/NeoOffice-2.2.3-Patch-2-Intel.pkg
        12/13/12 6:08:48 PM Installer[1403] PackageKit: *** Missing bundle identifier: /Library/Receipts/NeoOffice-2.2.5-Intel.pkg
        12/13/12 6:08:48 PM Installer[1403] PackageKit: *** Missing bundle identifier: /Library/Receipts/NeoOffice.pkg
        12/13/12 6:08:48 PM Installer[1403] PackageKit: *** Missing bundle identifier: /Library/Receipts/PIXMA iP6000D 290.pkg
        12/13/12 6:14:39 PM com.apple.launchd.peruser.501[359] ([0x0-0x70070].com.att.graphviz[2047]) Job appears to have crashed: Bus error
        12/13/12 6:14:41 PM ReportCrash[2056] Saved crash report for Graphviz[2047] version 2.28 (2.28.0) to /Users/duzzell/Library/Logs/DiagnosticReports/Graphviz_2012-12-13-181441_Amun.crash
        12/13/12 6:15:19 PM com.apple.launchd.peruser.501[359] ([0x0-0x74074].org.doxygen[2070]) Job appears to have crashed: Bus error
        12/13/12 6:15:19 PM ReportCrash[2056] Saved crash report for Doxywizard[2070] version 1.8.2 (???) to /Users/duzzell/Library/Logs/DiagnosticReports/Doxywizard_2012-12-13-181519_Amun.crash
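
    One quick, hedged check that could tie all three failures together: if the installed binaries were built for an architecture or OS newer than this 10.6.8 machine, launchd reports exactly this kind of "Bus error" crash, and Finder refuses with "not supported on this type of Mac". The file command shows what a binary was built for (the paths below are illustrative; adjust to wherever the apps actually landed):

        $ file /Applications/Graphviz.app/Contents/MacOS/Graphviz
        $ file /Applications/Doxygen.app/Contents/MacOS/Doxywizard

    A 32-bit Core Duo-era Mini, for instance, cannot run an x86_64-only build, no matter what frameworks are installed.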

    Read the article

  • excel 2010 search function?

    - by Tom
    Can a range (A1:A200) be searched for a "name" and, once found, the cell location input into a formula? Something like: find "tom" in A1:A200, [found at cell A22], then use that location in:

        =IF(MINUTE(Auto_Agent!G27)+(SECOND(Auto_Agent!G27))=0,"",(MINUTE(Auto_Agent!G27)*60+(SECOND(Auto_Agent!G27))))

    The problem I'm having is that each time I import data, names can be in different cell locations depending on who is working that day. Example:

        Agent: Tom
        07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05
        Avg Skillset Talk Time: 00:06:52
        07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05
        () 9/19/2012
        Avg Skillset Talk Time: 00:06:52
        07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05

        Agent: Bill
        07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05
        Avg Skillset Talk Time: 00:06:52
        07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05
        () 9/19/2012
        Avg Skillset Talk Time: 00:06:52
        07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05
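
    A hedged sketch with MATCH and INDEX, assuming the agent labels ("Agent: Tom") sit in column A of the imported sheet and the value wanted is a fixed offset below that row:

        =MATCH("Agent: Tom",Auto_Agent!A1:A200,0)

    returns the row number where Tom's block starts, and

        =INDEX(Auto_Agent!G:G,MATCH("Agent: Tom",Auto_Agent!A1:A200,0)+1)

    pulls column G from the row just below it. The +1 offset is illustrative; it depends on where within each agent's block the value lives. MATCH returns #N/A when the agent is absent that day, which IFERROR can catch.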

    Read the article

  • How can I diagnose cache misses when using Apache as a reverse proxy?

    - by johnstok
    I have set up Apache 2.2 as a reverse proxy with the following configuration:

        # jBoss proxying
        ProxyRequests Off
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass        /foo http://localhost:9080/foo
        ProxyPassReverse /foo http://localhost:9080/foo
        ProxyPassReverseCookiePath /foo /foo

        # Reverse proxy caching
        CacheEnable disk /foo

        # Compression
        SetOutputFilter DEFLATE
        BrowserMatch ^Mozilla/4 gzip-only-text/html
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
        BrowserMatch \bMSIE\s(7|8) !no-gzip !gzip-only-text/html
        DeflateCompressionLevel 9
        Header append Vary User-Agent env=!dont-vary

    However, in a number of cases where I expect a cached response to be returned, the request is sent through to the origin server at localhost:9080. Responses have an HTTP Vary header of 'Accept-Encoding,User-Agent', which is to be expected given the mod_deflate configuration. How can I determine why Apache is unable to serve a response from the cache?
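
    One way to see mod_cache's decision per request in Apache 2.2 (a hedged sketch; mod_cache and mod_disk_cache explain at debug level why a response was not served from, or not stored in, the cache):

        # temporary, in the server or vhost config
        LogLevel debug

    then watch the decisions scroll by:

        tail -f /var/log/apache2/error.log | grep -i cache

    A common finding with a Vary of 'Accept-Encoding,User-Agent' is that every distinct User-Agent string creates its own cache variant, so most requests miss; that would show up directly in these log lines.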

    Read the article

  • How can I anonymize my browser useragent, yet still be counted as a FF/Ubuntu user?

    - by Rory
    I read about EFF's Panopticlick project to see how unique your web browser's headers are. I would like to anonymize that a bit. My current user agent is:

        Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.7) Gecko/20100106 Ubuntu/9.10 (karmic) Firefox/3.5.7

    I would like to make that more anonymous; however, I still want to be counted as a Firefox and Ubuntu user. How can I change my user agent in Firefox? What should I change it to so that it's less unique, but will still be counted as a Firefox user and an Ubuntu user by web analytics software? I know that there is no guarantee that I will be counted as a Firefox/Ubuntu user; something that 'works most of the time' would be sufficient.
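
    Firefox reads the user agent from a preference, so no extension is required. A sketch (the override below is an illustrative trim of the current string, with the locale and point-release detail dropped):

        // about:config -> right-click -> New -> String
        general.useragent.override = Mozilla/5.0 (X11; Linux i686; rv:1.9.1) Gecko/20100106 Ubuntu/9.10 Firefox/3.5

    Analytics packages generally key on the "Firefox/3.5" and "Ubuntu" tokens, so a string keeping both should still count correctly while matching far more visitors than the full original.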

    Read the article

  • Infected Win XP disk not openable on a Vista computer

    - by Retired57
    How can I read a WinXP-partitioned disk that is infected and, over a USB connection to a Win Vista computer, shows up as a single partition whose contents Vista cannot see? The XP computer's HDD is partitioned into 4 partitions. It became infected, and all attempts to clean it have failed. Applications begin to launch but are then shut down by the infecting agent. Using a major anti-virus company's boot disk (which was unable to connect to the web, probably because the infecting agent stopped it) with virus definitions dated after the disk became infected, the resultant scan showed no infection. I bought a USB cable to connect the IDE drive to my Vista computer, but when I open Windows Explorer, it sees the disk but does not show any contents. It indicates it is a single, valid partition. However, in all the ways I have tried, it does not show drive contents. Any suggestions on what to do next?
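
    One hedged first step from the Vista machine: check whether Vista actually sees one partition or four before worrying about contents. Disk Management (diskmgmt.msc) shows the layout graphically, or from an elevated prompt:

        diskpart
        DISKPART> list disk
        DISKPART> select disk 1     (substitute the USB drive's number from "list disk")
        DISKPART> list partition

    If only one of the four partitions appears, the partition table itself is damaged or was altered by the malware, which is a different repair job than an unreadable filesystem.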

    Read the article

  • View a PDF with quick web view through an Apache proxy

    - by Musa
    I have a site (IIS) that is accessed via a proxy in Apache (on an IBM i). This site serves PDFs which have quick web view, and if I access a PDF directly from the IIS server it starts to display immediately, but if I go through the proxy I have to wait until the entire PDF downloads before I can view it. In the Apache config file I use:

        ProxyPass /path/ http://xxx.xxx.xxx.xxx/
        <LocationMatch "/path/">
            Header set Cache-Control "no-cache"
        </LocationMatch>

    I tried adding SetEnv proxy-sendcl to the LocationMatch directive; this had no effect. The PDFs that view quickly make a lot of partial requests. This is the initial request and response:

        GET http://xxx.xxx.xxx.xxx/xxx.PDF HTTP/1.1
        Host: xxx.xxx.xxx.xxx
        Proxy-Connection: keep-alive
        Cache-Control: no-cache
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
        Pragma: no-cache
        User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Cookie: chocolatechip

        HTTP/1.1 200 OK
        Via: 1.1 xxxxxxxx
        Connection: Keep-Alive
        Proxy-Connection: Keep-Alive
        Content-Length: 15330238
        Date: Mon, 25 Aug 2014 12:48:31 GMT
        Content-Type: application/pdf
        ETag: "b6262940bbecf1:0"
        Server: Microsoft-IIS/7.5
        Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET

    This is a partial request and response:

        GET http://xxx.xxx.xxx.xxx/xxx.PDF HTTP/1.1
        Host: xxx.xxx.xxx.xxx
        Proxy-Connection: keep-alive
        Cache-Control: no-cache
        Pragma: no-cache
        User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
        Accept: */*
        Referer: http://xxx.xxx.xxx.xxx/xxxx.PDF
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Cookie: chocolatechip
        Range: bytes=0-32767

        HTTP/1.1 206 Partial Content
        Via: 1.1 xxxxxxxx
        Connection: Keep-Alive
        Proxy-Connection: Keep-Alive
        Content-Length: 32768
        Date: Mon, 25 Aug 2014 12:48:31 GMT
        Content-Range: bytes 0-32767/15330238
        Content-Type: application/pdf
        ETag: "b6262940bbecf1:0"
        Server: Microsoft-IIS/7.5
        Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET

    These are the headers I get if I go through the proxy:

        GET /path/xxx.PDF HTTP/1.1
        Host: domain:xxxx
        Connection: keep-alive
        Cache-Control: no-cache
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
        Pragma: no-cache
        User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8

        HTTP/1.1 200 OK
        Date: Mon, 25 Aug 2014 13:28:42 GMT
        Server: Microsoft-IIS/7.5
        Content-Type: application/pdf
        Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT
        Accept-Ranges: bytes
        ETag: "b6262940bbecf1:0"-gzip
        X-Powered-By: ASP.NET
        Cache-Control: no-cache
        Expires: Thu, 24 Aug 2017 13:28:42 GMT
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Keep-Alive: timeout=300, max=100
        Connection: Keep-Alive
        Transfer-Encoding: chunked

    I'm guessing it's because the proxy uses Transfer-Encoding: chunked, but I'm not sure and wasn't able to turn it off to check. Browser: Chrome 36.0.1985.143 m, using the native PDF viewer. Any help to get PDF quick web view working through the proxy would be appreciated.
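
    A hedged reading of those headers: the proxied response is gzip-compressed (Content-Encoding: gzip), which is why Content-Length disappears in favour of chunked transfer, and a compressed stream cannot satisfy the Range: bytes=... requests that quick web view depends on. A sketch of exempting PDFs from mod_deflate on the proxy:

        # PDFs are internally compressed already; skipping DEFLATE keeps
        # Content-Length and byte-range support intact end to end
        SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary

    If the viewer then starts getting 206 Partial Content responses through the proxy, the chunked-encoding guess was the right one.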

    Read the article

  • Can I re-attach SSH key forwarding through a disconnected Screen session?

    - by David Mackintosh
    I have a laptop on which I have Pageant (the PuTTY SSH key agent) running. If I ssh to a system and launch screen, the SSH key forwarding works properly. However, if I disconnect from that screen session, log off, and later reconnect, the key forwarding doesn't work any more. I am presuming that this is because when I reconnect, the forwarding is set up on a different agent socket for the new ssh session than the one the old session recorded. Is there a way to teach an individual screen window to reconnect to the agent forwarding so that I can use my key to forward again?
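
    That presumption is right in spirit: each new connection gets a fresh forwarding socket path in $SSH_AUTH_SOCK, while processes inside the old screen session still hold the stale path. A common sketch of a fix is to keep a stable symlink that every login repoints:

        # ~/.ssh/rc (or a login-shell profile) on the remote host
        if [ -S "$SSH_AUTH_SOCK" ] && [ "$SSH_AUTH_SOCK" != "$HOME/.ssh/agent_sock" ]; then
            ln -sf "$SSH_AUTH_SOCK" "$HOME/.ssh/agent_sock"
        fi

    and have shells inside screen always use the symlink:

        # in ~/.bashrc, guarded so it only applies inside screen ($STY is set by screen)
        if [ -n "$STY" ]; then
            export SSH_AUTH_SOCK="$HOME/.ssh/agent_sock"
        fi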

    Read the article

  • Is it possible to mod_rewrite BASED on the existence of a file/directory and uniqueID? [closed]

    - by JM4
    My site currently forces all non-www pages to use www. Ultimately I am able to handle all unique subdomains and parse them correctly, but I am trying to achieve the following (ideally with mod_rewrite): when a consumer visits www.site.com/john4, the server processes that request as www.site.com/index.php?Agent=john4. Our requirements are: The URL should continue to show www.site.com/john4 even though it was rewritten to www.site.com/index.php?Agent=john4. If a file (of any extension) or a directory exists with that name, the entire process stops and it tries to pull that file instead: for example, www.site.com/file would pull up www.site.com/file.php if file.php existed on the server, and www.site.com/pages would go to www.site.com/pages/index.php if the pages directory exists. Thank you ahead of time. I am completely at a loss right now.
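
    A minimal sketch of the usual guard pattern, for an .htaccess at the docroot (the Agent parameter name is taken from the question; the rest is stock mod_rewrite):

        RewriteEngine On

        # real files and directories win outright
        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]

        # /file -> file.php when that script exists
        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^(.+)$ $1.php [L]

        # everything else is treated as an agent name; internal rewrite, URL unchanged
        RewriteRule ^([A-Za-z0-9]+)/?$ index.php?Agent=$1 [L,QSA]

    Because the last rule is an internal rewrite (no R flag), the browser keeps showing /john4. The /pages case already falls under the directory rule, after which mod_dir's DirectoryIndex serves pages/index.php.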

    Read the article

  • Linux/Apache performance very slow even on local network

    - by klausch
    I have an Ubuntu server machine running Apache and MySQL. System and version info is as follows:

        Linux kernel 3.0.0-12
        Apache/2.2.20
        MySQL Ver 14.14 Distrib 5.1.58

    I am running a few websites on this server, some HTML only, some PHP/MySQL. The problem is that response time is very slow, on both the static and the dynamic sites. Sometimes it takes more than 10 seconds before a response is given; this makes the sites very slow and almost unusable. The problem occurs even when requesting from the local network. I have added the involved subdomains to my /etc/hosts file, and above all the problem is not solved by using IP numbers instead of URLs, so there is no DNS lookup issue. I have modified the log format to show response times, and sometimes a file takes 12 seconds to be served; see the jquery~.js file in the example screenshot. I have no explanation for this extremely long response time, but it is not even the only issue here: some other files take a long time to be served too, but do not show a long response time in the log file. So probably different issues are involved here. I cannot find a solution so far; any suggestions? Thanks in advance, Klaas (link to screenshot picture from access logfile)
    Some extra configuration info. apache2.conf (comments removed):

        LockFile ${APACHE_LOCK_DIR}/accept.lock
        PidFile ${APACHE_PID_FILE}
        Timeout 300
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5
        <IfModule mpm_prefork_module>
            StartServers 5
            MinSpareServers 5
            MaxSpareServers 10
            MaxClients 150
            MaxRequestsPerChild 0
        </IfModule>
        <IfModule mpm_worker_module>
            StartServers 2
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadLimit 64
            ThreadsPerChild 25
            MaxClients 150
            MaxRequestsPerChild 0
        </IfModule>
        <IfModule mpm_event_module>
            StartServers 2
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadLimit 64
            ThreadsPerChild 25
            MaxClients 150
            MaxRequestsPerChild 0
        </IfModule>
        User ${APACHE_RUN_USER}
        Group ${APACHE_RUN_GROUP}
        AccessFileName .htaccess
        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>
        DefaultType text/plain
        HostnameLookups Off
        ErrorLog ${APACHE_LOG_DIR}/error.log
        LogLevel warn
        Include mods-enabled/*.load
        Include mods-enabled/*.conf
        Include httpd.conf
        Include ports.conf
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %T/%D" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent
        Include conf.d/
        Include sites-enabled/

    And the virtual host file for one of the slow sites; in fact it is pretty straightforward:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerSignature EMail
            ServerName toenjoy.drsklaus.nl
            DocumentRoot /var/www/toenjoy.drsklaus.nl
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/toenjoy.drsklaus.nl/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride AuthConfig
                AuthType Basic
                AuthName "To Enjoy"
                AuthUserFile /etc/.htpasswd
                Require user petraaa
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    And the output of free -m:

        klaas@ubuntu-server:/etc/apache2$ free -m
                     total       used       free     shared    buffers     cached
        Mem:          1997       1401        595          0        144       1017
        -/+ buffers/cache:        238       1758
        Swap:         2035          0       2035

    and I have no indication that swapping occurs at the moments the site is slow. I have run top and it does not appear to be a CPU issue. I have the impression that the spawning of an Apache thread could be the bottleneck, but that is just a suggestion. Maybe this gives some extra information! EDIT: The problem seemed to be gone for some time but has occurred again! And not only with Apache: connecting using SSH also takes a tremendous time, sometimes up to 15 seconds before the passphrase is asked for. scp also works very slowly. The behaviour is really unpredictable and makes the server very hard to use. Any ideas...?
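
    Given that plain SSH and scp stall the same way before the passphrase prompt, a hedged suspect is a slow or unreachable DNS resolver on the server: sshd does a reverse lookup of the connecting client by default, and various services block for seconds while a PTR query times out (HostnameLookups Off only covers Apache's logging). A quick sketch to rule it out:

        # should answer instantly; a multi-second wait points at the resolver
        dig -x 192.168.1.50 +stats     # substitute a real client LAN address

        # /etc/ssh/sshd_config: disable sshd's reverse lookups, then restart ssh
        UseDNS no

    If SSH becomes fast with UseDNS no, fixing /etc/resolv.conf (or the local DNS server) will likely cure the intermittent Apache symptom too.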

    Read the article

  • Chrome Blocked by Company's Proxy

    - by gol.d.ace
    Chrome has suddenly been blocked from accessing the internet by my company; IE, however, still works fine. It blocks the connection; the error says my Chrome connection timed out and the server is not responding (Error 118 (net::ERR_CONNECTION_TIMED_OUT): The operation timed out). I guess the reason behind this is to standardize all users' browsers on IE for security reasons. I have tried to change the user-agent string (--user-agent="...") so that Chrome will present itself as IE, and yet it is still not working! Could anyone shed some light on my problem here? IE is just not comfortable, so to speak, for surfing the web!
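
    Two hedged things to try from a shortcut or command line (the proxy host below is hypothetical; the UA string is a standard IE8-on-Windows-7 one). ERR_CONNECTION_TIMED_OUT often means Chrome is not reaching the proxy at all rather than being rejected by it, so pointing Chrome at the proxy explicitly is worth trying alongside the UA change:

        chrome.exe --proxy-server="proxy.corp.example:8080" ^
                   --user-agent="Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0)"

    Note that command-line flags only take effect for a fresh Chrome process, so all existing Chrome windows must be closed first.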

    Read the article

  • Puppet: get real-time status of catalog evaluation and post to a remote server

    - by txworking
    According to this article http://docs.puppetlabs.com/guides/puppet_internals.html there are four phases when the puppet agent gets a catalog from the master:

        resource generation -> relationships -> evaluation -> reporting

    Reporting: as the transaction progresses, it collects logs and metrics on what it does. At the end of evaluation, it turns this information into a report, which it sends to the server (if requested). So at the end of evaluation the puppet agent generates a report and sends it to the master. Is there a way to get real-time status of the evaluation phase and post it to a remote log collector? Glad for any suggestions.
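
    A hedged partial answer: the stock flow only produces the report at the end of the run, but that end-of-run report can at least be forwarded automatically with the built-in http report processor (the collector URL below is hypothetical):

        # on the agent (usually the default already)
        [agent]
        report = true

        # on the master: POST each run's report as YAML to a collector
        [master]
        reports   = store,http
        reporturl = http://logcollector.example.net:8080/reports/upload

    Anything closer to real time would mean shipping the agent's own log output (e.g. puppet agent --test --verbose piped to a log forwarder), since the evaluation phase does not emit per-resource callbacks to the master.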

    Read the article

  • robots.txt file with more restrictive rules for certain user agents

    - by Carson63000
    Hi, I'm a bit vague on the precise syntax of robots.txt, but what I'm trying to achieve is:
    - Tell all user agents not to crawl certain pages
    - Tell certain user agents not to crawl anything
    (basically, some pages with enormous amounts of data should never be crawled; and some voracious but useless search engines, e.g. Cuil, should never crawl anything). If I do something like this:

        User-agent: *
        Disallow: /path/page1.aspx
        Disallow: /path/page2.aspx
        Disallow: /path/page3.aspx

        User-agent: twiceler
        Disallow: /

    ..will it flow through as expected, with all user agents matching the first rule and skipping page1, page2 and page3, and twiceler matching the second rule and skipping everything?

    Read the article

  • curl -I stops responding when the URL contains "="

    - by user1512778
    I ran this command:

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceName=WebNSOR&templateName=detail.htm&requestingHandler=WebNSORDetailHandler&ID=368343543'

    but it hangs. However, if I run this:

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi'
        HTTP/1.1 200 OK
        Content-length: 207
        Content-type: text/html
        Server: Sun-ONE-Web-Server/6.1
        Date: Sat, 15 Dec 2012 08:49:14 GMT
        Via: 1.1 proxy-internet-revproxy
        Proxy-agent: Oracle-iPlanet-Proxy-Server/4.0

    Then I tried shortening it:

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceName=WebNSOR&templateName=detail.htm'

    It hangs too. I don't know why; it seems that if the URL contains "=" it stops responding. So I tried the URL with the "=" removed (serviceName=WebNSOR to serviceNameWebNSOR):

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceNameWebNSOR'
        HTTP/1.1 200 OK
        Content-length: 207
        Content-type: text/html
        Server: Sun-ONE-Web-Server/6.1
        Date: Sat, 15 Dec 2012 08:50:38 GMT
        Via: 1.1 proxy-internet-revproxy
        Proxy-agent: Oracle-iPlanet-Proxy-Server/4.0

    Why can't I use "="? Please assist me.
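
    A hedged way to rule out shell quoting and URL encoding on the client side: let curl build the query string itself with -G/--data-urlencode, and cap the wait with --max-time so a hang fails fast:

        curl -I -G --max-time 30 \
             --data-urlencode 'serviceName=WebNSOR' \
             --data-urlencode 'templateName=detail.htm' \
             --data-urlencode 'requestingHandler=WebNSORDetailHandler' \
             --data-urlencode 'ID=368343543' \
             'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi'

    If this still times out, the client side is fine and something upstream (the reverse proxy visible in the Via header, or the CGI itself) is choking on HEAD requests that carry a real query; trying a plain GET (drop -I, add -o /dev/null -D -) would be the next test.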

    Read the article

  • Emulate, debug BlackBerry mobile website.

    - by Pennf0lio
    I'm developing a website that needs to be compatible with mobile phones (esp. BlackBerry). How can I debug its CSS? I have already installed the Default User Agent plug-in for Firefox; the problem is I just don't know how to add the BlackBerry agent. The next problem is how I can properly emulate the mobile site without actually having the device. Thanks! I'm using Windows 7, Firefox + Firebug + Web Developer Toolbar. I know XHTML, CSS and WordPress.
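
    For the first problem, the plug-in just needs a BlackBerry user-agent string to impersonate; as a hedged example, one widely circulated Bold 9000 string (any string from your actual target model works the same way):

        BlackBerry9000/4.6.0.167 Profile/MIDP-2.0 Configuration/CLDC-1.1 VendorID/102

    UA switching only fools server-side device detection, though; the page still renders with Firefox's engine. For the second problem, RIM distributes free device simulators that run the real BlackBerry browser on Windows, which is the closer test.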

    Read the article

  • SQL Server 2005 transactional replication break before a configured number of retries

    - by ti2
    We have a SQL Server 2000 Standard database with some tables being replicated (continuous transactional replication) to dozens of SQL Server 2005 Express and MSDE computers. Step 2 of the replication agent job (Run agent) is configured by default to retry every 1 minute, 10 times, if a problem occurs. Because the client machines get shut down at night (they are POS machines), we changed the number of retries to 5760 (4 days), so replication would not be broken at night and would not need to be restarted manually. But the problem is that every other day we have at least one machine with broken replication, with this error:

        The process could not connect to Subscriber 'POS986'.
        NOTE: The step was retried the requested number of times (5760) without succeeding. The step failed.

    It seems that SQL Server is not respecting the number of retries or the interval between retries as we configured them. PS: I did restart the replication job after changing the number of retries from 10 to 5760.
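
    A hedged sanity check: read back what the Agent job actually has stored for that step on an affected machine (retry_interval is in minutes, so 5760 retries at 1 minute really should span 4 days):

        sqlcmd -E -S .\SQLEXPRESS -Q "SELECT j.name, s.step_id, s.retry_attempts, s.retry_interval FROM msdb.dbo.sysjobsteps s JOIN msdb.dbo.sysjobs j ON j.job_id = s.job_id;"

    (On the MSDE boxes, osql -E accepts the same -Q query.) If the stored values are correct, the job history timestamps would show whether the retries are genuinely firing once a minute or being burned off faster than configured.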

    Read the article

  • Open source monitoring tool without sending data to "Their Server"

    - by hangu
    I'm trying to find an open source server monitoring tool. I know there are a lot, but I couldn't find one that does what I need. The basic process of the monitoring tools I have used before was:
    1) Install an agent on the server I want to monitor
    2) The agent sends data to "their server"
    3) I check the health of my server through a web page presented by them
    What I need is to avoid step 2. Are there any monitoring tools I can use? I have Windows 2008 and Linux servers; simple features (CPU, memory, network...) will be enough. Thank you.

    Read the article

  • AuthSub token from Google/YouTube API is always returned as invalid

    - by Miriam Raphael Roberts
    Anyone out there have experience with the YouTube/Google API? I am trying to log in to Google/YouTube using ClientLogin, retrieve an AuthSub token, exchange it for a multi-session token and then use it in our upload form. Just a note that we are not going to have other users logging into our (secure) website; this is for our use only (no multi-users). We just want a way to upload videos to our YT account via our own website without having to log in and upload at YouTube. Ultimately, everything is dependent on the first step. My AuthSub token is always being returned as invalid (Error '403'). All the steps I used are below, with username/password changed. Anyone have an insight on why my AuthSub is always invalid? I am spending an enormous amount of time trying to get this to work.

    STEP 1: Getting the AuthSub token from YouTube/Google

        POST /youtube/accounts/ClientLogin HTTP/1.1
        User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4
        Host: www.google.com
        Pragma: no-cache
        Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
        Content-Type:application/x-www-form-urlencoded
        Content-Length: 86

        Email=MyGoogleUsername&Passwd=MyGooglePasswd&accountType=GOOGLE&service=youtube&source=Test

    RESPONSE RECEIVED:

        Auth=AIwbFAR99f3iACfkT-5PXCB-1tN4vlyP_1CiNZ8JOn6P-......yv4d4zeGRemNm4il1e-M6czgfDXAR0w9fQ
        YouTubeUser=MyYouTubeUsername

    CURL COMMAND USED:

        /usr/bin/curl -S -v --location https://www.google.com/youtube/accounts/ClientLogin --data Email=MyGoogleUsername&Passwd=MyGooglePasswd&accountType=GOOGLE&service=youtube&source=Test --header Content-Type:application/x-www-form-urlencoded

    STEP 2: Exchanging the AuthSub token for a multi-use token

        GET /accounts/AuthSubSessionToken HTTP/1.1
        User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4
        Host: www.google.com
        Pragma: no-cache
        Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
        Content-Type:application/x-www-form-urlencoded
        Authorization: AuthSub token="AIwbFASiRR3XDKs......p5Oy_VA_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ"

    Response received: 403 Invalid AuthSub token.

    curl command used:

        /usr/bin/curl -S -v --location https://www.google.com/accounts/AuthSubSessionToken --header Content-Type:application/x-www-form-urlencoded -H Authorization: AuthSub token="AIwbFAQR_4xG2g.....vp3BQZW5XEMyIj_wFozHSTEQ-BQRfYuIY-1CyqLeQ"

    STEP 3: Checking to see if the token is good/valid

        GET /accounts/AuthSubTokenInfo HTTP/1.1
        User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4
        Host: www.google.com
        Pragma: no-cache
        Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
        Content-Type:application/x-www-form-urlencoded
        Authorization: AuthSub token="AIwbFASiRR3XDKsNkaIoPaujN5RQhKs3u.....A_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ"

    Received response: 403 Invalid AuthSub token.

    curl command used:

        /usr/bin/curl -S -v --location https://www.google.com/accounts/AuthSubTokenInfo --header Content-Type:application/x-www-form-urlencoded -H Authorization: AuthSub token="AIwbFAQR_4xG2gHoAKDsNdFqdZdwWjGeNquOLpvp3BQZW5XEMyIj_wFozHSTEQ-BQRfYuIY-1CyqLeQ"

    STEP 4: Trying to get the upload token using the AuthSub token

        POST /action/GetUploadToken HTTP/1.1
        User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4
        Host: gdata.youtube.com
        Pragma: no-cache
        Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
        Content-Type:application/atom+xml
        Authorization: AuthSub token="AIwbFASiRR3XDKsNkaIoPaujN5RQhp5Oy_VA_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ"
        X-Gdata-Key:key="AI39si5EQyo-TZPFAnmGjxJGFKpxd_7a6hEERh_3......R82AShoQ"
        Content-Length:0
        GData-Version:2

    Received response: 401 Token invalid - Invalid AuthSub token.

    curl command used:

        /usr/bin/curl -S -v --location http://gdata.youtube.com/action/GetUploadToken -H Content-Type:application/atom+xml -H Authorization: AuthSub token="AIwbFASiRR3XDKs....sYDp5Oy_VA_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" -H X-Gdata-Key:key="AI39si5EQyo-TZPFAnmGjxJGF......Kpxd6dN2J1oHFQYTj_7a6hEERh_3E48R82AShoQ" -H Content-Length:0 -H GData-Version:2
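
    A hedged observation that may explain all four steps at once: the Auth= value that ClientLogin returns is a ClientLogin (GoogleLogin) token, not an AuthSub token, so the AuthSub endpoints in steps 2 and 3 will always answer 403 for it; there is no exchanging one for the other. The YouTube GData calls accept a ClientLogin token directly with the GoogleLogin scheme, so a sketch of step 4 using the token from step 1 as-is:

        /usr/bin/curl -S -v --location http://gdata.youtube.com/action/GetUploadToken \
          -H 'Authorization: GoogleLogin auth=AIwbFAR99f3iACfkT-...' \
          -H 'X-GData-Key: key=AI39si5EQyo-TZPFAnmGjxJGFKpxd_7a6hEERh_3......R82AShoQ' \
          -H 'Content-Type: application/atom+xml' \
          -H 'GData-Version: 2' \
          --data @video-entry.xml

    (GetUploadToken also expects the video's metadata entry as the POST body; video-entry.xml is a placeholder for that file.) AuthSub tokens, by contrast, come back from a browser redirect through https://www.google.com/accounts/AuthSubRequest, not from ClientLogin.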

    Read the article

  • Session Cookies and IE 8

    - by Matt Luongo
    I recently built a simple web-app deployed over Tomcat. The app uses pretty standard session based security where a user who has logged in is given a session. Sessions work fine in Firefox and Chrome, but require the use of jsessionid in the URL for IE (tested 7 & 8), set to medium privacy. In IE 8, I tried to override cookie handling, setting "Allow all 3rd party cookies" and "Allow all session cookies" - no dice. However, when I run Tomcat on my local machine, IE accepts the cookie, and sessions work just fine. And now, for the HTTP headers. From Chrome, a logged in user gets a session:

        GET http://devl:8080/testing/ HTTP/1.1
        Host: devl:8080
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1036 Safari/532.5
        Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        P3P: CP="NON CURa ADMa DEVa TAIa OUR BUS IND UNI COM NAV INT STA"
        Set-Cookie: JSESSIONID=9280023BCE2046F32B13C89130CBC397; Path=/testing
        Content-Type: text/html;charset=UTF-8
        Content-Language: en-US
        Content-Length: 2450
        Date: Fri, 26 Mar 2010 14:14:40 GMT

        GET http://devl:8080/testing/logout HTTP/1.1
        Host: devl:8080
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1036 Safari/532.5
        Referer: http://devl:8080/testing/
        Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        Cookie: JSESSIONID=9280023BCE2046F32B13C89130CBC397
        ...

    From IE 8, with standard medium level security and privacy:

        GET http://devl:8080/testing/ HTTP/1.1
        Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */*
        Accept-Language: en-US
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MDDC; Tablet PC 2.0)
        UA-CPU: AMD64
        Accept-Encoding: gzip, deflate
        Host: devl:8080
        Connection: Keep-Alive

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        P3P: CP="NON CURa ADMa DEVa TAIa OUR BUS IND UNI COM NAV INT STA"
        Set-Cookie: JSESSIONID=192999F922D6E9C868314452726764BA; Path=/testing
        Content-Type: text/html;charset=UTF-8
        Content-Language: en-US
        Content-Length: 2450
        Date: Fri, 26 Mar 2010 14:32:34 GMT

        GET http://devl:8080/testing/logout HTTP/1.1
        Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */*
        Referer: http://devl:8080/testing/;jsessionid=6371A83EFE39A46997544F9146AA5CEA
        Accept-Language: en-US
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MDDC; Tablet PC 2.0)
        UA-CPU: AMD64
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: devl:8080
        ...

    I thought it might be P3P, but on adding a compact policy, nothing changes. This is the standard Tomcat session, so I'm really surprised I haven't been able to find other people with the same problem so far. Anyone have any ideas?

    Read the article

  • Getting RINGING response on SIP UAC without sending it from the other UAC

    - by TacB0sS
    Hi, I hope this will be my last question about this SIP subject. I have managed to overcome the last issue I had by asking a friend to help me from a remote computer, and I'm able to connect between the computers. But here is the thing: according to all the examples I saw, the callee should send the Ringing response, yet in my application I haven't implemented that, and I still receive a Ringing response on the caller UAC. These are the SIP messages on the caller end:

    Outgoing Request 5:

        INVITE sip:[email protected] SIP/2.0
        Contact: "Client 310" <sip:[email protected]>
        From: "Client 310" <sip:[email protected]>
        Max-Forwards: 32
        CSeq: 2 INVITE
        Call-ID: [email protected]
        Allow: INVITE,CANCEL,ACK,BYE,OPTIONS
        Content-Type: application/sdp
        Proxy-Authorization: Digest username="310",nonce="012afffb",realm="asterisk",uri="sip:[email protected]",algorithm=MD5,response="d19ca5b98450b4be7bd4045edb8a3a2f"
        Via: SIP/2.0/UDP hostName.hn:5060
        To: "Client 320" <sip:[email protected]>;tag=as5a8fa200
        Content-Length: 257

        v=0
        o=310 7108915969559970847 7108915969559970847 IN IP4 xxx.xxx.x.xxx
        s=-
        i=Nu-Art Software - TacB0sS VoIP information
        c=IN IP4 xxx.xxx.x.xxx
        m=audio 3312 RTP/AVP 0 8 101
        a=rtpmap:0 PCMU/8000
        a=rtpmap:8 PCMA/8000
        a=rtpmap:101 telephone-event/8000

    Incoming Response 6:

        SIP/2.0 100 Trying
        Via: SIP/2.0/UDP hostName.hn:5060;branch=f8d171d3278788df9e03eb9cf3acba70-xxx.xxx.x.xxx-2-invite-hostName.hn-5060333732;received=79.181.6.233
        From: "Client 310" <sip:[email protected]>
        To: "Client 320" <sip:[email protected]>;tag=as5a8fa200
        Call-ID: [email protected]
        CSeq: 2 INVITE
        User-Agent: Freeswitch 1.2.3
        Allow: INVITE,ACK,CANCEL,OPTIONS,BYE,REFER,SUBSCRIBE,NOTIFY,INFO
        Supported: replaces
        Contact: <sip:[email protected]>
        Content-Length: 0

    Incoming Response 7:

        SIP/2.0 180 Ringing
        Via: SIP/2.0/UDP hostName.hn:5060;branch=f8d171d3278788df9e03eb9cf3acba70-xxx.xxx.x.xxx-2-invite-hostName.hn-5060333732;received=79.181.6.233
        From: "Client 310" <sip:[email protected]>
        To: "Client 320" <sip:[email protected]>;tag=as5a8fa200
        Call-ID: [email protected]
        CSeq: 2 INVITE
        User-Agent: Freeswitch 1.2.3
        Allow: INVITE,ACK,CANCEL,OPTIONS,BYE,REFER,SUBSCRIBE,NOTIFY,INFO
        Supported: replaces
        Contact: <sip:[email protected]>
        Content-Length: 0

    Call to: [email protected] is Ringing

    Incoming Response 8:

        SIP/2.0 183 Session Progress
        Via: SIP/2.0/UDP hostName.hn:5060;branch=f8d171d3278788df9e03eb9cf3acba70-xxx.xxx.x.xxx-2-invite-hostName.hn-5060333732;received=79.181.6.233
        From: "Client 310" <sip:[email protected]>
        To: "Client 320" <sip:[email protected]>;tag=as5a8fa200
        Call-ID: [email protected]
        CSeq: 2 INVITE
        User-Agent: Freeswitch 1.2.3
        Allow: INVITE,ACK,CANCEL,OPTIONS,BYE,REFER,SUBSCRIBE,NOTIFY,INFO
        Supported: replaces
        Contact: <sip:[email protected]>
        Content-Type: application/sdp
        Content-Length: 264

        v=0
        o=root 27669 27669 IN IP4 yy.yy.yy.yy
        s=session
        c=IN IP4 yy.yy.yy.yy
        t=0 0
        m=audio 10914 RTP/AVP 0 8 101
        a=rtpmap:0 PCMU/8000
        a=rtpmap:8 PCMA/8000
        a=rtpmap:101 telephone-event/8000
        a=fmtp:101 0-16
        a=silenceSupp:off - - - -
        a=ptime:20
        a=sendrecv

    Incoming Response 9:

        SIP/2.0 503 Service Unavailable
        Via: SIP/2.0/UDP hostName.hn:5060;branch=f8d171d3278788df9e03eb9cf3acba70-xxx.xxx.x.xxx-2-invite-hostName.hn-5060333732;received=79.181.6.233
        From: "Client 310" <sip:[email protected]>
        To: "Client 320" <sip:[email protected]>;tag=as5a8fa200
        Call-ID: [email protected]
        CSeq: 2 INVITE
        User-Agent: Freeswitch 1.2.3
        Allow: INVITE,ACK,CANCEL,OPTIONS,BYE,REFER,SUBSCRIBE,NOTIFY,INFO
        Supported: replaces
        Content-Length: 0

    I do not respond to the INVITE; that is why all this is happening. But why am I getting a Ringing response if I'm not the one sending it? Thanks, Adam.
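
    A hedged explanation consistent with these traces: the far end here is not the other UAC at all but a back-to-back user agent (the responses carry User-Agent: Freeswitch, and the Proxy-Authorization realm says asterisk), and a B2BUA is free to generate 100/180/183 toward the caller on its own while it works on reaching the callee; the final 503 then reflects its failure to get any answer from your agent. For reference, the callee side would normally answer the INVITE itself with a minimal response along these lines (per RFC 3261, echoing the request's Via/From/Call-ID/CSeq and adding a locally generated To tag):

        SIP/2.0 180 Ringing
        Via: (copied unchanged from the INVITE)
        From: (copied unchanged from the INVITE)
        To: (copied from the INVITE, plus a locally generated ;tag=...)
        Call-ID: (copied unchanged from the INVITE)
        CSeq: 2 INVITE
        Content-Length: 0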

    Read the article
