Search Results

Search found 467 results on 19 pages for 'alexander wada'.


  • Simple NNTP client for Ubuntu

    - by Alexander Gladysh
    I need to download a particular NNTP group. I do not need to set up any cron jobs, put group contents in /opt, or anything like that. I just want to run <fetch-nntp> <server> <group-name> <output-dir> and be done with it, without leaving a lot of clutter in the system. If <fetch-nntp> skipped messages it had already fetched on a second run, great; but I can live without that. All the NNTP clients I've looked at try to be an NNTP server as well. Is there something simpler that suits my needs?
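
    A rough sketch of such a one-shot fetcher in Python 3, using the (since-deprecated) standard-library nntplib; the server, group, and output directory come straight from the command line, and nothing else touches the system:

        #!/usr/bin/env python3
        # fetch-nntp: dump every article in a group to a directory, then exit.
        # No state is kept, so a second run re-fetches everything.
        import os
        import sys
        from nntplib import NNTP

        def fetch_group(server, group, output_dir):
            os.makedirs(output_dir, exist_ok=True)
            conn = NNTP(server)
            _resp, _count, first, last, _name = conn.group(group)
            for num in range(first, last + 1):
                try:
                    _resp, info = conn.article(str(num))
                except Exception:
                    continue  # the article may have expired on the server
                with open(os.path.join(output_dir, '%d.eml' % num), 'wb') as f:
                    f.write(b'\r\n'.join(info.lines))
            conn.quit()

        if __name__ == '__main__':
            fetch_group(sys.argv[1], sys.argv[2], sys.argv[3])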

    Read the article

  • "Security Warning" comes up when I run via another program

    - by Alexander Bird
    If I execute vmmap from the command line it works fine. However, if I call some other program and pass vmmap as a parameter for that program to launch, I get this "security error" popup, which makes it hard to automate scripts. In other words, I want to wrap vmmap via another program, because whenever vmmap runs it brings up a window momentarily and then disappears. So I try passing vmmap as an argument to another program which will start it "headlessly". I tried this program and this program, and in both cases I get the same popup, which defeats the purpose of automation. Why does this happen when the program isn't run directly? Does anyone know the internals of what this warning is? And, ultimately, is there a way to stop it from happening, but only for this instance? I don't want to disable the warning system on my whole computer. EDIT: I am using Windows Server 2003. I don't necessarily need solutions for other platforms, but I would like to know what they are if the solutions are platform-dependent.
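
    A hedged guess at the mechanism: executables downloaded from the internet carry a Zone.Identifier alternate data stream, and Windows' Attachment Manager prompts before running them; some launch paths apparently trigger the prompt where a direct invocation does not. If that is the cause here, deleting the stream on just this one file silences the prompt without changing machine-wide policy, e.g. with the Sysinternals streams tool:

        streams.exe -d vmmap.exe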

    Read the article

  • How to use ssl_verify_client=ON on one virtual server and ssl_verify_client=OFF on another?

    - by Alexander Artemenko
    I want to force SSL client verification for one of my virtual hosts, but I get a "No required SSL certificate was sent" error when trying to GET something from it. Here are my test configs:

        # defaults
        ssl_certificate /etc/certs/server.cer;
        ssl_certificate_key /etc/certs/privkey-server.pem;
        ssl_client_certificate /etc/certs/allcas.pem;

        server {
            listen 1443 ssl;
            server_name server1.example.com;
            root /tmp/root/server1;
            ssl_verify_client off;
        }

        server {
            listen 1443 ssl;
            server_name server2.example.com;
            root /tmp/root/server2;
            ssl_verify_client on;
        }

    The first server replies with an HTTP 200, but the second returns "400 Bad Request, No required SSL certificate was sent, nginx/1.0.4". Is it perhaps impossible to use different ssl_verify_client settings on the same IP? Should I bind these servers to different IPs, and will that solve my problem?
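
    One assumption worth testing before splitting IPs: selecting a server block by name on a single IP relies on SNI, and a client that does not send SNI lands on the default (first) server, so the verification setting you think you are testing may not be the one in effect. An alternative pattern is to make verification optional at handshake time and enforce it per virtual host:

        server {
            listen 1443 ssl;
            server_name server2.example.com;
            root /tmp/root/server2;
            ssl_verify_client optional;

            location / {
                # $ssl_client_verify is SUCCESS, FAILED or NONE
                if ($ssl_client_verify != SUCCESS) {
                    return 403;
                }
            }
        }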

    Read the article

  • PHP failing to connect to GMail via IMAP [Edited!!]

    - by Alexander
    I have some PHP code that I'm trying to use to connect to Gmail over IMAP. Here's the code:

        $hostname = '{imap.gmail.com:993/imap/ssl/novalidate-cert}INBOX';
        $tmp_username = 'username';
        $tmp_password = 'password';
        $inbox = imap_open($hostname, $username, $password) or die(imap_last_error());

    And I get this error output every time I try to connect:

        Warning: imap_open() [function.imap-open]: Couldn't open stream {imap.gmail.com:993/imap/ssl/novalidate-cert}INBOX in /var/www/PHP/EmailScript.php on line 14
        Login aborted

    I don't understand what could be wrong. I've heard of people having SSL errors, but this doesn't seem to be one of those. Please help! Edit: When trying to connect to imap.gmail.com through telnet-ssl I get the following output:

        Trying 74.125.155.109...
        Connected to gmail-imap.l.google.com.
        Escape character is '^]'.

    And nothing else happens.
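
    One thing that stands out in the snippet as posted (possibly a transcription slip, but worth flagging): the credentials are assigned to $tmp_username and $tmp_password, yet imap_open() is called with the undefined $username and $password, which would mean logging in with empty values. The corrected call would be:

        $inbox = imap_open($hostname, $tmp_username, $tmp_password)
            or die(imap_last_error());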

    Read the article

  • Archive Manager, SQL 2005 and MaxTokenSize high CPU

    - by Tim Alexander
    So, I posted this question a few days ago: Impact of increasing the MaxTokenSize for Kerberos Tickets. Since then the plan was to test our settings on two member servers, one with IIS and one without. I set up two GPOs to configure the MaxTokenSize registry setting to 48000 and MaxFieldLength/MaxRequestBytes to 64200 (based on MS KB2020943, these are set at 4/3 * T + 200). The plain member server seemed to work fine (a devalued tape backup server). The IIS server, however, has had some strange repercussions. It hosts Quest Software Archive Manager (AM) 4.5, which communicates with SQL Server 2005 Enterprise on Server 2003 R2. After the changes all looked good until the SQL Server hit 100% CPU. I have removed the GPOs, removed the registry values, and even replaced them with defaults (12000 for token size; I can't remember the other one, but it was in a blog post about the issue linked in my other post). No change. Bouncing the IIS server stops the high CPU, and a colleague has confirmed it is definitely the AM connection taking up the time/work on the SQL Server. I haven't changed the registry values on the SQL Server or the DCs, but I am reluctant to do so without understanding why this has happened. I am guessing it's to do with the overriding auth and group issue we have, but I am not seeing Kerberos errors in either event log. Has anyone seen something similar, or does anyone have some tips? I was definitely blindsided by the Kerberos issue and am swimming against the tide to keep things functioning.
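
    For reference, a sketch of the values described above as a .reg file; the paths follow the KBs the poster cites, and the dword data are just the poster's own numbers (48000 = 0xBB80, 64200 = 0xFAC8) rendered in hex:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters]
        "MaxTokenSize"=dword:0000bb80

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters]
        "MaxFieldLength"=dword:0000fac8
        "MaxRequestBytes"=dword:0000fac8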

    Read the article

  • Ping and crawling not working, site still resolving

    - by Andrew Alexander
    Ok, so we're trying to figure out why the site of one of our clients isn't being crawled by Google (we've ruled out robots.txt and meta tags). When we go to the site, by either IP address or domain name, the site resolves and everything works. However, Google is getting a 302 redirect (which it apparently isn't following for crawling), and when we ping the address, it times out (note: the site still resolves in the browser throughout all of this). The site is built in ASP.NET (I assume C#), so my thought was an errant redirect rule or some other server-side issue. We also thought it might be due to incorrect domain pointing, but pinging the IP directly doesn't work either, which sort of rules that out. We're really not sure what is causing all of these errors, or even whether they have a single source. Anyone have any ideas what could be going on? Do you need any more information? To boil it down in a TL;DR:
    * Site resolving in browser, both by IP and domain name. No problems here.
    * Site not being crawled by Google (gets a 302 it doesn't seem to follow); this is not due to robots.txt or meta tags.
    * Ping not working for the IP address. Very odd, because again, the IP address works fine in the browser.
    * Our thoughts: a redirect rule issue, a domain pointing issue, possibly some errant code, or some combination of the three.
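
    Two generic checks that might narrow this down (example.com is a placeholder): ping uses ICMP, which many firewalls drop independently of HTTP, so a timing-out ping alongside a working browser usually just means ICMP is filtered and is a red herring. The 302, on the other hand, can be inspected directly by fetching the page with a Googlebot user agent and following the redirect chain:

        curl -sIL -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://example.com/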

    Read the article

  • Intranet IP - Access from Custom Domain

    - by Alexander Wigmore
    I have set up a local intranet in my office using IIS7 (on a Windows 7 machine). Currently it can be accessed through the PC's static IP, but I would like it to be reachable internally by an easier method, e.g. typing http://intranet (or something similar). There are over 60 PCs in the office, so individually updating hosts files is not really practical. We don't need it to be accessible from the outside world (i.e. we don't want an extranet). Any tips?
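
    If the office runs an internal DNS server (a fair assumption on a Windows network), a single A record gives every PC the http://intranet name with no per-machine changes. A sketch using dnscmd on the DNS server, with a placeholder zone and address:

        dnscmd /recordadd office.local intranet A 192.168.1.50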

    Read the article

  • nginx: dump HTTP requests for debugging

    - by Alexander Gladysh
    Ubuntu 10.04.2, nginx 0.7.65. I see some weird HTTP requests coming to my nginx server. To better understand what is going on, I want to dump the whole HTTP request data for such queries (i.e. dump all request headers and the body somewhere I can read them). Can I do this with nginx? Alternatively, is there some HTTP server that lets me do this out of the box, to which I can proxy these requests by means of nginx? Update: Note that this box has a bunch of normal traffic, and I would like to avoid capturing all of it at a low level (say, with tcpdump) and filtering it out later. I think it would be much easier to filter the good traffic first in a rewrite rule (fortunately I can write one quite easily in this case), and then deal with the bogus traffic only. And I do not want to channel the bogus traffic to another box just to be able to capture it there with tcpdump. Update 2: To give a bit more detail, the bogus requests have a parameter named (say) foo in their GET query (the value of the parameter can differ). Good requests are guaranteed never to have this parameter. If I can filter by this in tcpdump or ngrep somehow, no problem, I'll use those.
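
    Given that last constraint, a hedged ngrep one-liner may already be enough; it matches only GET requests whose query string carries the foo parameter and prints each full request, headers included (the interface name and port are placeholders):

        ngrep -q -W byline -d eth0 'GET /[^ ]*[?&]foo=' 'tcp port 80'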

    Read the article

  • Multiple munin-nodes per machine

    - by Alexander T
    I'm collecting statistics remotely through JMX. The munin JMX plugin allows you to select a URL to connect to when aggregating statistics, which lets me collect statistics from hosts that do not actually have munin-node installed. I find this a desirable property for systems where I am prevented from installing munin-node. The way I work today: if I want to collect JMX stats from machine A without munin-node, I install munin-node on machine B. Machine B then collects data from A via JMX and reports it to the munin server, which runs on machine C. This setup requires multiple B-type machines: one per A-type machine. I would like to avoid this and instead use only one B-type machine to collect the data from all A-type machines and report it to the only munin server (the C-type machine). As far as I understand, this requires running multiple munin-nodes on B, or in some other way telling the munin server that the B-type machine is reporting data from multiple sources. Is this possible? Thank you.
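
    Munin can express this with what its documentation calls virtual nodes: the single node on B answers for several host names, and the master is simply told to ask B's address about each of them. A sketch of the master-side munin.conf under that assumption (host names and the address are placeholders):

        # both entries point at machine B; use_node_name no stops the
        # master from insisting that B's own hostname match the entry
        [serverA1.example.com]
            address 192.168.0.20
            use_node_name no

        [serverA2.example.com]
            address 192.168.0.20
            use_node_name no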

    Read the article

  • Why can't I connect to remote Microsoft SQL Server through SSH tunnel?

    - by Alexander
    I have at home a D-Link DIR-615 C1 router with DD-WRT. I set up the SSH server on the router, and log on with an SSH2-RSA passphrase-protected key. That router is the gateway between the local network and the internet. One of the computers on that network has Microsoft SQL Server 2008 installed, with the TCP/IP protocol enabled on port 1433. I've set up port forwarding on the router, so remote connections are possible and are, in fact, working (some developers log on remotely without problems). I am part of another network that reaches the internet through a proxy server with only ports 80 and 443 open. I can't connect to that MSSQL server directly because port 1433 is closed on this network. So I connected (using PuTTY) through port 443 to my router's SSH server and set up two tunnels. One is for RDP (3389), and it's working. The other is for port 1433, to connect to the SQL Server. But I can't connect through the SSH tunnel to the MS SQL Server, neither through telnet nor through GUI clients. Am I missing something? Additional details: on connect, I get this error from SQL Server Management Studio:

        TITLE: Connect to Server
        Cannot connect to localhost:14330.
        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 3)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=3&LinkId=20476
        BUTTONS: OK

    The tunnel is configured like this: L14330 192.168.0.103:1433. 192.168.0.103 is the permanent address of the SQL Server on the LAN. I also successfully forwarded TCP traffic on port 3389 to that IP, so tunneling to that IP address works. When connecting without the tunnel through Microsoft SQL Server Management Studio, using the same method, the connection establishes. Too bad my proxy doesn't allow port 1433 traffic, or I wouldn't have this headache.
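
    One detail that may be the whole story here (an educated guess from the error text): SSMS does not use host:port syntax. "localhost:14330" is treated as a server name, the lookup fails, and the client falls back to Named Pipes, which matches the "Named Pipes Provider" error above. SQL Server clients put the port after a comma, and a tcp: prefix forces the TCP provider, so the server name to try in SSMS would be:

        tcp:127.0.0.1,14330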

    Read the article

  • Would NetBSD be a good choice for a web server?

    - by Alexander
    I have the choice of crafting a NetBSD image for a Xen VPS host, and I want to play around with it, as I like BSD and would like to use it for my general web hosting. I will be hosting a low-to-mid-traffic website and maybe a few other simple services. Do you think NetBSD would be a sufficient choice, in terms of general performance with multiple system users and a fair amount of traffic to Apache, compared to what Linux could normally handle? I am concerned that if I do start to really like it and keep it, I may be limiting myself if I move further with my web host and get more traffic (and maybe a lot of FTP access and user shell accounts). Ken

    Read the article

  • Cannot Send Item error in Outlook - permissions to registry?

    - by Tim Alexander
    The issue I am trying to solve is users getting a "Cannot Send Item" error in Outlook 2007 connecting to Exchange 2007. Basically, if there is an image in the email (either one they have pasted in or one from another email in the chain) they get a "Cannot Send Item" error. We initially thought it was a Citrix issue, but users get it when they RDP to a server as well. Changing the message to Rich Text works 80% of the time, but I do not consider that a solution, more of a temporary workaround. After some troubleshooting we found that the error can be fixed by making the user a member of the local Power Users group; of course this is not really a fix either. My thought was that a power user's ability to add/remove software may give them more access to the registry, which might let them get around a restriction that is in place for a normal user. I have tried going through a Procmon trace, but the wealth of information is confusing. It initially looked like it might be an Outlook 2007 email security setting, but this does not change between power user and normal user (set to 1 in the registry, "Use the security setting from Outlook Security Settings Public Folders"). I am struggling to fine-tune my troubleshooting to work out exactly what is blocking it. Has anyone had experience with an error similar to this? Or are there any tips for tracking down issues via Procmon, as I must admit my approach seems somewhat lacking :) EDIT: So I have trawled through the two logs we have from Process Monitor (one as a power user and one as a normal user). Annoyingly, I can find no obvious difference where something is denied access. There are more ACCESS DENIED events in the normal user log, but these are quickly followed by successful entries to the same path fractions of a second later. The only thing that does stand out is an ACCESS DENIED on HKCR\.html. This does not even appear in the power user version of the log. From what I understand this key helps determine the default browser, which ties in nicely with the fact that 9 times out of 10 you can send the message as Rich Text. EDIT: Looks like KB2509470 was causing the issue. Not really sure why, but when I work out what it does and why it causes the problem I will post here, unless anyone beats me to it!

    Read the article

  • Is ECC mandatory in SSD technology?

    - by Alexander Shcheblikin
    While shopping for an SSD I noticed that some manufacturers promote their "Pro" models as the ones sporting ECC data protection, and do not mention ECC in the descriptions of their budget models. However, the Wikipedia article on flash memory states that "NAND relies on ECC to compensate for bits that may spontaneously fail during normal device operation." So the question is: does every SSD use ECC behind the scenes for its normal operation, making the advertised ECC "feature" just a marketing ploy?

    Read the article

  • Creating a custom NAS compatible with the Mac Time Machine and for media streaming

    - by Bobby Alexander
    I am planning to assemble a custom NAS machine using an Intel Atom processor. I need the NAS for the following purposes: It should be accessible from my Windows PC so that I can dump data on the NAS (installations, media, etc.). It should be accessible from my MacBook for the same use. I should be able to use it with the Mac Time Machine software for backup. The media should be available to my PS3 for streaming. I should be able to access it from my iPhone. All the above features should be available over wireless. The Time Machine feature is very important. Is this even possible? Can someone provide resources on how I can assemble such a machine and set up the required software on it? Much appreciated.

    Read the article

  • XP can't read data from transferred HD

    - by Alexander Miller
    Computer A, running XP, died. XP was installed on a fresh HD in computer B. The data-backup HD from A was installed as a slave in B. B will not read it: it shows only two folders, Recycler and System Volume Information. All of these are older machines with IDE drives. What's going on, and how can I read/transfer the data from the transferred drive? This was only a trial run. Eventually I will need to transfer the master HD from A (which has XP on it) and read from its data partition, because (blush) the backup drive was not up to date.
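
    A possible explanation (not confirmed for this machine): Recycler and System Volume Information are hidden system folders, so if only those show, the remaining folders may be blocked by NTFS permissions granted to accounts from the dead install. On XP, granting the local Administrators group access from an admin command prompt often makes the data visible again (D: here is a placeholder for the slaved drive's letter):

        cacls D:\ /T /E /G Administrators:F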

    Read the article

  • Make logwatch reports more interesting?

    - by Alexander Shcheblikin
    Is it possible to improve the quality of reports from logwatch? For example, make it not just report disk usage (which barely changes in daily operation) but report significant changes in usage or capacity approaching critical levels? If I cannot do that with logwatch and instead have to write custom scripts to produce such reports, logwatch appears to be pretty useless, or even dangerous, as many users reportedly grow to ignore its emails knowing how boring they are.

    Read the article

  • What version of Ubuntu to use for Desktop with 8Gb RAM?

    - by Alexander
    This may sound like a stupid question, but I am really interested in whether Ubuntu Desktop i386 will be able to use all my available RAM. I want to use the latest non-LTS version, 10.10. The website says the i386 build is the recommended version. I also recall that Flash Player had issues with 64-bit Linux. Also, the 64-bit version is listed in the Universal USB Installer as amd64. Does this mean it uses instruction sets specific to AMD CPUs? (I have an Intel CPU.) Will it work fine with Intel? So which one should I download and install, and what do I need to do to be able to use 8 GB of RAM?

    Read the article

  • Disable "Send as XPS Attachment" Word 2007

    - by Tim Alexander
    Is there a way to disable this option, either via Group Policy or via some form of registry hack? Normally I would go down the route of telling users not to send as XPS and to send as something else, but with our recent upgrade to Office 2007 lots of users are banding these files around. Unfortunately our version of Citrix does not play nicely with XPS documents, and we end up having to log the users out. I am told the fix for Citrix is not forthcoming, so I wondered if I could bury my head in the sand and disable the option altogether. Regards, Tim

    Read the article

  • Juniper router dropping pings to external interface

    - by Alexander Garden
    My organization has a Juniper SSG20-WLAN that routes our traffic to the outside world. We've been having intermittent problems with our internet connection, so I wrote a Python script that pings the internal interface of the router, the external interface, a couple of our internal servers, the ISP router our router talks to, their upstream provider, and Google and Yahoo for good measure, roughly every minute. What I have found is that when our internet goes out, our Juniper router stops responding to pings on its external interface. Everything past that is, of course, unreachable. The internal interface and our internal servers continue to echo back without interruption. None of the counters indicate dropped packets of any type; they all look normal. The logs complain about VIP servers being unavailable but otherwise show nothing indicative of network issues. My questions are these: Does this exonerate our ISP? Or, on the contrary, might a problem with the connection be causing the external interface to go down? Is there somewhere else in the SSG20, besides the system log and counters, that might help me track down the problem? UPDATE: It turned out that one of the switches between my monitoring box and the router was a router itself, and occasionally diverted traffic from the gateway to itself. Kudos to those who made suggestions along those lines. I'm not really sure which answer to mark as accepted, as it was really the comments that turned out to be right. Thanks for the suggestions.

    Read the article

  • Linux migration/N high CPU consumption

    - by Alexander
    On my Linux appliance based on a 3.0.0-14 kernel I got:

        RPN:/tmp# ps axuf | grep migration
        root  6  92.9  0.0  0  0  ?  S  Apr23  2788:33  \_ [migration/0]
        root  7  99.7  0.0  0  0  ?  S  Apr23  2993:20  \_ [migration/1]

    My top output is:

        RPN:/tmp# top -b -n1
        top - 12:03:41 up 2 days, 2:18, 5 users, load average: 25.76, 25.26, 24.73
        Tasks: 171 total, 1 running, 168 sleeping, 0 stopped, 2 zombie
        Cpu(s): 14.0%us, 12.6%sy, 0.8%ni, 72.0%id, 0.3%wa, 0.0%hi, 0.3%si, 0.0%st
        Mem: 1543032k total, 1264728k used, 278304k free, 25308k buffers
        Swap: 0k total, 0k used, 0k free, 183168k cached

    My question: why do the "migration/N" processes take so much CPU?

    Read the article

  • Why am I seeing Zero errors in non-ECC RAM?

    - by Alexander Shcheblikin
    According to various sources, memory errors are a very probable event: some say the probability of a DRAM error is 95% in just 3 days of operation of a computer with just 4 GB of RAM; others say 32% of servers experience at least one error in a month, with 8% of DIMMs being at fault. Contrary to those horrors, in my more than 10 years of personal computer use I have seen exactly none of these memory errors. I admit I never paid special attention to the subject. However, I have ventured multi-hour memtest86 runs a couple of times and never saw an error either. Some factors that IMO should aggravate memory problems: I build my computers out of the most "bulk commodity" parts, mainstream budget motherboards and the next-to-cheapest memory. I also usually max out the technology available, e.g. in the times of 32-bit OSes I used 4 GB of RAM, and with current desktop CPUs and the newer 64-bit OSes I use 32 GB of RAM. Memory usage is moderately heavy, with lots of virtual machines running small and big tasks 24/7/365. Nevertheless, I have never found a memory-related problem. How's that?

    Read the article

  • How to route packets from Wi-Fi to Ethernet on OSX?

    - by Alexander Artemenko
    I am having trouble configuring a home network. Here is how my devices are connected together:

        Internet
          |
        Wi-Fi Router ---- MacBook
          |
        iMac --(ethernet cable)-- Synology NAS

    I have no way to plug the NAS directly into the Wi-Fi router. The problem is that the MacBook does not see the NAS, because they are in different networks: I configured the Wi-Fi router to serve 192.168.10.0/24 addresses, and configured the iMac's ethernet connection to use the 192.168.20.0/24 network. Is there a way to set up a route from the MacBook to the NAS?
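
    A sketch of what that routing could look like, with an assumed address (192.168.10.5 standing in for the iMac's Wi-Fi address): enable packet forwarding on the iMac, then tell the MacBook to reach the NAS subnet via the iMac:

        # On the iMac: forward packets between the Wi-Fi and Ethernet interfaces
        sudo sysctl -w net.inet.ip.forwarding=1

        # On the MacBook: route the NAS subnet via the iMac's Wi-Fi address
        sudo route add -net 192.168.20.0 -netmask 255.255.255.0 192.168.10.5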

    Read the article

  • How does Notepad++ know to recognize the HTML and CSS in PHP files? Can I do this with PSP files?

    - by Andrew Alexander
    I am trying to get Notepad++ to recognize PSP (Python Server Pages) files. I've gotten it to recognize Python (by adding psp to the ext= section), but it doesn't understand that the Python is only within the <% %> and <%= %> sections. I want it to parse the HTML, CSS, JavaScript, and possibly even PHP (though if I am using PSP, I'd probably stick with that) as well, showing all the colors etc. that would normally be associated with them. How do I go about that?

    Read the article
