Search Results

Search found 9366 results on 375 pages for 'common lisp'.

Page 270/375 | < Previous Page | 266 267 268 269 270 271 272 273 274 275 276 277  | Next Page >

  • Clarification for setting up SSH terminal access on Cisco IOS

    - by Matt Malesky
    I'm attempting to set up SSH on a Cisco 2811 and having some difficulties. The first step should be running crypto key generate rsa, but I seem to be missing that command:

        better#crypto key generate rsa
                           ^
        % Invalid input detected at '^' marker.
        better#

    Furthermore, the only available commands I have in the crypto key namespace are lock and unlock, which seem to indicate a locked keypair (for which I don't know the password):

        better#crypto key ?
          lock    Lock a keypair.
          unlock  Unlock a keypair.
        better#crypto key unlock ?
          rsa  RSA keys
        better#crypto key unlock rsa
        %% Please enter the passphrase:
        %% Unlocking failed.
        better#

    More or less, I'm asking what exactly this might mean, and whether I actually do have certificates on here already (it's a used router). Otherwise, how can I solve this? It's my first time configuring this feature, but I definitely believe it's part of my IOS. Speaking of my IOS, I'm running the image c2800nm-advsecurityk9-mz.124-24.T6.bin. I'll also note that I have my hostname and ip domain-name configured. I'll also give you a dir flash: below if it's at all of use:

        better#dir flash:
        Directory of flash:/

            2  -rw-        2748  Jul 27 2009 14:03:52 +00:00  sdmconfig-2811.cfg
            3  -rw-      931840  Jul 27 2009 14:04:10 +00:00  es.tar
            4  -rw-     1505280  Jul 27 2009 14:04:32 +00:00  common.tar
            5  -rw-        1038  Jul 27 2009 14:04:46 +00:00  home.shtml
            6  -rw-      112640  Jul 27 2009 14:05:00 +00:00  home.tar
            7  -rw-     1697952  Jul 27 2009 14:05:26 +00:00  securedesktop-ios-3.1.1.45-k9.pkg
            8  -rw-      415956  Jul 27 2009 14:05:46 +00:00  sslclient-win-1.1.4.176.pkg
            9  -rw-    38732900  Dec  8 2011 06:28:56 +00:00  c2800nm-advsecurityk9-mz.124-24.T6.bin

        64016384 bytes total (20598784 bytes free)
        better#
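
    For reference, on an image that actually exposes the crypto subsystem, the usual SSH bring-up looks something like the sketch below (a minimal sketch of the standard sequence, not verified on this particular router; the modulus and vty range are illustrative):

        better(config)#hostname better
        better(config)#ip domain-name example.lan
        better(config)#crypto key generate rsa modulus 1024
        better(config)#ip ssh version 2
        better(config)#line vty 0 4
        better(config-line)#transport input ssh
        better(config-line)#login local

    If crypto key generate is genuinely absent despite the advsecurityk9 feature set, that points at the image or license rather than the configuration.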

  • Windows 7 & Photoshop CS5.1 - "Fonts missing" issue - I have the font!! (sort of)

    - by Tigue Von Bond
    I've noticed a really aggravating issue with Adobe Photoshop CS5.1 on at least two occasions. I downloaded a layered PSD file to work with; the release notes directed me to a download page for the font used, which was Futura Medium Condensed. I checked and did not have any Futura fonts at all, so I downloaded and installed the font from the source provided by the provider of the PSD. I closed and reopened Photoshop, and when I open the PSD file I get an error saying:

        Some text layers contain fonts that are missing. These layers will need to have the missing fonts replaced before they can be used for vector based output.

    I then go to edit the text layer and receive:

        The following fonts are missing for text layer "discount": Futura CondensedExtraBold. Font substitution will occur. Continue?

    If I click OK, it substitutes Myriad Pro for this layer. Didn't I download the right font? I go into the font dropdown and see I have a font with a slightly different name: "Futura-CondensedExtraBold-Th Regular".

    I have also seen this issue with Helvetica. I received a PSD file, got the same "some text layers contain fonts that are missing..." error dialog when I opened the file, and when I went to edit a layer with text I got:

        The following fonts are missing for text layer "Home": Helvetica. Font substitution will occur. Continue?

    I click Continue, it substitutes Myriad Pro, and I check my font list: sure enough I have a bunch of Helvetica fonts, none exactly named "Helvetica".

    Is this a common issue? Googling it yielded a few people with similar problems (I think all on Macs) but either no concrete help or no response. Is it that the two font names aren't EXACT matches? If that is the case, is there any way of setting up Photoshop to substitute more intelligently, or even to set up some sort of mapping (if "Helvetica" then substitute "Helvetica Lt Std")? Is there anything else, maybe something that I am not thinking of?

  • Redirection of outbound UDP port for NTP

    - by pboin
    For my residential service, I changed ISPs to Zoom/Armstrong. Just after that, my NTP daemons stopped working. I dug deep and diagnosed the problem: unprivileged ports are getting out, privileged ones are not. When I run 'ntpdate', for example, I go out on a high, unprivileged port and get a response on UDP 123. That's fine. The 'ntpd' daemon, though, expects to go out on 123 and get its reply there as well. This must be a common problem, because it's directly addressed in the NTP troubleshooting guide. Just to see what would happen, I wrote a detailed email to the general support address at Armstrong. They replied almost immediately with a complete technical answer! They have everything <1024 blocked, except for a few ports to support outbound VPN. So, the question: can I use iptables to essentially rewrite my outbound UDP 123 up to 2123 or something like that? If I do, does there need to be a corresponding 2123-to-123 rule to translate the reply? This seems like NAT, but with ports, not addresses. True, I could run ntpdate from cron, but that loses all of the adjustment smarts of NTP.
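
    For what it's worth, a minimal iptables sketch of exactly that idea (assuming eth0 is the WAN interface; 2123 is just an example value). MASQUERADE with --to-ports rewrites the source port on the way out, and connection tracking reverses the translation on the reply, so no explicit 2123-to-123 rule should be needed:

        # Rewrite ntpd's outbound source port 123 to an unprivileged port.
        # Conntrack maps the server's reply back to local port 123 automatically.
        iptables -t nat -A POSTROUTING -o eth0 -p udp \
            --sport 123 --dport 123 -j MASQUERADE --to-ports 2123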

  • Deploying ASP.NET MVC to Windows Server 2003

    - by pete the pagan-gerbil
    Hi, I have a problem with an MVC 2 website on Windows Server 2003 running IIS 6. It is externally hosted, but we have a 2003 server internally for testing. The internal server runs the website fine; the external server gives a 403 ("website declined to show this page") error when navigating to the root of the site, and a 404 if I try to navigate directly to a page resource. I have tried the wildcard ISAPI mapping and extension mapping, and a couple of other common checks (I forget exactly which now; most of them were already set correctly), but so far no joy. All the settings can be replicated on our internal server and the pages return properly. IIS logs just show exactly what the browser shows: 404 errors and 403s. I've read about a different level of trust required for an MVC application compared to a WebForms application. How can I check permissions and trust levels on the external and internal servers (assuming I am able to check that), and if that would cause these errors, what are the minimum levels that MVC requires? Failing that, what else might be causing this error, for me to try out?
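
    One way to compare trust levels, as a sketch (assuming you can edit web.config on both servers, or at least read the machine-level config): ASP.NET trust is set with the <trust> element, and ASP.NET MVC 2 is documented to run under Medium trust, so anything below that is a candidate culprit.

        <!-- web.config: check/set the application's trust level -->
        <configuration>
          <system.web>
            <!-- MVC 2 needs at least Medium; Full is the usual default -->
            <trust level="Medium" />
          </system.web>
        </configuration>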

  • Puppet - Is it possible to use a global var to pull in a template with the same name?

    - by Mike Purcell
    I'm new to Puppet. As such, I am trying to work out the best way to set up my manifests so that they make sense. Following the DRY (don't repeat yourself) principle, I am trying to load common directives in one template, then load in environment-specific directives from a file matching the environment. Basically like this:

        # nodes.pp
        node base_dev {
            $service_env = 'dev'
        }
        node 'service1.ownij.lan' inherits base_dev {
            include global_env_specific
        }

        class global_env_specific {
            include shell::bash
        }

        # modules/shell/bash.pp
        class shell::bash inherits shell {
            notify{"Service env: ${service_env}": }
            file { '/etc/profile.d/custom_test.sh':
                content => template('_global/prefix.erb', 'shell/bash/global.erb', 'shell/bash/$service_env.erb'),
                mode    => 644
            }
        }

    But every time I run puppet agent --test, Puppet complains that it can't find the shell/bash/$service_env.erb file, even though I double-checked that it exists. I know the var is accessible, because the notify statement outputs the expected value, so I suspect I am doing something that is not allowed. I know I could have a single template.erb and pass variables to the template, which would work in this case because the custom.sh file is small and doesn't change much across environments, but for more complex configs (httpd, solr, etc.) I'd prefer to access environment-specific files. I am also aware that I can specify environment-specific module paths, but I'd prefer to handle this behavior at the template level instead of having several closely named directories. Thanks.
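
    A likely culprit (an educated guess from the manifest as quoted, not verified against this setup): in Puppet, variables only interpolate inside double-quoted strings, so 'shell/bash/$service_env.erb' is passed to template() literally, dollar sign and all. A sketch of the fix:

        file { '/etc/profile.d/custom_test.sh':
            # double quotes (and braces) let $service_env interpolate
            content => template('_global/prefix.erb',
                                'shell/bash/global.erb',
                                "shell/bash/${service_env}.erb"),
            mode    => 644
        }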

  • URL Redirect Configuration in Virtualhost for a Single Page Web Application

    - by fenderplayer
    I have a web application under development that I am running locally. The home page of the application is fetched with the following URL:

        http://local.dev/myapp/index.shtml

    When the app runs, JavaScript on the page maintains the URL and the app state internally. Some of the other URLs read as:

        http://local.dev/myapp/results?param1=val1&param2=val2
        http://local.dev/myapp/someResource

    Note that there are no pages named results.html or someResource.html on my web server. They are just made-up URLs to simulate RESTfulness in the single-page app. All the app code - JavaScript, CSS, etc. - is present in the index.shtml file. So, essentially, the question is how I can redirect all requests to the first URL above. Here's what the vhost configuration looks like:

        <VirtualHost 0.0.0.0:80>
            ServerAdmin [email protected]
            DocumentRoot "/Users/Me/mySites"
            ServerName local.dev

            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(myapp|myapp2)\/results\?.+$ $1/index.shtml [R=301,L]

            <Directory "/Users/Me/mySites/">
                Options +Includes Indexes MultiViews FollowSymlinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog "/private/var/log/apache2/error.log"
            CustomLog "/private/var/log/apache2/access.log" common
        </VirtualHost>

    But this doesn't seem to work. Requesting the other URLs directly results in a 404 error.
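
    A sketch of one common fix, under the assumption that the goal is "any non-file request under /myapp serves index.shtml": the pattern in a RewriteRule never matches the query string, so a pattern ending in \?.+$ can never succeed. Match the path only and let everything fall through to the index:

        RewriteEngine On
        # serve real files as-is
        RewriteCond %{REQUEST_FILENAME} !-f
        # anything else under myapp or myapp2 falls through to its index;
        # an internal rewrite (no R=301) keeps the pretty URL in the browser
        RewriteRule ^/(myapp|myapp2)/ /$1/index.shtml [L]

    Note that in virtual-host context the matched path carries a leading slash, unlike in .htaccess context.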

  • Apache serving empty gzip with assets produced by Rails Asset Pipeline

    - by PizzaPill
    I followed the steps described in the blog post The Asset Pipeline, from development to production and tweaked them to my environment. The two important files are:

    /etc/apache/sites-available/example.com:

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot "/var/www/sites/example.com/current/public"
            ErrorLog "/var/log/apache2/example.com-error_log"
            CustomLog "/var/log/apache2/example.com-access_log" common

            <Directory "/var/www/sites/example.com/current/public">
                Options All
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>

            <Directory "/var/www/sites/example.com/current/public/assets">
                AllowOverride All
            </Directory>

            <LocationMatch "^/assets/.*$">
                Header unset Last-Modified
                Header unset ETag
                FileETag none
                ExpiresActive On
                ExpiresDefault "access plus 1 year"
            </LocationMatch>

            RewriteEngine On
            # Remove the www
            RewriteCond %{HTTP_HOST} ^www.example.com$ [NC]
            RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]
        </VirtualHost>

    /var/www/sites/example.com/shared/assets/.htaccess:

        RewriteEngine on
        RewriteCond %{HTTP:Accept-Encoding} \b(x-)?gzip\b
        RewriteCond %{REQUEST_FILENAME}.gz -s
        RewriteRule ^(.+) $1.gz [L]

        <FilesMatch \.css\.gz$>
            ForceType text/css
            Header set Content-Encoding gzip
        </FilesMatch>

        <FilesMatch \.js\.gz$>
            ForceType text/javascript
            Header set Content-Encoding gzip
        </FilesMatch>

    But Apache seems to send empty gzip files: the test site loses all its styles, and Firebug doesn't find any content for the CSS files. Although if I call the assets path directly, I get some gibberish that looks like binary data. If I move the .htaccess file away, everything is back to normal. How can I find out where/what went wrong, or do you have any suggestions as to what error I made?

        > apache2 -v
        Server version: Apache/2.2.14 (Ubuntu)
        Server built:   Mar 5 2012 16:42:17
        > uname -a
        Linux node0 2.6.18-028stab094.3 #1 SMP Thu Sep 22 12:47:37 MSD 2011 x86_64 GNU/Linux
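
    One thing worth ruling out (a guess based on the "binary gibberish" symptom, not a confirmed diagnosis): if mod_deflate is active, the pre-gzipped .gz files may be compressed a second time, which browsers then fail to decode. A minimal sketch to exempt them, added to the same .htaccess:

        <FilesMatch \.(css|js)\.gz$>
            # prevent mod_deflate from double-compressing pre-gzipped assets
            SetEnv no-gzip 1
        </FilesMatch>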

  • Is there a historical computer peripherals or accessories museum or even just a current list?

    - by zimmer62
    Thinking about all the unique and different peripherals I've owned over the years, from ISA capture cards to parallel-port-controlled shutter glasses for 3D games, I've seen many, many accessories and computer peripherals come and go. The nostalgia of these things is a lot of fun. I tried to find some sort of historical timeline or list, but what mostly turned up was computers themselves. I'm more interested in the mice, scanners, the weird adapters that shouldn't exist, short-run very rare products, strange devices from computer shows in the 80's and 90's... hardware you might find in a geek's basement that would be completely useless now, but was the coolest thing around when it was new. An example would be a drawing tablet I had for my TI-99 computer, or the audio tape player accessory for a C64 which let you save files to audio tapes, or an ISA card that did the same for PCs hooked up to a VCR. Remember that IBM PCjr upgrade kit that added a floppy drive, more memory and the AT switch in the back? I'd love to find either a wiki or an already-assembled list that contains many of these weird (or common) accessories. I've had so many over the years that I suppose I could start a wiki here if such a list doesn't already exist.

  • Does this mean the router is faulty?

    - by Ashfame
    I have a router to which my desktop (running Ubuntu) is connected via LAN, and which I also use from my phone via wifi. Sometimes the LAN connection will stop working for no reason while the wifi works fine, and the problem will resolve itself on its own. Since last night, the router has been restarting again and again on its own, so I lodged a complaint, and they said the router is faulty and will be replaced. But I know they don't really understand how these things work and are just shooting an arrow in the dark. These restarts have happened for the first time; the LAN-wifi issue described earlier is a common one (though not a frequent one). So is the router faulty, or is there some issue on my ISP's side which will continue to persist even after they change the router? My best guess is that they will replace it with an older refurbished router which will tend to give me more trouble in the coming time, so it's better to have it changed only if it really is faulty (this one is new - 6 months old - and I am its first-hand user). I am happy to provide any details.

  • Dell Latitude D510 Runs From Battery But Not AC Adapter

    - by Jason George
    I have a Dell Latitude D510 that went belly up around two years ago. It will run from the battery; however, the wall adapter will neither power the machine nor charge the battery. Once the battery is dead, the machine is dead. Since it died I've searched repeatedly for solutions. I've tried a new AC adapter and even removed and replaced the DC jack, thinking one of the solder joints might be bad. Both to no avail. After two years of searching I finally found the answer today. Since it's such a simple fix, and I had such a hard time finding it, I wanted to post the info for others (as it is apparently a common issue with the D510).

    -----SOLUTION-----
    It seems this is commonly caused by a cracked solder joint at pin 1 of an inductor filter pair (FL2) near the power jack. Pins 1 and 4 are ground and pins 2 and 3 are power. There should be 20V from pin 1 to pins 2 and 3. Anything less indicates a cracked joint that is increasing resistance and dropping the supply voltage. The repair simply requires reflowing all four pins, with a little added solder for security. Detailed instructions can be found in the write-up "Dell Latitude D510 solder problem".

  • How to set up a PRIVATE vimwiki on Dropbox.com

    - by Zongheng Yang
    Hi everyone, I assume those who are reading this page know what vimwiki and dropbox.com are and what they are for, so I'll go directly to my confusion. The common way of setting up a PRIVATE vimwiki on Dropbox is simply to put your vimwiki directories under the Dropbox folder (but not Dropbox/Public/, because that would be PUBLIC). Dropbox allows directly viewing HTML with a dropbox.com/* URL: for example, an index.html can be accessed by the URL https://dl-web.dropbox.com/get/Wiki/html/index.html?w=bfead71a, with a specified string, ?w=bfead71a, appended after the file name. Hence, if inside index.html there is a reference to A.html, which is located in the same folder index.html is in, it has to be accessed using some URL like https://dl-web.dropbox.com/get/Wiki/html/A.html?w=SPECIFIED_STRING. But it is seemingly impossible to hack vimwiki to correct the hrefs in the converted HTML files this way. Is there some approach that can resolve this problem? I hope I've made myself clear. If you have any questions, please ask me for further explanation. Thank you!

  • server 2008 r2 stuck on installing updates

    - by volody
    I have a 2008 R2 64-bit server that is stuck on installing updates. It has shown the message "Please do not power off or unplug your machine. Installing 57 of 61..." for 3 hours now. I tried to connect to this server using mmc:

        runas /user:AdministratorAccountName@ComputerName "mmc %windir%\system32\compmgmt.msc"
        RUNAS ERROR: Unable to run - mmc C:\Windows\system32\compmgmt.msc
        1311: There are currently no logon servers available to service the logon request.

    What else can I do?

    Update: Remote Desktop does not work. It just disappears after entering the username and password from a Win7-64 computer.

    Update: I have disconnected the server. Now Server Manager shows the message "Unexpected error refreshing Server Manager: The remote procedure call failed." in Roles Summary. The event log shows:

        Could not discover the state of the system. An unexpected exception was found:
        System.Runtime.InteropServices.COMException (0x800706BE): The remote procedure call failed. (Exception from HRESULT: 0x800706BE)
           at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo)
           at Microsoft.Windows.ServerManager.ComponentInstaller.ThrowHResult(Int32 hr)
           at Microsoft.Windows.ServerManager.ComponentInstaller.CreateSessionAndPackage(IntPtr& session, IntPtr& package)
           at Microsoft.Windows.ServerManager.ComponentInstaller.InitializeUpdateInfo()
           at Microsoft.Windows.ServerManager.ComponentInstaller.Initialize()
           at Microsoft.Windows.ServerManager.Common.Provider.RefreshDiscovery()
           at Microsoft.Windows.ServerManager.LocalResult.PerformDiscovery()
           at Microsoft.Windows.ServerManager.ServerManagerModel.CreateLocalResult(RefreshType refreshType)
           at Microsoft.Windows.ServerManager.ServerManagerModel.InternalRefreshModelResult(Object state)

    Update: A manual update hangs on "Installing update 58 of 62... Security Update for Windows Server 2008 R2 x64 Edition (KB979309)".

    Update: It could be an issue with the admin user having been renamed.

  • Running phpmyadmin xampp Ubuntu 12.10

    - by Luigi Tiburzi
    I know this is a common problem and there are many solutions on the web, but I've tried everything and nothing is working: I can't get phpMyAdmin running on my machine. I installed XAMPP through:

        sudo tar xvfz ./Downloads/xampp-linux-1.8.1.tar.gz -C /opt

    Then I did the chmod trick that's supposed to put an end to access issues, and I changed the default location of my PHP projects from /var/www to Dropbox/php. Then I started XAMPP in the usual way:

        sudo /opt/lampp/lampp start

    When I try to run one of my PHP projects, the output on the web is fine, but if, for example, I browse to localhost, I get "It works" and not the usual XAMPP interface. Most of all, when I try to access localhost/phpmyadmin I get the login page, insert the username (root) and password, and then get:

        You don't have permission to access /phpmyadmin/index.php on this server.
        Apache/2.2.22 (Ubuntu) Server at localhost Port 80

    I tried the "Require all granted" trick and some others, but nothing is working. I even tried to uninstall phpmyadmin and reinstall it, but that isn't working either. I don't know how to proceed. Thanks for your help.
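
    Two hedged things worth checking, based on the symptoms: the stock "It works" page and the "Apache/2.2.22 (Ubuntu) ... Port 80" footer usually come from Ubuntu's own Apache rather than XAMPP's bundled one, so the lampp instance may not actually own port 80; and in XAMPP 1.8.x the phpMyAdmin directory is restricted to local access in its own config file. A sketch:

        # check whether Ubuntu's bundled Apache is holding port 80
        sudo service apache2 stop
        sudo /opt/lampp/lampp restart

        # /opt/lampp/etc/extra/httpd-xampp.conf -- relax the default
        # access restriction on phpMyAdmin (Apache 2.2 syntax):
        #   <Directory "/opt/lampp/phpmyadmin">
        #       Order allow,deny
        #       Allow from all
        #   </Directory>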

  • DNS Zone file and virtual host question (repost because my question got moved and now I can't comment on the original for some reason)

    - by Jake
    Sorry, this is a repost: the original question got moved here from Stack Overflow, and for some reason I can't comment or respond to answers on that one anymore. Hi all, I'm trying to set up a virtual host for redmine.SITENAME.com. I've edited the httpd.conf file and now I'm trying to edit my DNS settings, but I'm not sure exactly what to do. Here's a snippet of what's already in the named.conf file (the file was made by someone else, who is unreachable):

        zone "SITENAME.com" {
            type master;
            file "SITENAME.com";
            allow-transfer {
                ip.address.here.00;
                common-allow-transfer;
            };
        };

    I figure if I want to get redmine.SITENAME.com working, I need to copy that entry and just replace SITENAME.com with redmine.SITENAME.com - but will that work? I was under the impression I needed a .db file, but I don't see any reference to one in the current named.conf file. I also don't see any .db files, or files named SITENAME, in named.conf's directory. Any ideas where these elusive pre-existing db files could be?
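
    For what it's worth, a sketch of the usual approach (the records below are hypothetical, and the zone file named by file "SITENAME.com" is typically found under the directory set in named.conf's options block): you don't add a new zone for a subdomain, you add a record to the existing SITENAME.com zone file:

        ; inside the SITENAME.com zone file
        redmine   IN  A      203.0.113.10      ; same IP the vhost answers on
        ; or, pointing at the existing host instead:
        ; redmine IN  CNAME  SITENAME.com.
        ; remember to increment the zone's SOA serial and reload named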

  • Using my own Postfix, filtering spam and getting all the mail into my ISP's inbox

    - by djechelon
    Hello, I currently own a domain bought via GoDaddy.com, which provides me a basic email setup for the most common needs. I configured it to forward all mail for [email protected] to [email protected]. I also own a virtual server with a running Postfix that I use for a specific website (all mail to somedomain.com gets forwarded via LMTP to a program written by me). Since I've recently been harassed by spammers, since GoDaddy doesn't seem to filter spam, and since my Windows Phone's Pocket Outlook cannot filter spam, I would like to use SpamAssassin to filter inbound spam, by changing my domain's MX records to point to my server. My ideal setup is the following:

    - All mail delivered to somedomain.com gets redirected via LMTP as usual, via the virtual transport, without any spam check
    - All mail to [email protected] gets redirected to [email protected] after a severe spam check
    - I don't care about [email protected], since I use just one address for now
    - I would like to train SpamAssassin with customized spam rules, possibly based on the presence of certain keywords (links to certain unsubscribe pages I found recurring)

    I currently have Postfix configured with:

        transport:
            somedomain.com     lmtp:[127.0.0.1]:8025
            .somedomain.com    error: Cannot accept mail for this domain
        relay:
            somedomain.com     OK

    (I guess I should add mydomain.com OK too.)

        virtual:
            @mydomain.com      [email protected]

    (This looks like a catch-all rule; that's OK, as per requirement 3.)

    I installed SpamAssassin; I can do rcspamd start and set it to start at boot, but I don't know if there is anything else to do to use it from Postfix, nor how to apply requirement 1 (only mail to mydomain.com gets filtered). I also tried to send an email via telnet to make sure my settings are ready for the MX change. I received the message into my account, but I found that it went through secureserver.net, as if Postfix didn't rewrite the destination but simply relayed the message. Thank you in advance. I'm no expert in SpamAssassin, and I have little experience with Postfix (enough to avoid making my server an open relay).
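
    A sketch of the usual Postfix/SpamAssassin glue, hedged: paths and the pipe-based approach vary by distro, and this uses spamc talking to the spamd daemon. Per-domain selectivity can be had with an access map whose action is FILTER, so only mydomain.com recipients are routed through the filter and the somedomain.com LMTP path is untouched:

        # main.cf
        smtpd_recipient_restrictions =
            permit_mynetworks,
            reject_unauth_destination,
            check_recipient_access hash:/etc/postfix/spam_filter_access

        # /etc/postfix/spam_filter_access  (run postmap on it after editing)
        mydomain.com        FILTER spamfilter:dummy

        # master.cf -- pipe filtered mail through spamc, then re-inject
        spamfilter unix - n n - - pipe
            flags=Rq user=spamd argv=/usr/bin/spamc -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}

    Custom keyword rules and Bayes training then live in SpamAssassin's local.cf and sa-learn, independent of Postfix.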

  • Requests per second slower when using nginx for load balancing

    - by Ed Eliot
    I've set up nginx as a load balancer that reverse proxies requests to 2 Apache servers. I've benchmarked the setup with ab and am getting approx 35 requests per second with requests distributed between the 2 backend servers (not using ip_hash). What is confusing me is that if I query either of the backend servers directly via ab, I get around 50 requests per second. I've experimented with a number of different values in ab, the most common being 1000 requests with 100 concurrent connections. Any idea why traffic distributed across 2 servers would result in fewer requests per second than hitting either directly? Additional info: I've experimented with worker_processes values of between 1 and 8, worker_connections between 1024 and 8092, and have also tried keepalive 0 and 65. My main conf currently looks like this:

        user www-data;
        worker_processes 1;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        worker_rlimit_nofile 8192;

        events {
            worker_connections 2048;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            sendfile on;
            keepalive_timeout 0;
            tcp_nodelay on;
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    I've got one virtual host (in sites-available) that redirects everything under / to the 2 backends across a local network.
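
    One hedged observation: keepalive_timeout 0 forces every benchmarked request to pay full TCP setup on the client side, and without backend keep-alive each proxied request pays it again towards Apache, which alone can explain proxied throughput landing below direct-hit throughput. A sketch of a keep-alive-friendly proxy config (the upstream keepalive directive needs nginx >= 1.1.4; names and addresses are illustrative):

        upstream apache_backends {
            server 192.168.0.10:80;
            server 192.168.0.11:80;
            keepalive 32;              # pool of idle connections to backends
        }

        server {
            listen 80;
            keepalive_timeout 65;      # keep client connections open too

            location / {
                proxy_pass http://apache_backends;
                proxy_http_version 1.1;          # required for backend keep-alive
                proxy_set_header Connection "";  # don't forward "Connection: close"
            }
        }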

  • OpenVPN Keeps Crashing

    - by Frank Thornton
        Oct 20 21:00:44 sb1 openvpn[2082]: <MY_IP>:28523 [vpntest] Peer Connection Initiated with [AF_INET]<MY_IP>:28523
        Oct 20 21:00:44 sb1 openvpn[2082]: vpntest/<MY_IP>:28523 MULTI_sva: pool returned IPv4=10.8.0.6, IPv6=(Not enabled)
        Oct 20 21:00:44 sb1 openvpn[2082]: <MY_IP>:28522 WARNING: 'link-mtu' is used inconsistently, local='link-mtu 1576', remote='link-mtu 1376'
        Oct 20 21:00:44 sb1 openvpn[2082]: <MY_IP>:28522 WARNING: 'tun-mtu' is used inconsistently, local='tun-mtu 1532', remote='tun-mtu 1332'
        Oct 20 21:00:45 sb1 openvpn[2082]: <MY_IP>:28522 [vpntest2] Peer Connection Initiated with [AF_INET]<MY_IP>:28522
        Oct 20 21:00:45 sb1 openvpn[2082]: vpntest2/<MY_IP>:28522 MULTI_sva: pool returned IPv4=10.8.0.10, IPv6=(Not enabled)
        Oct 20 21:00:46 sb1 openvpn[2082]: vpntest/<MY_IP>:28523 send_push_reply(): safe_cap=940

    Client file:

        client
        dev tun
        proto tcp
        remote <IP> 443
        resolv-retry infinite
        nobind
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1410
        persist-key
        persist-tun
        auth-user-pass
        comp-lzo

    Server:

        port 443           #- port
        proto tcp          #- protocol
        dev tun
        tun-mtu 1500
        tun-mtu-extra 32
        reneg-sec 0
        #mtu-disc yes
        mssfix 1410
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        plugin /etc/openvpn/openvpn-auth-pam.so /etc/pam.d/login
        #plugin /usr/share/openvpn/plugin/lib/openvpn-auth-pam.so /etc/pam.d/login  #- Comment this line if you are using FreeRADIUS
        #plugin /etc/openvpn/radiusplugin.so /etc/openvpn/radiusplugin.cnf  #- Uncomment this line if you are using FreeRADIUS
        client-to-client
        client-cert-not-required
        username-as-common-name
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        keepalive 3 30
        comp-lzo
        persist-key
        persist-tun

    What is causing the VPN to keep dropping the connection and then reconnecting?
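
    A hedged reading of those logs: the warnings say the peer at :28522 negotiated link-mtu 1376 / tun-mtu 1332 while the server expects 1576/1532, which is what you'd see if that client's config is missing the tun-mtu/tun-mtu-extra/mssfix trio. MTU mismatches, especially over proto tcp, are a plausible cause of stalls and reconnect loops. A sketch of the directives that should match on every client, mirroring the server:

        # client config -- must mirror the server's MTU settings exactly
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1410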

  • Ubuntu 10.04->10.10 in failed state - how to recover?

    - by Harvey
    I was running Ubuntu 10.04 and attempted to upgrade to 10.10. I have a really slow connection (DSL, 128 kbit/s) and copying the upgrade files took about 26 hours, so I of course let it run unattended. When I came back, I found the following 3 dialogs:

    (1) "Could not install the upgrades. The upgrade has aborted. Your system could be in an unusable state. A recovery will run now (dpkg --configure -a)."

    (2) "gpk-update-icon. Distribution upgrades available: maverick 10.10 (stable) [more information] [Do not show this again] [Cancel] [Ok]"

    (3) "gpk-update-icon. Security updates available. The following important updates are available for your computer: libwebkit-1.0-2-dbg - Web content engine library for Gtk+ - Debugging symbols; libcupsimage2 - Common UNIX Printing System(tm) - Raster image library ..."

    What is the best response to all of this? I went through something similar in an attempted network upgrade from 8.04 to 10.04 and had to reload the unbootable machine fresh from distribution media (all data was lost). I'd like to avoid that here. I have not yet responded to the dialogs, and I want to make sure the system is still bootable and that I don't lose my data this time.
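
    A cautious sketch of the usual recovery sequence in this situation (standard apt/dpkg commands run from a terminal before rebooting; generic advice, not specific to this particular failure):

        sudo dpkg --configure -a        # finish half-configured packages
        sudo apt-get -f install         # resolve broken dependencies
        sudo apt-get dist-upgrade       # complete the remaining upgrades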

  • Erratic WiFi 2.4 GHz channel spikes, what gives?

    - by Francis W. Usher
    Sorry guys, first a gripe about my neighbor's WiFi access point (it is related): they totally hog the center nine 2.4 GHz channels (3-11), centered right at 7! I know the outer regions of the signal don't make as much of a difference, and technically they're running channels 5 & 9. Anyway, their signal is clearly interfering with mine, which is necessarily centered at 3 or 11 to evade their interference. I guess it's somewhat a case of access point envy: they happen to have both a stronger signal and a higher data rate, while occupying twice the bandwidth that I do. Getting to the point: I've noticed that they tend to sit nice and pretty, centered at 7, but they definitely auto-select their channel, and the auto-selection algorithm tends to shift towards the higher channels; hence I decided to pick channel 3, and I don't get so many intermittent lag spikes any more. Anyway, the thing that weirded me out was the reason they have to auto-select sometimes: unexplained, powerful (talking on the order of 0 dB here) giant spikes of 2.4 GHz activity in consistent regions of the spectrum. I don't think it's just noise, since my wireless monitoring software registers a MAC address, a manufacturer, and usually a fairly coherent ASCII name... and it seems to be a fairly well-confined signal. But these signals are fairly common, and they do some weird stuff to my signal. So my question is: what are these signals? Where are they coming from? Where are they going? Why are they so ridiculously strong? Why don't they ever last very long? In the inSSIDer screenshot I took, I am labeled "me", my greedy neighbor is labeled "neighbor", and the 2 quasar signals are labeled "WTF?".

  • SSL Connection Error

    - by toffee.beanns
    I have purchased a Comodo SSL cert and have submitted the Certificate Signing Request (CSR) generated by my server to the SSL management site. It returned 3 files:

        AddTrustExternalCARoot.crt
        PositiveSSLCA2.crt
        www_mydomainname_com.crt

    I have uploaded them to my /etc/ssl/ssl-certs folder, updated my virtual host in sites-available, and restarted accordingly:

        NameVirtualHost 107.167.120.195:80   #sample ip address
        NameVirtualHost 107.167.120.195:443  #sample ip address

        .........
        #normal http virtual host (working well)

        <VirtualHost 107.167.120.195:443>
            ServerAdmin [email protected]
            ServerName mydomainname.com
            ServerAlias www.mydomainname.com
            DocumentRoot /var/www/mydomainname
            SSLEngine on
            SSLCertificateFile /etc/ssl/ssl-certs/www_mydomainname.com.crt
            SSLCertificateKeyFile /etc/ssl/ssl-certs/server.key
            SSLCertificateChainFile /etc/ssl/ssl-certs/PositiveSSLCA2.crt
        </VirtualHost>

    I have also run 'a2enmod ssl', and it's enabled. This is the error I get when I access the page over https in Chrome:

        SSL connection error
        Error code: ERR_SSL_PROTOCOL_ERROR
        Unable to make a secure connection to the server. This may be a problem with the server, or it may be requiring a client authentication certificate that you don't have.

    I have also checked my Apache log files, and there seem to be errors saying that the Common Name (CN) does not match the server name:

        RSA server certificate CommonName (CN) `www.mydomainname.com' does NOT match server name!?
        Invalid method in request \x16\x03\x01

    What should I do?
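
    Two hedged checks that fit these symptoms: "Invalid method in request \x16\x03\x01" is a TLS ClientHello arriving at a port Apache is treating as plain HTTP, which usually means Apache isn't actually listening for SSL on 443 (for example, no Listen 443 in ports.conf). The CN warning can be inspected directly with openssl:

        # confirm Apache is set to listen for SSL on 443
        grep "Listen" /etc/apache2/ports.conf

        # inspect the subject/CN the installed certificate actually carries
        openssl x509 -in /etc/ssl/ssl-certs/www_mydomainname_com.crt -noout -subject

        # test the live handshake from the server itself
        openssl s_client -connect localhost:443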

  • Setting up xpra for client use in OS X

    - by Jonathan
    I've been trying to get xpra to run on OS X for the last few days, to connect to my Ubuntu server. Note that there's a GUI for it called shifter, but that (at least on OS X) is still far too buggy. For those who don't know what xpra is: if you know what screen is, xpra is like screen for GUI X Windows apps, tunneled over ssh. It renders a remote X app locally, so it's faster than sending a series of compressed screenshots (like VNC), but with xpra you can disconnect and reconnect on different computers. To get the basic functionality you can just type "ssh -X server.location" and any GUI app you open from the command line will open locally. I've been able to get xpra to build by doing the following:

    1. Download pari-all-0.0.6.tar.gz from the xpra site listed under upstream and untar it.
    2. Issue the following MacPorts command (dependencies thanks to RogBlog):

        sudo port install python25 python26 py26-pyrex py26-gtk xorg-libXtst py25-gobject py25-gtk py25-nose py26-nose xorg-libXdamage xorg-libXcomposite xorg-libXtst xorg-libXfixes

    3. In the upstream list of v0.0.06 patches (NOT 0.0.8pre!) on the xpra site listed above, download mswindows-conditional-pyrex.patch.
    4. Open the patch with your favorite text editor and change the single occurrence of "win" in it to "darwin".
    5. Apply the patch to setup.py.
    6. Run do-build on the command line.

    Now where I'm stumped: how do I run xpra? The build produces a subdirectory called install/bin in which xpra is located, but when I try to run it I get the following error:

        Traceback (most recent call last):
          File "./xpra", line 4, in
            import xpra.scripts.main
        ImportError: No module named xpra.scripts.main

    There is a file called main.py under xpra/scripts, but I don't know any Python, and I'm not sure if this is what it's looking for, or what to do with it even if it is. My goal is to set up xpra so I can install it into /usr/bin (or some other common path for executables) and execute it whenever I please. What do I do next?
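
    A hedged guess at the immediate error: the xpra launcher is a Python script that imports the xpra package, and the freshly built package isn't on Python's module search path. Pointing PYTHONPATH at the build's site-packages directory should get it running (the exact python2.x path below is an assumption; check what the build actually produced under install/lib):

        # from the untarred source directory, after do-build
        export PYTHONPATH="$PWD/install/lib/python2.6/site-packages:$PYTHONPATH"
        ./install/bin/xpra --help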

  • Port 5357 TCP on Windows 7 professional 64 bit?

    - by Registered
    Is there a reason this port is open? A quick Nmap scan and a Nessus scan both reveal it's open - why? Are there any ramifications if I close this port via the firewall rule set? Does anyone here know more about this port besides what Google turns up?

    1) http://www.symantec.com/connect/blogs/who-left-tunnel-door-open-windows-firewall-vista-0 - I know the talk is about Vista, but I am pretty sure it's the same port on 7, also.

    2) Port 5357 common errors: the port is vulnerable to info-leak problems, allowing it to be accessed remotely by malicious authors. (Web Services for Devices)

    I am blocking this; if I have issues I will just re-enable it. The rule in question is the inbound rule for Network Discovery that allows WSDAPI Events via Function Discovery [TCP 5357]. You just got blocked - until I break something, we will see. Time to re-run Nmap and Nessus.

    Nmap scan: 0 open ports after closing port 5357. Win7 still works for now; one more scan with Nessus just to make sure all is well.

  • Partitioning & Linux

    - by Zac
    Every tutorial on Linux-based partitioning schemes (or just partitioning in general) will tell you that a PC can have either 4 primary partitions, or 3 primaries and 1 extended. They will all also tell you that Linux (in my case, Ubuntu) can be installed on either. It's also come to my attention that it is not too atypical for FHS directories, such as usr/, tmp/, etc/, home/ or var/, to be mounted separately on other partitions. Several questions I am unable to find the answers to, purely for my own edification:

    (1) By "PC", are we really talking about common PC disk types, like IDE or SATA? I guess I'm wondering why PC users are limited to 4 primaries or 3 primaries + 1 extended.

    (2) I'm choking on some basic OS concepts: it is said that a partition can be mounted by a file system or an OS. So I assume this means I can somehow instruct Ubuntu to mount one partition, and then have any part of, say, ReiserFS mounted on another partition? How?

    (3)(a) What about creating swap partitions? Is there too much of a good thing with swap partitioning? If I have 4 GB RAM over a 320 GB disk, what should my swap partition size be, and why?

    (3)(b) Are swap files the only way to create swap space? Wouldn't a Linux partitioning utility allow me to define a partition as being for virtual memory only?

    (4) Why are partitions limited to being "mounted" by just OSes and file systems? Why couldn't I write a program to take up its own, say, 512 MB partition, and then have it invoked or used by an OS installed on another partition?

    Thanks for shedding any light here... it's not critical that I know this stuff, but it's got me thinking incessantly. And when I think incessantly, I...can't......sleep....
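
    On (3)(b), for what it's worth: swap files are not the only option - a partition can be typed and formatted as dedicated swap, which is exactly what partitioning utilities offer. A sketch of both routes (device names are hypothetical):

        # dedicated swap partition (type 82 in fdisk), e.g. /dev/sda3
        mkswap /dev/sda3
        swapon /dev/sda3

        # or a swap file of roughly RAM size (4 GB here)
        dd if=/dev/zero of=/swapfile bs=1M count=4096
        mkswap /swapfile
        swapon /swapfile

        # persist either across reboots via /etc/fstab:
        # /dev/sda3  none  swap  sw  0  0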

  • Redirect port / port 10000 to https apache

    - by Hamid Elaosta
    I have been reading around and trying different configurations to get a request to my server on port 10000 to redirect from http to https. For some reason I can't figure out how to make it happen when I use port 10000, although I can set a rewrite rule for port 80 (implicit) to do it. All I want is for a request to http://127.0.0.1:10000 to redirect me to https://127.0.0.1:10000, but it needs to be written so that it also works when accessed via my domain name externally. My current vhost, the last of many different attempts, is set as follows, but it doesn't seem to work at all:

        <VirtualHost *:10000>
            RewriteEngine On
            RewriteCond %{HTTPS} off
            RewriteRule (.*) https://%{HTTP_POST}%{REQUEST_URI}

            ErrorLog "/var/log/httpd/webmin-redirect_error_log.log"
            CustomLog "/var/log/httpd/webmin-redirect_access_log.log" common
        </VirtualHost>

    I've also tried a few other things, but nothing seems to work; any help would be appreciated.

    EDIT: I already have a rewrite in my httpd.conf that redirects port 80 to https. If I access port 10000 externally it is redirected to https, but from the LAN "http://192.168.0.2:10000" it doesn't.
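
    Two hedged observations about the quoted vhost: %{HTTP_POST} looks like a typo for %{HTTP_HOST} (as written, the rule expands to an empty host), and the rule is missing the redirect flags, so it acts as an internal rewrite rather than sending the browser anywhere. A corrected sketch, assuming something else (Webmin itself, say) answers TLS on 10000 once the redirect is issued, since a single Apache listener can't speak both plain HTTP and TLS on one port:

        <VirtualHost *:10000>
            RewriteEngine On
            RewriteCond %{HTTPS} off
            # HTTP_HOST already carries ":10000" when the client used that port
            RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
        </VirtualHost>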

  • Apache2 doesn't serve PHP-scripts correctly

    - by cmbrnt
    I've run into a problem with my Apache 2.2.16 configuration, running on Debian Squeeze. The problem is that it has stopped serving PHP5 scripts completely. When I try to access the sites with Google Chrome, it instead downloads a file called "download", which contains the contents of the script. This is of course not a good thing. It does serve common HTML files perfectly... I've been at this for quite a while now, and after all the googling and troubleshooting I thought it would be a good time to ask you guys. Here's what I've got:

    - The php5 and libapache2-mod-php5 packages are installed
    - /etc/apache2/mods-available contains both php5.load and php5.conf, and these are symlinked from the mods-enabled directory
    - The /etc/php5/ directory has been left untouched since the installation

    Here's the contents of /etc/apache2/mods-available/php5.load:

        LoadModule php5_module /usr/lib/apache2/modules/libphp5.so

    And /etc/apache2/mods-available/php5.conf:

        <IfModule mod_php5.c>
            <FilesMatch "\.ph(p3?|tml)$">
                SetHandler application/x-httpd-php
            </FilesMatch>
            <FilesMatch "\.phps$">
                SetHandler application/x-httpd-php-source
            </FilesMatch>
            <IfModule mod_userdir.c>
                <Directory /home/*/public_html>
                    php_admin_value engine Off
                </Directory>
            </IfModule>
        </IfModule>

    What am I missing? This is a server with modified virtual hosts and the like, so I might have changed some settings which cause this problem, but simply purging and reinstalling is not an option so far, since the configuration is quite extensive. Any help would be great. Thanks.
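
    A couple of hedged checks that match the "browser downloads the script" symptom: confirm the module is really loaded into the running Apache, and grep the modified vhosts for anything that disables the engine or overrides the handler - a stray php_admin_value engine Off, RemoveHandler, or AddType in a vhost or .htaccess will produce exactly this behavior:

        # is mod_php actually loaded into the running server?
        apache2ctl -M | grep php5

        # anything in the vhosts turning PHP off or re-mapping .php?
        grep -ri "engine off" /etc/apache2/sites-enabled/
        grep -rni "handler\|addtype" /etc/apache2/sites-enabled/

        # re-enable and restart, in case the symlinks are stale
        sudo a2enmod php5 && sudo /etc/init.d/apache2 restart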
