Search Results

Search found 3588 results on 144 pages for 'digital certificate'.

Page 120/144

  • Copying symbolic links and filenames with special characters to NAS

    - by Mr E
    I have a new Western Digital My Book Live NAS. I am trying to copy files
    from an old drive to the NAS. I'm using Ubuntu 12.04 and I've mounted the
    drive by browsing the network in Nautilus and choosing a shared folder
    configured on the NAS. The shared folder is then automatically mounted at
    .gvfs/files on mybooklive. There are two problems so far:

    1. File names and directory names containing certain characters (e.g. :
       or |). Attempting to copy these results in the error message:

           cp: cannot stat `/path/to/destination.filename': Invalid argument

    2. Symbolic links. In Nautilus I get the error message:

           Symlinks not supported by backend

    My questions are:

    1. Can I connect to the NAS or configure the NAS so that I can copy my
       files without this problem? (In case it matters, I don't need Windows
       compatibility.)
    2. If not, what can I do to identify all the problem files?
    3. Can I do anything to automatically fix my filenames?

    Please let me know if any of this needs clarification. I'm not too
    familiar with all of this, so I may have left out some useful information.
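
    For question 2, a minimal sketch: both offending cases can be enumerated
    with find before the copy, assuming the old drive is still mounted
    locally (the path is a placeholder):

        # file or directory names containing characters the SMB/gvfs layer rejects
        find /mnt/olddrive -name '*[:|<>"?*]*'

        # symbolic links, which the gvfs SMB backend refuses outright
        find /mnt/olddrive -type l

    Mounting the share with mount.cifs and its mapchars option, or enabling
    NFS on the My Book Live and mounting that instead, are the usual ways
    around the character restrictions when Windows compatibility is not
    needed.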

  • Migrating to CF9: trouble getting JRun working with SSL

    - by DaveBurns
    I have a client on MX7 who wants to migrate to CF9. I have a dev
    environment for them on my WinXP machine where I've configured MX7 to run
    with JRun's built-in web server. I've had that working for a long time
    with both regular and SSL connections. I installed CF9 yesterday
    side-by-side with the existing MX7 install to start testing. The install
    was smooth and detected MX7, adjusted CF9's port numbers to avoid
    conflicts, etc. Testing started well: MX7 over regular HTTP and SSL still
    worked, and CF9 worked over regular HTTP. But I can't get CF9 to work
    with SSL.

    I installed a new certificate with keytool, Firefox (v3.6) complained
    about it being unsigned, I added it to the exception list, and now I get
    this:

        Secure Connection Failed
        An error occurred during a connection to localhost:9101.
        Peer reports it experienced an internal error.
        (Error code: ssl_error_internal_error_alert)

    I've been Googling that in all variations but can't find much help to get
    past this. I don't see any info in any log files either. FWIW, here's my
    SSL config from SERVER-INF/jrun.xml:

        <service class="jrun.servlet.http.SSLService" name="SSLService">
          <attribute name="enabled">true</attribute>
          <attribute name="interface">*</attribute>
          <attribute name="port">9101</attribute>
          <attribute name="keyStore">{jrun.rootdir}/lib/mykey</attribute>
          <attribute name="keyStorePassword">*deleted*</attribute>
          <attribute name="trustStore">{jrun.rootdir}/lib/trustStore</attribute>
          <attribute name="socketFactoryName">jrun.servlet.http.JRunSSLServerSocketFactory</attribute>
          <attribute name="deactivated">false</attribute>
          <attribute name="bindAddress">*</attribute>
          <attribute name="clientAuth">false</attribute>
        </service>

    Anyone here know of any issues re setting up SSL and CF9? Anyone had
    success with it? Dave
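
    One detail worth double-checking, offered as a hypothesis rather than a
    confirmed fix: JRun's SSLService reads only one password, so the key
    password inside the keystore must equal the store password, and a
    mismatch tends to surface exactly as an internal-error alert. A keystore
    regenerated along these lines (alias and validity are arbitrary) keeps
    the two in sync:

        keytool -genkey -alias mykey -keyalg RSA -keysize 2048 \
            -dname "CN=localhost" -validity 365 \
            -keystore mykey -storepass SAME_SECRET -keypass SAME_SECRET

    Copy the resulting file over {jrun.rootdir}/lib/mykey and restart the
    CF9 instance before retesting.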

  • help with xorg.conf: xrandr on one of two widescreen monitors; rhel5, kde, ATI Radeon X1300

    - by user35997
    Can anyone help me configure my dual-screen monitors for rotation? I have
    xrandr 1.1. I have tried various approaches, but nothing takes. I can't
    even get the xrandr options to show up in KDE's Display control panel.
    Thanks!

    My lspci output:

        03:00.0 VGA compatible controller: ATI Technologies Inc RV516 [Radeon X1300/X1550 Series]

    My current xorg.conf (works, minus screen rotation):

        # Xorg configuration created by system-config-display
        Section "ServerLayout"
            Identifier     "Multihead layout"
            Screen      0  "aticonfig-Screen[0]" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            Option         "Xinerama" "off"
            Option         "Clone" "on"
        EndSection

        Section "Files"
        EndSection

        Section "Module"
        EndSection

        Section "InputDevice"
            Identifier  "Keyboard0"
            Driver      "kbd"
            Option      "XkbModel" "pc105"
            Option      "XkbLayout" "us"
        EndSection

        Section "Monitor"
            ### Comment all HorizSync and VertSync values to use DDC:
            Identifier   "Monitor1"
            VendorName   "Monitor Vendor"
            ModelName    "Dell 2407WFP (Digital)"
            HorizSync    30.0 - 83.0
            VertRefresh  56.0 - 76.0
            Option       "dpms"
        EndSection

        Section "Monitor"
            Identifier   "aticonfig-Monitor[0]"
            Option       "VendorName" "ATI Proprietary Driver"
            Option       "ModelName" "Generic Autodetecting Monitor"
            Option       "DPMS" "true"
        EndSection

        Section "Device"
            Identifier  "Videocard0"
            Driver      "vesa"
        EndSection

        Section "Device"
            Identifier  "Videocard1"
            Driver      "vesa"
            VendorName  "Videocard Vendor"
            BoardName   "ATI Technologies Inc RV516 [Radeon X1300/X1550 Series]"
            BusID       "PCI:3:0:0"
        EndSection

        Section "Device"
            Identifier  "aticonfig-Device[0]"
            Driver      "fglrx"
            Option      "DesktopSetup" "horizontal"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "Videocard0"
            DefaultDepth 16
            SubSection "Display"
                Viewport 0 0
                Depth    24
            EndSubSection
            SubSection "Display"
                Viewport 0 0
                Depth    16
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device     "Videocard1"
            Monitor    "Monitor1"
            DefaultDepth 16
            SubSection "Display"
                Viewport 0 0
                Depth    16
                Modes    "1920x1200" "1280x1024" "800x600"
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "aticonfig-Screen[0]"
            Device     "aticonfig-Device[0]"
            Monitor    "aticonfig-Monitor[0]"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth    24
                Modes    "1920x1200" "1280x1024" "800x600"
            EndSubSection
        EndSection
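
    For what it's worth, RandR 1.1 has no per-output commands at all, so KDE
    has nothing to offer here; the only rotation it can express applies to
    the whole (cloned) screen, and then only if the driver advertises it. A
    quick hedged test:

        xrandr -o left     # rotate the entire screen 90 degrees
        xrandr -o normal   # revert

    If fglrx refuses that, per-monitor rotation under this stack generally
    means waiting for a driver with RandR 1.2 support or rotating via the
    vendor tool instead.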

  • SATA HDD not recognized by BIOS after Windows 7 reinstall

    - by RoliSoft
    I got the Win32.Virut.56 virus, which is very nasty stuff. I reinstalled
    my Win7, but it reappeared somehow. After hours of headache, I was able
    to remove it by booting into a live Ubuntu and running CureIT using Wine.

    I then started reinstalling Windows 7. After the "expanding files" stage
    it rebooted; however, from that point on, my 160 GB Western Digital
    SATAII hard drive was not recognized. The BIOS just freezes at "SATAII 1:
    Detecting...". My other 1.5 TB Seagate SATAII hard drive works correctly.
    I tried switching cables; that didn't help.

    I googled this issue, but what came up were usually firmware problems. I
    can't update the firmware or do anything at all, because if I plug the
    drive in, the machine won't start. My motherboard is an ASRock
    4Core1333-Viiv, if that helps. I'm now stuck on a live Ubuntu. I can't
    install Win7 on the 1.5 TB drive, because it's full of data I need.

    What do you think I could try to make the hdd work again? At the moment,
    I don't have another computer to check whether it recognizes the hdd.

  • does my machine configuration make sense?

    - by user1227914
    I couldn't think of a better place to ask this question, so here it goes.
    We're putting together a dedicated server for a website that will
    initially host the web server and the MySQL database. As the website
    grows, we'll move the database to a different server, and this machine
    will eventually only serve the actual website.

    So the question is: does my configuration look okay? It's the first time
    I'm building a server from scratch, so I want to make sure I don't
    combine components that don't fit or something. Things like: do the
    drives I picked work in the hot-swap bays, etc. What do you guys think?
    Am I good to go with this configuration? :)

    Chassis: Supermicro SuperServer 6016T-MTHF (6x DDR3 SDRAM - ECC DIMM
    240-pin, 2x LGA1366 Socket, Power Provided: 600 Watt, 4 (free) x
    hot-swap - 3.5")

    CPU: Intel BX80614E5620 Xeon E5620 Processor - 4 Core, 2.40GHz, LGA 1366,
    5.86GT/s QPI, 12MB Cache, 64-Bit, 80W, HyperThreading

    Memory: Crucial CT51272BB1339 4GB PC10600 DDR3 Memory - 1333MHz, ECC,
    Registered, 1x4096MB (possibly 3 or 4 of them)

    Hard Drives: Western Digital WD2002FAEX Caviar Black Hard Drive - 2TB,
    3.5", SATA 6Gbps, 7200 RPM, 64MB (possibly 2 or 3).

    Thank you very much for any professional advice :)

  • SYS-5016T-MTFB will not POST without manual assistance (Motherboard: X8STi-F)

    - by Dan
    I have a Supermicro 5016T-MTFB 1U server which I am in the process of
    setting up, but it has a really strange problem. When the system is
    powered on it will not POST until I press the reset button a few times,
    followed by pressing the delete key on the keyboard to "wake it up". If I
    power it on and do nothing, the fans spin up but nothing else happens at
    all.

    After pressing the reset button once, the red "overheat" light comes on
    and blinks, which is supposed to indicate a fan failure - but all the
    fans are working. Pressing reset again usually stops the blinking, and
    the system starts the normal POST routine, but it will not actually get
    to the BIOS screen unless I press delete. If I don't press delete, it
    just continues to hang. After pressing delete it takes me into the BIOS
    setup screen; if I exit without saving changes, I can boot the system
    normally. I was able to successfully install Linux with no trouble...but
    upon rebooting the same problem happened again.

    This board has integrated IPMI, which I thought was the problem, so I
    disabled it via the jumper on the board. That did not help. Each time
    this system powers on, it runs for a second, then turns off again for
    another second, then turns back on again. I don't know why it does that.

    Here is what I put in the system:

    1 x Xeon E5630 (Nehalem) 80W TDP (it's not overheating, CPU temps stay
    under 40 degrees C)
    2 x Kingston 2GB x 3 DDR3-1066 Memory ECC, unbuffered, unregistered
    (kvr1066d3e7sk3/6g)
    1 x Intel X25-M 160 GB
    2 x Western Digital RE3 1TB

  • What determines what resolutions a laptop is willing to output over VGA?

    - by Joshua McKinnon
    I'm responsible for several conference rooms and have set up 1080p
    projectors with both HDMI and VGA connectivity: HDMI for DisplayPort and
    Mini-DisplayPort adapters, and VGA as a fallback, universal option.
    Contrary to what I expected, people seem to have much more trouble with
    the HDMI than VGA, so VGA gets used a lot more than you'd think (even as
    most workstation laptops made in the last 3-4 years have DisplayPort or
    Mini-DisplayPort...). Also to my surprise, VGA outputs 1080p over a 50 ft
    cable run with very minimal degradation on certain laptops - other
    laptops just don't offer 1080p as a resolution choice and top out at
    1600x1200 or something else. Specific example: a ThinkPad W530 will do
    1080p over VGA, a W520 won't (both do 1080p over DisplayPort/mini-DP).

    What determines what resolutions a laptop is willing to output over VGA?
    I'm thinking this will come down to either a video driver that says it
    supports only certain resolutions for output, or limitations of the
    RAMDAC (which wouldn't be in play, at least DAC-wise, on a digital
    output, but WOULD be on VGA, an analog output).

    The basic reason for the question is that I noticed, say, a ThinkPad W520
    with a 1080p built-in display will output 1080p fine over DisplayPort to
    a 1080p projector, but will cap out at 1600x1200 (practically the same
    pixel count, just a little shy) on VGA. Now, this wouldn't be surprising
    at all except SOME laptops have no issue outputting 1080p over VGA, even
    with lower native resolutions.

    Why do I care? Well, if there's some way I could enable it... for
    situations where my users end up using VGA anyway, it's preferable for
    display mirroring if they can output their laptop's native resolution,
    which, you guessed it, is very often 1080p on 15" models.

    DISCLAIMER: This is primarily a curiosity. I'm not claiming 1080p over
    VGA is ideal by any means, but hey, if it works. I've seen HDMI start
    artifacting more over same-length, same-gauge cabling (up to 50' runs in
    certain rooms). If you think this is better suited to SuperUser, please
    move it, but this is framed from an IT standpoint of something that
    affects a real pool of users in a multiple-conference-room, 50+ deployed
    laptop scenario.
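
    On machines where it is the driver's mode list rather than the monitor's
    EDID doing the capping, a custom mode can sometimes be forced. A Linux
    sketch (the output name VGA1 is an assumption; check xrandr -q first):

        cvt 1920 1080 60
        xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
        xrandr --addmode VGA1 1920x1080_60.00
        xrandr --output VGA1 --mode 1920x1080_60.00

    If the mode works when forced, the limit was policy (driver/EDID
    handling), not the DAC; if the image never syncs, the hardware really
    does top out lower.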

  • Win 7 (SP1) Creation of System Image failure... seems absurd (0x80780119)

    - by DSKauai16
    This question seems to keep coming up in Microsoft Answers.
    Unfortunately, it never seems to get solved... on Microsoft Answers.
    Hopefully someone here will have a better idea.

    I'm trying to create a system image. It's not working. I have just
    re-installed Windows as of a few hours ago. I have an admittedly older
    Western Digital 148 GB USB portable HDD which is not just completely
    empty; I even did a long format with the NTFS file system just to be
    sure. The reinstallation of Windows and a couple of programs take up
    40 GB (seen below). When I try to create a System Image onto the USB HDD,
    I am told that there is not enough room. Obviously, this is wrong. The
    screen captures show exactly the steps that lead me to 0x80780119.

    I haven't done anything but put on Windows 7 (32-bit, SP1) and install
    some programs; I haven't even done anything with them. There's almost
    zero in the way of data, and I checked that yes, the HDD accepts data,
    sends data and works with data. I've used the drive successfully for a
    long, long time. What's going wrong?
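
    A hedged pointer rather than a confirmed diagnosis: 0x80780119 is usually
    reported not about the target drive but about free space on one of the
    source volumes being imaged - most often the small System Reserved
    partition, which the shadow copy needs headroom on. Running the same
    backup from an elevated prompt names the volume that fails the check:

        wbadmin start backup -backupTarget:E: -allCritical -quiet

    If System Reserved turns out to be the culprit, freeing or adding a
    little space on that partition is the commonly reported fix.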

  • nginx reverse ssl proxy with multiple subdomains

    - by BrianM
    I'm trying to locate a high-level configuration example for my current
    situation. We have a wildcard SSL certificate for multiple subdomains
    which are on several internal IIS servers:

        site1.example.com (X.X.X.194) -> IISServer01:8081
        site2.example.com (X.X.X.194) -> IISServer01:8082
        site3.example.com (X.X.X.194) -> IISServer02:8083

    I am looking to handle the incoming SSL traffic through one server entry
    and then pass on the specific domain to the internal IIS application. It
    seems I have 2 options:

    1. Code a location section for each subdomain (seems messy from the
       examples I have found).
    2. Forward the unencrypted traffic back to the same nginx server,
       configured with different server entries for each subdomain hostname
       (at least this appears to be an option).

    My ultimate goal is to consolidate much of our SSL traffic to go through
    nginx so we can use HAProxy to load balance servers. Will approach #2
    work within nginx if I properly set up the proxy_set_header entries? I
    envision something along the lines of this within my final config file
    (using approach #2):

        server {
            listen Y.Y.Y.174:443;  # Internally routed IP address
            server_name *.example.com;
            proxy_pass http://Y.Y.Y.174:8081;
        }

        server {
            listen Y.Y.Y.174:8081;
            server_name site1.example.com;
            -- NORMAL CONFIG ENTRIES --
            proxy_pass http://IISServer01:8081;
        }

        server {
            listen Y.Y.Y.174:8081;
            server_name site2.example.com;
            -- NORMAL CONFIG ENTRIES --
            proxy_pass http://IISServer01:8082;
        }

        server {
            listen Y.Y.Y.174:8081;
            server_name site3.example.com;
            -- NORMAL CONFIG ENTRIES --
            proxy_pass http://IISServer02:8083;
        }

    This seems like a way, but I'm not sure if it's the best way. Am I
    missing a simpler approach to this?
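
    One simpler shape worth considering, sketched under the assumption that
    all three names share the one wildcard certificate on one IP (so no SNI
    is needed): terminate SSL in per-name server blocks and proxy straight to
    the right backend, skipping the loop through port 8081 entirely.
    Certificate paths are placeholders:

        server {
            listen Y.Y.Y.174:443;
            server_name site1.example.com;
            ssl on;
            ssl_certificate     /etc/nginx/ssl/wildcard.example.com.crt;
            ssl_certificate_key /etc/nginx/ssl/wildcard.example.com.key;

            location / {
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_pass http://IISServer01:8081;
            }
        }
        # ...repeat per subdomain, changing only server_name and proxy_pass

    nginx picks the block by Host header, so the three blocks can also share
    an include file for the common proxy settings.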

  • SSL in IIS 7 on a subdomain in a web farm

    - by justjoshingyou
    I have been having one of the most frustrating days in my entire IT
    career. I am trying to install an SSL certificate on a subdomain in a web
    farm. http://shop.mydomain.com needs to ALWAYS be forced to
    https://shop.mydomain.com.

    I have a temporary cert issued from VeriSign for shop.mydomain.com, and I
    have installed the cert on the server. The website for shop.mydomain.com
    is set up with a host header in IIS, with the DNS entry pointed to the
    same IP as mydomain.com - which is our load balancer. I actually have 2
    load balancers (as required by our ISP). One redirects all traffic on
    port 80 out to the different servers on port 80. The other pushes out
    port 443 to the servers on port 443. shop.mydomain.com is to be the only
    site protected by SSL at this time.

    When I add the binding and navigate to https://shop.mydomain.com, it pops
    up with a warning about the cert being invalid (assumed because this is a
    test cert), and then it sends the user to http. So I checked the box
    "Require SSL", and it redirects to http://shop.mydomain.com/default.aspx
    and displays an ASP.NET 404 error message (not the IIS 404 error). I
    tried removing the binding on the site to port 80 as well, with no luck.

    I am nearly ready to crawl under my desk into the fetal position. How on
    earth do I make this work? I can't even get it to work on one machine,
    let alone in the load-balanced environment.
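
    Two hedged things to verify before blaming the farm. First, the IIS 7 UI
    won't attach a host name to an https binding for a normal (non-wildcard)
    cert, so the binding may silently be *:443: with no host; appcmd can set
    it explicitly (the site name here is an assumption):

        appcmd set site "shop.mydomain.com" /+bindings.[protocol='https',bindingInformation='*:443:shop.mydomain.com']

    Second, "Require SSL" only answers http requests with a 403.4; the
    redirect to https has to come from something else (a custom error page,
    a URL Rewrite rule, or code), otherwise users just see whatever your
    403/404 handling produces.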

  • New DVDs with 99 title tracks, which one is the correct track?

    - by Mike Fielden
    I have embarked on the task of backing up my DVD collection. I have
    noticed that some of the newer movies I am attempting to rip contain 99
    title tracks, all with approximately equal overall run times. I use
    MacTheRipper to rip the DVDs and Handbrake to encode them. My question
    is: is there a site somewhere that has information regarding which title
    track to select?

    Disclaimer: I cannot stress this enough, I legally own these DVDs. I am
    merely making a digital copy.

    Two examples of such DVDs are Star Trek and Carriers.

    UPDATE: Just an FYI, most of these 99 tracks appear to be full-length
    tracks. Their times look to be very similar to the overall movie run time
    (within a few seconds of each other), so using the time isn't a valid way
    to tell which is the correct track. Opening the movie with VLC seems to
    be the best way to tell. Thank you all.
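
    When the run times are all decoys, the chapter layout often still
    differs. A hedged sketch with lsdvd (the device path is a placeholder;
    the decoy titles typically have scrambled cell ordering or odd chapter
    counts, while the real title matches the chapter count printed on the
    case or on DVD-comparison sites):

        lsdvd /dev/dvd      # one line per title: length, chapters, audio
        lsdvd -c /dev/dvd   # per-chapter timings for closer inspection

    Cross-checking the title VLC actually plays (Playback > Title) against
    that listing confirms the pick before committing to a long encode.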

  • How to find out which program a default beep is coming from in Windows 7?

    - by leeand00
    There's a "default beep" (as defined in System Sounds) that emanates from
    my computer every so often. It kind of goes like this (where each number
    is a "default beep" sound): 1, 2, 3, 4, 5. So there's a distinct pattern
    to it.

    I thought I had figured out a way to find what was happening by going to
    Control Panel - Ease of Access - Ease of Access Center - Replace sounds
    with visual cues. But that just isn't the case: whichever window I click
    on, that one displays the visual cue when this happens. It's driving me
    crazy, and I can't figure out which program is causing this.

    Update: it appears to only happen on one user profile on the
    computer...does that help?

    Update 2: Discovered that this sound was coming from a utility on my
    laptop called ASUS NB Probe; I'm certain that it is emanating from this
    program because the error message displayed by it changes in sync with
    the sound playing. Apparently the S.M.A.R.T. feature of my hard drive was
    reporting an issue. It displays the issue for a brief second and then
    makes it disappear, so I'll have to keep watching to see what it says,
    but I believe it says something about a read. I have an external hdd
    connected with eSATA to a container of sorts (BlacX) that you can plug
    two hdds into. I have one hdd attached and it's a Western Digital
    WD1001FALS - 00E8B0.

    Thanks again! Now I'm off to go around the Internet and report on
    this...since I posted it so many places!
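
    To pin down what S.M.A.R.T. is actually complaining about instead of
    catching the one-second popup, smartmontools can dump the attributes
    directly. A sketch - the Windows build accepts /dev/sdX style names, but
    which index the eSATA drive gets is an assumption:

        smartctl -H /dev/sdb    # overall health verdict
        smartctl -A /dev/sdb    # attribute table; read trouble shows up in
                                # Raw_Read_Error_Rate, Reallocated_Sector_Ct
                                # and Current_Pending_Sector

    A climbing Current_Pending_Sector count on the WD1001FALS would match an
    intermittent read warning.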

  • Why are FMS logs filled with 'play' event status code 408 for a failed webcast?

    - by Stu Thompson
    Recently we had a live webcast event go horribly wrong. I'm doing the
    technical post-mortem with limited information. We know that the hardware
    encoder (a Digital Rapid Touch Stream Web HDI) was unable to send
    upstream at a sustained, reliable high rate. What we don't know is
    whether the encoder's connection was problematic (Zürich) or that of the
    streaming server (in Frankfurt). Unfortunately, I've got three different
    vendors all blaming each other (the CDN who runs the server, the on-site
    ISP and the on-site encoding team).

    In the FMS log files I see a couple of interesting things:

    1. Zillions of status code 408 entries on play events for clients.
       Adobe's documentation states that this means "Stream stopped because
       client disconnected". ("Zillions" would be a ratio of 10 events for
       every individual IP address.)
    2. Several unpublish / (re)publish events per hour for the encoder.

    I'd like to know if all those 408s could tell me with authority that the
    FMS server was starved for bandwidth, or that the encoding signal was
    starved (and hence the server was disconnecting clients). Any clues?
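
    One way to squeeze more authority out of the logs, sketched under the
    assumption of W3C-style FMS access logs (check the #Fields header for the
    real column positions of date, time and x-status before trusting the
    numbers):

        # 408s per minute -- a cliff at one timestamp suggests the server or
        # its uplink choked; a steady drizzle suggests individual client churn
        awk '!/^#/ && $NF == 408 { print $1, substr($2, 1, 5) }' access.log \
            | sort | uniq -c | sort -k2

    Correlating that timeline with the encoder's unpublish/republish
    timestamps should show whether clients were dropped in bursts right after
    each republish, which would point at the inbound (encoder) leg rather
    than at FMS's outbound bandwidth.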

  • MySQL, SSL and Java client problem

    - by CarlosH
    I'm trying to connect to an SSL-enabled MySQL server from my own Java
    application. After setting up SSL on mysqld and successfully testing an
    account using "REQUIRE ISSUER and SUBJECT", I wanted to use that account
    in a Java app. I've generated a private key (to a file called
    keystore.jks) and a CSR using keytool, and signed the CSR using my own CA
    (the same used with mysqld and its certificate). Once the CSR was signed,
    I imported the CA and client cert into the keystore.jks file. When
    running the application the SSL connection can't be established. Relevant
    logs:

        ...
        [Raw read]: length = 5
        0000: 16 00 00 02 FF                                     .....
        main, handling exception: javax.net.ssl.SSLException: Unsupported record version Unknown-0.0
        main, SEND TLSv1 ALERT: fatal, description = unexpected_message
        Padded plaintext before ENCRYPTION: len = 32
        0000: 02 0A BE 0F AD 64 0E 9A 32 3B FE 76 EF 40 A4 C9 .....d..2;.v.@..
        0010: B4 A7 F3 25 E7 E5 09 09 09 09 09 09 09 09 09 09 ...%............
        main, WRITE: TLSv1 Alert, length = 32
        [Raw write]: length = 37
        0000: 15 03 01 00 20 AB 41 9E 37 F4 B8 44 A7 FD 91 B1 .... .A.7..D....
        0010: 75 5A 42 C6 70 BF D4 DC EC 83 01 0C CF 64 C7 36 uZB.p........d.6
        0020: 2F 69 EC D2 7F                                    /i...
        main, called closeSocket()
        main, called close()
        main, called closeInternal(true)
        main, called close()
        main, called closeInternal(true)
        connection error com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

    Any idea why this is happening?
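
    A hedged reading of that trace: the five raw bytes 16 00 00 02 FF parse
    as a MySQL packet (3-byte length, sequence 2, then the 0xFF error
    marker) rather than as a TLS record, which is why JSSE reports "version
    Unknown-0.0". In other words, the server seems to have answered the SSL
    handshake with a MySQL-level error - typically the REQUIRE clause
    rejecting the client, or key material Connector/J never loaded. Making
    sure the JVM is actually handed the stores is the first check (paths and
    passwords are placeholders):

        java -Djavax.net.ssl.keyStore=keystore.jks \
             -Djavax.net.ssl.keyStorePassword=secret \
             -Djavax.net.ssl.trustStore=truststore.jks \
             -Djavax.net.ssl.trustStorePassword=secret \
             -cp .:mysql-connector-java.jar MyApp

    together with useSSL=true&requireSSL=true on the JDBC URL, e.g.
    jdbc:mysql://host:3306/db?useSSL=true&requireSSL=true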

  • Robocopy launches and then hangs/just sits there

    - by NateO
    I'm setting up an archive process to store old files on an external hard
    drive. The computer in question is running Windows 7 Pro 32-bit. We have
    a server folder with 150,000+ files in it, most of which are pretty small
    (below 200k). I'm trying to use robocopy in a batch file to do this.

    It was working fine the other day; now all it does upon launch is sit
    there. It shows me all the options and whatnot, and also lists the number
    of files in the directory and the directory itself, but it never gets
    past that line. If I switch the destination to the local C drive, it
    eventually starts copying files.

    Is there something in my batch file that needs to change? Or could there
    be a problem with the external Western Digital drive that I'm using? The
    WD drive currently holds about 175,000 files. Here is the one-line batch
    file I have:

        robocopy "\\cgifp01\Prepress\Public\ImportedPDF" "E:\OldFiles" *.* /R:2 /W:10 /MINAGE:15 /MOV /B /XJ /XF "blank_test.pdf"

    Thanks for any tips or ideas. Nate
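
    A hedged variant to try: /B (backup mode) needs the Backup/Restore
    privileges and can behave oddly without them, /ZB falls back to a normal
    copy when backup mode fails, and /NP with a log file makes it obvious
    whether robocopy is still scanning the 175,000 existing files or truly
    hung:

        robocopy "\\cgifp01\Prepress\Public\ImportedPDF" "E:\OldFiles" *.* /R:2 /W:10 /MINAGE:15 /MOV /ZB /XJ /XF "blank_test.pdf" /NP /TEE /LOG:C:\robocopy.log

    If the log stalls at the same point every run, a chkdsk pass on E: would
    be the next thing to rule out.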

  • Portable USB drives hidden partition - New request

    - by ZXC
    This question was asked by Francesco on Jul 29 '11 at 17:14, and the
    replies were not satisfactory, because they did not address an important
    problem: why would anyone want to make certain data accessible to a
    program but not to the users?

    For example: if I want to do a safe distribution of original music for
    demonstration purposes, I have several requirements:

    1) The music should be playable using a simple procedure, like selecting
    the name of each song in a playlist in a media player.
    2) The portable medium, usually a portable USB drive, must completely
    hide the files that contain the audio data and make them inaccessible to
    anything but the media player, which must be in the first partition, the
    one that is visible.
    3) Considering that it's impossible to really hide files in a non-hidden
    partition, a second, hidden partition should be created on the USB drive
    and the audio data stored there.
    4) The trick is to read the audio data files stored in the hidden
    partition with a media player stored in the visible partition; the media
    player should also be a completely standalone program, independent of any
    library of the operating system except the OS audio system.
    5) The hidden partition should have a copy-protection scheme that
    prevents copying the data or creating working ISO images of it.

    I know that this description may not be technically accurate, but it has
    a complete logic from the needs of a music producer facing the problem of
    piracy. The philosophy behind the concept is to turn a virtual object,
    like a digital string of audio, into a solid object, the way analog vinyl
    discs are.

  • Setup Entourage for Exchange via HTTP communication

    - by Johandk
    Our ISP set up a hosted Exchange server for all our mail. I've set up all
    our Outlook users with no problems. We have two people using Mac OS X
    Leopard and Entourage. Entourage has the option of adding an Exchange
    account, but I have no idea how to tell it to connect to Exchange via
    HTTP. Here's an excerpt from the client setup docs the hosting company
    sent me for Outlook:

    1. Go to Control Panel.
    2. Select 'Mail'.
    3. Select 'Email accounts'. Under the E-mail tab select 'New'.
    4. Select 'Manually configure server settings......' - click next.
    5. Select 'Microsoft Exchange' - click next.
    6. Complete details as below with Microsoft Exchange Server as: [server
       address]. Do not select 'Check Name'. Instead select 'More Settings'.
    7. Go to the Connection tab, and select the bottom option 'Connect to
       Microsoft Exchange using HTTP'. Then select the 'Exchange Proxy
       Settings' button.
    8. Enter the proxy server for Exchange. Check 'Only connect to proxy
       servers that have this principal name in their certificate', and
       enter msstd:[servername].
    9. Proxy Authentication - select Basic Authentication.
    10. Select OK, and again, so that you return to the main screen. Now
        select 'Check Name'. Enter username and password: the username
        should now be the full name and underlined. If so, select next, and
        then finish.
    11. Next time you open Outlook, enter username and password.

    Any help GREATLY appreciated.
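
    A hedged pointer: Entourage (2004 and the original 2008) never uses
    Outlook's "RPC over HTTP" mechanism at all - it talks to Exchange through
    the OWA/WebDAV interface - so there is no proxy checkbox to find. If the
    host exposes OWA, entering an OWA-style URL in the account's Exchange
    server field is normally all it takes (names below are placeholders):

        Exchange server: https://mail.hostingcompany.com/exchange/user@domain.com
        [x] This DAV service requires a secure connection (SSL)

    Entourage 2008 Web Services Edition is the variant to ask the host
    about if they have moved to Exchange 2007+ with EWS enabled.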

  • Force spin-down of external hard-drive on linux (raspberry pi)

    - by user258346
    I'm currently setting up a home server using a Raspberry Pi with an
    external hard disk connected via USB. However, my hard drive will never
    spin down when idle. I have already tried the hints provided at
    raspberrypi.org ... without any success.

    1.) sudo hdparm -S5 /dev/sda returns:

        /dev/sda:
        setting standby to 5 (25 seconds)
        SG_IO: bad/missing sense data, sb[]: 70 00 04 00 00 00 00 0a 00 00 00 00 44 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

    2.) sudo hdparm -y /dev/sda returns:

        /dev/sda:
        issuing standby command
        SG_IO: bad/missing sense data, sb[]: 70 00 04 00 00 00 00 0a 00 00 00 00 44 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

    3.) sudo sdparm --flexible --command=stop /dev/sda returns:

        /dev/sda: HDD 1234

    ... all without spin-down of the drive. I use the following hardware:

    Inateck FDU3C-2 dual Ports USB 3.0 HDD docking station
    Western Digital WD10EZRX Green 1TB

    Is it possible that the sent spin-down signals are somewhere
    overwritten/lost/ignored?
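
    Quite possibly, yes: the "bad/missing sense data" lines suggest the
    USB-SATA bridge in the dock is not translating the ATA power commands. A
    commonly used workaround is hd-idle, which watches for inactivity and
    issues a SCSI stop itself. A sketch (on Raspbian of this era hd-idle
    usually has to be built from source):

        sudo hd-idle -i 0 -a sda -i 600   # disable the default timer, spin
                                          # down only sda after 10 min idle

    Running it with -d first shows in the foreground what it decides to do,
    before installing it as a daemon.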

  • Firefox is very slow when establishing SSL sessions

    - by yanglei
    Using Wireshark, I discovered that Firefox v3.0 gets stuck every time
    before the "client key exchange, change cipher spec" stage when
    establishing an SSL session. Specifically, it takes 0.8~1.8 seconds
    before Firefox sends the "Client Key Exchange" request. This is
    unacceptable since our application is HTTPS only. I tested this on IE6
    and IE8; both work well. Any clues?

    [Update] Finally, I found the reason for the 1~2 second stall by
    displaying all captured packets in Wireshark. After the "server hello"
    stage, Firefox makes a request to ocsp.verisign.com, combined with an
    additional DNS lookup for that domain. Firefox must wait for the
    revocation status from OCSP before entering the next stage of SSL.
    Depending on whether the DNS cache is in effect, this process takes 1~2
    seconds.

    An interesting observation is that the IP packet containing "client key
    exchange" has a high probability of getting lost, so a TCP retransmission
    is necessary. When this happens, the process can take 3 seconds at worst.
    I'm not sure if this is a coincidence or a bug. Anyway, here is the
    result from Wireshark:

        (delta-time)
        0.369296 src-ip dst-ip TCP [ACK] Seq=161 Ack=2741 Win=65340 Len=0
        2.538835 src-ip dst-ip TLSv1 Client Key Exchange, Change Cipher Spec, Finished
        2.987034 src-ip dst-ip TLSv1 [TCP Retransmission] Client Key Exchange, Change Cipher Spec, Finished

    The difference between Firefox and IE is this: Firefox 3 enables OCSP
    checking by default, whereas IE only supports it. So there is no problem
    with either IE6 or IE8. This is indeed a certificate revocation problem.
    Thanks
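
    For anyone who needs to confirm the diagnosis or accept the trade-off,
    the relevant Firefox 3 preferences can be flipped in about:config or a
    user.js (turning OCSP off trades revocation checking for latency, so
    treat this as a test rather than a fix):

        user_pref("security.OCSP.enabled", 0);      // default is 1 (on)
        user_pref("security.OCSP.require", false);  // don't hard-fail on OCSP

    If the stall disappears with OCSP off, longer-term options are a faster
    resolver (or local caching of ocsp.verisign.com) or a certificate whose
    chain points at a quicker OCSP responder.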

  • How does the "Steam" platform work? Is it DRM? Can I trust "Steam"-powered software? [closed]

    - by Chris W. Rea
    So – I just bought the new game Supreme Commander 2. This question is not
    about the game, but about the online software installation platform that
    it seems to require.

    I haven't bought a game in a long time, and I'm puzzled: apparently, SC2
    is a "Steam"-powered game. When I went to install the game, it asked me
    to either create a new Steam account, or log in with an existing account.
    I clicked "Cancel" because I don't plan to play online and I don't want
    anything unnecessary installed on my computer, since I only plan to play
    single player! However, after clicking "Cancel", the installer asked for
    my confirmation that I indeed wanted to cancel installation of the game!
    I thought I was just canceling the "online" portions!

    So I really want to know:

    1. How do "Steam"-powered games work? Is this essentially a form of DRM
       (Digital Rights Management)?
    2. Can I trust this software platform? Has anybody done any independent
       verification of how this platform works? (I'm very leery of any DRM
       after the Sony BMG CD copy protection scandal. Thank goodness for
       Mark Russinovich.)
    3. Does the "Steam" platform install anything particularly nasty or
       unwanted on my computer?

    High-rep users: please vote to reopen this question. It is not about the
    game, but about the software update platform / updater / DRM. Imagine if
    the software in question were a productivity application. The issues
    remain the same.

  • Ubuntu Server mdadm drbd ocfs2 kvm hangs under heavy file reading

    - by Stefano Annese
    I have deployed four Ubuntu 10.04 servers. They are coupled two by two in
    a cluster scenario. On both sides we have software RAID1 disks, drbd8 and
    OCFS2, and on top of it some KVM machines run with qcow2 disks. I
    followed this: Link. corosync is just used for DRBD and OCFS; the KVM
    machines are run "manually".

    When it works it's fine: good performance, good I/O. But at a given time
    one of the two clusters started hanging. Then we tried with just one
    server turned on, and it hangs the same. It seems to happen when a heavy
    READ occurs in one of the virtual machines, that is, during an rsync
    backup. When this occurs the virtual machines are not reachable any more;
    the real server responds to pings with noticeable delay, but no screen
    and no ssh are available. All we can do is force shutdown (hold the
    button) and restart, and when it turns on again the RAID that drbd relies
    on is resyncing. We see this every time it hangs.

    After a couple of weeks of pain on one side, this morning the other
    cluster hung too, but it has a different motherboard, RAM and KVM
    instances. What is similar is the rsync reading scenario and the Western
    Digital RAID Edition disks on both sides.

    Can anybody give me some input to solve this issue?
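
    A speculative direction rather than a diagnosis: a big sequential read
    through qcow2 on OCFS2 can flood the page cache while jbd2/drbd writes
    queue behind it, and the whole box appears frozen once the dirty
    thresholds are hit. Two cheap experiments (the values are illustrative):

        # make the flusher start earlier and in smaller batches
        sysctl -w vm.dirty_ratio=10
        sysctl -w vm.dirty_background_ratio=5

        # run the backup reads with less cache/IO pressure
        ionice -c3 rsync -a --bwlimit=20000 /src/ /dst/

    If the hang stops following the rsync around, that points at memory/IO
    scheduling on the host rather than at DRBD or the WD drives themselves.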

  • Ubuntu lucid, maverick high iowait

    - by netom
    I'm using Ubuntu, and I've got the same problem with Lucid and Maverick.
    From time to time, especially a few minutes after boot, the iowait goes
    to 50-100% and the box is unusable. Everything that tries to access the
    disk freezes. I have the following setup:

    Hard disk:

        Model Family:     Western Digital Caviar Green family
        Device Model:     WDC WD15EADS-00P8B0
        Serial Number:    WD-WMAVU0391287
        Firmware Version: 01.00A01
        User Capacity:    1.500.301.910.016 bytes

    I have a quad-core Intel Core2 Q6600 processor and 4G of memory. When the
    high iowait occurs, usually 4 processes are active:

        kdmflush (two procs)
        jbd2/dm-0-8
        jbd2/db-1-8

    and a few more starving user processes, of course. I know this from top
    and iotop.

    Any suggestions about why this is happening? There are a lot of Q&As
    about Linux and high iowait, but none of them helped so far. I even
    tweaked the hard disk not to park its head every 8 seconds (Load cycle
    count is 50334!!!!! :o ), but nothing. The problem persists. Thank you in
    advance.
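
    With kdmflush and the two jbd2 threads dominating, the stall looks like
    journal/flush traffic rather than raw reads. A couple of hedged probes:

        # watch which userspace process actually triggers the storm
        sudo iotop -obPa

        # batch ext4 journal commits (widens the data-loss window on crash
        # from 5 s to 60 s -- a test, not a recommendation)
        sudo mount -o remount,commit=60 /

    Given the minutes-after-boot timing, checking whether ureadahead or an
    indexing job (tracker, updatedb) coincides with the spike in the iotop
    output would narrow it down further.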

  • How to set up specific domains to be tunneled through another server

    - by Peter Smit
    I am working at a university as a research assistant. Often I would like
    to connect from home to university resources over http or ssh, but they
    are blocked from outside access. Therefore, there is a front-end ssh
    server we can ssh into, and from there to other hosts. For http access
    they advise setting up an ssh tunnel like this:

        ssh -L 1234:proxyserver.university.fi:8080 publicsshserver.university.fi

    and pointing the proxy settings of your browser at port 1234.

    All nice and working, but I would not like to let all my other internet
    traffic go over this proxy server, and every time I want to connect to
    the university I have to do these steps again.

    What would I like?

    - To set up an ssh tunnel every time I log in to my computer. I have a
      certificate, so no passwords are needed.
    - To have a way to redirect some wildcard domains always through the
      ssh server first, so that when I type intra.university.fi in my
      browser, the request transparently goes through the tunnel. The same
      when I want to ssh into another resource within the university.

    Is this possible? For the http part I think maybe I should set up my own
    local transparent proxy to have this easily done. How about the ssh part?
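
    Both halves have fairly standard answers; here is a sketch using the
    hostnames from the question. The ssh side goes in ~/.ssh/config (it
    assumes nc is available on the front-end host):

        Host *.university.fi !publicsshserver.university.fi
            ProxyCommand ssh publicsshserver.university.fi nc %h %p

    so ssh intra.university.fi hops through the gateway transparently. For
    the browser, a dynamic tunnel started at login plus a proxy
    auto-configuration (PAC) file sends only university domains through it:

        ssh -f -N -D 1234 publicsshserver.university.fi

        function FindProxyForURL(url, host) {
            if (dnsDomainIs(host, ".university.fi"))
                return "SOCKS 127.0.0.1:1234";
            return "DIRECT";
        }

    Point the browser's automatic proxy configuration URL at the PAC file,
    and everything else keeps going direct.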

  • CryptSvc not matched by Windows 7 Firewall rule

    - by theultramage
    I am using Windows Firewall in conjunction with a third-party tool to get
    notified about new outbound connection attempts (Windows Firewall
    Notifier or Windows Firewall Control). The way these tools do it is by
    setting the firewall to deny by default, and adding an auditing policy to
    log blocked connections in the Security event log. Then they watch the
    log and display a notification about newly added entries.

        netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound
        auditpol /set /subcategory:{0CCE9226-69AE-11D9-BED3-505054503030} /failure:enable

    With this configuration in place, I now need to craft outbound allow
    rules for applications and system services. Here is the rule for
    CryptSvc, the service frequently used for certificate validation and
    revocation checking:

        netsh advfirewall firewall add rule name="Windows Cryptographic Services" action=allow enable=yes profile=any program="%SystemRoot%\system32\svchost.exe" service="CryptSvc" dir=out protocol=tcp remoteport=80,443

    The problem is, this rule does not work. Unless I change the scope to
    "all programs and services" (which is really unhealthy), connection
    denied events like the following keep appearing in the security log:

        Event 5157, Microsoft Windows security auditing.
        The Windows Filtering Platform has blocked a connection.
        Application Information:
            Process ID:       1476 (<- svchost.exe with CryptSvc and nothing else)
            Application Name: \device\harddiskvolume1\windows\system32\svchost.exe
        Network Information:
            Direction:           Outbound
            Source Address:      192.168.0.1
            Source Port:         49616
            Destination Address: 2.16.52.16
            Destination Port:    80
            Protocol:            6 (<- TCP)

    To make sure it's CryptSvc, I let the connection through and reviewed its
    traffic; I also configured CryptSvc to run in its own svchost instance to
    make it more obvious:

        ;sc config CryptSvc type= share
        sc config CryptSvc type= own

    So... why is it not matching the firewall rule, and how do I fix that?
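
    A hedged hypothesis: per-service firewall rules match on the service SID,
    and a service whose SID type is "none" presents no SID for the filtering
    platform to match, so the rule never fires. Checking and promoting
    CryptSvc's SID type is reversible and commonly reported to make exactly
    this kind of rule start working:

        sc qsidtype CryptSvc
        sc sidtype CryptSvc unrestricted

    followed by a restart of the service. If that works, the same treatment
    applies to any other shared-svchost service you write per-service rules
    for.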

  • Mac Grey/White Screen of Death

    - by cust0s
    The other night I updated my iTunes to the latest version (through
    Software Update), and when I came to turn on my computer I was greeted
    with the dreaded white screen of death. I use an early 2008 iMac 24".

    I've tried the basic things: unplugging/turning off accessories, trying
    to boot from the install disk, resetting PRAM, etc. Still no luck and no
    change whatsoever. All I've been able to ascertain is that my keyboard
    still works (by ejecting). I should point out that I did recently replace
    my hard drive with a Western Digital Black 500GB (though the computer is
    well out of warranty), and I'm a little concerned that the problem could
    be the screen.

    Update (18/05/10): I've been told that I could be getting the white/grey
    screen of death because the optical flex cable is damaged (apparently
    this is common). The optical drive is part of the POST sequence, and an
    inability to read the drive can result in a failure to move on to other
    bootable volumes. More info here. I will disable the optical drive and
    see if that works.
