Search Results

Search found 11152 results on 447 pages for 'online music'.

  • Is there a way to extract a "private certificate key" from Chrome and import it into Firefox?

    - by user58871
    This is a classic Catch-22 situation. I was using online banking the other day under Chrome. I had to order a digital certificate so that I could extend my privileges. The stupid thing is that when I got approved and opened the certificate installation menu, I saw only versions for IE/Firefox available. I chose FF, and the result was Error 202 - ERR:CERT:INVALID. I opened FF, went to the same page, and tried to install the certificate from there, but got a message basically saying that a private key must have been issued to me, which FF obviously cannot find. I read a bit, and it turns out such a key really was issued, but only to the browser I ordered the cert with, i.e. Chrome. The worst part is that if I deactivate my order and reissue a new cert, this time from FF, I MUST go to a bank office in person, but I am currently studying abroad, so I can't just go back. Is there a way to extract that key from Chrome's profile and import it into FF under Windows? I would be glad to know of any way to do this.
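
    Chrome on Windows keeps client certificates in the Windows certificate store rather than in its own profile, so one possible route is to export the certificate together with its private key as a PKCS#12 (.pfx) file and import that into Firefox's certificate manager. A rough sketch from an elevated command prompt, assuming a Vista-or-later certutil, that the cert sits in the personal ("MY") store, and that the bank marked the key as exportable; the serial number and file name are placeholders:

        rem list personal certificates and note the serial number of the banking certificate
        certutil -store MY

        rem export the certificate plus its private key to a password-protected PKCS#12 file
        certutil -p "choose-a-password" -exportPFX MY <serial-number> bankcert.pfx

    The resulting .pfx can then be imported in Firefox (roughly Options > Advanced > Encryption > View Certificates > Your Certificates > Import, depending on the version). If the key was issued as non-exportable, the export will fail and the reissue-at-a-branch route may be unavoidable.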

  • Why is squid breaking kerberos/NTLM auth?

    - by DonEstefan
    I'm using squid 2.6.22 (the CentOS 5 default) as a proxy. Squid seems to break the authentication process for web pages that require NTLM or Kerberos auth. I tested with SharePoint 2007 and tried all three authentication methods (NTLM, Kerberos, Basic). Accessing the site without squid works in all cases. When I access the same page through squid, only basic auth works. Using IE or Firefox doesn't make any difference. Squid itself can be used by anybody (no auth_param configured). It's a bit tricky to find solutions online, since most of the topics revolve around auth_param for authenticating users to squid rather than authenticating users to a web page behind squid. Could anyone help? Edit: Sorry, but my first test was totally screwed up. I tested against the wrong web servers (memo to myself: always check assumptions before testing). Now I realize the problem scenario is completely different:
    - Kerberos works for IE
    - Kerberos works for Firefox (after changing "network.negotiate-auth.trusted-uris" in about:config)
    - NTLM works for IE
    - NTLM does NOT work in Firefox (even after changing "network.automatic-ntlm-auth.trusted-uris" in about:config)
    By the way: the feature that provides NTLM passthrough in squid is called "connection pinning", and the corresponding HTTP header is "Proxy-support: Session-based-authentication".
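
    One way to narrow this down independently of the browsers is to watch the headers coming back through the proxy and confirm whether squid is advertising session-based authentication at all. A minimal check with curl; the proxy host/port and site URL are placeholders:

        # request the page through squid and look for the pinning/auth headers
        curl -v -x http://squidhost:3128 http://sharepoint.example.local/ 2>&1 | grep -i -E "proxy-support|www-authenticate"

        # repeat while actually performing NTLM, to see whether the handshake survives the proxy
        curl -v -x http://squidhost:3128 --ntlm -u 'DOMAIN\user' http://sharepoint.example.local/ -o /dev/null

    If "Proxy-support: Session-based-authentication" never shows up, the connection-pinning path is probably not being taken at all, which would point more at the squid build/version than at the Firefox about:config settings.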

  • Issues with sustained traffic with PFSense

    - by Farseeker
    Last week we had to replace our PFSense firewall because it had a catastrophic hardware failure. All but one of the NICs were taken out of the old server and put into the new one. The one NIC that was not moved was the LAN NIC, as this is on-board. The other NICs are all WAN connections and they must all be present (i.e. I can't disable one just for the sake of testing). After re-installing PFSense and restoring our backup of the configuration, everything came back online just fine; however, on the new hardware any download that takes longer than about 10 seconds just times out in the middle. Example 1: downloading from Microsoft.com goes at about 900 kB/sec and times out after about 10 seconds (thus, just under 10 MB of content). Example 2: downloading from cnet.com goes at about 300 kB/sec and times out after about 10 seconds (thus, about 3 MB of content). By "times out" I mean that the download just stops, and you have to pause/resume to get the next part, rinse and repeat until the download is complete. However it's not consistent: sometimes it's 10 seconds, sometimes it's 4 seconds, and sometimes you can't even load a heavy HTML page because the page never finishes. I assume this is most likely because PFSense does not like the onboard NIC, as this is the primary difference between the two servers. It's recognised as nfe0, there's no room in the server for any more NICs, and I don't have any dual-port NICs handy to experiment with a different LAN connection. I've never had to troubleshoot this sort of issue before. Can anyone give me some pointers about where to start? Linux is not my forte so please be kind!
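
    pfSense runs on FreeBSD, and flaky behaviour on specific on-board NICs (the nfe driver included) is often tied to hardware offload features. A cheap first experiment, assuming the LAN interface really is nfe0, is to turn the offloads off from a shell (Diagnostics > Command or SSH) and retry the downloads; if it helps, the equivalent "disable hardware checksum/TSO/LRO offloading" checkboxes under System > Advanced in the webGUI make the change persistent:

        # show the interface's current flags and capabilities
        ifconfig nfe0

        # temporarily disable offload features (options the driver lacks can simply be skipped)
        ifconfig nfe0 -rxcsum -txcsum -tso -lro

    This only changes the running state, so a reboot reverts it, which makes it a safe way to test the theory before committing to a config change.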

  • Windows Server 2008 network speed slow, Xen 3.4.3 HVM ISO

    - by Elliot.Bradshaw
    I've set up a VM running Windows Server 2008 on a host node running Xen 3.4.3-5 and the following kernel: 2.6.18-308.1.1.el5xen #1 SMP Wed Mar 7 05:38:01 EST 2012 i686 i686 i386 GNU/Linux. The network speed on the VM is very slow: using online speed tests I can only get it up to 8-9 Mbps. The line is 100 Mbps burstable and the host node has no problem achieving those speeds. If I set up a VM running CentOS, it too has no problem achieving those speeds. I've done some pretty exhaustive troubleshooting, but nothing has helped:
    - New VM installations of Win2k8 have the same network problem.
    - Upgrading to the most recent kernel-xen (2.6.18-308.1.1.el5xen) did not help.
    - Upgrading from Xen 3.4.0 to Xen 3.4.3-5 did not help.
    - Disabling the Windows firewall etc. did not help.
    - Changing the network card device config from auto-negotiation to a manual 100 Mbps full duplex did not help.
    - Changing the network receive buffer packet size did not help (tried all combos from 64k to 8k).
    At this point I'm pretty much out of ideas; any help would be appreciated!
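
    Since a CentOS guest is fast and only the Windows HVM guest is slow, the bottleneck is very likely the NIC that QEMU emulates for Windows (the default rtl8139 model performs poorly). Two things worth trying: switch the emulated model to e1000 in the guest's xm config, or install the GPL PV network drivers inside Windows so it stops using emulation entirely. A hedged config sketch; the file name, MAC and bridge are placeholders for whatever the existing guest config uses:

        # /etc/xen/win2k8.cfg - replace the existing vif line so QEMU emulates an Intel e1000
        # instead of the default rtl8139
        vif = [ 'type=ioemu, mac=00:16:3e:xx:xx:xx, bridge=xenbr0, model=e1000' ]

    The guest has to be shut down and recreated (xm shutdown / xm create) for the new vif definition to take effect; Windows will then detect a new network adapter.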

  • Unable to activate Windows XP

    - by Josh Kelley
    The latest round of Patch Tuesday updates left my Windows XP computer unbootable. ("Fatal System Error: The Windows Logon Process system process terminated unexpectedly.") After much messing around with the recovery console, an XP CD's repair mode, and manually copying registry files around, I have a system that can boot again. However, I overwrote my OEM XP installation's activation information while trying to run a retail XP CD's setup, so it needs reactivation. Here's my problem: I cannot activate it at all. I log in, Windows tells me I have to activate to continue, I click Yes, and absolutely nothing happens: no windows, no response to keyboard or mouse, no response to Ctrl-Alt-Del, nothing. Safe mode works, but I can't activate in safe mode (EDIT: not even safe mode with networking). I read a trick online of pressing [Windows Key]+U to bring up the Microsoft Narrator, and that works, but clicking its Microsoft Web Site link does nothing. My last attempt to resolve this was to reinstall Windows off of the OEM CD. Now I have two parallel Windows installations, both on the same hard drive, one with all of my stuff and no way to activate it, one fully activated with no usable programs. Any ideas? Any way to activate in safe mode? Any way to copy activation information from my activated installation to my unactivated installation (since they're both on the same hard drive)?
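
    Since the activation UI will not draw in a normal session, one low-risk thing to check from Safe Mode is whether the activation wizard can be launched at all; XP's OOBE activation binary can be invoked directly (assuming the repair left it in place). This only reports or starts activation, it does not change anything by itself:

        rem check activation status / launch the activation wizard directly
        %systemroot%\system32\oobe\msoobe.exe /a

    If the wizard itself appears (from this command or from the Narrator trick) but the web step goes nowhere, the telephone-activation option inside the wizard does not need a working browser and may be the quickest way out.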

  • Large mailbox in Outlook 2007 takes ages to index

    - by Reado
    In our company each user has a single mailbox, and all email they have ever sent or received is in that mailbox. We don't archive to PST; we thought that was the way forward. The problem we now have is that if someone switches to another PC for the day and opens Outlook, it first has to download all emails to that PC (cached mode), and even then, when they try to search for something, Outlook says items are still being indexed. One user has over 100,000 items to be indexed, and it's been saying that for about a week! As a temporary workaround I have turned off instant search, which lets them search for anything, but it takes time to filter through, and Outlook doesn't clearly indicate whether it is still searching, so in most cases the user thinks the search isn't working when really it is just taking time to populate the results. I need a solution that allows the mailbox to be indexed really quickly when a user has to log in to another PC. Are we best using Online Mode instead of Cached Mode, or is there another way around this? Thanks in advance.

  • Automatically install driver on headless WHSv1 system

    - by Dan Neely
    I have one of the HP MediaSmart Windows Home Server v1 boxes. Its network port appears to have died a few days ago, but the system gives no other sign of failure: no activity lights come on at either end of the cable when it is connected to my gigabit switch; when it is connected to one of my router's 100-megabit ports the lights turn on, but the box remains unreachable over the network and my router never lists it among its DHCP clients. I bought a USB-to-Ethernet adapter to temporarily get it back online, but the adapter needs a driver, which I can't install because the system is headless by design (no video out, no PCI/PCIe slots), with admin access only available via the WHS client or Remote Desktop. Both of those options require network connectivity and are consequently unavailable. I tried copying the drivers to a flash drive, but Windows either didn't look there or none of the drivers provided were suitable (Win8, Win7, or combined XP and Vista). I've been told that a USB WiFi adapter would have the same driver problem.

  • btrfs: can I create a btrfs file system with data as JBOD and metadata mirrored?

    - by Yogi
    I am trying to build a home server that will be my NAS/media server as well as the XBMC front end. I am planning on using Ubuntu with btrfs for the NAS part of it. The current setup consists of a 1TB HDD for the OS etc. and two 2TB HDDs for data. I plan to have the 2TB HDDs used as a JBOD-style btrfs system to which I can add HDDs as needed later, basically growing the filesystem online. The way I set up the file system for testing was to have only one of the HDDs connected while installing the OS, with btrfs on it mounted as /data, and then add the second HDD to this file system later. When the second disk was added, btrfs laid the data out as RAID 0, with metadata as RAID 1. However, this presents a problem: if either disk fails I lose all my data (mostly media). Also, most of the time the server will be running without doing any disk access, i.e. the HDDs can be spun down; with the current RAID 0 setup, any access request spins up both disks, whereas with a JBOD layout only the disk that holds the file needs to spin up. That should reduce wear on each disk. So, is there a way to set up btrfs such that metadata is mirrored but data stays in a JBOD formation? Another question: I understand that a full drive failure in JBOD loses the data on that drive, but with metadata mirrored across all drives, will that help the filesystem correct errors that might creep in (e.g. bit rot), and is btrfs capable of doing this?
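
    btrfs lets the data and metadata profiles be chosen independently at mkfs time, which is exactly the combination being asked for: data as "single" (files placed on one device at a time, JBOD-style) and metadata as raid1 (mirrored across two devices). A minimal sketch, assuming the two data disks are /dev/sdb and /dev/sdc:

        # create the filesystem with single (JBOD-style) data and mirrored metadata
        mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc
        mount /dev/sdb /data

        # later, grow it online by adding another disk
        btrfs device add /dev/sdd /data
        btrfs filesystem show /data

    On the bit-rot question: btrfs checksums both data and metadata, so with raid1 metadata it can repair damaged metadata from the surviving copy, but single-profile data can only be detected as corrupt, not rebuilt.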

  • EC2 Configuration

    - by user123683
    I am trying to create a server structure for my EC2 account. The design I have chosen consists of two instances running in different availability zones, an elastic load balancer, an auto-scaling group with CloudWatch monitoring configured, and a security group defining rules for access to the instances. This setup is to support an online web application written in PHP. My questions:
    - Which is the better policy: store the MySQL DB on a separate instance, or on an attached EBS volume? (From what I know, auto-scaling will not replicate the attached EBS volume but will generate new instances from a chosen AMI; is this view correct?)
    - For the AMI I plan to use a basic Amazon Linux 64-bit AMI and install Bastille (maybe OSSEC), but I am also looking to use an encrypted file system. Are there any issues with an encrypted file system and communication between the DB and the web app that I need to be aware of? Are there any communication issues using the encrypted filesystem on the instance housing the web app?
    - I was going to launch a second instance, or attach a second volume, in the second availability zone to act as a standby for the database. I'm just looking for suggestions about how to get the two DBs to talk; will this be a big task?
    - Regarding security updates, is it best to create a recent snapshot and just relaunch, letting Amazon install updates on launch, or is the yum update mechanism a suitable alternative? Is it better practice to relaunch instead of installing updates that force a restart?
    - I plan to create two AMI snapshots, one for the app server and one for the DB, each with the same security measures in place. Is this reasonable? I figure it is a better policy than including unnecessary applications in an AMI I intend to keep using.
    - My plan for backup is to create periodic snapshots of the webapp and DB instances. (If I use an additional EBS volume instead of separate instances, my understanding is that the EBS volume will persist in S3 storage in the event of an unexpected termination, and I can create snapshots of the volume for backup purposes.)
    Thanks in advance for suggestions and advice. I am new to EC2 and I may have described unnecessary overkill, but I want to try to implement what can be considered a best-practice solution, so all advice is appreciated.
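
    On the backup point, EBS snapshots are the usual mechanism and they are easy to script. A rough sketch with the current AWS CLI, suitable for a cron job; the region and volume ID are placeholders:

        # snapshot the data volume; snapshots are incremental and stored durably by AWS
        aws ec2 create-snapshot --region us-east-1 --volume-id vol-0123456789abcdef0 \
            --description "nightly backup $(date +%F)"

        # list existing snapshots for that volume
        aws ec2 describe-snapshots --region us-east-1 \
            --filters Name=volume-id,Values=vol-0123456789abcdef0

    Keeping the database on its own EBS volume (rather than the root volume) makes this kind of volume-level snapshot much cleaner, since the application and OS can be rebuilt from the AMI while the data is restored from the snapshot.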

  • How to set up Git on remote instance using keys from local machine?

    - by Lucas
    I have a setup where I can ssh into my remote server (a Google Compute instance) from my local machine. I used to be able to clone, push, and pull from a repository on my remote instance without adding any keys to my remote instance, and without adding any new keys to my repository online (just the public key from my local machine). I believe the remote instance was using the keys from my local machine to authenticate my Git pushes and pulls. However, this broke when I reinstalled the OS on my local machine. Now when I try to connect to the GitHub server from my remote instance, I get the following.
    Cannot clone:
        [lucas@ecoinstance]~/node$ git clone git@github.com:lucasExample/test.git test
        Cloning into 'test'...
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly
    Cannot push:
        [lucas@ecoinstance]~/node/nodetest1$ git status
        # On branch master
        # Your branch is ahead of 'origin/master' by 1 commit.
        # nothing to commit (working directory clean)
        [lucas@ecoinstance]~/node/nodetest1$ git push
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly
    Additional info:
        [lucas@ecoinstance]~/node/nodetest1$ ssh-add -l
        Could not open a connection to your authentication agent.
        [lucas@ecoinstance]~/.ssh$ ls
        authorized_keys  known_hosts
    As you can see, I have no keys on my remote instance. I have never had keys on the remote, and it pushed and pulled just fine until I re-installed my local OS. I can still clone, push, and pull on my local machine; it is only the remote machine that cannot authenticate. My local OS is Ubuntu 14.04 and my remote OS is Debian Wheezy. Any suggestions would be great. I am not sure how to search for this concept of authenticating from a remote instance via my local machine, so any references are appreciated as well.
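
    What is being described is SSH agent forwarding: the remote instance borrows the key loaded in the local machine's agent for the hop to GitHub, which is why no key ever needed to live on the instance. A reinstalled local OS means no key in the agent (and possibly no ForwardAgent setting any more), which matches the "Could not open a connection to your authentication agent" output. A minimal sketch of restoring it; the host alias is a placeholder:

        # on the local machine: start an agent, load the (re-created) key, and forward it
        eval "$(ssh-agent -s)"
        ssh-add ~/.ssh/id_rsa
        ssh -A lucas@ecoinstance

        # or make forwarding permanent in ~/.ssh/config on the local machine:
        #   Host ecoinstance
        #       HostName <instance-ip>
        #       ForwardAgent yes

        # then, on the remote instance, the forwarded key should be visible and git should work:
        ssh-add -l
        git clone git@github.com:lucasExample/test.git test

    One extra step after an OS reinstall: the newly generated local public key also has to be added to the GitHub account, since the key GitHub knew about was lost along with the old installation.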

  • Setting up HTTPS across multiple servers

    - by JohnyD
    I'm looking to offer our online services over HTTPS and I'm having a couple of problems understanding how to accomplish this. To access our services you must pass through our ISA firewall to a Win2000 server running IIS6. About half our services are located here, and the other half take you to a Win2003 server also running IIS6. So, in order to achieve this, must each server have the proper certificate installed: ISA, IIS6_1 and IIS6_2? Is there a separate configuration that must be made on our ISA firewall? The other problem is with the CA and knowing how many certificates I need. It's important to note that the domain name for our services on IIS6_1 is www.domainname.com but the domain name on IIS6_2 is services.domainname.com. I believe this will require me to purchase more than one certificate. It looks as though we will be going with Thawte's SSL123 as it's a good name and it's fast to get. Will I need to purchase two certificates (one for www that will be installed on our ISA firewall as well as IIS6_1, and one for services.domainname.com on IIS6_2)? Or will I need to purchase three, the extra one being used on our firewall server? Another side question is about SANs (subject alternative names). Is this basically adding additional host names to your cert? So could I purchase one cert with a SAN covering both www and services? Thanks a lot for your help! Please let me know if I can provide any further information.
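
    A SAN certificate is indeed a single certificate valid for several host names, so one cert listing both www.domainname.com and services.domainname.com could be installed on the ISA server and on both IIS boxes. If you go that route, the CSR has to carry both names. A hedged sketch with a recent OpenSSL (1.1.1 or later, which added -addext; older releases need the subjectAltName entries in a config file instead):

        openssl req -new -newkey rsa:2048 -nodes \
            -keyout domainname.key -out domainname.csr \
            -subj "/CN=www.domainname.com" \
            -addext "subjectAltName=DNS:www.domainname.com,DNS:services.domainname.com"

        # inspect the CSR to confirm both names made it in
        openssl req -in domainname.csr -noout -text | grep -A1 "Subject Alternative Name"

    Whether the CA product you pick (SSL123 or otherwise) actually issues SAN certificates is worth confirming before buying; single-name products would put you back at two or three separate certificates.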

  • Change which server mailbox is associated with in Exchange 2007

    - by tacos_tacos_tacos
    I have restored and mounted an EDB file onto a new Exchange 2007 server. However, the old server is still online, and although all the mailboxes I need are in the newly mounted database, Exchange 2007 System Manager still shows the mailboxes as associated with the old server. If I try to "Move" the database it actually tries to copy the files from the old server to the new server, which is not necessary because they are already there, and produces an error about the mailbox on the destination already existing. How can I simply tell Exchange (AD?) to use the new server to find the mailbox rather than the old one? Edit: I did the restore by taking the old server offline (turning off all Exchange services), copying the EDB file to the new server, restoring it with eseutil, and mounting it on the new server. I did it this way partly because I didn't know a better way and partly because I couldn't use move-mailbox, as the source location had a horrible Internet connection (which is why Exchange is being moved to the new location). I had to copy the EDB from the old server to a hard disk, go somewhere with a better Internet connection, and upload the EDB to the new server.
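
    This is the database-portability scenario, and the pointer that needs updating lives on each mailbox object in Active Directory. In the Exchange Management Shell, Move-Mailbox has a -ConfigurationOnly switch that rewrites that pointer to the new database without copying any mailbox data. A hedged sketch; the database identities are placeholders for your actual server/storage group/database names:

        Get-Mailbox -Database "OLDSERVER\First Storage Group\Mailbox Database" |
            Move-Mailbox -ConfigurationOnly -TargetDatabase "NEWSERVER\First Storage Group\Mailbox Database"

    Outlook profiles may then need repointing to the new server (or be left to Autodiscover, where it is in use).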

  • Test whether svn REPO changes are reflected in Working Copy

    - by user492160
    Requirement: changes will be made to the REPO directory, and this should get updated to the WC (working copy), as opposed to the normal direction of WC to REPO. Scenario: my svn repo is /var/www/svn/drupal and my checkout dir / working copy is /var/www/html/drupalsite. So I've done the following:
    1. Edited the post-commit hook to contain: "/usr/bin/svn update /var/www/html/drupalsite"
    2. I won't make any changes to the svn WC; I'll make changes to the svn REPO, /var/www/svn/drupal.
    3. After changes are made to the svn repo, run "svn commit /var/www/html/drupalsite". This will trigger the post-commit hook, which in turn will run "/usr/bin/svn update /var/www/html/drupalsite", and thus my WC will get updated with the changes from the REPO.
    Query: (a) Would steps 1-3 above achieve my requirement? (b) I'd like advice on how to test whether this setup works. I'm unsure about the success of steps 1-3, which is why query (a) exists; this is the bigger concern for me. NB: I'm new to Subversion; whatever I've configured so far has been done by reading articles online. The reason for query (b) is that I'm not into development. It's a PHP Drupal website and I happen to be setting it up, so I'm not sure how to make a "proper" change in the REPO so that it gets reflected in the WC. If it is reflected, my configuration is right and the team can start development. I manually put a random file/folder into the REPO dir to see a change in the WC and ran steps 1-3, but to no avail; I later learned that this was NOT the way to make a change to a REPO. Please advise. Thanks.
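
    For reference, the repository is only ever changed through commits made from some working copy (or svn import); files dropped directly into /var/www/svn/drupal are not versioned data, which is why the manual test showed nothing. A minimal hedged sketch of the hook plus a round-trip test from a scratch working copy; the log path and test file name are assumptions for illustration:

        # --- contents of /var/www/svn/drupal/hooks/post-commit (make it executable: chmod +x) ---
        #!/bin/sh
        /usr/bin/svn update /var/www/html/drupalsite >> /var/log/svn-deploy.log 2>&1
        # --- end of hook ---

        # round-trip test from any scratch checkout:
        svn checkout file:///var/www/svn/drupal /tmp/drupal-test
        cd /tmp/drupal-test
        echo "deployment test" > deploy-test.txt
        svn add deploy-test.txt
        svn commit -m "test post-commit deployment hook"
        ls -l /var/www/html/drupalsite/deploy-test.txt    # should appear if the hook fired

    Note that the hook runs as whatever user serves the repository (apache for mod_dav_svn, or the svnserve user), so that account needs write access to /var/www/html/drupalsite for the update to succeed.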

  • Is "DSLAM congestion" a legitimate reason for slow DSL?

    - by Jay Bazuzi
    My DSL has been extremely slow in the evenings recently. To test it, I telnet to my DSL modem and ping the gateway; this way I eliminate internet congestion and local network issues. In the mornings I get 30 ms - 50 ms pings. In the evenings it bounces around a lot, but 10000 ms pings are common. I complained to Qwest support, and they said it was a known issue on their end, their engineers were working on it, and they wouldn't say anything else. A couple of days later I complained again, and they sent out a technician. He tested my house wiring and found that one of the lines had a short. It was an unused line, so we disconnected it, and he said things looked better and left. My daytime speeds improved at this point, but evenings are still bad. I complained to Qwest support again, and they said it was a problem with DSLAM congestion at their end, and that they were working on it, but gave no ETA. My neighbor has Qwest DSL and doesn't seem to have these problems, which seems strange; I go use her network when I absolutely must get online and mine is behaving badly. I can't tell if they're yanking my chain or not. Regardless, these speeds are crap. I'm paying for 7 Mbps but am lucky if I get a tenth of that in the evenings. My kids like to watch Netflix streaming movies, and it's just impossible after 5pm or so. Should I wait it out? Will complaining again produce any results? Should I change my subscription to a lower speed until they fix their end? Or switch to cable?

  • nginx proxy_pass POST 404 errors

    - by Scott
    I have nginx proxying to an app server, with the following configuration:

        location /app/ {
            # send to app server without the /app qualifier
            rewrite /app/(.*)$ /$1 break;
            proxy_set_header Host $http_host;
            proxy_pass http://localhost:9001;
            proxy_redirect http://localhost:9001 http://localhost:9000;
        }

    Any request for /app goes to :9001, whereas the default site is hosted on :9000. GET requests work fine. But whenever I submit a POST request to /app/any/post/url it results in a 404 error. Hitting the url directly in the browser via GET /app/any/post/url hits the app server as expected. I found online other people with similar problems and added proxy_set_header Host $http_host; but this hasn't resolved my issue. Any insights are appreciated. Thanks. Full config below:

        server {
            listen 9000; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

            root /home/scott/src/ph-dox/html;
            # root ../html; TODO: how to do relative paths?
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ /index.html;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            location /app/ {
                # rewrite here sends to app server without the /app qualifier
                rewrite /app/(.*)$ /$1 break;
                proxy_set_header Host $http_host;
                proxy_pass http://localhost:9001;
                proxy_redirect http://localhost:9001 http://localhost:9000;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                allow ::1;
                deny all;
            }
        }
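
    It may be worth ruling out the rewrite as the culprit: nginx can strip the /app/ prefix without any rewrite by giving proxy_pass a URI part, which replaces the matched location prefix. A hedged equivalent of the same location block (the proxy_redirect mapping is an assumption about how the app issues redirects):

        location /app/ {
            proxy_set_header Host $http_host;
            # the trailing "/" makes nginx replace the matched "/app/" prefix with "/"
            proxy_pass http://localhost:9001/;
            proxy_redirect http://localhost:9001/ http://localhost:9000/app/;
        }

    If POSTs still return 404 with this variant, checking the app server's own access log (or sniffing port 9001) will show which URI actually arrives, which separates an nginx rewriting problem from an application routing problem.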

  • Compile php 5.3 ldap extension

    - by toups
    So I'm trying to follow the very un-descriptive guide at my web host for compiling a new PHP extension:

        Compiling PHP 5.3 extensions. You can also compile and load your own extensions. Here's how:
        1. Download and unpack the extension (from PECL, for instance).
        2. If the extension is already compiled (most binary PHP loaders will be, for instance), skip to step 6.
        3. /usr/local/php53/bin/phpize
        4. ./configure --with-php-config=/usr/local/php53/bin/php-config
        5. make
        6. Copy the module to your .php/5.3/ directory.
        7. Assuming your user is called "username" and your module is named "mymodule.so", add the following to your .php/5.3/phprc: extension = /home/username/.php/5.3/mymodule.so

    I downloaded the OpenLDAP stable release, uploaded the unpacked gzip via FTP to my server, and did steps 3, 4 and 5. Now at step 6 it says "copy the module...". My question is: where is the module for me to copy? Sorry if it's obvious and I'm not seeing it; this is my first time compiling a PHP extension :O
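
    For what it's worth, the LDAP extension for PHP is not a separate PECL package; it ships inside the PHP source tree as ext/ldap, and what phpize/make produce lands in a modules/ subdirectory of the build directory. OpenLDAP itself is only a prerequisite (its client libraries and headers), not the thing to run phpize against. A hedged sketch of the whole sequence, with the PHP version and paths following the host's guide:

        # grab a PHP 5.3 source tarball matching the host's PHP version, then:
        cd php-5.3.x/ext/ldap
        /usr/local/php53/bin/phpize
        ./configure --with-php-config=/usr/local/php53/bin/php-config \
            --with-ldap=/usr          # point this at the OpenLDAP install prefix if it differs
        make
        ls modules/                   # the compiled extension appears here as ldap.so
        cp modules/ldap.so ~/.php/5.3/
        echo "extension = /home/username/.php/5.3/ldap.so" >> ~/.php/5.3/phprc

    If make completed but modules/ is missing, the build probably ran against the OpenLDAP tree rather than ext/ldap, which would explain the "missing" module.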

  • See former key sequences in vim

    - by Vasiliy Sharapov
    Sometimes I share screenshots and clips of vim usage with others. It would be nice to expand on the part of the status bar highlighted in this picture: I would like some way to make previous key sequences visible as well, such as: y2w jj f[ p 2d - so you can see the key sequences leading up to the current one. I'll elaborate on my wish list at the bottom. Is something like this available as a plugin or vim script? The sheer number of scripts available on vim online makes this hard to find by keyword. Some features I would hope for (but seem improbable):
    - Delimit key sequences with a non-keyboard character instead of a space, and a different one for the current command, so y2w jj f[ p 2d might become y2w¦jj¦f[¦p » 2d
    - Replace keys that have a letter alternative with that alternative, such as the right arrow key (^[[C) with the equivalent l. Edit: to clarify, the right arrow key is a valid key in vim but has no character to represent it; the l key performs the same function and could/should substitute for it.
    - Have previous keystrokes run all the way to the beginning of the line (instead of just one or two), and just have vim's command prompt overwrite it when necessary.
    - Replace some keystrokes with a more elegant alternative, for example hhhhh with 5h, or more impressively d2f) with d% (in the appropriate situation).

  • Unable to Access Localhost after starting Xampp

    - by user7370
    OS: Windows XP Professional, SP2. A few days back I had XAMPP Lite 1.7.1 installed and was able to access localhost and phpMyAdmin through the browser. Today it suddenly stopped working. In Firefox, after I type http://localhost/ nothing happens, just a blank white screen. I removed all the files in the xampplite folders and re-installed version 1.7.1; it's of no use. Then I installed XAMPP Lite 1.7.2 (the latest), which I had downloaded from the XAMPP website; again it's of no use. Apache and MySQL are running, though. I am trying to use a locally installed WordPress, as I have a theme ready and want to convert that design to WordPress, test it, and start using it online. Running 'Port-Check' on the XAMPP control panel showed this result:

        Service            Port    Status
        Apache (HTTP)      80      C:\xampplite\apache\bin\httpd.exe
        Apache (WebDAV)    81      free
        Apache (HTTPS)     443     C:\xampplite\apache\bin\httpd.exe
        MySQL              3306    C:\xampplite\mysql\bin\mysqld.exe
        FileZilla (FTP)    21      free
        FileZilla (Admin)  14147   free
        Mercury (SMTP)     25      free
        Mercury (POP3)     110     free
        Mercury (IMAP)     143     free
        Mercury (HTTP)     2224    free
        Mercury (Finger)   79      free
        Mercury (PH)       105     free
        Mercury (PopPass)  106     free
        Tomcat (AJP/1.3)   8009    free
        Tomcat (HTTP)      8080    free

    I also have Skype installed, but it's not using port 80 (I have read that this is a common cause; the port under Skype's options is 65013). And when I run file:///C:/xampp/htdocs/index.php it shows "Something is wrong with the XAMPP installation :-(". Please help with this problem. Thanks, Sharath Kumar
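
    Since Apache is shown bound to port 80 yet http://localhost/ returns a blank page, two quick checks from a command prompt can narrow things down: confirm which process really owns port 80, and make sure localhost still resolves to 127.0.0.1 (a mangled hosts file produces exactly this symptom). A hedged sketch; the PID is whatever netstat reports:

        rem what is actually bound to port 80, and which process owns it?
        netstat -ano | findstr ":80 "
        rem replace 1234 with the PID reported by netstat
        tasklist /fi "PID eq 1234"

        rem make sure localhost still resolves to 127.0.0.1
        findstr localhost %SystemRoot%\system32\drivers\etc\hosts
        ping -n 1 localhost

    If another program owns port 80, or if the hosts file no longer maps localhost to 127.0.0.1, fixing that is likely to bring the blank page back to life; http://127.0.0.1/ is also worth trying directly to separate the two cases.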

  • Windows RDP cannot connect to x64 server from XP SP3+

    - by Tom
    Hi all, I have a strange problem that I can't seem to find the answer to anywhere online. The issue has to do with using Windows RDP to connect to our servers. Here is what works:
    - XP/Vista client (any SP) connecting to a 32-bit Server 2003 machine
    - XP (SP2 and lower) client connecting to a 64-bit Server 2003 machine
    Here is what does not work:
    - XP SP3+/Vista client connecting to a 64-bit Server 2003 machine
    It appears that the issue is that XP SP3 and Vista clients cannot connect to x64 Server 2003 boxes. After entering the username/password, we get an error message saying the below, and the connection drops: "To log on to this remote computer, you must have Terminal Server User Access permissions on this computer. By default, members of the Remote Desktop Users group have these permissions. If you are not a member of the Remote Desktop Users group or another group that has these permissions, or if the Remote Desktop Users group does not have these permissions, you must be granted these permissions manually." The issue is that the user is a member of the Administrators group, which has permission. Also, logging in using the same username, but from an XP SP2 machine, has no problems at all. I hope I explained this well enough, and any help/insight that can be given would be greatly appreciated. Thanks, Tom
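
    Since the error text is specifically about membership in Remote Desktop Users and the Terminal Services logon right on the target box, it may be worth confirming both from the server console for the affected account before chasing client-side differences. A hedged sketch; DOMAIN\username is a placeholder:

        rem list who is currently in the Remote Desktop Users group on the server
        net localgroup "Remote Desktop Users"

        rem add the affected account explicitly, even though it is already an administrator
        net localgroup "Remote Desktop Users" DOMAIN\username /add

        rem show the effective group memberships of the logged-on account
        whoami /groups

    The "Allow log on through Terminal Services" user right (Local Security Policy > User Rights Assignment) on the x64 box is the other thing the error message points at and is quick to eyeball while you are there.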

  • RAID 5 Install on Ubuntu Server 12.04 [closed]

    - by tarabyte
    Environment: Ubuntu Server 12.04, installing from a bootable flash drive. Error: "No root file system is defined. Please correct this from the partitioning menu." I'm trying to set up a personal file server with software RAID 5. I just got three hard drives for this, but haven't found any solid documentation, and I'm unsure of the basic way to partition my hard drives. Can someone upload a screenshot of their "partition disks" screen so that I can compare it with mine (attached)? Should I set the bootable flag? Do I need a /home partition? A /boot partition? Should I "use [my partition] as: Ext4 journaling file system", or make that field "physical volume for RAID"? I am an engineer, but I have only a cursory knowledge of all things Linux. If you know of any good learning resources I'd be happy to hear about those too (that way I don't have to blindly follow deprecated tutorials online). (The screenshot would be here, but I don't have a high enough reputation yet; please vote up :)) Thank you. References I've looked into: https://help.ubuntu.com/community/Installation/SoftwareRAID https://help.ubuntu.com/12.04/serverguide/advanced-installation.html http://forevergeeks.com/setup-ubuntu-server-with-raid-5/
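
    The "No root file system is defined" message simply means no partition has been given the "/" mount point yet. If the OS is meant to live on the array too, a common layout is: on each disk, a small partition for /boot plus a large partition marked "Use as: physical volume for RAID", then "Configure software RAID" to build the RAID5 device, and finally format that MD device as ext4 with mount point /. If instead the array will only hold data and the OS sits on its own disk, the post-install route is simpler; a hedged sketch assuming the three data disks are /dev/sdb, /dev/sdc and /dev/sdd:

        sudo apt-get install mdadm
        sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
        sudo mkfs.ext4 /dev/md0
        sudo mkdir -p /srv/storage
        sudo mount /dev/md0 /srv/storage
        # record the array so it assembles at boot
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
        sudo update-initramfs -u

    An /etc/fstab entry for /dev/md0 (ideally by UUID) completes the setup; a separate /home partition is optional either way.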

  • WRTU54G-TM router with 3rd party firmware; Can custom firmware include stock binary portions?

    - by dlamblin
    I've been doing a lot of reading online about the Linksys WRTU54G-TM router model that I now own. It seems getting a custom firmware onto it is not a problem, but no one is talking about retaining the VoIP features (yet). So far they're all disappointed that it's not a SIP machine and uses GSM over IPsec. Personally I don't care about using it with non-T-Mobile services. If I take the original firmware, shouldn't I be able to extract it and its SquashFS image, and then move all of the T-Mobile-specific binaries that enable the calling features over to a custom firmware installation (maybe OpenWRT)? You might ask why, and the reason is that if I do this I could retain my calling features, which I do want, and also ssh to the router and use it to run additional software, as any OpenWRT router could do. Does anyone know if this can be done, and how the firmware's binaries could be extracted and installed correctly? Update: I have found someone working on third-party WRTU54G-TM firmware. I am still interested in the second part of my question, namely: can't the stock firmware images be pulled apart and the closed-source (if any) binary kernel modules moved into another, more flexible custom firmware?
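
    On the "pulling the stock image apart" half: the usual approach is to carve the firmware with a tool like binwalk and unpack the SquashFS root it finds, at which point the T-Mobile binaries and kernel modules are ordinary files to inspect. A hedged sketch; the image file name is a placeholder, and older SquashFS variants sometimes need a matching unsquashfs build:

        # identify and extract embedded images (kernel, squashfs, etc.)
        binwalk -e wrtu54g-tm-stock.bin

        # inspect whatever was unpacked; binwalk drops results in an *.extracted directory
        ls _wrtu54g-tm-stock.bin.extracted/
        ls _wrtu54g-tm-stock.bin.extracted/squashfs-root/lib/modules/ 2>/dev/null

    The catch with transplanting the pieces is that proprietary kernel modules are built against one specific kernel version and ABI, so they generally refuse to load on a custom firmware's different kernel; userspace binaries travel somewhat better but still depend on matching libraries.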

  • How to handle server failure in an n-tier architecture?

    - by andy
    Imagine I have an n-tier architecture in an auto-scaled cloud environment with, say:
    - a load balancer in a failover pair
    - a reverse proxy tier
    - a web app tier
    - a db tier
    Each tier needs to connect to the instances in the tier below. What are the standard ways of connecting tiers to make them resilient to failure of nodes in each tier? i.e. how does each tier get the IP addresses of each node in the tier below? For example, if all reverse proxies should route traffic to all web app nodes, how could they be set up so that they don't send traffic to dead web app nodes, and so that when new web app nodes are brought online they can send traffic to them? I could run an agent that updates all the configs on all the nodes, but it seems inefficient. I could put an LB pair between each tier, so the tier above only needs to connect to the load balancers, but how do I handle the problem of the LBs dying? This just seems to shunt the problem of tier A needing to know the IPs of all nodes in tier B to all nodes in tier A needing to know the IPs of all LBs between tiers A and B. For some applications, they can implement retry logic if they contact a node in the tier below that doesn't respond, but is there any way that some middleware could direct traffic to only the live nodes in the following tier? If I were hosting on AWS I could use an ELB between tiers, but I want to know how I could achieve the same functionality myself. I've read (briefly) about heartbeat and keepalived; are these relevant here? What are the virtual IPs they talk about and how are they managed? Are there still single points of failure using them?
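
    keepalived is indeed the usual self-hosted answer to the "LB pair dying" part: the pair shares a floating virtual IP via VRRP, the tier above only ever talks to that VIP, and when the master fails the backup takes the address over within a second or two, so there is no single point of failure at that layer. A minimal hedged sketch (interface name, router id and address are placeholders); the backup node carries the same block with state BACKUP and a lower priority:

        # /etc/keepalived/keepalived.conf on the primary load balancer
        vrrp_instance VI_1 {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 100
            advert_int 1
            virtual_ipaddress {
                10.0.0.100
            }
        }

    Dead nodes in the tier below are then the proxy/LB layer's job: health checks there (e.g. haproxy backend checks) take failed web app nodes out of rotation, so the VIP answers the "what address do I talk to" question and health checks answer "is it alive".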

  • VNC as a Support Tool Over the Internet

    - by dosboy
    I'd like to set up an environment where I can use VNC to remotely support my clients over the internet. No VPNs involved. I've used the UltraVNC repeater in the past, but the problem is that it requires a dedicated Windows server. What I'd like to do is as follows: VNC Client (me) - NAT - Internet - NAT - VNC Server (the person I'm offering support to) I'd basically like the same functionality that the UltraVNC repeater offers, but the only internet environment I have to host something on is a Linux shared server (standard hosting - PHP, Apache, etc.). Requirements: Multiple platform support for both Client and Server - specifically Mac and Windows Allows for connection with multiple NATs involved (Client and Server side) Will allow me to use my existing hosting environment for any repeater that might be involved I believe the way this would work is that the Server (the person I'm offering support to) when online would connect to a listener on the internet. When they needed support I would connect my Client to the same listener, see them connected, and use the listener (man-in-the-middle) to piggyback my Client to connect to their Server. I'm open to using any software (not limiting myself to VNC) but would prefer a FOSS solution (which is why I'm leaning towards VNC). Any advice would be greatly appreciated.
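
    One FOSS way to get the repeater behaviour using nothing but shell access on the shared Linux box is a pair of SSH tunnels meeting on that host: the support target pushes its VNC port up to the server, and the supporter pulls it back down, so both sides only ever make outbound connections through their NATs. A hedged sketch (host name, account and ports are placeholders, and shared hosting plans do not always allow long-lived SSH sessions):

        # on the person-being-supported's machine (VNC server side): expose local VNC (5900) on the relay host
        ssh -N -R 5500:localhost:5900 account@sharedhost.example.com

        # on my machine (VNC viewer side): bring the relay port back to localhost and connect
        ssh -N -L 5500:localhost:5500 account@sharedhost.example.com
        vncviewer localhost::5500        # or localhost:5500, depending on the viewer

    This works the same on Mac and Windows (PuTTY/plink can set up both tunnel types), and it has the side benefit of encrypting the VNC session end to end through the relay.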

  • Proper 16:9 video size for non-HD 4:3 video (for youtube/vimeo)

    - by Xeoncross
    Since high-definition video came out on all the online sites, the default aspect ratio of the player has changed from 4:3 to 16:9. This means that people posting SD video have to resize some of their videos to get them to fit right. For example, NTSC DVD quality (aka 480i/p) is 720x480 pixels (width x height), whereas low-end high definition (720p) is 1280x720. Anyway, now that the video players are built for HD, you will find that uploading standard-quality videos results in videos that are "letterboxed", meaning they have extra black bars on the top and bottom (or sides). Correct me if I'm wrong, but in order to get a 720x480 video to fit a box that is designed for HD, the best practice would be to crop some of it off so that it fits as roughly 720x405, since 16/9 = 1.78 and 720 / 1.78 = 405 (in practice 404 is often used so the height stays divisible by 4). The same would stand for 640x480 (old TV quality) video, which would need to be cropped to 640x360, correct?
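
    If the goal is to do the cropping before upload rather than let the player letterbox, a command-line sketch with ffmpeg would look roughly like this; the file names are placeholders, audio is just copied, and DVD sources with non-square pixels may need an extra scale/setdar step:

        # crop a 720x480 source to a 16:9-friendly 720x404 frame (trims 38 px from top and bottom)
        ffmpeg -i input-dvd.mp4 -vf "crop=720:404" -c:a copy output-169.mp4

        # crop 640x480 (square-pixel 4:3) material to an exact 16:9 640x360
        ffmpeg -i input-vga.mp4 -vf "crop=640:360" -c:a copy output-169.mp4

    The crop filter centres the cut by default, so this removes an equal strip from the top and bottom, which is usually what you want for talking-head or landscape footage.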
