Search Results

Search found 93612 results on 3745 pages for 'inquisitive one'.

  • What, Why and How of SEO

    Every day new domains are registered and new people introduce their websites. With time, the enthusiasm fades once one realises that optimizing a site is not as easy as one first thought.

  • A Guide to Google PageRank

    Getting to the top of Google for search terms related to your industry is one of the main goals of many modern businesses. Many crucial factors determine how high you appear in the SERP (Search Engine Results Page), one of which is Google PageRank.

  • 13.04 Ringtail USB installation overwrote my username

    - by barnhillec
    I had to use a USB stick written from an .iso to upgrade from 12.10 to 13.04 (Raring Ringtail). Great results, except that the USB installer asked me to provide an identity, so I provided the same one I used for 12.10. Unfortunately it overwrote my existing identity with a fresh one, so now I have no ownership of any of my previous profiles, etc. A dumb mistake, but is there a way to take ownership of my previous user identity and merge it with the current one?
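
    Reclaiming the old files usually comes down to a recursive ownership change. A minimal sketch, assuming the old profile survives under /home/olduser and the fresh account is newuser (both names hypothetical):

        sudo chown -R newuser:newuser /home/olduser   # take ownership of the old profile
        rsync -a /home/olduser/ /home/newuser/        # then merge it into the current home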

  • Motherboard HDDPWR1 connector

    - by Eric Leschinski
    I need help identifying the name of a connector. I have a Gateway DX4870-UB318 computer. I opened the case and wanted to attach another hard drive, but to my surprise the one existing SATA hard drive was connected to the motherboard with this connector: And here is the spot on the motherboard where the power was supplied. What is the name of this adapter and where can I get another one? Clues: This computer was bought new in October 2013 from Best Buy, box number DX4870-UB318. The Gateway folks won't divulge the type of motherboard it has, nor give specs on it. On the wire itself is an identification code: H.35090NJ01-000. Next to the connector on the motherboard it says HDDPWR1, and the second one says HDDPWR2. This cable has two SATA power connectors and one mystery connector. The power supply has no Molex power cables and no SATA power connectors! This is the most bizarre hard drive power system I've seen. I guess the motherboard makers are trying to relieve desktop power supplies of the burden of providing adapters (Molex, SATA, other) for optical drives and hard drives. Can someone put a name on that white, flat, 6-pin HDD power connector? My solution: I can buy a "SATA Power Y Splitter Cable" to provide more connections for powering SATA devices.

  • Bind ADFS 2.0 service to a specific IP address

    - by ccellar
    I have one server with ADFS 2.0 and a few websites on it. One of the websites is Dynamics CRM, which listens on a specific IP address on port 443. Dynamics CRM provides a metadata file for configuration purposes, which can be used to configure a relying party trust with ADFS. It is accessible at the URL https://auth.contoso.com/FederationMetadata/2007-06/federationmetadata.xml The problem is that ADFS 2.0 installs a service which registers the following urlacl: https://+:443/FederationMetadata/2007-06/ This means that accessing https://auth.contoso.com/FederationMetadata/2007-06/federationmetadata.xml returns the metadata file of ADFS, not that of Dynamics CRM. I've tried deleting the default urlacl and adding (one of them at a time)
        https://192.168.1.2:443/FederationMetadata/2007-06/
        https://adfs.mydomain.com:443/FederationMetadata/2007-06/
    but neither of them worked; instead, the ADFS service failed to start up completely. Is there any way to bind this service to an IP address? At the moment I see only two alternatives:
    - Bind the service to a non-standard port. This leads to problems, because it means the ADFS website also has to use a non-standard HTTPS port.
    - Install ADFS 2.0 on a different server (this is my favourite alternative; however, it is not possible in every situation...)
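
    For anyone reproducing the attempt described above, the reservation changes are made with netsh; a sketch, with CONTOSO\adfssvc standing in for whatever account the ADFS service actually runs as:

        netsh http show urlacl | findstr FederationMetadata
        netsh http delete urlacl url=https://+:443/FederationMetadata/2007-06/
        netsh http add urlacl url=https://192.168.1.2:443/FederationMetadata/2007-06/ user=CONTOSO\adfssvc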

  • How to compare old laptop to new laptop?

    - by Lasse V. Karlsen
    I hope this question doesn't get closed at once :) I have an old laptop, a Compaq NC4200, which is running its final laps around the track these days. The battery is dead, and everything kinda runs slow. It also has only 1GB of memory, and even though I don't know if it can take more, I probably wouldn't be able to get hold of any that matches without having to special-order it. The size, however, has been ideal for my usage pattern, so I'm looking to replace it with a similarly sized laptop, at least in the same size category. However, it's been a while since I tried keeping track of CPUs, so I have a question. The old laptop has an Intel Pentium M 760 1.86GHz processor. One laptop I found online has an Intel Pentium SU4100 1.3GHz dual-core. This type of processor seems to be quite common in the price and size range I've been looking at. What kind of relative performance boost could I expect from the old one to the new one? I am not expecting "about 7.45x speed", but some indication would be nice. For instance, dual-core tells me it might be akin to 2.6GHz, but I assume I can't simply compare 1.86GHz to 2.6GHz and expect the new one to run about 1.4x as fast; I expect more these days. Or is that unrealistic for this kind of processor? Do I need to up my price range and go for a 2+ GHz processor?

  • What exactly is a Mobile mouse? + Mouse Recommendation

    - by chobo2
    I am really disappointed with Logitech. My first cordless wireless mouse was from them and it lasted like 5 years. So I decided to get another one from them: http://www.futureshop.ca/catalog/proddetail.asp?logon=&langid=EN&sku_id=0665000FS10099373&catid= And this mouse sucks, bad. After 6 months it broke. I returned it under warranty and got a new one; now 4-6 months later it is on the verge of breaking again... You pay like $50 for this mouse and it lasts like 6 months; that's sad. I just lost faith in Logitech mice, as I remember my bro also had a Logitech mouse and it broke after like 6 months. He then bought another Logitech mouse (different model) that has been working for maybe 2 years (and no signs of breaking), but I am not crazy about that mouse (I don't like the 2 buttons by the wheel) and I'm not sure if they even sell it any more (maybe they got rid of it because it lasts too long). http://www.tigerdirect.ca/applications/SearchTools/item-details.asp?EdpNo=1578495&CatId=1285 So I am looking at a Microsoft mouse: http://www.futureshop.ca/catalog/proddetail.asp?logon=&langid=EN&sku_id=0665000FS10125565&catid= I am looking at this one, but I am not sure what they mean by "mobile mouse". I think that is what MS calls notebook mice, so I am not sure if this would be a good mouse to get for a desktop. I see it uses a micro USB receiver, but I am not sure if the mouse is smaller than a standard one. Almost all the mice I looked at at futureshop.ca or Staples are labelled notebook mice or mobile mice, so I'm not sure which mice would be right for me. I don't want a corded one, though. I really liked the LX6 design a lot, but it can't last more than 6 months. Thanks

  • KVM recommendations

    - by alex
    I recently tried out a cheaper KVM solution, a dual-monitor one with a USB hub and audio/mic support. It seemed the perfect KVM. However, these are my experiences. When using my keyboard via the PS/2 connection, none of my multimedia keys work (useful, because it has a volume control and my speakers do not, only on a wireless remote control). I plugged the keyboard in via the USB port, and it seemed to work. However, I believe that to switch the hub from PC to Mac you need to use a keyboard combo, which is only supported when the keyboard is plugged in via PS/2. Sometimes the middle mouse button doesn't work when connected via PS/2. The multi-monitor support is VGA; I just found out that by the looks of things my Mac Mini outputs DVI Digital only (though this is my fault!). The Mac works with 2 screens, but switching and switching back can cause it not to display on the 2nd screen unless I go detect displays again. My question is: is there a KVM out there that supports these features?
    - USB keyboard and mouse inputs with full multimedia keys support
    - Dual-monitor DVI connections
    - Hotkey to change PCs, plus a physical button
    - USB hub
    - Emulates attached screens when switching to a different computer
    - Works with PC and Mac
    - Audio/microphone support
    Does one exist that won't cost the world? So far the only one that seems to support all this (that I can tell) is this one. UPDATE: I ended up buying the Aten CS1642. It's expensive, but it seems to work great!

  • How can I trigger the creation of a new CLB file?

    - by Xperimental
    I'm currently having a problem with an application using COM running on Windows Vista. The application runs OK on one machine, but doesn't work on a similarly configured machine. Both machines are virtual images originating from the same source image. While searching the registry for causes of this error, I came across the CLBVersion key in HKCR\CLSID, which seems to have something to do with COM. The value of the key differs between the two machines (0x6 on the erroneous one, 0xc on the working one). There are also files containing the same numbers in their filenames in the %SystemRoot%\Registration directories of the machines, called R000000000006.clb and R00000000000c.clb respectively. I have already searched the Windows event log for anything leading to the creation of those files (I searched by the creation date of the files). Now a few questions regarding the registry key and the files:
    - Is it correct that this is connected to COM?
    - What is the function of the files?
    - What causes the creation of a new "CLBVersion"?
    - Is there a way for me to trigger the creation of a new CLB file?
    Edit: I have now found out that this has nothing to do with my application error, but I would still be interested in details about the registry key and the files. An installation of Visual Studio 2005 brought the second machine to the same configuration (0xc in the registry and the file) as the other one.
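
    A quick way to compare the state of the two machines described above, from a cmd prompt on each box:

        reg query HKCR\CLSID /v CLBVersion
        dir %SystemRoot%\Registration\*.clb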

  • Alter charset and collation in all columns in all tables in MySQL

    - by The Disintegrator
    I need to execute these statements in all tables for all columns:

        alter table table_name charset=utf8;
        alter table table_name alter column column_name charset=utf8;

    Is it possible to automate this in any way inside MySQL? I would prefer to avoid mysqldump. Update: Richard Bronosky showed me the way :-) The query I needed to execute in every table:

        alter table DBname.DBfield CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

    Crazy query to generate all the other queries:

        SELECT distinct CONCAT( 'alter table ', TABLE_SCHEMA, '.', TABLE_NAME, ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;' ) FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'DBname';

    I only wanted to execute it in one database, and it was taking too long to execute all in one pass. It turned out that it was generating one query per field per table, when only one query per table was necessary (distinct to the rescue). Getting the output into a file was how I realized it. How to generate the output to a file:

        mysql -B -N --user=user --password=secret -e "SELECT distinct CONCAT( 'alter table ', TABLE_SCHEMA, '.', TABLE_NAME, ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;' ) FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'DBname';" > alter.sql

    And finally, to execute all the queries:

        mysql --user=user --password=secret < alter.sql

    Thanks Richard. You're the man!

  • IMAP proxy as a POP3 hub?

    - by mailman stan
    Simple scenario, complicated technology: One family receiving mail from five email addresses via POP3 into one Outlook inbox on a single PC. Now we'd like to be able to replicate that single inbox across multiple devices (eg. desktop PC, laptop, netbook, smartphone). If we continue using POP3 as the mail transfer protocol, messages will be downloaded to one device and will not be visible to the others; replies will likewise be isolated on the sending machine. If we switch to IMAP, I understand that we can have multiple devices maintaining a shared view of an inbox hosted at the server end, but what about multiple accounts? I tried changing the account configuration in Outlook to fetch from the mail providers' IMAP service instead of POP3, which does give a shared view across multiple devices but also causes Outlook to create a separate inbox and PST for each account. This is awkward because it means there are five separate folders that need to be checked, and Outlook tools like search filters and rules don't seem to work across accounts. To get what I want (five accounts delivered into one shared mailbox) it seems that I would need some sort of intervening server that collects mail (using POP3) from all our accounts into a single inbox while preserving the original destination addresses, and then serves it up to all our devices using IMAP. Is this workable? Is it a good approach? Is there an easier way?
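
    The "intervening server" is commonly built from a mail retriever feeding one local mailbox that an IMAP server (e.g. Dovecot) then serves to every device. A minimal sketch with fetchmail, assuming hypothetical providers and credentials and a single local user "family":

        # ~/.fetchmailrc (chmod 600); one poll line per POP3 account
        poll pop.provider1.example protocol pop3 user "mom@provider1.example" password "secret1" is family here
        poll pop.provider2.example protocol pop3 user "dad@provider2.example" password "secret2" is family here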

  • eMachine W5243 won't POST; fans run but optical drive will not open.

    - by NicciAdonai
    Symptoms are as described in the title. The machine reacts to the power button being hit by spinning up the two fans: CPU and PSU. The hard drive (SATA) spins up as well. No other reaction. This one symptom is particularly weird, though: the optical drive will not open with the IDE cable attached, but if I unplug it from the mobo it will. I can turn the PC on with it attached: won't open; then unplug the IDE while it is still on: WILL open; then plug the IDE back in with the PC STILL ON: WON'T open. I have disconnected every peripheral unnecessary for POST. These include: mouse/keyboard, PCI modem, the IDE optical drive (power and data), and the SATA HDD (power and data). Video is onboard. The only two things connected are DB15 video and the power cable. There were two 512MB DDR2 sticks of RAM in it. I have tried running it with just one of them, then switched the other in. Currently seated is a completely different 1GB stick that I keep around for troubleshooting purposes, and I have tried it in both slots. I have replaced the CMOS battery with a used one I had lying around, which worked in the computer it came out of. I have tested the PSU with a tester to confirm it was good, then tried connecting another PSU just in case; same symptoms. I have even tried a suggestion I found elsewhere on this site wherein one disconnects power from the PSU and then presses the PC's power button twice, thereby "resetting" the PSU. Currently I am trying yet another suggestion: turn it on and wait an inordinate amount of time for POST. Any help would be appreciated.

  • Big speed difference on a network link with and without VPN tunnels

    - by xirtyllo
    Scenario: We have a network link between two offices. The link is provided by a third-party company through a VLAN on their network, but to us it is totally transparent, as if we had a simple Ethernet cable going from one location to the other. We have one router at each side of the link, with 3 VPN tunnels between the two. The test: When I test the speed of the network link with the routers in place, with one laptop directly connected to the router on each side, I consistently get ~30/35Mbps. But if I take out the routers and test the link connecting the laptops directly to the Ethernet cable at each side, I consistently get ~85/88Mbps. It's quite a big performance hit, and I would tend to think that the VPN tunnels are responsible for the slowdown. Is it normal that this configuration (two routers with three VPN tunnels between them) takes away so much bandwidth? More info: The encryption algorithm used for the VPN tunnels is AES128. The router models are Zyxel USG200 and Zyxel USG1000, and their CPU, memory, and storage use is well within normal limits. The nominal bandwidth of the network link is 100Mbps. The network link in question is supplied by a third-party company (the building in between our two offices). Basically it passes through their network as a VLAN, but the VLAN is completely transparent to us (e.g. no configuration required on our side, just like one single cable from end to end). Unfortunately (or maybe fortunately) I cannot directly test different router configurations, as I'm not the person in charge of it.
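
    For a repeatable version of the test above, iperf gives comparable numbers in both scenarios; a sketch, assuming iperf3 can be installed on both laptops (192.0.2.10 is a placeholder for the far side's address):

        iperf3 -s                      # on the laptop at one site
        iperf3 -c 192.0.2.10 -t 30     # on the other laptop: 30-second TCP throughput test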

  • How to generate customized sudoers files in puppet depending on the environment they're deployed to?

    - by gozu
    The sysadmins are present in the sudoers files of all environments, but other sudoers are not. Different environments all have slightly different sudoers. Most of the time, 90% of users are the same and 10% vary, so we cannot have only one sudoers file for everything. Right now we are using Puppet with 10 different files, with names like sudoers.production1, sudoers.production2, sudoers.production3, sudoers.testing1, sudoers.staging1 and so forth. Puppet then picks the file to deploy based on the server's $domain (ex: dbserver.staging1.acme.com) or $hardwaremodel. It works fine, but it's a nightmare to maintain so many files. I'd like to autogenerate the sudoers files based on the server's domain and have only one big file with all the sudoers permissions for all users and all environments. Something that looks like:

        User_Alias ADMINS = abe, bob, carol, dave

        case $domain {
          "staging1.acme.com" {
            # add dev1, dev2, tester1, tester2 to the sudoers file
          }
          "testing2.acme.com" {
            # add tester1, tester3, tester4 to the sudoers file
          }
        }

    What's the best way to go about this? Suggestions for alternatives are welcome; I'd appreciate any tips. Update 1: For security reasons, we'd rather not concatenate a bunch of files from a folder located on a Puppet client, in case someone puts a file in there (maliciously or not) and either breaks the combined file or inserts something into it. Most importantly, for usability, we'd like to keep the number of sudoers-related files (fragment or complete) on the Puppet server to either 3 (prod/stage/test) or preferably 1 file. This file would (somehow) generate the sudoers files on the Puppet server and send one customized file to each Puppet client. The purpose of this is that searching for a username in a single file and removing it is quicker than doing it across 11 files. When adding a user to a bunch of environments it won't be as quick, but only one file would need to be opened and looked at, greatly reducing the chances of an omission. Our sudo version is 1.6.9p8, so we can't use an /etc/sudoers.d folder, only a single sudoers file.

  • Amazon AWS EC2 + Puppet, get Puppet to know AWS instance tags

    - by Piotr Jasiulewicz
    I am having a problem with my AWS deployment; I'm fairly new to AWS and Puppet. So, coming to my question: can you distinguish Puppet nodes by AWS machine tags or CNAME domains? A little background about the plan:
    - have multiple clusters of machines: one PHP cluster, one legacy PHP cluster, one Java cluster, one Perl cluster
    - control configuration with Puppet - still pretty new to Puppet, but as a developer I like the idea of being able to version-control the configuration of servers
    - have autoscaling enabled on those clusters - obviously the main benefit of the cloud, which makes the much higher cost worth it for any reasonable performance (those Amazon machines are slower than my phone...)
    - deployment controlled by Capistrano, which makes things a lot easier
    So in AWS you get those super nasty public/private machine DNS names... there is no way you can identify machines by those. To ease the problem, it seems AWS wants you to tag everything - so I did. I found a script that makes a CNAME record for each machine from the tag "ShortName", thanks to the Route53 API. Every machine has a ShortName tag that becomes its CNAME; unfortunately Puppet still resolves the private DNS name. I'd like to have node 'perl-cluster'{} in Puppet. Has anyone any clue how to achieve this? Thanks
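
    One common approach (an assumption on my part, not something from the post) is to expose the tag as a custom Facter fact and match nodes on that fact. Reading the tag from inside an instance might look like this, assuming the AWS CLI is installed and the instance's IAM role may call ec2:DescribeTags (the region is a placeholder):

        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        aws ec2 describe-tags --region us-east-1 \
          --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=ShortName" \
          --query 'Tags[0].Value' --output text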

  • Two hosting providers running simultaneously... possible / not possible? good practice / unnecessary?

    - by user29600
    For the sake of their reputations, I won't mention the names, so I'll just use: Business I worked for previously - ABC Web Dev; Hosting company they used - XYZ Hosting. I recently found out that XYZ Hosting had some sort of incident where they ended up losing a lot of their clients' data - including ABC Web Dev's. ABC Web Dev was able to recover some of their customers' websites after pulling them from their local development computers and putting them up on another hosting provider. They ended up losing a lot of clients because of it and their reputation was ruined. I'm starting my own web dev company and I don't want to run into this same issue. I'm planning on using Rackspace, but although they are a great company, according to Wikipedia they have still had downtime in their past. I thought it might be a good idea to try to run two providers at once, to ensure that if anything happened to one, the websites would still be live on the other. I know the websites would have to be pulling from one server at all times, but if there's a way to redirect requests to the second server when the first one is down, that would solve my issue. As a note, we will have a staging environment set up locally which will allow for quick recovery if a provider did have any issues; however, I'd like to avoid any downtime at all if possible. So my questions are:
    - Has anyone tried running two providers simultaneously?
    - Would this be considered good practice, or am I going too far?
    - Is there really any way to run two simultaneously, where one server acts as a backup?

  • Award Phoenix BIOS not recognizing my SATA HDD

    - by josh
    What am I doing wrong? I have a custom-built comp with a Fatal1ty AA8XE mobo. It has 4 SATA ports and one IDE port. When I first got it, I had a really hard time putting in more than one hard drive. Right now I have one 120GB IDE HDD on master and my DVD+-RW on slave, connected to the one IDE spot on the mobo. I ripped a bunch of movies and filled up my HDD, so I got a WD 80GB SATA drive. I plugged it into SATA1 and hooked up the power, turned on the comp, went into the BIOS. The only thing in any option in any of the menus in this crazy-lookin' BIOS is a thing that says "SATA mode". I put it on IDE, set it so PATA is primary, SATA is secondary. Booted up my comp, nothin'. Not recognizing the SATA. I went back into the BIOS and checked it all again. I saw that it says SATA2 and SATA4 are the secondaries, so I put it on SATA2. Booted, nothing; same with SATA4, same with SATA3, all the same as SATA1. The BIOS and the OS are not recognizing the drive as being there at all. I even downloaded and printed the almost 100-page manual for the mobo, read the entire thing, and still can't figure it out. I know there are a lot of people out there smarter than me when it comes to computers. So please, somebody, anybody, please tell me something that I'm not seeing. Some setting somewhere that I didn't configure right. There is something, obviously, but I can't find it. As far as I can tell, everything is set perfectly fine for my 120GB to be the master and the SATA to be the slave. I don't know what I'm doing wrong, but I'm seriously about to throw this computer out the window. Thank you in advance to whoever attempts to help.

  • ZFS: Mirror vs. RAID-Z

    - by John Clayton
    I'm planning on building a file server using OpenSolaris and ZFS that will provide two primary services: be an iSCSI target for XenServer virtual machines, and be a general home file server. The hardware I'm looking at includes 2x 4-port SATA controllers, 2x small boot drives (one on each controller), and 4x big drives for storage. This allows one free port per controller for upgrading the array down the road. Where I'm a little confused is how to set up the storage drives. For performance, mirroring appears to be king. I'm having a hard time seeing what the benefit of using RAIDZ over mirroring would be. With this setup I can see two options: two mirrored pairs in one stripe, or RAIDZ2. Both should protect against 2 drive failures and/or one controller failure... the only benefit of RAIDZ2 would be that any 2 drives could fail. The storage should be 50% of capacity in both cases, but the first should have much better performance, right? The other thing I'm trying to wrap my mind around is the benefit of mirrored arrays with more than two devices. For data integrity, what, if any, would be the benefit of RAIDZ over a three-way mirror? Since ZFS maintains file integrity, what does RAIDZ bring to the table... doesn't ZFS's integrity checking negate the value of RAIDZ's parity?
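
    For concreteness, the two layouts being compared, sketched with hypothetical OpenSolaris device names:

        # Option 1: a stripe across two mirrored pairs (one disk of each pair per controller)
        zpool create tank mirror c0t0d0 c1t0d0 mirror c0t1d0 c1t1d0
        # Option 2: a single RAIDZ2 vdev; any two of the four disks may fail
        zpool create tank raidz2 c0t0d0 c1t0d0 c0t1d0 c1t1d0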

  • What open source ecommerce webshops offer #1: usability, #2: PayPal integration, and #3: ease of administration and use

    - by Jonathan Hayward
    I've spent several days trying to deploy Satchmo, in the process asking several questions about deployment (http://stackoverflow.com/questions/11277407/can-anyone-explain-this-error-message-deploying-a-satchmo-project-under-gunicorn, http://stackoverflow.com/questions/11277685/is-there-a-howto-to-fcgi-for-deploying-satchmo, and http://stackoverflow.com/questions/11278295/what-is-the-most-stable-release-of-satchmo). Django's tagline is "The web framework for perfectionists with deadlines," and Satchmo's tagline is equally forceful: "The webshop for perfectionists with deadlines." I'm looking more to set up, configure, and design rather than code for this one, and I'm taking the hint that, for me at least, the "with deadlines" bit is something I cannot manage. Deployment has been a time sink. So, taking a step back, I don't specifically need to edit and extend the source code. What I want is, first, good usability and a clean experience for the end user; second, ease of deployment, installation, management, and maintenance - enough that even on a slow day it should be at most one day's work to install, one day's work to get running, and one day's work to rebrand as white label (for simple branding). What ecommerce webshops should I be looking at?

  • How to fix a Postfix/MySQL/Dovecot Unknown Host Issue?

    - by thiesdiggity
    I am having an issue with one of my Postfix/Dovecot mail servers and I'm unsure how to fix the problem. I will try to explain it in detail; here it goes: I have an Ubuntu server set up using virtual hosting with Postfix, Dovecot and MySQL. We have one domain set up as a virtual domain; for this example I am going to use mail.example.com. Under that domain we have one email address. I have another server (MS Exchange) set up using another one of my sub-domains, ex.example.com. The problem is that when I SMTP into the account on mail.example.com and try to send an email to an account on ex.example.com, I get the email returned back to us with an "unknown host" error. Now, I know that the mail.example.com server can resolve the ex.example.com domain, because I can ping/dig it while SSH'd into the server. I can also log into Postfix via Telnet and send an email to an ex.example.com mailbox. I'm guessing that it has something to do with Postfix/Dovecot looking locally for the domain in the virtual domain list because of the TLD (example.com)? If that's the case, how do I get Postfix/Dovecot to only look locally for the entire hostname (mail.example.com) and, if it doesn't find it, send the mail to the correct server by looking up the MX/A records (which I know exist and are set up correctly)? I have been working on this all day and any guidance would be GREATLY appreciated! Thanks for your time!
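
    A first diagnostic step for this kind of bounce (a suggestion, not from the original post) is to list the domain classes Postfix claims as its own; if example.com or ex.example.com shows up in any of them, Postfix will try to deliver the mail locally instead of following the MX records:

        postconf mydestination relay_domains virtual_alias_domains virtual_mailbox_domains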

  • Distribute IP packets across different NIC queues with MSI (Message Signalled Interrupts)

    - by Ansis Atteka
    The NetXtreme II BCM5709 Gigabit Ethernet NIC supports MSI (Message Signalled Interrupts) and has 8 queues. Each queue has its own interrupt handler in /proc/interrupts. What I am trying to accomplish is to tell the NIC which packets should go to which queue. Questions:
    - Is it possible to manually specify which IP packets should go to which queue by encapsulated protocol type (e.g. IPsec packets go in one queue, while TCP packets go in another)?
    - If it is possible, how can I do it under Linux?
    - If it is not possible, should I look at MSI-X capable NICs to solve this problem?
    More details: We have one interface that is terminating IPsec and forwarding/terminating TCP connections. The IPsec packet decryption is inlined (meaning that decryption is done under the same ksoftirqd/X context). We are trying to find out whether we can improve total performance if IPsec packets are scheduled on a different CPU than TCP packets. One more limitation is that the IPsec code is not MP-safe, hence I cannot run it under more than one ksoftirqd/X. By default it seems that packets are distributed/hashed by source IP over the 8 NIC queues. The bottleneck is IPsec, which chokes out TCP traffic while it is decrypting/encrypting IPsec packets at ~100% CPU. The OS is Ubuntu 10.10 (2.6.32-27-server) and the NIC is a Broadcom BCM5709.
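
    Per-protocol steering is NIC-dependent, but the generic half of the problem, pinning a given queue's interrupts to a chosen CPU, looks like this; a sketch, assuming root, with eth0 and IRQ 42 as placeholders read from /proc/interrupts:

        grep eth0 /proc/interrupts           # list the per-queue IRQ numbers
        echo 2 > /proc/irq/42/smp_affinity   # CPU mask 0x2 routes IRQ 42 to CPU1 only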

  • iOS 6 in-app email does not send from within any app that supports it

    - by Joe Termine
    A strange problem: last night I upgraded to the final release of iOS 6 on my iPhone 4S and my iPad 2. When I open an app that allows you to send emails from within the app (e.g. Adobe Reader, TurboScan, etc.) - it doesn't matter which one - I am prompted with the email dialog from within the app and I can compose the message, but when I go to send, one of two things will happen: either the email sending sound will "swoosh" and the dialog will close (leading me to think it worked), or some apps with good error handling will say there is an "error sending email". The error logs on my device are not reporting errors. It's just that the email doesn't really send. I have two Exchange mailboxes on these devices. One connects to a corporate network hosting on-premise Exchange 2007 and the other connects to Gmail over the Exchange interface. I have attempted to delete and re-pair these accounts (one at a time) without any change. I'm wondering if others are experiencing this problem, or whether I should just wipe the devices and chalk it up to (another) failed upgrade. Thoughts much appreciated. Joe

  • Can't connect to a Hyper-V VM from anywhere but the host OS

    - by Elbelcho
    I have an unusual situation on hand, where I'm able to connect to a Hyper-V guest VM from the HOST, but not from anywhere but the host. The VM is running Win2k8R2 and has IIS installed and Remote Desktop enabled. If I browse to the IP from the host OS, the IIS7 page displays. I can also RDP into the guest OS from the host, as well as ping it. From OFF the host, RDP, web and ping all fail. If I completely shut off the guest VM's firewall, ping will then start to respond, but RDP and port 80 still don't. The physical host machine has 2 NICs installed, but only one is plugged in. The one plugged in has a static IP. I have one Hyper-V virtual network and it's set to external. The guest VM has one NIC with a different static IP than the host, but both are on the same subnet. The host machine is joined to the domain; the guest VM is not. Any suggestions? Thanks so much for any help you may be able to provide!

  • How to wire 20 computers, 20 phones and 1 server into a LAN?

    - by John Smith
    I currently have 3 switches (two Netgear JFS524s with 24 ports, one Belkin with 16 ports), a server, and a DSL Internet router. The main question is how to connect the switches together. The two Netgears are next to each other, yet one switch is about 100 feet away and serves about 5 computers and 5 phones. If I connect them with only 1 wire, will that limit bandwidth, e.g. will all 23 computers be limited to the speed of one CAT5e cable? If I connect the switches with 2 cables, will this give a speed boost? What's the ideal scenario - should I just move the third switch next to the other two? Will the speed of a computer connected to the white switch be the same as a computer connected to the top switch? Will moving the white switch right next to the top switch, and having 16 wires coming 100 feet instead of 1 wire coming 100 feet, make it faster? EDIT 1: I actually have a NETGEAR ProSafe GS105 Gigabit switch, though it only has 4 ports. Do you think I can make use of it in the current setup? Like connect all 3 switches and the server into it, and keep the Internet router and phone server on one of the slower switches. EDIT 2: Everyone mentions gigabit switches, but will they make any difference with 10/100 network cards? I'd then have to use gigabit cards in every computer too? I could in the server perhaps, but the users will be on 10/100.
