Search Results

Search found 5279 results on 212 pages for 'customer'.


  • SSRS2008R2 report times out, but the underlying query executes in the Management Studio

    - by Matthew Belk
    A customer of mine recently moved servers; the new server runs SQL Server 2008 R2, while the old one ran SQL Server 2005. The new server has substantially better CPU, RAM, and disk performance than the old one, but several reports time out while executing. When I run the underlying query in SQL Server Management Studio, it completes in sub-second time. The exact error message returned via the Report Manager UI is:

        An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database. (rsReportServerDatabaseError)
        Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    It must be noted that this database is not purely analytical; it is also fairly transactional, although the transaction volume is not exceptionally high. What can I do to improve the performance of the SSRS query engine? Are there settings in the data source I can adjust, or in the SSRS config files?

    Read the article

  • Is there any way to retire an AT&T Yahoo email account

    - by KindaSortaAsking
    Here are the facts as I (pretty sure) know them: Yahoo handles AT&T's DSL email accounts. I've called AT&T tech support and customer service and they say they can't help, but there has GOT to be a way to do this; it's too simple a thing not to be able to do. I got behind on my DSL bill and my account got suspended. When I paid my bill, they said my account had been deactivated and I had to get a new account. When I tried to register my account with my old email address it would not let me, saying it was in use, so I used a new email address. The old email address is tied to a DSL account that can NEVER be reactivated. There has got to be a way to retire the old email address so that I can re-create it as a subaccount on my new DSL account. I'm not interested in anything that was in the old account (emails, addresses, etc.) - I just want the address back.

    Read the article

  • Error when starting a .NET application from a ThinApp application

    - by user50209
    One of our customers uses SAP through VMware ThinApp. In SAP there is a button that launches a .NET application from a server. When the .NET application is started directly, there is no error. If the user tries to start it by clicking the button in the ThinApp application, the following error is displayed:

        Microsoft Visual C++ Runtime Library
        R6034
        An application has made an attempt to load the C runtime library incorrectly.
        Please contact the application's support team for more information.

    After clicking "OK" it displays:

        Microsoft Visual C++ Runtime Library
        Runtime Error!
        R6030 - CRT not initialized

    So, does the customer have to install some components into his ThinApp package (if yes, which?) to get things working? Regards, inno

    ----- [EDIT] -----

    @Sean: It's installed the following way: the .exe of the .NET application is on a mapped drive on a server. All clients have the prerequisites installed (the .NET Framework, for example) and start the .exe from the mapped drive. The ThinApp application tries to start this application and throws the exceptions above. AFAIK there are no entry points configured for this application. I should also mention that the .NET application crashes during execution: we have a debug mode implemented that shows what the application is doing, and after some steps it crashes. The interesting point is that it's a .NET application, not a C++ application.

    Read the article

  • Create a mailbox in qmail, then forward all incoming message to Gmail

    - by lorenzo-s
    I needed to let PHP send mail from my web server to my web app users, so I installed qmail on my Debian server:

        sudo apt-get install qmail

    I also updated the files in /etc/qmail to specify my domain name, and then ran sudo qmailctl reload and sudo qmailctl restart:

        /etc/qmail/defaultdomain   # Contains 'mydomain.com'
        /etc/qmail/defaulthost     # Contains 'mydomain.com'
        /etc/qmail/me              # Contains 'mail.mydomain.com'
        /etc/qmail/rcpthosts       # Contains 'mydomain.com'
        /etc/qmail/locals          # Contains 'mydomain.com'

    Emails are sent without any problem from my PHP script to any email address, using the standard mail PHP function. Now the problem: if I send mail from PHP using [email protected] as the sender address, I want customers to be able to reply to that address, and ideally all mail sent to this address should be forwarded to my personal Gmail address. At the moment qmail seems to reject any incoming mail with "invalid mailbox name". Here is a complete SMTP session I established with my server:

        me@MYPC:~$ nc mydomain.com 25
        220 ip-XX-XX-XXX-XXX.xxx.xxx.xxx ESMTP
        HELO [email protected]
        250 ip-XX-XX-XXX-XXX.xxx.xxx.xxx
        MAIL FROM:<[email protected]>
        250 ok
        RCPT TO:<[email protected]>
        250 ok
        DATA
        554 sorry, invalid mailbox name(s). (#5.1.1)
        QUIT

    I'm sure I'm missing something related to mailbox or alias creation; in fact I did nothing to define that mailbox anywhere. I searched the net and the numerous qmail man pages, but I found nothing.
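
    In stock qmail/netqmail, an address that has no local user behind it is normally handled by the alias mechanism: a .qmail-<name> file under /var/qmail/alias both makes the mailbox name valid and tells qmail where to deliver. A minimal sketch along those lines, treating "info" and the Gmail destination as placeholder values:

        # Sketch (untested): accept mail for info@mydomain.com and forward it
        # to a personal Gmail address. Adjust the alias name and address.
        sudo sh -c 'printf "&forward-to-me@gmail.example\n" > /var/qmail/alias/.qmail-info'
        sudo chmod 644 /var/qmail/alias/.qmail-info

        # qmail reads .qmail files per delivery, but reloading does no harm:
        sudo qmailctl reload

    With a forwarding line in place, replies to the address are accepted and relayed on; a local copy can be kept as well by adding a ./Maildir/ delivery line to the same .qmail file, assuming a maildir exists under the alias user's home.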

    Read the article

  • How would it be possible to discover a cable modem's MAC remotely?

    - by amateurenthusiast
    I was reading the back archives of a Canadian privacy law blog, and the author linked to a judicial decision. Apparently, as part of an investigation involving Yahoo chat and Google's old 'Hello' image-trading program, the officer was able to determine a suspect's modem's MAC address:

        In order to determine who STEPHTOSH was, the officer did a trace on a programme called WHO IS in an effort to learn from where STEPHTOSH was coming. WHO IS is a command program available to the public. The officer was able to ascertain that the person using the name STEPHTOSH was a Rogers Internet customer. The officer was able to obtain the Internet Protocol address, also known as the I.P. There is only one location for an I.P., which is unique to that subscriber. By use of the website known as DNS STUFF.com, one is able to find with which company this I.P. is registered. It was ascertained that the I.P. address used by STEPHTOSH was registered to Rogers Cable, from the Toronto area. The officer also learned the Cable Modem MAC address used by STEPHTOSH. This was all the information the officer was able to amass.

    Now, it was my understanding that the MAC address of any given device can only be seen if you are one 'hop' away on the network. The suspect in question was in Markham and the officer was part of the Toronto Police, so it's conceivable that they both used Rogers internet, but would that still put them only one 'hop' away from each other? I thought the first hop after the modem was usually the ISP, and a NetBIOS query against this guy's machine would return the Ethernet card's MAC, not the modem's. So: is this guy on the same Rogers subnet as the suspect's cable modem, is that functionality part of Google's Hello (I could only imagine that being possible if Hello operated as a virtual LAN or something), does the officer have remote access to the ARP caches of the routers at Rogers, or is he just full of crap and lying to make his case stronger?

    Read the article

  • Finding the best network latency between two countries

    - by Yoav Aner
    I know there are many tools to test bandwidth and latency, but they all rely on having at least one host from which you can run the tests. I wonder whether there's an online source or some other way to guesstimate the latency or speed between two countries in general. For example, would a customer in Japan get lower latency if the server is located in Singapore or in Australia? Is a user in India likely to get higher download speed from a server in the UK or in the US? Are there any online resources or clever ways to answer those questions with a reasonable degree of accuracy?

    [UPDATE]: Thanks for the great suggestions from Raffael Luthiger; I didn't know about those looking glass servers. The submarine cable maps were also really cool to discover (thanks to Jesper Mortensen). It also seems wise to ask network professionals in the area for their experience, but obviously I don't have access to them - at least some of them are on SF :) However, I'm still a little unsure how to combine those resources into actual measurements. This is the information I have: two countries (A, B). I have IP addresses of customers in country A (I can obtain those from the web server log files, for example). Presumably I can find some looking glass servers in country B and run a trace to those IPs. What are the best measurements to use? Are there any scripts that help automate at least some of this process?
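
    A looking glass gives one-off traces; for something repeatable, one rough approach is to mine the most frequent client IPs out of the existing access logs and measure RTT to them from a vantage host in (or near) country B. The sketch below assumes a combined-format web log at /var/log/nginx/access.log (a placeholder path) and that ping is available; it is only an approximation, since ICMP may be filtered and the vantage host is a stand-in for "a server in country B".

        # Top 20 client IPs from the access log (first field in combined format):
        awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn \
          | head -n 20 | awk '{print $2}' > /tmp/top_ips.txt

        # Average RTT to each of them from this vantage point:
        while read -r ip; do
            avg=$(ping -c 10 -q "$ip" 2>/dev/null \
                  | awk -F'/' '/^(rtt|round-trip)/ {print $5}')
            printf '%-16s %s ms\n' "$ip" "${avg:-unreachable}"
        done < /tmp/top_ips.txt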

    Read the article

  • Windows 7 64-bit will not register a 32-bit DLL

    - by Bad Neighbor
    I'm trying to install a 32-bit Oracle Instant Client onto several Windows 7 PCs; this version is the one required by the customer's software. I have successfully installed it on about a dozen PCs using the same installer, but two machines refuse to register a DLL. The two PCs are of different make and model, and I have been able to install this software on these models in the past. The installer throws up an error dialog (the screenshots are not reproduced in this excerpt); the file does get copied to the location referenced in that dialog. If I choose to ignore the error and manually register the DLL later, regsvr32 also fails with an error. This happens whether I use the 32-bit (SysWOW64) or 64-bit version of regsvr32, Command Prompt is run as admin, and the account I'm logged into the PC with is an admin. I've tried copying the file into the SysWOW64 folder, but I get the same error. This same installer works on other PCs. To further complicate the issue, one of the two PCs also will not register an OCX file from a different 32-bit installer. Both PCs are relatively new and have standard software installed. We use MS Forefront for security, but disabling that didn't change the behavior. What am I missing?

    Read the article

  • Sending email to a Google Apps mailbox via exim4

    - by Andrey
    I have a hosting server with several users. One of the customers decided to move his email account to Google Apps and added the corresponding MX records, so he can receive email now. But when it comes to sending email from my server to those addresses, the messages never make it. I guess it's because exim still thinks these domains are local. This is what I see in the logs (example.com is my domain, example.net is the customer's domain):

        2010-06-02 14:55:37 1OJmXp-0006yh-UG <= [email protected] U=root P=local S=342 T="lsdjf" from <[email protected]> for [email protected]
        2010-06-02 14:55:38 1OJmXp-0006yh-UG ** [email protected] F=<[email protected]> R=virtual_aliases:
        2010-06-02 14:55:38 1OJmXq-0006yl-2A <= <> R=1OJmXp-0006yh-UG U=mail P=local S=1113 T="Mail delivery failed: returning message to sender" from <> for [email protected]
        2010-06-02 14:55:38 1OJmXp-0006yh-UG Completed
        2010-06-02 14:55:38 1OJmXq-0006yl-2A User 0 set for local_delivery transport is on the never_users list
        2010-06-02 14:55:38 1OJmXq-0006yl-2A == [email protected] R=localuser T=local_delivery defer (-29): User 0 set for local_delivery transport is on the never_users list
        2010-06-02 14:55:38 1OJmXq-0006yl-2A ** [email protected]: retry timeout exceeded
        2010-06-02 14:55:38 1OJmXq-0006yl-2A [email protected]: error ignored
        2010-06-02 14:55:38 1OJmXq-0006yl-2A Completed

    What should I do to fix that?
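
    If this is a stock Debian exim4 (config generated by update-exim4.conf), the usual culprit is the customer's domain still being listed as locally handled, e.g. in dc_other_hostnames or in whatever domain list the virtual_aliases router uses (the log suggests such a router exists). A hedged sketch of the cleanup, with example.net standing in for the real domain:

        # 1. Find where example.net is still declared local:
        grep -r "example.net" /etc/exim4/ /etc/aliases 2>/dev/null

        # 2. If it appears in dc_other_hostnames, remove it there (or edit by hand):
        sudo sed -i "s/:example\.net//; s/example\.net://; s/^dc_other_hostnames='example\.net'$/dc_other_hostnames=''/" \
            /etc/exim4/update-exim4.conf.conf

        # 3. Regenerate the runtime config and restart exim:
        sudo update-exim4.conf
        sudo service exim4 restart

        # 4. Confirm exim now routes the domain remotely via its MX records:
        sudo exim4 -bt [email protected]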

    Read the article

  • Directory tree in a Resource without extraction...

    - by Corelgott
    Hi all, I am looking for a way to store a complete directory, including subdirectories, in an application's resources and use it without extracting it. Details: we would like to use GeckoFX (Gecko as a C# component) in one of our applications. GeckoFX needs the XULRunner and needs to find its folder structure. We also have some other data which I would prefer not to extract onto the customer's PC, at least not onto anything persistent like an HDD. Getting the complete directory into the resources is not a big deal: compress it to one file and done. But using it without writing it to disk is something else. I have a strong dislike of temp folders and such things. Would anything like a RAM drive be possible - some part of RAM being mounted? Does something like this even exist as a library, or would it only be possible with a device driver? Any thoughts on this? Thanks in advance! Corelgott

    Read the article

  • Network latency and speed of light

    - by James
    This was kind of covered by the following: Is minimum latency fixed by the speed of light?, but I would like to follow up a bit. The scenario is as follows: we have two opposing sites, one on the West Coast of the US and one in Ireland. The customer is in central Europe and has requested a latency test. Ireland gives responses of ~65-70 ms. However, the West Coast guys claim to be faster, with a response of 60 ms. Now, a quick check says that light in fibre would take about 42 ms to make the trip to the States and 8.5 ms to Ireland. Obviously that is a single hop and does not include routers, switches, firewalls, protocol overhead, etc. Would I be right to call BS on their figures? As a final note, I tested a ping to a Google IP address that was allegedly on the West Coast, from a site that covered a similar distance, and was amazed to get a response time of 20 ms, suggesting ICMP packets that travel at twice the speed of light. So a) what am I missing, and b) am I right to suspect shenanigans?

    UPDATE: Guys, thanks so far for your help; I have been reading various previous questions on this. About 5 years ago I had an issue where the hop from the UK to Ireland added 10 ms of latency no matter what we did, and in the end I moved the servers. So imagine my surprise when I have guys claiming they are 5 ms faster with a transatlantic trip. So again, should I call BS? Oh, and assume both sites are normal mortals that don't have access to Google's magical routing, warp drives or flux capacitors. :)
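
    As a sanity check, the physical floor is easy to compute: light in fibre travels at roughly two thirds of c, about 200,000 km/s, and the round trip covers the distance twice. A back-of-the-envelope sketch (the great-circle distances are rough guesses, not measurements):

        # Minimum theoretical round-trip time over fibre, ignoring routing and
        # equipment delay. Distances are rough one-way great-circle estimates.
        rtt_ms() {  # usage: rtt_ms <one-way distance in km>
            awk -v d="$1" 'BEGIN { printf "%.0f ms\n", 2 * d / 200000 * 1000 }'
        }

        rtt_ms 9000   # central Europe <-> US West Coast  -> about 90 ms
        rtt_ms 1500   # central Europe <-> Ireland        -> about 15 ms

    On those numbers a 60 ms round trip from central Europe to the US West Coast sits below the physical floor, and the 20 ms ping to a "West Coast" Google address most likely terminated at a nearby anycast front end rather than on the West Coast.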

    Read the article

  • VMWare Raw Device Mapping Not Working

    - by George H. Lenzer
    While I'm waiting for VMware support to get back to me, I thought I'd ask here. I have a 400 GB LUN presented from a fibre channel SAN to my VMware host. It's legacy from another virtualization platform and I need to keep it as is to avoid a long period of downtime. I formatted my VMFS3 datastore with 4 MB blocks to allow disks of up to 1 TB. Then I tried adding my 400 GB disk as a raw device in physical compatibility mode, and I get the error:

        File is larger than the maximum size supported by datastore 'Base Test'. [Base Test]VMTEST01/VMTEST01_2.vmdk

    Originally I had the VMFS datastore formatted with 1 MB blocks, which was the cause of this problem, since the largest disk allowed would be 256 GB. But I deleted the datastore and reformatted it with 4 MB blocks. I've also tried using virtual compatibility mode for the raw device, but it still fails. Does anyone have any suggestions? I've been waiting for a little over a week for VMware, but that's fine because I'm not yet a paying customer; I'm still in the eval phase.

    Read the article

  • Restricting Access to Application(s) on Point of Sale system

    - by BSchlinker
    I have a customer with two point-of-sale systems, a few workstations and a Windows 2003 SBS server. The point-of-sale systems typically run QuickBooks Point of Sale and are logged in with a user who has restricted permissions / access (via Group Policy). Occasionally, one of the managers needs to run a few additional applications, including some accounting software. I have created an additional user for this manager, allowing them to log in and access the accounting software. The problem is that switching users on the system is disruptive, as QuickBooks takes a few minutes to close (as POSUser) and then reopen (as ManagerUser); if customers are waiting, this slows things down drastically. Since the accounting software is stored on a network drive, it would be easiest if the manager could simply double-click something, authenticate against the network drive / domain controller, and then the program would launch. When they close the program, the session to the network drive would be lost and the program would no longer be accessible. Is there any easy way to do this? Both users are on a domain and the system is Windows 7. I just don't want to require the user to switch back and forth. In a worst-case scenario, they forget to switch back and leave the accounting software wide open.

    Read the article

  • Reconnecting OST with Exchange

    - by syrenity
    Hi. I have quite a big problem with a customer's MS Exchange. The server got its disk filled about 2 weeks ago, so it's currently offline. They plan to upgrade it, but are in no hurry, as they use it mainly for OWA and backup; mail exchange is done via SMTP and POP3. While trying to diagnose another problem today, one of the users (following the ISP's instructions) removed the Exchange account from Outlook, which essentially left the OST orphaned. The user naturally hadn't moved the emails or any other data to the Archive / PST beforehand, so those emails exist only in the OST. So currently I'm trying to figure out how to restore them. There are a few options:

    1) Have the user buy some tool to convert the OST to a PST, and import it as an archive / main Outlook file.

    2) Reconnect Outlook to Exchange (once it's up), let it sync the old server content, then shut down Outlook, replace the new OST with the old one, start Outlook again in offline mode and move the messages to an archive.

    3) Any other method?

    Can someone advise what the best approach would be here? The versions used are Outlook 2007 and Exchange 2003. Thanks!

    Read the article

  • Determine if the "yes" is necessary when doing an SCP

    - by glowcoder
    I'm writing a Groovy script to do an SCP. Note that I haven't run it yet, because the rest of it isn't finished. Now, if you're doing an scp to a host for the first time, you have to accept the host key fingerprint; on future connections, you don't. My current solution - because I get 3 tries for the password, and I really only need 1 (it's not like the script will mistype the password; if it's wrong, it's wrong!) - is to pipe in "yes" as the first password attempt. This way, it will accept the fingerprint if necessary and use the correct password as the first real attempt. If the fingerprint prompt wasn't needed, it sends "yes" as the first password attempt and the correct password as the second. However, I feel this is not a very robust solution, and I know that if I were a customer I would not like seeing "incorrect password" in my output. Especially if it fails for another reason, it would be an incredibly annoying misnomer. What follows is the appropriate section of the script in question. I am open to any tactics that involve using scp (or accomplishing the file transfer) in a different way; I just want to get the job done. I'm even open to shell scripting, although I'm not the best at it.

        // build the scp command: copy from source host/path to target host/path
        def command = []
        command.add('scp')
        command.add(srcusername + '@' + srcrepo + ':' + srcpath)
        command.add(tarusername + '@' + tarrepo + ':' + tarpath)

        def process = command.execute()
        // forward the child's output; Groovy's standard helper is consumeProcessOutput()
        process.consumeProcessOutput(System.out, System.err)
        // first line answers a possible fingerprint prompt, second line is the password
        process << "yes" << LS << tarpassword << LS
        process << "yes" << LS << srcpassword << LS
        process.waitFor()   // note: the JDK method is waitFor(), not waitfor()

    Thanks so much, glowcoder
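
    For what it's worth, the usual way to sidestep both prompts is to pin the host keys ahead of time and switch to key-based authentication; OpenSSH normally asks for passwords and fingerprint confirmations on the controlling terminal rather than on stdin, so piped answers may never reach it anyway. A shell sketch under those assumptions (host names, users and paths are placeholders):

        # 1. Pin the remote host keys once, so no interactive "yes" is ever needed:
        ssh-keyscan -H srcrepo.example.com tarrepo.example.com >> ~/.ssh/known_hosts

        # 2. Switch to key-based auth instead of passwords (one-time, interactive):
        #    ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
        #    ssh-copy-id srcuser@srcrepo.example.com
        #    ssh-copy-id taruser@tarrepo.example.com

        # 3. The copy itself then needs no prompts at all:
        scp -o BatchMode=yes \
            srcuser@srcrepo.example.com:/path/to/file \
            taruser@tarrepo.example.com:/path/to/dest

    If passwords are unavoidable, sshpass or an expect script is the usual workaround, precisely because ssh reads the password from the terminal rather than from stdin.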

    Read the article

  • v2v of RHEL5 box - issues with retaining MAC address

    - by Alex Berry
    For the last week we have been troubleshooting a customer's Red Hat virtual machine running on ESXi. We've been using Veeam to try to create a replica off-site and have been having trouble getting it to work on a decent schedule, and recently we noticed issues with orphaned snapshots while looking at the datastore. You can see several snapshots in the same folder and it's causing issues with replication and backup, so we decided the cleanest way was to v2v the machine to another datastore so that we had a clean single-VMDK setup to work with. This is where our trouble started. We first attempted a v2v using VMware Converter, connecting to the powered-on machine, as we were having issues doing an offline v2v. This copied fine, but when I tried to set a static MAC using this article http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=507 the new VM wouldn't take the address: it simply obtained a new MAC, received a DHCP lease and then would only boot to a blank red screen, never the login screen. So the next step was an offline v2v, once we finally got it working. Same thing - I followed the KB to the letter and still it wouldn't take the MAC. I then tried again and, upon completion, compared the old and new VMX files, copying every identifier and variable possible, then unregistered both VMs, uploaded the new VMX file and booted, only to see the same results. Finally I did the same as above but copied the disk using dd to a second attached VMDK and attached that to the new VM - still no luck. After downloading the modified VMX file after the first boot and comparing it to the one I created, I found that the BIOS UUID had changed from the one I typed in manually, so I'm assuming that may be the snagging point, but I have no idea. I've never had this issue before on a P2V and I'm just wondering if someone could shed some light on this; maybe it's to do with RHEL licensing?
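
    For what it's worth, a static MAC normally has to agree in two places for a RHEL guest: the .vmx and the guest's own interface configuration. A rough sketch of both ends, with example values only (the VMware KB article above remains the authoritative procedure):

        # 1. With the VM powered off, set a static MAC in the .vmx (the datastore
        #    path is an example; editing by hand works just as well). Manually
        #    assigned MACs on ESX/ESXi must fall inside VMware's static range
        #    00:50:56:00:00:00 - 00:50:56:3F:FF:FF:
        vmx=/vmfs/volumes/datastore1/VMTEST01/VMTEST01.vmx
        sed -i '/^ethernet0.generatedAddress/d; /^ethernet0.addressType/d' "$vmx"
        echo 'ethernet0.addressType = "static"'        >> "$vmx"
        echo 'ethernet0.address = "00:50:56:00:12:34"' >> "$vmx"

        # 2. Inside the RHEL5 guest, make the interface config agree, otherwise
        #    the NIC shows up as a new device or stays down:
        grep HWADDR /etc/sysconfig/network-scripts/ifcfg-eth0
        #    -> set HWADDR to the same 00:50:56:00:12:34 and check for any
        #       persistent-net udev rules still pinning eth0 to the old MAC.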

    Read the article

  • More than 10k connections on a Linux VPS

    - by Sash_007
    My question is: what is causing this, and how do I check? We use a URL-masking script on the website - is it causing this? Please help. This is the notice we received from the provider: "We noticed that you are abusing our network, as you have made more than 10k connections on our node; due to this our node became unstable and all of our customers faced downtime because of your VPS. Please find the log details below for your reference."

        593 src=199.231.227.56 dst=58.2.236.196
        465 src=199.231.227.56 dst=192.223.243.6
        396 src=199.231.227.56 dst=58.2.238.191
        217 src=199.231.227.56 dst=58.2.236.197
        161 src=199.231.227.56 dst=20.139.83.50
        145 src=199.231.227.56 dst=192.223.163.6
        136 src=199.231.227.56 dst=125.21.230.68
        134 src=199.231.227.56 dst=125.21.230.132
        131 src=199.231.227.56 dst=20.139.67.50
        117 src=199.231.227.56 dst=110.234.29.210
        112 src=199.231.227.56 dst=65.52.0.51
        104 src=199.231.227.56 dst=202.46.23.55
        100 src=199.231.227.56 dst=202.3.120.4
        94 src=199.231.227.56 dst=117.198.39.22
        69 src=203.197.253.62 dst=199.231.227.56
        62 src=14.194.248.225 dst=199.231.227.56
        53 src=199.231.227.56 dst=192.223.136.5
        52 src=49.248.11.195 dst=199.231.227.56
        51 src=199.231.227.56 dst=117.198.38.15
        50 src=199.231.227.56 dst=192.71.175.2
        47 src=199.231.227.56 dst=61.16.189.76
        45 src=199.231.227.56 dst=122.177.222.17
        43 src=199.231.227.56 dst=115.242.89.40
        42 src=199.231.227.56 dst=103.22.237.215
        41 src=125.16.9.2 dst=199.231.227.56
        39 src=199.231.227.56 dst=117.198.35.90
        38 src=199.231.227.56 dst=203.91.201.54
        38 src=199.231.227.56 dst=14.139.241.89
        38 src=199.231.227.56 dst=111.93.85.82
        37 src=199.231.227.56 dst=65.52.0.56

    Note: the first column indicates the total number of connections to a particular IP; in total more than 10k connections were made.
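
    Before taking the provider's word for it, it is worth reproducing the numbers from inside the VPS; if a URL-masking / proxy script fetches remote pages on every hit, it will show up as the process owning most of the outbound sockets. A small sketch, assuming a stock Linux with net-tools installed (run as root):

        # Established TCP connections grouped by remote IP - compare with the list above:
        netstat -ntp | awk '/ESTABLISHED/ {split($5, a, ":"); print a[1]}' \
            | sort | uniq -c | sort -rn | head -20

        # Which local processes own those sockets (last column is PID/program):
        netstat -ntp | awk '/ESTABLISHED/ {print $7}' | sort | uniq -c | sort -rn

        # Overall socket counts by state (look for piles of TIME_WAIT or SYN_SENT):
        netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c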

    Read the article

  • Managing arbitrary user permissions under PureFTPd

    - by Sebastián Grignoli
    I need to provide an FTP service that can be web-managed in the simplest way possible. My customer wants to create folders and users, and give them read-only or read/write access arbitrarily. For example: the folder 'Documents' should be read-only for several users, writable for internal users, and invisible to the rest; the folder 'Pictures' should be read-only for journalists, writable for associates, and invisible to the rest; the folder 'Media' should be read-only, writable or invisible for arbitrary users specified in the admin. There could be a large number of users and folders. I can't find a good way to accomplish that. I thought I could give each user a home folder containing symlinks to the folders he has read access to, and make the user part of the folder's group when he has write access too, but now I think this won't work, because with PureFTPd (or ProFTPd) I can only specify the virtual user's mapping to a system user, and only one GID for each virtual user. My approach would require specifying several GIDs for each user (one for each folder he has write access to). I need to start programming this admin and I still don't know which approach would work, if any. Any ideas?
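
    One direction that sidesteps the one-GID-per-virtual-user limitation is to back each virtual user with its own system account and express the folder matrix as POSIX ACLs, which allow any number of named users per directory. A rough sketch, assuming PureFTPd's PureDB virtual users and an ACL-enabled filesystem; all names and paths are placeholders, and whether "invisible" folders are truly hidden or merely inaccessible depends on the read bit of the parent directory:

        # The filesystem holding /srv/ftp needs ACLs enabled (mount option "acl").

        useradd --system --shell /usr/sbin/nologin ftp_alice
        pure-pw useradd alice -u ftp_alice -d /srv/ftp   # same chroot for everyone
        pure-pw mkdb

        # 'Documents': read-only for alice, read/write for bob, no access for others
        setfacl -m u:ftp_alice:rx        /srv/ftp/Documents
        setfacl -m u:ftp_bob:rwx         /srv/ftp/Documents
        setfacl -d -m u:ftp_bob:rwx      /srv/ftp/Documents   # new files inherit bob's rights
        chmod o-rwx                      /srv/ftp/Documents   # everyone else: nothing

    The web admin would then only have to generate useradd / pure-pw / setfacl calls, which keeps the permission model out of the FTP daemon's configuration entirely.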

    Read the article

  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. The size of the data will probably grow significantly faster on some machines than on others. These are virtual machines on a VMware vSphere cluster, and the disk images are stored on a SAN. I'm trying to find a solution that uses disk space sparingly, while still allowing individual machines to grow easily. In theory, I would just create a big disk for each machine and use thin provisioning, letting each disk grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to e.g. 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. By the way, there's not even a thin-provisioning tag on serverfault.com.) Currently I'm planning to create big, thin-provisioned disks, but with a small LVM volume on them - for example, a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem as needed, even online. Now for the actual question: are there better ways to do this (that is, to grow the data size as needed without downtime)? Possible solutions include:

    - Using a thin-provisioning-friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size.
    - Finding an easy method of reclaiming free space on the partition (re-thinning?).
    - Something else?

    A bonus question: if I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs pvcreate /dev/sdX)? I think it's against convention to use raw disks without partitions, but it would make it a bit easier to grow the disks, if that is ever needed. This is all just a matter of taste, right?
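
    For the "grow as needed without downtime" half, LVM plus an online ext3/ext4 resize covers it; for the "re-thinning" half, the usual trick is to zero the free space inside the guest so the array or a Storage vMotion to a thin target can drop those blocks again. A sketch with placeholder volume names (VG "vg0", LV "data" mounted on /srv/data):

        # Growing online, after the virtual disk and its partition have been enlarged:
        pvresize /dev/sdb1                      # let the PV see the new size
        lvextend -L +50G /dev/vg0/data          # grow the logical volume
        resize2fs /dev/vg0/data                 # ext3/ext4 online grow, no unmount needed

        # "Re-thinning": zero out the free space so the hypervisor side can reclaim
        # it (e.g. via a Storage vMotion / clone back to a thin-format disk):
        dd if=/dev/zero of=/srv/data/zerofill bs=1M || true
        rm -f /srv/data/zerofill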

    Read the article

  • SharePoint 2010 Enterprise wiki - [New page] missing

    - by icelava
    I am trying to ramp up my knowledge of SharePoint deployment and usage (I've never done it before), because we have been directed to use SharePoint 2010 as a repository platform (wiki format) for our customer's infrastructure documentation. On my test virtual server, a new site was set up from the Enterprise Wiki template. I went into Site Actions > Manage Site Features to activate the Wiki Page Home Page feature. The default sub-web then went from /Pages to /SitePages and looks like the default Team template. The odd thing is that Site Actions is missing the New Page option. My colleague does not understand why this is the case, as it ought to be there. The original /Pages sub-web does have the option. What conditions influence the appearance of that option?

    UPDATE: Another phenomenon observed: in the Site Actions > View All Site Content view, the wiki document libraries listed in the grid have their hyperlink (e.g. "Site Pages") lead straight to the default page. It does not show a table listing of the pages in that document library, unlike the original Pages document library, which shows up as a listing as expected. I wonder if this hints at any problems.

    Read the article

  • Legalities of freelance security consultant (SQLi) [closed]

    - by Seidr
    Over the years I've gained a large amount of experience in programming (my main occupation) and server administration, and as a result have a fairly decent grounding in security practices. I'm also pretty good at spotting security flaws in software (including but not limited to SQLi), and have built up a list of sites that could definitely use some looking at. My question is: what are the legalities of me contacting these sites and saying something along the lines of "I've looked at your site and it appears vulnerable - customer data could be compromised - would you like me to fix it?". Could my finding out that the site is in fact vulnerable be construed as an attack itself? If the prospective client so wished, could they take me to court over this? When I find a vulnerable site, all I do is confirm and make a note of the vulnerability. I'm not in it for personal gain (although getting paid for FIXING it would be nice!), just curiosity. Is this a viable way to go about finding clients for this kind of work, or would you recommend a more 'legitimate' way? Any suggestions/advice would be greatly appreciated :)

    Read the article

  • High latency due to non-presence of a transit provider in my country

    - by nixnotwin
    My ISP, a state-owned incumbent, buys bandwidth from different transit providers. Whenever it buys transit it announces only a specific prefix (in most cases a hitherto unused one) through the new transit AS. For example, if it runs out of bandwidth, it buys bandwidth from a new transit provider and announces a new prefix through it, while the same prefix is not announced (or is announced with the lowest preference, so those routes are very rarely used) via the old transits, which continue to provide bandwidth. I am a business customer, so I have a fibre link to the ISP and a tiny subnet assigned to me. The subnet provided to me is part of a prefix announced by the AS of a transit provider which, it seems, has no presence in my country. So when I trace the packets, once they leave my ISP's AS they take about 275 ms to reach the transit provider's core router, which is located in the USA (half the world away). For upstream traffic my ISP uses a transit provider (tier 1) that has a presence in my country, but the return path is always through the transit in the USA, so the average latency is 400 ms. All users of other ISPs in my country reach my subnet via the USA. Even traffic from neighbouring countries and from Europe (which is much nearer) follows the path via the USA, and sites using CDNs also resolve to IPs in the USA. I have informed the ISP NOC about the issue and asked them to provide an IP subnet belonging to a prefix announced via a local transit (preferably a tier 1 transit provider), and I am waiting for a reply. My question: is this a serious issue that I should follow up to get resolved? When I compared the latency of other providers in my country, it is, in most cases, less than half of my ISP's. Why doesn't my ISP announce all its prefixes to all of its transit providers, so that packets can take efficient and nearby routes to reach prefixes that originate within its network?
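
    When escalating something like this, concrete evidence of where the prefix is originated and how the return path flows tends to help; a small sketch of the usual checks (203.0.113.10 is a placeholder for an address out of your own subnet):

        # Which prefix and origin AS your address is seen under globally
        # (Team Cymru's IP-to-ASN whois service):
        whois -h whois.cymru.com " -v 203.0.113.10"

        # Forward path and per-hop latency from your side:
        mtr --report --report-cycles 20 8.8.8.8

        # The return path is what matters here: run a traceroute back to
        # 203.0.113.10 from a route server / looking glass in a nearby country
        # (route-views.routeviews.org and the RIPE RIS looking glasses are the
        # usual public options) and attach both outputs to the NOC ticket.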

    Read the article

  • Problem with Outlook 2010 (SMTP AUTH LOGIN)

    - by Filipe YaBa Polido
    **IGNORE THIS QUESTION - SOLVED WITH A PYTHON SCRIPT, available at: http://yabahaus.blogspot.com

    I have to connect a customer's Outlook 2010 to a remote server on which I have no rights and no way to talk to the sysadmin. This is the thing, after installing and reviewing the logs in Wireshark:

    Outlook Express:

        HELO machine
        AUTH LOGIN
        username (base64 encoded)
        password (base64 encoded)

    and mail goes through.

    Outlook 2010:

        HELO machine
        AUTH DIGEST-MD5
        (response from server)
        Outlook sends just a *
        AUTH LOGIN
        password (base64 encoded)

    So I can send mail within the same domain, but can't send outside; it gives me a "relay denied" message. My point is: why on earth doesn't Outlook 2010 send the username AND the password? It can never log in the right way :| With other versions of Outlook it works fine, and with OE it works great - it authenticates and allows sending mail to a different domain. I've googled and nothing worked. I'm pretty sure I'm not alone with this one. My last resort will be to configure a local proxy/server that relays to the original one :| Any help would be appreciated. Sorry for my bad English, as it is not my native language. Thanks.

    Read the article

  • Is there a better way to do bonded vlan tagged interfaces with XEN

    - by AJ01
    We have a number of XEN servers, all running CentOS or RHEL. The VMs they run are all required to be on their own VLAN, for no other reason than that the customer expects them to be. Long story short, I can't change this right now. We are also required to have bonding enabled on the interfaces. So to accommodate this we enslave eth1 and eth2 to bond0. We then create a separate interface called bond0.VLANID, where VLANID corresponds to the correct VLAN, e.g.:

        # ifcfg-bond0.204
        DEVICE=bond0.204
        BOOTPROTO=static
        ONBOOT=yes
        VLAN=yes
        BRIDGE=xenvlan204

    Bridge to XEN: as you will see, we eventually have to bridge this out to XEN, and we do this by adding another interface called xenvlan204 (in this instance) which contains:

        # ifcfg-xenvlan204
        DEVICE=xenvlan204
        BOOTPROTO=none
        ONBOOT=yes
        TYPE=bridge

    XEN VM config: finally, in our XEN config for each VM we add

        vif = [ "bridge=xenvlan204" ]

    which allows the VM to access that particular VLAN.

    The problem: we've noticed a few issues with this setup. One is that we currently create the interfaces manually, which means that if we add more VLAN-enabled interfaces and bridges we usually have to restart xend, which is something I'm not so hot about; also, lower-level staff have their heads melted by the number of interfaces and the risk of a mistake is high. Secondly, it can take some time for a host to come up if it has a number of VLAN-tagged interfaces. Thirdly, it just doesn't scale well on the management side.

    The question: is there a better, more flexible way to do this (in particular with the Xen that ships with CentOS 5.3, 5.4 and 5.5, as we have to support all three) that leverages scripting or other solutions to allow an arbitrary number of interfaces to be created when a VM is instanced? Your advice and expertise is more than welcome.
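
    Since the ifcfg layout above is completely regular, one low-tech option is a small dom0 helper that stamps the pair of files out and ifups them on demand; the bridge only has to exist before the guest referencing it is started, so in principle neither the network service nor xend needs a restart. A sketch (names follow the convention above; untested across all three CentOS point releases):

        #!/bin/bash
        # Usage: ./add-vlan.sh 204   (run on the dom0)
        # Creates the bonded VLAN leg plus its bridge and brings both up.
        vid="$1"
        dir=/etc/sysconfig/network-scripts

        printf '%s\n' "DEVICE=bond0.${vid}" "BOOTPROTO=static" "ONBOOT=yes" \
                      "VLAN=yes" "BRIDGE=xenvlan${vid}"  > "${dir}/ifcfg-bond0.${vid}"

        printf '%s\n' "DEVICE=xenvlan${vid}" "BOOTPROTO=none" "ONBOOT=yes" \
                      "TYPE=Bridge"                       > "${dir}/ifcfg-xenvlan${vid}"

        ifup "xenvlan${vid}" && ifup "bond0.${vid}"   # bridge first, then the tagged leg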

    Read the article

  • Block SMTP session with sender domain which doesn't itself accept SMTP connection.

    - by bignose
    I'm administrating a mail service for a small business. Their mail host's internet connection is an ADSL service with a permanent IP address. Unfortunately, many misconfigured mail systems will happily deliver to this host, but, when the host attempts to send mail back (e.g. a bounce notice, or a normal response from someone), the declared sender's domain has an MX which refuses to receive connections from this host. That misconfiguration makes their system a one-way mail sender, which is a problem. How can I configure Postfix on this customer's mail host to refuse SMTP sessions that declare a sender domain which itself refuses SMTP from this host? That is, if the SMTP client declares a domain that we can't make SMTP connections back to, then there's not much point accepting the incoming connection in the first place. Note that I'm not, as some commenters have assumed, talking about checking whether the SMTP client will receive messages. The check I want is whether the declared sender's domain (regardless of who the current SMTP client is) will accept SMTP connections from here. In other words: when we get around to sending a message back, we'll need the sender's domain to accept SMTP connections; I want to do that check before accepting the incoming session. I'm imagining a late check (after the low-cost checks to winnow most of the rubbish connections) that keeps the client on the other end while it attempts an SMTP client connection back to the declared domain of the sender. If that connection is rejected, the incoming one is also rejected. I'm also open to other suggestions for how this problem might be addressed (short of not using this mail host at all, which isn't an option).
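
    The requirement described here (reject the session if the declared sender's domain won't take SMTP connections from this host) is largely covered by Postfix's built-in sender address verification: Postfix itself probes the sender domain's MX from this very host before accepting the message, and caches the result. It checks slightly more than asked for (that the specific sender address is accepted, not just the connection), and callouts add latency and are occasionally penalised by large providers, so this is a sketch rather than a recommendation:

        # Enable sender address verification, late in the restriction list:
        postconf -e 'smtpd_sender_restrictions = permit_mynetworks, reject_unknown_sender_domain, reject_unverified_sender'
        # Reject outright instead of the default temporary 450:
        postconf -e 'unverified_sender_reject_code = 550'
        # Cache probe results so each sender is only tested occasionally:
        postconf -e 'address_verify_map = btree:$data_directory/verify_cache'
        postfix reload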

    Read the article

  • Windows VPN for remote site connection drawbacks

    - by Damo
    I'm looking for some thoughts on a particular way of setting up an estate of machines. We have a requirement to install machines into unmanned, remote locations. These machines will log in automatically and perform tasks controlled from a central server. In order to manage patching, AV, updates, etc., I want these machines to be joined to a dedicated domain for this estate. Some of the locations will only have 3G connectivity (via other hardware); others will be located on customer premises in internal networks. Our central server and the domain controller will be on a public WAN. I see two ways of facilitating this:

    1) Install a router at each location and have a site-to-site VPN between the remote device and the data centre where the servers are located.

    2) Have the remote machine dial up and authenticate via a Windows VPN connection to the DC via RAS.

    Option one is more costly to set up and has a higher operational cost, but it offers better diagnostics if the remote PC goes down. Option two works well but is solely dependent on the VPN connection being made before any communication can reach the remote machine. In a simple test, I got a Windows 7 machine to dial a VPN prior to authentication to the domain, then automatically log in to the machine using domain credentials. If the VPN connection drops, it redials, and I can also create a scheduled task to reconnect every hour in case of other issues. I'd like to know why (if at all) operating a remote network of devices located in various out-of-band locations in this way is a bad idea. Consider 300-400 remote machines, all at different sites. I'd rather have 400 VPN connections to a 2008 server than 400 routers, but I'd like to hear other opinions on this.

    Read the article
