Search Results

Search found 1694 results on 68 pages for 'communicate'.


  • What are different ways to reduce latency between a server and a web application? [closed]

    - by jjoensuu
    This is a question about a web application that provides SOAP web services. For the sake of this question, the web app is hosted on SERVER B, located in California. We have an automated, scheduled process running on SERVER A, located in New York. This process is supposed to send SOAP messages to SERVER B every so often, but it typically dies soon after starting. The vendor has now told us that the process dies because of the latency between SERVER A and SERVER B. The data traffic is routed through many different public networks; there is no dedicated line between SERVER A and SERVER B.

    As a result, I have been asked to look into ways to reduce the latency between SERVER A and SERVER B. So I wanted to ask: what are the different ways to reduce latency in a situation like this? For example, would it help to switch from HTTPS to some other secure protocol? (The thought here is that perhaps some other alternative would require fewer handshakes than HTTPS.) Or would a VPN help? If a VPN would reduce the latency, how would it do that?

    NOTE: I am not looking for an explicit answer that would work in my specific situation. I am just looking for a simple list of technologies that could be used for this. I will still have to evaluate the technologies and discuss them internally with others, so the list would just be a starting point. Here I am assuming that there exist very few ways to reduce latency between two servers that communicate across public networks using HTTPS. Feel free to correct me if this assumption is wrong, and please ask if there is a need for specific information.

    NOTE 2: A list of technologies is a specific answer to the question I stated in the title.

    NOTE 3: It's rather dumb to close this question when it is, after all, about me looking for information, and furthermore this information can clearly be useful for others. Anyway, luckily there are other sites where I can ask around. StackExchange seems too attached to its own philosophical principles. Many thanks
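    One quick way to see how much of the delay is TLS handshake overhead versus raw network round trips is to time the phases of a single HTTPS request with curl. This is a minimal sketch, assuming curl is available on SERVER A and that the URL below is a placeholder for the real SOAP endpoint:

      #!/bin/bash
      # Time the phases of one HTTPS request from SERVER A to SERVER B.
      # The URL is hypothetical; substitute the real SOAP endpoint.
      curl -o /dev/null -s -w \
        'dns=%{time_namelookup}s tcp=%{time_connect}s tls=%{time_appconnect}s total=%{time_total}s\n' \
        https://serverb.example.com/soap/endpoint

      # If tls (time_appconnect) dominates, reusing one persistent
      # connection for many SOAP calls (HTTP keep-alive) avoids paying
      # the TCP+TLS setup cost on every message.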

    Read the article

  • How can I cleanly and elegantly handle data and dependencies between classes?

    - by Neophyte
    I'm working on a 2D top-down game in SFML 2, and I need to find an elegant way in which everything will work and fit together. Allow me to explain.

    I have a number of classes that inherit from an abstract base that provides a draw method and an update method to all the classes. In the game loop, I call update and then draw on each class; I imagine this is a pretty common approach. I have classes for tiles, collisions, the player, and a resource manager that contains all the tiles/images/textures. Due to the way input works in SFML, I decided to have each class handle input (if required) in its update call.

    Up until now I have been passing in dependencies as needed. For example, in the player class, when a movement key is pressed, I call a method on the collision class to check whether the position the player wants to move to would be a collision, and only move the player if there is no collision. This works fine for the most part, but I believe it can be done better; I'm just not sure how.

    I now have more complex things I need to implement. For example, a player is able to walk up to an object on the ground and press a key to pick it up/loot it, and it will then show up in inventory. This means that a few things need to happen:

      1. Check if the player is in range of a lootable item on keypress; otherwise do not proceed.
      2. Find the item.
      3. Update the sprite texture on the item from its default texture to a "looted" texture.
      4. Update the collision for the item: it might have changed shape or been removed completely.
      5. Update the inventory with the added item.

    How do I make everything communicate? With my current system I will end up with my classes going out of scope and method calls to each other all over the place. I could tie up all the classes in one big manager and give each one a reference to the parent manager class, but this seems only slightly better. Any help/advice would be greatly appreciated! If anything is unclear, I'm happy to expand on things.

    Read the article

  • Cloud Computing Business Benefits

    - by workflowman
    If you have been living under a rock for the past year, you may not have heard about cloud computing. Cloud computing is a loose term that describes anything hosted in data centers and accessed via the internet. It is normally associated with developers who draw clouds in diagrams indicating where services live or how systems communicate with each other. Cloud computing also incorporates such well-known trends as Web 2.0 and Software as a Service (SaaS), and more recently Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Its aim is to change the way we compute, moving from traditional desktop and on-premises servers to services and resources that are hosted in the cloud.

    Benefits of Cloud Computing

    There are clearly benefits in building applications using cloud computing, some of which are listed here:

      Zero up-front investment: Delivering a large-scale system costs a fortune in both time and money. Often IT departments are split into hardware/network and software services. The hardware team provisions servers and so forth according to the requirements of the software team. Often the hardware team has a separate budget that requires approval. Although hardware and software management are two separate disciplines, sometimes developers are given the task of estimating CPU cycles, disk space, and so forth, which ends up in underutilized servers.

      Usage-based costing: You pay for what you use, no more, no less, because you never actually own the server. This is similar to car leasing, where in the long run you get a new car every three years and maintenance is never a worry.

      Potential for shrinking the processing time: If processes are split over multiple machines, parallel processing is performed, which decreases processing time.

      More office space: Walk into most offices, and you are guaranteed to find a medium-sized room dedicated to servers.

      Efficient resource utilization: Resource utilization is handled by a centralized cloud administrator who is in charge of deciding exactly the right amount of resources for a system. This takes the task away from local administrators, who would otherwise have to monitor these servers regularly.

      Just-in-time infrastructure: If your system is a success and needs to scale to meet demand, provisioning can cause further time delays or a slow-performing service. Cloud computing solves this because you can add more resources at any time.

      Lower environmental impact: If servers are centralized, an environmental initiative is more likely to succeed. For example, if servers are placed in sunny or windy parts of the world, why not use those resources to power them?

      Lower costs: Unfortunately, this is one point that administrators will not like. If you have people administrating your e-mail server and network, along with support staff doing other tasks that could be cloud-based, this workforce can be reduced. This saves costs, though it also reduces jobs.

    Read the article

  • Spotlight on an office - Moscow

    - by Maria Sandu
    Probably the most famous place in Moscow, after Red Square, is the city centre. Here you can find beautiful buildings that seem to touch the sky, located on the banks of the river. In one of these high towers you can find the Oracle offices, friendly and modern. The stunning view will capture your attention for a couple of minutes, and then you can enjoy a delicious coffee and take a seat at your desk, starting a new day.

    My name is Dmitry, and I can tell you that we enjoy every minute spent in the office, and that's because of the pleasant atmosphere. As soon as you enter the offices, the friendly environment will make you feel more relaxed. Even though the space is split between the different departments, we interact and communicate a lot. We take our cup of coffee or tea together and discuss our achievements and all sorts of subjects in the kitchen or in the open space. One of my favorite parts is the festive events, when we celebrate with cakes and goodies. Any birthday or new arrival is a good reason for a tea party! We also have some work-related traditions that help us as employees. One of them is the monthly Tech Hour, when experts from the pre-sales team discuss technical topics and the most recent innovations within the company.

    Lunch is another good opportunity to interact and chat. We have a variety of options, such as the two kitchens or the vast number of restaurants where you can order anything you want. As we are right in the centre of Moscow, you can choose between sushi, Italian pasta, and all sorts of food. We usually go to lunch with our colleagues. If you care about your health, I have very good news for you: nearby there are two first-class fitness centres with swimming pools, yoga, and various sport classes that you can attend. My suggestion would be to either start or end your day with a visit to the swimming pool for a well-deserved hour of relaxation.

    As I mentioned before, we're right in the heart of Moscow, so after work you can spend some time in the large shopping centres, where you can choose between many different entertainment options. We often go bowling or to the cinema. I hope I have given you a glimpse into working life at the Oracle offices in Moscow, a really great and pleasant place to work, so follow us on http://campus.oracle.com for our latest vacancies and internships.

    Read the article

  • How to chain actions/animations together and delay their execution?

    - by codinghands
    I'm trying to build a simple game with a number of screens - 'TitleScreen', 'LoadingScreen', 'PlayScreen', 'HighScoreScreen' - each of which has its own draw and update logic methods, sprites, other useful fields, etc. This works well for me as a game dev beginner, and it runs.

    However, on my 'PlayScreen' I want to run some animations before the player gets control - dropping in some artwork, playing some sound effects, generally prettifying things a little. I'm not sure what the best way is to chain animations / sound effects / other timed general events. I could make an intermediary screen, 'PrePlayScreen', which simply has all of this hardcoded, like so:

      Update()
      {
          Animation anim1 = new Animation(.....);
          Animation anim2 = new Animation(.....);
          anim1.Run();
          if (anim1.State == AnimationState.Complete)
              anim2.Run();
          if (anim2.State == AnimationState.Complete)
              // Load 'PlayScreen' screen
      }

    But this doesn't seem so great - surely there must be a better way? I then thought, 'Hey - an AnimationManager! That'd be awesome!' But then that creeping OOP panic set in as I thought about it some more. If I create the Animation in my Screen, then add it to the AnimationManager (which may or may not be a GameComponent hooked up to Update/Draw), how can I get 'back' to it, to signal commands like start / end / repeat? I'd still need to keep a reference to the object in my Screen so that I could still communicate with it once it's buried in the bosom of a List in my AnimationManager. This seems bad.

    I've also tried using events: call 'Update' on all the animations in the PlayScreen update loop, but crucially all of the animations have a bool flag ('Active') which determines whether they should begin. The first animation has this set to 'true', all others 'false'. On completion, the first animation raises an event, which sets animation 2's bool flag to true (so it then runs). Once animation 2 is complete, another 'anim complete' event is raised, and the screen state changes.

    Considering the game I'm making is basically as simple as it gets, I know I'm overthinking this... it's just that the paradigm shift from web to game development is making me break out in a serious case of the stupids.

    Read the article

  • IPv6, isn't it just a few extra bits?

    - by rclewis
    It's always an interesting task to try to explain what you do to family and friends. I have described IPv6 as the "Next Generation Internet" or "Second Internet," but the hollow expressions on my kids' faces scream for the instant relief of the latest video game. Never one to give up easily, I have formulated a new example - the Post Office...

    Like the Post Office, the Internet delivers mail and packages based on addresses. As the number of residences, businesses, and delivery locations increased, the 5-digit ZIP Code (Washington, DC 20005) was expanded to ZIP+4, allowing for more precise delivery points (Postmaster General, Washington, DC 20260-3100). Ah, if only computers were as simple. IPv6 isn't an add-on or expansion of the existing IPv4 addressing; it is a new addressing model which will allow the internet to grow from a single computer in the basement of a university or your parents' kitchen table to support the multitude of smartphones, smart TVs, tablets, DVRs, and disc players, all clamoring to connect for information.

    Unfortunately, there are only a finite number of IPv4 public addresses left, and those are being consumed at an ever increasing rate. Few people could have predicted the explosive growth of the internet or the shortage of IPv4 addresses we now face - but there is a "Plan B," and that is the vastly larger address space of IPv6. Many in the industry have labeled this a "business continuity" problem, when in fact most companies will be able to continue conducting business once they run out of existing IPv4 addresses. The problem is really a customer continuity problem: how will businesses communicate with existing customers, and reach new customers online whose only option is to adopt IPv6, once IPv4 is depleted? Perhaps a first step is publishing a blog that is also accessible via IPv6 - it's just a few extra bits.

    Join us for the Oracle OpenWorld 2012 session: Navigating IPv6 @ Oracle, Thursday, Oct. 4th, 2:15PM - 3:15PM, Palace Hotel - Concert. Learn more about IPv6 technologies at Oracle.

    Read the article

  • Watchguard SSL Certificate problems

    - by Bill Best
    We recently purchased a Watchguard XTM 510. The hope is to replace our ISA 2006 proxy with this UTM product, but we are having some issues with secured sites in our test setup.

    Currently we are still running traffic through the ISA server, and I have the Watchguard also set up to be connected to the network. Where we run into problems is when I configure ISA to forward an HTTPS site's traffic through the XTM: I get a "certificate could not be validated" error. I think I've narrowed it down to two possibilities.

    One, the certificate needs to be installed on the XTM. I'm not 100% sure this is the case, as I believe the XTM should be acting strictly as a proxy and forwarding all the traffic through, no questions asked. Either way, if I try to import a certificate to the XTM, I always get a "certificate validation failed" error message. These are generally pfx files converted to pem.

    Two, the XTM CA certificate needs to be installed on the ISA server so that the two can communicate. I have done this, but it didn't seem to do anything.

    I believe this should be working and was hoping someone has struggled through this before.
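    For what it's worth, when an appliance rejects a converted certificate, it is often because the pem file is missing the private key or the intermediate chain. A minimal conversion sketch with openssl, using hypothetical filenames:

      # Extract the certificate (plus any CA chain) from the pfx:
      openssl pkcs12 -in export.pfx -out cert.pem -clcerts -nokeys

      # Extract the unencrypted private key:
      openssl pkcs12 -in export.pfx -out key.pem -nocerts -nodes

      # Some devices expect the cert and key concatenated in one pem:
      cat cert.pem key.pem > combined.pem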

    Read the article

  • How to use Salt Stack with minions all behind NAT (not publicly accessible, default salt ports not open)?

    - by MountainX
    Can Salt Stack minions communicate with the salt master from behind NAT/firewalls, using standard ports that would be open by default in all consumer NAT routers (and without the minions having a public DNS record or static IP)?

    I'm working my way through my first salt tutorial, and this is where I'm stuck. I am able to configure iptables on the Ubuntu salt-master, but I have no control over the routers/NAT that the minions will sit behind. So far I tried these settings:

      # /etc/salt/master:
      publish_port: 465
      ret_port: 443

      # /etc/salt/minion:
      master_port: 465

    That did not work.

    Background: I have a custom-developed application presently running on about 40 Kubuntu laptops (with more planned). Every few months I have to update the application. (Often this just amounts to replacing a .jar file, which requires root permissions.) I also have to run Ubuntu updates and a few other minor things. I've been doing it manually, one by one, using TeamViewer to log into each client. I would like to dramatically improve this process. The two options I'm aware of are:

      1. Use reverse ssh tunnels and bash scripts. I tested this and it works, but I don't get any of the reporting, etc., I would get with Salt Stack.
      2. Use Salt Stack (or a similar management tool). But I need a really simple tool; I can't invest any time in a big learning curve.

    I looked at Puppet and a bunch of related tools. The only one I found that looked simple enough for me (so far) was Salt Stack. But I'm stuck now because my minion can't reach the salt-master, as stated above. I appreciate suggestions.
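    Worth noting: by default, salt minions initiate outbound TCP connections to the master on ports 4505 (publish) and 4506 (return), so NAT on the minion side is usually not a problem as long as outbound traffic on those ports is allowed. A rough sanity-check sketch from a minion, where salt.example.com is a placeholder for the real master (and hedging that, if custom ports are used, the minion-side option matching publish_port may also need to be set):

      # From a minion behind NAT, verify the master's ports are
      # reachable outbound:
      nc -zv salt.example.com 4505 4506

      # If custom ports are used, both must match on both ends; in
      # /etc/salt/minion that means master_port (the ret_port) and,
      # I believe, publish_port as well:
      #   master_port: 443
      #   publish_port: 465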

    Read the article

  • SQL Server 2005 Merge Replication to SQL Server CE 3.5

    - by user33067
    Hi,

    In my organization, we have a SQL Server 2005 database server (DBServer). Users of an application will normally be connected to DBServer but, occasionally, would like to disconnect and continue their work on a laptop using SQL Server Compact Edition 3.5 (SQLCE). Because of this, we have been looking into using merge replication between DBServer and SQLCE.

    From what I have read about this process, IIS must be installed on "the server"... yet I have found no indication of whether this means DBServer or SQLCE. I had assumed the documentation was referring to DBServer and proposed this to our networking staff. That idea was quickly put to rest, as it is not our policy to install IIS on an internal server. This is where our SQL Server 2005 web server (WebServer) entered the picture: the idea being that IIS would be installed on WebServer and would be the conduit for DBServer and SQLCE to communicate. This sounded like a good idea at first, until I started looking for documentation on this type of setup. Everything I have been able to find deals with a DBServer -- SQLCE -- DBServer setup... nothing on DBServer -- WebServer -- SQLCE -- WebServer -- DBServer.

    Questions:

      1. Is going with a 3-server setup ideal?
      2. Does anyone have documentation on this type of setup?
      3. Does IIS even need to be running on one of the big servers, or can it just run off the laptop with SQLCE on it? (I'd really like this option ;))

    Read the article

  • Exchange 2003 SP2 and Windows Server 2008 R2 Domain Controllers

    - by Brian
    I'm looking at adding two Windows Server 2008 R2 domain controllers into our Windows Server 2003 domain to support our Exchange 2003 SP2 server and replace a retiring Windows Server 2003 server. Our domain and forest functional levels are currently Windows Server 2003, which supports domain controllers running Windows Server 2008 R2, Windows Server 2008, and Windows Server 2003, according to the "Appendix of Functional Level Features" on TechNet. So there should not be an issue, other than running adprep /forestprep and adprep /domain... right!?

    But according to the Exchange Server Supportability Matrix, Windows Server 2008 R2 Active Directory servers are not supported as global catalog servers or domain controllers in an Exchange 2003 SP2 environment!!!??? This was a shock to me... How can Windows Server 2008 R2 be a DC for a Windows Server 2003 domain and forest, but not communicate with an Exchange 2003 SP2 server?

    Hopefully, I'm not the first to see this issue (or maybe I am), but I know a lot of Exchange 2003 admins will not be happy if there is no workaround... or is Microsoft trying to push everyone automatically to Exchange 2010?

    Read the article

  • Linking Linux MIT Kerberos with a Windows 2003 Active Directory

    - by Beerdude26
    Greetings,

    I was wondering how one might link a Linux MIT Kerberos with a Windows 2003 Active Directory to achieve the following:

      1. A user, [email protected], attempts to log in at an Apache website, which runs on the same server as the Linux MIT Kerberos.
      2. The Apache module first asks the local Linux MIT Kerberos if it knows a user by that name or realm.
      3. The MIT Kerberos finds out it isn't responsible for that realm, and forwards the request to the Windows 2003 Active Directory.
      4. The Windows 2003 Active Directory replies positively and gives this information to the Linux MIT Kerberos, which in turn tells the Apache module, which grants the user access to its files.

    Here is an image of the situation: http://img179.imageshack.us/img179/5092/linux2k3.png (I'm not allowed to embed images just yet.)

    The documentation I have read concerning this issue often differs from this problem: some discuss linking up a MIT Kerberos with an Active Directory to gain access to resources on the Active Directory server, while others use the link to authenticate Windows users to the MIT Kerberos through the Windows 2003 Active Directory. (My problem is the other way around.)

    So what my question boils down to is this: is it possible to have a Linux MIT Kerberos server pass through requests for an Active Directory realm, and then have it receive the reply and give it to the requesting service? (Although it's not a problem if the requesting service and the Windows 2003 Active Directory communicate directly.)

    Suggestions and constructive criticism are greatly appreciated. :)
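    For reference, the usual way to let a service on the MIT side resolve principals from an AD realm is to map the AD domain to its realm in the client's krb5.conf, so the AD KDC is contacted directly for that realm. A minimal sketch, with hypothetical realm and host names (EXAMPLE.LOCAL for the AD domain, ad-dc.example.local for the domain controller); merge the stanzas by hand if those sections already exist:

      # Hypothetical stanzas for the web server's /etc/krb5.conf:
      cat >> /etc/krb5.conf <<'EOF'
      [realms]
          EXAMPLE.LOCAL = {
              kdc = ad-dc.example.local
              admin_server = ad-dc.example.local
          }

      [domain_realm]
          .example.local = EXAMPLE.LOCAL
          example.local = EXAMPLE.LOCAL
      EOF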

    Read the article

  • TeamCity EC2 Integration via ISA Server

    - by Tim Long
    I have a TeamCity server which is actually installed on SBS 2003 Premium with ISA Server (firewall/proxy) installed. My ADSL connection has multiple IP addresses, which all resolve directly to my SBS external NIC. The NIC is therefore multi-homed, and I have allocated one of the IP addresses specifically to TeamCity. In ISA, I've created an access rule to allow the traffic in. I can access my TeamCity server externally and view the web interface; that all works fine.

    I want to use the Amazon EC2 integration in TeamCity to launch build agents 'in the cloud'. The problem I am having is that when the agent starts, it sees the server and registers, then just sits there waiting. On the server side, the agent appears as 'disconnected'. Examining the settings, the agent's IP address appears to be that of the external NIC. What I think might be happening is that the traffic is undergoing Network Address Translation (NAT), so that TeamCity always thinks the agent is locally installed and therefore can't communicate with the actual remote agent. This seems to happen even though I have a permanent static IP address dedicated to TeamCity.

    So, the question is this: how can I make traffic to a specific IP address pass through the ISA server un-NATted?

    Read the article

  • Static routing on a TP-Link TL-WR1043ND

    - by igor
    My home network is built around two TP-Link TL-WR1043ND routers (a network diagram was attached to the original post). The basement router handles all devices in the house that are connected via cable, handing out addresses for the 10.89.49.0/24 network via DHCP. Wireless doesn't really work from the basement, as the signal is too weak, so I have disabled it. To provide WiFi, I have added a second (identical) router downstairs. On the WAN side it is assigned the 10.89.49.101 IP address from the basement router, and on its LAN it provides the 10.89.7.0/24 network. Basic internet access works flawlessly from any device this way.

    I am now facing the problem that I am not able to communicate (e.g. SSH) between all devices, wired or wireless. I am able to connect from a wireless device to a wired device, for example SSH-ing from 10.89.7.X to 10.89.49.Y, but it doesn't work the other way round - despite the fact that I have added a static route to the 10.89.7.0/24 network on the basement router (screenshot omitted).

    Does anybody have any idea how to solve this? Both routers have already been upgraded to the most recent firmware from TP-Link.com (Build 110429), to no avail.

    Note: I would like to stick with the official firmware, switching to something like DD-WRT or OpenWrt only as a last resort.

    Read the article

  • How to make sysctl network bridge settings persist after a reboot?

    - by user183394
    I am setting up a notebook for software demo purposes. The machine has 8GB RAM, a Core i7 Intel CPU, a 128GB SSD, and runs Ubuntu 12.04 LTS 64-bit. The notebook is used as a KVM host and runs a few KVM guests, all of which use the virbr0 default bridge. To enable them to communicate with each other using multicast, I added the following to the host's /etc/sysctl.conf:

      net.bridge.bridge-nf-call-ip6tables = 0
      net.bridge.bridge-nf-call-iptables = 0
      net.bridge.bridge-nf-call-arptables = 0

    Afterwards, following man sysctl(8), I issued the following:

      sudo /sbin/sysctl -p /etc/sysctl.conf

    My understanding is that this should make these settings persist over reboots. I tested it, and was surprised to find the following after a reboot:

      root@sdn1:/proc/sys/net/bridge# more *tables
      ::::::::::::::
      bridge-nf-call-arptables
      ::::::::::::::
      1
      ::::::::::::::
      bridge-nf-call-ip6tables
      ::::::::::::::
      1
      ::::::::::::::
      bridge-nf-call-iptables
      ::::::::::::::
      1

    All defaults are coming back! Yes, I can use some kludgy workarounds, such as putting /sbin/sysctl -p /etc/sysctl.conf into the host's /etc/rc.local, but I would rather "do it right". Did I misunderstand the man page, or is there something that I missed? Thanks for any hints. -- Zack
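    A likely explanation (an assumption, but a common one with these keys): the net.bridge.* sysctls only exist once the kernel bridge module is loaded, and at boot sysctl.conf is applied before libvirt brings up virbr0 and pulls the module in, so the writes silently fail and the module defaults win. A hedged sketch of how to check and work around it:

      # The keys are absent until the bridge module is loaded:
      lsmod | grep bridge
      sysctl net.bridge.bridge-nf-call-iptables   # errors if not loaded

      # Load the module early so sysctl.conf applies at boot
      # (on Ubuntu 12.04, modules listed in /etc/modules load first):
      echo bridge >> /etc/modules

      # Then re-apply and verify:
      sysctl -p /etc/sysctl.conf
      sysctl net.bridge.bridge-nf-call-iptables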

    Read the article

  • Performance variation

    - by Ree
    During my time spent working with multiple machines, I have noticed that the performance of the same machine doing the same tasks in the same order differs, and sometimes the difference is big enough to be noticeable. This applies to all the machines I've owned and/or maintained (old and modern). Some examples (many of which you may have noticed yourself) that are sometimes completed in different time frames:

      - POST
      - OS installation
      - Hardware tests and operations (usually executed within a customized OS such as one of the many DOS variants), HDD tests and "low level" formats
      - Software installation or other tasks (such as benchmarks) within a general-purpose OS (Windows, Linux, etc.)

    I can imagine this is caused by the fact that a machine is built from many components that have to communicate as a whole, and since the mechanical and electronic parts aren't perfect, overhead occurs. In the last example, I assume OS complexity and multiple concurrently running processes have some additional effect as well. However, I'm wondering whether this hardware imperfection and overhead is really high enough to be humanly noticeable? Maybe there are other factors that are as influential or even more so? So, in short: why?

    To emphasize: the difference is noticeable on the same machine performing the same tasks, and this applies to ANY machine in my experience. I'm not comparing machine-to-machine performance.

    Read the article

  • How to know if your computer is hit by the DNSChanger virus?

    - by kira
    The Federal Bureau of Investigation (FBI) is in the final stage of its Operation Ghost Click, which strikes against the menace of the DNSChanger virus and trojan. PCs unknowingly running the DNSChanger malware are in danger of going offline this coming Monday (July 9), when the FBI plans to pull down the online servers that communicate with the virus on host computers.

    After gaining access to a host PC, the DNSChanger virus tries to modify the DNS (Domain Name Server) settings, which are essential for Internet access, to send traffic to malicious servers. These poisoned web addresses in turn point traffic generated through infected PCs to fake or unsafe websites, most of them running online scams. There are also reports that the DNSChanger virus acts as a trojan, allowing the perpetrators of the attack to gain access to infected PCs. Google issued a general advisory for netizens in May earlier this year to detect and remove DNSChanger from infected PCs. According to our report, some 5 lakh (500,000) PCs were still infected by the DNSChanger virus in May 2012.

    The first report of the DNSChanger virus and its affiliation with an international group of hackers came to light towards the end of last year, and the FBI has been chasing them down ever since. The group behind the DNSChanger virus is estimated to have infected close to 4 million PCs around the world in 2011, until the FBI shut them down in November. In the last stage of Operation Ghost Click, the FBI plans to pull the plug and bring down the temporary rogue DNS servers on Monday, July 9, according to an official announcement. As a result, PCs still infected by the DNSChanger virus will be unable to access the Internet.

    How do you know if your PC has the DNSChanger virus? Don't worry: Google has explained the attack and the tools to remove the malware on its official blog. Trend Micro also has extensive step-by-step instructions to check if your Windows PC or Mac is infected by the virus. The article is found at http://www.thinkdigit.com/Internet/Google-warns-users-about-DNSChanger-malware_9665.html

    How do I check if my computer is one of those affected?
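    Since the malware works by rewriting the machine's DNS server settings, one quick check is to look at which resolvers the machine is actually using and compare them against the rogue ranges published by the FBI (85.255.112.0/20 was one of them, if I recall correctly). A rough sketch for a Linux machine:

      # Show the resolvers this machine is configured to use:
      grep nameserver /etc/resolv.conf

      # If a nameserver falls inside a published rogue range such as
      # 85.255.112.0 - 85.255.127.255, the machine is likely infected.
      # On Windows, the rough equivalent is: ipconfig /all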

    Read the article

  • SQL Server 2008 cluster freezing

    - by Ed Leighton-Dick
    We have run into a strange situation in which a SQL Server 2008 single-node cluster hangs. As background, we are rebuilding a Windows Server 2003/SQL Server 2005 two-node cluster using Windows Server 2008 and SQL Server 2008. Here's the timeline:

      1. Evicted the passive node (server B) from the Windows 2003/SQL 2005 cluster. The active node now functions as a single-node cluster with no problems.
      2. Wiped server B's disks and installed Windows 2008 and SQL Server 2008 as a single-node cluster. Since we do not want the two clusters to communicate yet, we left the cluster's private network "heartbeat" adapter unconfigured. The cluster comes up and functions normally.
      3. Moved all databases to the new cluster. The cluster continues to function normally.
      4. Turned off server A (old cluster) in preparation for rebuilding it as the second node of the new cluster. The SQL Server instance on server B (new cluster) locks up, even though it should have no knowledge of or interaction with server A.
      5. Restarted server A. The SQL Server instance on server B (new cluster) immediately begins working again.

    Things we have tried:

      - The new cluster's name responds to ping and NETBIOS requests, even while the SQL Server is hung.
      - We have confirmed that no IP address is assigned to the old heartbeat adapter, and it is not pulling an IP address from DHCP. Disabling the heartbeat's network card has the same effect.
      - No errors were generated in any logs - Windows or SQL.

    When the error first occurred, it sat in the hung state for quite some time (well over 10 minutes) before anyone figured out what was going on. This would seem to eliminate any sort of normal cluster timeout in which it would have been searching for the other node (even if one had been configured).

    Server B is running Windows 2008 SP2, fully patched, and SQL Server 2008 SP1 CU7 (10.0.2775).

    Read the article

  • iPhone Remote with iTunes Library via VPN

    - by sudo work
    Alright, so I'm currently behind a network router (not under my control). The router performs NAT and somehow prevents a computer from scanning other nodes; at least, you're unable, in this instance, to locate an iTunes library. You can, however, communicate with a node's open ports if the local IP address and port are known. I haven't actually tried port scanning a specific IP using nmap or another tool yet.

    I've tried one solution that removes the contribution of the router entirely (to verify that things work without the router's influence): I set up an access point using my iPhone and tethered the computer with the library to it. From there, I was able to pair my library with the iPhone Remote application, and control of the library was normal as well. This solution is not ideal, however, because I would be actively using my 3G bandwidth with my computer and cannot afford to stay tethered to my 3G connection.

    A viable solution for me is to use a common VPN connection, which I have set up on a remote Ubuntu (Intrepid) server. Both my computer and iPhone are able to access the VPN via PPTP. The server is set up with PPTPD as the VPN server, and I'm using iptables to perform IP masquerading and traffic forwarding. However, I still cannot connect the library to the phone, even though I can see both devices on the VPN subnet (192.168.0.0/24) and SSH'ing and such works fine.

    What settings on the VPN server must I change to get this to work? Also, how can I assign static IP addresses to various PPTP clients based on MAC addresses?
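    One hedged guess at the missing piece: iTunes library discovery relies on Bonjour (mDNS, multicast to 224.0.0.251 port 5353), and multicast does not cross routed links such as PPTP tunnels by default, so pairing fails even though unicast traffic (SSH, ping) works. If that is the cause, running Avahi on the Ubuntu server as an mDNS reflector between the tunnel interfaces may help; a minimal sketch:

      # Install avahi on the VPN server (Ubuntu):
      sudo apt-get install avahi-daemon

      # In /etc/avahi/avahi-daemon.conf, under [reflector], change
      #   #enable-reflector=no
      # to
      #   enable-reflector=yes
      sudo sed -i 's/^#enable-reflector=no/enable-reflector=yes/' \
          /etc/avahi/avahi-daemon.conf

      sudo /etc/init.d/avahi-daemon restart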

    Read the article

  • Windows Server 2008 SMTP & POP3 Configuration

    - by Alex Hope O'Connor
    This is the first time I have ever configured a VPS server without 3rd-party applications such as the Plesk control panel. I have got most functionality working on all my websites, except that I am very unsure how to set up the email functionality on this new server. Basically I want standard POP3 functionality: a bunch of accounts with private mailboxes, all able to send and receive emails using their individual usernames and passwords.

    My server setup is pretty simple: it's a VPS with IIS and DNS Server running. To set up SMTP and POP3, I have tried adding the SMTP Server feature through the Server Manager console (I am very unsure of the configuration, as the guides I found did not explain it), and I then installed a 3rd-party application called Visendo SMTP Extender, as it claims to be a POP3 service providing accounting and the ability to communicate with email clients. That is as far as I have gotten, as I cannot seem to find much information on the subject.

    So can someone please tell me how to go about configuring these services in order to provide standard SMTP and POP3 functionality?

    Thanks, Alex.

    Read the article

  • Xbox360 Universal Media Remote - out of sync?

    - by Traveling Tech Guy
    Hi, I have the Universal Media Remote from Microsoft, which was included with my HD DVD package. I've been using it for over a year to watch videos/DVDs on my Xbox 360, and it saved me the hassle of navigating with the game controller (which turns itself off every 5 minutes). All of a sudden (it didn't fall or suffer any severe trauma), it does not communicate with the Xbox anymore: it is on, and I have replaced the batteries several times, but the Xbox does not respond to commands. The TV does respond - volume, channels, etc. - but I need the Xbox functionality. As far as I can see, there's no way to sync the remote with the Xbox - it lacks the small sync button that the game controllers have. I called Microsoft Support and spoke for an hour to someone who, I guess, didn't know what to do at all. Bottom line: since it's been over a year, they won't fix/replace it - I have to get a new one. Before I do (if I do), I need to know if there's anything I can do with the existing remote, and whether I would have the same problem with a new one (i.e. whether the problem is with the Xbox itself). Thanks!

    Read the article

  • How to Move SMS from iPhone to Mac?

    - by seda16
    SMS is one of the main ways we communicate with others, and you have probably saved many messages on your iPhone. There are many reasons you might need to back up your iPhone SMS to a Mac. For example, your family or friends may have sent you some important messages that you want to keep in case you delete them by accident, or you may just need to back up your SMS for other uses. So today let's talk about how to move SMS from iPhone to Mac. It is very easy if you use an app to help you; I always use the iPhone to Mac transfer from Amacsoft to copy SMS from iPhone to Mac. Let me tell you how to use it:

    Step 1: Connect iPhone to Mac. First of all, install and launch the iPhone to Mac transfer, then connect your iPhone to the Mac. The iPhone to Mac transfer will recognize your iPhone automatically, and all the information on your iPhone will be shown in the interface.

    Step 2: Select SMS and start the export. Now you can see many choices on the left; find "SMS" and click it, and all the SMS on your iPhone will be listed on the right. Select and check those you want to move, then just click "Export" at the top to start the transfer. Wait just a few minutes and the transfer will be done.

    Great! You have now finished the transfer; it's really very easy, right? I believe it won't be a problem if you want to transfer your SMS from iPhone to Mac. By the way, you can also use this Amacsoft iPhone to Mac transfer to move other kinds of files, like photos, songs, etc. If you're a Windows user, you can use the iPhone to PC transfer on the same site to move SMS from iPhone to your PC with the same steps. Good luck!

    Read the article

  • How to determine which ports are open/closed on a FIREWALL?

    - by Rahl
    It seems no one has asked this question before (most questions concern host-based firewalls). Anyone familiar with port scanning tools (e.g. nmap) knows all about SYN scanning, FIN scanning, and the like to determine open ports on a host machine. The question, though, is how you determine the open ports on a firewall itself (disregarding whether the host you're trying to connect to behind the firewall has those particular ports open or closed). This is assuming the firewall is blocking your IP connection.

    Example: we all communicate with serverfault.com through port 80 (web traffic). A scan of the host would reveal that port 80 is open. If serverfault.com is behind a firewall and still allows this traffic through, then we can assume the firewall has port 80 open also. Now let's assume the firewall is blocking you (e.g. your IP address is on the deny list, or missing from the allow list). You know port 80 has to be open (it works for the appropriate IP addresses), but when you (the disallowed IP) attempt any scanning, the firewall drops all your port scan packets (including those to port 80, which we know to be open).

    So, how might we accomplish a direct firewall scan to reveal open/closed ports on the firewall itself, while still using the disallowed IP?
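    Not an answer to the blocked-IP case, but for mapping firewall rules in general, nmap's ACK scan is the classic tool: it cannot tell open from closed, but it does distinguish filtered from unfiltered ports, which is exactly the firewall's contribution. A minimal sketch (the hostname is a placeholder, and scanning hosts you don't own may be illegal):

      # ACK scan: reports ports as "filtered" (firewall drops/rejects)
      # or "unfiltered" (packets pass the firewall), regardless of
      # whether a service is listening. Requires root.
      sudo nmap -sA -p 1-1024 --reason firewall.example.com

      # A SYN scan of the same ports for comparison:
      sudo nmap -sS -p 1-1024 --reason firewall.example.com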

    Read the article

  • Update all the servers through one virtual server using a storage area network virtual machine

    - by Mr.Calm
    I am using Ubuntu and VirtualBox by Oracle, and using this script to start nginx in a VirtualBox guest, placed inside ~/init.d:

      #!/bin/bash
      ### BEGIN INIT INFO
      # Provides:          Testinit
      # Required-Start:
      # Required-Stop:
      # Default-Start:     2 3 4 5
      # Default-Stop:      0 1 6
      # Short-Description: Start daemon at boot time
      # Description:       Enable service provided by daemon.
      ### END INIT INFO

      RETVAL=0

      start() {
          CurrentTime=$(date +%d/%m/%Y"-"%I:%M:%S)
          /usr/local/nginx/sbin/nginx
          echo "Current Time:"$CurrentTime >> /home/server/Desktop/NginxLogs.txt
          echo "!Starting nginx!" >> /home/server/Desktop/NginxLogs.txt
      }

    Like this, I want to write an auto-setup script (setup.sh) and place it in all the virtual machines on my system - for example, 8 virtual machines, with nginx installed in all of them. The problem I am facing is that when I want to change something in setup.sh, I have to go to each and every virtual machine, or communicate with each virtual machine over SSH from my main machine. I am thinking of writing another script (e.g. Update.sh); inside that script we give the path of a file which is saved and recently edited on the main machine (e.g. DummySetup.sh). As soon as I run that script, all the setup.sh files saved in the virtual machines should be updated, replacing their contents with DummySetup.sh's contents. I hope this is possible. Help would be appreciated. Thank you.
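    A minimal sketch of what Update.sh could look like, assuming the guests are reachable over SSH with key-based authentication, and that the host list, user name, and paths below are hypothetical placeholders to adapt:

      #!/bin/bash
      # Push a freshly edited DummySetup.sh to every VM's setup.sh.
      # Hosts, user, and paths are placeholders.
      SOURCE=/home/server/DummySetup.sh
      HOSTS="vm1 vm2 vm3 vm4 vm5 vm6 vm7 vm8"

      for h in $HOSTS; do
          scp "$SOURCE" "server@$h:~/init.d/setup.sh" \
              && echo "updated $h" \
              || echo "FAILED on $h"
      done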

    Read the article

  • Multiple VM environment for developing/testing

    - by Hippo
    I was asked to create a setup for automated deployment, configuration, and installation/updates of websites. A bunch of small websites will be bundled on one server; if more websites come up, a new server will be created. I decided to use chef for this task. All servers will be running Ubuntu at the same version and configuration.

    The actual question: everything needs to be tested properly before starting live deployment, so what is the best virtualisation tool to run multiple (5-10) virtual machines on an Ubuntu laptop? Requirements:

      - easy setup
      - fast (cloning/snapshots of VMs)
      - all VMs should be easily connected to the internet and should be able to communicate with each other
      - open source / free would be great

    So far I have looked into:

      - VirtualBox: more for desktop virtualisation; cloning is not possible, so every new machine needs to be installed
      - VMware Player

    Any suggestions? If there are any questions about what I am doing, please comment on this question; I will answer as soon as possible. This question is not about the actual setup; it is about a nice working environment.
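    For what it's worth, VirtualBox can clone and snapshot from the command line (VBoxManage has supported clonevm since version 4.1), which may cover the "fast" requirement; a hedged sketch, with VM names as placeholders:

      # Take a snapshot of a prepared base VM:
      VBoxManage snapshot "ubuntu-base" take "clean-install"

      # Clone the base into a new node and register it:
      VBoxManage clonevm "ubuntu-base" --name "web-node-1" --register

      # Start it headless (no GUI window):
      VBoxManage startvm "web-node-1" --type headless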

    Read the article

  • What is the correct iptables rule when NATing multiple private subnets?

    - by Jose Mendez
    I have a minimal CentOS 6.5 install acting as a router. eth0 is connected to a Cisco switch trunk port, allowing VLANs 200-213. I have several VLAN interfaces, just as this link suggests: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-networkscripts-interfaces_802.1q-vlan-tagging.html

    IPv4 forwarding is enabled, so all my network devices on any of the networks 200-213 can communicate with each other using this Linux box as their router. The problem is, I need them to access the Internet, so I added the following rule:

      iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -j SNAT --to 1.1.1.56

    1.1.1.56 is the "outside" address. This works fine - devices connected to the internal networks can ping Internet addresses - BUT they stop being able to talk to each other across subnets. So 192.168.211.55 can ping 8.8.8.8, but can't talk to 192.168.213.5. As soon as I do a service iptables restart to remove the rule, I can start talking across internal subnets again.

    What would be the correct way to set up NAT for multiple private subnets? Or maybe the correct way to set up forwarding?
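    A common fix (offered as a suggestion, not a verdict on this exact setup): scope the SNAT rule so it only rewrites traffic that actually leaves via the outside interface, or explicitly exclude internal-to-internal destinations, so inter-VLAN traffic is routed untouched. A sketch, assuming eth1 is the hypothetical outside interface:

      # Only SNAT packets exiting the outside interface:
      iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o eth1 \
          -j SNAT --to 1.1.1.56

      # Equivalent alternative: skip NAT for internal destinations:
      iptables -t nat -A POSTROUTING -s 192.168.0.0/16 \
          ! -d 192.168.0.0/16 -j SNAT --to 1.1.1.56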

    Read the article
