Search Results

Search found 5783 results on 232 pages for 'translation unit'.

  • site timing out when under heavy load

    - by naunu
    My client sends out eblasts at 8am Monday/Wednesday/Friday. Between 8:15 and 8:45 the site becomes extremely slow and many user sessions time out. My setup: MediaTemple VE, 2 GB dedicated RAM (3 GB burst), Ubuntu 9.10, apache2-mpm-worker, PHP 5.3 (FastCGI), MySQL 5. I recently tried to remedy the problem by switching from apache2-mpm-prefork to mpm-worker, but am still having the same issues. My Apache settings are:

      Timeout 100
      KeepAlive On
      MaxKeepAliveRequests 100
      <IfModule mpm_worker_module>
          StartServers 12
          MinSpareThreads 25
          MaxSpareThreads 96
          ThreadLimit 96
          ThreadsPerChild 25
          MaxClients 225
          MaxRequestsPerChild 0
      </IfModule>

    The site only gets ~10,000 page views during the 8am-9am hour, which I don't think should stress the server too badly. Could it be the PHP settings, a bandwidth cap, or has the site simply outgrown the server? Any suggestions would be very helpful - as you can see, I've given it a good go before asking for help (installed mpm-worker). Also, can anyone suggest some free load-testing software, or a tutorial on mod_status? Thank you
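
    On the load-testing question: ApacheBench (ab) ships with Apache and is free; a quick sketch against a placeholder URL (tune -n/-c to the eblast traffic you expect):

      # 1000 requests total, 50 concurrent, with keep-alive like a browser
      ab -n 1000 -c 50 -k http://www.example.com/

      # mod_status, once enabled, shows live worker/thread usage
      curl http://localhost/server-status?auto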

    Read the article

  • Cisco ASA 5505: Force NAT before IPsec?

    - by WuckaChucka
    I'm trying to route public-to-public IPs over an IPsec tunnel. However, the source IP is not "interesting" to the Cisco's IPsec engine, because it does not appear to be getting translated to the outside IP before the IPsec engine evaluates it. From WEST to EAST, my public-to-public IPsec works fine: I can make a request from 192.168.0.5:any to 200.200.200.200:80 because the Vyatta does the NAT translation before the IPsec tunnel inspects the traffic, so the remote-subnet and local-subnet match (see below). However, from EAST to WEST, I see a deny in my Cisco logging buffer for "Deny tcp src inside:192.168.1.5/59195 dst outside:100.100.100.100/80", which leads me to believe that the IPsec engine is not matching the encrypt_acl because the address has not been translated yet. Any ideas?

      WEST (Vyatta):
        inside: 192.168.0.0/24
        inside host: 192.168.0.5/24
        outside: 100.100.100.100
        IPsec local-subnet: 100.100.100.100/32
        IPsec remote-subnet: 200.200.200.200/32

      EAST (Cisco):
        inside: 192.168.1.0/24
        inside host: 192.168.1.5/24 (DNAT'ed on port 80 to outside)
        outside: 200.200.200.200
        IPsec local-subnet: 200.200.200.200/32
        IPsec remote-subnet: 100.100.100.100/32
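
    If the goal on the ASA side is for 192.168.1.5's traffic to leave as 200.200.200.200 before the crypto ACL is evaluated, policy NAT is one way to express that. A hedged sketch in 8.2-style syntax (the ACL name is made up, and whether this fixes the ordering depends on the ASA version in use):

      ! PAT traffic bound for the remote peer to the outside interface IP,
      ! so it can match the 200.200.200.200/32 -> 100.100.100.100/32 crypto ACL
      access-list NAT-TO-PEER extended permit ip 192.168.1.0 255.255.255.0 host 100.100.100.100
      nat (inside) 2 access-list NAT-TO-PEER
      global (outside) 2 interface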

    Read the article

  • How to pass custom options to configure when building a package with debuild?

    - by TestUser16418
    Short background: I'm using Debian Sid. Currently the audacity package conflicts with the pidgin package because gstreamer0.10-plugins-bad is outdated. I'm trying to rebuild it, but one of the unit tests fails because a plugin I don't need causes a segfault. I need to disable these tests, and there is a configure option for that, but I don't know how to pass it. So, how can I run configure with custom options? Either by passing them to debuild, or by editing some file in the debian directory? I have only worked with Gentoo ebuilds so far, which are extremely simple compared to the Debian control files, which I still find completely undecipherable.
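
    A sketch of the two usual mechanisms, assuming a debhelper-style debian/rules and a hypothetical --disable-tests flag (check ./configure --help in the unpacked source for the real option name):

      # debian/rules (debhelper 7+); recipe lines must be indented with a tab
      override_dh_auto_configure:
              dh_auto_configure -- --disable-tests

    If skipping the test suite entirely is acceptable, most packages honour the nocheck build option:

      DEB_BUILD_OPTIONS=nocheck debuild -us -uc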

    Read the article

  • Browsers ignoring hosts file

    - by madkris
    Just recently my browsers started to ignore my hosts file. I have Windows 7 installed, with this entry in the hosts file:

      192.168.0.5 livesite.com

    I have tried:

      - Clearing the browser cache
      - Issuing "ipconfig /flushdns" from the command line
      - Issuing "ping livesite.com" from the command line (response was "Reply from 192.168.0.5: bytes=32 time=1ms TTL=128")
      - Restarting the unit
      - Backing up the original hosts file and making a new one
      - Checking lmhosts.sam (everything is commented out)
      - Connecting directly to the modem using a cable
      - Checking \HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\DataBasePath
      - Trying it on another laptop with exactly the same specs as mine

    Then I tried:

      - Changing the entry to "127.0.0.1 livesite.com" (ping ok, browser ok)
      - Changing the entry to "192.168.0.5 livesite.com" (ping ok, browser ok but only for a second)
      - Issuing "ipconfig /flushdns" (ping ok, browser not ok)
      - Changing the entry to "127.0.0.1 livesite.com" (ping ok, browser ok)
      - Changing the entry to "192.168.0.5 livesite.com" (ping ok, browser not ok)
      - Issuing "ipconfig /flushdns" (ping ok, browser not ok)

    Any idea why it worked for a moment? Or better yet, anything I haven't tried, or some error I may have overlooked?
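
    Since ping resolves from the hosts file correctly, the OS resolver looks fine, which points at something browser-side: a configured proxy bypasses hosts-file lookups entirely, and some browsers keep their own DNS cache (Chrome's can be cleared from chrome://net-internals/#dns). Two hedged checks from a command prompt:

      :: Requests sent through a proxy are resolved by the proxy, not your hosts file
      netsh winhttp show proxy

      :: Inspect what the Windows DNS client currently has cached for the name
      ipconfig /displaydns | findstr livesite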

    Read the article

  • How can I pass extra parameters to the routeMatch object?

    - by Marcos Garcia
    I'm trying to unit test a controller, but can't figure out how to pass some extra parameters to the routeMatch object. I followed the posts from tomoram at http://devblog.x2k.co.uk/unit-testing-a-zend-framework-2-controller/ and http://devblog.x2k.co.uk/getting-the-servicemanager-into-the-test-environment-and-dependency-injection/, but when I try to dispatch a request to /album/edit/1, for instance, it throws the following exception:

      Zend\Mvc\Exception\DomainException: Url plugin requires that controller event compose a router; none found

    Here is my PHPUnit Bootstrap:

      class Bootstrap
      {
          static $serviceManager;
          static $di;

          static public function go()
          {
              include 'init_autoloader.php';
              $config = include 'config/application.config.php';

              // append some testing configuration
              $config['module_listener_options']['config_static_paths'] = array(getcwd() . '/config/test.config.php');

              // append some module-specific testing configuration
              if (file_exists(__DIR__ . '/config/test.config.php')) {
                  $moduleConfig = include __DIR__ . '/config/test.config.php';
                  array_unshift($config['module_listener_options']['config_static_paths'], $moduleConfig);
              }

              $serviceManager = Application::init($config)->getServiceManager();
              self::$serviceManager = $serviceManager;

              // Setup Di
              $di = new Di();
              $di->instanceManager()->addTypePreference('Zend\ServiceManager\ServiceLocatorInterface', 'Zend\ServiceManager\ServiceManager');
              $di->instanceManager()->addTypePreference('Zend\EventManager\EventManagerInterface', 'Zend\EventManager\EventManager');
              $di->instanceManager()->addTypePreference('Zend\EventManager\SharedEventManagerInterface', 'Zend\EventManager\SharedEventManager');
              self::$di = $di;
          }

          static public function getServiceManager()
          {
              return self::$serviceManager;
          }

          static public function getDi()
          {
              return self::$di;
          }
      }
      Bootstrap::go();

    Basically, we are creating a Zend\Mvc\Application environment. My PHPUnit_Framework_TestCase is enclosed in a custom class, which goes like this:

      abstract class ControllerTestCase extends TestCase
      {
          /**
           * The ActionController we are testing
           *
           * @var Zend\Mvc\Controller\AbstractActionController
           */
          protected $controller;

          /**
           * A request object
           *
           * @var Zend\Http\Request
           */
          protected $request;

          /**
           * A response object
           *
           * @var Zend\Http\Response
           */
          protected $response;

          /**
           * The matched route for the controller
           *
           * @var Zend\Mvc\Router\RouteMatch
           */
          protected $routeMatch;

          /**
           * An MVC event to be assigned to the controller
           *
           * @var Zend\Mvc\MvcEvent
           */
          protected $event;

          /**
           * The Controller fully qualified domain name, so each ControllerTestCase
           * can create an instance of the tested controller
           *
           * @var string
           */
          protected $controllerFQDN;

          /**
           * The route to the controller, as defined in the configuration files
           *
           * @var string
           */
          protected $controllerRoute;

          public function setup()
          {
              parent::setup();
              $di = \Bootstrap::getDi();

              // Create a Controller and set some properties
              $this->controller = $di->newInstance($this->controllerFQDN);
              $this->request = new Request();
              $this->routeMatch = new RouteMatch(array('controller' => $this->controllerRoute));
              $this->event = new MvcEvent();
              $this->event->setRouteMatch($this->routeMatch);
              $this->controller->setEvent($this->event);
              $this->controller->setServiceLocator(\Bootstrap::getServiceManager());
          }

          public function tearDown()
          {
              parent::tearDown();
              unset($this->controller);
              unset($this->request);
              unset($this->routeMatch);
              unset($this->event);
          }
      }

    And we create a Controller instance and a Request with a RouteMatch. The code for the test:

      public function testEditActionWithGetRequest()
      {
          // Dispatch the edit action
          $this->routeMatch->setParam('action', 'edit');
          $this->routeMatch->setParam('id', $album->id);
          $result = $this->controller->dispatch($this->request, $this->response);
          // rest of the code isn't executed
      }

    I'm not sure what I'm missing here. Could it be some configuration for the testing bootstrap? Or should I pass the parameters in some other way? Or am I forgetting to instantiate something?
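
    The exception itself says the controller event has no router composed, which the Url plugin needs during dispatch. One hedged fix, assuming the merged application config carries the usual 'router' key, is to build a router from that config and attach it to the MvcEvent in setup():

      // In ControllerTestCase::setup(), after creating $this->event
      $config = \Bootstrap::getServiceManager()->get('Config');
      $router = \Zend\Mvc\Router\Http\TreeRouteStack::factory($config['router']);
      $this->event->setRouter($router);

    With a real router composed, extra route parameters set via $this->routeMatch->setParam() should then survive dispatch.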

    Read the article

  • Cloud Computing - Multiple Physical Computers, One Logical Computer

    - by Koobz
    I know that you can set up multiple virtual machines per physical computer. I'm wondering if it's possible to make multiple physical computers behave as one logical unit. Fundamentally, the way I imagine it working is that you can throw 10 computers into a facility one day. You've got one client that requires the equivalent of two computers' worth of resources, and 100 others that eat up the remaining 8. As demands change, you're just reallocating logical resources: maybe the two-computer client now requires a third physical system. You just add it to the cloud, and don't worry about sharding the database or migrating data over to a new server. Can it work this way? If yes, why would anyone ever partition their database servers anymore? Just add more computing resources. You scale horizontally with the hardware, but your server appears to scale vertically. There's no need to modify your application's infrastructure to support multiple databases, etc.

    Read the article

  • Fedora 14 serial console how-to needed

    - by lamba2
    Has anyone ever got a serial console working in Fedora 14? Is it as simple as adding to grub:

      serial --unit=0 --speed=38400
      terminal --timeout=10 serial console

    and adding to the kernel lines:

      console=tty0 console=ttyS0,38400

    If so, this isn't working for me. I have agetty installed, and I'm using minicom, although I've heard you can also use "screen /dev/ttyUSB0" on the client side. The /etc/init/serial.conf file suggests it should be working, but nothing. Currently getting no joy from any of this after 2 days. Does anyone know a method that definitely works on Fedora 14? (No /etc/event.d/ needed or such.) Edit: on the client side I'm using a null modem cable and a USB-serial adaptor.
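
    Beyond the grub entries, something must spawn a getty on ttyS0, and Fedora 14 uses upstart-style jobs in /etc/init/. A hedged sketch of a dedicated job, with baud rate and terminal type assumed to match the grub lines above:

      # /etc/init/ttyS0.conf
      start on runlevel [345]
      stop on runlevel [S016]
      respawn
      exec /sbin/agetty /dev/ttyS0 38400 vt100

    Root logins over the serial line also require ttyS0 to be listed in /etc/securetty.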

    Read the article

  • SAN cache memory upgrade

    - by Scott Lundberg
    We currently have an IBM DS4300 dual-controller Fibre Channel SAN. It is a good box, but getting pretty old. It came with 256 MB of cache per controller. Recently we replaced the batteries in one of the controllers and noticed that the cache is a DDR PC2100 ECC DIMM. Of course, we are thinking about how cheap this RAM is now and whether there is any good reason we can't upgrade it. IBM used to offer a "Turbo" upgrade for this box that doubled the cache and added a bunch of software features for about 10K USD. Since that product has been end-of-lifed, I don't think we can get that upgrade, and we don't need the software features (FlashCopy, StorageCopy, etc.). Besides the obvious potential warranty issue, what issues, if any, should we expect if we attempt to put two 1 GB DIMMs in this unit? Anything else I am missing here? EDIT - memory label: Samsung CN 0433 PC2100U-25331-A1 M381L3223ETM-CB0 256MB DDR PC2100 CL2.5 ECC

    Read the article

  • PowerMock: ProcessBuilder redirectErrorStream giving NullPointerException

    - by kaustubh9
    I am using PowerMock to mock some native command invocations that use ProcessBuilder. The strange thing is that these tests pass sometimes and fail sometimes, giving an NPE. Is this a PowerMock issue or some gotcha in the program? A snippet of the class under test:

      public void method1(String jsonString, String filename) {
          try {
              JSONObject jObj = new JSONObject(jsonString);
              JSONArray jArr = jObj.getJSONArray("something");
              String cmd = "/home/y/bin/perl <perlscript>.pl <someConstant>" + " -k " + <someConstant> + " -t " + <someConstant>;
              cmd += vmArr.getJSONObject(i).getString("jsonKey");
              ProcessBuilder pb = new ProcessBuilder("bash", "-c", cmd);
              pb.redirectErrorStream(false);
              Process shell = pb.start();
              shell.waitFor();
              if (shell.exitValue() != 0) {
                  throw new RuntimeException("Error in Collecting the logs. cmd=" + cmd);
              }
              StringBuilder error = new StringBuilder();
              InputStream iError = shell.getErrorStream();
              BufferedReader bfr = new BufferedReader(new InputStreamReader(iError));
              String line = null;
              while ((line = bfr.readLine()) != null) {
                  error.append(line + "\n");
              }
              if (!error.toString().isEmpty()) {
                  LOGGER.error(error);
              }
              iError.close();
              bfr.close();
          } catch (Exception e) {
              throw new RuntimeException(e);
          }
      }

    And the unit test case is:

      @PrepareForTest({ <ClassToBeTested>.class, ProcessBuilder.class, Process.class, InputStream.class, InputStreamReader.class, BufferedReader.class })
      @Test(sequential = true)
      public class TestClass {
          @Test(groups = { "unit" })
          public void testMethod() {
              try {
                  ProcessBuilder prBuilderMock = createMock(ProcessBuilder.class);
                  Process processMock = createMock(Process.class);
                  InputStream iStreamMock = createMock(InputStream.class);
                  InputStreamReader iStrRdrMock = createMock(InputStreamReader.class);
                  BufferedReader bRdrMock = createMock(BufferedReader.class);
                  String errorStr = " Error occured";
                  String json = <jsonStringInput>;
                  String cmd = "/home/y/bin/perl <perlscript>.pl -k " + <someConstant> + " -t " + <someConstant> + " " + <jsonValue>;
                  expectNew(ProcessBuilder.class, "bash", "-c", cmd).andReturn(prBuilderMock);
                  expect(prBuilderMock.redirectErrorStream(false)).andReturn(prBuilderMock);
                  expect(prBuilderMock.start()).andReturn(processMock);
                  expect(processMock.waitFor()).andReturn(0);
                  expect(processMock.exitValue()).andReturn(0);
                  expect(processMock.getErrorStream()).andReturn(iStreamMock);
                  expectNew(InputStreamReader.class, iStreamMock).andReturn(iStrRdrMock);
                  expectNew(BufferedReader.class, iStrRdrMock).andReturn(bRdrMock);
                  expect(bRdrMock.readLine()).andReturn(errorStr);
                  expect(bRdrMock.readLine()).andReturn(null);
                  iStreamMock.close();
                  bRdrMock.close();
                  expectLastCall().once();
                  replayAll();
                  <ClassToBeTested> instance = new <ClassToBeTested>();
                  instance.method1(json, fileName);
                  verifyAll();
              } catch (Exception e) {
                  Assert.fail("failed while collecting log.", e);
              }
          }
      }

    I get an error on execution and the test case fails:

      Caused by: java.lang.NullPointerException
          at java.lang.ProcessBuilder.start(ProcessBuilder.java:438)

    Note: I do not get this error on all executions. Sometimes it passes and sometimes it fails. I am not able to understand this behavior.
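
    One thing worth checking, offered as a guess rather than a diagnosis: expectNew(ProcessBuilder.class, ...) only takes effect when PowerMock has instrumented the class that calls new ProcessBuilder(...), and under TestNG that normally means running through PowerMock's TestNG integration. If the class under test is sometimes loaded outside PowerMock's classloader, the real constructor runs and start() can throw exactly this kind of intermittent NPE. A hedged sketch (LogCollector is a hypothetical stand-in for the real class under test):

      import org.powermock.core.classloader.annotations.PrepareForTest;
      import org.powermock.modules.testng.PowerMockTestCase;
      import org.testng.annotations.Test;

      // Prepare the CALLER of `new ProcessBuilder(...)`, and extend
      // PowerMockTestCase so TestNG loads the test through PowerMock.
      @PrepareForTest(LogCollector.class)
      public class TestClassSketch extends PowerMockTestCase {

          @Test(groups = { "unit" })
          public void testMethod() throws Exception {
              // ... same expectNew()/replayAll()/verifyAll() flow as above ...
          }
      }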

    Read the article

  • samba "username map" stopped working after upgrade to 3.6

    - by Kris_R
    It was time to upgrade our group server (new HDs, problems with the old installation of DRBD, etc.). Going as usual for CentOS, I upgraded the whole system from 6.3 to 6.4. The latter came with samba 3.6, where the old one had 3.5. I transferred most of the users by copying /etc/passwd, /etc/shadow and the samba accounts with pdbedit. Homes were on an NFS drive. The mapping of unix accounts to samba accounts is located in /etc/samba/smbusers. Strangely enough, on some Windows clients there was a problem connecting to samba shares. In one case the only thing that worked was to give the unix account name instead of the Windows name. In another, it was possible to mount the network drive and open it in Windows Explorer, but other applications like Total Commander, on attempting to open this drive, gave the message "Cannot connect to z:" (and sometimes at this moment user/pass were requested). The smb.conf has the following entries:

      [global]
      security = user
      passdb backend = tdbsam
      username map = /etc/samba/smbusers
      ...

      [Kris]
      comment = Kris's Private
      path = /SMB/Users/Kris
      writeable = yes
      read only = no
      browseable = yes
      users = krisr
      printable = no
      security mask = 0777
      force security mode = 0
      directory security mask = 0777
      force directory security mode = 0
      force create mode = 0775
      force directory mode = 6775

    The smbusers file:

      # Unix_name = SMB_name1 SMB_name2 ...
      krisr = Kris

    Of course testparm runs without any errors. With samba 3.5 I was used to log output of the form "Mapped user kris to krisr". Nothing like that happens now; there is just the message "check_sam_security: Couldn't find user Kris in passdb". I read on the web that some people had problems with 3.6 and security = ADS, but those reports were not helpful in my case. I'm seriously thinking about downgrading back to samba 3.5, but before taking that step I wanted to ask if somebody knows the solution to these problems.
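
    One hedged diagnostic before downgrading: raise the debug level and try the mapped name directly from the server, to see where the lookup dies (level 3 is an arbitrary but commonly useful verbosity):

      # Connect as the Windows-side name with debugging enabled
      smbclient //localhost/Kris -U Kris -d 3

      # Or raise logging server-wide ("log level = 3" under [global]) and watch
      tail -f /var/log/samba/log.smbd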

    Read the article

  • Can I replace a broken PSU with one of a smaller size?

    - by Carson Myers
    I have a broken OEM power supply unit that is cooked. I'm browsing online for a replacement and am happy to see that they don't cost too much -- the only thing is, they all seem to have varying sizes. Is it a problem if I get a PSU that is smaller than the original one? This is going in an HP Pavilion a000, which is about five and a half years old -- I don't know if that means anything; I just thought there might be some recent standardized dimensions for PSUs or something. No idea.

    Read the article

  • Cisco router for educational purposes

    - by user39214
    Hey all. I want to buy a Cisco router to use on my home network. I'm just hoping to get a unit that is not too old and is not a SOHO model. I want to run the latest Cisco IOS just to learn how Cisco does things. I would use it to divide my network into two or three IP networks, for firewalling, etc. I'm just asking for a model name/number. Thanks.

    Read the article

  • How do we keep Active Directory resilient across multiple sites?

    - by Alistair Bell
    I handle much of the IT for a company of around 100 people, spread across about five sites worldwide. We're using Active Directory for authentication, mostly served to Linux (CentOS 5) systems via LDAP. We've been suffering through a spate of events where the IP tunnel between the two major sites goes down and the secondary domain controller at one site can't contact the primary domain controller at the other. It seems that the secondary domain controller starts denying user authentication within minutes of losing connectivity to the primary. How do we make the secondary domain controller more resilient to downtime? Is there a way for it to cache the entire directory and/or at least keep enough information locally to survive a multi-hour disconnection? (We're all in a single organizational unit if that makes any difference.) (The servers here are Windows Server 2003; don't assume that we set this up correctly. I'm a software engineer, not an IT specialist.)

    Read the article

  • Debugging problems in Visual Studio 2005 - No source code available for the current location

    - by espais
    Hi all, I've searched up and down Google for others with a similar problem, and while I can find the error, I don't think other people have the same underlying problem that I do. Basically, I had to create a project for a unit-testing environment in order to run this test suite. First, I add my original C file and compile; then a test file (C++) is generated. I then exclude my original source from the project, include this test script (which includes the original source at the top), and run. I can debug the test file fine, but when execution jumps to the original C file I get the dreaded 'no source code available for the current location' error. Both files are located in the same place, and I compiled the original file without any issue. Anybody have any thoughts about this? It's driving me crazy!

    Read the article

  • A* (A-star) implementation in AS3

    - by Bryan Hare
    Hey, I am putting together a project for a class that requires me to put AI in a top-down tactical strategy game in Flash AS3. I decided to use a node-based pathfinding approach because the game is based on a circular movement scheme: when a player moves a unit, he essentially draws a series of line segments that the unit will follow along. I am trying to put together a similar operation for the AI units in our game by creating a list of nodes to traverse to a target node; hence my use of A* (the resulting path can be used to create this line). Here is my algorithm:

      function findShortestPath(startN:node, goalN:node) {
          var openSet:Array = new Array();
          var closedSet:Array = new Array();
          var pathFound:Boolean = false;
          startN.g_score = 0;
          startN.h_score = distFunction(startN, goalN);
          startN.f_score = startN.h_score;
          startN.fromNode = null;
          openSet.push(startN);
          var i:int = 0;
          for (i = 0; i < nodeArray.length; i++) {
              for (var j:int = 0; j < nodeArray[0].length; j++) {
                  if (!nodeArray[i][j].isPathable) {
                      closedSet.push(nodeArray[i][j]);
                  }
              }
          }
          while (openSet.length != 0) {
              var cNode:node = openSet.shift();
              if (cNode == goalN) {
                  resolvePath(cNode);
                  return true;
              }
              closedSet.push(cNode);
              for (i = 0; i < cNode.dirArray.length; i++) {
                  var neighborNode:node = cNode.nodeArray[cNode.dirArray[i]];
                  if (!(closedSet.indexOf(neighborNode) == -1)) {
                      continue;
                  }
                  neighborNode.fromNode = cNode;
                  var tenativeg_score:Number = cNode.gscore + distFunction(neighborNode.fromNode, neighborNode);
                  if (openSet.indexOf(neighborNode) == -1) {
                      neighborNode.g_score = neighborNode.fromNode.g_score + distFunction(neighborNode, cNode);
                      if (cNode.dirArray[i] >= 4) {
                          neighborNode.g_score -= 4;
                      }
                      neighborNode.h_score = distFunction(neighborNode, goalN);
                      neighborNode.f_score = neighborNode.g_score + neighborNode.h_score;
                      insertIntoPQ(neighborNode, openSet);
                  } else if (tenativeg_score <= neighborNode.g_score) {
                      neighborNode.fromNode = cNode;
                      neighborNode.g_score = cNode.g_score + distFunction(neighborNode, cNode);
                      if (cNode.dirArray[i] >= 4) {
                          neighborNode.g_score -= 4;
                      }
                      neighborNode.f_score = neighborNode.g_score + neighborNode.h_score;
                      openSet.splice(openSet.indexOf(neighborNode), 1);
                      insertIntoPQ(neighborNode, openSet);
                  }
              }
          }
          trace("fail");
          return false;
      }

    Right now this function often creates paths that are not optimal, or are wholly inaccurate given the target, and this generally happens when I have nodes that are not pathable. I am not quite sure what I am doing wrong, so if someone could help me correct this I would appreciate it greatly. Some notes: my openSet is essentially a priority queue, so that's how I sort my nodes by cost. Here is that function:

      function insertIntoPQ(iNode:node, pq:Array) {
          var inserted:Boolean = true;
          var iterater:int = 0;
          while (inserted) {
              if (iterater == pq.length) {
                  pq.push(iNode);
                  inserted = false;
              } else if (pq[iterater].f_score >= iNode.f_score) {
                  pq.splice(iterater, 0, iNode);
                  inserted = false;
              }
              ++iterater;
          }
      }

    Thanks!
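
    Two things stand out in the relaxation loop above, for comparison with the textbook version: fromNode is assigned before the tentative-score comparison, so a worse path can rewrite an existing parent, and the tentative score reads cNode.gscore where every other line uses cNode.g_score. A hedged AS3-style sketch of the canonical neighbour step, reusing the question's names:

      // Canonical relaxation: the parent is only rewritten when the new
      // path is actually cheaper, and the node is re-queued afterwards.
      var tentativeG:Number = cNode.g_score + distFunction(cNode, neighborNode);
      var notInOpen:Boolean = (openSet.indexOf(neighborNode) == -1);
      if (notInOpen || tentativeG < neighborNode.g_score) {
          neighborNode.fromNode = cNode;   // parent set AFTER the test
          neighborNode.g_score = tentativeG;
          neighborNode.h_score = distFunction(neighborNode, goalN);
          neighborNode.f_score = neighborNode.g_score + neighborNode.h_score;
          if (!notInOpen) {
              openSet.splice(openSet.indexOf(neighborNode), 1);
          }
          insertIntoPQ(neighborNode, openSet);
      }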

    Read the article

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain, and have the individual servers return the files directly to the users. The following shows a simple example:

      1) The user's browser requests http://www.example.com/files/file1.zip
      2) The request goes to server A, based on the DNS A record for example.com.
      3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
      4) Server A forwards the request to server B.
      5) Server B returns file1.zip directly to the user without going through server A.

    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect, as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return", and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network; for servers outside server A's network, tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load-balancing setup: namely, server B should have server A's IP address on the loopback interface, and it should not answer any ARP requests for that IP address. My problem is how to actually achieve step 4. I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing/forwarding at the web-server level, i.e. in PHP or C#/ASP.NET. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything. Thanks.
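
    For the packet-path half of this (steps 4 and 5 within one network), Linux's IPVS implements direct server return out of the box; a hedged sketch with placeholder addresses (VIP 203.0.113.10 on the director, real server at 10.0.0.2). Note that IPVS balances per connection, not per URL, so the content-aware choice in step 3 would still need custom logic in front of it:

      # On the director (server A): direct-routing mode, round robin
      ipvsadm -A -t 203.0.113.10:80 -s rr
      ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.2:80 -g

      # On the real server (server B): hold the VIP on loopback,
      # and never answer ARP for it
      ip addr add 203.0.113.10/32 dev lo
      sysctl -w net.ipv4.conf.all.arp_ignore=1
      sysctl -w net.ipv4.conf.all.arp_announce=2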

    Read the article

  • 3CX behind UT7.1 using a callcentric.com SIP account

    - by Corey
    Has anyone had any luck getting 3CX working behind UT 7.1 with a SIP account from callcentric.com? I am willing to reset my current UT box back to defaults and start from there. I have a static public IP assigned to the external interface. My internal addressing is 192.168.76.0; my 3CX box has 192.168.76.17. Would anyone be willing to give me a step-by-step list of changes to make in UT/3CX? I currently have my UT box unplugged and have replaced it with a Linksys unit. I have port forwarding set up for:

      TCP/UDP 5060 to 192.168.76.17
      UDP 9000-9049 to 192.168.76.17

    ... and everything works great. I also have additional external IPs available, if that helps.

    Read the article

  • apt-get: Size mismatch

    - by Cédric Girard
    I created a private deb repository to distribute a piece of software and its updates to 600 Ubuntu netbooks. Each time the network is connected, my script tries to do an apt-get update. But sometimes (quite often, in fact), I get this:

      Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb  Size mismatch

    The server is an Apache 2.2, HTTPS only. There are no errors in its logs. Here is the script:

      apt-get update
      apt-get dist-upgrade --force-yes --yes

    Here is the complete output of apt-get:

      Ign https://myserver maverick Release.gpg
      Ign https://myserver/ubuntu/ maverick/main Translation-en
      Ign https://myserver maverick Release
      Ign https://myserver maverick/main i386 Packages/DiffIndex
      Ign https://myserver maverick/main i386 Packages
      Ign https://myserver maverick/main i386 Packages
      Hit https://myserver maverick/main i386 Packages
      Reading package lists...
      Reading package lists...
      Building dependency tree...
      Reading state information...
      The following packages will be upgraded:
        majdb utilitaires voosicomat
      3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      Need to get 6207kB/6273kB of archives.
      After this operation, 0B of additional disk space will be used.
      WARNING: The following packages cannot be authenticated!
        utilitaires voosicomat majdb
      Get:1 https://myserver/ubuntu/ maverick/main voosicomat all 2.0.1 [4755kB]
      Get:2 https://myserver/ubuntu/ maverick/main majdb all 1.0.17 [1452kB]
      Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb  Size mismatch
      Fetched 7091kB in 21s (324kB/s)
      E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Regards
    Cédric
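
    A size mismatch frequently points at a stale or truncated file in apt's local cache (or in an intermediate cache) rather than at the repository itself; a hedged first step for the netbook script:

      # Clear cached .debs, including /var/cache/apt/archives/partial,
      # then retry; --fix-missing tolerates already-fetched packages
      apt-get clean
      apt-get update
      apt-get dist-upgrade --force-yes --yes --fix-missing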

    Read the article

  • How do I set up a systemd service to be started by a non-root user as a user daemon?

    - by Hans
    I just finished the install and setup of systemd on my Arch Linux system (2012.09.07). I uninstalled initscripts (and removed the configuration files). What I want to do is create a service that can be started and stopped by a non-root user. The service starts a detached screen session running rtorrent. However, I want every user on the system who has enabled this service to have a particular instance started for them specifically. How would one go about doing this? I remember reading that systemd supports user instances of services, but I have been unable to find any information on how to set this up, or whether it relates to what I am looking for. The service file I have used system-wide:

      [Unit]
      Description=rTorrent

      [Service]
      Type=forking
      ExecStart=/usr/bin/screen -d -m -S rtorrent /usr/bin/rtorrent
      ExecStop=/usr/bin/killall -w -s 2 /usr/bin/rtorrent
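
    On the user-instance question: systemd runs a separate user manager per user, and a unit placed in ~/.config/systemd/user/ is controlled with systemctl --user. A minimal sketch (whether screen is still wanted once systemd supervises the process is a design choice; paths as in the unit above):

      # ~/.config/systemd/user/rtorrent.service
      [Unit]
      Description=rTorrent (per-user instance)

      [Service]
      Type=forking
      ExecStart=/usr/bin/screen -d -m -S rtorrent /usr/bin/rtorrent
      ExecStop=/usr/bin/killall -w -s 2 rtorrent

      [Install]
      WantedBy=default.target

    Each user then enables their own copy with:

      systemctl --user enable rtorrent.service
      systemctl --user start rtorrent.service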

    Read the article

  • Can't communicate using PuTTY, but can with Termite

    - by SharpHawk
    I'm trying to establish a serial connection to a peripheral from my PC's RS-232 port. Pretty simple stuff, and I've had no trouble doing it with countless peripherals before. And yet when I configure PuTTY to the right baud rate, stop bits, etc., I'll type in "*IDN?", press Enter, and the unit won't reply. After going over my settings again and again, I decided to try another terminal program, Termite. This time it worked like a charm. What puzzles me, and what I'm trying to figure out by posting this question, is why Termite would work when PuTTY did not, despite the fact that they both have the same settings. PuTTY: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Termite: http://www.compuphase.com/software_termite.htm EDIT: I have now tried Tera Term as well, and it works. So PuTTY is the odd one out.

    Read the article

  • Better to develop a Ruby project on a server or buy a faster desktop computer? (My laptop is too slow.)

    - by user33184
    I have a Linux laptop (Vostro V13) running a Celeron M chip. It is a fine laptop, but running unit tests, especially for Rails applications, is slow. I want a faster development environment but I don't want to spend too much money. So the choice is between $390 for a Linux desktop machine with a Pentium Dual-Core E5400 processor, or paying between $30 and $40 a month to Linode and trying to do development remotely on that server. Can anyone with experience developing server applications using both methods offer any advice?

    Read the article
