Search Results

Search found 11841 results on 474 pages for 'virtual hosts'.

Page 170/474

  • C++ game designing & polymorphism question

    - by Kotti
    Hi! I'm trying to implement a sort of 'just-for-me' game engine, and the problem goes as follows: Suppose I have some abstract interface for a renderable entity, e.g. IRenderable, and it's declared the following way: interface IRenderable { // (...) // Suppose that Backend is some abstract backend used // for rendering, and its implementation is not important virtual void Render(Backend& backend) = 0; }; What I'm doing right now is something like declaring different classes like class Ball : public IRenderable { virtual void Render(Backend& backend) { // Rendering implementation, that is specific for // the Ball object // (...) } }; And then everything looks fine. I can easily do something like std::vector<IRenderable*> items, push some items like new Ball() into this vector and then make a call similar to foreach (IRenderable* in items) { item->Render(backend); } Ok, I guess it is the 'polymorphic' way, but what if I want to have different types of objects in my game and an ability to manipulate their state, where every object can be manipulated via its own interface? I could do something like struct GameState { Ball ball; Bonus bonus; // (...) }; and then easily change each object's state via its own methods, like ball.Move(...) or bonus.Activate(...), where Move(...) is specific to Ball and Activate(...) to Bonus instances. But in this case I lose the opportunity to write foreach IRenderable* simply because I store these balls and bonuses as instances of their derived, not base, classes. And in this case the rendering procedure turns into a mess like ball.Render(backend); bonus.Render(backend); // (...) and it is bad because we actually lose our polymorphism this way (there is no actual need for making Render virtual, etc.). The other approach means downcasting via dynamic_cast or typeid checks to determine the type of the object you want to manipulate, and this looks even worse to me and also breaks the 'polymorphic' idea. So, my question is: is there some alternative approach to what I want to do, or can my current pattern be modified so that I would actually store IRenderable* for my game objects (so that I can invoke the virtual Render method on each of them) while preserving the ability to easily change the state of these objects? Maybe I'm doing something absolutely wrong from the beginning; if so, please point it out :) Thanks in advance!
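
    One way to keep both views of the same objects, sketched below (not from the original post; the renderables member and the example fields are illustrative assumptions), is to let the game state own the concrete Ball and Bonus members and additionally hold non-owning IRenderable* pointers to them. Type-specific manipulation goes through the concrete members, while rendering iterates the base-class pointers:

      #include <iostream>
      #include <vector>

      struct Backend {};  // stand-in for the real rendering backend

      class IRenderable {
      public:
          virtual ~IRenderable() = default;
          virtual void Render(Backend& backend) = 0;
      };

      class Ball : public IRenderable {
      public:
          void Move(float dx, float dy) { x += dx; y += dy; }  // Ball-specific interface
          void Render(Backend&) override { std::cout << "ball at " << x << ',' << y << '\n'; }
      private:
          float x = 0, y = 0;
      };

      class Bonus : public IRenderable {
      public:
          void Activate() { active = true; }                   // Bonus-specific interface
          void Render(Backend&) override { std::cout << "bonus " << (active ? "on" : "off") << '\n'; }
      private:
          bool active = false;
      };

      struct GameState {
          Ball ball;    // concrete members: their full interfaces stay available
          Bonus bonus;
          // Non-owning base-class view over the same objects (kept in sync by construction).
          std::vector<IRenderable*> renderables{ &ball, &bonus };
      };

      int main() {
          Backend backend;
          GameState state;
          state.ball.Move(1.0f, 2.0f);               // manipulate via the concrete types
          state.bonus.Activate();
          for (IRenderable* r : state.renderables)   // render polymorphically
              r->Render(backend);
      }

    Copying GameState would leave renderables pointing at the original's members, so in a real engine the vector would be rebuilt in a copy constructor or the type made non-copyable.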

    Read the article

  • Laravel with Homestead

    - by Ahmed el-Gendy
    I'm new to VirtualBox and Vagrant. I'm using the Homestead image now and everything runs well, but when I create my project named laravel on the virtual machine, I would expect to see this new folder named laravel on my own machine. I don't get anything on my machine; the synchronization is not working. NOTE: I'm using Ubuntu 14.04. This is my Homestead.yaml: ip: "192.168.10.10" memory: 2048 cpus: 1 authorize: ~/.ssh/id_rsa.pub keys: - ~/.ssh/id_rsa folders: - map: /var/projects/ to: /home/vagrant/projects/ sites: - map: homestead.app to: /home/vagrant/projects/laravel/public variables: - key: APP_ENV value: local Thanks in advance.

    Read the article

  • Running OpenMPI on Windows XP

    - by iamweird
    Hi there. I'm trying to build a simple cluster based on Windows XP. I compiled OpenMPI-1.4.2 successfully, and tools like mpicc and ompi_info work too, but I can't get my mpirun working properly. The only output I can see is Z:\orterun --hostfile z:\hosts.txt -np 2 hostname [host0:04728] Failed to initialize COM library. Error code = -2147417850 [host0:04728] [[8946,0],0] ORTE_ERROR_LOG: Error in file ..\..\openmpi-1.4.2 \orte\mca\ess\hnp\ess_hnp_module.c at line 218 -------------------------------------------------------------------------- It looks like orte_init failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during orte_init; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): orte_plm_init failed -- Returned value Error (-1) instead of ORTE_SUCCESS -------------------------------------------------------------------------- [host0:04728] [[8946,0],0] ORTE_ERROR_LOG: Error in file ..\..\openmpi-1.4.2 \orte\runtime\orte_init.c at line 132 -------------------------------------------------------------------------- It looks like orte_init failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during orte_init; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): orte_ess_set_name failed -- Returned value Error (-1) instead of ORTE_SUCCESS -------------------------------------------------------------------------- [host0:04728] [[8946,0],0] ORTE_ERROR_LOG: Error in file ..\..\..\..\openmpi -1.4.2\orte\tools\orterun\orterun.c at line 543 Where z:\hosts.txt appears as follows: host0 host1 Z: is a shared network drive available to both host0 and host1. What my problem is and how do I fix it? Upd: Ok, this problem seems to be fixed. It seems to me that WideCap driver and/or software components causes this error to appear. A "clean" machine runs local task successfully. Anyway, I still cannot run a task within at least 2 machines, I'm getting following message: Z:\mpirun --hostfile z:\hosts.txt -np 2 hostname connecting to host1 username:cluster password:******** Save Credential?(Y/N) y [host0:04728] This feature hasn't been implemented yet. [host0:04728] Could not connect to namespace cimv2 on node host1. Error code =-2147024891 -------------------------------------------------------------------------- mpirun was unable to start the specified application as it encountered an error. More information may be available above. -------------------------------------------------------------------------- I googled a little and did all the things as described here: http://www.open-mpi.org/community/lists/users/2010/03/12355.php but I'm still getting the same error. Can anyone help me? Upd2: Error code -2147024891 might be WMI error WBEM_E_INVALID_PARAMETER (0x80041008) which occures when one of the parameters passed to the WMI call is not correct. Does this mean that the problem is in OpenMPI source code itself? Or maybe it's because of wrong/outdated wincred.h and credui.lib I used while building OpenMPI from the source code?
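
    Once orterun can launch on both hosts, a minimal MPI program is handy for checking the setup beyond plain hostname (a sketch, not from the original post; compile it with mpicc/mpic++ and launch it with the same z:\hosts.txt hostfile). If -np 2 prints two different processor names, the cross-host launch path is working:

      #include <mpi.h>
      #include <cstdio>

      int main(int argc, char** argv) {
          MPI_Init(&argc, &argv);                   // start the MPI runtime

          int rank = 0, size = 0;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);     // this process's id
          MPI_Comm_size(MPI_COMM_WORLD, &size);     // total number of launched processes

          char name[MPI_MAX_PROCESSOR_NAME];
          int len = 0;
          MPI_Get_processor_name(name, &len);       // which host this rank landed on

          std::printf("rank %d of %d on %s\n", rank, size, name);

          MPI_Finalize();
          return 0;
      }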

    Read the article

  • Is there a g++ equivalent to Visual Studio's __declspec(novtable)?

    - by ceretullis
    Is there a g++ equivalent to Visual Studio's __declspec(novtable) attribute? Basically, in a pure virtual base class the __declspec(novtable) attribute can be used to suppress the creation of a vtable for the base class, as well as the vtable initialization/deinitialization code in the constructor/destructor respectively. E.g., class __declspec(novtable) PureVirtualBaseClass { public: PureVirtualBaseClass(){} virtual ~PureVirtualBaseClass() = 0; }; See Paul DiLascia's article for more info. Also see my related question.
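
    If no direct g++ counterpart turns up, a common portability shim (a sketch; the macro name NOVTABLE is an arbitrary choice) is to expand to the MSVC attribute under Visual C++ and to nothing elsewhere, so the same header builds under g++ and simply forgoes the optimization there:

      // Expands to MSVC's novtable under Visual C++, to nothing under g++/clang.
      #if defined(_MSC_VER)
        #define NOVTABLE __declspec(novtable)
      #else
        #define NOVTABLE
      #endif

      class NOVTABLE PureVirtualBaseClass {
      public:
          PureVirtualBaseClass() {}
          virtual ~PureVirtualBaseClass() = 0;
      };

      // A pure virtual destructor still needs a definition for derived classes to link.
      PureVirtualBaseClass::~PureVirtualBaseClass() {}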

    Read the article

  • Can I write a test that succeeds if and only if a statement does not compile?

    - by Billy ONeal
    I'd like to prevent clients of my class from doing something stupid. To that end, I have used the type system, and made my class only accept specific types as input. Consider the following example (not real code; I've left off things like virtual destructors for the sake of the example): class MyDataChunk { //Look Ma! Implementation! }; class Sink; class Source { virtual void Run() = 0; Sink *next_; void SetNext(Sink *next) { next_ = next; } }; class Sink { virtual void GiveMeAChunk(const MyDataChunk& data) { //Impl }; }; class In { virtual void Run { //Impl } }; class Out { }; //Note how filter and sorter have the same declaration. Concrete classes //will inherit from them. The separate names are there only to ensure //that some idiot doesn't go in and put in a filter where someone expects //a sorter, etc. class Filter : public Source, public Sink { //Drop objects from the chain-of-command pattern that don't match a particular //criterion. }; class Sorter : public Source, public Sink { //Sorts inputs to outputs. There are different sorters because someone might //want to sort by filename, size, date, etc... }; class MyClass { In i; Out o; Filter f; Sorter s; public: //Functions to set i, o, f, and s void Execute() { i.SetNext(f); f.SetNext(s); s.SetNext(o); i.Run(); } }; What I don't want is for somebody to come back later and go, "Hey, look! Sorter and Filter have the same signature. I can make a common one that does both!", thus breaking the semantic difference MyClass requires. Is this a common kind of requirement, and if so, how might I implement a test for it?
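
    One compile-time way to get exactly a "fails to build iff the statement compiles" check is the C++17 detection idiom, sketched below (not from the original post; the stand-in class bodies and the trait name are assumptions). The trait turns "is this expression well-formed?" into a constant, and a static_assert on its negation breaks the build the moment someone makes Filter and Sorter interchangeable:

      #include <type_traits>
      #include <utility>

      // Minimal stand-ins for the classes in the question.
      struct Sink   { virtual ~Sink() = default; };
      struct Source { virtual ~Source() = default; };
      struct Filter : Source, Sink {};
      struct Sorter : Source, Sink {};

      // Detection idiom: is the assignment  To* t = (From*)p;  well-formed?
      template <class From, class To, class = void>
      struct is_pointer_interchangeable : std::false_type {};

      template <class From, class To>
      struct is_pointer_interchangeable<
          From, To, std::void_t<decltype(std::declval<To*&>() = std::declval<From*>())>>
          : std::true_type {};

      // The "test": these lines stop compiling if the types ever become interchangeable,
      // e.g. if someone replaces one of them with a typedef of the other.
      static_assert(!is_pointer_interchangeable<Filter, Sorter>::value,
                    "A Filter must not be usable where a Sorter is expected");
      static_assert(!is_pointer_interchangeable<Sorter, Filter>::value,
                    "A Sorter must not be usable where a Filter is expected");

      int main() {}

    For constructs that cannot be expressed as a single expression, the usual fallback is a build-system-level negative test: a tiny source file that is expected to fail to compile, driven by something like CMake's try_compile.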

    Read the article

  • IIS hosting, asp.net mvc

    - by tomasz
    Hi, I have a site that uses Flex and calls controller actions which return JSON to the Flex app. This works fine on a dev server: the folder that holds the Flex app lives inside the web project, and in the dev environment the calls go to a hostname, i.e. www.someurl.com. In the actual live scenario this will be an intranet, so there is no hostname to call, and the Flex app seems to have trouble calling http://localhost/[virtual directory name]; it seems to totally miss the virtual directory name. I am obviously missing something basic. Any help?

    Read the article

  • python object to native c++ pointer

    - by Lodle
    I'm toying around with the idea of using Python as an embedded scripting language for a project I'm working on and have got most things working. However, I can't seem to convert a Python-extended object back into a native C++ pointer. So this is my class: class CGEGameModeBase { public: virtual void FunctionCall()=0; virtual const char* StringReturn()=0; }; class CGEPYGameMode : public CGEGameModeBase, public boost::python::wrapper<CGEPYGameMode> { public: virtual void FunctionCall() { if (override f = this->get_override("FunctionCall")) f(); } virtual const char* StringReturn() { if (override f = this->get_override("StringReturn")) return f(); return "FAILED TO CALL"; } }; Boost wrapping: BOOST_PYTHON_MODULE(GEGameMode) { class_<CGEGameModeBase, boost::noncopyable>("CGEGameModeBase", no_init); class_<CGEPYGameMode, bases<CGEGameModeBase> >("CGEPYGameMode", no_init) .def("FunctionCall", &CGEPYGameMode::FunctionCall) .def("StringReturn", &CGEPYGameMode::StringReturn); } and the Python code: import GEGameMode def Ident(): return "Alpha" def NewGamePlay(): return "NewAlpha" def NewAlpha(): import GEGameMode import GEUtil class Alpha(GEGameMode.CGEPYGameMode): def __init__(self): print "Made new Alpha!" def FunctionCall(self): GEUtil.Msg("This is function test Alpha!") def StringReturn(self): return "This is return test Alpha!" return Alpha() Now I can call the first two functions fine by doing this: const char* ident = extract< const char* >( GetLocalDict()["Ident"]() ); const char* newgameplay = extract< const char* >( GetLocalDict()["NewGamePlay"]() ); printf("Loading Script: %s\n", ident); CGEPYGameMode* m_pGameMode = extract< CGEPYGameMode* >( GetLocalDict()[newgameplay]() ); However, when I try to convert the Alpha class back to its base class (last line above) I get a Boost error: TypeError: No registered converter was able to extract a C++ pointer to type class CGEPYGameMode from this Python object of type Alpha I have done a lot of searching on the net but can't work out how to convert the Alpha object into its base class pointer. I could leave it as an object, but I'd rather have it as a pointer so that some non-Python-aware code can use it. Any ideas?

    Read the article

  • virtual function

    - by hitech
    class a { virtual void foo(void); }; class b : public a { public: virtual void foo(void) { cout << "class b"; } }; int main() { class a *b_ptr = new b; b_ptr->foo(); } Please guide me: why will b_ptr->foo() not call the foo() function of class b?
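
    For reference, a corrected sketch (an assumption about the intended code, not the poster's final version). In the snippet as posted, a::foo is private (class members are private by default) and never defined; with those points addressed, the call does dispatch to b::foo:

      #include <iostream>
      using std::cout;

      class a {
      public:                                 // foo must be accessible through an a*
          virtual ~a() {}                     // virtual destructor for deleting via a*
          virtual void foo() { cout << "class a"; }   // defined, so a's vtable links
      };

      class b : public a {
      public:
          void foo() override { cout << "class b"; }
      };

      int main() {
          a* b_ptr = new b;
          b_ptr->foo();        // prints "class b": dispatched through the vtable
          delete b_ptr;
      }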

    Read the article

  • Why does my simple hello world console app use so much memory?

    - by CodingThunder
    Looking in Process Explorer it uses: Virtual Size: 550,000K, Working Set: 28,000K. Why does my simple hello world console app use so much memory? I take it the difference between the Working Set and the Virtual Size means that difference will be paged to disk? I am running 64-bit XP. Thanks. class Program { static void Main(string[] args) { Console.WriteLine("Hello world"); Console.ReadLine(); } }

    Read the article

  • Attribute vector emptying itself

    - by ravloony
    Hello, I have two classes derived from a common class. The common class has a pure virtual function called execute(), which is implemented in both derived classes. In the base class I have an attribute which is a vector. In both execute() methods I overwrite this vector with a result. I access both classes from a vector of pointers to their objects. The problem is when I try to access the result vector from outside the objects: in one case I can get the elements (which are simply pointers), in the other I cannot, the vector is empty. Code: class E; class A { protected: vector<E*> _result; public: virtual void execute()=0; vector<E*> get_result(); }; vector<E*> A::get_result() { return _result; } class B : public A { public: virtual void execute(); }; void B::execute() { //... _result = tempVec; return; } class C : public A { public: virtual void execute(); }; void C::execute() { //different stuff to B _result = tempvec; return; } int main() { B* b = new B(); C* c = new C(); b->execute(); c->execute(); b->get_result(); //returns full vector c->get_result(); //returns empty vector!! } I have no idea what is going on here... I have tried filling _result by hand from a temp vector in the offending class, and doing the same with vector::assign(); nothing works. And the other object works perfectly. I must be missing something. Any help would be greatly appreciated.
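
    For what it's worth, a self-contained version of the posted code behaves as expected (a sketch below; the E type and the temp vectors are hypothetical stand-ins for the elided pieces, and cleanup is skipped as in the original). Both get_result() calls return a non-empty copy, so one possibility is that the difference lies in code that isn't shown, e.g. the offending execute() taking an early return before the assignment, or assigning to a local that shadows _result:

      #include <iostream>
      #include <vector>
      using std::vector;

      struct E { int id; };   // hypothetical element type

      class A {
      protected:
          vector<E*> _result;
      public:
          virtual ~A() {}
          virtual void execute() = 0;
          vector<E*> get_result() { return _result; }   // returns a copy
      };

      class B : public A {
      public:
          void execute() override {
              vector<E*> tempVec{ new E{1}, new E{2} };  // stand-in for the real work
              _result = tempVec;
          }
      };

      class C : public A {
      public:
          void execute() override {
              vector<E*> tempVec{ new E{3} };            // different stand-in work
              _result = tempVec;
          }
      };

      int main() {
          B* b = new B();
          C* c = new C();
          b->execute();
          c->execute();
          std::cout << b->get_result().size() << '\n';   // prints 2
          std::cout << c->get_result().size() << '\n';   // prints 1
      }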

    Read the article

  • Detect if a method was overridden using Reflection (C#)

    - by Andrey
    Say I have a base class TestBase where I define a virtual method TestMe(): class TestBase { public virtual bool TestMe() { } } Now I inherit this class: class Test1 : TestBase { public override bool TestMe() {} } Now, using Reflection, I need to find out if the method TestMe has been overridden in the child class - is that possible? What I need it for: I am writing a designer visualizer for type "object" to show the whole hierarchy of inheritance and also show which virtual methods were overridden at which level.

    Read the article

  • Virtualbox in Headless mode

    - by ask
    I used the virtual machines in VirtualBox in "headless" mode instead of GUI mode. What are the advantages of using it in headless mode? By "headless", does it mean that the server doesn't have a keyboard or monitor attached, or does it mean that no window will pop up (denoting that the VM is on, or any other status) when a virtual machine is worked with? What exactly does it mean? Please reply.

    Read the article

  • Specifying routes by subdomain in Express using vhost middleware

    - by user730569
    I'm using the vhost express/connect middleware and I'm a bit confused as to how it should be used. I want to have one set of routes apply to hosts with subdomains, and another set apply to hosts without subdomains. In my app.js file, I have var app = express.createServer(); app.use...(middleware)... app.use(express.vhost('*.host', require('./domain_routing')("yes"))); app.use(express.vhost('host', require('./domain_routing')("no"))); app.use...(middleware)... app.listen(8000); and then in domain_routing.js: module.exports = function(subdomain){ var app = express.createServer(); require('./routes')(app, subdomain); return app; } and then in routes.js I plan to run sets of routes, depending on whether the subdomain variable passed in is "yes" or "no". Am I on the right track, or is this not how you use this middleware?

    Read the article

  • PyQt4 plugin in c++ application

    - by veverica17
    How is it possible to load a Python script as a plugin in a Qt-based application? The basic idea would be to make a class in C++: class b { virtual void method1(); virtual void method2(); }; and 'somehow' inherit it in Python, like: class c(b): def method1(self): #do something def method2(self): #do something I need to be able to modify the GUI from Python (add buttons to some widgets made in C++ with Qt). Basically something similar to the plugin architecture of gedit, Blender, etc., but with Qt.
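
    One route for the "inherit a C++ class in Python" part, sketched here with pybind11 rather than the PyQt4/SIP route the post hints at (the class and method names come from the post; the module name and the pure-virtual variant are assumptions), is a trampoline class that forwards the virtual calls to Python overrides. The GUI side would still have to hand Qt widget pointers across the same boundary:

      #include <pybind11/pybind11.h>
      namespace py = pybind11;

      // The C++ plugin interface from the question.
      class b {
      public:
          virtual ~b() = default;
          virtual void method1() = 0;
          virtual void method2() = 0;
      };

      // Trampoline: lets a Python subclass override the virtuals.
      class PyB : public b {
      public:
          using b::b;
          void method1() override { PYBIND11_OVERRIDE_PURE(void, b, method1); }
          void method2() override { PYBIND11_OVERRIDE_PURE(void, b, method2); }
      };

      PYBIND11_MODULE(plugin_api, m) {
          py::class_<b, PyB>(m, "b")
              .def(py::init<>())
              .def("method1", &b::method1)
              .def("method2", &b::method2);
      }

    The host application then imports the user's script, instantiates their subclass of plugin_api.b, and calls method1()/method2() through a b* like any other plugin.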

    Read the article

  • In C# is there a thread scheduler for long running threads?

    - by LogicMagic
    Hi, our scenario is a network scanner. It connects to a set of hosts and scans them in parallel for a while using low-priority background threads. I want to be able to schedule lots of work but only have a given number, say ten, of hosts scanned in parallel. Even if I create my own threads, the many callbacks and other asynchronous goodness use the ThreadPool, and I end up running out of resources. I should look at MonoTorrent... If I use THE ThreadPool, can I limit my application to some number of threads that will leave enough for the rest of the application to run smoothly? Is there a thread pool that I can initialize to n long-lived threads?

    Read the article

  • C++ - Error: expected unqualified-id before ‘using’

    - by Francisco P.
    Hello, everyone. I am having some trouble on a project I'm working on. Here's the header file for the Calor class: #ifndef _CALOR_ #define _CALOR_ #include "gradiente.h" using namespace std; class Calor : public Gradiente { public: Calor(); Calor(int a); ~Calor(); int getTemp(); int getMinTemp(); void setTemp(int a); void setMinTemp(int a); void mostraSensor(); }; #endif When I try to compile it: calor.h|6|error: expected unqualified-id before ‘using’| Why does this happen? I've been searching online and learned this error occurs mostly due to corrupted included files. That makes no sense to me, though. This class inherits from Gradiente: #ifndef _GRADIENTE_ #define _GRADIENTE_ #include "sensor.h" using namespace std; class Gradiente : public Sensor { protected: int vActual, vMin; public: Gradiente(); ~Gradiente(); } #endif which in turn inherits from Sensor: #ifndef _SENSOR_ #define _SENSOR_ #include <iostream> #include <fstream> #include <string> #include "definicoes.h" using namespace std; class Sensor { protected: int tipo; int IDsensor; bool estadoAlerta; bool estadoActivo; static int numSensores; public: Sensor(/*PARAMETROS*/); Sensor(ifstream &); ~Sensor(); int getIDsensor(); bool getEstadoAlerta(); bool getEstadoActivo(); void setEstadoAlerta(int a); void setEstadoActivo(int a); virtual void guardaSensor(ofstream &); virtual void mostraSensor(); // FUNÇÃO COMUM /* virtual int funcaoComum() = 0; virtual int funcaoComum(){return 0;};*/ }; #endif For completeness' sake, here's definicoes.h: #ifndef _DEFINICOES_ #define _DEFINICOES_ const unsigned int SENSOR_MOVIMENTO = 0; const unsigned int SENSOR_SOM = 1; const unsigned int SENSOR_PRESSAO = 2; const unsigned int SENSOR_CALOR = 3; const unsigned int SENSOR_CONTACTO = 4; const unsigned int MIN_MOVIMENTO = 10; const unsigned int MIN_SOM = 10; const unsigned int MIN_PRESSAO = 10; const unsigned int MIN_CALOR = 35; #endif Any help would be much appreciated. Thank you for your time.
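
    Not from the original post, but one detail worth checking given where the error points: the Gradiente definition as quoted ends with } #endif and no semicolon after the class body, and a missing class-terminating semicolon in a header classically surfaces as "expected unqualified-id" at the next token the compiler sees, which here is the using directive in calor.h. A minimal self-contained sketch of the shape of the fix (hypothetical stand-ins, not the project's real code):

      #include <iostream>

      // Every class definition must end with a semicolon; without it, the error
      // appears at whatever follows the #include in the *including* file.
      struct Sensor { virtual ~Sensor() {} };

      class Gradiente : public Sensor {
      protected:
          int vActual, vMin;
      public:
          Gradiente() : vActual(0), vMin(0) {}
          ~Gradiente() {}
      };   // <- the semicolon that appears to be missing in the quoted gradiente.h

      using namespace std;   // with the semicolon in place, this line parses fine

      int main() { Gradiente g; return 0; }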

    Read the article

  • How to integrate camera image into physics engine?

    - by Pedro
    I recently came across this and would like to implement something similar. The basic approach is clear: I have to threshold the image and check if a virtual object collides with the remaining foreground. Instead of implementing the physics myself, I'd like to use an engine like Box2D. But how do I integrate the thresholded image into the physics engine so it is possible to interact with virtual objects?
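
    One common way to feed a binary mask into a rigid-body engine is to rebuild a set of static bodies from the foreground cells each frame, so dynamic virtual objects collide with the camera silhouette. The sketch below assumes Box2D 2.x and a row-major 0/1 mask; the function name, cell size and scale are arbitrary choices, and a real implementation would merge neighbouring cells or trace the contour into a b2ChainShape to keep the body count down:

      #include <box2d/box2d.h>   // or <Box2D/Box2D.h> in older releases
      #include <vector>

      // Build static collision geometry from a thresholded camera image.
      // mask[y * width + x] != 0 means "foreground" (e.g. the player's silhouette).
      // cellSize is in Box2D meters per pixel cell, an arbitrary choice for the sketch.
      std::vector<b2Body*> BuildMaskBodies(b2World& world,
                                           const std::vector<unsigned char>& mask,
                                           int width, int height, float cellSize) {
          std::vector<b2Body*> bodies;
          for (int y = 0; y < height; ++y) {
              for (int x = 0; x < width; ++x) {
                  if (!mask[y * width + x]) continue;     // background: no collision
                  b2BodyDef def;                          // b2_staticBody is the default
                  def.position.Set((x + 0.5f) * cellSize, (y + 0.5f) * cellSize);
                  b2Body* body = world.CreateBody(&def);
                  b2PolygonShape box;
                  box.SetAsBox(cellSize * 0.5f, cellSize * 0.5f);
                  body->CreateFixture(&box, 0.0f);        // density 0: static fixture
                  bodies.push_back(body);                 // keep handles to destroy next frame
              }
          }
          return bodies;
      }

      // Each new camera frame: destroy last frame's bodies with world.DestroyBody(b),
      // rebuild from the new mask, then step the world as usual.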

    Read the article

  • HttpContext returning only "/"

    - by user281180
    I have the following two lines of code in my model; however, both virtual and path have the value "\". Where have I gone wrong? var virtual = VirtualPathUtility.ToAbsolute(HttpContext.Current.Request.ApplicationPath); var path = HttpContext.Current.Request.ApplicationPath;

    Read the article

  • Test if a method is an override?

    - by Water Cooler v2
    Is there a way to tell if a method is an override? For example: public class Foo { public virtual void DoSomething() {} public virtual int GimmeIntPleez() { return 0; } } public class BabyFoo: Foo { public override int GimmeIntPleez() { return -1; } } Is it possible to reflect on BabyFoo and tell if GimmeIntPleez is an override?

    Read the article

  • Windows 2008 Unknown Disks

    - by Ailbe
    I have a BL460c G7 blade server with OS Windows 2008 R2 SP1. This is a brand new C7000 enclosure, with FlexFabric interconnects. I got my FC switches setup and zoned properly to our Clariion CX4, and can see all the hosts that are assigned FCoE HBAs on both paths in both Navisphere and in HP Virtual Connect Manager. So I went ahead and created a storage group for a test server, assigned the appropriate host, assigned the LUN to the server. So far so good, log onto server and I can see 4 unknown disks.... No problem, I install MS MPIO, no luck, can't initialize the disks, and the multiple disks don't go away. Still no problem, I install PowerPath version 5.5 reboot. Now I see 3 disks. One is initialized and ready to go, but I still have 2 disks that I can't initialize, can't offline, can't delete. If I right click in storage manager and go to properties I can see that the MS MPIO tab, but I can't make a path active. I want to get rid of these phantom disks, but so far nothing is working and google searches are showing up some odd results, so obviously I'm not framing my question right. I thought I'd ask here real quick. Does anyone know a quick way to get rid of these unknown disks. Another question, do I need the MPIO feature installed if I have PowerPath installed? This is my first time installing Windows 2008 R2 in this fashion and I'm not sure if that feature is needed or not right now. So some more information to add to this. It seems I'm dealing with more of a Windows issue than anything else. I removed the LUN from the server, uninstalled PowerPath completely, removed the MPIO feature from the server, and rebooted twice. Now I am back to the original 4 Unknown Disks (plus the local Disk 0 containing the OS partition of course, which is working fine) I went to diskpart, I could see all 4 Unknown disks, I selected each disk, ran clean (just in case i'd somehow brought them online previously as GPT and didn't realize it) After a few minutes I was no longer able to see the disks when I ran list disk. However, the disks are still in Disk Management. When I try and offline the disks from Disk Management I get an error: Virtual Disk Manager - The system cannot find the file specified. Accompanied by an error in System Event Logs: Log Name: System Source: Virtual Disk Service Date: 6/25/2012 4:02:01 PM Event ID: 1 Task Category: None Level: Error Keywords: Classic User: N/A Computer: hostname.local Description: Unexpected failure. Error code: 2@02000018 Event Xml: 1 2 0 0x80000000000000 4239 System hostname.local 2@02000018 I feel sure there is a place I can go in the Registry to get rid of these, I just can't recall where and I am loathe to experiement. So to recap, there are currently no LUNS attached at all, I still have the phantom disks, and I'm getting The system cannot find the file specified from Virtual Disk Manager when I try to take them offline. Thanks!

    Read the article

  • Openswan ipsec transport tunnel not going up

    - by gparent
    On ClusterA and B I have installed the "openswan" package on Debian Squeeze. ClusterA ip is 172.16.0.107, B is 172.16.0.108 When they ping one another, it does not reach the destination. /etc/ipsec.conf: version 2.0 # conforms to second version of ipsec.conf specification config setup protostack=netkey oe=off conn L2TP-PSK-CLUSTER type=transport left=172.16.0.107 right=172.16.0.108 auto=start ike=aes128-sha1-modp2048 authby=secret compress=yes /etc/ipsec.secrets: 172.16.0.107 172.16.0.108 : PSK "L2TPKEY" 172.16.0.108 172.16.0.107 : PSK "L2TPKEY" Here is the result of ipsec verify on both machines: root@cluster2:~# ipsec verify Checking your system to see if IPsec got installed and started correctly: Version check and ipsec on-path [OK] Linux Openswan U2.6.28/K2.6.32-5-amd64 (netkey) Checking for IPsec support in kernel [OK] NETKEY detected, testing for disabled ICMP send_redirects [OK] NETKEY detected, testing for disabled ICMP accept_redirects [OK] Checking that pluto is running [OK] Pluto listening for IKE on udp 500 [OK] Pluto listening for NAT-T on udp 4500 [FAILED] Checking for 'ip' command [OK] Checking for 'iptables' command [OK] Opportunistic Encryption Support [DISABLED] root@cluster2:~# This is the end of the output of ipsec auto --status: 000 "cluster": 172.16.0.108<172.16.0.108>[+S=C]...172.16.0.107<172.16.0.107>[+S=C]; prospective erouted; eroute owner: #0 000 "cluster": myip=unset; hisip=unset; 000 "cluster": ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0 000 "cluster": policy: PSK+ENCRYPT+COMPRESS+PFS+UP+IKEv2ALLOW+lKOD+rKOD; prio: 32,32; interface: eth0; 000 "cluster": newest ISAKMP SA: #1; newest IPsec SA: #0; 000 "cluster": IKE algorithm newest: AES_CBC_128-SHA1-MODP2048 000 000 #3: "cluster":500 STATE_QUICK_R0 (expecting QI1); EVENT_CRYPTO_FAILED in 298s; lastdpd=-1s(seq in:0 out:0); idle; import:admin initiate 000 #2: "cluster":500 STATE_QUICK_I1 (sent QI1, expecting QR1); EVENT_RETRANSMIT in 13s; lastdpd=-1s(seq in:0 out:0); idle; import:admin initiate 000 #1: "cluster":500 STATE_MAIN_I4 (ISAKMP SA established); EVENT_SA_REPLACE in 2991s; newest ISAKMP; lastdpd=-1s(seq in:0 out:0); idle; import:admin initiate 000 Interestingly enough, if I do ike-scan on the server here's what happens: Doesn't seem to take my ike settings into account root@cluster1:~# ike-scan -M 172.16.0.108 Starting ike-scan 1.9 with 1 hosts (http://www.nta-monitor.com/tools/ike-scan/) 172.16.0.108 Main Mode Handshake returned HDR=(CKY-R=641bffa66ba717b6) SA=(Enc=3DES Hash=SHA1 Auth=PSK Group=2:modp1024 LifeType=Seconds LifeDuration(4)=0x00007080) VID=4f45517b4f7f6e657a7b4351 VID=afcad71368a1f1c96b8696fc77570100 (Dead Peer Detection v1.0) Ending ike-scan 1.9: 1 hosts scanned in 0.008 seconds (118.19 hosts/sec). 1 returned handshake; 0 returned notify root@cluster1:~# I can't tell what's going on here, this is pretty much the simplest config I can have according to the examples.

    Read the article

  • apache2 namevirtualhost resolving wrong site

    - by joe
    Running apache 2.2.6. I'm setting up a development environment. dev and production will be hosted on the same machine, same IP address. DNS entries like prod.domain.com and dev.domain.com point to the same IP. * Imprortant: it is required that dev and prod are otherwise completely separate. Each will run it's own apache instance. Each will use it's own apache configuration. Each, prod and dev, will host http and https. I have this set up and working, but not as restrictive as I'd like. For instance, the production config: NameVirtualHost *:80 NameVirtualHost *:443 <VirtualHost *:80 > ServerName prod.domain.com # ... etc </VirtualHost> <VirtualHost *:443 > ServerName prod.domain.com # ... etc </VirtualHost> The dev site is set up similarly, using ports 8080 and 4443. Each site works fine. But assuming both apaches are running, one can also hit "cross-site" by mistake. So, inadvertently hitting prod.domain.com:8080 successfully returns a page from the dev site. It would be much better if this failed completely. This is a bit more difficult to solve (for me) because of the need for two apache configs. If all in one, the single process would have full knowledge of everything. So, I tried to solve this with brute force, including virtual hosts for the "other" site, with something that would fail, like no access to documentroot. But apache then inexplicably finds the "wrong" virtual host. Here's the full config for production, with the dummy dev configs. NameVirtualHost *:80 NameVirtualHost *:443 # ---------------------------------------------- # DUMMY HOSTS <VirtualHost *:8080 > ServerName dev.domain.com:8080 DocumentRoot /tmp/ <Directory /tmp/ > Order deny,allow Deny from all </Directory> </VirtualHost> <VirtualHost *:4443 > ServerName dev.domain.com:4443 DocumentRoot /tmp/ <Directory /tmp/ > Order deny,allow Deny from all </Directory> </VirtualHost> # ---------------------------------------------- # REAL PRODUCTION HOSTS <VirtualHost *:80 > ServerName prod.domain.com:80 DocumentRoot /something/valid/ <Directory /something/valid/> Order allow,deny Allow from all </Directory> </VirtualHost> <VirtualHost *:443 > ServerName prod.domain.com:443 DocumentRoot /something/valid/ <Directory /something/valid/> Order allow,deny Allow from all </Directory> # .... other valid ssl setup </VirtualHost> Here's the strange thing. With this configuration, a prod.domain.com:80 hit succeeds. But a prod.domain.com:443 hit fails, because it finds the dev.domain.com:4443 instead. I've also tried removing the port from the ServerName, but it still doesn't work. Sorry for the long question. Hopefully this is enough information. Thanks in advance for any help.

    Read the article

  • Linux not buffering block I/O when the device is not "in use" (i.e. mounted)

    - by Radek Hladík
    I am installing new server and I've found an interesting issue. The server is running Fedora 19 (3.11.7-200.fc19.x86_64 kernel) and is supposed to host a few KVM/Qemu virtual servers (mail server, file server, etc..). The HW is Intel(R) Xeon(R) CPU 5160 @ 3.00GHz with 16GB RAM. One of the most important features will be Samba server and we have decided to make it as virtual machine with almost direct access to the disks. So the real HDD is cached on SSD (via bcache) then raided with md and the final device is exported into the virtual machine via virtio. The virtual machine is again Fedora 19 with the same kernel. One important topic to find out is whether the virtualization layer will not introduce high overload into disk I/Os. So far I've been able to get up to 180MB/s in VM and up to 220MB/s on real HW (on the SSD disk). I am still not sure why the overhead is so big but it is more than the network can handle so I do not care so much. The interesting thing is that I've found that the disk reads are not buffered in the VM unless I create and mount FS on the disk or I use the disks somehow. Simply put: Lets do dd to read disk for the first time (the /dev/vdd is an old Raptor disk 70MB/s is its real speed): [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 36.8038 s, 71.2 MB/s Buffers: 14444 kB Rereading the data shows that they are cached somewhere but not in buffers of the VM. Also the speed increased to "only" 500MB/s. The VM has 4GB of RAM (more that the test file) [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.16016 s, 508 MB/s Buffers: 14444 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.05727 s, 518 MB/s Buffers: 14444 kB Now lets mount the FS on /dev/vdd and try the dd again: [root@localhost ~]# mount /dev/vdd /mnt/tmp [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 4.68578 s, 559 MB/s Buffers: 2574592 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 1.50504 s, 1.7 GB/s Buffers: 2574592 kB While the first read was the same, all 2.6GB got buffered and the next read was at 1.7GB/s. And when I unmount the device: [root@localhost ~]# umount /mnt/tmp [root@localhost ~]# cat /proc/meminfo | grep Buffers Buffers: 14452 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.10499 s, 514 MB/s Buffers: 14468 kB The bcache was disabled while testing and the results are same on faster (newer) HDDs and on SSD (except for the initial read speed of course). To sum it up. When I read from the device via dd first time, it gets read from the disk. Next time I reread it gets cached in the host but not in the guest (thats actually the same issue, more on that later). When I mount the filesystem but try to read the device directly it gets cached in VM (via buffers). As soon as I stop "using" it, buffers are discarded and the device is not cached anymore in the VM. When I looked into buffers value on the host I realized that the situation is the same. The block I/O gets buffered only when the disk is in use, in this case it means "exported to a VM". 
    On the host, after all the measurements were done: 3165552 buffers. On the host, after the VM shutdown: 119176 buffers. I know it is not important, as the disks will be mounted all the time, but I am curious and would like to know why it works like this.

    Read the article

  • IBM Server Config questions

    - by Joel Coel
    I have a few questions on a potential server setup. First, the situation: Last year we bought an IBM x3500 server with 2 Xeon E5410's, 9GB RAM, 6 HDDs. The original intent for this server was to replace the old exchange e-mail server. It was brought in, set up, and then shortly after we switched to gmail. Shortly after that my predecessor left for greener pastures, and finally I was hired. So this nice server is now sitting (mostly) idle. This year I have budget again for one server, and of course I want to put this other server to work. I'm thinking about the best use for the two server, and I think I finally have a plan for what I want to do with them. The idea is to use the two newer servers as a pair of VM hosts. I will set up each server with the same 8 VMs, but divide up the load so that only 4 are active per physical host. That means I've normally got 2GB RAM + 2 cores per host. I've done some load testing to pick out what servers to convert to virtual, and chose them so that each host will be capable of handling the entire set of 8 by itself in a pinch with 1 core and 1GB RAM, but would be very taxed to do so. This should take our data center from 13 total servers down to 7. The "servers" I'm replacing are mostly re-purposed desktops, so I'm more than happy to be able to do this. Now it's time to go shopping for the new server. I'd like my two hosts to match as closely as possible, and so I'm looking at IBM again. It also helps that we have some educational matching grant money from IBM that I need to use to help pay for this system (we're a small private college). So finally, (if you're not bored already), we come to my questions: Am I missing anything big or obvious in this plan? I'm a little worried about network performance since the VM hosts will only have 4 nics total where 8 used to be, but I don't think it will be a problem. Is there anything else like this I might be overlooking? Am I making it even too complicated? IBM no longer has a good analog to last year's server. If I want to match the performance (8 cores, 9GB RAM, 1333mhz front side bus, 6 spindles), I have to spend quite a bit more than we paid last year: $2K+, or nearly a 33% cost increase. This only brings a marginal increase in performance. The alternative to stay in budget is to take a hit on the fsb down to 800mhz or cut the number of cores in half, neither of which is attractive. The main cost culprit is the processor. IBM no longer offers the E5410. It's listed as a part, but not available in any of the server configs I've looked at. I'm considering getting the cheapest 800mhz fsb dual core xeon I can configure and then buying the E5410's separately. That's still an extra $350 I wasn't counting on, but that's better than $2K. I want to know what others think of this - will it work or will I end up with the wrong motherboard or some other issue? Am I missing a simple way to configure the server I really want? I don't really intend to do this, but one option to save some money back is to omit the redundant power supply. Since my redundancy plan for these system is to switch over to a completely different host, the extra power isn't fully necessary. That said, it's still very helpful to avoid even short downtimes while I switch over VMs. Has anyone done this?

    Read the article
