Search Results

Search found 11930 results on 478 pages for 'shared machines'.

Page 236/478 | < Previous Page | 232 233 234 235 236 237 238 239 240 241 242 243  | Next Page >

  • DHCP Client Can't Find DHCP Server

    - by leeman24
    I currently have 3 machines:

        CentOS (router): eth1 - 18.0.168.1, eth2 - 145.165.34.1
        Windows Server 2008 (server): 18.0.168.2, DHCP scope 145.165.34.10 - 145.165.34.20
        Windows 7 (client): supposed to use DHCP

    I can't get my Windows 7 client to get an address from the Windows Server 2008 DHCP server. Every network interface can ping every other (e.g. 18.0.168.2 can ping 18.0.168.1 and 145.165.34.1, and the other way around). My Linux machine acting as the router has default iptables rules, apart from this command, which may or may not be right:

        iptables -I INPUT -p udp -d 18.0.168.2 --dport 67:68 -j ACCEPT

    I have also tried it after flushing the iptables rules. I was looking at the dhcrelay command, but it seems CentOS doesn't have it and I am not even sure how to use it.
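
    A hedged sketch of a relay setup, assuming the Windows 7 client sits on the 145.165.34.x side and that the ISC dhcp package (which provides dhcrelay on CentOS) can be installed; the interface name and DHCP server address below are taken from the question:

        yum install dhcp                                     # provides /usr/sbin/dhcrelay on CentOS
        dhcrelay -i eth2 18.0.168.2                          # listen on the client-facing interface, relay to the DHCP server
        iptables -I INPUT -p udp --dport 67:68 -j ACCEPT     # allow DHCP traffic to the relay itself
        iptables -I FORWARD -p udp --dport 67:68 -j ACCEPT   # and let forwarded DHCP packets through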

    Read the article

  • Can't create new folder from anywhere in Windows 7

    - by ymasood
    I have this problem on two of my new laptops and can't seem to find a decent workable solution elsewhere in forum land. The problem is that on my Windows 7 Professional machines the right-click menu doesn't show the New > Folder option, and elsewhere in Explorer I'm likewise unable to create new folders. I'll be happy to get this tiny problem resolved and declare that Windows 7 is almost perfect! Thanks to all of you in advance for your contribution! PS: None of the Vista solutions seem to work here!
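
    One common culprit is a missing "New" context-menu handler in the registry; a hedged sketch of the usual repair, run from an elevated command prompt (the CLSID below is the stock value for the New-menu handler, so verify it against a working machine before applying):

        reg add "HKCR\Directory\Background\shellex\ContextMenuHandlers\New" /ve /d "{D969A300-E7FF-11d0-A93B-00A0C90F2719}" /f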

    Read the article

  • vmware server end of life, where to go now?

    - by matnagel
    We have some virtual machines on VMware Server 2.x running on 64-bit hardware and are quite happy with it. As VMware Server will no longer be offered, we are thinking of migrating to ESXi, which seems to be free. We will have to install the specialized network cards, but that's a minor problem. But having been left alone with a rather quietly discontinued product, there is some resistance to VMware. VirtualBox seems to work: http://blogs.oracle.com/virtualization/2010/06/migrating_from_vmware_to_virtu.html What other free (of licensing cost) options are there? We have Windows Server 2003 32-bit VMs and also Linux 32- and 64-bit VMs to migrate. So Xen, which does not run Microsoft OSes, does not seem to be an option.
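
    If KVM or VirtualBox ends up being the target, the existing disks can usually be carried over rather than rebuilt; a hedged sketch using qemu-img (the file names are placeholders):

        # VirtualBox can open VMDK disks directly; for KVM/libvirt, convert to qcow2 first
        qemu-img convert -f vmdk -O qcow2 winserver2003.vmdk winserver2003.qcow2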

    Read the article

  • Installing PHP 5.3 on a Windows host with both Apache and IIS

    - by Hippyjim
    I'm currently experimenting with a couple of configurations of Apache and IIS on the same server box - so far using Apache as a proxy for IIS is winning, but another of my setups has Apache on a non-standard port with IIS taking the majority of traffic. Both of these machines currently have PHP 5.2 installed. I want to upgrade to PHP 5.3, but the installer asks which server I'm running - I'm running both - so what do I tell it? Which configuration will be the most flexible: telling it we're running IIS, or telling it we're running Apache?
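
    One hedged alternative to the installer is the zip distribution of PHP 5.3, configured by hand for each server: the thread-safe build loads into Apache as a module, while IIS is typically pointed at php-cgi.exe via FastCGI. A sketch of the Apache 2.2 side (the paths are assumptions):

        # httpd.conf - load the thread-safe PHP 5.3 module from the zip distribution
        LoadModule php5_module "C:/php-5.3/php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        PHPIniDir "C:/php-5.3"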

    Read the article

  • git networking for small team

    - by takeshin
    I'm trying to set up git for my programming team. My setup is:

        1. example.com (Ubuntu server), IP: 192.168.1.2 (public: xxx.yyy.yyy.zzz), main git repository in /var/www/testgit, user: mot (root)
        2. host2, Ubuntu, IP: 192.168.1.101, git clone of main repo in ~/public_html/testgit1, user: nairda
        3. host3, Ubuntu, IP: 192.168.1.102, git clone of main repo in ~/www/testgit2, user: mot
        4. host4, Windows Vista, Samba, msysgit, IP: 192.168.1.103, git clone of main repo in c:\shared\testgit3, user: ataga

    I start a new main repo:

        cd /var/www/testgit1
        git init

    Now, a lot of questions: Which groups and users do I have to create? How do I set up the required ssh keys? (I'm playing with gitosis, but with no success so far.) How do I make the main repo visible to other hosts? How do I clone this repo on the hosts? How do I pull changes from others into the main repo?
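
    A hedged sketch of one common arrangement, assuming plain ssh access rather than gitosis: a bare repository on example.com owned by a shared group, with every developer account in that group and each developer's public key in their own ~/.ssh/authorized_keys. The /var/git/testgit.git path and the gitusers group name are assumptions.

        # on example.com: shared group and a bare central repo
        sudo addgroup gitusers
        sudo usermod -a -G gitusers mot
        sudo git init --bare --shared=group /var/git/testgit.git
        sudo chgrp -R gitusers /var/git/testgit.git

        # on each developer host: clone over ssh, then push/pull as usual
        git clone ssh://mot@192.168.1.2/var/git/testgit.git
        git push origin master
        git pull origin master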

    Read the article

  • Sun Grid Engine: Automatically Terminating Idle Interactive Jobs

    - by dmcer
    We're considering using Sun Grid Engine on a small compute cluster. Right now, the current set up is pretty crude and just involves having people ssh to an open machine to run their jobs. We'd like to allow interactive jobs, since that should ease the transition from manually starting jobs to starting them using qsub. But there is some concern that, if we do, people might accidentally leave their interactive sessions idle and block other jobs from being run on the machines. The issue isn't just theoretical, since we previously tried using OpenPBS and there was a problem with people opening up an interactive job in a screen session and essentially camping on a machine. Is there any way to configure SGE to automatically kill idle interactive jobs? It looks like this was requested as an enhancement (Issue #2447) way back in 2007, but it doesn't seem like the request ever got implemented.
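
    One hedged workaround, short of a true idle-detector: cap interactive jobs with a hard wall-clock limit on the queue that qrsh/qlogin sessions land in, so an abandoned session is reaped after a fixed time. The queue name and limit below are assumptions:

        # edit the interactive queue and set a hard run-time limit, e.g. 8 hours
        qconf -mq interactive.q
        #   h_rt    08:00:00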

    Read the article

  • forward all mail on a specified domain to script

    - by David
    Hey all! I run a disposable e-mail service that accepts all incoming mail and forwards it to a PHP script that stores it in a database for people to view. Before now, I have been on shared hosting with cPanel, which makes it easy to pipe e-mails to a script. Now, however, I have my own VPS, and it doesn't have cPanel. How do I pipe e-mails to a script? Further, how do I pipe e-mail sent to any address on certain specified domains to my script? You see, aside from the main domain, there are several alternate domains that people can use if the main domain is blocked, and on each domain I want any address to be usable (xyz@domain, abc@domain, anythingelse@domain). The VPS has Ubuntu 9.04 installed, and I have been experimenting with Postfix, though I can switch to Exim or Sendmail if it is easier.
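
    A hedged sketch of one way to do this with Postfix, assuming each disposable domain should be a catch-all and that the script lives at /var/www/mailhandler.php (the domain names, alias name, and path are placeholders): list the domains as virtual alias domains, map every address on them to one local alias, and pipe that alias into the script.

        # /etc/postfix/main.cf
        virtual_alias_domains = maindomain.com, altdomain1.com, altdomain2.com
        virtual_alias_maps = hash:/etc/postfix/virtual

        # /etc/postfix/virtual  (catch-all: any address on these domains -> local alias "maildump")
        @maindomain.com    maildump
        @altdomain1.com    maildump
        @altdomain2.com    maildump

        # /etc/aliases  (pipe the alias into the PHP script)
        maildump: "|/usr/bin/php /var/www/mailhandler.php"

        # rebuild the maps and reload
        postmap /etc/postfix/virtual
        newaliases
        postfix reload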

    Read the article

  • Migration with SysPrep, ImageX and

    - by Jack Smith
    I know that you can use SysPrep and ImageX to create a prepared image that can be used on several systems, but the question is: how well does it work in a corporate environment when moving machines from old hardware onto new hard drives and new hardware? EDIT: The system runs accounting software and databases. So would SysPrep remove all license keys and other information, which would cause problems, right? Would something else be a better option, even though there are heavy costs involved? Currently, when I clone/copy the drive, Windows will black-screen on me. So I need something with support for dissimilar hardware?
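
    For reference, a hedged sketch of the usual generalize-and-capture sequence with the Windows AIK tools (drive letters and image names are placeholders); whether sysprep strips the accounting software's license keys is something to verify with that vendor first:

        rem on the source machine: strip hardware/SID-specific state, then shut down
        c:\windows\system32\sysprep\sysprep.exe /generalize /oobe /shutdown

        rem booted into Windows PE: capture the prepared system volume
        imagex /capture c: d:\images\accounting-base.wim "Accounting base image"

        rem on the new hardware, again from Windows PE: apply the image
        imagex /apply d:\images\accounting-base.wim 1 c: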

    Read the article

  • CentOS only detecting 50% of ram

    - by Devator
    I have 16 GB of RAM in my machine. Before, free -m showed the normal 16 GB; however, now (after a reboot) it only detects 8 GB. Is one RAM module damaged? grep -i memory /var/log/dmesg outputs:

        Memory: 15621184k/16017200k available (2535k kernel code, 387120k reserved, 1748k data, 196k init)

    (Which looks like 16 GB to me.) free -m outputs:

                     total       used       free     shared    buffers     cached
        Mem:          7484       7415         68          0       6104        524
        -/+ buffers/cache:        786       6697
        Swap:         2055          0       2054

    Anything I might be missing? Thanks in advance.
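
    A hedged way to check whether a module has dropped out, assuming dmidecode is installed: list the physical DIMM slots and compare against what is actually fitted (a dead or unseated module typically shows its slot as empty or "No Module Installed"):

        dmidecode --type memory | grep -i size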

    Read the article

  • What are the implications of expanding an internal subnet mask?

    - by Philip
    Our network is currently working on a 192.168.0.x subnet, all controlled through DHCP, except for the few main servers, which have hard-configured IP address settings. What would I kill if I changed the DHCP-published subnet mask from 255.255.255.0 to 255.255.0.0? The reason for doing this is not because we have a huge sudden influx of machines, but because I'd like to start partitioning specific devices into specific IP ranges (to be neat and tidy). For what it's worth, I don't plan on changing the allocated DHCP address range, but rather want to move some of the reserved and excluded DHCP addresses out of the address pool, e.g. printers will be 192.168.2.x. I will obviously need to change the subnet mask manually on my manually configured devices.
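
    A quick worked illustration of what the wider mask changes (addresses taken from the question):

        255.255.255.0 (/24): 192.168.0.0 - 192.168.0.255     one on-link network, 254 usable hosts
        255.255.0.0   (/16): 192.168.0.0 - 192.168.255.255   one on-link network, 65534 usable hosts

        With the /16 mask, 192.168.2.x printers are on-link for every client (no routing needed),
        but every statically configured host must get the new mask too, or it will treat 192.168.2.x
        as off-link and try to reach it via its default gateway.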

    Read the article

  • Dell R510 vs R710

    - by AX1
    Hello, the Dell R510 and R710 can both be had in comparable configurations (e.g. X5650, 24 GB RAM, etc.), and these usually come out to about the same price. Is there a particular reason why one would choose the R510 over the R710, or vice versa? There really appears to be a lack of differentiating factors. The only 'major' factor I found, which doesn't apply to me though, is that the R510 can hold up to 12 3.5in HDDs while the R710 (which is slightly more expensive) can only hold up to 6 3.5in HDDs. Maybe you've bought either of these machines (or both) and can shed some light on other differences, and on why someone should choose one over the other when the pricing is pretty much the same with my configuration. Thanks!

    Read the article

  • portable cross-platform WebDAV Client

    - by theduke
    I am looking for a portable application that will allow me to do this: Browse a WebDAV share and open a file. Edit the file locally. Save the file, and automatically propagate the change to WebDAV. Is there any CROSS-PLATFORM application out there that will let me do this and exists as a portable app? The reason I need this functionality is that I regularly have to access files via WebDAV from public machines where I do not have the necessary permissions to natively mount a WebDAV share, or to install the necessary components.

    Read the article

  • How to set up a software VPN when moving a server to the cloud

    - by Neal L
    I work in a small company with one office in Dallas and another in Los Angeles. We run a Fedora server at our Dallas location and use a Linksys RV042 at each location to create a VPN connection between the sites. Every time the power or internet goes out in Dallas, our server is inaccessible, so the entire company goes down. Because of this, we would like to use a shared server in the cloud (something like Linode) to avoid this problem. As a relative novice to VPN configurations, I would like to know if it is possible to set up a software VPN on the cloud server and connect our local networks in Dallas and LA to that VPN. I've read about OpenVPN and ssh VPNs, but I don't know which is the best option. Could anyone with some experience point me in the right direction on the right combination of software VPN and hardware for this? We're open to new hardware to make this happen. Thanks!
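
    A hedged sketch of one way to do this with OpenVPN, assuming the cloud server acts as the hub and each office runs a client that routes its LAN over the tunnel (the subnets and file names are assumptions):

        # server.conf on the cloud server (hub)
        dev tun
        server 10.8.0.0 255.255.255.0
        client-config-dir ccd
        client-to-client
        route 192.168.10.0 255.255.255.0        # Dallas LAN, reached via its client
        route 192.168.20.0 255.255.255.0        # LA LAN, reached via its client
        push "route 192.168.10.0 255.255.255.0"
        push "route 192.168.20.0 255.255.255.0"

        # ccd/dallas - tell the server which LAN sits behind the Dallas client
        iroute 192.168.10.0 255.255.255.0

        # ccd/losangeles
        iroute 192.168.20.0 255.255.255.0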

    Read the article

  • kvm memory changes via virsh not propagating to vm

    - by kevintmckay
    Hi, I just started using KVM on RHEL 6. After creating a VM I tried to increase its memory, but the changes I made in the XML file do not propagate to the VM, even after bouncing the VM and restarting libvirt?

        [root@kvm01 qemu]# virsh dominfo dev-kvm01
        Id:             2
        Name:           dev-kvm01
        UUID:           9b2bf581-2807-3116-b176-60e9c0559943
        OS Type:        hvm
        State:          running
        CPU(s):         2
        CPU time:       1975.3s
        Max memory:     7864320 kB
        Used memory:    7864320 kB
        Persistent:     yes
        Autostart:      disable
        Security model: selinux
        Security DOI:   0
        Security label: system_u:system_r:svirt_t:s0:c47,c760 (enforcing)

        [iknowmed@dev-kvm01 ~]$ free
                     total       used       free     shared    buffers     cached
        Mem:       3632284    3614508      17776          0       3980    3491676
        -/+ buffers/cache:     118852    3513432
        Swap:      5668856          0    5668856
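
    A hedged sequence worth trying, assuming the XML was edited by hand rather than through libvirt: change the definition with virsh edit (so libvirt re-reads it), then do a full shutdown and start of the guest, since a reboot from inside the VM does not pick up a new memory size.

        virsh edit dev-kvm01          # adjust <memory> and <currentMemory> (values are in KiB)
        virsh shutdown dev-kvm01      # a guest-internal reboot is not enough
        virsh start dev-kvm01
        virsh dominfo dev-kvm01       # confirm Max memory reflects the new value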

    Read the article

  • Monit and Thin and Unfound Gems

    - by TenJack
    I've been using Monit to monitor my Thin server and everything was working until I upgraded my Rails version from 2.3.4 to 2.3.14. Now when I try to start Thin using Monit it gives me a missing gem error:

        Missing the Rails 2.3.14 gem. Please `gem install -v=2.3.14 rails`

    I thought this might be a GEM PATH issue, so I also tried setting the GEM_HOME and PATH variables in the start command:

        check process thin3001 with pidfile /home/blahblah/apps/Vocab/shared/pids/thin.3001.pid
          start program = "/usr/bin/env PATH=/usr/lib/ruby/gems/1.8/gems GEM_HOME=/usr/lib/ruby/gems/1.8/gems /usr/bin/ruby /usr/bin/thin -C /etc/thin/vocab.yml start -o 3001"
          stop program = "/usr/bin/ruby /usr/bin/thin -C /etc/thin/vocab.yml stop -o 3001"
          if totalmem > 150.0 MB for 5 cycles then restart
          group thin

    It's strange because if I run the start command in the console it works fine; it's only within Monit that I get the missing gem error.
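
    One hedged thing to check: GEM_HOME normally points at the gem repository root (the directory containing gems/, specifications/, ...), not at the gems/ subdirectory itself, and the PATH above replaces the normal search path entirely rather than extending it. A sketch of the start line with those assumptions corrected:

        start program = "/usr/bin/env PATH=/usr/bin:/bin GEM_HOME=/usr/lib/ruby/gems/1.8 /usr/bin/ruby /usr/bin/thin -C /etc/thin/vocab.yml start -o 3001"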

    Read the article

  • What is /opt/sun_docs used for, in Solaris 10?

    - by benc
    Solaris 10, SPARC. While trying to clean up my "/opt" directory, I saw the "sun_docs" directory. I scanned the contents with "du -a", and also found a single, possibly related file (/var/opt/sun_docs/sundocs.html). If I understand correctly, it looks like a local set of HTML files, designed to be read by a locally running browser? It looks like it could be shared via http, if an admin knew how to turn that on. I did google and check docs.sun.com. -ben

    Read the article

  • Creating/Editing Existing Documents

    - by Caroline Jones
    A document was shared with me, enabling me to edit it. However, whenever I open it, it only lets me view it, and any time I try to open the "Open with Google Docs" tab, it acts as if I never pressed it; the same happens if I try to download it. The same goes for whenever I try to create a new document: I'll click "Create" and then "Document" and it won't register. I really need these issues fixed pronto, as I have to use this software every day. Please help!!

    Read the article

  • java.rmi.UnmarshalException: unable to pull client classes by server

    - by andrews
    Hi, I have an RMI client/server setup on two machines that works fine in a simple situation when the server doesn't require a client-side defined class. However, when I need to use a class defined on the client side, I am unable to have the server unmarshal those classes. I suspect this is an issue with my java.rmi.server.codebase property that I pass in as an argument to the client app. I followed Sun's RMI Tutorial trail and I think I have followed the steps exactly, except that I don't specify a classpath argument when executing the client and server because they execute in the directory right above the root package directory (however, I tried that too with no effect). The exceptions I get when attempting to execute the different client-side combinations described in detail below are all the same:

        RmiServer exception:
        java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
            java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is:
            java.lang.ClassNotFoundException: test.MyTask
            at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:353)
            at sun.rmi.transport.Transport$1.run(Transport.java:177)
            at java.security.AccessController.doPrivileged(Native Method)
            at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
            at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
            at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
            at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
            at java.lang.Thread.run(Thread.java:636)
            at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:255)
            at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:233)
            at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:142)
            at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:178)
            at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:132)
            at $Proxy0.execute(Unknown Source)
            at test.myClient.main(myClient.java:32)

    The details are: my client/server RMI is set up over a home network behind a router. The router is assigned a static IP address I will call myhostname. Appropriate port-mapping is set up in the router that points to the right machines.

        role    machine   os                  ip-address
        server  venice    Linux Ubuntu 9.10   10.0.1.2
        client  naples    Mac OS X Leopard    10.0.1.4

    I start up the server side as follows, inside /home/andrews/workspace/epsilon/bin:

    1. Starting the registry on the default port 1099:

        venice% rmiregistry &

    2. Starting the web server on port 2001, pointing to the code base for the common interfaces:

        venice% java webserver/ClassFileServer 2001 /home/andrew/workspace/epsilon/bin

    3. Starting the server app (main class in test/myServer), which registers the server object:

        venice% java -Djava.rmi.server.codebase="http://myhostname:2001/" -Djava.security.policy=server.policy -Djava.rmi.server.hostname=myhostname test/myServer &

    Now the client side, inside /Users/andrews/Development/Java/workspace/epsilon/bin:

    1. Start a local web server that can serve client-side classes to the server (not sure if this is needed, but I tried it and still had no success; I have added port-mapping to the router for 2001 to venice and 2002 to naples):

        naples$ java webserver/ClassFileServer 2002 /Users/andrews/Development/Java/workspace/epsilon/bin/

    Trying to run the client (note: I don't specify the -cp argument because the client executes right above the root package directory):

    Try #1, using an http hostname:

        naples$ java -Djava.rmi.server.codebase=http://10.0.1.4:2002/ -Djava.security.policy=client.policy test.myClient myhostname

    Note 1: the myhostname argument at the end is passed in to the client so that it resolves to the server's RMI hostname.
    Note 2: I tried using localhost:2002 instead of 10.0.1.4:2002 too.
    Note 3: I tried using myhostname:2002; since myhostname is assigned to the router and I have proper port-mapping set up, this address should resolve to naples and not venice.

    Try #2:

        naples$ java -Djava.rmi.server.codebase=file:/Users/andrews/Development/Java/workspace/epsilon/bin/ -Djava.security.policy=client.policy test.myClient myhostname

    Note 1: the codebase URL format is correct; I created a small program to convert the current file directory path into a URL and used that. Using file:///Users... has the same effect.

    Other notes:

    1. My server and client policy files correctly specify the path, as I've tested this setup with good and bad paths, getting a security exception for a bad path.
    2. This setup works if I don't use client-side defined objects; the client connects correctly to the server and the server executes.
    3. When I place the client-side class on the server, in the server's classpath, all executes fine.

    All help is appreciated.
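
    One hedged thing to verify, since it is easy to miss when following the RMI tutorial: a JVM only downloads classes from a remote codebase when a security manager is installed and its policy grants the needed permissions, so the server process itself needs one to pull test.MyTask from the client's codebase. A minimal sketch of what the server's main method would need (the policy file is the one already passed on the command line):

        // in the server's main(), before exporting/binding the remote object
        if (System.getSecurityManager() == null) {
            System.setSecurityManager(new SecurityManager());
        }
        // started with: java -Djava.security.policy=server.policy ... test/myServer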

    Read the article

  • User Permissions: Daemon and User

    - by Eddie Parker
    Hello: I often run into this issue on Linux, and I'd love to know the proper way of solving it. Say I have a daemon running. In my example, I'll use LigHTTPD, a webserver. Some software, like Wordpress, enjoys having read/write access to files for updating applications via a web interface, which I think is quite handy. At the same time, I enjoy being able to hack on my files using vim, using my local user account, 'eddie'. Herein lies the rub. Either I chown everything to lighttpd or eddie and a shared group between them both, and chmod it 660, or perpetually sudo to edit the damned things. The former isn't a bad solution, until I create a new file in which case I have to remember to chmod it appropriately, or create some hack like a cron job that chmods for me. Is there an easier way of doing this? Have I overlooked something? Cheers, -e-
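
    A hedged sketch of the usual fix, assuming the web root is /var/www and the shared group is called webdev (both assumptions): the setgid bit makes new files inherit the group, and a default ACL makes them group-writable without having to remember to chmod each one.

        groupadd webdev
        usermod -a -G webdev eddie
        usermod -a -G webdev lighttpd                  # whatever user the daemon runs as
        chgrp -R webdev /var/www
        chmod -R g+rwX /var/www
        find /var/www -type d -exec chmod g+s {} \;    # new files/dirs inherit the webdev group
        setfacl -R -d -m g:webdev:rwX /var/www         # default ACL: new entries are group-writable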

    Read the article

  • Torque and maui node status

    - by Lafada
    I am new to Torque and Maui. I was checking node state, looking for which nodes are free and which nodes are in use. For Torque one command is pbsnodes, which gives the status and other info related to a node. When I was checking Maui, I found the command diagnose -n, which also shows the status of the node. I was wondering about the difference between these two: both give a different status for the same situation. When I do man pbsnodes I get the possible states for a node: "free", "offline", "down", "reserve", "job-exclusive", "job-sharing", "busy", "time-shared", or "state-unknown". But I can't find a similar list of states for diagnose -n. How do pbsnodes and diagnose -n get the status for a node? Is there any database, like the one xCAT uses, for Torque or Maui? Thanks in advance for your valuable time.

    Read the article

  • Slow performance by PHP directory operations on virtual machine (Ubuntu libvirt)

    - by thonixx
    Some days ago I installed an Ubuntu server and two virtual machines running under libvirt. Everything works fine except one performance problem. Every time I call a PHP script that does directory operations, the operations are very slow. Here is an example: http://zother.white-tiger.ch/ And here you see an example without a directory operation and how fast it is: http://michaeltanner.ch/ It's all on the same virtual server. The virtual machine uses 6 cores (8 are available) and 7500 MB of RAM (8 GB are available). The disk image format is qcow2. How can I improve the performance?
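
    A hedged first thing to look at is the virtual disk configuration, since metadata-heavy operations are very sensitive to it; a sketch of the relevant fragment from virsh edit, assuming the guest has virtio drivers (the image path is a placeholder, and the cache policy is a trade-off between speed and safety):

        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='none' io='native'/>
          <source file='/var/lib/libvirt/images/webserver.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>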

    Read the article

  • Proper umask on linux webservers?

    - by Xeoncross
    Most VPS have a team of one or more users who don't do anything but configure the system and work on the web site and/or database. I would assume all the team members would be in a group like "developers" so they could all work on files in the web root as needed. With this in mind, would umask 007 be a much better setting than the default of 022? After all, there shouldn't be any "other/world" users, since this machine's primary purpose is to serve web pages. All the developers have access and there aren't any "guests" logging in...
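
    For reference, a quick worked comparison of what each umask yields for newly created entries (files start from mode 666 and directories from 777, with the mask bits cleared):

        umask 022: files 644 (rw-r--r--), directories 755 (rwxr-xr-x)   # group and world can read
        umask 007: files 660 (rw-rw----), directories 770 (rwxrwx---)   # group read/write, no world access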

    Read the article

  • linux ssh -X graphical applications will not start when system load is high

    - by Chrisv
    So I am using ssh -X to access a server. I am at a Xubuntu desktop accessing a Ubuntu server that is in the next room. Usually everything works fine, but when the system load gets high, any graphical applications I have freeze and fail to be restarted. This happens even if the process that is causing the high load has been niced to a low priority with "nice -n 19". And even though the system load is high, the command line works fine with no delay, and other applications I have running on the server (e.g. virtual machines) run fine. But any graphical application running through X dies. When the graphical applications fail they usually give out an error message that suggests a time-out. It seems that something connected to X has a low priority and times out. But what is it, and how does one fix it?
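
    One hedged avenue, assuming the heavy job is I/O-bound rather than CPU-bound: nice only lowers CPU priority, so a job hammering the disk can still starve the sshd/X11-forwarding session even at nice 19. ionice can deprioritize its disk access as well (the idle class requires the CFQ I/O scheduler); the job name and pid below are placeholders:

        nice -n 19 ionice -c3 ./heavy_job        # idle I/O class: only touches the disk when nothing else wants it
        ionice -c3 -p <pid>                      # or retrofit an already-running process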

    Read the article

  • Excel 2003 opening files on network

    - by Luke
    The network is laid out with an XP Pro computer as the server hosting files, then 3 XP computers connecting to it for file sharing, all on its own router. One computer can open .xls files no problem, and she runs Office XP. The other two computers run Office 2003, and cannot open any shared files by double-clicking them, or by selecting File > Open in Excel. If the file gets copied to the local computer, it opens instantly. I have tried disabling the AV on all computers, disabling the Windows firewall, and double-checking permissions on the server. I have also tried disabling DDE, but that doesn't help at all, just like unticking "Ignore other applications" under Tools > Options. Any ideas? This apparently started a couple of days ago.

    Read the article

  • Switching webhosting company & database errors

    - by gipap
    Well, here's the situation. I used to have CompanyA for web hosting (the hosting plan was a shared one). I decided to change hosting providers and transfer my website to CompanyB (exclusive IP). The issue I face is that my web page was being served from two different IP addresses, so I decided to turn off the website served by CompanyA. Now the problem is that my database-driven website, served by CompanyB, is no longer getting its data, although I have added the A record mssql.mywebsite.com with the IP address of the database. (The database is served by a dedicated DB server.) So, what am I doing wrong here?
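
    A hedged first check, assuming the site's connection string relies on that new A record: confirm the record has actually propagated and points at the dedicated database server, and that the old host's DNS is no longer being served.

        dig +short mssql.mywebsite.com             # should return the dedicated DB server's IP
        dig +short mywebsite.com                   # should now return only CompanyB's IP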

    Read the article
