Search Results

Search found 18845 results on 754 pages for 'the machine charmer'.

Page 536/754 | < Previous Page | 532 533 534 535 536 537 538 539 540 541 542 543  | Next Page >

  • Reasons why ports below 1024 cannot be opened

    - by Sitoplex
     I'm root on a machine, and I don't know how it was configured. I'm trying to make sshd listen on a port other than 22, but it does not work. I edited /etc/ssh/sshd_config and added a second Port line in addition to Port 22, but it only works when this second port is a number above 1024. Why is that? How can I find the reason? Details: I'm restarting sshd with /etc/init.d/sshd restart as root. "netstat -apn" does not show the port being held by any other service (I tried several different ports anyway, and only those above 1024 work). "telnet localhost <port>" also shows the service only responds on ports above 1024. In iptables all tables are empty. Thanks!
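
     As a hedged diagnostic sketch (assuming a Linux box; the commands below only apply where the corresponding tools exist), a few things worth checking when a root-started sshd will only bind above 1024 are a mandatory access control policy such as SELinux, a dropped CAP_NET_BIND_SERVICE capability, and container-level restrictions:

         # is SELinux enforcing, and did it log a denial for the bind?
         getenforce
         ausearch -m avc -ts recent 2>/dev/null | grep sshd
         # which ports is sshd actually listening on right now?
         ss -tlnp | grep sshd
         # does the sshd process still hold the capabilities needed to bind ports below 1024?
         grep Cap /proc/$(pidof -s sshd)/status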

    Read the article

  • Masquerade traffic from certain source IP to VPN connection

    - by Shuo Ran
     Network setup:
     - 10.0.0.1 - router, to the internet
     - 10.0.0.70 - server, Ubuntu-based, default gateway is 10.0.0.1
     - 10.0.0.51 - PC
     I created a PPTP connection (interface: ppp0) on the server to a machine on the internet. What I want to do is route all the traffic from a certain IP address (10.0.0.51) through the PPTP connection and then to the internet. What I did was:
     - Set the gateway on the PC (10.0.0.51) to 10.0.0.70
     - Enabled IPv4 forwarding on 10.0.0.70
     - Added the masquerade rule to iptables: iptables -t nat -A POSTROUTING -o ppp0 -s 10.0.0.51 -j MASQUERADE
     After that, it seems none of the traffic from 10.0.0.51 is redirected to ppp0; instead it still goes through 10.0.0.1 directly. Any thoughts?
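
     A hedged sketch of the same steps as shell commands, plus a piece that is often missing in this kind of setup: MASQUERADE only rewrites packets the kernel has already decided to send out ppp0, so the server usually also needs a routing rule steering that host's traffic to ppp0 (the table number is arbitrary, and this is an assumption about the setup, not a confirmed fix):

         # steps described in the question
         sysctl -w net.ipv4.ip_forward=1
         iptables -t nat -A POSTROUTING -o ppp0 -s 10.0.0.51 -j MASQUERADE

         # possible missing piece: policy routing so 10.0.0.51's traffic leaves via ppp0
         ip rule add from 10.0.0.51 table 100
         ip route add default dev ppp0 table 100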

    Read the article

  • Can't ping my computer - "Transmit failed. General failure."

    - by Vaccano
     I am having an issue with my computer. My IIS services are not working. I have narrowed it down to the fact that my computer cannot find itself via its name. I try pinging my computer by its name and I get this:

         C:\Users\18773> ping MyComputerNameHere
         Pinging MyComputerNameHere [::1] with 32 bytes of data:
         PING: transmit failed. General failure.
         PING: transmit failed. General failure.
         PING: transmit failed. General failure.
         PING: transmit failed. General failure.
         Ping statistics for ::1:
             Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

     I tried having someone else ping my machine and it works fine for them. Any ideas?

    Read the article

  • How to fix legacy code that uses <string.h> unsafely?

    - by Snowbody
     We've got a bunch of legacy code, written in straight C (some of which is K&R!), which for many, many years has been compiled using Visual C 6.0 (circa 1998) on an XP machine. We realize this is unsustainable, and we're trying to move it to a modern compiler. For political reasons, the most recent compiler allowed is VC++ 2005. When compiling the project, there are many warnings about the unsafe string manipulation functions used (sprintf(), strcpy(), etc.). Reviewing some of these places shows that the code is indeed unsafe; it does not check for buffer overflows. The compiler warnings recommend that we move to sprintf_s(), strcpy_s(), etc. However, these are Microsoft-created (and proprietary) functions that aren't available on, say, gcc (although we're primarily a Windows shop, we do have some clients on various flavors of *NIX). How ought we to proceed? I don't want to roll our own string library. I only want to go over the code once. I'd rather not switch to C++ if we can help it.

    Read the article

  • Should the virtualization host be allowed to run any service?

    - by Giordano
     I recently set up a virtualization server for the small company I'm running. This server runs a few virtual machines that are used for development, testing, etc. My business partner works from a remote location, so I also installed a VPN server on the virtualization host to make it possible for him to reach the company services safely. Moreover, again on the virtualization host, I installed Bacula to back up the data. Is it advisable/good practice to do so, or should I create one more virtual machine to handle backups and VPN? Is it a bad idea to run these services on the host itself? If yes, why? Thanks in advance!

    Read the article

  • Redundant HTTP load balancer

    - by jrydberg
     I've got a simple scenario with two web servers, for redundancy and to scale. But how do I make a two-web-server setup fully redundant? I can think of two solutions:
     - Two web servers plus one load balancer spreading the load: one extra machine for the load balancer, but how will the load balancer itself be redundant?
     - Two machines, each running the web server AND a load balancer spreading the load across both, with a DNS entry pointing to both machines: no extra machines needed for load balancing (see the sketch below).
     How do you guys normally solve this kind of problem?
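
     For illustration only, the DNS piece of the second option amounts to publishing one name with two A records (the name and addresses below are made up); whether round-robin DNS alone gives acceptable failover is a separate question:

         ; zone file fragment (hypothetical)
         www.example.com.   300   IN   A   192.0.2.10
         www.example.com.   300   IN   A   192.0.2.11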

    Read the article

  • Single web app, multiple web servers

    - by Ramakrishna
     I have a load balancing problem. We developed a web app for nearly 1500 users. As the number of users increased, we became unable to serve requests in a timely manner: it takes around 10 to 20 seconds to load a page, and under heavy load it can take one minute. We need to resolve this so that each request is served in 2 or 3 seconds. App developed in: ASP.NET. Hosted in: IIS 7.5. Machine configuration: Windows Server 2008, 8 GB RAM, 1 Mbps bandwidth.

    Read the article

  • Could 1 GB of RAM work better than 1.25?

    - by user67082
     This is for a server running Ubuntu Server 10.10. The server is an old desktop PC. It had 2 sticks of 256 MB 184-pin DDR 400 MHz RAM in it (512 MB of RAM in total). I just ordered a 1 GB stick of compatible RAM for the machine (which would bring it to a total of 1.25 GB). A friend told me that it might run better if I removed both 256 MB sticks and used just the 1 GB stick I will be receiving. This seems counterintuitive, since then there would only be 1 GB of RAM instead of 1.25; is it possible that it would be better to run with 1 GB, or is he totally wrong? Thanks for the help.

    Read the article

  • How can I automatically switch audio to my speakers when my TV-as-2nd-monitor is not in use?

    - by Michael McGowan
    I have a normal LCD monitor as my primary monitor and an HD LCD television as a 2nd monitor (connected through HDMI). I also have a set of normal speakers for the computer (a Windows 7 machine) that I previously used (before I was using the TV as a 2nd monitor). When I am using the TV as a 2nd monitor, I would like audio to come from it. However, I'm oftentimes using the TV as a TV, in which case I would like the audio from my computer to come from my speakers. Is there any way to accomplish this? It seems that if I have the TV set up as the default audio, then even if I turn the TV off (or, more likely, to the input from my cable box), then the audio still goes through that rather than my speakers. Is there a solution that does not require me to manually change the settings every time I want to switch contexts?

    Read the article

  • VMware Workstation 10 connecting to a remote server (Debian host, Windows XP guest) does not allow raw disk access or shared folders

    - by Alex
     The setup: Ubuntu with local VMware Workstation 10 (everything works locally) connects (File > Connect to Server) to a Debian server running the same VMware Workstation 10 (Windows XP guest). The Debian setup does not allow raw disk access or shared folders (most of those options do not exist): no shared folder, no physical disk option. I use the root user for this machine, on a default install. I've tried to add a shared folder from the command line - it does not work. How do I enable shared folders or raw disk access? I created a new Windows 8 64-bit template from scratch - I cannot use a physical HDD there either, and there is no shared-folder option. I think this is something about the security policy of the remote server.

    Read the article

  • Resize2fs at 81h and counting

    - by Adam
     Setup:
     - 12x 1 TB drives in a RAID 6 (mdadm)
     - cryptsetup running on top of the mdadm array
     - LVM running on the encrypted device
     - ext4 on the LVM
     Background: I added a new drive to the RAID (increasing from 11 to 12 drives) and 'bubbled' the growth up through the layers (mdadm, etc.) until resizing the ext4 filesystem. This machine is used as a centralized repository for photography and as a backup server (for both Windows and Mac machines), so bringing it down to add the drive and wait for the resizing wasn't really an option. So I started the resize operation several days ago. htop is reporting the resize2fs operation as running for 81 hours now. dmesg and syslog are both clear, and the drives are still accessible. The resize command reported that it started an online resize of the partition, so the process IS running, and it is burning through 100% of one of my cores. Question: Is it normal for the operation to take this long, or has something gone horribly wrong? Where would I start looking for signs of trouble?
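
     As a hedged sketch of where one might start looking (all read-only checks; availability of the commands depends on the installed packages), the idea is to confirm the resize2fs process is still doing I/O rather than spinning:

         pid=$(pidof -s resize2fs)
         grep -i state /proc/$pid/status     # R/S is normal; long stretches of D suggest it is stuck on I/O
         cat /proc/$pid/io                   # re-run after a minute; read_bytes/write_bytes should keep growing
         iostat -x 5                         # the md/dm devices should show ongoing activity (sysstat package)
         dmesg | tail -n 50                  # any fresh ext4/md/dm-crypt errors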

    Read the article

  • Random BSOD ntoskrnl.exe (Windows 7)

    - by nordbjerg
     I get BSODs at random times, and have for a while now, on my Windows 7 machine. The machine is quite new, and I have already tried wiping the graphics card drivers and installing them again (making sure they are, of course, up to date). I get a variety of bug check strings on a few different drivers. The single file that appears in all of my BSODs is ntoskrnl.exe. Bug check strings:
     - SYSTEM_SERVICE_EXCEPTION
     - KMODE_EXCEPTION_NOT_HANDLED
     - DRIVER_IRQL_NOT_LESS_OR_EQUAL
     - SYSTEM_THREAD_EXCEPTION_NOT_HANDLED
     - PAGE_FAULT_IN_NONPAGED_AREA
     I would rather not resort to getting a completely new PC, as I have already spent a lot of money on my current one. Here is a .zip file with my dumps.

    Read the article

  • How come there is still so much programming work?

    - by jd_505
     I wonder why programming jobs haven't yet "dried up" given how software has evolved. For example, I am a developer myself, which means I do care about software (I'm not the type who needs a computer mainly just to browse the Internet), and still I wouldn't mind if I never received any more updates on my Ubuntu machine. I find that it provides everything I need, and while the updates bring various bug fixes and improvements, I wouldn't mind using it in its current state for the rest of my life; in two years of Ubuntu usage I have never run into a serious bug or problem. Another example is Windows: almost half of its users still use XP, which is practically ancient, yet they find it satisfies all their needs (and I agree with them). I could go on with many more examples, but by now you understand my point and my question. While new "trends" appear all the time (like a new mobile OS) which run on new platforms and require some fresh development work, the majority of the software effort still goes into what I consider "completed projects", or at least a state of a project that is close enough to be considered completed. Do you have an explanation? I can't think of the right tags for this question; please edit it the way you find most appropriate.

    Read the article

  • How to make a backup VPN server?

    - by akalenuk
     I have a small VPN network with a bunch of clients working mostly with each other, and a VPN server. Everything works fine, except that, obviously, I can't shut the VPN server down without breaking the network. I have a spare machine which worked as the VPN server for the same network before, so it is signed by the same CA as the first one and is configured basically the same. Technically I can make my clients work with it with a little adjustment (by setting remote in /etc/openvpn/clientx.conf), but it would be great to make this switch automatic. So basically I want two VPN servers running in the same network to be completely interchangeable, without the clients even knowing. Can I do this at the VPN level, or should I dig deeper into the physical network layer?
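
     One hedged sketch, building on the question's own hint about the remote setting: OpenVPN clients accept several remote lines and try them in order, which gives automatic failover without the clients "knowing" which server they hit (hostnames and ports below are made up for illustration):

         # /etc/openvpn/clientx.conf (fragment)
         client
         # tried in order; if the first server is unreachable the client moves on to the next
         remote vpn1.example.org 1194
         remote vpn2.example.org 1194
         resolv-retry infinite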

    Read the article

  • How to tell Mercurial to never create hard links

    - by scrapdog
    I am planning to use Mercurial in the near future on some projects. These projects will normally reside in a directory on my Windows machine, but I will be sharing these directories using VirtualBox so I can work on them directly from within Linux. I understand that Mercurial will sometimes create hard links when cloning repositories. I'm not sure how a VirtualBox shared directory handles these hard links (or if it even can), so I'd rather just tell Mercurial to never attempt to make hard links and always make a copy. My question: how do I globally disable Mercurial from hard linking? (Although if someone has gotten Mercurial and VirtualBox shared folders to work nicely with hard linking, I'd like to hear about it!)
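
     A hedged sketch rather than a confirmed global switch: hg clone --pull copies changesets instead of hard linking the repository store, and an alias can make that the habitual way of cloning; whether this covers every situation in which Mercurial might create hard links is an assumption worth verifying for your version:

         # clone by pulling changesets, so no hard links are made to the source store
         hg clone --pull /path/to/source /path/to/copy

         # ~/.hgrc - a convenience alias (the alias name is made up)
         [alias]
         pullclone = clone --pull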

    Read the article

  • Haproxy, configure for one host

    - by Michal K.
     I have to use haproxy on one machine. I want to redirect requests from an IP to the same IP (on another port). My configuration (doesn't work):

         global
             maxconn 4096        # Total Max Connections. This is dependent on ulimit
             daemon
             nbproc 1            # Number of processing cores. Dual Dual-core Opteron is 4 cores for example.

         defaults
             mode http
             clitimeout 600000000
             srvtimeout 600000000
             contimeout 400000000
             log 127.0.0.1 local0
             log 127.0.0.1 local1 notice
             option httpclose    # Disable Keepalive

         listen http_proxy 127.0.0.1:8080
             balance leastconn   # Load Balancing algorithm
             acl acl_apache path_end .avi .jpeg
             #option httpchk
             option forwardfor   # This sets X-Forwarded-For
             ## Define your servers to balance
             server DE2 127.0.0.1:8080 weight 1 maxconn 15 check
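
     For illustration only (the port numbers are assumptions, not taken from the question): in the configuration above the listen address and the backend server are both 127.0.0.1:8080, so haproxy would be forwarding to itself. A minimal sketch of the "same IP, different port" idea keeps the two ports distinct:

         listen http_in
             bind 127.0.0.1:80                    # port clients connect to (assumed)
             mode http
             option forwardfor
             server local 127.0.0.1:8080 check    # the actual service on the other port (assumed)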

    Read the article

  • Java on 256MB system?

    - by Mike S.
     For a school project, I've registered a free VPS with a hosting provider (pipni.cz). It has 256 MB of RAM:

         Mem:    262144k total,   148104k used,   114040k free,        0k buffers

     It's running Debian Squeeze. I always get this error when I run a Java program:

         Error occurred during initialization of VM
         Could not reserve enough space for object heap
         Could not create the Java virtual machine.

     I tried using -Xms, -Xmx and -Xss with low values and still get the same result. ulimit -v gives me "unlimited". My application will be pretty simple, and I also need to use rmiregistry. Can somebody help?
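
     A hedged sketch of what to check, assuming (not confirmed by the question) that the VPS is an OpenVZ-style container, where the JVM can be refused address space even though free RAM and ulimit look fine:

         # on OpenVZ guests this file exists; a non-zero failcnt on the privvmpages
         # row means the container's address-space limit is being hit
         cat /proc/user_beancounters

         # a deliberately tiny JVM for comparison (values are illustrative)
         java -Xms8m -Xmx48m -Xss256k -version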

    Read the article

  • Restoring raid 5 array after bios flash

    - by cogergo
     I have just flashed my BIOS, and now my machine does not detect my RAID 5 array! It has three 2 TB drives in it, so that is a LOT of data to lose! I have NOT deleted the array, and I am not offered any way to restore it. I'm using Nvidia MediaShield and Windows 7. Any ideas, guys? Thanks! Update: here is the GUI RAID configuration (see the linked image); as you can see, it shows one disk in the array but, for some reason, not the other two!

    Read the article

  • Can't get PHP Errors in IIS7.5

    - by Erling Thorkildsen
     I've tried the previously answered questions on this, to no avail. I just installed PHP 5 and IIS on my Windows 7 machine, and I'm having trouble getting PHP errors displayed instead of 500 errors. In php.ini: error_reporting = E_ALL and display_errors = On. In IIS I have httpErrors set to Detailed, which shows a detailed IIS 500 error page. If set to Custom, it shows a basic 500 error page. If I set it to PassThrough, I get a blank page (view-source reveals no code). My PHP log file is showing a fatal PHP runtime error.

    Read the article

  • RAID strategy - 8 1TB drives

    - by alex
     I'm setting up a backup storage device. This machine has Windows Server 2008 on a separate boot drive. It has 8x 1 TB drives and uses a hardware RAID card. My question is: which RAID configuration should I go for? Initially I was going to go with RAID 5 across all 8 drives; however, members on Server Fault have advised against it, and I was wondering why. Some people have suggested two RAID 5 sets of 4 drives each, then striping them. I want to maximise the storage space, as this is a backup unit - it will store SQL backups, Acronis images, files, etc. It won't be for public access, so I wouldn't think the I/O will be that high.
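
     For comparison, the usable capacity of the two layouts mentioned works out as follows (ignoring filesystem and controller overhead):

         RAID 5 across all 8 drives:            (8 - 1) x 1 TB = 7 TB usable
         Two 4-drive RAID 5 sets, striped:      2 x (4 - 1) x 1 TB = 6 TB usable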

    Read the article

  • Single Sign on for RDS

    - by stumct
     I currently have an RDS farm which is used for running remote apps for a large group of users. When logging in, these users must authenticate to the RDS web page to view their list of applications, and then they authenticate again upon clicking the application they require. How can I make this use the user's current Windows authentication so they do not have to log in at each step? Ideally, since the user is already logged in to their local machine with their Windows account, they should not need to continuously authenticate.

    Read the article

  • Is there a way to import a scheduled task from windows 2003 (.job) to windows 2008 (.xml) ?

    - by Rodrigo
     I have some jobs to move from the old production server (Windows Server 2003 Standard) to the new machine (Windows Server 2008 Standard), but the new server is unable to read the old .job format, and the import wizard only imports .xml job files (from the same version). Obviously I don't want to rebuild all the jobs by hand, but I can't find a tool that makes the process even a little easier. I don't trust Microsoft for this kind of tool; my previous experiences have been too bad (DTS - SSIS). Any ideas? Thanks in advance.

    Read the article

  • Chrooted user does not start in his home directory and does not load his bash_profiles

    - by Stuffy
     When the user logs in, he starts in / of the chroot (which is /var/jail on the real machine). I would like him to start in his home directory. Also, he does not seem to load any of his profile files (.bashrc etc.). I followed this tutorial to create the chroot environment. This is what my /etc/passwd looks like:

         test:x:1004:1008:,,,:/var/jail/home/test:/bin/bash

     and this is what my /var/jail/etc/passwd file looks like:

         test:x:1004:1008:,,,:/home/test:/bin/bash

     I also found out that if I remove

         Match User test
             ChrootDirectory /var/jail
             AllowTCPForwarding no
             X11Forwarding no

     from my /etc/ssh/sshd_config, the user starts in his correct home folder and with his bash settings loaded. However, he is able to leave the chroot environment if I remove that part. The question I asked before is somewhat related, since I think the odd look of the command line is caused by the profile files not being loaded. Any ideas how to fix this?

    Read the article

  • Strange memory usage pattern on Windows Server 2008 on login through Remote Desktop

    - by headsling
     I'm running Windows Server 2008 Datacenter, Service Pack 2, on a VMware instance with 10 GB of RAM allocated. I'm not running IIS or SQL Server. Under 'normal' conditions, the machine uses ~5.5 GB of memory. However, when I log in to the server through Remote Desktop, memory usage slowly climbs to 9.8 GB in use. After several minutes the memory slowly creeps back down to the 5.5 GB mark. I've tried killing all the processes associated with my login (on login, barring Task Manager) without success, and I can't see any process that is growing in memory usage while the overall memory use is increasing. I'm assuming this is some system-level cache that is growing and shrinking... but why is it doing this?

    Read the article

  • Web hosting: deciding to pay for hosting or host your own?

    - by pllee
     Is there a guide out there on how to choose between paying for web hosting and hosting your own? Assuming that root access is a must, I would like to compare things like cost, scalability and personal stress. Here is what I could come up with.
     Paying for web hosting - benefits:
     - Much cheaper at a small scale. I assume anything under $50 a month would be cheaper than paying for the bandwidth of hosting it myself.
     - No stress in dealing with power outages, server restarts or the internet connection going down.
     - For the most part, less busy work involved in setting up.
     Paying for web hosting - negatives:
     - Cost goes way up when higher specs are needed (for example, the monthly cost triples for the ability to use 8 GB of RAM, which you can buy outright for $90). This means you have to target a particular RAM usage and monitor it so your instance stays within the threshold.
     - Root access is, for the most part, a premium feature.
     - You may get tied into a vendor-specific deployment process.
     Hosting your own - positives:
     - 100% control of specs and software. Once you get past paying for the bandwidth, you get much more bang for your buck by building your own machine.
     Hosting your own - negatives:
     - Doesn't make financial sense if bandwidth costs are more than web hosting costs.
     - Having to deal with power outages, server restarts or the internet connection going down.
     I think the best of both worlds would be a place that dealt with bandwidth, power outages and server restarts but where you provided your own server - kind of like 24-hour day care for a server. Does anything like that exist?

    Read the article
