Search Results

Search found 50994 results on 2040 pages for 'simple solution'.

  • Looking for a host-based network monitoring solution

    - by Ole Martin Handeland
    Hi all!

    Problem: My hosting company provides a network usage graph for my dedicated server. One day earlier this month, my network usage suddenly spiked, with several hundred megabytes transferred (usually it's in the tens, not hundreds). It was probably me, but I just can't be sure who or what it was.

    Question: Does anyone know of a host-based solution for monitoring network usage that would tell me the client's IP address and the port/service he/she used?

    What I don't want: I'm guessing someone will suggest Nagios, Munin, Zabbix, Cacti or MRTG. I've looked at those too, but a graph of network usage will not give me the answers I'm looking for. :-)

    Almost there: I've already looked at a lot of monitoring solutions, and I've tried ntop (http://www.ntop.org/), darkstat (http://unix4lyfe.org/darkstat/) and others. Darkstat just didn't give me the answers: although it lists a lot of statistics, and I could list the clients, it doesn't show the network usage for a particular period. Ntop is by far the best I've seen so far, but I think it mostly shows current network usage, not the historical part. I could run apt-get upgrade and download a whole bunch of software, yet not see it in the log afterwards.
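
    For reference (not from the thread itself), one crude building block is iptables rule counters: rules with no target still count packets and bytes, which gives a per-service breakdown, though not the per-client history the poster asks for. A minimal sketch, assuming root on a Linux host (the chain name ACCT is made up):

        # Create an accounting chain and hook outbound traffic into it.
        iptables -N ACCT 2>/dev/null
        iptables -A OUTPUT -j ACCT
        # Target-less rules only count; add one per service of interest.
        iptables -A ACCT -p tcp --sport 22    # ssh/scp
        iptables -A ACCT -p tcp --sport 80    # http
        # Read back the per-rule packet and byte counters:
        iptables -L ACCT -v -x -n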

    Read the article

  • Solution to time shifting requirement in Active Directory

    - by MikeR
    Hi,

    I currently have an Active Directory forest with several child domains (consisting of nothing other than a DC and bespoke application servers) set up for testing our CRM software. Because some of it is date/time sensitive, these domains have at some point been set to dates in the future, which is causing replication errors. I'm working on getting rid of these child domains, but I still have a requirement for our testers to be able to time-shift.

    Does anyone know of any solutions that would allow our test environments to have their time changed (always forward) without affecting the production Active Directory? Is it as simple as creating a separate forest on the same LAN, or would that interfere with my production forest?

    Thanks for any advice.

    Read the article

  • File Sync Solution for Batch Processing (ETL)

    - by KenFar
    I'm looking for a slightly different kind of sync utility: not one designed to keep two directories identical, but one intended to keep files flowing from one host to another.

    The context is a data warehouse that currently has a custom-developed solution moving 10,000 files a day, some of them 1+ GB gzipped files, between Linux servers via ssh. Files are produced by the extract process, then moved to the transform server, where a transform daemon is waiting to pick them up. The same process happens between transform and load. Once the files are moved they are typically archived on the source for a week, and the downstream process likewise moves them to temp and then to archive as it consumes them.

    So, my requirements and desires:

    - It is never used to refresh updated files, only to deliver new files.
    - Because it delivers files to downstream processes, it needs to rename each file once done, so that a partial file doesn't get picked up (see the sketch below).
    - To simplify recovery, it should keep a copy of the source files, but rename them or move them to another directory.
    - If a transfer fails (network down, file system full, permissions, file locked, etc.), it should retry periodically, and never fail in a non-recoverable way or in a way that sends a file twice or not at all.
    - It should be able to copy files to 2+ destinations.
    - It should have a consolidated log, so that it's easy to find problems.
    - It should have an optional checksum feature.

    Any recommendations? Can Unison do this well?
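
    To illustrate the rename-on-completion requirement only (this is not a recommendation of any particular tool), a hedged shell sketch; the host and directory names are hypothetical:

        # Copy each file under a temporary dot-name, then rename it on the
        # destination, so the downstream daemon never sees a partial file.
        SRC=/data/extract/outbound
        DEST_HOST=transform01
        DEST_DIR=/data/transform/inbound
        for f in "$SRC"/*.gz; do
            base=$(basename "$f")
            scp "$f" "$DEST_HOST:$DEST_DIR/.$base.part" &&
            ssh "$DEST_HOST" "mv $DEST_DIR/.$base.part $DEST_DIR/$base" &&
            mv "$f" "$SRC/archive/$base"   # keep a source copy for recovery
        done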

    Read the article

  • I need a reverse proxy solution for SSH

    - by Bond
    Hi, here is the situation. I have a server in a corporate data center for a project, and I have SSH access to this machine on port 22. There are some virtual machines running on this server, and behind everything many other operating systems are at work.

    Since I am behind the data center's firewall, my supervisor asked whether I can give many people on the Internet direct access to these virtual machines. I know that if I were allowed traffic on ports other than 22 I could do port forwarding, but since that isn't allowed, what could be a solution in this case? The people who would like to connect may be completely non-technical, happy just to open PuTTY on their machines, or maybe even FileZilla.

    I have configured an Apache reverse proxy for redirecting Internet traffic to the virtual machines on these hosts, but I am not clear what to do for SSH. Is there something equivalent to an Apache reverse proxy that can do similar work for SSH in this situation? The firewall is not in my hands, no port other than 22 is open, and even if I asked they wouldn't open more. SSH-inside-SSH (two hops) is not something my supervisor wants.
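
    One approach sometimes suggested for this kind of setup (my own illustration, not from the thread) is to treat the single exposed host as an SSH jump host, so users reach the inner VMs over port 22 alone. A sketch of a client-side ~/.ssh/config (OpenSSH 7.3+; all names and addresses are hypothetical), with the caveat that this is still SSH-through-SSH under the hood, which the supervisor may rule out:

        # Users just run "ssh vm1"; OpenSSH tunnels through the gateway on
        # port 22, so no additional ports need to be opened.
        Host gateway
            HostName datacenter.example.com
            User proxyuser

        Host vm1
            HostName 10.0.0.11
            User guest
            ProxyJump gateway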

    Read the article

  • Best solution to keep data secure

    - by mrwooster
    What is the simplest and most elegant way of storing a small amount of data in a reasonably secure way? I am not looking for ridiculous levels of advanced encryption (AES-256 is more than enough), and I only need to encrypt a small number of files. The files I wish to encrypt are mostly password lists and SSH keys for servers. It is impossible to keep track of the ever-changing passwords for my servers (and SSH keys) in my head, so I need to keep a list of them. Obviously this list needs to be secure, and also portable (I work from multiple locations).

    At the moment I use a 10 MB encrypted disk image on my Mac (standard .dmg, AES-256) and just mount it whenever I need access to the data. To my knowledge this is very secure, and I am very happy using it. However, the data is not very portable: I would like to be able to access it from other machines (especially ones running Linux), and I am aware that there are quite a few issues with trying to mount an encrypted .dmg on Linux.

    An alternative I have considered is to create a tar archive containing the files and use gpg --symmetric to encrypt it, but this is not a very elegant solution, as it requires gpg to be installed on every system. So, what other solutions exist, and which ones would you consider the most elegant? Thanks
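
    For concreteness, the tar-plus-gpg route the poster mentions comes down to two commands (with the stated caveat that gpg must be installed on every machine):

        # Encrypt a directory of secrets symmetrically with AES-256...
        tar czf - secrets/ | gpg --symmetric --cipher-algo AES256 -o secrets.tar.gz.gpg
        # ...and decrypt it again anywhere gpg is available.
        gpg --decrypt secrets.tar.gz.gpg | tar xzf -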

    Read the article

  • solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test-drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then put the projects on this server for testing with my actual users (non-techies) before deploying to a production server.

    My problem is that we have a very poor (almost negligible) internet connection at work, and the usual apt-get/yum/ports processes for setting up servers always fetch their packages from online repositories somewhere. I know I could download the sources and compile them myself, but that's going to be too much of a hassle. I'm thinking about two solutions:

    Plan A: Run a server VM on my Mac and use this VM as the package source for the offline server. I've read about Ubuntu's apt-proxy, and it seems good enough, though I haven't tried it yet. I'm not sure if this is possible, but can I simply run apt-get install --download-only nginx so that the package and its dependencies are downloaded into my VM, and my server can then use the VM as its source repo for apt-get? (A sketch follows below.)

    Plan B: Run a server VM on my Mac (which I can set up and update easily when I'm home) and then clone the VM to the offline development server. Maybe I should make the server a VM host so I can simply copy the VM over. I think this is okay for the first-time setup, but subsequent updates will take too long (cloning the VM image).

    If I were working on Windows, I imagine it would be easier, because most services have an installer file that I can download and then run on the server. If you can suggest another way, it would be much appreciated.

    Update: From Michael Hampton's answer, I found a possible solution: apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.
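
    Plan A's download step, sketched with stock apt options (the flag is --download-only, or -d for short; nginx stands in as the example package from the question):

        # On the VM with connectivity: fetch nginx and any not-yet-installed
        # dependencies into /var/cache/apt/archives without installing them.
        apt-get install --download-only nginx
        # Copy the cached .debs over to the offline server, then install:
        sudo dpkg -i /var/cache/apt/archives/*.deb

    One caveat: apt only downloads dependencies that are missing on the VM itself, so the VM should mirror the server's package state for this to be reliable.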

    Read the article

  • Recommendations for an efficient offsite remote backup solution for VMs

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to up to 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end for administration). I have two nearly identical boxes with AMD Phenom II X4s and Asus motherboards. Each has four 500 GB SATA2 HDDs: one for the OS and other data for the Proxmox install, and three using mdadm+drbd+lvm to share 1.5 TB of storage between the two machines. I mount LVM images in KVM for all of the virtual machines, and I can currently do live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, running Win2008 with MS SQL Server).

    I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store them on an external hard drive on the network. I then use the JungleDisk service (backed by Rackspace) to sync the vzdump folder for remote offsite backup.

    This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only moves a small portion of the data offsite, but that still takes at least half an hour.

    The much better solution would of course be something that lets me instantly take the difference between two points in time (say, what was written from 6 am to 7 am), zip it, and send that difference file to the backup server, which would immediately transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive; that, coupled with a pipe through bzip2 or something similar, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols?) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible.

    I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So: ZFS, Zumastor, or something else?
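
    The ZFS send/receive idea from the question, sketched under the assumption of a pool named tank, an already-seeded destination dataset, and SSH access to an offsite host (all names are hypothetical):

        # Snapshot hourly, then ship only the delta between two snapshots,
        # compressed in transit. Assumes an initial full send/receive has
        # already created tank/vms-backup on the remote side.
        zfs snapshot tank/vms@0600
        # ... an hour of writes later ...
        zfs snapshot tank/vms@0700
        zfs send -i tank/vms@0600 tank/vms@0700 | bzip2 |
            ssh backup.example.com "bunzip2 | zfs receive tank/vms-backup"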

    Read the article

  • Failed to bring up eth1 in a dual-IP setup in Ubuntu

    - by lxyu
    I'm using Ubuntu 12.04. I tried to assign two IPs to the two Ethernet cards in my server. The content of /etc/network/interfaces is like this:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 114.80.156.a
            netmask 255.255.255.224
            gateway 114.80.156.b

        auto eth1
        iface eth1 inet static
            address 114.80.156.c
            netmask 255.255.255.240
            gateway 114.80.156.d

    a, b, c and d have different values, which means the two IPs are in different VLANs. But I can only bring up eth0; restarting networking gives:

        $ /etc/init.d/networking restart
        RTNETLINK answers: File exists
        Failed to bring up eth1.
        ...done.

    I have checked the question "Can only bring up one of two interfaces", which shows the same problem I encountered, but it doesn't seem to be really solved, and in my situation I need the two IPs to use two different gateways. So how do I fix this problem?

    Edit 1: changed the example config IPs from the 192.168.0.0/16 subnet to another 'real' subnet.

    Edit 2: the purpose of doing this is fairly simple. The IP range I was previously in has no room for new servers, so I have to move to another IP range, and I want the public servers to bind to both IPs for the transition period. I have only limited knowledge of routing and subnets. @BillThor @rackandboneman, would you please give me some keywords or links on how to set up routing for two IPs? And @Mike Pennington, how do you know I speak Chinese?
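
    The usual fix for two interfaces with two gateways (a general recipe, not from the thread) is policy routing: give each interface its own routing table so replies leave through the interface they arrived on, and drop the second gateway line from /etc/network/interfaces, since a second default route in the main table is what makes ifup fail with "File exists". A sketch using the question's own placeholder addresses (the /27 and /28 network bases are assumptions derived from the netmasks):

        # Table 1 routes traffic sourced from eth0's address via its gateway.
        ip route add 114.80.156.0/27 dev eth0 src 114.80.156.a table 1
        ip route add default via 114.80.156.b table 1
        ip rule add from 114.80.156.a table 1

        # Table 2 does the same for eth1.
        ip route add 114.80.156.0/28 dev eth1 src 114.80.156.c table 2
        ip route add default via 114.80.156.d table 2
        ip rule add from 114.80.156.c table 2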

    Read the article

  • Default Gateway solution on NAT'd network (best options)

    - by kwiksand
    I've recently changed a network from a bunch of machines exposed directly to the net to a more security-conscious, firewall-fronted network with a DMZ for public services. Everything's mostly working perfectly now, but I've got the old problem of NAT loopback, where a machine within the LAN wants to access a public service via the public/external IP.

    I've solved this before in a small/SOHO environment, simply using the NAT loopback feature of the router in use, or a simple iptables rule to do the same, but I want to make sure I make the most resilient choice with the least concern. It seems I can:

    - Use iptables, as I've said, to DNAT and MASQUERADE the source/destination so the connection works correctly, i.e.:

          iptables -t nat -A PREROUTING -d ip.of.eth0.here -p tcp --dport 8080 -j DNAT --to 192.168.0.201:8080
          iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -p tcp --dport 8080 -d 192.168.0.201 -j MASQUERADE

    - Use split DNS, with internal mappings for the public IPs (a sketch follows below).
    - Potentially do some route nastiness by setting the default gateway to use a different externally exposed IP and then come back in via the public route (messy).

    Someone on Server Fault also mentioned putting the default gateway within the DMZ, but I can't find the post again. I'm sure this is a common issue for many NATed networks, but I've not really seen a cure-all for this problem. What is your opinion?
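
    For the split-DNS option, a one-line dnsmasq sketch (the hostname is hypothetical; a real deployment would put this on the LAN's actual DNS server):

        # /etc/dnsmasq.conf: hand LAN clients the internal address for the
        # public hostname, so loopback NAT is never needed.
        address=/service.example.com/192.168.0.201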

    Read the article

  • Suggestions for Backup solution

    - by jiewmeng
    I am considering between:

    - Windows Home Server
    - a simple NAS
    - extra HDDs in my desktop

    By the way, I will be the main user. I am looking to fulfil the following needs:

    - Reliability (I am thinking RAID 1 or 5).
    - Less prone to virus/malware infections (will using a separate NAS or home server help? A Windows Home Server box is still a Windows PC, just separated by the network).
    - Power efficiency (e.g. spinning down when not in use).
    - Downloads (I may want to download big files/torrents overnight, and I may not want to keep a full-powered PC on for that; is the power-usage difference between a full PC and a NAS significant enough to justify the cost of a new system, especially since I am the only user?).
    - Performance (I would like to write to and access my files fast; on second thought, maybe for backup I can forgo this. Maybe a WD Green HDD? But how much slower would it be? Plus, since I am the only user, the whole HDD would be mine).

    Read the article

  • Visual Studio 2010 Solution Find all References Not Working

    - by Jeremiah
    I have a Visual Studio 2010 solution, imported from a Visual Studio 2008 solution, in which Find All References does not work. I've tried some Google searches to figure this out but have come up empty-handed. Find All References in VS2008 worked like a charm; we upgraded to 2010 and now, no matter what file I'm in, Find All References doesn't return anything. Does anyone have any idea how to fix this, or some good ways to "debug" the issue?

    Read the article

  • global.asax and ASP.NET MVC solution Organization

    - by nachid
    I am referring to this article by Jimmy Bogard, in which he suggests two projects for organizing your ASP.NET MVC solution: basically, one project for the code and another for the rendering. My concern is the global.asax file. Jimmy suggests separating global.asax from global.asax.cs and putting them in two different projects. When I did this, I could not compile my solution; I got this error:

        Could not load type 'MyProject.Web.Global'.

    Can someone help and show me how to solve this? Thanks

    Read the article

  • Healthy DLL reference broken after compiling a multi-project solution

    - by Code Sherpa
    Hi. I have a solution with multiple class libraries. When I compile each individual library (and the web site by itself), compilation always succeeds. But when I compile the solution as a whole, one of the library references fails, with a little yellow exclamation mark next to the failed library. I am guessing this has to do with the build order? Can somebody suggest what I have to do to resolve this? Thanks in advance.

    Read the article

  • Secure, simple PHP FAQ creation/editing scripts?

    - by Tchalvak
    I'm looking to build a simple site centered around a simple FAQ system in PHP. The FAQ concept is simple, but I want an administrative-access backend for editing and creating the entries, and securing a login seems more complex and time-consuming, so I'm looking for suggestions for code to start from. Does anyone know of any open-source PHP scripts or snippets that would handle administrative login and could be used as a simple FAQ system? Or both: the FAQ PHP code plus the web administrative-access code?

    Read the article

  • Visual Studio 2008 marks solution files as version 10.00

    - by bja
    Hi,

    After trying out VS2010 Beta 2, my VS2008 installation also changes the versions of solution and project files to "Version 10.00", and the MSBuild.exe on our CI server does not support them. Is there a way to make VS2008 generate .sln files with version number 9.00 again? I know I can fix that manually, but each time I open a solution the version gets changed back, which is annoying.

    Cheers, bja

    Read the article

  • Pay per view video solution

    - by Bassem Hefny
    Hello,

    We are planning to build a pay-per-view (PPV) video solution, but we have no idea where to start. Here are the current givens:

    - It will be hosted on Linux, using PHP.
    - Database: MySQL.

    And by PPV I mean:

    - going to the website and selecting a movie to watch/download
    - going to a payment portal and paying
    - then being able to watch/download

    So here is my question: where do we start? Is there an existing (recommended) solution that we can download or buy? Any information would be really appreciated.

    Read the article

  • Visual Studio Macro To Switch Solution Configuration

    - by Eddie Parker
    I'm trying to write a macro that toggles between the Release and Debug solution configurations in Visual Studio. It appears I can switch the configuration by using DTE.ExecuteCommand("Build.SolutionConfigurations", "Debug"). Is there a way I can 'read' the current value? Or is there a way I can use macros to focus on the solution-configuration UI element?
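
    A hedged sketch of one possibility, assuming the EnvDTE automation model the question already uses: SolutionBuild exposes the active configuration, which can be read and then toggled (the macro name is made up):

        ' Read the active solution configuration via EnvDTE, then switch
        ' to the other one using the command quoted in the question.
        Sub ToggleSolutionConfiguration()
            Dim current As String = DTE.Solution.SolutionBuild.ActiveConfiguration.Name
            If current = "Debug" Then
                DTE.ExecuteCommand("Build.SolutionConfigurations", "Release")
            Else
                DTE.ExecuteCommand("Build.SolutionConfigurations", "Debug")
            End If
        End Sub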

    Read the article

  • Dynamic programming solution to the subset-sum decision problem

    - by Gail
    How can a dynamic programming solution for the unbounded knapsack decision problem be used to come up with a dynamic programming solution to the subset-sum decision problem? The limitation seems to render the unbounded knapsack approach useless. In the unbounded knapsack, we simply store true or false for whether some subset of integers sums to our target value; however, if we limit how often each integer may be used, the optimal substructure at least appears to fail. How can this be done?
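
    For contrast, a minimal sketch of the standard trick (general DP knowledge, not something from the thread): process each integer once and sweep the target sum downward, so a value cannot be reused within its own pass; sweeping upward instead gives exactly the unbounded-knapsack behaviour.

        # dp[s] is True iff some subset of the items seen so far sums to s.
        # The descending inner loop enforces "each item at most once";
        # ascending order would allow unlimited reuse (unbounded knapsack).
        def subset_sum(items, target):
            dp = [False] * (target + 1)
            dp[0] = True  # the empty subset sums to 0
            for x in items:
                for s in range(target, x - 1, -1):
                    if dp[s - x]:
                        dp[s] = True
            return dp[target]

        print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True (4 + 5)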

    Read the article

  • How do I create a solution-wide connection string?

    - by Renier
    Hi. Does anyone know if it is possible to create a single connection string that is accessible to all the projects in a solution (we have about six)? I can create a text file with this information, but we need design-time support as well, and it is not practical to have a connection string in every App.config and Web.config file in the solution. We basically want a single connection string that is easy to change should the location of the DB change, and that will also be used by the IDE for design-time support.

    Regards, Renier

    Read the article

  • Simple 2-color differential image compression

    - by Groo
    Is there an efficient, quick and simple example of doing differential black-and-white image compression? Or, even better, some simple (but lossless) streaming technique that could accept a number of frames as input?

    I have a simple b/w image (320x200) stream, displaying something similar to an LED display, which is updated about once a second using AJAX. The images are pretty similar most of the time, so if I subtracted them, the result would compress pretty well (even with simple RLE). Is something like this available?
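
    A minimal sketch of the subtract-then-RLE idea described above (my own illustration; frames are assumed to be 1 bit per pixel, packed into equal-length byte strings):

        # XOR the new frame against the previous one, then run-length
        # encode the result: unchanged regions XOR to 0x00 and collapse
        # into long runs.
        def diff_rle(prev, curr):
            diff = bytes(a ^ b for a, b in zip(prev, curr))
            runs, i = [], 0
            while i < len(diff):
                j = i
                while j < len(diff) and diff[j] == diff[i]:
                    j += 1
                runs.append((j - i, diff[i]))  # (run length, byte value)
                i = j
            return runs

        def apply_diff(prev, runs):
            diff = b"".join(bytes([v]) * n for n, v in runs)
            return bytes(a ^ b for a, b in zip(prev, diff))

        frame0 = bytes(8000)  # 320 x 200 / 8 = 8000 bytes, all zero
        frame1 = bytes(7997) + b"\x07\x00\x01"
        assert apply_diff(frame0, diff_rle(frame0, frame1)) == frame1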

    Read the article

  • Adding resources to solution explorer in experimental hive

    - by Brian Webb
    Hi, I'm currently working on a project using DSL Tools in Visual Studio 2008. Is there a way to automatically add a resource to the Solution Explorer of the experimental hive at runtime? I'm creating new diagrams based on what is on screen and saving them into the directory where the project is stored. Is there a way to get them added to Solution Explorer automatically? (I don't want to have to drag the files in manually each time.)
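
    Hedged, since the exact DSL Tools context isn't shown: in the EnvDTE automation model a file can be attached to a project programmatically, which also makes it show up in Solution Explorer. A sketch (the project index and file path are hypothetical):

        ' AddFromFile attaches an existing file to a project, which makes
        ' it appear in Solution Explorer of the running instance.
        Sub AddDiagramToProject()
            Dim proj As EnvDTE.Project = DTE.Solution.Projects.Item(1)
            proj.ProjectItems.AddFromFile("C:\work\MyModel\Diagram1.dsl")
        End Sub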

    Read the article
