Search Results

Search found 9236 results on 370 pages for 'discrete space'.


  • Collect temperature and fan speed with munin from a Windows 7 PC?

    - by nfm
    Hi, I'm quite fond of munin and use it at home to monitor my PCs as well. What was super-duper easy under Linux is pretty much unsolvable for me under Windows: I'd like to monitor CPU and motherboard temperatures as well as fan speed. On Linux I'm using lm-sensors, and the plugin for munin was basically already there. I already access some information from my Windows machine via SNMP (disk space, CPU usage, memory usage); the graphs are simple, as is the information exposed via SNMP, but they do their job. But when it comes to temperature and fan speed I'm running against a wall. My research so far suggests that Windows does not provide any out-of-the-box way to retrieve temperature/fan speed data; third-party applications are necessary which know how to communicate with the motherboard chips. The best I came up with is that SpeedFan exposes a shared memory interface, and there exists a library that hooks into the Windows SNMP facility and bridges over to SpeedFan's shared memory interface; it's called SFSNMP (site currently down). Unfortunately the library doesn't work; there's an open bug report about it at SpeedFan, but it's currently not moving (although the SFSNMP author is active there). So, unless that's going to be fixed any time soon, are there any alternatives? I'm not fond of buying software to get this feature, since I take it for granted that my system should expose the information needed to monitor it properly, but please don't withhold an answer just because of that.
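    If SFSNMP (or any other bridge) ever starts answering queries, the munin side stays tiny. A minimal sketch of a plugin, assuming SNMP as the transport; the host, community string and OID below are placeholders, not real SFSNMP values:

        #!/bin/sh
        # Hypothetical munin plugin: read one temperature value over SNMP.
        # HOST, COMMUNITY and OID are placeholders to be replaced.
        HOST="windows-pc"
        COMMUNITY="public"
        OID=".1.3.6.1.4.1.99999.1.1"

        if [ "$1" = "config" ]; then
            echo "graph_title CPU temperature"
            echo "graph_vlabel degrees C"
            echo "temp.label CPU"
            exit 0
        fi
        printf "temp.value "
        snmpget -v 2c -c "$COMMUNITY" -Oqv "$HOST" "$OID"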

    Read the article

  • Moving server room to another part of the building

    - by PHLiGHT
    This question is a bit different from the typical "we are moving our server room off site" or "we are moving the whole office to a new building". Management wants to add some more office space, and to do so they want to move the server room to another location. The server room has Verizon smart jacks, a few servers, a PBX, and all the office network drops terminate in this room. I'm going to go over there to scout out an alternate location for the equipment, because that is still TBD. This sounds like quite a pain, since the Verizon equipment for our MPLS will need to be moved (never done that) and the office jacks will need to be re-run. How do you handle the jacks? I was thinking of keeping them in the same location and having new wall plates put in, with half the ports going to the current location and the other half to the new location. Or do you think that 40 drops could just be re-done over a weekend, so the old stuff would be ripped out and replaced with the new? Currently the wiring is a mess, so this could be a blessing in the long run.

    Read the article

  • Hard drive degradation from large memory usage and paging files?

    - by Stephen R
    I've had a few questions regarding computer degradation going through my head for a while and haven't found many good resources for researching them. 1) First off, when is the virtual RAM/paging file on a hard drive used by Windows? Is it used when the RAM is full? Or does it use the virtual RAM/paging file as intermediate caching between the RAM and actual hard drive space all the time? 2) If I run many applications on my computer at the same time, and have a bad habit of doing this for the entire lifetime of the computer, does it use more of the virtual RAM/paging file than if I had fewer programs running? Just to note, the RAM never fills up on my computer, but it is used heavily. 3) By extension of question 2, if the virtual RAM/paging file is used more heavily, would that result in more rapid hard drive degradation? I have seen a pattern among all of the computers that I have owned or used in the past 5 years. I am the kind of person to leave my web browser up with 40 tabs among other programs, which will typically eat up 40% of my memory. Over time my computer will slow down, browsers start crashing, programs start seizing up or crashing themselves, and eventually the computer becomes essentially unusable. I have been trying to rack my mind to come up with a solution other than purchasing a new PC, only to have it die on me in the next couple of years as well. This is the only thought that has come to mind that might have a simple hardware fix... Windows ReadyBoost... maybe? I'd like to be able to discuss this so I can learn something about all of the above. Thanks.
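    As a footnote to question 3: rather than guessing, the paging traffic can be measured directly. A sketch using the built-in typeperf tool (counter names assume an English Windows install; samples every 5 seconds):

        typeperf "\Paging File(_Total)\% Usage" "\Memory\Pages/sec" -si 5

    If Pages/sec stays low while the machine degrades, the pagefile is probably not the culprit.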

    Read the article

  • Orphaned SQL Recordsets/Connections with IIS

    - by Damian
    I have an IIS 6 site running on Windows 2003 Server x86 with MS SQL 2005 Enterprise Edition, running ASP Classic (no choice). The site runs very fast with about 8000 page views per hour. All of my SQL tables are indexed, and I have used the profiler to check my queries, the slowest of which is only about 10-15ms. I have autoshrink disabled, autogrow is set to 250mb, and the database is 2gb with 800mb of free space. My problem is that every now and then the site will slow to a crawl for no reason. Pages that just have a simple 'connect to database and increment a hit counter' work ok, but more SQL-intensive pages that normally execute in about 60ms take 25,000ms to run. This happens for about 30 seconds and then goes away. I was having an issue with orphan recordsets and connections due to the way I was releasing them. I have fixed this up and the issue is much better, but I am still getting them. Is there a way with perfmon, etc. to track when SQL Server or Windows closes these orphan connections? At least if I can monitor the issue I will know if I am making progress, or if I am even looking at the right things. Is there anything else I might be missing? Thank you!
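    Two low-impact ways to watch this from the SQL side: the perfmon counter "SQLServer:General Statistics - User Connections" shows the connection count over time, and SQL 2005's DMVs show which sessions are sitting idle. A sketch of the latter, run via sqlcmd (the instance name is a placeholder):

        sqlcmd -S .\MYINSTANCE -Q "SELECT s.session_id, s.status, s.last_request_end_time, c.connect_time FROM sys.dm_exec_sessions s JOIN sys.dm_exec_connections c ON s.session_id = c.session_id WHERE s.is_user_process = 1 ORDER BY s.last_request_end_time;"

    Sessions whose last_request_end_time is long past but which still hold a connection are the likely orphans; logging this every minute or so gives you a progress metric.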

    Read the article

  • Performance tweaks and upgrades for VMware Server 2

    - by sjohnston
    Our software department has a server running VMware Server 2. We typically have 8-10 VMs running as test environments (Win XP and Server 08) for various versions of our software, and one VM that is used as a build server (Win XP). The host is running Server 2003 R2. It has 32GB RAM, an 8-core Xeon 3.16GHz CPU, one disk for the host OS and two RAID disks for VMs. The majority of the time this setup behaves very well and there are no complaints. Other times, the VMs can be very laggy. This is sometimes, but not always, correlated with heavy load on the build server. I'm a software developer, not an IT pro, but it seems to me that this machine should be beefy enough to handle this many VMs. Is this occasional performance hit likely just because we're hitting the limits of the hardware, or should I be looking for another culprit? From what I've read, I'm guessing that if there's a bottleneck, it's probably disk I/O with all these VMs running off two disks (especially the build server). Would spreading the VMs over more disks, and/or switching to SSDs, give us a significant performance boost? Other things I've read that may increase performance:
    - a single virtual processor per VM
    - removing/disabling unused virtual hardware
    - preallocated disk space
    - not using snapshots
    - setting a reserved memory limit on the host and disabling VM memory swapping
    Can anyone confirm or deny whether any of these improve performance? What other good tweaks have I missed?

    Read the article

  • Can't Install Win2k8 On KVM - Classic 0x80070013 error

    - by javano
    I am trying to install Win2k8 Std as a KVM guest on Debian Squeeze. As you can see from these screenshots: no drives are detected (I have blanked out a 20GB image for testing) - screenshot1. I am using this driver CD - screenshot2. I have loaded the signed Win7 driver (I assume this was the most appropriate one?) - screenshot3. I can now see an unpartitioned drive - screenshot4. But I can't create a new partition on it, getting the error code 0x80070013 - screenshot5. I have had this error code before, but only on a physical server. If I remember correctly it was complaining because the disks were partitioned as GPT (because it was a server that was being re-purposed), so repartitioning with an MS-DOS table fixed that. This is a blank disk image, though. What is wrong here, and how can I correct it? Thank you. UPDATE: I have booted the VM with a GParted Live disk and formatted this volume with an MS-DOS partitioning scheme and a single 20GB NTFS file system. Now when I boot the Win2k8 CD and load my drivers, I get a different error. As you can see at the bottom of screenshot6: "Windows cannot be installed on this hard drive space. Windows must be installed to a partition formatted as NTFS". Clicking format produces the error (0x80004005) on the screen, so I think this is still a driver issue, because Windows can see the drive but not interact with it properly. Is that insane thinking?
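    One way to take the storage driver out of the equation entirely: present the image to the guest on the emulated IDE bus, which needs no extra driver, and only switch back to virtio after Windows is installed. A sketch of the libvirt <disk> element, assuming a raw image at a hypothetical path:

        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/var/lib/libvirt/images/win2k8.img'/>
          <target dev='hda' bus='ide'/>
        </disk>

    If setup partitions and formats fine on IDE, the 0x80070013/0x80004005 errors point squarely at the virtio driver; if it still fails, the image file or its permissions become the suspect.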

    Read the article

  • How to access previous VHD versions of system backup?

    - by feklee
    Quote from the 31 Oct 2009 TechNet article "Learn more about system image backup": During the first backup, the backup engine scans the source drive and copies only blocks that contain data into a .vhd file stored on the target, creating a compact view of the source drive. The next time a system image is created, only new and changed data is written to the .vhd file, and old data on the same block is moved out of the VHD and into the shadow copy storage area. Volume Shadow Copy Service is used to compute the changed data between backups, as well as to handle the process of moving the old data out to the shadow copy area on the target. This approach makes the backup fast (since only changed blocks are backed up) and efficient (since data is stored in a compact manner). When restoring the image, blocks will be restored to their original locations on the source disk. If you want to restore from an older backup, the engine reads from the shadow copy area and restores the appropriate blocks.

    For the last few days, a daily system backup of drive C: to drive E: has been scheduled and run by Windows 7 Backup and Restore. Drive C: currently holds 233 GB of data, which fits comfortably on drive E:, a 1 TB drive with 727 GB of free space remaining. How do I access the previous version of a VHD? I right-clicked on files and folders in E:\WindowsImageBackup and looked for Previous Versions, but always get: "There are no previous versions available".
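    A sketch of getting at the older blocks by hand, since they live in the shadow copy storage area on the target rather than in separate .vhd files (run from an elevated prompt; the shadow copy number comes from the vssadmin output, and the link name is arbitrary):

        vssadmin list shadows /for=E:
        mklink /d C:\oldbackup \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy2\

    The trailing backslash on the device path matters. Inside the link you should find the E:\WindowsImageBackup tree as it looked at that snapshot time.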

    Read the article

  • Group Policy for DNS Server Addresses

    - by John
    This question deals strictly with the methodology of using Active Directory to publish DNS settings, and to control the DNS server address order, on domain-attached clients. This is not asking whether this is the right function, role, or best use of group policy, but rather how to make it work. I would like to be able to publish DNS addresses in group policy and possibly control the order in which the client consumes the addresses. I have tried following the information from this site without success. I set the following setting within group policy, but the client never shows the settings within the TCP/IP properties: Computer Configuration/Administrative Templates/Network/DNS Client/DNS Servers. I listed them as a single space-separated list, for example: 192.168.0.1 192.168.10.3 192.168.34.2 192.168.2.67 192.168.56.99 192.168.99.23. This would be for Windows 7, Windows Server 2003, and Windows Server 2008 clients. I am not sure what I am doing wrong or how to get this to work. Am I missing a setting? Do I need to set something differently?
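    One thing worth knowing: settings delivered by this policy never appear in the NIC's TCP/IP properties dialog; they land in the policy area of the registry and are consumed from there. A sketch of verifying on a client that the policy actually arrived (the key path is the usual DNS client policy location, but worth double-checking against your OS mix):

        gpupdate /force
        reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient" /v NameServer

    If the value shows your space-separated list, the policy is applying and the question becomes whether that client version honors it; if the value is absent, it's a scoping or filtering problem.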

    Read the article

  • Trouble printing to local printer when connected to VPN with split-tunneling enabled

    - by Marve
    I'm a volunteer network admin for a multi-tenant non-profit office space. One of our new tenants uses a VPN to connect to remote resources using RRAS and Small Business Server 2008. They also have a local network printer for the workstations in our office. When connected to the VPN, they cannot print to the local printer. I informed their network admin that they need to enable split-tunneling to fix this. Their network admin enabled split-tunneling, but apparently printing still didn't work. He told me that I need to open port 1723 on our office firewall to allow it to work. I'm just a novice administrator and not familiar with RRAS, but this doesn't sound right to me and I haven't been able to find anything on the web to validate it. Additionally, my understanding of split-tunneling is that it is handled entirely by the VPN client and should work irrespective of firewall settings. Is my understanding of the situation incorrect? What steps should I take to resolve this problem?

    Read the article

  • Small office network setups

    - by user39822
    I work at a small office and we're overhauling our network setup there. We're a web dev company and at the moment we have 50+ production sites running on the same machine that runs our internal email, which is just plain stupid. We're moving all our client hosting off site and are now looking for something to run our internal office requirements. Below is a brain dump:
    - Equal numbers of Macs & PCs, about 25 machines in total.
    - We need a central "server" to host files that should be accessible to everyone as a "network drive". If possible we'd like to use low-cost hardware for this (Mac or Win based). Disk space should be upward of 1TB.
    - Ideally we should also be able to run a small web server on this machine (LAMP stack) to run some planning and billing applications we wrote ourselves.
    - We need some sort of MS Exchange alternative for things like a shared calendar, and especially being able to set Out of Office replies.
    - We have one printer that is connected to the network.
    - The setup should preferably be something that can be managed easily via a graphical interface and NOT require command line skills.
    - Users want to keep using Apple Mail or MS Outlook.
    After a quick google I came across the Zimbra collaboration suite; can anyone recommend this, or any other solution, for our office?

    Read the article

  • How to clone a HDD and then use the clone with VMware (so that Windows works!)?

    - by Ahmad
    I have a system on which Windows 7 is installed, and I am trying to make a clone of its HDD image, which I then want to use on my main PC with VMware, so that I can boot Windows 7 off the cloned HDD. I used Ultimate Boot CD v5.1.1 with the system whose HDD I wanted to clone, and cloned it using EaseUs Disk Copy, which comes with Ultimate Boot CD. The source HDD was 250 GB in size and had 3 partitions, while the USB HDD I attached to the system as the destination/clone HDD was 320 GB in size. I chose to create an exact replica, so 250 GB worth of data (partitions, etc.) was copied exactly and the rest of the space was left unallocated. I then connected this USB HDD to my main PC, fired up VMware Workstation 8, defined a new virtual machine, and chose to boot off the USB HDD. The result is that while Windows is booting (from the cloned HDD inside VMware), I get a blue screen error before I reach the login screen. How can I change my methodology so that Windows actually boots from the clone? I can change any tools I use, etc.
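    A blue screen that early in boot is very often STOP 0x0000007B: the Windows 7 install lacks a driver for the storage controller the VM presents, which makes it a driver problem rather than a cloning problem. Two things worth trying: point the VM's disk at the USB drive directly (Workstation's "use a physical disk" option when adding a hard disk), and make sure the virtual disk sits on the same controller type the source machine used (IDE is the safest first try). If you'd rather have a self-contained image, a sketch of converting the raw clone to VMDK with qemu-img, where /dev/sdX is a placeholder for the USB disk device:

        qemu-img convert -p -O vmdk /dev/sdX win7-clone.vmdk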

    Read the article

  • GlassFish v2.1 -- getting Application Client and Eclipselink to work together?

    - by Nick
    We are trying to use EclipseLink 1.1 with GlassFish v2.1. Following the instructions at http://wiki.glassfish.java.net/Wiki.jsp?page=FaqEclipseLinkGlassFishV2, I adapted the instructions for the appclient script on Linux by adding the lines:

        APPCPATH=$APPCPATH:$AS_INSTALL/lib/eclipselink-1.1.1.jar
        export APPCPATH

    to the appclient shell script. This however is not working. On running the application client (using GlassFish's Web Start), I get the error: WARNING: "IOP00810257: (MARSHAL) Could not load class org.eclipse.persistence.indirection.IndirectList". Has anyone else succeeded in getting GF v2.1 to work with EclipseLink? Or any ideas on what I might be doing wrong? I found this bug report: https://glassfish.dev.java.net/issues/show_bug.cgi?id=8204, where Tim Quinn (tjquinn) said: "App client container support for persistence is not yet in place". I think this refers only to GlassFish v3, and it should be working in GlassFish v2. Is this correct? I'm working on the assumption that this will work once the ACC knows where to find the EclipseLink jar. Thanks in advance, Nick.

    Read the article

  • Running Flash on a headless Solaris box

    - by Marty Pitt
    Our build server is a Solaris box, and I'm trying to run a suite of FlexUnit tests as part of the automated build process. This works by compiling a swf movie with a suite of automated unit tests. The build script launches this movie, which automatically begins running the tests. Results of each test are sent back to the launching script across a port and written out to a local xml file. Once the tests are completed, the movie closes down, and the build script interrogates the results to see if all the tests passed. The FlexUnit wiki provides information about how to achieve this on a Unix server, by using Xvnc to provide a virtual space for the flash movie to run its tests in. I've passed this information on to our sysadmin team (along with the link to the article), and I've been told that because this is a Solaris box, we can't use that approach - Xvnc isn't supported on Solaris. Unfortunately, I know very little about servers, *nix vs Solaris, or Xvnc. Can someone please provide some advice about how we can achieve the same outcome on a Solaris box?
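    If it's only Xvnc that's unavailable, any headless X server should do the same job, and Xvfb (the plain virtual framebuffer) ships with most X11 installations; whether your Solaris release includes it is worth checking with the admins. A sketch of the wrapper, where the player binary name is a placeholder for whatever standalone Flash player the build uses:

        Xvfb :1 -screen 0 1024x768x24 &
        DISPLAY=:1
        export DISPLAY
        flashplayer FlexUnitTests.swf   # hypothetical player + movie name

    The tests then run against the in-memory display, and nothing needs a physical console.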

    Read the article

  • MRTG Monitoring of Disk

    - by Antitribu
    I am trying to monitor disk usage over SNMP using MRTG on CentOS 5.2. I'm open to any suggestions as to the best way to achieve this (I would also like to do other metrics like CPU). Please don't assume I know anything about MRTG. I am using the following config:

        LoadMIBs: /usr/share/snmp/mibs/UCD-SNMP-MIB.txt,/usr/share/snmp/mibs/TCP-MIB.txt
        workdir: /var/www/html/mrtg/temp/
        #
        # Disk Usage Monitoring
        #
        Target[servername.]: dskPercent.0&dskPercent.0:[email protected]
        Title[servername.]: / on servername
        routers.cgi*Desc[servername.]: / on servername
        routers.cgi*ShortDesc[servername.]: /
        MaxBytes[servername.]: 100
        AbsMax[servername.]: 100
        Options[servername.]: growright,nopercent,gauge
        YLegend[servername.]: used disk space
        ShortLegend[servername.]: % used
        Legend1[servername.]: usage
        Legend2[servername.]: usage
        Legend3[servername.]: peak usage
        Legend4[servername.]: peak usage
        LegendI[servername.]: usage
        LegendO[servername.]: usage
        routers.cgi*Icon[servername.]: disk-sm.gif
        routers.cgi*Options[servername.]: noo,nomax,noabsmax
        Unscaled[servername.]: dwmy

    I receive the errors:

        Unknown SNMP var dskPercent.0 at /usr/bin/mrtg line 2035
        Unknown SNMP var dskPercent.0 at /usr/bin/mrtg line 2035

    From forum surfing etc. the suggestion is to use the fully qualified OIDs; I'd like to avoid this (for readability). So essentially I'm wondering where I can find a MIB file compatible with MRTG for its reference, or a working config file.
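    One likely cause, offered as a sketch rather than a certain fix: UCD's dskTable only exists if the agent has been told to watch disks, and its index starts at 1, not 0. On the monitored host that means a disk line in snmpd.conf, after which the table can be verified from the MRTG box:

        # /etc/snmp/snmpd.conf on the monitored host
        disk / 10000

        # from the MRTG host, check that the table now answers:
        snmpwalk -v 2c -c public servername UCD-SNMP-MIB::dskPercent

    If the walk returns dskPercent.1, the MRTG Target should reference dskPercent.1&dskPercent.1 rather than .0.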

    Read the article

  • Parking domains and avoiding so-called "search engine penalties"

    - by senthilkumar-c
    I have purchased two domains from one particular registrar, and hosting from GoDaddy. Assume they are domain1.com and domain2.com, and assume my hosting IP address is 111.111.111.111. I added both domain1.com and domain2.com to my domain management control panel and gave the same two nameservers for both domains at my registrar's control panel. So now both domains should show the same website. When I ping "domain1.com" or "domain2.com" the results say, respectively:

        Pinging domain1.com [111.111.111.111] with 32 bytes of data:
        Pinging domain2.com [111.111.111.111] with 32 bytes of data:

    So they both point to the same hosting IP. BUT, internally, I have configured IIS to point them to different folders so that different websites are shown. (My hosting plan is expensive and I intend to use the space and bandwidth for many websites.) Still, technically, both domains point to the same IP address. Is this a bad thing? Is this what is called "domain parking"? I have read some search engine forum posts saying that two domains pointing to the same IP/website will be penalised by search engines. I have also read that simply "parking" the domains won't attract a penalty. I don't know whether what I have done is parking, or the so-called "wrong" thing. Can someone shed light on what I have done and what I should do? I don't want to be blacklisted by any search engine. P.S. I know this is not a search engine forum, but I am new to website hosting and domains and I am very weak in nearly all technical terms and concepts relating to web hosting and domains. I thought this would be a good place to understand these things.

    Read the article

  • Why does Windows Event Log stop logging events before maximum log size is reached?

    - by Tuure Laurinolli
    I have a service that produces a lot of event log output. Currently the event log is configured to overwrite any old events to keep the log from ever getting full. We have also increased the event log size considerably (to about 600 MB). Recently the service started reporting errors to its clients, and the error message it was sending is "The event log file is full". How can this be, when the event log is configured to overwrite as necessary? In our hurry to get the service back up we cleared the event log without saving its contents, but most likely it had not reached 600 MB yet, judging from the sizes of some earlier log dumps. There is also MS KB entry 312571, which reports that a hotfix for a similar issue is available, but the configuration that the fix applies to is not exactly the same as ours; specifically, the fix only applies if event logs are configured to never overwrite old events. I wonder if this has something to do with the fact that the log files apparently are memory-mapped. What happens if the system runs out of address space to map the files into?

    Read the article

  • Help trying to figure out why IIS7 is crashing / locking up / denying connections

    - by Pure.Krome
    Hi folks, I've got a pretty busy website running on a single web front-end machine, on W2K8 + IIS7. Every now and then - e.g. maybe Monday at 3am, then a few days later at some early morning time, then nothing for 2 weeks, etc. - the website fails to respond to any client connections; i.e. no one can connect to the website. I can remote desktop to the machine with no problems. I restart the app pool (the website is running in integrated mode): still nothing. I try to get a crash dump of the process (it's around 600 MB, maybe even more), and that fails after about a minute of trying (and I have plenty of HD space). The only way to fix this issue is to manually stop the WWW service and then start it again. The stopping takes a while (a minute?) while starting is nearly instant. I'm at a loss to figure out what part of my code is causing this. At first, I thought it might be a stack overflow because of some error that might be going to the error page, which in turn errors... rinse, repeat, boom. But I've had a look at the error page and it feels ok. So, I'm hoping someone might be able to help and say how I can correctly get a proper dump of the IIS process so I can then do some more autopsy on it. I would email Tess Ferrandez (the goddess of crash debugging) but I thought I'd try here before I spam her. Does anyone have any suggestions for how I can start to debug this issue?
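    For the dump itself, Sysinternals ProcDump tends to cope better with large w3wp processes than Task Manager's right-click option; a sketch, run from an elevated prompt while the site is wedged (the output path is arbitrary):

        procdump -ma w3wp.exe c:\dumps\w3wp_hang.dmp

    -ma writes a full memory dump, which is what a WinDbg/Tess-style autopsy needs. If several app pools are running, pass the PID of the right w3wp instead of the name.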

    Read the article

  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Hi, recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts. I suspect that it is in fact only 32-bit SQL, and I'd like to verify this. Here are some details: The OS is definitely 64 bit. xp_msver shows Platform as NT INTEL X86. SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)... However, sqlservr.exe is not shown with '*32' in Task Manager; does anyone know why this is the case, if it is in fact 32-bit as claimed? Despite this, it does seem to be running out of the x86 Program Files folder. If I do the same checks on a confirmed 64-bit installation, they give back the expected 64-bit readings, which strongly suggests the server in question is running 32-bit SQL. That being the case, the question arises of how much memory this 32-bit install can use. Task Manager reports about 3.5GB memory usage for sqlservr.exe (the server has 16GB physical). I suspect that AWE has not been configured at all, and therefore the server will be significantly under-utilised (remembering that the OS is 64 bit) if SQL is simply using a 32-bit address space. Is this assumption correct? I feel the server should have SQL reinstalled as 64-bit in order to fully utilise the hardware platform; however it is currently heavily in production, so this will be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (unless this is a bad idea?). I apologise that this question is a little vague/lost; I'm no SQL expert, just trying to get a handle on what's going on here.
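    For the stopgap, enabling AWE on 32-bit SQL 2005 is a small change; a sketch (the 12288 MB cap is just an example value, and the SQL service account also needs the "Lock pages in memory" privilege before AWE takes effect after a service restart):

        sqlcmd -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -Q "EXEC sp_configure 'awe enabled', 1; RECONFIGURE;"
        sqlcmd -Q "EXEC sp_configure 'max server memory', 12288; RECONFIGURE;"

    On a 64-bit OS, a 32-bit SQL instance with AWE can use memory well beyond 4GB for data pages, so this buys real headroom until a proper x64 reinstall can be scheduled.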

    Read the article

  • Delete on Windows Vista and Seven -- discovery process

    - by M'vy
    Hi SUs! I've recently encountered a problem. Using svn at work, I needed to clear some space. As you may know, svn directories are full of sub-directories and files, so the delete process begins with a step of discovering the items to be deleted (I guess this is for displaying the progress bar). In my case it was still running after I watched Braveheart (off-topic: a good film in my opinion; on-topic: it lasts 2h50) and had counted 440,000+ files. I finally decided to kill the process and use the good old cmd with a del <directory> to do the job (done in some minutes). So I'm wondering if someone knows how to override the system to make it actually begin deleting while still scanning the other items? In the end I just want the files to be deleted; I don't care how many files there are to delete. On the contrary, I care about the time it takes. Thanks
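    For reference, the cmd equivalent done explicitly in two passes; /q suppresses prompting and there is no discovery phase, so double-check the path before running (the path here is a placeholder):

        del /f /s /q "C:\work\big-svn-checkout" >nul
        rd /s /q "C:\work\big-svn-checkout"

    del /s removes the files, and rd /s /q then removes the now mostly empty directory tree.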

    Read the article

  • How can I stop my web browser from redirecting to localhost when using WAMP on Windows 7?

    - by Josh
    I'm currently using Windows 7 with WAMP to try to work on some software, but my web browsers will not accept cookies from the "localhost" domain. I tried creating a few bogus domains in my hosts file by pointing them to 127.0.0.1, but when I type them in I am automatically redirected back to localhost. I have also configured virtual hosts in Apache to correspond with the domains I added to the hosts file, and it still redirects back to localhost. Is there anything special I must do on Windows 7 to get around this localhost redirect? I'll include my hosts file here:

        # Copyright (c) 1993-2009 Microsoft Corp.
        #
        # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
        #
        # This file contains the mappings of IP addresses to host names. Each
        # entry should be kept on an individual line. The IP address should
        # be placed in the first column followed by the corresponding host name.
        # The IP address and the host name should be separated by at least one
        # space.
        #
        # Additionally, comments (such as these) may be inserted on individual
        # lines or following the machine name denoted by a '#' symbol.
        #
        # For example:
        #
        #      102.54.94.97     rhino.acme.com          # source server
        #       38.25.63.10     x.acme.com              # x client host

        # localhost name resolution is handled within DNS itself.
        #       127.0.0.1       localhost
        #       ::1             localhost
        127.0.0.1 magento.localhost.com www.localhost.com

    Thanks for looking :)
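    Two things worth checking, offered as a sketch: the Windows resolver cache may still hold old lookups, and each name needs a matching name-based virtual host on the Apache side. The vhost paths below are hypothetical; adjust for your WAMP layout (Apache 2.2 syntax):

        ipconfig /flushdns

        # httpd-vhosts.conf
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName magento.localhost.com
            DocumentRoot "C:/wamp/www/magento"
        </VirtualHost>

    Also note that localhost.com is a real, externally owned domain; if the hosts entry isn't being read, the browser resolves www.localhost.com on the public internet, which can look like a "redirect".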

    Read the article

  • How can I upgrade from Ubuntu Intrepid Ibex 8.10 to Jaunty 9.04 when old-releases no longer has the necessary packages?

    - by tommy chheng
    I changed my sources.list to:

        deb http://old-releases.ubuntu.com/ubuntu/ intrepid main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ intrepid-updates main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ intrepid-security main restricted universe multiverse

    I tried installing update-manager-core with sudo apt-get install update-manager-core, but I get this error:

        1 upgraded, 3 newly installed, 0 to remove and 40 not upgraded.
        Need to get 2506kB/2555kB of archives.
        After this operation, 4346kB of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        Err http://old-releases.ubuntu.com intrepid-updates/main update-manager-core 1:0.93.34
          404 Not Found
        Err http://old-releases.ubuntu.com intrepid-security/main dpkg 1.14.20ubuntu6.3
          404 Not Found
        Failed to fetch http://old-releases.ubuntu.com/ubuntu/pool/main/d/dpkg/dpkg_1.14.20ubuntu6.3_amd64.deb  404 Not Found
        Failed to fetch http://old-releases.ubuntu.com/ubuntu/pool/main/u/update-manager/update-manager-core_0.93.34_amd64.deb  404 Not Found
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Running apt-get update or --fix-missing returns the same errors. How can I successfully upgrade from Intrepid Ibex to Jaunty 9.04?

    Read the article

  • Why are ISPs installing routers on my site when the feed is a form of Ethernet already?

    - by Cosmin Prund
    I'm connected to 3 ISPs right now. Two of them already have routers at my site; the third one announced "they need to install some equipment" when I requested a BGP session. I can only assume they need to install a router, since that connection is now working fine using the usual /30 net block, and the last-mile solution is not going to change, since they only installed it last week and BGP was in the contract from the beginning. I simply don't understand this: the feed is already a form of Ethernet. Even though they're using different technologies for the last mile, they all enter the ISP router via an RJ45 WAN port. I assume the ISP router does something really important that can't be done by the big router on the other end of the connection. It must also be something that can hurt them if misconfigured, since they don't trust us (the client) to do that work on our router. And I'm not talking about cheap throw-away routers here: one of them is a Cisco 2800. Edit, to add network details: I'm connected to 3 ISPs, two over radio links, one over fiber optic. One of the radio links is going to be dropped, and the other radio link will be turned into fiber sometime next year. The fiber is 20 Mbit, radio 1 is 40 Mbit and radio 2 is 2 Mbit. I've got a /24 of provider-independent address space. I'm not doing anything out of the ordinary with my network; I'm overly connected because my network needs to be up all the time.

    Read the article

  • Strange network connectivity problem

    - by Marc
    Here is my network connectivity:

        cable modem
          |
          |(WAN)
        wrt54g (default gateway, 192.168.1.1) -- earth
          |(LAN)
          |
        Simple Switch1 ---- venus, laptop, saturn (Windows AD DC)
          |
        SimpleSwitch2 ---- neptune, mars, mercury

    simpleSwitch2 was hanging off the wrt54g; I moved it to SW1 during troubleshooting. Nothing described below was any different. earth is connected via wireless to the wrt54g. I can ping from laptop to mars, neptune & mercury. I can ping from earth to venus, saturn & laptop. However, pinging mars, mercury or neptune from earth gives the following result:

        Pinging mars.XXX.XXX [192.168.1.105] with 32 bytes of data:
        Reply from 192.168.1.122: Destination host unreachable.
        Reply from 192.168.1.122: Destination host unreachable.
        Reply from 192.168.1.122: Destination host unreachable.
        Reply from 192.168.1.122: Destination host unreachable.

        Ping statistics for 192.168.1.105:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

    .122 is the address of the machine from which I am pinging. earth is a Vista machine; the Windows firewall is off. saturn is my DNS & DHCP server. Can anyone give me any ideas what the h*ll is going on? Clearly the topology is a factor. And yes, I am a space geek.
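    "Destination host unreachable" reported from your own address means earth never got an ARP answer for the target on the local subnet. A quick check from earth right after a failed ping (sketch; built-in Windows commands):

        ping -n 1 192.168.1.105
        arp -a | findstr 192.168.1.105

    If no ARP entry ever appears for .105 while entries for venus/saturn/laptop do, the frames are dying between the wireless side of the wrt54g and SimpleSwitch2, which narrows it to the AP's wireless-to-LAN bridging or the inter-switch link.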

    Read the article

  • Move a screen session back to its original PID

    - by cron410
    I installed McMyAdmin (a Minecraft manager) on Ubuntu 12.04 32-bit, and wrote my own service to start McMyAdmin (a .NET app running in Mono) in its own screen session, with an init.d script that can inject McMyAdmin commands into that session. It's been running great! Today I decided to start installing a Source dedicated server (Counter-Strike pro mod). I determined it was going to be a long download process, so I quit the process and fired up a fresh screen session called "source". I pasted the command in, but it had a space at the beginning and bash complained about ignoring semaphores or some such. I detached and reattached the session; it was sliding like butter. I ctrl+a-d out of the session and start exploring the new folder structure, and figure out where I need to place a symbolic link. I go to resume that screen and this is what I see:

        $ screen -r source
        There are several suitable screens on
            20091.source    (12/02/12 22:59:53)     (Detached)
            19972.source    (12/02/12 22:57:31)     (Detached)
            917.minecraft   (11/30/12 15:30:37)     (Attached)

    It appears I am connected to the minecraft screen?!?! So I attach to the other screens one at a time: minecraft is running in 19972.source, and sourceds is running in 20091.source. So how the hell did I move the minecraft process to another session called source, while my main terminal is now "attached" to the minecraft screen? More: I just used exit to quit the putty session, then logged back in; it's still the same. I did that 3 more times, and now the minecraft screen is gone and everything is acting as it should, except of course for the session name and start time of the "new" minecraft screen. Should I just submit this as a bug for GNU screen?

    Read the article

  • Synchronize two directories on a Linux PC

    - by Gab
    I need a distributed filesystem (or a synchronization tool) capable of keeping a directory synchronized across 4 PCs. My requirements are:
    - offline access (data must be available offline on each PC)
    - preserve execution rights: some files are marked executable on a Linux partition. This flag should be replicated.
    - efficient sync strategy: some of my files are 20GB, and they are changed quite often, but in very little parts (VirtualBox images). Delta transmissions are welcome.
    - efficient handling of space: no history for files, and files shouldn't be copied to temp directories "just in case you break it".
    - it must propagate deletions of files
    - modification can happen on any of the 4 PCs; changes should be propagated when the other PCs are connected.
    Other specs of my setup: sync is over a LAN; the total amount of data to be synced is around 180GB, in some ten thousand files. Changes are small, but can happen in big files. At the moment I'm interested in a Linux-only solution. Conflicts either don't happen or are solved with "last one wins". I haven't found any good solution. I've been trying:
    - unison: the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, disk light steady on.
    - SparkleShare: doesn't handle large files nicely. It keeps a history of all your changes that grows indefinitely. They promise it will be fixed in future releases, but at the moment it still doesn't fit my needs.
    - ownCloud: keeps a history of each file I change.
    - Coda? (help! I couldn't set it up correctly!)
    - git-annex assistant: transforms all your files into symlinks and marks the original file as read-only ("just in case you make a mistake while you modify it"!). Before you edit a file you have to issue a special command, "git-annex unlock", which creates a local copy of the file, and you have to remember to lock it again if you want it synchronized.
    What to try next?
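    Since unison is the closest fit so far, it may be worth taming its hashing pass before discarding it. A sketch of a profile (~/.unison/sync.prf; the host and paths are hypothetical) that makes sure fastcheck is on, so unison trusts file sizes and modification times instead of re-hashing every big file on each run:

        # ~/.unison/sync.prf
        root = /home/gab/data
        root = ssh://otherpc//home/gab/data
        fastcheck = true
        times = true

    With fastcheck, only files whose size or mtime changed get rescanned, which is usually what makes 20GB VirtualBox images bearable; the first run still has to hash everything once.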

    Read the article
