Search Results

Search found 4783 results on 192 pages for 'tests'.


  • Analyze a BSOD (irql_less_than_or_equal)

    - by Bruno Reis
    Hello. About two months ago I bought a new system and built it at home:
      - Motherboard: XFX X58i
      - Processor: Core i7 920, using the stock cooler
      - Memory: 3x2GB Corsair DDR3 1600
      - Video card: NVIDIA GTS 250 (1GB)
      - Hard disks: 2x WD 500GB, 7200rpm
    I have two screens plugged into the video card, and the system is connected to a 550W PSU. Nothing is overclocked. After building the system, I stressed it a lot with Prime95 and rthdribl to check its stability. All my tests were perfect, so I reinstalled Win 7 x64 Professional and started using it normally. During the first week (2010-03-15) I got the infamous irql_less_than_or_equal BSOD. Ten days later (2010-03-24) I got another one, then again on 2010-04-09 and 2010-05-04. In the last three days it has become worse: one blue screen per day (2010-05-12, 2010-05-13, 2010-05-14). I installed BlueScreenView to try to obtain some information, but I'm not able to extract anything useful apart from the bug check string (irql_less_than_or_equal) and the fact that it was caused by ntoskrnl.exe (the first three at ntoskrnl.exe+71f00, the last four at ntoskrnl.exe+70600 -- which I suspect could be the same thing, as Microsoft could have patched this file in the meantime, so the address of the offending function changed). Then I stressed my memory sticks with memtest; they worked perfectly. After booting, I stressed my GPU with FurMark and RTHDRIBL, and everything was fine. Then I stressed the CPU with 4 instances of Prime95 while monitoring the temperature -- which never exceeded 85°C with the case closed -- and everything was fine. Finally I stressed the whole system with HeavyLoad for a looooong time, and everything worked just fine. So I have stressed most of the components of the system, but couldn't get any useful information out of it. Do you have any hints on what else I can do to find the culprit? Thanks, Bruno
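
    A useful next step, not covered above, is to open one of the minidumps with the Windows kernel debugger rather than BlueScreenView: !analyze usually names the driver behind an IRQL bugcheck instead of just ntoskrnl.exe. A minimal sketch, assuming the Debugging Tools for Windows are installed (the minidump filename is an example); .symfix points the debugger at the public Microsoft symbol server and .reload pulls the symbols before the analysis:

        REM open one of the minidumps in the kernel debugger
        "C:\Program Files\Debugging Tools for Windows (x64)\kd.exe" -z C:\Windows\Minidump\051410-12345-01.dmp

        kd> .symfix; .reload
        kd> !analyze -v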

    Read the article

  • Amazon CloudFront and EC2: Global Load Balancing

    - by Matt Rogish
    We have an app that is going to store and serve up a decent amount of data in S3 to a global audience, and latency should be minimized. So we've been doing tests with Amazon CloudFront and have seen favorable results. However, we need a thin middleware layer (to do security etc.) and we'd like to put that in EC2. Due to security restrictions, this middleware layer will do the file streaming from S3/CloudFront: S3/CloudFront - EC2 - Clients. We can geographically distribute the EC2 nodes (US East/West, and Ireland), but the problem is that a client in the EU would hit our US server and be fed data from there, rendering much of the performance benefit of CloudFront moot. I've been digging through the EC2 docs but I can't find a built-in way to get a geographically distributed version of EC2 à la CloudFront. Elastic Load Balancing sounds like the way to go, but I can't seem to find a way to have it route clients based on their location... Preferably, we'd like to keep the amount of stuff outside of EC2/S3/etc. to a minimum (for obvious reasons). Any ideas how to do that within the EC2/S3 framework? DNS/routing tricks? Thanks!
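
    One DNS-level approach worth sketching here, as an illustration only: publish the same name once per region and let the DNS service steer each client to the nearest EC2 endpoint. This assumes a DNS provider with latency-based routing (for example Route 53 latency records); the zone ID, hostname and IPs below are hypothetical.

        # change-batch for two latency-routed A records pointing at regional EC2 nodes
        aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{
          "Changes": [
            {"Action": "UPSERT", "ResourceRecordSet": {"Name": "api.example.com", "Type": "A",
              "SetIdentifier": "us-east-1", "Region": "us-east-1", "TTL": 60,
              "ResourceRecords": [{"Value": "203.0.113.10"}]}},
            {"Action": "UPSERT", "ResourceRecordSet": {"Name": "api.example.com", "Type": "A",
              "SetIdentifier": "eu-west-1", "Region": "eu-west-1", "TTL": 60,
              "ResourceRecords": [{"Value": "203.0.113.20"}]}}
          ]}'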

    Read the article

  • Geographically distributed file system with preferred locality

    - by dpb
    Hi all -- I'm building an application that needs to distribute a standard file server across a few sites over a WAN. Basically, each site needs to write a lot of miscellaneous files of varying size (some in the hundreds-of-MB range, but most small), and the application is written such that collisions aren't a problem. I'd like to have a system set up that meets the following qualifications:
      - Each site can store files in a shared "namespace"; that is, all the files show up in the same filesystem.
      - Each site does not send data over the WAN unless necessary, i.e. there is local storage on each side of the WAN that is "merged" into the same logical filesystem.
      - Linux & free ($$$) is a must.
    Basically, something like a central NFS share would meet most of the requirements; however, it would not allow locally written data to stay local - all data from the remote side of the WAN would be copied over the link all the time. I have looked into Lustre and have run some successful tests with it; however, it appears to distribute files fairly uniformly across the distributed storage. I have dug through the documentation and have not found anything that will automatically "prefer" local storage over remote storage. Even something that simply went with the lowest-latency storage would be fine - it would work most of the time, which would meet this application's requirements. Any ideas?
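
    For reference, the central-NFS baseline mentioned above is only a few lines to stand up (hostnames, paths and the export network are hypothetical); the locality problem is simply that every write from a remote site still crosses the WAN to reach the central server.

        # on the central server (filer1): export the shared tree
        echo '/srv/shared 10.0.0.0/8(rw,sync,no_subtree_check)' >> /etc/exports
        exportfs -ra

        # on each site: mount the same namespace
        mount -t nfs filer1:/srv/shared /mnt/shared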

    Read the article

  • Windows XP can use a wired network port, but MacBook (OS X) fails on the same port

    - by Dean Hill
    I wired the Cat5 in my house seven years ago, and the wired ports have worked fine with both my Windows XP laptop and my MacBook. My wireless network also works fine, but I like to use wired occasionally. One of the Cat5 runs wasn't terminated with a jack, so I recently terminated this wire with a port/jack on the wall end and a standard Cat5 plug on the end that plugs into my router - the same setup as my other runs. Unfortunately, the MacBook isn't working well with the new wired port. The OS X Network System Preferences show the IP, subnet, router, etc., and everything looks fine. A "netstat -ibd" shows no errors or dropped packets. However, when I open a page in Safari, the status says "Contacting 'www.google.com'" and appears to hang. If I wait a couple of minutes, part of the Google page starts to display, but the page never fully loads. When I use a Windows XP laptop on the same wired port, everything works fine: an internet speed test shows good results, all web pages load fine, and a "netstat -e" under Windows shows no errors. I've used a Cat5 tester, and the cable tests fine (wires 1-8 light up in sequence). I've replaced both the port/jack and the connector twice to make sure I wired things correctly. I'd really like this Cat5 run to work with the MacBook (and I'm trying to avoid pulling a new length of cable). Any ideas what the problem could be?
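
    Two things worth checking on the Mac side that the steps above don't cover are a speed/duplex mismatch and an MTU problem on the new run, since both can produce exactly this "partial page load" symptom. A minimal sketch from Terminal (the router address is hypothetical):

        ifconfig en0 | grep media          # should report e.g. "autoselect (1000baseT <full-duplex>)"
        ping -c 5 192.168.1.1              # basic reachability and packet loss to the router
        ping -D -s 1472 -c 5 192.168.1.1   # don't-fragment ping at maximum payload; failures point at an MTU/cabling issue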

    Read the article

  • CouchDB Errors: "undefined symbol: js_fgets" on Ubuntu 10.04

    - by MattEzell
    Hello, ServerFault community. I have been wrestling with this problem for weeks without resolution and was hoping that someone here might be able to help. As indicated above, this is on Ubuntu 10.04 (x86) using CouchDB 0.11.0, which I have built and installed from source. Everything with the dependencies and with the CouchDB install itself goes off without a hitch - no errors or complaints in the configure, make or install steps - and CouchDB seems to be running fine: I can access Futon without issue and use all the CouchDB functionality found in Futon. Unfortunately, when I attempt to use any shows/views for an installed CouchApp, I get the js_fgets error above before the terminal (and the CouchDB log) fills up with tons of JSON. Nothing ever renders in the browser, despite what Firebug reports. I have followed the official instructions (paying special attention to the 10.04 instructions) and have followed pretty much every Google thread I can find on similar issues. I have chased SpiderMonkey (and Rhino) as well as Erlang as the culprit, but despite reinstalls and tests with these components, I still cannot get past this CouchDB issue on my system. Ideas? Pointers? Suggestions? Has anyone successfully installed and used CouchDB 0.11.0 on an Ubuntu 10.04 system to RUN APPLICATIONS? I have come across multiple people who immediately respond "yes, I have it installed - it works great", only to realize in the end (as I did) that just because Futon thinks things are working doesn't mean CouchDB is properly handling ALL requests. Thank you for your time and assistance!
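
    An "undefined symbol: js_fgets" at view/show time usually means the couchjs binary is resolving a different SpiderMonkey library than the one it was built against, so one hedged check is to see exactly which libjs/libmozjs it links at runtime and whether that library exports the symbol. The paths below are the usual defaults for a source build and a distro libmozjs, and may differ on your system:

        ldd /usr/local/lib/couchdb/bin/couchjs | egrep -i 'libjs|libmozjs'   # which SpiderMonkey .so is actually loaded
        nm -D /usr/lib/libmozjs.so | grep js_fgets                           # is the symbol exported by that library?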

    Read the article

  • Creating basic, redundant gigE or IB storage network for Xen?

    - by StaringSkyward
    With only a modest budget, I want to move my 4 Xen servers over to network storage - either NFS or iSCSI, to be determined by how well it performs when we test it (we need good throughput and it must keep working through link- and switch-failure tests). We may add another couple of Xen servers at some point after this is done. I don't know much about the design and operation of storage networks, so I would really appreciate some hints from those with experience. The budget is around $3,800 excluding the storage appliance. I am currently thinking these are my options to remain on budget: 1) go for used InfiniBand hardware and aim for 10 Gb performance; 2) stick with gigabit Ethernet and buy some new switches (Cisco or ProCurve) to create a storage-only Ethernet LAN, upgrading to 10 GigE later but trying to use hardware capable of it where possible to reduce upgrade costs. I have seen used, warrantied InfiniBand switches at reasonable prices (presumably because big companies are converging on 10 Gbit Ethernet?) and the promise of cheap 10 Gb is attractive. I know nothing about IB, so here come the questions: Can I buy 2 switches and put multiple HBAs in my Xen and storage nodes to get redundancy and increased performance without complexity or expensive management software costs? If so, can you point me to some examples? Do NFS and iSCSI work just the same regardless? Is IB a sensible choice, or could/should I use Ethernet or FC on the same budget? I'm keen not to get boxed into a corner for future upgrades, however. For the storage I am likely to build a storage server using NexentaStor, with the intention that I can later add more disks and SSDs, and add another server to provide a failover option at the storage level. An HP LeftHand starter SAN is also under consideration. Thanks in advance.
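
    On the gigabit-Ethernet side, the "two switches, two NICs per node" redundancy can be done with plain Linux bonding and no management software. A minimal sketch for a Debian-style Xen host, assuming the ifenslave package is installed; interface names, addresses and the storage subnet are hypothetical:

        # /etc/network/interfaces -- storage-only bond spanning two switches (active-backup)
        auto bond0
        iface bond0 inet static
            address 10.10.10.11
            netmask 255.255.255.0
            bond-slaves eth2 eth3
            bond-mode active-backup    # survives a link or switch failure; needs no switch-side support
            bond-miimon 100            # check link state every 100 ms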

    Read the article

  • Drobo FS vs Lime Technology unRAID vs FreeNAS

    - by elluca
    I had already decided to buy a Drobo FS until I found these two tests: http://www.digitalversus.com/data-robotics-drobo-fs-p889_9543_487.html and http://www.digitalversus.com/lime-technology-unraid-p889_8992_473.html. The two cons against the Drobo for me: loudness and price. What disadvantages does unRAID have compared to the Drobo FS? Does it also offer the same ease of use, like swapping drives on the go, simply extending capacity by plugging in new drives, notifying me of drive errors, disk-failure protection, dynamic sizing of "partitions", better/worse effective capacity, etc.? Which is more secure? Am I able to simply replace a bad drive with a new one on unRAID? What happens if my PC fails - let's say the CPU overheats? Since I have a complete PC which is going to be replaced, I would only have to pay for the software to use unRAID. I am going to use my NAS for: a music library (how well does it integrate with iTunes?), a picture library, a movie library, and development (I need to be able to use Time Machine). I am going to use this NAS with a MacBook Pro. My current disks: 2x 500 GB, 1x 1.5 TB, 1x 2 TB. On a Drobo FS I would have 2.26 TB of space. What would it be on unRAID? Is FreeNAS also an alternative?
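
    A rough capacity sketch for the unRAID side, under the assumption that unRAID keeps single parity on its largest disk and leaves the rest as data drives (so the 2 TB drive would be consumed by parity):

        # usable data capacity = sum of the non-parity disks, in GB
        echo $((500 + 500 + 1500))    # => 2500, i.e. roughly 2.5 TB usable, with one-disk failure tolerated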

    Read the article

  • Win 2008 R2 - copying TO disk is very slow, copying FROM is more or less okay

    - by avs099
    I have Windows 2008 R2 SP1 with 4 identical SATA disks (Seagate Barracuda 7200) in a RAID 5 array. It has 4 GB of memory; all recent updates are installed. Problem: when I copy a large file from one folder to another, I get about 10 MB/s average speed. When I read this file from a network share via a 1 Gbps connection, I get about 25-30 MB/s. Both numbers seem low to me, but I'm specifically frustrated with the low write speed. There is no antivirus and no Hyper-V; it's just a file server, and when I do my tests nobody else reads from or writes to it (we have only 4 people in the team, so I'm sure). Not sure if it matters, but there is only one logical disk, "C", with all available space (1400 GB). I'm not an admin at all, so I have no idea where to look or what other information to provide. I ran Performance Monitor with "% idle time", "avg bytes read" and "avg bytes write" - here is the screenshot. I'm not sure why there are such obvious spikes. Any idea? Please let me know if you need me to provide more information - what counters I should check, etc. I'm very eager to get this solved. Thank you. UPDATE: we have another Windows 2008 R2 SP1 server with 2 RAID 1 arrays - one is disk C (where Windows is installed), the other is disk E. It is running Hyper-V and does not have antivirus. I noticed the following behavior when I copy a large file (a few GBs): C - C: about 50 MB/sec; C - E: about 55 MB/sec; E - E: 8 MB/sec!!! E - C: 8 MB/sec!!! What could cause this?? The E drive is a RAID 1 array of the same Seagate Barracuda 1 TB drives.

    Read the article

  • LDAP authentication issue with Kerio Connect

    - by djk
    Hi, we have Kerio Connect (mail server) running on a Windows Server 2003 machine on a domain. In the webmail client, users are able to change their domain password. This functionality used to work fine until a user tried to change their password a few days ago; now every password they try results in the webmail client claiming the password is "invalid". I spoke to Kerio about this and they claim that the error is returned by the domain controller, which supports my initial investigation. The error that the DC logs when an attempt is made to change the password is: "80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 52e, vece". The "data 52e" part indicates that this is an "invalid credentials" error. I don't see how this can be, as I've tried (in the Kerio Connect configuration) various accounts that have privileges to modify accounts, including my own, and I am a domain admin. I have run 'dcdiag' (all tests) on the DC and it came back passing every single one of them. I've searched high and low for an answer to this and came up empty. Does anyone have any idea why this may have suddenly started happening? Thanks! Edit: I should mention that the passwords we are changing to do comply with the complexity policy.
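
    Since "data 52e" points at the LDAP bind rather than the password change itself, one hedged way to take Kerio out of the picture is to reproduce the bind from a generic LDAP client using exactly the credentials configured in Kerio. The hostnames, DNs and account names below are hypothetical:

        ldapsearch -x -H ldap://dc01.example.local \
            -D "EXAMPLE\svc_kerio" -W \
            -b "dc=example,dc=local" "(sAMAccountName=someuser)" cn
        # a clean result means the bind account itself is fine and the problem lies in how Kerio submits
        # the change; another 52e here means the credentials/format being sent really are rejected by the DC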

    Read the article

  • Cannot access domain from Windows 2003 client

    - by Peuge
    Hey all, first off, I am a novice at AD and DNS, so please bear with me. This is my current situation: I have one server which is a DC and DNS server (Win2k3) - machine 1 - and another machine, also a Win2k3 server, which is trying to join this domain - machine 2. This is what I have done so far: I have set up DNS on the DC, and its TCP/IP DNS setting points to itself. On machine 2 I have set the DNS to point to the DC. DNS has been set up with a forward lookup zone with the same name as the domain (accdirect.com). I can ping machine 1 from machine 2 by its FQDN and IP. I have set up forwarders on the DC for our ISP's DNS and can browse the internet on both machines. In the DNS MMC on the DC I can see a host (A) record has been created for machine 2. The problem is I still cannot join the domain. When I try to join the domain via My Computer - Properties, it brings up the username/password box, and after I click "OK" it says it cannot find the domain accdirect.com. If I run this from machine 2:

        dcdiag /s:accdirect.com /u:accdirect.com\admin /p:

    then I get the following:

        Performing initial setup:
        ** Warning: could not confirm the identity of this server in the directory versus
        the names returned by DNS servers. If there are problems accessing this directory
        server then you may need to check that this server is correctly registered with DNS
        [accdirect.com] Directory Binding Error 1722: Win32 Error 1722
        This may limit some of the tests that can be performed.
        Done gathering initial info.

    On the DC, all dcdiag and netdiag results pass. If anyone could help me I would really appreciate it! Sorry if any of my terminology is a bit off, I have only been doing this for two days. Thanks, Peuge
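
    Domain joins locate a DC through DNS SRV records rather than the A record, so a quick check from machine 2 is whether the _msdcs SRV records that machine 1 should have registered actually resolve (domain name as given above):

        nslookup -type=SRV _ldap._tcp.dc._msdcs.accdirect.com
        nslookup -type=SRV _kerberos._tcp.dc._msdcs.accdirect.com
        REM no answer here usually means the forward lookup zone was created by hand without
        REM dynamic updates, so Netlogon on the DC never registered its SRV records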

    Read the article

  • Installing mod_pagespeed (Apache module) on CentOS

    - by Sid B
    I have a CentOS (5.7 Final) system on which I already have Apache (2.2.3) installed. I installed mod_pagespeed by following the instructions at http://code.google.com/speed/page-speed/download.html and got the following while installing:

        # rpm -U mod-pagespeed-*.rpm
        warning: mod-pagespeed-beta_current_x86_64.rpm: Header V4 DSA signature: NOKEY, key ID 7fac5991
        [ OK ]
        atd: [ OK ]

    It does appear to be installed properly:

        # apachectl -t -D DUMP_MODULES
        Loaded Modules:
        ...
        pagespeed_module (shared)

    And I've made the following changes in /etc/httpd/conf.d/pagespeed.conf. Added:

        ModPagespeedEnableFilters collapse_whitespace,elide_attributes
        ModPagespeedEnableFilters combine_css,rewrite_css,move_css_to_head,inline_css
        ModPagespeedEnableFilters rewrite_javascript,inline_javascript
        ModPagespeedEnableFilters rewrite_images,insert_img_dimensions
        ModPagespeedEnableFilters extend_cache
        ModPagespeedEnableFilters remove_quotes,remove_comments
        ModPagespeedEnableFilters add_instrumentation

    Commented out the following lines in the mod_pagespeed_statistics block:

        <Location /mod_pagespeed_statistics>
            # Order allow,deny
            # You may insert other "Allow from" lines to add hosts you want to
            # allow to look at generated statistics. Another possibility is
            # to comment out the "Order" and "Allow" options from the config
            # file, to allow any client that can reach your server to examine
            # statistics. This might be appropriate in an experimental setup or
            # if the Apache server is protected by a reverse proxy that will
            # filter URLs in some fashion.
            # Allow from localhost
            # Allow from 127.0.0.1
            SetHandler mod_pagespeed_statistics
        </Location>

    As a separate note, I'm trying to run the prescribed system tests as specified on Google's site, but they give the following error. I'm averse to updating wget on my server, as I'm sure there's no need for it for the actual module to function correctly.

        ./system_test.sh www.domain.com
        You have the wrong version of wget. 1.12 is required.
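
    Independent of the statistics handler, a quick way to confirm the module is actually rewriting responses is to look for the response header it adds (hostname and path below are hypothetical; mod_pagespeed only acts on HTML responses):

        curl -s -D- -o /dev/null http://localhost/index.html | grep -i x-mod-pagespeed
        # a header like "X-Mod-Pagespeed: 0.9.x" means the output filter is active on that request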

    Read the article

  • MacBook Pro (OSX Lion) - shutdown automatically before reaching login screen

    - by mkk
    When I try to launch my MacBook Pro I can see a progress bar on the loading screen. It goes to about 1/15 and then the machine shuts down - I cannot even reach the login screen. This happened to me 2 months ago, and I 'fixed' it by formatting my hard drive and installing OS X (Lion) again. This time I think the situation is a little bit different - I am able to enter single-user mode by pressing Cmd+S. If I then type /sbin/fsck -yf, I get the error:

        ** Checking Journaled HFS Plus volume.
        The volume name is Macintosh HD
        ** Checking extents overflow file.
        ** Checking catalog file.
        Invalid node structure (4, 24704)
        ** The volume Macintosh HD could not be verified completely.
        /dev/rdisk0s2 (hfs) EXITED WITH SIGNAL 8

    But when I then type exit, I get to the login screen and I can log in. I have tried a lot of things, including booting from the recovery partition and choosing Disk Utility to repair the disk, but I get an error that it cannot be repaired. I have googled for hours and the only real solution I have found is to buy DiskWarrior, which might fix the issue. Any other suggestions? A secondary question is what causes this issue. I thought the reason was bad sectors, but Smart Utility hasn't found any. I found a suggestion that RAM could cause this kind of issue as well, so I downloaded Rember and ran a memory test - all tests passed. Right now my 'solution' is entering single-user mode and then typing exit, but I am not sure how long that will keep working. Of course I have backed up what I consider important. Thanks for the help in advance! UPDATE: I guess Smart Utility was not very useful; I managed to get an input/output error, which I believe is equivalent to a bad sector.

    Read the article

  • Western Digital My Book: Can't access the data on the drive

    - by Bryan Denny
    My girlfriend has an external hard drive by Western Digital called a My Book. When the external drive is connected, it does not show up as an accessible disk drive on the computer. However, it shows up fine in Device Manager. I can also see it in Disk Management, but the volume is not mapped to a drive letter, nor can I change the drive letter: it only gives me access to "Delete Volume". I would rather not lose the data on the drive if possible. What can I do from here to get to the data? Things I've tried/know: uninstalling the drivers and re-installing them; the device does the same thing when attached to either her Win7 laptop or my Win8 laptop; I don't think there's an issue with the HDD itself - no clicking noises, etc. I ran Western Digital Data LifeGuard Diagnostics (DLGDIAG) and the SMART status was a "PASS", and all of the SMART disk information looked fine. I haven't had the time to run the full diagnostic tests yet, but I do not believe it's a mechanical issue. The hard drive is inside an enclosure; I have not attempted to pry the drive out yet. How can I get Windows to properly detect this drive?
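
    When Disk Management only offers "Delete Volume", it is worth checking from diskpart whether the volume still reports a recognised filesystem and whether a "hidden" or "no default drive letter" attribute is blocking the letter assignment. A hedged sketch (disk and volume numbers are hypothetical; none of these commands writes to the data):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> list volume
        DISKPART> select volume 4
        DISKPART> attributes volume
        DISKPART> attributes volume clear nodefaultdriveletter
        DISKPART> attributes volume clear hidden
        DISKPART> assign letter=E

    If "list volume" shows the WD partition's filesystem as RAW rather than NTFS, assigning a letter will not help and the next step is filesystem recovery rather than diskpart.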

    Read the article

  • How can I debug solutions in Visual Studio 2010 from a network share?

    - by alastairs
    I've recently got a new Mac laptop and am running VS2010 in a Parallels virtual machine. It's mostly working out well for me, but I'm having some problems with debugging specific project types, related to the fact that the projects are being accessed via a network share. Test projects don't run because the test runner can't load the tests' DLL. Web projects fail to run in the Visual Studio mini web server, throwing the following exception: 'An error occurred loading a configuration file: Failed to start monitoring changes to path\to\web.config'. I've spent the evening trawling the web with little luck on this. After reading these two posts, I tried out the usual CasPol changes, but then found this post from one of the early VS2010 betas indicating that CasPol is no longer needed/supported in .NET 4.0 and VS2010. The network share is accessible via both a mapped drive and the UNC path. The virtual machine runs its applications under the administrator account, which appears to have all the necessary permissions on the network share to create, read, write and delete files and folders. I say "appears to have" as I can't view the Security Properties of the appropriate folder via Explorer: the Security tab just isn't present. Has anyone managed to successfully load and debug web and test projects from a network share in VS2010?
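
    For the .NET 4.0 side of this, CasPol was retired in favour of a loader switch: assemblies loaded from a network location are refused unless the hosting process opts in. A hedged sketch is to add loadFromRemoteSources to devenv.exe.config (and to the test runner's .config) and restart Visual Studio; the path below is the default VS2010 install location and may differ on your machine:

        <!-- C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe.config -->
        <configuration>
          <runtime>
            <loadFromRemoteSources enabled="true" />
          </runtime>
        </configuration>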

    Read the article

  • How to bypass the VPN when talking to a VMware guest?

    - by marc esher
    Greetings. Network/VPN n00b question here. I'm running VMware Workstation with a guest Windows 2003 Server that has SQL Server 2000 installed. The sole purpose of this guest is to house SQL Server; it needn't have internet access or access to any resources on the network other than the host. When I launch the Check Point VPN software, the host routes through the company network before it connects to the guest, i.e. it's no longer a direct connection. I assume this is just how things are supposed to work. However, what's happening is that the connection between my host and the SQL Server instance on the guest intermittently drops. It's not consistent, and some databases on the server will be responsive while others aren't. It appears that the databases with the most traffic on the guest (the ones I'm hitting with load tests) are the ones that become intermittently unresponsive. This problem only manifests when the VPN is on; when it's off, I can pound away on this database with no trouble. Thanks for any advice!
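
    One way to keep host-to-guest SQL traffic off the VPN entirely is to put the guest on a host-only VMnet and, if the VPN client still grabs that subnet, pin a more specific route to the virtual adapter. This is only a sketch: the subnet, gateway and interface index below are hypothetical and depend on your VMnet settings, and some VPN clients enforce their own routing that can override it.

        route print                     REM find the VMware host-only adapter's interface index and subnet
        route add 192.168.56.0 mask 255.255.255.0 192.168.56.1 metric 1 if 23
        REM a /24 route through the host-only adapter is more specific than the VPN's routes,
        REM so connections to the guest's SQL port stay local to the workstation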

    Read the article

  • How to set CA cert file for LDAP backend server in smbpasswd configuration

    - by hayalci
    I am having a problem with smbpasswd, an LDAP backend server and SSL/TLS certificates. The client machine that I run smbpasswd on is a Debian Etch machine, and the LDAP server is Sun DS running on Solaris. All of the following occurs on the client. When I disable SSL by setting "ldap ssl = no" in smb.conf, the smbpasswd program works without errors. When I set "ldap ssl = start tls", the following messages are printed by smbpasswd, and there is a long timeout before it asks for a password:

        Failed to issue the StartTLS instruction: Connect error
        Connection to LDAP server failed for the 1 try!
        ..... long delay .....
        New SMB password:
        Retype new SMB password:
        Failed to issue the StartTLS instruction: Connect error
        Connection to LDAP server failed for the 1 try!
        smbpasswd: /tmp/buildd/openldap2-2.1.30/libraries/liblber/io.c:702: ber_get_next: Assertion `0' failed.
        Aborted

    I conducted some tests with "ldapsearch -ZZ". It was not working at first, but after I added the TLS_CACERT line to /etc/ldap/ldap.conf, /etc/libnss-ldap.conf and /etc/pam_ldap.conf, it started working. The relevant TLS sections in all those files are:

        ssl start_tls
        tls_checkpeer no
        tls_cacertfile /path/to/ca-root.pem
        TLS_CACERT /path/to/ca-root.pem

    But the smbpasswd program continues to give the error. I also tried creating an /etc/smbldap-tools/smbldap.conf file with the following content (after consulting the Debian docs for the smbldap-tools package):

        verify="optional"
        cafile="/path/to/ca-root.pem"

    but as far as I can see, smbpasswd comes with the samba-common package and does not use the configuration for the smbldap-tools utilities. My question is: how can I set which SSL CA certificate is used by the smbpasswd program?
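
    Since ldapsearch and smbpasswd may be linked against different LDAP libraries (and therefore read different TLS configuration files), a hedged way to see exactly which ldap.conf and certificate files smbpasswd opens is to trace it. The username is hypothetical and must already exist in the LDAP backend:

        strace -f -e trace=open smbpasswd someuser 2>&1 | egrep 'ldap.*conf|\.pem|cert'
        # whichever ldap.conf path shows up here is the one that needs the TLS_CACERT line;
        # if no certificate file is opened at all, the library is not finding any CA configuration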

    Read the article

  • Performance experiences for running Windows 7 on a Thin-Client?

    - by Peter Bernier
    Has anyone else tried installing Windows 7 on thin-client hardware? I'd be very interested to hear about other people's experiences and what sort of hardware tweaks they had to do to get it to work. (Yes, I realize this is completely unsupported... half the fun of playing with machines and beta/RC versions is trying out unsupported scenarios. :) ) I managed to get Windows 7 installed on a modified Wyse 9450 thin client, and while the performance isn't great, it is usable, particularly as an RDP workstation. Before installing 7, I added another 256 MB of RAM (512 MB total), a 60 GB laptop hard drive and a PCI video card to the 9450 (the latter in order to increase the supported screen resolution). I basically did this to see whether or not it was possible to get 7 installed on such minimal hardware, and what the performance would be. For a 550 MHz processor, I was reasonably impressed. I've been using the machine for RDP for the last couple of days and it actually seems slightly snappier than the default Windows XP Embedded install (although this is more likely the result of the extra hardware). I'll be running some more tests later on, as I'm curious to see in particular whether the streaming video performance will improve. I'd love to hear about anyone's experiences getting 7 to work on extremely low-powered hardware, particularly any sort of tweaks you've discovered to increase performance.

    Read the article

  • Where to get glib-config for Kubuntu?

    - by Carl Smotricz
    I'm trying to compile Midnight Commander on a Kubuntu 9.10 (Karmic) box with no root access. I've set up a directory under $HOME, downloaded the mc source package and the various things required for building, such as autotools, and unpacked the CONTENTS of all those packages into this working directory so that I have the usual ./usr, ./lib, ./etc hierarchy. I manage to get configure through a lot of tests, but I can't seem to fool it into finding glib:

        checking for glib-2.0... checking for glib-config... no
        checking for glib12-config... no
        checking for glib-config... no
        checking for GLIB - version >= 1.2.6... no
        *** The glib-config script installed by GLIB could not be found
        *** If GLIB was installed in PREFIX, make sure PREFIX/bin is in
        *** your path, or set the GLIB_CONFIG environment variable to the
        *** full path to glib-config.
        configure: error: Test for glib failed. GNU Midnight Commander requires glib 1.2.6 or above.

    My system has glib installed:

        /lib/libglib-2.0.so.0
        /lib/libglib-2.0.so.0.2200.3

    ... and I've also downloaded and unpacked the glib packages into my working directory:

        libglib2.0-0_2.22.2-0ubuntu1_i386.deb
        libglib2.0-dev_2.22.2-0ubuntu1_i386.deb

    ... but still the elusive glib-config is nowhere to be found. It's not in any Debian package for Karmic, either. So I'd appreciate any help getting over this hurdle. Please note, again, that I don't have root, so I can't just merrily apt-get stuff.
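
    For what it's worth, glib 2.x stopped shipping glib-config at all; its build metadata lives in pkg-config instead, so a configure script that insists on glib-config is effectively asking for glib 1.2. A hedged sketch of checking what the unpacked 2.x packages can offer from a non-root prefix (the unpack directory is hypothetical):

        export PKG_CONFIG_PATH=$HOME/mcbuild/usr/lib/pkgconfig:$PKG_CONFIG_PATH
        pkg-config --modversion glib-2.0        # confirms pkg-config can see the unpacked glib 2.22
        pkg-config --cflags --libs glib-2.0     # the flags glib-config would have printed, for 2.x
        # if this version of mc's configure only knows glib-config/glib12-config, it wants the old
        # glib 1.2 development files (or a newer mc release whose configure uses pkg-config)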

    Read the article

  • Bash script doesn't open in terminal on reboot

    - by twigg
    Quick overview: I have created a script that reboots the laptop after a set amount of time, for a set number of cycles. I have added the script to the startup applications, and the script does seem to be running in the background, but it never opens a terminal window. Am I missing something? Here is the code (saved in a file called countdown.sh):

        #!/bin/bash
        # check if passed.txt exists; if it does, send to soak test
        if [ -f passed.txt ]; then
            echo reboot has passed $nol cycles
            sleep 5;
            echo Starting soak tests
            sleep 5;
            rm testlog.txt;
            rm passed.txt;
            phoronix-test-suite run quick-test
            exit 0;
        fi
        # check if file testlog.txt exists; if not, create it
        if [ ! -f testlog.txt ]; then
            echo >> testlog.txt;
        fi
        # read reboot file to see how many loops have been completed
        exec < testlog.txt
        nol=0
        while read line
        do
            nol=`expr $nol + 1`
        done
        # start the countdown, x is the time limit
        let x=10;
        while [ $x -gt 0 ]; do
            clear;
            figlet "Rebooting in...";
            figlet $x;
            let x-=1;
            sleep 1;
        done;
        echo reboot success $nol >> testlog.txt;
        shutdown -r now;
        # set how many times the script should shut down the laptop
        reboot_count=1
        # if the number of reboots matches nol, then stop the script:
        # create a new text file called passed.txt
        if [ "$nol" == "$reboot_count" ]; then
            echo reboot passed $nol cycles >> passed.txt;
        fi
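
    Worth noting for the "never opens a terminal" part: Startup Applications runs its commands with no terminal attached, so a script only ever runs in the background unless it is explicitly wrapped in a terminal emulator. A hedged sketch of the startup command (the script path is hypothetical, and -x is the older gnome-terminal spelling of "run this command"):

        gnome-terminal -x bash /home/user/countdown.sh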

    Read the article

  • Why are my SOCKS proxies slow?

    - by vps_newcomer
    I have a Linux VPS, and I have tried a few SOCKS proxy setups to test their performance (all tests used speedtest.net):

        standard SSH tunnel proxy:  0.8 Mbit/s download and 0.1-0.2 Mbit/s upload
        dante-server proxy:         1.3 Mbit/s download and 0.4-0.5 Mbit/s upload

    I am wondering why these speeds are so slow. Is anything shaping them? Is it just the nature of SOCKS proxies? I know that the SSH tunnel has to do encryption and so on, which is why it's slow, but I was surprised to see that the second setup was also quite slow. On the VPS itself I have seen download speeds of 25 MB/s (that's about 200 Mbit/s) and upload speeds of at least 5 MB/s (I haven't got a good enough pipe to test anything faster). The other option I was going to try is to set up OpenVPN and see how that goes; however, I need to find a good tutorial, as it's fairly complicated to set up. So why is it so slow? How can I test to see where the bottleneck is? How can I make it faster? :D
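
    To separate "the proxy is slow" from "the path between the client and the VPS is slow", a hedged first test is raw TCP throughput with iperf, with no proxy involved (the VPS address is hypothetical; iperf must be installed on both ends):

        # on the VPS
        iperf -s
        # on the client: a 30-second test toward the VPS
        iperf -c 203.0.113.5 -t 30
        # if this is also around 1 Mbit/s, the bottleneck is the network path or traffic shaping,
        # not the SOCKS software; if it is much faster, the proxy layer itself is the limit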

    Read the article

  • Internal/External Moodle - DNS

    - by Chief17
    [Network diagram not included.] I have a Moodle (a VLE) setup that I want to be internally and externally accessible. The green route on the diagram is the route I would like the traffic to take when the user is inside the LAN, and the red route is apparently the one it does take. The website has a domain name (like most websites do). From the user PC, if I ping the domain name I get the internal IP of the webserver (because of a hosts-file entry), and if I nslookup the domain name I also get the internal IP of the webserver (because of an A record on my DNS server). Running the same two commands on the webserver gives me the webserver's external IP. (Going well so far.) If I use PHP's gethostbyname() on the Moodle website with the domain name as a parameter (getting PHP/Apache to resolve the hostname), it returns the external IP of the webserver (good news, DNS seems to be doing what I want it to). All things so far seem to be going well. The only thing that is confusing me, and preventing the Moodle single sign-on from working, is the fact that if I get Moodle to show my IP address, it reports an external one (outside my NATting firewall) when it should show an internal IP. This is the issue - any ideas on how to go about resolving it? Any ideas on tests I can perform (I have also tried a tracert, and the request goes directly to the webserver)? Anything? Thanks all!
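
    Since name resolution already checks out, the remaining detail to pin down is which source address the web server actually sees for a LAN client; comparing the two views below makes a NAT hairpin visible if one is happening. A hedged sketch only - the hostname is hypothetical and the server-side command assumes a Linux/Apache host:

        # from the user PC: confirm which address the browser will connect to
        nslookup moodle.example.com
        # on the webserver: watch where established connections to port 80 appear to come from
        netstat -an | grep ':80 '
        # LAN clients showing up with the firewall's public address instead of their own internal
        # address means the traffic is hairpinning through the NAT device, which is exactly what
        # PHP's REMOTE_ADDR (and therefore Moodle) will then report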

    Read the article

  • Monitoring multiple sites on a single server using OpsView

    - by Kev
    We have several web servers, and on each of these servers there can be ~250 web sites. I need to add an HTTP check for each site on each server. Each site has a reserved host header that we know can always be resolved, in the format w10000.hostchecks.mycompany.com, w10020.hostchecks.mycompany.com, w11992.hostchecks.mycompany.com, and so on. What I want is a master ping check on the web server's main IP address and then separate HTTP checks for each of the sites on the server. If the master ping test fails, then I want the HTTP tests to cease until the master ping check is OK again. I had a stab at this and tried to do the following: create a parent host that does a ping check on the server's main IP address (e.g. the server is named WEB0001), and for each of the sites that reside on WEB0001: create a separate host with a primary hostname of wXXXXX.hostchecks.mycompany.com, make WEB0001 the parent host, and add a monitor (an HTTP check to a special URL that is mapped into each site using a virtual directory): -H $HOSTADDRESS$ -u /__hostcheck/IsAlive.aspx -w 5 -c 10 -p 80. However, I find that if I down the parent server (WEB0001), the HTTP checks seem to continue. Am I going about this completely the wrong way?
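
    Parent/child relationships in the underlying Nagios engine mark children UNREACHABLE and suppress notifications when the parent is down, but they do not stop the service checks from executing; suppressing the HTTP checks themselves takes a service dependency. A hedged sketch of what the equivalent raw Nagios object would look like (object names are hypothetical, and in Opsview this would be driven from the UI rather than hand-written config):

        define servicedependency {
            host_name                       WEB0001
            service_description             PING
            dependent_host_name             w10000.hostchecks.mycompany.com
            dependent_service_description   HTTP
            execution_failure_criteria      c,u    ; skip the HTTP check while the ping is CRITICAL/UNREACHABLE
            notification_failure_criteria   c,u
        }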

    Read the article

  • Windows 7 VPN Error 619

    - by TravisPUK
    So I am running Windows 7 Enterprise. This morning I was able to VPN using the built-in VPN (Connect to Work Network, etc.). I then had to change my network's IP address range, and now the VPN will not work: it stalls on the "Verifying user name and password..." message and then returns the 619 error. Anybody know why changing my machine's IP address would cause this problem? Where should I be looking to try to fix this issue? I have tried this on a Windows XP machine that also had its IP address range changed, and it still connects fine using exactly the same connection details. EDIT: The internal network range changed from 192.x.x.x to 10.x.x.x. This was done across the entire Active Directory. All machines are running fine, and the Windows XP machine that works with the same client VPN mentioned above is on the same network. Both the XP and the Win7 machines use DHCP served by the domain controller. The client domain is not performing any IP range checks/restrictions. The VPN endpoint is outside the internal network; the connection is made via the Internet and does not pass through any machines other than the normal domain infrastructure (DNS etc.). It passes through a router, and the router has the relevant VPN passthrough options configured. All internal machines work correctly with other forms of VPN, e.g. Cisco, Sonic, etc. (these were tested on other machines; they are not installed on the Vista or Win7 machines). After further testing, this is occurring on all Win7 and Vista machines (they can no longer connect to the client VPN), while all XP machines can still connect fine. This has been tested on three Vista, two Win7 and five XP machines. All machines are on DHCP, and tests have been done with the firewalls turned both on and off, as well as with fixed IPs. Thanks, Travis

    Read the article

  • Using Toshiba 22EL833 as PC display through HDMI input

    - by Oleg V. Volkov
    I had another Toshiba TV - a 19SL738 - connected to this same PC and video card (GTX 8800) through DVI-to-HDMI (DVI on the PC side, HDMI on the TV), and it worked perfectly at its native resolution of 1360x768. Some time ago I had to change to the 22EL833 and immediately faced a problem: the Windows 7 control panel and the NVIDIA control panel both report the native resolution for the new TV as 1080i, 1920x1280, despite the TV documentation saying that it has the same 1360x768 as the previous one. Practical tests confirmed that the true native resolution is indeed 1360x768: plugging in through DVI-to-VGA and setting a custom resolution through the NVIDIA panel gave clear colors and a crisp image, while setting anything different over either DVI-to-VGA or DVI-to-HDMI produced horribly distorted or squished images, with almost unreadable thin lines (in letters, for example). Now, my problem is that there are no drivers for this TV and I'm unable to get a good image while connecting it through DVI-to-HDMI directly. The best I've achieved is editing the EDID/driver manually to persuade the system that the native resolution should be 1360x768; the image became mostly clear, but the colors took on a strange washed-out effect, with pools of pure yellow, cyan and magenta here and there filling in for other colors. Gradients also became noticeably banded. Somehow it looks like dithering has gone bad, and it makes me suspect that the image is still being down/upscaled several times internally somewhere along the line. How can I connect this TV to the DVI output of my video card to get the best possible clear image, correct colors and the correct native resolution?

    Read the article

  • How to open semicolon delimited CSV-files in US-version of Excel

    - by Holgerwa
    When I double-click a .csv file, it opens in Excel. The CSV files have columns delimited with semicolons (not commas, but also a valid format). On a German Windows/Excel setup, the opened file is displayed correctly: the columns are separated where the semicolons are in the CSV file. But when I do the same on a (US) English Windows/Excel setup, only one column is imported, showing the whole data, including the semicolons, in the first column. (I don't have an English setup available for tests; users have reported the behavior.) I tried changing the list separator value in the Windows regional settings, but that didn't change anything. What can I do to be able to double-click-open those CSV files on an English setup? EDIT: It seems the best solution is not to rely on CSV files in this case. I was hoping there was some formatting for CSV files that makes it possible to use them internationally. The best solution seems to be switching to creating XLS files. Thanks to all for your suggestions and helpful tips!
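
    One locale-independent trick worth testing before abandoning CSV entirely (a hedged sketch; verify it on an actual US-English Excel first): Excel honors a "sep=" hint on the very first line of a CSV file, which overrides the regional list separator when the file is opened by double-click.

        # prepend the separator hint to an existing semicolon-delimited file
        printf 'sep=;\n' | cat - report.csv > report_excel.csv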

    Read the article
