Search Results

Search found 8185 results on 328 pages for 'technical tests'.


  • Methodologies for performance-testing a WAN link

    - by Chopper3
    We have a pair of new diversely-routed 1Gbps Ethernet links between locations about 200 miles apart. The 'client' is a new, reasonably powerful machine (HP DL380 G6, dual E56xx Xeons, 48GB DDR3, RAID 1 pair of 300GB 10krpm SAS disks, W2K8R2-x64) and the 'server' is a decent enough machine too (HP BL460c G6, dual E55xx Xeons, 72GB, RAID 1 pair of 146GB 10krpm SAS disks, dual-port Emulex 4Gbps FC HBA linked to dual Cisco MDS9509s, then on to a dedicated HP EVA 8400 with 128 x 450GB 15krpm FC disks, RHEL 5.3-x64). Using SFTP from the client we're only seeing about 40Kbps of throughput with large (2GB) files. We've performed server-to-'other local server' tests and see around 500Mbps through the local switches (Catalyst 6509s); we're going to do the same on the client side, but that's a day or so away. What other testing methods would you use to prove to the link providers that the problem is theirs?
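    A hedged sketch of a raw-TCP baseline that takes SFTP's encryption overhead out of the picture, using iperf (the host name and option values are placeholders; run the server end at the far site first):

        # far end
        iperf -s

        # near end: 60-second test, 4 parallel TCP streams, larger window
        iperf -c far-end-host -t 60 -P 4 -w 512k

    If iperf shows something near line rate while SFTP stays slow, the bottleneck is the protocol or the hosts rather than the link; if iperf is also slow, that is exactly the evidence the provider will ask for.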

  • Ubuntu on an XPS 14 Ultrabook with mSATA cache and 500GB HD - how to partition for dual boot?

    - by JDS
    I am getting an XPS 14 ( http://www.dell.com/us/p/xps-14-l421x/pd ) and I want to dual-boot Windows and Ubuntu. This machine has a 500GB standard HD and a 32GB mSATA drive that can be used as cache. Does anyone know how it is partitioned? Is the OS installed on the mSATA drive with the data on the big HD? Is there a BIOS controller, or maybe even a Windows driver, that makes the mSATA drive and the 500GB HD appear contiguous? I get the impression that something uses the mSATA invisibly as cache, but I can't find any technical documentation on how that works. My primary concern here is with regard to dual-booting Ubuntu. I want to know whether I need to partition the mSATA separately, or the big HD, or just partition the "magic" contiguous disk space that appears available to the OS.
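    A hedged way to answer the layout question empirically, from an Ubuntu live USB (device names are assumptions; the mSATA may well appear as /dev/sdb):

        # list the partition tables on every disk the firmware exposes
        sudo fdisk -l

    If both disks show up separately, the caching (likely an Intel Smart Response-style setup) is handled by the storage driver rather than by merging the disks, and the Ubuntu partitions would normally go on the 500GB drive.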

  • Fluent NHibernate - how do I specify table schemas when auto-generating tables in SQL CE 4

    - by daffers
    I am using SQL CE as a database for running local and CI integration tests (normally our site runs on normal SQL Server). We are using Fluent NHibernate for our mapping and having it create our schema from our map classes. There are only two classes, with a one-to-many relationship between them. In our real database we use a non-dbo schema. The code would not work with this real database at first, until I added schema names to the Table() methods. However, doing this broke the unit tests with the error:

        System.Data.SqlServerCe.SqlCeException : There was an error parsing the query.
        [ Token line number = 1, Token line offset = 26, Token in error = User ]

    These are the classes and associated map classes (simplified, of course):

        public class AffiliateApplicationRecord
        {
            public virtual int Id { get; private set; }
            public virtual string CompanyName { get; set; }
            public virtual UserRecord KeyContact { get; private set; }

            public AffiliateApplicationRecord()
            {
                DateReceived = DateTime.Now;
            }

            public virtual void AddKeyContact(UserRecord keyContactUser)
            {
                keyContactUser.Affilates.Add(this);
                KeyContact = keyContactUser;
            }
        }

        public class AffiliateApplicationRecordMap : ClassMap<AffiliateApplicationRecord>
        {
            public AffiliateApplicationRecordMap()
            {
                Schema("myschema");
                Table("Partner");
                Id(x => x.Id).GeneratedBy.Identity();
                Map(x => x.CompanyName, "Name");
                References(x => x.KeyContact)
                    .Cascade.All()
                    .LazyLoad(Laziness.False)
                    .Column("UserID");
            }
        }

        public class UserRecord
        {
            public UserRecord()
            {
                Affilates = new List<AffiliateApplicationRecord>();
            }

            public virtual int Id { get; private set; }
            public virtual string Forename { get; set; }
            public virtual IList<AffiliateApplicationRecord> Affilates { get; set; }
        }

        public class UserRecordMap : ClassMap<UserRecord>
        {
            public UserRecordMap()
            {
                Schema("myschema");
                Table("[User]"); // square brackets required, as User is a reserved word
                Id(x => x.Id).GeneratedBy.Identity();
                Map(x => x.Forename);
                HasMany(x => x.Affilates);
            }
        }

    And here is the fluent configuration I am using:

        public static ISessionFactory CreateSessionFactory()
        {
            return Fluently.Configure()
                .Database(
                    MsSqlCeConfiguration.Standard
                        .Dialect<MsSqlCe40Dialect>()
                        .ConnectionString(ConnectionString)
                        .DefaultSchema("myschema"))
                .Mappings(m => m.FluentMappings.AddFromAssembly(typeof(AffiliateApplicationRecord).Assembly))
                .ExposeConfiguration(config => new SchemaExport(config).Create(false, true))
                // Included to deal with a SQL CE issue:
                // http://stackoverflow.com/questions/2361730/assertionfailure-null-identifier-fluentnh-sqlserverce
                .ExposeConfiguration(x => x.SetProperty("connection.release_mode", "on_close"))
                .BuildSessionFactory();
        }

    The documentation on this aspect of Fluent is pretty weak, so any help would be appreciated.
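    One workaround worth sketching (an assumption, not a confirmed fix: SQL CE has no real schema support, so "myschema.Partner" cannot be created there): make the Schema() call conditional on the target database, via a flag the test bootstrap sets. The UseSchema switch below is hypothetical:

        public class AffiliateApplicationRecordMap : ClassMap<AffiliateApplicationRecord>
        {
            // Hypothetical switch, set to false by the test configuration on SQL CE
            public static bool UseSchema = true;

            public AffiliateApplicationRecordMap()
            {
                if (UseSchema)
                    Schema("myschema");
                Table("Partner");
                Id(x => x.Id).GeneratedBy.Identity();
                Map(x => x.CompanyName, "Name");
            }
        }

    The same guard would apply to UserRecordMap and to the DefaultSchema(...) call in the fluent configuration.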

  • Explorer hijacked by a trojan

    - by hsdb
    I managed to catch a newish trojan that my antivirus program wasn't able to detect. I'm usually not that bad at technical stuff regarding computers, but I really don't know what to do with this one anymore. The trojan somehow manages to get Windows to restore it every time I try to remove it by deleting its exe (which has some weird name and lives in Programs, e.g. "asdza/saddasn.exe" (example only)). It also sets an entry in my autostart. It starts a second explorer.exe and tries to connect to some server, obviously trying to steal data (glad I still have my firewall). How can I get rid of this trojan? I really need help. It is new and probably unrecognized by all major virus scanners.

  • Configuring BitLocker on existing Windows Server 2008

    - by neildeadman
    On our file server, we want to enable BitLocker so that we can encrypt a single drive, and I know that we also have to encrypt the drive the system is installed on. In my tests with VMs, I have to create a partition on the same disk Windows is installed on, either before I install Windows or afterwards (using the BitLocker Drive Preparation Tool). This tool shrinks the C: drive to make way for a 1.5GB partition. I'm a little wary of doing this on a live system, so I wanted to see whether it's possible to get BitLocker to use a partition on another drive.
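    For reference, a hedged sketch of driving the preparation step from an elevated command prompt rather than the GUI (bdehdcfg ships with Windows 7/Server 2008 R2; on Server 2008 the Drive Preparation Tool is a separate download, so treat the exact flags as an assumption to verify):

        REM shrink C: and create the system partition BitLocker needs
        bdehdcfg -target default -size 1500

    As far as I know, the boot files have to live on the disk the machine actually boots from, which would explain why the tool never offers a partition on a second drive.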

  • Can't serve files without extension because they "appear to be script" on IIS7.5

    - by madd0
    I created a number of static JSON files, with no extension, in a subfolder of my site. I want to use them for tests. The problem is that IIS refuses to serve them because:

        HTTP Error 404.17 - Not Found
        The requested content appears to be script and will not be served by the static file handler.

    The folder is a subfolder of an ASP.NET application; I can't create an application just for it, nor can I change the parent application's application pool. In fact, I don't have access to the IIS configuration at all, other than through the web.config file in the folder in question. I assume there must be a way to get a web server to serve static files, right?
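    A hedged sketch of a folder-level web.config that maps extensionless files to a static MIME type (the "." pseudo-extension trick; whether it wins over the parent application's handler mappings is an assumption to verify):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <staticContent>
              <mimeMap fileExtension="." mimeType="application/json" />
            </staticContent>
          </system.webServer>
        </configuration>

    If the 404.17 persists, the extensionless requests are probably still being routed to a script handler, and a <handlers> override for the folder would be the next thing to try.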

  • Using Apache Environment Variables to set custom ErrorDocument

    - by Tad
    I've got a set of RewriteCond rules that test for various mobile devices and then set environment variables like "env=device:.iphone" or "env=device:.smartphone" if the user agent matches an iPhone or Android device. I'm now trying to serve custom-styled 404/500 server error pages for each device, by way of the ErrorDocument mechanism. Ideally I'd like to be able to test for a variable being present and then write in a custom ErrorDocument string, but a plain ErrorDocument directive doesn't seem to work conditionally. Any ideas how I can construct if/else tests on environment variables in an Apache conf file?
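    A hedged sketch for Apache 2.4+, where the expression engine can read environment variables directly (the variable and page names follow the question's convention; on Apache 2.2 this <If> syntax does not exist, and pointing ErrorDocument at a script that inspects the variable is the usual fallback):

        <If "reqenv('device') == 'iphone'">
            ErrorDocument 404 /errors/404-iphone.html
        </If>
        <Else>
            ErrorDocument 404 /errors/404.html
        </Else>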

  • How realistic/easy is it to host our own web servers?

    - by morpheous
    It is increasingly looking like we will need to host our own servers, because we need modems physically attached to the server machines. I think we will need a T1 line to our office for starters; I don't know what else is involved other than the obvious redundancy and failover requirements. My questions are: Do we really have to do it ourselves, or can we find a service that allows the modems to be remote as well? If we have to host the servers ourselves, what are the steps (technical and operational) required?

  • How does one delete a directory filled with files and other subdirectories permanently, bypassing the trash, from the command line in OS X?

    - by Jon
    So my command-line skills are a little rusty and I'm having trouble remembering the differences between the meanings of flags in different distros and OSes. I also don't really remember all my technical lingo, so manpages seem really unclear. Basically, I'm on Mac OS X and want to delete a directory along with all of its contents. What I'm mainly concerned about, I suppose, is that it'll delete literally ALL of the references within the directory, including ../ and ../<everything else, including ../'s own ../>, and then just totally screw up my entire system. Which of these do I want to run?

        $ rm -R dir-name/

    or

        $ rm -r
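    For what it's worth, a minimal sketch (the directory name is a placeholder): on OS X, -r and -R are equivalent, and rm never follows the . and .. entries, so it cannot climb out of the directory you name:

        # delete dir-name and everything under it, without prompting
        rm -rf dir-name

    The trailing slash is generally insignificant here, unless dir-name is a symlink to a directory.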

  • Intermediate SSL Certificates on Azure Websites

    - by amhed
    I have successfully configured an Extended Validation certificate on an Azure Website following this article: http://www.windowsazure.com/en-us/documentation/articles/web-sites-configure-ssl-certificate/ The main (non-technical) stakeholder of the web application went to great lengths to validate that our site is secure, and checked the validity of our SSL at http://www.whynopadlock.com/ . That site threw the following error:

        SSL verification issue (Possibly mis-matched URL or bad intermediate cert.).
        Details: ERROR: no certificate subject alternative name matches

    The certificate is installed using IP-based SSL instead of SNI. This is done because some site visitors still use Internet Explorer 8 on Windows XP, which has no support for SNI and throws a security warning. Is my certificate correctly installed? I received three .CRT files from my SSL provider:

        PrimaryIntermediate.crt
        SecondaryIntermediate.crt
        EndCertificate.crt

    This is how I exported our certificate as a .PFX file for Azure:

        openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt
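    A hedged guess at the gap: the PFX above contains only the end-entity certificate, so Azure never serves the intermediates. Bundling them in (file names as listed in the question; whether myserver.crt and EndCertificate.crt are the same file is an assumption) would look like:

        cat PrimaryIntermediate.crt SecondaryIntermediate.crt > intermediates.crt
        openssl pkcs12 -export -out myserver.pfx -inkey myserver.key \
            -in myserver.crt -certfile intermediates.crt

    The "no certificate subject alternative name matches" part, though, points at a name mismatch: worth checking that the certificate's SAN list actually covers the exact hostname being tested.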

  • Is it possible to combine two internet connections to increase performance?

    - by cornjuliox
    I've got a small home network: 3 PCs, plus a laptop or two when the relatives come to visit, connected to a single cable internet connection. As soon as everyone starts using the 'net, the performance starts to suffer, and if the load is heavy enough nobody can get anything done and everyone complains. At one point it was so bad that only one of us could use it at a time. While researching possible solutions to this problem, I heard that some internet cafes utilize two internet connections, possibly from different providers, with some sort of router that lets them split the traffic between both, with online games going through one and web traffic going through the other. Is this possible? What is the technical term for it, and can/should it be applied to a home network setup, or is there another solution to this problem?

  • Login sometimes failing immediately after restoring a database

    - by Ian Ringrose
    We have a set of automated tests that restore a database and then run some .NET code against it. Sometimes, immediately after the database is restored, the login from ADO.NET fails. If I re-run the test, the restore and login both work. The restored database looks fine when viewed in Management Studio. This is only a problem on some machines. We are using SQL Server 2008. Is there a known issue with a database restore 'returning' a very short time before the restored database is actually up and running?
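    A hedged workaround sketch for the test harness: poll until the restored database reports ONLINE before opening any connections (the database name is a placeholder; run this against master on the same server):

        WHILE CAST(DATABASEPROPERTYEX(N'MyTestDb', 'Status') AS nvarchar(128)) <> N'ONLINE'
        BEGIN
            WAITFOR DELAY '00:00:01';
        END

    The CAST matters because DATABASEPROPERTYEX returns sql_variant, which does not compare cleanly with a plain string literal.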

  • What has changed to make it possible to develop USB 3.0?

    - by RoboShop
    I know the transfer speeds are vastly different, but what I don't understand is why USB 3.0 is faster. And why couldn't they have implemented USB 3.0 when they released 1.0? What technical breakthrough was required to get transfer speeds that fast? Was it cost? The capabilities of computers, i.e. they couldn't read the data fast enough (although USB is still well below hard drive transfer speeds)? An engineering breakthrough, some new material that could transfer at a faster rate? Was it in the cable itself? In the hardware?

  • How do I negotiate for colo space?

    - by randy melder
    I guess this isn't a technical question, but it's definitely something IT teams deal with, so here goes: I'm looking at getting a rack at a local colocation facility, and I'm weighing the options versus building out on a cloud platform. We are REALLY low on bandwidth and power: there's a total of six hosts for the whole operation. You can assume we use <= 10 amps of power and <= 2Mbps at the 95th percentile. Do you have any advice for getting the best deal?

  • How to make the jump from consumer support to enterprise support?

    - by Zac Cramer
    I am currently a high-level consumer break/fix technician responsible for about 300-400 repairs a month. I am good at my job, but bored, and I want to move into the enterprise side of my company, dealing with Server 2008 R2, Exchange, and switches and routers that cost more than I make in a month. How do I make this transition? What's the best thing to learn first? Is there a standard trajectory for making this leap from consumer to business? I am employed full time, so going back to school is not a great option, but I have no life, so spending my nights and weekends reading and practicing is totally within my realm. I am basically overwhelmed by the number of things to learn and am looking for any advice you may have on the best way to proceed. PS - I apologize if this is not quite the right forum for this; I know it's not a technical question exactly, but I also know the sorts of people I want to answer it are reading this site.

  • Trigger ZFS dedup one-off scan/rededup

    - by Jake Wharton
    I have a ZFS filesystem which has been running for some time, and I recently had the opportunity to upgrade it (finally!) to the latest ZFS version. Our data doesn't scream dedup, but I firmly believe, based on small tests, that we could gain anywhere from 5-10% of our space back for free by utilizing it. I have enabled dedup on the filesystem and new files are slowly being dedupified, but the majority (95%+) of our data already existed on the filesystem beforehand. Short of moving the data off-pool and then recopying it back, is there any way to trigger a dedup scan of existing data? It doesn't have to be asynchronous or live. (And FYI, there isn't enough room on the pool to copy the entire filesystem to another one and then just switch the mounts.)
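    For completeness, a hedged sketch of the only mechanism I'm aware of (dedup in ZFS applies at write time, so existing blocks have to be rewritten): recopy the data in chunks small enough to fit the pool's free space. The dataset mount point and paths are placeholders:

        # rewrite data directory by directory so it passes through the
        # (now dedup-enabled) write path; needs free space for one chunk at a time
        cd /tank/fs
        for d in */ ; do
            cp -pr "$d" "${d%/}.tmp" &&
            rm -rf "$d" &&
            mv "${d%/}.tmp" "${d%/}"
        done

    Note that snapshots pin the old blocks, so any existing snapshots would have to be destroyed before the space actually comes back.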

  • Refactoring a leaf class to a base class, and keeping it also an interface implementation

    - by elcuco
    I am trying to refactor working code. The code basically derives an implementation from an interface class, and I want to use this implementation outside the original project as a standalone class. However, I do not want to create a fork, and I want the original project to be able to take out their implementation and use mine. The problem is that the hierarchy structure is very different and I am not sure this would work. I also cannot use the original base class in my project, since in reality it's quite entangled in the project (too many classes and includes) and I only need to take care of a subdomain of the problems the original project addresses. I wrote this code to test an idea for implementing this, and while it works, I am not sure I like it:

        #include <iostream>

        // Original code is:
        //     IBase -> Derived1
        // I need to refactor Derived2 to be both an independent class,
        // and programmers should also be able to use the interface class:
        //     Derived2 -> MyClass + IBase
        //     MyClass

        class IBase {
        public:
            virtual void printMsg() = 0;
        };

        ///////////////////////////////////////////////////
        class Derived1 : public IBase {
        public:
            virtual void printMsg() {
                std::cout << "Hello from Derived 1" << std::endl;
            }
        };

        //////////////////////////////////////////////////
        class MyClass {
        public:
            virtual void printMsg() {
                std::cout << "Hello from MyClass" << std::endl;
            }
        };

        class Derived2 : public IBase, public MyClass {
            virtual void printMsg() {
                MyClass::printMsg();
            }
        };

        class Derived3 : public MyClass, public IBase {
            virtual void printMsg() {
                MyClass::printMsg();
            }
        };

        int main() {
            IBase   *o1 = new Derived1();
            IBase   *o2 = new Derived2();
            IBase   *o3 = new Derived3();
            MyClass *o4 = new MyClass();
            o1->printMsg();
            o2->printMsg();
            o3->printMsg();
            o4->printMsg();
            return 0;
        }

    The output is working as expected (tested using gcc and clang, two different C++ implementations, so I think I am safe here):

        [elcuco@pinky ~/src/googlecode/qtedit4/tools/qtsourceview/qate/tests] ./test1
        Hello from Derived 1
        Hello from MyClass
        Hello from MyClass
        Hello from MyClass
        [elcuco@pinky ~/src/googlecode/qtedit4/tools/qtsourceview/qate/tests] ./test1.clang
        Hello from Derived 1
        Hello from MyClass
        Hello from MyClass
        Hello from MyClass

    The question is: my original code was

        class Derived3 : public MyClass, public IBase {
            virtual void IBase::printMsg() {
                MyClass::printMsg();
            }
        };

    which is what I want to express, but this does not compile. I must admit I do not fully understand why the working code works, as I would expect the new method Derived3::printMsg() to be an implementation of MyClass::printMsg() and not IBase::printMsg() (even though this is what I do want). How does the compiler choose which method to re-implement when two "sister classes" have the same virtual function name? If anyone has a better way of implementing this, I would like to know as well :)

  • OS X Login Authentication Against Leopard Server

    - by mattdwen
    I am doing a few tests with OS X Server before I have to do a deployment in a few months. I have configured Open Directory and created a few users. I've configured Directory Utility on a 10.5 client, but login authentication doesn't work the way I would expect: I'd expect to be able to use the username/password of any user created in Open Directory to log into the client. Instead, it appears I need to create a local user, which is then synced with a directory user using Directory Utility. Alternatively, if I add an Active Directory config to the client, I can use any AD user, as I would expect. Am I hoping for the impossible, or is something likely wrong with the configuration?

  • Update saved password for basic authentication using a script

    - by Kalamane
    I have a website that uses basic authentication as described on this webpage. Each of the computers I manage has the password saved in its browser; there is only one username and password for this. After someone logs in to the site this way, they are presented with their individual username and password prompt as part of the web page. The purpose of the initial username/password is to discourage non-technical employees who aren't supposed to be using the page from even viewing it. So far, when we've had to change this password, I've manually gone to each computer and updated the saved password. I'm now writing a startup script that configures other aspects of these systems so that I can maintain them more easily, and I'd like it to update the saved password as well. The operating system on these machines is Windows XP SP3, and the browsers used to access the site are IE8 and IE9. How can I update the saved basic authentication information for a website via a script?

  • Kernel warning: disk error for command write - Solaris SVM

    - by help_me
    Recently this warning came up in my message logs:

        scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@0,0 (sd0):
        Oct 27 00:14:44  Error for Command: write(10)  Error Level: Retryable
        Oct 27 00:14:44  scsi: [ID 107833 kern.notice]  Requested Block: 101515828  Error Block: 101515828
        Oct 27 00:14:44  scsi: [ID 107833 kern.notice]  Vendor: SEAGATE  Serial Number: 0441B9B5H
        Oct 27 00:14:44  scsi: [ID 107833 kern.notice]  Sense Key: Hardware Error
        Oct 27 00:14:44  scsi: [ID 107833 kern.notice]  ASC: 0x19 (defect list error), ASCQ: 0x0, FRU: 0x2

    This shows signs of a failing disk, in my opinion, though I have not seen the message re-occur. This is on a Solaris 9 SPARC system, a V240. The disks are managed by SVM, and "metadb" is showing the flags as "a". Are there any tests or indications to check whether the disk is actually failing, or could that error message have been triggered by something else? Thank you!
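    A hedged first pass at confirming the drive's health from the OS (all standard Solaris/SVM commands; the device name follows the log above):

        iostat -En    # per-device soft/hard/transport error counters; watch sd0
        metastat      # state of every SVM metadevice (look for 'Needs maintenance')
        metadb -i     # state database replica flags, with a legend

    Rising hard-error counts for sd0 in iostat -En over a few days would be the strongest sign the Seagate is on its way out.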

  • Why don't I just build the whole web app in Javascript and Javascript HTML Templates?

    - by viatropos
    I'm getting to the point in an app where I need to start caching things, and it got me thinking... In some parts of the app, I render table rows (jqGrid, SlickGrid, etc.) or fancy div rows (like in the New Twitter) by grabbing pure JSON and running it through something like Mustache or jquery.tmpl. In other parts of the app, I just render the info in pure HTML (server-side HAML templates), and if there's searching/paginating, I just go to a new URL and load a new HTML page.

    Now the problem is caching and maintainability. On one hand I'm thinking: if everything were built using JavaScript HTML templates, my app would serve just an HTML layout/shell and a bunch of JSON. If you look at the Facebook and Twitter HTML source, that's basically what they're doing (95% JSON/JavaScript, 5% HTML). This would mean my app only needed to cache JSON (pages, actions, and/or records), so you'd hit the cache whether you were a remote API developer accessing a JSON API or the straight web app. That is, I wouldn't need two caches, one for the JSON and one for the HTML. That seems like it'd cut my cache store down by half and streamline things a bit.

    On the other hand, from what I've seen and experienced, generating static HTML server-side and caching that seems to be much better performance-wise cross-browser; you get the graphics instantly and don't have to wait that split second for JavaScript to render them. Stack Overflow seems to do everything in plain HTML, and you can tell: everything appears at once. Notice how on twitter.com the page is blank for 0.5-1 seconds while the JavaScript renders the JSON. The downside is that for anything dynamic (like endless scrolling or grids) I'd have to create JavaScript templates anyway, so I'd end up with server-side HAML templates, client-side JavaScript templates, and a lot more to cache.

    My question is: is there any consensus on how to approach this? What are the benefits and drawbacks, from your experience, of mixing the two versus going 100% with one over the other?

    Update: some reasons that factor into why I haven't yet decided to go with 100% JavaScript templating:

    1. Performance. I haven't formally tested this, but from what I've seen, raw HTML renders faster and more fluidly than JavaScript-generated HTML cross-browser. Plus, I'm not sure how mobile devices handle dynamic HTML performance-wise.

    2. Testing. I have a lot of integration tests that work well with static HTML, so switching to JavaScript-only would require (a) more focused pure-JavaScript testing (Jasmine) and (b) integrating JavaScript into Capybara integration tests. This is just a matter of time and work, but it's probably significant.

    3. Maintenance. Getting rid of HAML. I love HAML: it's easy to write, it prints pretty HTML, it keeps code clean, it makes maintenance easy. Going with JavaScript, there's nothing as concise.

    4. SEO. I know Google handles the AJAX /#!/path scheme, but I haven't grasped how it affects other search engines or how older browsers handle it. It seems like it'd require significant setup.
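    For concreteness, a minimal sketch of the client-side half being weighed here, using mustache.js (the template, element IDs, and field names are made up):

        <script id="row-tmpl" type="text/x-mustache">
          <tr><td>{{name}}</td><td>{{email}}</td></tr>
        </script>
        <script src="mustache.js"></script>
        <script>
          // Render grid rows from a JSON payload instead of server-side HTML
          var tmpl = document.getElementById('row-tmpl').innerHTML;
          var data = [{name: 'Ada',  email: 'ada@example.com'},
                      {name: 'Alan', email: 'alan@example.com'}];
          var rows = data.map(function (r) { return Mustache.render(tmpl, r); }).join('');
          document.getElementById('grid-body').innerHTML = rows;
        </script>

    The trade-off in the question is exactly this: the JSON payload is cacheable once for both the API and the UI, but the browser pays the render cost on every view.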

  • ASUS EeePC 1001PX, hard disk clicking in Ubuntu Maverick

    - by MeanEYE
    I just received my new Asus EeePC 1001PX netbook. After installing Ubuntu 10.10 on it, I've noticed that the hard drive makes a clicking noise. It isn't loud or constant (it only sounds occasionally, and when the disk is not reading or writing anything). Another strange thing: this only happens on battery power; the moment I plug in AC power the clicking stops. Additionally, I noticed that in the BIOS I hear the click only once, and the same happens when I boot Ubuntu from USB, which led me to believe the problem is within the operating system. I did all the surface scans and SMART tests and everything seems to be fine. The noise sounds like the heads trying to park themselves, so I tried disabling the "spin down" option in Power Management, but it didn't help. Any idea?
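    A hedged thing to try: that park/unpark click is usually the drive's own Advanced Power Management kicking in on battery, and it can be tamed from Ubuntu (the device name /dev/sda is an assumption; 254 means "least aggressive power saving"):

        sudo hdparm -B 254 /dev/sda

    If the clicking stops, the setting can be made permanent via /etc/hdparm.conf; comparing the Load_Cycle_Count SMART attribute before and after would confirm the diagnosis.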

  • Sendmail Alias for Nonlocal Email Account

    - by Mark Roddy
    I admin a server which runs a number of web applications for a software dev team (source control, bug tracking, etc.). The server has sendmail running solely as a transport to the departmental email server, over which I have no control. One person is still in the department but no longer on the dev team, so I need to configure the transport agent to redirect all outgoing email for them (which would be coming from these applications) to the person who has taken their place. I added an entry in /etc/aliases like so:

        [email protected]: [email protected]

    But when I run /etc/init.d/sendmail newaliases I get the following error:

        /etc/mail/aliases: line 32: [email protected]... cannot alias non-local names

    So clearly I'm doing something I shouldn't. Is there a way to get aliases to work with non-local names, or alternatively is there a way to accomplish my goal of redirecting this user's outgoing mail to another address? Technical specs, if they matter: Ubuntu 6.06, sendmail 8.13 (Ubuntu-provided package).
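    A hedged sketch of the usual mechanism for non-local rewrites, the virtusertable (this assumes the sendmail.mc can be rebuilt with FEATURE(`virtusertable') and the department domain listed via VIRTUSER_DOMAIN; aliases really do only accept local names):

        # /etc/mail/virtusertable
        [email protected]    [email protected]

        # rebuild the database and restart sendmail
        makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
        /etc/init.d/sendmail restart

    If touching the sendmail config is off the table, changing the recipient address inside the web applications themselves may honestly be less work.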

  • Overloading '-' for array subtraction

    - by Chris Wilson
    I am attempting to subtract two int arrays, stored as class members, using an overloaded - operator, but I'm getting some peculiar output when I run tests. The overload definition is:

        Number& Number::operator-(const Number& NumberObject)
        {
            for (int count = 0; count < NumberSize; count ++)
            {
                Value[count] -= NumberObject.Value[count];
            }
            return *this;
        }

    Whenever I run tests on this, NumberObject.Value[count] always seems to be returning a zero value. Can anyone see where I'm going wrong with this? The line in main() where the subtraction is being carried out is:

        cout << "The difference is: " << ArrayOfNumbers[0] - ArrayOfNumbers[1] << endl;

    ArrayOfNumbers contains two Number objects. The class declaration is:

        #include <iostream>
        using namespace std;

        class Number
        {
        private:
            int Value[50];
            int NumberSize;

        public:
            Number();                        // Default constructor
            Number(const Number&);           // Copy constructor
            Number(int, int);                // Non-default constructor
            void SetMemberValues(int, int);  // Manually set member values
            int GetNumberSize() const;       // Return NumberSize member
            int GetValue() const;            // Return Value[] member
            Number& operator-=(const Number&);
        };

        inline Number operator-(Number Lhs, const Number& Rhs);
        ostream& operator<<(ostream&, const Number&);

    The full class definition is as follows:

        #include <iostream>
        #include "../headers/number.h"
        using namespace std;

        // Default constructor
        Number::Number() {}

        // Copy constructor
        Number::Number(const Number& NumberObject)
        {
            int Temp[NumberSize];
            NumberSize = NumberObject.GetNumberSize();
            for (int count = 0; count < NumberObject.GetNumberSize(); count ++)
            {
                Temp[count] = Value[count] - NumberObject.GetValue();
            }
        }

        // Manually set member values
        void Number::SetMemberValues(int NewNumberValue, int NewNumberSize)
        {
            NumberSize = NewNumberSize;
            for (int count = NewNumberSize - 1; count >= 0; count --)
            {
                Value[count] = NewNumberValue % 10;
                NewNumberValue = NewNumberValue / 10;
            }
        }

        // Non-default constructor
        Number::Number(int NumberValue, int NewNumberSize)
        {
            NumberSize = NewNumberSize;
            for (int count = NewNumberSize - 1; count >= 0; count --)
            {
                Value[count] = NumberValue % 10;
                NumberValue = NumberValue / 10;
            }
        }

        // Return the NumberSize member
        int Number::GetNumberSize() const
        {
            return NumberSize;
        }

        // Return the Value[] member
        int Number::GetValue() const
        {
            int ResultSoFar;
            for (int count2 = 0; count2 < NumberSize; count2 ++)
            {
                ResultSoFar = ResultSoFar * 10 + Value[count2];
            }
            return ResultSoFar;
        }

        Number& operator-=(const Number& Rhs)
        {
            for (int count = 0; count < NumberSize; count ++)
            {
                Value[count] -= Rhs.Value[count];
            }
            return *this;
        }

        inline Number operator-(Number Lhs, const Number& Rhs)
        {
            Lhs -= Rhs;
            return Lhs;
        }

        // Overloaded output operator
        ostream& operator<<(ostream& OutputStream, const Number& NumberObject)
        {
            OutputStream << NumberObject.GetValue();
            return OutputStream;
        }
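    Two things in the posted code jump out, so here is a hedged sketch of a fix (observations on the code above, not a confirmed diagnosis): GetValue() never initializes ResultSoFar, and the operator-= definition is missing its Number:: qualifier, so as posted it never becomes the member the class declares:

        // Initialize the accumulator before use
        int Number::GetValue() const
        {
            int ResultSoFar = 0;  // was uninitialized in the posted code
            for (int count = 0; count < NumberSize; count ++)
                ResultSoFar = ResultSoFar * 10 + Value[count];
            return ResultSoFar;
        }

        // Qualify the definition so it matches the declared member
        Number& Number::operator-=(const Number& Rhs)
        {
            for (int count = 0; count < NumberSize; count ++)
                Value[count] -= Rhs.Value[count];
            return *this;
        }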

  • Ping from Windows 7 gets no reply but sets errorlevel to 0

    - by Doron
    From a Windows 7 machine, I ping the IP address of a turned-off machine:

        C:\>ping 192.168.1.222

        Pinging 192.168.1.222 with 32 bytes of data:
        Reply from 192.168.1.222: Destination host unreachable.
        Reply from 192.168.1.222: Destination host unreachable.
        Reply from 192.168.1.222: Destination host unreachable.

        Ping statistics for 192.168.1.222:
            Packets: Sent = 3, Received = 3, Lost = 0 (0% loss)

    Even though there is no real reply, the errorlevel is set to 0. What I am trying to do is figure out whether a remote machine is replying to ping; one of my tests is to turn the machine off and ping it. For some reason, ping still sets errorlevel to 0.
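    A hedged batch-file workaround: "Destination host unreachable" counts as a received packet, which is why ping exits 0; filtering for a genuine echo reply is the usual trick (the IP is the one from the question):

        ping -n 1 192.168.1.222 | find "TTL=" > nul
        if errorlevel 1 (echo host is DOWN) else (echo host is UP)

    Real replies contain "TTL=", while unreachable notices don't, so find sets errorlevel 1 exactly when no genuine reply came back.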
