Search Results

Search found 25519 results on 1021 pages for 'virtual machine'.

Page 219/1021 | < Previous Page | 215 216 217 218 219 220 221 222 223 224 225 226  | Next Page >

  • Visual Studio 2008 Build question x64 vs x86

    - by Brett
    Hi everyone, I have written an application on my x64 machine in Visual Studio 2008. The application will be sent to someone, and I have two questions that I need answers to. First, what requirements will they need to have installed? I am assuming the .NET 3.5 redistributable; is there anything else? (The application does not call any external dependencies.) Second, and this is the real question I can't find the answer to: I have developed and built the application on my x64 machine using the "Any CPU" option (as opposed to x64 or x86 specifically). Will this run on a 32-bit machine? (I don't have one to test.) Or do I need to build it specifically for x86 in order to run it on a 32-bit machine? Many thanks, Brett
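    For what it's worth, a small illustrative sketch (not from the question) that reports which mode an executable is actually running in: an Any CPU build loads as a 64-bit process on x64 Windows and as a 32-bit process on x86 Windows, and IntPtr.Size reflects that at runtime.

        // Minimal bitness check; IntPtr.Size is 8 in a 64-bit process, 4 in a 32-bit one.
        using System;

        class BitnessCheck
        {
            static void Main()
            {
                Console.WriteLine("Running as a {0}-bit process", IntPtr.Size * 8);
            }
        }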

    Read the article

  • Why does Process Explorer cause highly targeted failure of some applications / basic UI functions in a high-power EC2 Windows instance?

    - by Dan Nissenbaum
    Update: I have determined that Process Explorer itself - the program I am using to debug a performance issue - seems to be the cause of the issue. See the note, with an updated question, at the end.

    I am running a high-power (cc2.8xlarge) Amazon AWS EC2 Windows instance off of a boot EBS volume, provisioned at 2500 PIOPS, which was created from a snapshot of a previous boot volume. My purpose with the instance is to use it as a development workstation with many developer tools installed, such as Visual Studio, a local XAMPP stack, etc. I have upwards of 40 programs installed on the machine. As a development machine, the instance often works quite well. The RDP lag is adequately small, and I have used it for hours on end without problems for some of my most intense development tasks. As a result, I have just purchased a reserved instance, and I opted to rebuild my development machine starting from scratch with a Windows Server 2012 AMI.

    After having installed all of my desired/required applications for development over this past week, the machine again often seems to work well, and I have worked for up to an hour at a time without problems doing heavy development work. However, I continue to run into catastrophic OS usability issues that may prevent me from being able to rely on this machine as a development machine. I would like to track down the source of the problem, if there is an easily identifiable source. (Update: I have tracked down the source to be Process Explorer, the very program I was using to debug the problem. See the update at the end.)

    The issues are as follows (these are some primary examples). Some applications, after a period of adequate responsiveness, suddenly begin to respond very, very slowly to basic user interface actions such as clicking on menus and pressing Ctrl-Tab to switch between open documents. Two examples are UltraEdit and PhpEd. It typically takes ~2 seconds for a menu to appear, and ~4 seconds to switch between open documents. Additionally, insertion point motion in the editor is lagged by upwards of ~2 seconds.

    Process Explorer, which I am using to help debug the problem, seems to run acceptably for a couple of minutes, but on multiple occasions Process Explorer itself has hung completely, at the same time as the problems noted above. When it hangs, it is 100% unresponsive. Clicking on its taskbar icon neither brings it to the top nor sends it behind, and its viewable area is filled with nothing but a region partially containing pure white and partially containing incomplete, unreadable window widgets that never change. Waiting 10 minutes does not clear the problem. Attempting to force-quit Process Explorer by right-clicking on its taskbar icon and choosing "Close Window" takes about 5 full minutes to complete (Process Explorer itself can't be used to exit Process Explorer, and it is registered as a Task Manager substitute).

    Other programs work just fine during this time. For example, Chrome tabs flip very quickly back and forth, menus pop open instantly, web pages load quickly, and typing in forms/web applications inside the browser works promptly. Another example of an application that works crisply is FileMaker - its menus open instantly, and switching views in this application occurs promptly. Other applications also work without issue, and switching between applications occurs promptly as well. It is only a handful of applications that exhibit the problem, with some primary examples given above.
    At first I thought that EBS IOPS might be the problem. Therefore, I ran Performance Monitor and watched the "Disk Transfers/sec" counter in real time. At no point did this measure come anywhere close to hitting the 2500 PIOPS provisioned for the EBS volume. The RAM was also well under the limit (~10 GB used out of 60 GB). I did notice that one CPU core (out of 32 logical cores) was pegged at 100% (i.e., ~3.1% of total CPU) during the problematic periods. This seems to indicate that a single CPU core is handling the menus / flipping between open documents (for some applications only) / managing the Process Explorer user interface, and that this single core was hosed for some reason during the problematic periods.

    Also note that I have a desktop workstation (Windows 7) that I also use as a development machine, via a remote connection, with a nearly identical set of programs installed, and this desktop workstation does not exhibit any of the problems I've discussed above. I have been using it heavily for well over a year now. Any suggestions regarding either the source of the problem, or steps I might take to investigate it, would be appreciated. Thanks.

    Note: After extensive testing and investigation, I have noticed that when I quit Process Explorer, the problem vanishes and system performance returns to normal, and then reappears quickly when I run Process Explorer again (again, the performance problems only appear for a subset of applications - other applications work perfectly fine during the same period). My question is therefore (thankfully) more specific: Why does Process Explorer cause highly targeted failure of some applications (including itself) and basic UI functions, in a high-power EC2 Windows instance?
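    As an aside, a rough sketch (not part of the question) of sampling the same counters programmatically with System.Diagnostics, in case continuous logging is easier than watching Performance Monitor; "PhysicalDisk", "Disk Transfers/sec", "_Total", "Processor" and "0" are standard Windows counter names, but instance names can vary per machine.

        // Samples disk transfers/sec and the load on CPU core 0 once per second.
        using System;
        using System.Diagnostics;
        using System.Threading;

        class CounterSampler
        {
            static void Main()
            {
                var disk = new PerformanceCounter("PhysicalDisk", "Disk Transfers/sec", "_Total");
                var core0 = new PerformanceCounter("Processor", "% Processor Time", "0");
                while (true)
                {
                    Console.WriteLine("Disk transfers/sec: {0,6:F0}   Core 0: {1,5:F1}%",
                        disk.NextValue(), core0.NextValue());
                    Thread.Sleep(1000);
                }
            }
        }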

    Read the article

  • Can't access web service when connected to the network :: HTTP 407

    - by Ian
    Hi All, I have a console application that communicates with a web service. Both of them are on the same machine. When I am accessing the web service with the LAN disabled, it connects without a problem. But if the LAN is enabled and connected to our office network, I receive this error: "HTTP 407 Proxy Authentication required - The ISA Server requires authorization to fulfill the request. Access to the Web Proxy service is denied." We've been hunting the source of the problem for three days now. We have tried everything that we can think of. Any ideas what's causing the problem?
    Additional notes:
    - The machine is in a Workgroup setup but with a DNS suffix (computer.local). When accessing the web service, we type the address as "http://machine.computer.local/service.asmx"
    - I talked to the IT guys and they said that we don't have an ISA server installed
    - There is no "proxy" set in IE
    - The machine is in mint condition.
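    For reference, a hedged sketch of forcing the proxy behaviour from the client side; "ServiceProxy" and "HelloWorld" are placeholders for the class and method generated from service.asmx, not names from the question. Since the service is on the same machine, one common workaround is to hand the call an empty proxy so it goes direct instead of through the ISA/Web Proxy route; the commented alternative passes the caller's Windows credentials to the proxy instead.

        using System.Net;

        class ServiceCaller
        {
            static void CallService()
            {
                var client = new ServiceProxy();   // hypothetical ASMX-generated proxy class
                client.Url = "http://machine.computer.local/service.asmx";
                client.Proxy = new WebProxy();     // empty proxy = connect directly, bypassing the proxy server
                // Alternative: authenticate against the proxy instead of bypassing it:
                // client.Proxy = WebRequest.DefaultWebProxy;
                // client.Proxy.Credentials = CredentialCache.DefaultNetworkCredentials;
                client.HelloWorld();               // hypothetical web method
            }
        }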

    Read the article

  • C++ vs Matlab vs Python as a main language for Computer Vision Research

    - by Hough
    Hi all, firstly, sorry for a somewhat long question, but I think that many people are in the same situation as me and hopefully they can also gain some benefit from this. I'll be starting my PhD very soon, which involves the fields of computer vision, pattern recognition and machine learning. Currently, I'm using the opencv (2.1) C++ interface and I especially like its powerful Mat class, the overloaded operations available for matrix and image operations, and the seamless transformations. I've also tried (and implemented many small vision projects) using the opencv python interface (new bindings; opencv 2.1) and I really enjoy python's ability to integrate opencv, numpy, scipy and matplotlib. But recently, I went back to the opencv C++ interface because I felt that the official new python bindings were not stable enough and no overloaded operations are available for matrices and images, not to mention the lack of machine learning modules and slow speeds in certain operations. I've also used Matlab extensively in the past, and although I've used mex files and other means to speed up the program, I just felt that Matlab's performance was inadequate for real-time vision tasks, be it for fast prototyping or not. When the project becomes larger and larger, more and more tasks have to be re-written in C and compiled into Mex files, and Matlab becomes nothing more than a glue language.
    Here are the sub-questions:
    1) For carrying out research in these fields (machine learning, vision, pattern recognition), what is your main or ideal programming language for rapid prototyping of ideas and testing algorithms contained in papers?
    2) For computer vision research work, can you list the pros and cons of using the following languages? C++ (with opencv + gsl + svmlib + other libraries) vs Matlab (with all its toolboxes) vs python (with the incomplete opencv bindings + numpy + scipy + matplotlib).
    3) Are there computer vision PhD/postgrad students here who are using only C++ (with all its available libraries, including opencv) without even needing to resort to Matlab or python? In other words, given the currently existing computer vision and machine learning libraries, is C++ alone sufficient for fast prototyping of ideas?
    4) If you're currently using Java or C# for your research, can you list the reasons why they should be used and how they compare to other languages in terms of available libraries?
    5) What is the de facto vision/machine learning programming language and its associated libraries used in your research group?
    Thanks in advance. Edit: As suggested, I've opened the question to both academic and non-academic computer vision/machine learning/pattern recognition researchers and groups.

    Read the article

  • Unique ID Defined by Most-Derived Class accessible through Base Class

    - by Narfanator
    Okay, so, the idea is that I have a map of "components", which inherit from componentBase, and are keyed on an ID unique to the most-derived*. Only, I can't think of a good way to get this to work. I tried it with the constructor, but that doesn't work (maybe I did it wrong). The problem with any virtual, etc., inheritance tricks is that the user has to implement them at the bottom, which can be forgotten and makes it less... clean.
    *Right phrase? If - is inheritance; foo is most-derived: foo-foo1-foo2-componentBase
    Here's some code showing the problem, and why CRTP can't cut it (no, it's not legit code, but I'm trying to get my thoughts down):

        #include <map>

        class componentBase
        {
        public:
            virtual static char idFunction() = 0;
        };

        template <class T>
        class component : public virtual componentBase
        {
        public:
            static char idFunction() { return reinterpret_cast<char>(&idFunction); }
        };

        class intermediateDerivations1 : public virtual component<intermediateDerivations1> { };
        class intermediateDerivations2 : public virtual component<intermediateDerivations2> { };

        class derived1 : public intermediateDerivations1 { };
        class derived2 : public intermediateDerivations1 { };

        // How the unique ID gets used (more or less)
        std::map<char, componentBase*> TheMap;

        template<class T>
        void addToMap(componentBase * c)
        {
            TheMap[T::idFunction()] = c;
        }

        template<class T>
        T * getFromMap()
        {
            return TheMap[T::idFunction()];
        }

        int main()
        {
            // In each case, the key needs to be different.
            // For these, the CRTP should do it:
            getFromMap<intermediateDerivations1>();
            getFromMap<intermediateDerivations2>();
            // But not for these.
            getFromMap<derived1>();
            getFromMap<derived2>();
            return 0;
        }

    More or less, I need something that is always there, no matter what the user does, and has a sortable value that's unique to the most-derived class. Also, I realize this isn't the best-asked question; I'm actually having some unexpected difficulty wrapping my head around it in words, so ask questions if/when you need clarification.

    Read the article

  • CustomErrors section does not handle 404 properly on production server

    - by Adrian Grigore
    Hi, I'd like my ASP.NET MVC application to redirect failed requests to matching action methods of a certain controller. This works fine on my development machine running Windows 7, but not on my production machine running Windows 2008 R2. I set up my web.config as follows:

        <customErrors mode="On" defaultRedirect="/Error/ServerError/500">
          <error statusCode="403" redirect="/Error/AccessDenied" />
          <error statusCode="404" redirect="/Error/FileNotFound" />
        </customErrors>

    This customErrors section works fine on both of my machines (production and development) for 500 Internal Server errors. It also works fine for 404 errors on my development machine. However, it does not properly redirect 404 errors on the production machine. Instead of /Error/FileNotFound, I get the standard 404 page that comes with IIS 7. What could be the problem here?
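    For illustration, a hedged sketch (in C#) of the ErrorController actions those redirect paths point at; the controller and action names are taken from the redirects above, while the view names and the use of TrySkipIisCustomErrors are assumptions. That flag is worth noting because IIS 7 on Windows 2008 R2 can substitute its own error page for the application's 404 response, which would match the symptom described.

        using System.Web.Mvc;

        public class ErrorController : Controller
        {
            public ActionResult FileNotFound()
            {
                Response.StatusCode = 404;
                Response.TrySkipIisCustomErrors = true;   // ask IIS 7 not to replace this response
                return View();
            }

            public ActionResult AccessDenied()
            {
                Response.StatusCode = 403;
                Response.TrySkipIisCustomErrors = true;
                return View();
            }

            public ActionResult ServerError(int? id)
            {
                Response.StatusCode = id ?? 500;          // the defaultRedirect passes 500 as the route id
                Response.TrySkipIisCustomErrors = true;
                return View();
            }
        }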

    Read the article

  • Accessing SQL Server using an IP Address and Port Number ... Help!

    - by Mike
    I need to access an SQL Server that is on a machine behind a firewall. You access this machine using an IP address and port like 95.95.95.33:6930 (not the real IP address); by accessing 95.95.95.33 on port 6930, the firewall routes the requests to that particular machine. My question is: how do you construct a connection string to access the machine at address 95.95.95.33:6930 and then further access the SQL Server on port 1433, or maybe a different port like 8484? Thanks, Mike
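    As a point of reference, a hedged sketch of the connection string (the credentials and database name are placeholders). SqlClient separates the port from the host with a comma rather than a colon, so the externally visible address and forwarded port go straight into Data Source; this assumes the firewall forwards port 6930 on to whatever port the SQL Server instance is actually listening on (1433, 8484, or anything else).

        using System.Data.SqlClient;

        class ConnectTest
        {
            static void Main()
            {
                const string connectionString =
                    "Data Source=95.95.95.33,6930;Initial Catalog=MyDatabase;" +
                    "User ID=myUser;Password=myPassword;";

                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();   // throws if the address, port forwarding or credentials are wrong
                }
            }
        }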

    Read the article

  • Plink SSH: '-m file' option not working

    - by Technext
    Hi, I am trying to use Plink to run commands on a remote server. Both the local and remote machines are Windows. Though I am able to connect to the remote machine using Plink, I am not able to use the '-m file' option. I tried the following three ways, but to no avail:

    Try 1: plink.exe -ssh -pw mypwd gchhabra@machine -m file.txt
    Could not chdir to home directory /home/gchhabra: No such file or directory
    dir: not found
    (file.txt only contains one command, i.e., dir)

    Try 2: plink.exe -ssh -pw mypwd gchhabra@machine dir
    Could not chdir to home directory /home/gchhabra: No such file or directory
    dir: not found

    Try 3: plink.exe -ssh -pw mypwd gchhabra@machine < file.txt
    In this case, I get the following output:
    Using username "gchhabra".
    ****USAGE WARNING****
    This is a private computer system. This computer system, including all ..... including personal information, placed or sent over this system may be monitored. Use of this computer system, authorized or unauthorized, constitutes consent ... constitutes consent to monitoring for these purposes.
    dirCould not chdir to home directory /home/gchhabra: No such file or directory
    Microsoft Windows [Version x.x.xxx]
    (C) Copyright 1985-2003 Microsoft Corp.
    C:\Program Files\OpenSSH

    After I get the above prompt, it hangs. Can anyone please help me with this? Regards, Gaurav

    Read the article

  • Performance degrades for more than 2 threads on Xeon X5355

    - by zoolii
    Hi All, I am writing an application using boost threads and using boost barriers to synchronize the threads. I have two machines to test the application.
    Machine 1 is a Core 2 Duo (T8300) machine (Windows XP Professional, 4GB RAM), where I am getting the following performance figures:
    Number of threads: 1, TPS: 21
    Number of threads: 2, TPS: 35 (66% improvement)
    Further increases in the number of threads decrease the TPS, but that is understandable as the machine has only two cores.
    Machine 2 is a dual quad-core (Xeon X5355) machine (Windows 2003 Server with 4GB RAM) and has 8 effective cores:
    Number of threads: 1, TPS: 21
    Number of threads: 2, TPS: 27 (28% improvement)
    Number of threads: 4, TPS: 25
    Number of threads: 8, TPS: 24
    As you can see, performance degrades after 2 threads (though it has 8 cores). If the program had some bottleneck, then it should have degraded for 2 threads as well. Any ideas or explanations? Does the OS have some role in the performance? It seems like the Core 2 Duo (2.4GHz) scales better than the Xeon X5355 (2.66GHz), even though the Xeon has the higher clock speed. Thank you -Zoolii

    Read the article

  • Minimum Hardware requirements for Android development

    - by vishwanath
    I need information about the minimum hardware requirements for a decent experience developing Android applications. My current configuration is as follows: P4 3.0 GHz, 512 MB of RAM. I started with Hello Android development on my machine and the experience was sluggish; I was using Eclipse Helios for development. The emulator takes a lot of time to start, and so does running the program. Do I need to upgrade my machine for development purposes, or is there anything else I am missing on my machine (like heavy processing by some other application I might have installed)? And if I do need to upgrade, do I need to upgrade my processor too (which amounts to a new machine, something I am not in favor of), or will upgrading the RAM alone suffice?

    Read the article

  • Sanity check on this idea for an Image Viewer in a web app

    - by Charlie Flowers
    I have an approach in mind for an image viewer in a web app, and want to get a sanity check and any thoughts you stackoverflowers might have. Here's the whirlwind nutshell summary:

    I'm working on an ASP.NET MVC application that will run in my company's retail stores. Even though it is a web application, we own the store machines and have control over them. We have a "windows agent" running on the store machine which we can talk to via http post (it is a WCF service, and our web app has permission to talk to it from the browser).

    One of the web pages needs to be an "image viewer" page with some common things like Rotate & Zoom. Now, there are some WebForms controls that offer Rotate and Zoom. However, they take up server resources and generate a good bit of traffic between the server and the browser. For example, the Rotate function would cause an ajax call to the server, which would then generate a new image written to a .NET Canvas object, which would then be written to a file on the server, which would then be returned from the ajax call and refreshed inside the browser. Normally, that's a pretty good way of doing things.

    But in our case, we have code running on the store machine that we can communicate with. This leads me to consider the following approach: When the user asks to view an image, we tell our "windows agent" to download it from our image server to the store machine. We then redirect our browser to our image viewer page, which will pull the image from the local file we just wrote to the store machine. When the user clicks "Rotate", we cause JavaScript code in the browser to call our "windows agent" software, asking it to perform the "Rotate" function. The "windows agent" does the rotation using the same kind of imaging control that would formerly have been used on the server, but it does so now on the store machine. JavaScript in the browser then refreshes the image on the page to show the newly rotated image. Zoom and similar features would be implemented the same way.

    This seems to be much more efficient, scalable, and responsive for the end-users. However, I've never heard of anything like it being done, mostly because it's rare to have this combination of a web app plus a "windows agent" on the client machine. What do you think? Feasible? Reasonable? Any pitfalls I overlooked or improvements / suggestions you can see? Has anyone done anything like this who would like to offer the wisdom of experience? Thanks!
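    To make the shape of that agent concrete, here is a hedged sketch (every name in it - IImageAgent, the operations, the URI templates - is illustrative, not from the question) of the kind of HTTP-POST contract the in-store "windows agent" could expose for the page's JavaScript to call:

        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public interface IImageAgent
        {
            // Pull an image from the image server down to a local file on the store machine.
            [OperationContract]
            [WebInvoke(Method = "POST", UriTemplate = "download?url={sourceUrl}&file={localPath}")]
            void Download(string sourceUrl, string localPath);

            // Rotate the local copy in place; the viewer page then re-requests the file.
            [OperationContract]
            [WebInvoke(Method = "POST", UriTemplate = "rotate?file={localPath}&degrees={degrees}")]
            void Rotate(string localPath, int degrees);

            // Zoom works the same way, re-rendering the local file at the requested factor.
            [OperationContract]
            [WebInvoke(Method = "POST", UriTemplate = "zoom?file={localPath}&factor={factor}")]
            void Zoom(string localPath, double factor);
        }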

    Read the article

  • Why are packets injected with libpcap duplicated?

    - by r0u1i
    I'm using SharpPcap in order to send packets as part of a monitoring system. Usually it works well, but I've encountered the strangest bug on a hosted Vista machine and I would like your help. On that virtual Vista machine, injected packets are duplicated. That is, if I send a ping request using libpcap, it somehow gets duplicated and I get two requests on the destination machine. The two requests are almost identical byte-wise, and the only difference between them is that the second packet's TTL field is one less than the original packet's value. Using Wireshark I can see that the packet gets duplicated before it (and its clone) leaves the Vista machine. The problem manifests even when using other tools that inject packets using libpcap (namely PlayCap). Any ideas?

    Read the article

  • Auto-resolving a hostname in WCF Metadata Publishing

    - by Mike C
    I am running a self-hosted WCF service. In the service configuration, I am using localhost in my BaseAddresses that I hook my endpoints to. When trying to connect to an endpoint using the WCF test client, I have no problem connecting to the endpoint and getting the metadata using the machine's name. The problem that I run into is that the client that is generated from metadata uses localhost in the endpoint URLs it wants to connect to. I'm assuming that this is because localhost is the endpoint URL published by metadata. As a result, any calls to the methods on the service will fail since localhost on the calling machine isn't running the service. What I would like to figure out is if it is possible for the service metadata to publish the proper URL to a client depending on the client who is calling it. For example, if I was requesting the service metadata from a machine on the same network as the server the endpoint should be net.tcp://MYSERVER:1234/MyEndpoint. If I was requesting it from a machine outside the network, the URL should be net.tcp://MYSERVER.mydomain.com:1234/MyEndpoint. And obviously if the client was on the same machine, THEN the URL could be net.tcp://localhost:1234/MyEndpoint. Is this just a flaw in the default IMetadataExchange contract? Is there some reason the metadata needs to publish the information in a non-contextual way? Is there another way I should be configuring my BaseAddresses in order to get the functionality I want? Thanks, Mike
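    As a point of comparison, a hedged sketch of self-hosting with the machine's DNS name in the base address instead of localhost, so that the published metadata carries an address remote clients can actually reach; "MyService" and "IMyService" are placeholders for the question's own service types, and the port and path are taken from the examples above.

        using System;
        using System.Net;
        using System.ServiceModel;

        class Program
        {
            static void Main()
            {
                // Dns.GetHostEntry("") resolves the local machine, typically to its FQDN.
                string hostName = Dns.GetHostEntry("").HostName;
                var baseAddress = new Uri("net.tcp://" + hostName + ":1234/MyEndpoint");

                using (var host = new ServiceHost(typeof(MyService), baseAddress))       // MyService is hypothetical
                {
                    host.AddServiceEndpoint(typeof(IMyService), new NetTcpBinding(), ""); // endpoint at the base address
                    host.Open();
                    Console.WriteLine("Listening at " + baseAddress);
                    Console.ReadLine();
                }
            }
        }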

    Read the article

  • How do I make a script that properly introduces hstore in both 9.0 and 9.1?

    - by Igor Zinov'yev
    I am trying to introduce the hstore type into the database of a project I'm developing. But the problem is that I have a slightly newer version of the Postgres server installed on my development machine than there is on the production machine. While I can simply execute the CREATE EXTENSION command locally, this command is unavailable on the production machine. Is there a way to create a script that will install hstore both on 9.1 and 9.0?

    Read the article

  • ASP.net file operations delay

    - by mtranda
    Ok, so here's the problem: I'm reading the stream from a FileUpload control, reading in chunks of n bytes and writing the array in a loop until I reach the stream's end. Now, the reason I do this is because I need to check several things while the upload is still going on (rather than doing a Save(), which does the whole thing in one go). Here's the problem: when doing this from the local machine, I can see the file just fine as it's uploading and its size increases (I had to add a Sleep() clause in the loop to actually get to see the file being written). However, when I upload the file from a remote machine, I don't get to see it until the file has completed uploading. Also, I've added another call to write the progress to a text file as the upload is going on, and I get the same thing. Local: the file updates as the upload goes on; remote: the token file only appears after the upload is done (which is somewhat useless, since I need it while the upload is still happening). Is there some sort of security setting in (or ASP.NET) that maybe saves files in a temporary location for remote machines, as opposed to the local machine, and then moves them to the specified destination? I would liken this to ASP.NET displaying error messages when browsing from the local machine (even on the public hostname), as opposed to the generic compilation error page/generic exception page that is shown when browsing from a remote machine (and customErrors is not off). Any clues on this? Thanks in advance.
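    For concreteness, a hedged sketch of the chunked copy described above; fileUpload, targetPath and progressPath are placeholders standing in for the question's own variables, and the 8 KB buffer size is arbitrary.

        using System.IO;
        using System.Web.UI.WebControls;

        static class UploadHelper
        {
            public static void SaveInChunks(FileUpload fileUpload, string targetPath, string progressPath)
            {
                var buffer = new byte[8192];
                long totalBytes = 0;

                using (Stream input = fileUpload.PostedFile.InputStream)
                using (var output = new FileStream(targetPath, FileMode.Create, FileAccess.Write))
                {
                    int read;
                    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        output.Write(buffer, 0, read);
                        output.Flush();                                          // push bytes to disk as we go
                        totalBytes += read;
                        File.WriteAllText(progressPath, totalBytes.ToString()); // the "token" progress file
                    }
                }
            }
        }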

    Read the article

  • Issues with SharePoint 2010 Development

    - by Rahul Soni
    I am planning to use my laptop for SharePoint 2010 development, and I have only 4 GB of RAM, which is not even upgradable. Just because of the RAM constraint, my VS 2010 keeps crawling if I try to run it along with SharePoint 2010 on the same machine. Hence, I've reformatted my machine and am looking for alternative solutions until I get a new laptop. Currently, I have installed VS 2010 ONLY on my laptop and wanted to create an empty SharePoint project. Once done with my project, I want to deploy it on a different machine (which is a 4 GB RAM machine as well, but contains only SharePoint 2010). I thought this would work and give me a bit of a breather if everything is configured well. Unfortunately, when I tried creating a new SharePoint Empty Project in VS 2010, it says: "A SharePoint server is not installed on this computer. A SharePoint server must be installed to work with SharePoint projects." Is there a way out?

    Read the article

  • Loading embedded resource on Windows 7

    - by Flack
    Hello, I have an app that works just fine on my WinXP machine. However, when I try running it on my Win7 machine, it fails whenever it tries to load an embedded resource. The resources are all there (I can see them using Reflector). The lines that fail are all of the form:

        Splash.Image = new Bitmap(typeof(ContainerForm).Assembly.GetManifestResourceStream("SplashTest.Resources.Logo.gif"));

    And they all fail with the same exception:

        Exception='System.ArgumentException: Parameter is not valid.
           at System.Drawing.Bitmap..ctor(Stream stream)

    I don't understand why this is not working on my Win7 machine but does on my usual WinXP dev machine. Any ideas?
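    As a diagnostic aid, a hedged sketch of a more defensive version of the failing line; the resource name and the ContainerForm/Splash identifiers come from the question, and dumping GetManifestResourceNames() on the Win7 machine shows whether the embedded name actually matches what the code asks for (GetManifestResourceStream returns null on a mismatch).

        using System;
        using System.Diagnostics;
        using System.Drawing;
        using System.IO;

        // Inside the code that sets up the splash screen:
        var assembly = typeof(ContainerForm).Assembly;
        foreach (string name in assembly.GetManifestResourceNames())
            Debug.WriteLine(name);   // confirm the exact resource names present on the failing machine

        Stream stream = assembly.GetManifestResourceStream("SplashTest.Resources.Logo.gif");
        if (stream == null)
            throw new InvalidOperationException("Embedded resource not found under that name.");

        Splash.Image = new Bitmap(stream);   // GDI+ keeps reading from the stream for the bitmap's lifetime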

    Read the article

  • Parameters with dots on the URI

    - by robertokl
    In my application, there is a resource, machine, and a nested resource of machine: ip. I want to be able to access the URI of an Ip by typing the IP address. The URI should be something like this: /machines/m123/ips/192.168.0.1.xml where "m123" is the name of the machine and "192.168.0.1" is one of the IPs of that machine. The problem here is that Rails confuses the dots in the IP with the format extension. When I try to access this page, I get: No route matches "/machines/m123/ips/192.168.0.1.xml" And if I replace the dots with any other character it works, which means that Rails isn't handling the dots in the URI. Is there any way to specify a more complex regexp in the routes so that Rails treats it the way I want? I'm using Rails 2.3.5 and Ruby 1.8.7. Thank you.

    Read the article

  • Svn auto updater

    - by uzay95
    I am writing my code in my virtual machine and always committing the folder that contains the published web site to the free SVN server. There is also another remote machine, which is the test server. I would like the remote machine to update automatically. Is there any program that can run an automatic update every 30 seconds?

    Read the article

  • Resuming MySQL indexing

    - by gmemon
    Hello All, I have been building an index on a 200 million row table for almost 14 hours. Due to resource over-consumption on the machine (because of a separate incident), the machine crashed. Clearly, I want to avoid another 14 hours to re-construct the index. Is there a way that I can resume the construction of the index from the point (or slightly before) where the machine crashed? I can see the temporary files created. Thanks

    Read the article

  • Work with a local copy of the "master" SVN repo

    - by Werner
    Hi, at work we have an SVN repo, which we access through http, like:
    svn co http://user@machine/PATH
    Even at work, for some mysterious reason, the connections between local machines and the repo machine are very slow, and the connection between home and work is almost impossible. I wonder if I could do something like this:
    1- get a copy of the "master" SVN repo onto my local machine
    2- each time I make modifications etc., use svn co http://user@MYLOCALmachine/PATH instead of svn co http://user@machine/PATH
    3- when I am back at work, somehow "merge" all the modifications in my local repo into the master one.
    Sorry, I am really new to SVN; any hint would be appreciated. Thanks

    Read the article

  • Who calls the destructor of the class when operator delete is used in multiple inheritance?

    - by dicaprio-leonard
    This question may sound silly; however, I can't find a concrete answer anywhere else. I have only a little knowledge of how late binding works and how the virtual keyword is used in inheritance. As in the code sample below, in a case of inheritance where a base class pointer points to a derived class object created on the heap, and the delete operator is used to deallocate the memory, the destructors of the derived and base classes will be called in order only when the base destructor is declared virtual. Now my questions are:
    1) When the destructor of the base is not virtual, why does the problem of the derived dtor not being called occur only when using the "delete" operator, and not in the case given below?
        derived drvd;
        base *bPtr;
        bPtr = &drvd; // DTORs called in proper order when drvd goes out of scope
    2) When the "delete" operator is used, who is responsible for calling the destructor of the class? Does operator delete have an implementation that calls the DTOR, or does the compiler write some extra stuff? If the operator has the implementation, what does it look like? (I need sample code showing how this would be implemented.)
    3) If the virtual keyword is used in this example, how does operator delete know which DTOR to call?
    Fundamentally, I want to know who calls the dtor of the class when delete is used. Sample code:

        class base
        {
        public:
            base() { cout << "Base CTOR called" << endl; }
            virtual ~base() { cout << "Base DTOR called" << endl; }
        };

        class derived : public base
        {
        public:
            derived() { cout << "Derived CTOR called" << endl; }
            ~derived() { cout << "Derived DTOR called" << endl; }
        };

    I'm not sure if this is a duplicate; I couldn't find it in search.

        int main()
        {
            base *bPtr = new derived();
            delete bPtr; // only when you explicitly try to delete an object
            return 0;
        }

    Read the article

  • Powershell 4 compatibility with Windows 2008 r2

    - by Acerbity
    In my environment I have a single server that has access to pretty much my entire network. That server is running Windows 2008 R2, and I have upgraded PowerShell to version 4.0. The question I have is this: can I run cmdlets from that machine on other machines when those cmdlets are version-4 specific? For instance, when I am using PowerShell, even though it is version 4, it doesn't give me IntelliSense autocompletion for "Get-Volume" like it would on a 2012 R2 machine. I understand that it won't run on that machine because the infrastructure won't allow for it, but what about running it against a 2012 R2 machine remotely? I am looking to run batch scripts from there for various purposes.

    Read the article

< Previous Page | 215 216 217 218 219 220 221 222 223 224 225 226  | Next Page >