Search Results

Search found 5287 results on 212 pages for 'physical computing'.

Page 30/212 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • Raspberry Pi cluster, neural networks and brain simulation

    - by jokoon
    Since the Raspberry Pi (RPi) has very low power consumption and a very low production price, one could build a very big cluster with them. I'm not sure, but a cluster of 100,000 RPis would take relatively little power and little room. Now, it might not be as powerful as existing supercomputers in terms of FLOPS or other computing measurements, but could it allow better neural-network simulation? I'm not sure that saying "1 CPU = 1 neuron" is a reasonable statement, but it seems valid enough. So would such a cluster be more efficient for neural-network simulation, since it's far more parallel than other classical clusters?
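
    A quick back-of-the-envelope check puts numbers on "little power"; this is a sketch, and the per-board figures (roughly 3.5 W draw and the $35 list price of the original Model B) are assumptions, not measurements:

        # rough cluster estimate; per-board figures are assumptions
        boards = 100000
        watts_per_board = 3.5        # assumed draw of an original Model B
        dollars_per_board = 35       # Model B list price

        print(boards * watts_per_board / 1000.0, "kW")      # 350.0 kW
        print(boards * dollars_per_board / 1e6, "M USD")    # 3.5 M USD

    350 kW is modest next to a top-end supercomputer, but it is not "little power" in absolute terms, and the estimate ignores the network fabric, which at 100,000 nodes dominates both the cost and the room required.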

    Read the article

  • what is a data serialization system?

    - by Yang
    According to the Apache Avro project, "Avro is a serialization system". By saying "data serialization system", does it mean that Avro is a product or an API? Also, I am not quite sure what a data serialization system is. For now, my understanding is that it is a protocol that defines how a data object is passed over the network. Can anyone explain it in an intuitive way that is easier to understand for people with a limited distributed-computing background? Thanks in advance!
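
    Serialization is easiest to see as a round trip: an in-memory object is turned into a byte stream that can be stored or sent over a network, then rebuilt on the other side. Avro does this with a compact binary format described by an explicit schema; the sketch below uses Python's built-in pickle module purely to illustrate the concept, not Avro's actual API:

        import pickle

        record = {"name": "Alyssa", "favorite_number": 256}

        blob = pickle.dumps(record)   # serialize: object -> bytes
        copy = pickle.loads(blob)     # deserialize: bytes -> an equal object

        assert copy == record and copy is not record

    What Avro adds on top of this idea is what distributed systems need: a language-neutral format plus a schema, so a record written by a Java process can be read back by a Python one.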

    Read the article

  • Using Parallel Extensions In Web Applications

    - by Greg
    I'd like to hear some opinions as to what role, if any, parallel computing approaches, including the potential use of the parallel extensions (the June CTP, for example), have in web applications. Which scenarios does this approach fit, and which does it not? My understanding of how exactly IIS and web browsers thread tasks is fairly limited, and I would appreciate some insight if someone out there has a good understanding of it. I'm more curious to know whether the way IIS and web browsers work limits the ROI of creating threaded and/or asynchronous tasks in web applications in general. Thanks in advance.

    Read the article

  • What considerations should be made for a web app to be released on a cloud hosted system?

    - by Rhubarb
    I have a web app that is primarily a WordPress app, but it pulls content from a Django app, simply by calling a service that uses Django models. My understanding of cloud computing is a bit vague: if the site needs to scale up on short notice, does the cloud provider (Amazon, Rackspace, whoever) simply spin up new instances (copies) of my initially configured server? How is state managed between all of them? Are there any good primers on this subject? It's hard to find much out there without getting caught up in the marketing.

    Read the article

  • c++ programming for clusters and HPC

    - by Abruzzo Forte e Gentile
    Hi all. I need to write a scientific application in C++ that does a lot of computation and uses a lot of memory. I have part of the job done, but due to its high resource requirements I was thinking of moving to OpenMPI. Before doing that I have a simple question: if I understood the principle of OpenMPI, it is the developer who has the task of splitting the job over different nodes, calling SEND and RECEIVE based on the nodes available at the time. Do you know if there exists some library, OS, or whatever that has this capability, letting my code remain as it is now? Basically, something that connects all the computers and lets them share their memory and CPU as if they were one? I am a bit confused by the amount of material available on the topic. Should I look at cloud computing? Or distributed shared memory? Can you help me or point me in the right direction? Thanks.
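
    For reference, the explicit SEND/RECEIVE style described above looks like the sketch below, shown with Python's mpi4py for brevity (the C++ API has the same shape, with MPI_Send and MPI_Recv); the transparent "one big machine" alternative being asked about is usually filed under distributed shared memory or single-system-image clustering rather than cloud computing:

        # minimal point-to-point MPI sketch; run with: mpirun -np 2 python demo.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        if rank == 0:
            work = list(range(1000))
            comm.send(work, dest=1, tag=0)      # the developer decides the split: SEND...
        elif rank == 1:
            work = comm.recv(source=0, tag=0)   # ...and codes the matching RECEIVE
            print(sum(work))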

    Read the article

  • what changes when your input is giga/terabyte sized?

    - by Wang
    I just took my first baby step today into real scientific computing when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny. I write Python, so I've spent the last few hours reading about HDF5, and NumPy, and PyTables, but I still feel like I'm not really grokking what a terabyte-sized data set actually means for me as a programmer. For example, someone pointed out that with larger data sets, it becomes impossible to read the whole thing into memory, not because the machine has insufficient RAM, but because the architecture has insufficient address space! It blew my mind. What other assumptions have I been relying on in the classroom that just don't work with input this big? What kinds of things do I need to start doing or thinking about differently? (This doesn't have to be Python specific.)
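
    The usual first adjustment is streaming: process the data in fixed-size chunks and keep only running aggregates, so neither RAM nor address space ever has to hold the whole file. A minimal sketch, assuming a hypothetical flat binary file with one byte per field and 48,000 fields per row:

        import numpy as np

        FIELDS = 48000          # fields per row (one byte each, assumed layout)

        def column_means(path, rows_per_chunk=100):
            totals = np.zeros(FIELDS, dtype=np.float64)
            rows = 0
            with open(path, 'rb') as f:
                while True:
                    buf = f.read(FIELDS * rows_per_chunk)
                    if not buf:
                        break
                    chunk = np.frombuffer(buf, dtype=np.uint8).reshape(-1, FIELDS)
                    totals += chunk.sum(axis=0)   # only aggregates stay in memory
                    rows += chunk.shape[0]
            return totals / rows

    The same pattern is what HDF5/PyTables give you in a more polished form: chunked, compressed on-disk arrays that you slice without ever materializing the full data set.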

    Read the article

  • How can I call an executable to run on a separate machine within a program on my own machine (win xp)

    - by Mr. H.
    My objective is to write a program that will call another executable on a separate computer (all with Win XP), with parameters determined at run time, then repeat for several more computers, and then collect the results. In short, I'm working on a grid-computing project. The algorithm itself is already coded in FORTRAN, but we are looking for an efficient way to run it on many computers at once. I suppose one way to accomplish this would be to upload a script to each computer and then run that script on each computer, all automatically and driven by my own parameters. But how can I write a program that will write to, upload, and run a script on a separate computer? I had considered GridGain, but the algorithm is already coded and in a different language, so that is ruled out. My current guess at accomplishing this task is to use Expect (wiki/Expect), but I have no knowledge of the tool. Any advice appreciated.
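
    One concrete shape this can take on Windows is a small driver script that launches the run on each machine with Sysinternals PsExec and collects the output from a shared folder afterwards. A sketch, with the host names, credentials and paths all hypothetical:

        import subprocess

        hosts = ['lab-pc-01', 'lab-pc-02', 'lab-pc-03']

        procs = []
        for i, host in enumerate(hosts):
            # PsExec runs a command on a remote Windows box; the FORTRAN
            # executable's parameters are decided here at run time
            procs.append(subprocess.Popen([
                'psexec', r'\\%s' % host,
                '-u', 'griduser', '-p', 'secret',
                r'C:\grid\solver.exe', '--case', str(i),
            ]))

        for p in procs:
            p.wait()   # then gather result files from a shared folder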

    Read the article

  • cheap way to scale a rails application

    - by VP
    I have an application that is becoming big, but until now it hasn't given me good revenue. That means short money to reinvest in it. In this scenario, I found a way to make a "cheap distributed Rails" deployment. I've got 4 VPSes, all of them on the same physical server. I added a load-balancer VPS running HAProxy; there I pointed the virtual IP address my domain name is associated with. Behind this HAProxy are two more VPSes running my Rails app, Passenger and memcached. Both app servers talk to the same database server, my 4th VPS. So for $44/month, I mounted a distributed environment. It won't be my final choice, but for now, while the budget is short, is this a good way to deploy a Rails application? Any pros or cons? Is it worth my $44/month?
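
    The HAProxy piece of that layout is only a few lines of configuration; a minimal sketch (the backend addresses are hypothetical):

        frontend www
            bind *:80
            default_backend rails_apps

        backend rails_apps
            balance roundrobin
            server app1 10.0.0.2:80 check   # first Passenger VPS
            server app2 10.0.0.3:80 check   # second Passenger VPS

    One caveat that follows from the description itself: all four VPSes sit on the same physical server, so this setup buys process-level redundancy and easy horizontal growth, but no hardware redundancy.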

    Read the article

  • "cloud architecture" concepts in a system architecture diagrams

    - by markus
    If you design a distributed application for easy scale-out, or you just want to make use of any of the new "cloud computing" offerings from Amazon, Google or Microsoft, there are some typical concepts or components you usually end up using:

    - distributed blob storage (aka S3)
    - asynchronous, durable message queues (aka SQS)
    - non-relational/non-transactional databases (like SimpleDB, Google BigTable, Azure SQL Services)
    - distributed background worker pools
    - load-balanced, edge-service processes handling user requests (often virtualized)
    - distributed caches (like memcached)
    - CDNs (content delivery networks, like Akamai)

    Now, when it comes to designing and sketching an architecture that makes use of such patterns, are there any commonly used symbols I could use? Or even a download with some cool Visio stencils? :) It doesn't have to be a formal system like UML, but I think it would be great if there were symbols that everyone knows and understands, like the commonly used shapes for databases or documents. I think it is important not to mix these up with traditional concepts like a normal file system (local or network server/SAN) or a relational database. Simply speaking, I want to be able to draw some conclusions about an application's scalability or data-consistency issues just by looking at the system architecture overview diagram.

    Update: Thank you very much for your answers. I like the idea of putting a small "cloud symbol" on the traditional symbols. However, I'll leave this thread open just in case someone finds specific symbols (maybe in a book) or uploads some pimped-up Visio stencils ;)

    Read the article

  • When should one use the following: Amazon EC2, Google App Engine, Microsoft Azure and Salesforce.com

    - by vicky21
    I am asking this in a very general sense, both from the cloud provider's and the cloud consumer's perspective. Also, the question is not about any specific kind of application (in fact, the intention is to know which types of applications/domains fit into which cloud slab: SaaS, PaaS, IaaS). My understanding so far is: IaaS: raw hardware (processors, networks, storage). PaaS: OS, system software, development frameworks, virtual machines. SaaS: software applications. It would be great if Stack Overflowers could share their understanding and experiences of the cloud computing concept.

    EDIT: OK, I will put it in a more specific way:

    Amazon EC2: You don't have control over the hardware layer, but you can take your choice of OS image, dev framework (.NET, J2EE, LAMP) and application and put it on EC2 hardware. Can you deploy an application built with Google App Engine or Azure on EC2?

    Google App Engine: You don't have control over hardware or OS, and you get a specific dev framework to build your application. Can you take any existing Java or Python application and port it to GAE? Or vice versa, can applications that were built on GAE be taken out of GAE and ported to an application server like WebSphere or WebLogic?

    Azure: You don't have control over hardware or OS, and you get a specific dev framework to build your application. Can you take any existing .NET application and port it to Azure? Or vice versa, can applications that were built on Azure be taken out of Azure and ported to an application server like BizTalk?

    Read the article

  • RMI no such object in table, Server communication error

    - by ben-casey
    My goal is to create a distributed computing program that launches a server and a client at the same time. I need it to be able to install on a couple of machines (all with Win XP) and have all the machines communicating with each other, i.e. a master node and 5 slave nodes, all from one application. My problem is that I cannot properly use UnicastRef; I'm thinking that it is a problem with launching everything on the same port. Is there a better way I am overlooking? This is the part of my code that matters, from my main class:

        try {
            RMIServer obj = new RMIServer();
            obj.start(5225);
        } catch (Exception e) {
            e.printStackTrace();
        }
        try {
            System.out.println("We are slave's ");
            Registry rr = LocateRegistry.getRegistry("127.0.0.1", Store.PORT, new RClient());
            Call ss = (Call) rr.lookup("FILLER");
            System.out.println(ss.getHello());
        } catch (Exception e) {
            e.printStackTrace();
        }

    and this is the server class:

        public RMIServer() {
        }

        public void start(int port) throws Exception {
            try {
                Registry registry = LocateRegistry.createRegistry(port, new RClient(), new RServer());
                Call stuff = new Call();
                registry.bind("FILLER", stuff);
                System.out.println("Server ready");
            } catch (Exception e) {
                System.err.println("Server exception: " + e.toString());
                e.printStackTrace();
            }
        }

    I don't know what I am missing or overlooking, but the output looks like this:

        Listen on 5225
        Listen on 8776
        Server ready
        We are slave's
        Listen on 8776
        java.rmi.NoSuchObjectException: no such object in table
            at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:255)
            at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:233)
            at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:359)
            at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)
            at Main.main(Main.java:62)

    Line 62 is: Call ss = (Call) rr.lookup("FILLER");

    Read the article

  • Moving an ASP.NET application to the cloud

    - by user102533
    I am new to cloud computing, so please bear with me here. I have an existing ASP.NET application with SQL Server 2008, hosted on a virtual private server. Here's briefly what it does:

    1. The front end accepts users' requests and adds them to a DB table.
    2. A Windows service running in the background picks up the request, processes it and sets a flag. The Windows service also creates a file for the user to download.
    3. The user downloads the file.

    I'd like to move this web application, with the service, to the cloud. The architecture I envision is that I'll have 1 web server on which I will install the front end and the Windows service, plus a cloud file server for file storage. The Windows service should somehow create a file and transfer it to the cloud file server (I assume this is possible?). My questions:

    1. Does the architecture look like I am going in the right direction?
    2. I know Amazon has been providing cloud services for a long time. If I want to make minimal changes to my application, should I go with Amazon, Rackspace, Azure or some other provider?
    3. I understand that I would pay not only for file storage and the web server but also for the bandwidth of users downloading the file and of the Windows service uploading the file to the cloud server. Can I assume these costs are negligible?
    4. Should I go with a VPS + cloud files combination to begin with?
    5. Any other thoughts/suggestions?

    Read the article

  • Computing pixel's screen position in a vertex shader: right or wrong?

    - by cubrman
    I am building a deferred rendering engine and I have a question. The article I took the sample code from suggests computing the screen position of the pixel as follows:

        VertexShaderFunction()
        {
            ...
            output.Position = mul(worldViewProj, input.Position);
            output.ScreenPosition = output.Position;
        }

        PixelShaderFunction()
        {
            input.ScreenPosition.xy /= input.ScreenPosition.w;
            float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
            ...
        }

    The question is: if I compute the position in the vertex shader instead (which should improve performance, as the vertex shader runs significantly fewer times than the pixel shader), would I get per-vertex lighting? Here is how I want to do it:

        VertexShaderFunction()
        {
            ...
            output.Position = mul(worldViewProj, input.Position);
            output.ScreenPosition.xy = output.Position / output.Position.w;
        }

        PixelShaderFunction()
        {
            float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
            ...
        }

    What exactly happens with the data I pass from VS to PS? How exactly is it interpolated? Will it give me the right per-pixel result in this case? I tried launching the game both ways and saw no visual difference. Is my assumption right? Thanks.

    P.S. I am optimizing the point-light shader, so I actually pass a sphere geometry into the VS.

    Read the article

  • How do you manage the testing of your Android software on physical devices?

    - by Philip Regan
    I'm in charge of managing mobile application development at my company, and I am currently building a mobile device "library" for testing. Essentially, we want to have a representative device in-house for each of the OSes we are developing for, currently iOS (iPhone-only), BlackBerry, and Android. Simulators only go so far, so I'm adding a step to the process to test software on the devices themselves. The problem we're finding is with Android. I don't think any of us here really understood just how fragmented the whole platform is until we started looking at devices to acquire. We are going to wait until v2.3 of Android is released, but which products should we choose? Do we go by the most popular by market share? Do we get a small range of products, by specs, from least to most powerful overall? We're trying to avoid having to manage a dozen different devices to test each app, if not because of the cost then because of the repeated time sink. How do you manage the testing of your Android software on physical devices?

    Read the article

  • QNAP NAS 509 (LINUX) - how to unmount busy volume and find physical disk?

    - by Horst Walter
    On my QNAP TS-509 NAS I have a technical issue. I need to run e2fsck. This works fine for me on md0 (see below), but how can I unmount the busy devices md9 and sda4 in order to do the same? Whenever I try, I fail because the device is busy. [This part is solved, see below.] In order to further track down the issue, I'd need to sort out the physical-disk-to-device relationship: how can I find out, e.g., that md0 is a striped volume on 2 disks, and on which physical disks it lives? Remark: as you can easily tell from my questions, I am not a Linux expert, but I manage to get along.

        /dev/ram0    124.0M   94.1M   29.8M  76%  /
        tmpfs         32.0M   80.0k   31.9M   0%  /tmp
        /dev/sda4    310.0M  103.9M  206.1M  34%  /mnt/ext
        /dev/md9     509.5M   39.2M  470.2M   8%  /mnt/HDA_ROOT
        /dev/md0       1.8T    1.4T  444.7G  76%  /share/MD0_DATA
        tmpfs         32.0M       0   32.0M   0%  /.eaccelerator.tmp

    -- Added -- QNAP seems to be based on BusyBox. I do not find anything like init / telinit / runlevel. The BusyBox docs say I need to run the commands below, but in /var/service sv is not available. I want to go to single-user mode to unmount the devices.

        # cd /var/service
        # sv d *
        # sv u getty*

    -- Added, thanks A4L -- This QNAP box runs a special flavor of Linux, so not all standard procedures apply. In my particular case I found a services.sh script that stops all services; after that, the drive could be unmounted. The information passed by A4L is valid and worth reading; maybe I'll profit from it next time. Links: http://unix.stackexchange.com/questions/19918/umount-device-is-busy and http://unix.stackexchange.com/questions/15024/umount-device-is-busy-why. So the unmount issue is solved; I'm still looking for the best option to find the physical-disk-to-volume mapping.
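
    For the remaining question, the md-device-to-disk mapping, the kernel exposes it directly through /proc; a minimal sketch (mdadm may or may not be included in QNAP's BusyBox-based firmware):

        # lists each md device with its member partitions, e.g.
        #   md0 : active raid0 sda3[0] sdb3[1]
        cat /proc/mdstat

        # fuller detail, if the firmware ships mdadm
        mdadm --detail /dev/md0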

    Read the article

  • How do I make ESXi 5.0 shut down virtual machines when the physical power button is pushed?

    - by Pawel Sawicki
    I have a home NAS/DLNA server built out of an HP MicroServer with the HP-branded VMware ESXi 5.0.0 build-623860 (free license) installed. Being a home media center, I'd like it to be "manageable" by all my household members. This requires that it can be powered on and off (including all the VMs inside) by anybody with physical access to the server, simply by pressing the power button on the chassis. The "startup" part was easy to obtain: all I had to do was configure the startup/shutdown policy. Once the server powers up, all VMs start as well, and that's exactly what I need. Well, it did work up until 5.0.0U1, but that's a different story: http://blogs.vmware.com/vsphere/2012/03/free-esxi-hypervisor-auto-start-breaks-with-50-update-1.html Unfortunately, pressing the power button doesn't gracefully shut down the guest machines; they are terminated instead. If I run the "shut down" command from the vSphere Client interface, guests are powered off cleanly. I'd like to get the same end result when the physical power button is pressed. I've poked around a bit on the ESXi server. There's a /sbin/shutdown.sh script that seemed to do exactly what I need... but it turned out to do exactly what the power button does. The /etc/inittab contains an entry for the "shutdown" level, but I suppose it's not hooked to the power button. I can't find any ACPI-related configuration, nor do I know what exactly is executed when the power button is pressed. Does anybody have a clue how I can make the VMs shut down automatically when the physical power switch is pressed to turn off the computer?

    Read the article

  • How would I force Debian to use the physical sector size on a hard disk?

    - by Confused User
    I just purchased a few new 3 TB WD drives. These have physical 4k sectors, but there is some sort of layer providing 512 B logical sectors (see the partition table below). To try to get more speed out of my hard drives, I would like to get rid of this logical layer and actually use the physical 4k sectors. However, I can't figure out how to do this (or even whether it's possible) from the man pages of fdisk and parted, or from searching Google. Does anybody know how this could be done? As to why this is relevant, this page demonstrates that merely aligning the sectors properly can already make up to a 25% speed difference for reads, and more than 2500% for writes in some cases! Getting rid of the logical sectors in favor of the physical ones should improve speeds even more. Thanks!

        $ parted /dev/sdc
        GNU Parted 2.3
        Using /dev/sdc
        Welcome to GNU Parted! Type 'help' to view a list of commands.
        (parted) print
        Model: ATA WDC WD30EZRX-00M (scsi)
        Disk /dev/sdc: 3001GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start   End     Size    File system  Name  Flags
         1      1049kB  3001GB  3001GB               zfs
         9      3001GB  3001GB  8389kB

    P.S. I don't care about the data on the drives; I was just playing with different file systems. Also, this is my first time posting here, so please let me know if my posts should be formatted differently, etc.
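
    A quick check on the transcript above: partition 1 starts at 1049 kB = 1,048,576 bytes, and 1,048,576 / 4096 = 256, so it already sits on a physical-sector boundary. GNU Parted 2.3 can verify this itself; note that the 512 B logical size is presented by the drive's firmware (512-byte emulation), so the OS can align to the 4k physical sectors but cannot make the drive stop exposing 512 B logical ones:

        (parted) align-check optimal 1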

    Read the article

  • Linux: Encryption of a physical LVM volume doesn't imply encryption of its logical subvolumes?

    - by java.is.for.desktop
    Hello, everyone! I installed openSUSE a year ago on my notebook. I created all partitions except /boot inside an LVM partition, and I enabled encryption for it during setup. The system asked me for a password on each boot after that. Everything seemed fine... But one day I wanted to cancel the boot process and did so with SysRq REISUB. While entering this combination, the system suddenly continued to boot without any password being entered. I had no /home and no swap, but / was mounted! I checked multiple times; it was inside an "encrypted" physical LVM volume. Later I found out that openSUSE can't encrypt / at all: there is an option to enable encryption for each logical volume, and indeed it fails for /. Later I tried Fedora. The options during partitioning were misleading in the same way: I could enable "encryption" of a physical volume and of each logical subvolume, with the exception that Fedora actually allowed encrypting /. Question: What's the point of setting up "encryption" for a physical LVM volume when it doesn't imply (real) encryption of its logical subvolumes? Did I get something wrong in this whole concept?
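
    In stock Linux terms, the two possible layerings look like this (a sketch of the concept, independent of any particular installer):

        # encryption below LVM: every LV in the volume group is encrypted
        partition -> LUKS/dm-crypt -> LVM PV -> VG -> LVs (/, /home, swap, ...)

        # encryption above LVM: only the chosen LV is encrypted
        partition -> LVM PV -> VG -> one LV -> LUKS/dm-crypt -> filesystem

    If an installer labels the physical volume "encrypted" but actually sets up the second, per-LV layout, then any LV it skipped (such as /) mounts without a passphrase, which would match the behavior described above.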

    Read the article

  • Obtaining the correct Client IP address when a Physical Load Balancer and a Web Server Configured With Proxy Plug-in Are Between The Client And Weblogic

    - by adejuanc
    Some load balancers, like BIG-IP, have built-in interoperability with WebLogic clusters; this means they know that WebLogic understands a header named 'WL-Proxy-Client-IP' to identify the original client IP. The problem comes when you have a web server configured with the WebLogic proxy plug-in between the load balancer and the back-end WebLogic servers: WL-Proxy-Client-IP is not designed to pass through the web server proxy plug-in. In order to prevent IP spoofing, the plug-in will not use a WL-Proxy-Client-IP header that came in from the previous hop (which in this case is the physical load balancer, but could be anything), and therefore it won't pass on what the load balancer has set. So, unfortunately, under this architecture the header will be useless. To get the client IP from WebLogic, you need to configure the extended log format and create a custom field that reads the appropriate header containing the client's IP.

    On WLS versions prior to 10.3.3, use these instructions. You can create user-defined fields for inclusion in an HTTP access log file that uses the extended log format. To create a custom field, you identify the field in the ELF log file using the Fields directive and then create a matching Java class that generates the desired output. You can create a separate Java class for each field, or one Java class can output multiple fields. This example outputs the X-Forwarded-For header into a custom field called MyOriginalClientIPField:

        import weblogic.servlet.logging.CustomELFLogger;
        import weblogic.servlet.logging.FormatStringBuffer;
        import weblogic.servlet.logging.HttpAccountingInfo;

        public class MyOriginalClientIPField implements CustomELFLogger {
            public void logField(HttpAccountingInfo metrics, FormatStringBuffer buff) {
                buff.appendValueOrDash(metrics.getHeader("X-Forwarded-For"));
            }
        }

    In this case we are using X-Forwarded-For, but it could be changed to whatever header contains the data you need. Compile the class, jar it, and prepend it to the classpath. In order to compile and package the class:

    1. Navigate to <WLS_HOME>/user_projects/domains/<SOME_DOMAIN>/bin
    2. Set up an environment by executing:

        $ . ./setDomainEnv.sh

       This will include weblogic.jar in the classpath, in order to use any of the libraries under the weblogic.* package.
    3. Copy the code above into a file named MyOriginalClientIPField.java.
    4. Run javac to compile the class:

        $ javac MyOriginalClientIPField.java

    5. Package the compiled class into a jar file by executing:

        $ jar cvf0 MyOriginalClientIPField.jar MyOriginalClientIPField.class

       Expected output:

        added manifest
        adding: MyOriginalClientIPField.class(in = 711) (out= 711)(stored 0%)

    6. This produces a file called MyOriginalClientIPField.jar.

    This way you will be able to get the real client IP when the request passes through a load balancer and a web server before reaching WLS. Since 10.3.3 it is possible to configure a specific header that WLS will check when getRemoteAddr is called. That can be set on the WebServer MBean; in this case, set it to the X-Forwarded-For header coming from the load balancer as well.

    Read the article

  • Help to argue why to develop software on a physical computer rather than via a remote desktop

    - by s5804
    Remote desktops are great, often a blessing, and cost-effective (instead of leasing expensive cables). I am not arguing against remote desktops, only that if one has the choice between a remote desktop and a physical computer, I would choose the latter. Also note that I am not arguing for or against remote work practices; in my case I am required to be physically present in the office when developing software.

    Background: I work in a company whose main business is not software development. Therefore the company IT policies are mainly focused on security and on efficiently deploying and maintaining thousands of computers for users, and the typical employee runs typical Office applications, like a word processor. Because safety/stability is such a big priority, every non-production system/application shall be deployed into a physically separate network, called the test network. Software development, of course, also belongs in the test network. To access the test network, the company has created a standard policy which dictates that access shall go only via a remote desktop client. Practically, from one's production computer one opens a remote desktop client to a virtual computer located in the test network; on the virtual computer's remote desktop one can access, run, and install all development tools, like the Eclipse IDE. The other solution is a dedicated physical computer that is physically connected only to the test network. Both solutions are currently available in the company.

    I have tested both approaches and found running the Eclipse IDE, SQL Developer, etc. in the remote desktop client to be sluggish (keystrokes are delayed), commands like Alt-Tab take me out of the remote client, and screen resolution and colors are different, just to mention a few issues. There is nothing technically wrong with the remote client; it is just not optimal and, frankly, demotivating.

    Now, with the new policies put in place, the plan is to remove the physical computers connected to the test network. I am looking for help to argue why software developers should have a dedicated physical development computer, to be productive and cost-effective. Remember that we are physically in the office. Note also that we are talking about approx. 50 computers out of 2000 employees, so the extra budget is relatively small; this is more about policy than cost. Please note that there are lots of similar setups in other companies that work great thanks to perfectly tuned systems. In my case, however, it is sluggish, and it would cost more money to troubleshoot and fine-tune the performance than to keep a few physical computers. As a business case we have argued that productivity will go down by 25%, though my feeling is that the reality is probably closer to 50%. This business case isn't really accepted, and I find it very difficult to defend to managers who have never used a rich IDE in their lives, never mind developed software. Further, the test network and remote client have no guaranteed service level, so they are down for a few hours per month with the lowest priority on the fix list. Help is appreciated.

    Read the article

  • why would you create two different subnets on the same physical network?

    - by xirtyllo
    I'm working at a messy location; one of the strange (to me) things is that the same physical network carries two different subnets. Specifically, some computers have addresses in 10.0.0.0/24 and others in 172.16.0.0/24. There is only one DHCP server, which hands out IPs in the 10.0.0.0/24 range, and there are two internet gateways, one with IP 172.16.0.1 and one with IP 10.0.0.1. For example, I can easily swap a PC from one subnet to the other just by changing its IP and gateway settings. I am trying to imagine why they built the network this way, and what the possible advantages and/or drawbacks of having two different subnets on the same physical network might be. Any thoughts?

    Read the article

  • Hyper-V virtual machine unable to get IP address from DHCP server running on same physical box

    - by Bronumski
    We have a Windows Server 2008 R2 box with two network cards, running AD, DHCP, DNS and Hyper-V. The first NIC is set up with a static IP address, and DHCP, WDS, and DNS are bound to it. The second NIC is configured in Hyper-V to be used only by Hyper-V and has been automatically configured so that only the virtual switch is enabled on the adapter. DHCP and DNS work fine for all physical machines on the network. They also work for virtual machines running on another physical box. Virtual machines bound to the virtual-switch network adapter, however, are unable to get an IP address. If such a virtual machine is given a static IP address with the correct subnet, gateway and DNS, everything works. Has anyone else got this working?

    Read the article
