Search Results

Search found 2412 results on 97 pages for 'atom computing'.

Page 20/97 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • How do you explain more advanced computing concepts to a non super user?

    - by EvilChookie
    I often have to explain computing concepts to non-super-users, and I usually do it by relating them to real-life situations. I wouldn't mind seeing how other super users do it, and some really good explanations might come in handy instead of my having to wing it. So, how do you explain advanced computing topics to 'normal' people? Notes: one explanation per answer, and let the best float to the top. CW is turned on, since this is subjective. Also, feel free to edit my tags if you can think of better ones =)

    Read the article

  • Windows RPC vs XML-RPC

    - by Y.Z
    Is there any benchmark comparing the encoding/decoding of common typed data in the Microsoft RPC NDR engine (DCE 1.1) with that of XML-RPC-C/C++, the de facto C/C++ implementation of XML-RPC? Actually I have to choose between Windows RPC and XML-RPC-C/C++ to implement my own common object infrastructure for high-performance computing on Windows. Any recommendation on which to use with regard to performance? Thank you. Best regards, Yang

    Read the article
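
    No benchmark is cited here, but the XML-RPC half is easy to measure empirically by timing repeated round trips against a test server. Below is a minimal sketch of that benchmarking approach in Python using the standard-library xmlrpc.client, so it illustrates the method rather than the XML-RPC-C/C++ library under discussion; the endpoint URL and the 'echo' method are placeholders.

      # Rough XML-RPC round-trip timing: repeatedly encode, send, and decode a
      # typed payload. The URL and the 'echo' method are placeholders.
      import time
      import xmlrpc.client

      proxy = xmlrpc.client.ServerProxy("http://localhost:8080/RPC2")
      payload = list(range(1000))          # a typed array, to exercise encode/decode
      iterations = 1000

      start = time.perf_counter()
      for _ in range(iterations):
          proxy.echo(payload)              # assumes the test server exposes 'echo'
      elapsed = time.perf_counter() - start

      print(f"{iterations} calls in {elapsed:.2f}s "
            f"({elapsed / iterations * 1000:.2f} ms per call)")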

  • Auto-updating software on the cloud

    - by iamgopal
    I know WordPress, Joomla, Drupal, etc. (most of the PHP/MySQL stack) can automatically detect updates to the software itself and/or its plugins, and either ask for the user's permission to update or update automatically. How can I do something similar on a cloud platform like Google App Engine? I am creating open source software targeted at non-technical people, who cannot easily clone my code and update their application. What is the easiest way to do this?

    Read the article
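
    One common pattern, regardless of platform, is to have each deployment poll a version manifest that the project hosts and then prompt the administrator (or trigger a redeploy) when a newer release exists. A minimal, hedged sketch of that check in Python; the manifest URL and the JSON format are assumptions for illustration, not part of any existing project:

      # Compare the running version against a hosted "latest version" manifest.
      # The URL and the {"version": "..."} format are illustrative assumptions.
      import json
      import urllib.request

      LOCAL_VERSION = "1.4.2"
      MANIFEST_URL = "https://example.com/myapp/latest.json"   # hypothetical

      def check_for_update():
          with urllib.request.urlopen(MANIFEST_URL, timeout=10) as resp:
              latest = json.load(resp)["version"]
          # naive dotted-version comparison; real code might use packaging.version
          if tuple(map(int, latest.split("."))) > tuple(map(int, LOCAL_VERSION.split("."))):
              return latest
          return None

      newer = check_for_update()
      if newer:
          print(f"Update available: {newer} (currently running {LOCAL_VERSION})")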

  • Can computer clusters be used for general everyday applications?

    - by Matt Pascoe
    Does anyone know how a computer cluster can be used for everyday applications, for example video games? I would like to build a computer cluster that can run applications that were not specifically designed for clusters and still see a performance increase. One use would be video games, but I would also like to use the increased computing power to run a large network of virtualized machines.

    Read the article

  • Choice of operating systems for a Rackspace cloud installation

    - by riteshmnayak
    I am planning to use Rackspace cloud services to host a Java web application and also run Apache for WordPress and Trac. What would be a stable operating system for hosting such an application? My requirements are that the core OS bundle be minimalistic (so I can install only what I want), consume very little memory, and be performant. I would also need it to provide packages for the common LAMP stack, a Java EE stack, etc. A supported package manager would be lovely. My choices are listed below: RHEL 5.3 or 5.4; Debian Lenny; Ubuntu 8.04 onwards; CentOS 5.3 or 5.4; Arch 2009.02; Gentoo 2008.0 or 10.1; Fedora 11 or 12. PS: can somebody add the rackspace tag to this, and edit this line out afterwards? Thanks

    Read the article

  • Mac Management and Security

    - by Bart Silverstrim
    I was going through some literature on managing OS X laptops and asked someone more knowledgeable than I some questions about usage scenarios with the MacBooks. I asked whether my Mac could be taken over if I visited another site for a conference, or joined the Wi-Fi at a local coffee house, where an OS X Server running Workgroup Manager (either legitimately operated by the site, or someone running a copy of OS X Server on hardware hidden somewhere on the network) pushes policies that could apparently limit my access to the Finder or impose other whiz-bang management features. He said it is indeed possible: the settings would be assigned via the DHCP server, the OS X Server would treat my Mac as a guest and hand out restrictions, and my Mac would happily accept them without notifying me or giving me an option, unlike Windows, which I believe must be joined to a domain before it becomes "managed" by Active Directory. So my question, as network admins and sysadmins with users traveling with MacBooks: is there a way to reasonably protect your users from having their machines hijacked without resorting to turning off networking all the time? Or is this not much of a security hazard? What threat does this pose to the road warriors in your businesses?

    Read the article

  • Automatically Log In & Start Up a Windows Program on Amazon's EC2 Service

    - by darkAsPitch
    How can I automatically start a program on Amazon's EC2 Windows 2008 web servers? For example, if I wanted to test the "Digg effect" on one of my web pages, how could I spin up 100 Windows 2008 servers at once, each loading one (or two?) instances of the Firefox web browser? I have placed a sample batch file in the Windows startup folder that echoes the time it was called, but it only runs when I actually log in via Remote Desktop. I don't want to have to log in to 100 servers to get my software to run :P What can I do? I am using this Windows 2008 Datacenter, Amazon-supplied AMI specifically: ami-a2698bcb

    Read the article
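
    The usual way around the "only runs at interactive logon" problem is to hand each instance a user-data script, which runs at first boot under the SYSTEM account, and to register the real workload as a scheduled task that fires at startup rather than at logon. A hedged boto3 sketch of launching such a fleet; the AMI ID, key pair, and C:\scripts\loadtest.bat are placeholders, and account instance limits may cap how many you can start at once:

      # Launch Windows instances whose user data registers a startup task,
      # so the workload runs without anyone logging in via RDP.
      # AMI ID, key pair, and the batch file path are placeholders.
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      user_data = """<script>
      schtasks /create /tn "LoadTest" /sc onstart /ru SYSTEM /tr "C:\\scripts\\loadtest.bat"
      </script>"""

      response = ec2.run_instances(
          ImageId="ami-xxxxxxxx",          # your Windows Server AMI
          InstanceType="m3.medium",
          MinCount=100,
          MaxCount=100,
          KeyName="my-keypair",
          UserData=user_data,
      )
      print("Launched:", [i["InstanceId"] for i in response["Instances"]])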

  • Linux & Windows Boot-Up Times on Amazon Web Services and Windows Azure

    - by Adron
    I've been working with Windows Azure and Amazon Web Services EC2 for a good many months now (getting close to a year) and I've repeatedly seen something that seems troubling. With AWS and Linux, I commonly get EC2 instance startup times in the 1-3 minute range. With a Windows OS on an EC2 instance, it often takes 10-20 minutes. With a Windows Azure Web or Worker Role, I often wait anywhere from 6-30 minutes for the role to start up; I assume this involves booting a Windows instance somewhere in the fabric. I know there has always been plenty of FUD about Windows vs. Linux, but I'd really like to know why Windows Server 2008 or 2003 boots so much slower in the cloud than Linux. Any specific technical information on this would be greatly appreciated! Thanks.

    Read the article
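
    One way to move past anecdotes is to time the launches yourself. A minimal boto3 sketch that measures launch-to-running and launch-to-status-checks-passed for a single instance; the AMI ID and instance type are placeholders, "running" is reported before the guest OS has finished booting, and the default waiter timeout may need raising for slow Windows boots:

      # Time how long a fresh instance takes to reach "running" and then to pass
      # its status checks (closer to "actually booted"). AMI ID is a placeholder.
      import time
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      t0 = time.time()
      instance_id = ec2.run_instances(
          ImageId="ami-xxxxxxxx", InstanceType="t3.micro", MinCount=1, MaxCount=1
      )["Instances"][0]["InstanceId"]

      ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
      print(f"running after {time.time() - t0:.1f}s")

      ec2.get_waiter("instance_status_ok").wait(InstanceIds=[instance_id])
      print(f"status checks passed after {time.time() - t0:.1f}s")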

  • Comparison of cloud hosting providers

    - by Abel
    Is there a place where we can compare* the many newly arising cloud hosting providers? From reading about each of them, they seem very different, ranging from just hosting applications (Google) to a semi-full enterprise web-serving framework (Rackspace). Comparing "by hand" takes a lot of time. All have limitations and different pricing, but what are those and how do they compare? I'm looking for an unbiased comparison site rather than a discussion of "which is the best". * I don't mean a hosting provider comparison site, of which there are many. The properties of cloud hosting providers are remarkably different and don't compare well on classical hosting provider comparison charts.

    Read the article

  • RAID-1, Western Digital Green AARS drives, cloning and the WD Align utility

    - by Jaguar
    Hello all, my current setup runs on 2x Western Digital 2500KS drives in RAID-1, using the motherboard's 780G RAID controller, on Windows XP. Everything is fine, but the drives are a bit noisy. I am considering buying 2x WD6400AARS disks, the slower 640 GB 'green' drives, which also use Advanced Format 4 KB sectors. This means that under Windows XP the partition has to be aligned to work properly, otherwise there is a performance penalty. There are two questions here. First, the Green drives from WD are all slower and are (according to WD) susceptible to being dropped by the controller. Does anyone have experience with this? Is there a possibility the controller will drop a drive, and if so, can I do anything about it? Secondly, Western Digital provides a utility to align the partition. Will the utility see the drives in question, given that the operating system only sees one logical disk? I will be making the transition using a cloning tool (most probably Norton Ghost) unless I can't find a solution or a clear answer, in which case I'll just buy a Windows 7 license and do a clean install. Thanks in advance.

    Read the article
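
    The alignment requirement itself is just arithmetic: a partition's starting byte offset must be a multiple of the 4 KiB physical sector size, and Windows XP's default start at logical sector 63 (byte 32,256) is not. A quick check, assuming the 512-byte logical sectors these drives report:

      # Is a partition start sector aligned to an Advanced Format drive's 4 KiB
      # physical sectors? Assumes 512-byte logical sectors, as WD AARS drives report.
      LOGICAL_SECTOR = 512
      PHYSICAL_SECTOR = 4096

      def is_aligned(start_sector: int) -> bool:
          return (start_sector * LOGICAL_SECTOR) % PHYSICAL_SECTOR == 0

      print(is_aligned(63))    # False - Windows XP default partition start, misaligned
      print(is_aligned(2048))  # True  - Vista/7 default (1 MiB boundary), aligned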

  • MySQL data loss - post-mortem analysis - Rackspace Cloud Server

    - by marfarma
    After a recent 'emergency migration' of a Rackspace cloud server, the MySQL databases on our server snapshot image proved to be days out of date relative to the backup date. And yet files that had been uploaded through the affected web app had been written to the file system: the related metadata written to the database was lost, but the files themselves were backed up. Once I was able to manually access the MySQL data files before the MySQL server started (the server was configured to start MySQL on boot), I could see that the modification times of ib_logfile1, ib_logfile0 and ibdata1 were days old. As with this poster (mysql data loss after server crash), it's as if some caching controller had told the OS / MySQL server that it had committed data that was still in cache, and that data was lost instead of flushed. I can't quite wrap my head around how the uploaded files got written but the database data did not; I would have thought that any cache would have been flushed system-wide, rather than process by process. Any suggestions as to how this might have happened?

    Read the article
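
    When doing this kind of post-mortem it is worth recording how aggressively MySQL was told to flush, since innodb_flush_log_at_trx_commit and the flush method determine how much committed data can legitimately sit in a cache. A small sketch that reads those variables, assuming the mysql-connector-python package and placeholder credentials:

      # Dump the InnoDB flush/durability settings relevant to this failure mode.
      # Host and credentials are placeholders; requires mysql-connector-python.
      import mysql.connector

      conn = mysql.connector.connect(host="localhost", user="root", password="secret")
      cur = conn.cursor()

      cur.execute("SHOW VARIABLES LIKE 'innodb_flush%'")
      for name, value in cur.fetchall():
          print(name, "=", value)

      cur.execute("SHOW VARIABLES LIKE 'sync_binlog'")
      print(cur.fetchone())

      cur.close()
      conn.close()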

  • How do I setup a cloud server to share and sync files on ESXi hosted environment?

    - by Manoj Agarwal
    I want to set up a private cloud network for my company for syncing and sharing files. Instead of using existing players like Dropbox, Google Drive, Amazon, etc., I want to set up my own cloud infrastructure. The requirement is to easily share private data internally within the organization. I already have an ESXi-based cloud environment running several virtual machines. Will this be feasible and achievable?

    Read the article

  • Recommendations for distributed processing/distributed storage systems

    - by Eddie
    At my organization we have a processing and storage system spread across two dozen Linux machines that handles over a petabyte of data. The system right now is very ad hoc; processing automation and data management are handled by a collection of large Perl programs on independent machines. I am looking at distributed processing and storage systems to make it easier to maintain, to distribute load and data evenly with replication, and to grow in disk space and compute power. The system needs to handle millions of files varying in size from 50 megabytes to 50 gigabytes. Once created, the files will not be appended to, only replaced completely if need be. The files need to be accessible via HTTP for customer download. Right now, processing is automated by Perl scripts (which I have complete control over) that call a series of other programs (which I don't have control over because they are closed source) that essentially transform one data set into another. No data mining is happening here. Here is a quick list of things I am looking for:
    - Reliability: the data must be accessible over HTTP about 99% of the time, so I need something that replicates data across the cluster.
    - Scalability: I want to be able to add more processing power and storage easily and to rebalance the data across the cluster.
    - Distributed processing: easy and automatic job scheduling and load balancing that fits the processing workflow I briefly described above.
    - Data location awareness: not strictly required but desirable. Since data and processing will be on the same set of nodes, I would like the job scheduler to place jobs on or close to the node the data actually resides on, to cut down on network traffic.
    Here is what I've looked at so far. Storage management:
    - GlusterFS: looks really nice and easy to use, but doesn't seem to have a way to find out which node(s) a file actually resides on to supply as a hint to the job scheduler.
    - GPFS: seems like the gold standard of clustered filesystems; meets most of my requirements except, like GlusterFS, data location awareness.
    - Ceph: seems way too immature right now.
    Distributed processing:
    - Sun Grid Engine: I have a lot of experience with this and it's relatively easy to use (once it is configured properly, that is), but Oracle got its icy grip around it and it no longer seems very desirable.
    Both:
    - Hadoop/HDFS: at first glance Hadoop looked perfect for my situation, distributed storage plus job scheduling, and it was the only thing I found that would give me the data location awareness I wanted. But I don't like the NameNode being a single point of failure, and I'm not really sure the MapReduce paradigm fits the type of processing workflow I have; it seems you need to write all your software specifically for MapReduce instead of just using Hadoop as a generic job scheduler.
    - OpenStack: I've done some reading on this but I'm having trouble deciding whether it fits my problem well or not.
    Does anyone have opinions or recommendations for technologies that would fit my problem? Any suggestions or advice would be greatly appreciated. Thanks!

    Read the article
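
    To make the data-location-awareness requirement concrete, here is a toy illustration of what it buys: the scheduler simply prefers whichever node already holds a replica of the job's input and only falls back to the least-loaded node otherwise. The node names and replica map below are made up; no particular framework is implied:

      # Toy data-locality-aware dispatch: prefer nodes that already hold a replica
      # of the input file; otherwise pick the least-loaded node.
      replicas = {
          "dataset-0001.dat": {"node03", "node07"},
          "dataset-0002.dat": {"node01", "node04"},
      }
      load = {"node01": 2, "node03": 5, "node04": 1, "node07": 0}

      def pick_node(input_file: str) -> str:
          candidates = replicas.get(input_file, set()) or load.keys()
          return min(candidates, key=lambda n: load[n])

      job_node = pick_node("dataset-0001.dat")
      load[job_node] += 1
      print(job_node)   # node07 - holds a replica and is currently idle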

  • Create an AWS HVM Linux AMI from an Existing Paravirtual Linux AMI

    - by javacavaj
    Is it possible to create a hardware virtual machine (HVM) AMI from an existing paravirtual (PV) AMI? My initial thought was to start a new PV instance and use the ec2-create-image command to create a new image while specifying HVM as the virtualization type. However, ec2-create-image does not have a command-line parameter to specify the virtualization type. Is there another way to go about this?

    Read the article
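
    ec2-create-image indeed has no virtualization-type switch; the usual workaround is to snapshot the instance's root volume and register a new AMI from that snapshot, which does let you specify HVM. A hedged boto3 sketch of that registration step; the snapshot ID and device name are placeholders, and a PV root volume generally needs a bootloader an HVM instance can use (e.g. GRUB installed to the MBR) before it will actually boot, so this call alone is not always sufficient:

      # Register an HVM AMI from an existing root-volume snapshot.
      # Snapshot ID and device name are placeholders; the volume must contain
      # a bootloader an HVM instance can use.
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      image = ec2.register_image(
          Name="my-app-hvm",
          Architecture="x86_64",
          RootDeviceName="/dev/xvda",
          VirtualizationType="hvm",
          BlockDeviceMappings=[{
              "DeviceName": "/dev/xvda",
              "Ebs": {"SnapshotId": "snap-xxxxxxxx", "VolumeType": "gp2",
                      "DeleteOnTermination": True},
          }],
      )
      print("New AMI:", image["ImageId"])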

  • EC2 Configuration

    - by user123683
    I am trying to create a server structure for my EC2 account. The design I have chosen consists of two instances running in different availability zones, an elastic load balancer, an auto-scaling group with CloudWatch monitoring configured, and a security group defining access rules for the instances. This setup is to support an online web application written in PHP.
    - I am trying to decide which is the better policy: store the MySQL DB on a separate instance, or store the MySQL DB on an attached EBS volume (from what I know, auto-scaling will not replicate the attached EBS volume but will generate new instances from a chosen AMI - is this correct?).
    - For the AMI I plan to use a basic Amazon Linux 64-bit image and install Bastille (maybe OSSEC), but I am also looking to use an encrypted file system. Are there any issues with an encrypted file system and communication between the DB and the web app that I need to be aware of? Are there any communication issues using the encrypted filesystem on the instance housing the web app?
    - I was going to launch a second instance, or attach a second volume, in the second availability zone to act as a standby for the database. I'm just looking for suggestions about how to get the two DBs to talk - will this be a big task?
    - Regarding security updates, is it best to create a recent snapshot and just relaunch, allowing Amazon to install updates on launch, or is the yum update mechanism a suitable alternative? Is it better practice to relaunch rather than install updates that force a restart?
    - I plan to create two AMIs, one for the app server and one for the DB, each with the same security measures in place - is this reasonable? I figure it is a better policy than including unnecessary additional applications in an AMI I intend to use.
    - My plan for backup is to create periodic snapshots of the web app and DB instances (if I use an additional EBS volume instead of separate instances, my understanding is that the EBS volume will persist in S3 storage in the event of an unexpected termination, and I can create snapshots of the volume for backup purposes).
    Thanks in advance for suggestions and advice. I am new to EC2 and may have described unnecessary overkill, but I want to implement what can be considered a best-practice solution, so all advice is appreciated.

    Read the article
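
    Whichever instance/volume layout is chosen, the periodic-snapshot half of the backup plan is easy to automate. A small boto3 sketch that snapshots and tags the data volume; the volume ID is a placeholder, and for an application-consistent MySQL snapshot the tables should be flushed and locked (or the filesystem frozen) first:

      # Point-in-time snapshot of the EBS volume holding the MySQL data files.
      # Volume ID is a placeholder; quiesce MySQL first for a consistent snapshot.
      import datetime
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      snapshot = ec2.create_snapshot(
          VolumeId="vol-xxxxxxxx",
          Description=f"mysql-data backup {datetime.datetime.utcnow():%Y-%m-%d %H:%M}",
      )
      ec2.create_tags(Resources=[snapshot["SnapshotId"]],
                      Tags=[{"Key": "purpose", "Value": "db-backup"}])
      print("Snapshot started:", snapshot["SnapshotId"])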

  • File sharing for small, distributed, non-technical, non-profit organization?

    - by mnmldave
    Problem: I've started volunteering for a small non-profit with fewer than five non-technical Windows users who need to share 20-30 GB of files (Office documents, images, PDFs, etc.) amongst themselves online. Background: the users are accustomed to a Windows network share on a machine that backed up their data locally. An on-site "disaster" has forced them to work from their homes for a while and to re-evaluate their file-sharing needs (the office was located in an old building with obvious electrical issues, etc.). Time from volunteers with IT experience seems to be hard to come by, and demonstrably minimizing energy consumption is a nice-to-have. I'm currently considering Jungle Disk (a Desktop account shared amongst the handful of employees, since their TOS and my inquiries to their help desk seem to indicate this is permissible). It appears easy to use, inexpensive, and secure, has backup functionality, and can scale to accommodate more data when needed. I've not used it myself, though (I have only used Dropbox for personal use), and systems isn't my area of expertise, so I'm worried I might be jumping on a bandwagon. That said, any suggestions, thoughts or similar experiences would be really appreciated.

    Read the article

  • How to upload stuff to Amazon EC2 Windows instance?

    - by JohnIdol
    I've never used Amazon EC2. I am thinking of testing a few instances for running the intensive computation processes I need, rather than buying real hardware. I am given to understand that it is quite easy to set up, but I have no clue how it actually works in terms of transferring data to my EC2 instances. So the question is: how can I upload stuff to my instance? Any help appreciated!

    Read the article
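
    For bulk input data, a common pattern is to stage files in S3 and pull them down from inside the instance, rather than pushing them over RDP drive mapping or SCP. A minimal boto3 sketch; the bucket and key names are placeholders, and the instance needs credentials or an IAM role to run the download half:

      # Stage a file in S3 from your workstation, then fetch it inside the instance.
      # Bucket and key names are placeholders.
      import boto3

      s3 = boto3.client("s3")

      # On your workstation: upload the input data.
      s3.upload_file("dataset.zip", "my-staging-bucket", "inputs/dataset.zip")

      # On the EC2 Windows instance: pull it back down.
      s3.download_file("my-staging-bucket", "inputs/dataset.zip", r"C:\work\dataset.zip")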

  • Should I persist images on EBS or S3?

    - by enes
    Hi; I am migrating my Java, Tomcat, MySQL server to AWS EC2. I have already attached an EBS volume for storing the MySQL data. In my web application people may upload images, so I need to persist them. There are two alternatives in my mind: 1) save uploaded images to the EBS volume, or 2) use the S3 service. The following are my notes; please be skeptical about them, as my expertise is not in servers but in software development. EBS plus: S3 storage is more expensive ($0.15/GB vs. $0.10/GB). S3 plus: serving statics from EBS may affect my web server's performance negatively. Is this true? Does serving images affect server performance notably? With S3, my server will not be responsible for serving statics. S3 plus: serving statics from EBS may incur I/O cost, though probably minor. EBS plus: people say EBS is faster. S3 plus: people say S3 is safer for persistence. EBS plus: no need to learn an API; it is straightforward to save the images to the EBS volume. In short, I cannot decide, and will be happy if you can guide me. Thanks

    Read the article
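
    If S3 wins, the flow is: store each uploaded image as an S3 object and hand the browser a URL for it, so Tomcat never serves the bytes itself. The application in the question is Java, so the Python sketch below only illustrates the flow; the bucket and key names are placeholders:

      # Store an uploaded image in S3 and produce a link for the browser,
      # keeping static serving off the web server. Bucket/key are placeholders.
      import boto3

      s3 = boto3.client("s3")
      bucket, key = "my-image-bucket", "uploads/user42/photo.jpg"

      s3.upload_file("photo.jpg", bucket, key,
                     ExtraArgs={"ContentType": "image/jpeg"})

      # Either make the object public and use its S3 URL, or issue a time-limited link:
      url = s3.generate_presigned_url("get_object",
                                      Params={"Bucket": bucket, "Key": key},
                                      ExpiresIn=3600)
      print(url)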

  • VMware / Citrix Xen-type environment vs. Ubuntu Cloud / Amazon EC2-type environment

    - by Nick Gorbikoff
    Hello. A bit of background: we run a small in-house data center with about 20 virtualized servers (Debian Lenny, Windows 2003, Windows XP and Windows 7 machines) in a Citrix Xen pool running on 3 host servers and a SAN, plus a few standalone machines running legacy or specialized software that can't be virtualized. There is a big push everywhere now to move to the cloud, so we are considering Ubuntu Cloud. I was wondering what the pros and cons are of running a virtualized pool vs. a cloud to run all those machines? Thank you

    Read the article

< Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >