Search Results

Search found 11836 results on 474 pages for 'cloud dev'.


  • /dev/input/uinput Device appears to be 'broken'

    - by Adam Luchjenbroers
    I'm trying to set up Pystromo so that I can remap the keys on my Belkin N52TE gamepad. Pystromo basically captures the keystrokes and then outputs the remapped keystrokes to the uinput device. However, at the moment it simply swallows the input and outputs absolutely nothing. I've tracked the issue to something being wrong with my uinput device, with the smoking gun being:

        # ls -l /dev/input/uinput
        crw-rw---- 1 root plugdev 10, 223 Dec 31 2009 /dev/input/uinput
        # cat /dev/input/uinput
        cat: /dev/input/uinput: No such device

    The uinput module is loaded, and can be clearly seen via lsmod. Has anyone seen this before, or can anyone think of something worth attempting?

    Current setup: Gentoo Linux, kernel 2.6.32 (Gentoo Sources 2.6.32-r1), HP DV7 laptop.

    dmesg: dmesg | grep uinput outputs nothing, and no new lines appear if I run modprobe -r uinput && modprobe uinput. Yet the uinput module can clearly be seen when running lsmod:

        # lsmod | grep uinput
        uinput 6200 0

    lsusb:

        # lsusb
        Bus 005 Device 003: ID 050d:0200 Belkin Components
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 002: ID 1532:0101 Razer USA, Ltd
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 002: ID 5986:0143 Acer, Inc
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 002: ID 03f0:171d Hewlett-Packard Wireless (Bluetooth + WLAN) Interface [Integrated Module]
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    lsusb -v: PasteBin

    Update: Updating evdev and hal seems to have partially fixed it. /dev/input/uinput still can't be accessed, but Pystromo is now remapping keys successfully. I'm a little mystified about what's going on here, but it seems that my understanding of how all this works is flawed. Since I've posted a bounty, I'll leave this here for someone to post an explanation of how user-space input devices work under the hood.
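
    A rough first check, assuming the node is really meant to be /dev/input/uinput with major 10 / minor 223 as shown above (some setups expose /dev/uinput instead), is to recreate the device file by hand and watch the kernel log while reopening it:

        # modprobe -r uinput && modprobe uinput
        # dmesg | tail
        # rm /dev/input/uinput
        # mknod -m 660 /dev/input/uinput c 10 223
        # chgrp plugdev /dev/input/uinput
        # cat /dev/input/uinput

    If the open still fails with "No such device" after the node has been recreated with the right numbers, the problem is in the module registration rather than in the device file itself.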

    Read the article

  • /dev/fuse "permission denied" even when member of fuse group

    - by steeef
    I have a backup script scheduled on a Debian 5.0 x86 server, via sshfs. However, when I attempt to mount the remote directory, I receive:

        failed to open /dev/fuse: Permission denied

    ls -l /dev/fuse returns:

        crwxrwxr-x 1 root fuse 10, 229 2010-11-12 09:08 /dev/fuse

    id backup returns:

        uid=501(backup) gid=501(backup) groups=501(backup),46(plugdev),108(fuse)

    The only way I can get the directory to mount is if I run chmod a+w /dev/fuse, but this is reset at some point during the day. It's a kludge though, and I'd rather figure out why the group permissions aren't working.
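
    One thing worth ruling out (a guess, not a confirmed diagnosis): supplementary groups are only picked up when a session starts, so a job scheduled before the backup user was added to the fuse group can still run without it. A rough sketch of how to check from inside the job's own context, and how to force the group if that turns out to be the gap - the remote host and mount point below are placeholders:

        # run these from within the scheduled job itself, not an interactive shell
        id
        ls -l /dev/fuse "$(which fusermount)"
        # if the effective group list is missing fuse, run the mount under that group
        sg fuse -c 'sshfs backup@remote.example.com:/data /home/backup/mnt'

    A udev rule that pins GROUP="fuse", MODE="0660" on the fuse device is the cleaner fix if something keeps resetting the permissions on /dev/fuse during the day.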

    Read the article

  • DegradedArray event on /dev/md0 without actually having a RAID

    - by J. Stoever
    Since I upgraded from Ubuntu LTS 10 to LTS 12, I have been getting error messages like:

        N 60 mdadm monitoring Mon Sep 3 06:38 31/1022 DegradedArray event on /dev/md2:Ubuntu-1004-lucid-64-minimal
        N 61 mdadm monitoring Mon Sep 3 06:38 31/1022 DegradedArray event on /dev/md0:Ubuntu-1004-lucid-64-minimal
        N 62 mdadm monitoring Mon Sep 3 06:38 31/1022 DegradedArray event on /dev/md1:Ubuntu-1004-lucid-64-minimal

    We do not have a RAID setup, and only have a single hard drive. Ideas?
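
    For what it's worth, mails like these usually come from the mdadm monitor daemon reading left-over array definitions (often baked into a hoster's install image) rather than from a physically degraded disk. A rough way to see what it thinks it is monitoring, assuming a stock Ubuntu 12.04 layout:

        cat /proc/mdstat                      # are any md devices actually assembled?
        sudo mdadm --detail /dev/md0          # what does mdadm believe md0 is made of?
        grep ARRAY /etc/mdadm/mdadm.conf      # stale ARRAY definitions?

    If there genuinely is no RAID, removing the stale ARRAY lines from /etc/mdadm/mdadm.conf and running sudo update-initramfs -u stops the daily DegradedArray mail. If /proc/mdstat does show assembled arrays, the message is real and worth investigating before touching anything.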

    Read the article

  • /dev/shm (shared memory) on linux

    - by Kirzilla
    Hello, let's imagine that we have 8 GB of RAM on a server, and I remount /dev/shm at 4 GB: mount -o remount,size=4G /dev/shm. Will this memory be strictly reserved for shared memory, or, if /dev/shm is empty, can it be used by regular applications (web server, PHP, etc.)? P.S. Sorry for my English. I'm asking because I've just checked df -h and found tmpfs 6.0G 0 6.0G 0% /dev/shm on an 8 GB RAM server. I don't know who made this setup, but it seems awful to me. Thank you!
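
    For the question as asked, a small experiment makes the behaviour visible: a tmpfs size is only a ceiling and pages are allocated lazily, so an empty /dev/shm costs essentially no RAM, and the memory stays available to ordinary processes until files are actually written there. A sketch:

        df -h /dev/shm                                          # shows the ceiling, e.g. 4.0G
        free -m                                                 # note memory use before writing anything
        dd if=/dev/zero of=/dev/shm/testfile bs=1M count=1024   # now ~1 GB of RAM is really consumed
        free -m
        rm /dev/shm/testfile && free -m                         # and released again
        mount -o remount,size=4G /dev/shm                       # the cap itself can be changed on the fly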

    Read the article

  • Is there an alternative to /dev/urandom?

    - by altCognito
    Is there some faster way than /dev/[u]random? Sometimes I need to do things like cat /dev/urandom > /dev/sdb. The random devices are "too" secure and, unfortunately, too slow for that. I know that there are wipe and similar tools for secure deletion, but I suppose there are also some on-board means to do that in Linux.
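
    A couple of common stand-ins, assuming the goal is overwriting a disk rather than cryptographic-grade randomness (the device name is just an example - double-check it before running anything destructive):

        # one pass of pseudo-random data from coreutils, much faster than /dev/urandom
        shred -v -n 1 /dev/sdb
        # or plain zeros, if unpredictability doesn't matter at all
        dd if=/dev/zero of=/dev/sdb bs=1M
        # or stretch a small random seed with a stream cipher (flags are a sketch to adapt)
        openssl enc -aes-256-ctr -nosalt -pass pass:"$(head -c 32 /dev/urandom | base64)" < /dev/zero > /dev/sdb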

    Read the article

  • Timestamp Updating Constantly on /dev/null

    - by motorleague
    I've been working on a problem with a /dev/null file on an AIX system (just for background, it looks as though it was inadvertently deleted and recreated as a normal file by somebody), but in trying to determine what caused the problem, I noticed that the timestamp on it seems to update every minute. I've observed this on several AIX servers at my workplace. At present I can't entirely rule out this being something specific to the application used at my workplace, so I compared with CentOS- and Debian-based computers at home last night. The CentOS box, which runs 24 hours, had a mod time on /dev/null of around 4 days ago (during which time it was essentially just being used as a web browser and multimedia player, although it would have had active but essentially unused Apache, MySQL and VMM processes running in the background). The timestamp on /dev/null on the Debian machine, which was a just-booted laptop, pretty much reflected the boot time, but I tested redirecting STDIN from, and STDOUT to it, and the modification time was unchanged (I'm not 100% sure if directing data to /dev/null constitutes "writing to it" in the way it would for a normal file). So my question is essentially: could anybody please offer any advice with regard to what circumstances (permission changes etc. aside) might cause the timestamp on /dev/null to update? Thanks very much for any suggestions. Alex.
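
    For anyone comparing on their own systems, a quick experiment on the Linux side mirrors what was tried above - plain reads and writes through a character device don't normally touch its timestamps - and shows the usual repair if the node has ever been clobbered into a regular file (a sketch for Linux; AIX uses the same mknod idea with its own device numbers):

        stat /dev/null              # confirms "character special" and shows atime/mtime/ctime
        echo test > /dev/null
        cat /dev/null > /dev/null
        stat /dev/null              # mtime should be unchanged after ordinary redirection
        # usual repair if /dev/null has been replaced by a regular file (Linux major 1, minor 3)
        rm /dev/null && mknod -m 666 /dev/null c 1 3

    Given that, a timestamp that moves every minute points at something re-creating or explicitly touching the node (an init, monitoring or hardening script, for example) rather than at normal programs redirecting output to it.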

    Read the article

  • How to set up multi users on dev server with git and github

    - by Derek Organ
    I'm working on a LAMP application. We have 2 servers (Debian), Live and Dev. I constantly work on the dev main to add new features and fix bugs, and when I'm happy that all works well I scp the relevant code to the Live system. The database (mysql) is local to each machine. Now this is a pretty basic setup really, and I want to improve the workflow a bit. I use git and github for version control. Admittedly I've only really used one branch. There can be 3 different developers who work on the code at different times. We all use the same linux username to connect to the dev server and edit the code directly when needed. I usually then commit and push the code at the end of the day to github. One thing to bear in mind is that it isn't easy to run this code on a local machine, as there are many apache and subdomain configurations that wouldn't work locally, so it is important to work on the dev server rather than locally. I need to create a new process because we need to have a main trunk now and a branch with a big code re-write. What is the best way to do this? Should I create different unix logins for each developer and set up different working areas on the dev server for their changes? e.g. /var/www/mysite_derek /var/www/mysite_paul /var/www/mysite_mike My thinking is they can do a pull from the main branch, then create their own branch and merge it back in. I'm not sure how this will work though with git locally and with github. Will I need to create different github user accounts as well? I'd like to do this the 'right' way and future-proof for having lots of potential developers, but I also don't want to over-complicate it. A simple and elegant solution is preferred. Any recommendations or suggestions?
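
    One possible shape for this (a sketch, with made-up paths and a made-up repository name): give each developer their own login and their own clone on the dev server, all pointing at the same GitHub repository, and keep the rewrite on a long-lived branch. Separate GitHub accounts are only needed if you want per-person access control and commit attribution; the clones themselves can share a deploy key or use each developer's own key.

        # as derek, in his own working area (repository URL is illustrative)
        git clone git@github.com:example/mysite.git /var/www/mysite_derek
        cd /var/www/mysite_derek
        git checkout -b rewrite            # long-lived branch for the big re-write
        # ...edit, test against derek's apache vhost...
        git commit -am "rework templating layer"
        git push -u origin rewrite
        # when the rewrite is ready, merge it back into the main branch
        git checkout master
        git pull origin master
        git merge rewrite
        git push origin master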

    Read the article

  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4), the EM development team has added new functionality to assist the EM Administrator in monitoring the health of the EM infrastructure. Taking feedback delivered from customers directly and through customer advisory boards, some nice enhancements have been made to the “Manage Cloud Control” sections of the UI, commonly known in the EM community as “the MTM pages” (MTM stands for Monitor the Monitor). This part of the EM Cloud Control UI is viewed by many as the mission control for EM Administrators. In this post we’ll highlight some of the new information that’s on display in these redesigned pages and explain how the information they present can help EM administrators identify potential bottlenecks or issues with the EM infrastructure. The first page we’ll take a look at is the newly designed Repository information page. You can get to this from the main Setup menu, through Manage Cloud Control, then Repository. Once this page loads you’ll see the new layout that includes 3 tabs containing more drill-down information. The Repository Tab The first tab, Repository, gives you a series of 6 panels or regions on screen that display key information that the EM Administrator needs to review from time to time to ensure that their infrastructure is in good health. Rather than go through every panel, let’s call out a few and leave you to explore the others later on your own EM site. Firstly, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and, more critically, three important elements of information relating to availability and reliability: Is the database in Archive Log mode? Is the database using Flashback? When was the last database backup taken? In the test environment above the answers are not too worrying; however, Production environments should have at least Archivelog mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback), and all Production sites should have a backup. In this case the backup information in the Control file indicates that no backups have been recorded. The next region of interest to note on this page shows key information around the Repository configuration, specifically the initialisation parameters (from the spfile). If you’re storing your EM Repository in a Cluster Database you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you’ll note there is now a check performed on the active configuration to ensure that you’re using, at the very least, Oracle's minimum recommended values. Should the values in your EM Repository not meet these requirements, they will be flagged in this table with a red X for non-compliance. You can of course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally the run-time values, if the parameter allows dynamic changes). The last region to call out on this page before moving on is the new-look Repository Scheduler Job Status region. This region is an update of a similar region seen on previous releases of the MTM pages in Cloud Control, but some important new functionality that customers have requested has been added. First up: Restarting Repository Jobs.
As you can see from the graphic, you can now optionally select a job (by selecting the row in the UI table element) and click on the Restart Job button to take care of any jobs which have stopped or stalled for any reason. Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation. You can now take care of this directly from within the UI. Next, you’ll see that a feature has been added to allow the EM administrator to customise the run-time for some of the background jobs that run in the Repository. We heard from some customers that ensuring these jobs don’t clash with Production backups, etc., is a key requirement. This new functionality allows you to select the pencil icon to edit the schedule time for these more resource-intensive background jobs and modify the schedule to avoid clashes like this. Moving on to the next tab, let’s select the Metrics tab. The Metrics Tab There are some big changes here: this page contains new information regions that help the Administrator understand the direct impact the in-bound metric flows are having on the EM Repository. Many customers have provided feedback that they are in the dark about the impact of adding new targets, or large numbers of new hosts or new target types, into EM and what this does to the Repository. This page helps the EM Administrator get to grips with this. Let’s take a quick look at two regions on this page. First up, there’s a bubble chart showing a comprehensive view of the top resource consumers of metric data, over the last 30 days, charted as the number of rows loaded against the number of collections for the metric. The size of the bubble indicates a relative volume. You can see from this example above that a quick glance shows that Host metrics are the largest inbound flow into the repository when measured by number of rows. Closely following behind this, though, are a large number of collections for Oracle Weblogic Server and Application Deployment. Taken together, the Host collections amount to around 0.7Mb of data. The total information collection for Weblogic Server and Application Deployments is 0.38Mb and 0.37Mb respectively. If you want this breakdown of the volume of data collected, simply hover over the bubble in the chart and you’ll get a floating tooltip showing the information. Clicking on any bubble in the chart takes you one level deeper into a drill-down of the Metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost - in terms of Number of Rows, Number of Collections and Storage cost of data for each Metric type. Looking at another panel on this page we can see a different view on this data. This one shows the Top N metrics (the drop-down allows you to select 10, 15 or 20) sorted by volume of data. In the case above we can see that the largest metric collection (by volume, over the last 30 days) is the information about OS Registered Software on a Host target. Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c. It’s a great tool for identifying the cause of a sudden increase in Repository storage consumption or Redo log and Archive log generation.
Using the information on this page, EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing the most load, if appropriate. The last tab we’ll look at on this page is the Schema tab. The Schema Tab Selecting this tab brings up a window onto the SYSMAN schema with a focus on space usage in the EM Repository. Understanding which tablespaces are growing, and at what rate, is essential information for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for the users. Not least because managing storage well ensures continued availability of EM for monitoring purposes. The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time. You can see the upward trend here, showing that storage in the EM Repository has been steadily consumed over the last few days. This is normal, as the EM being used here is brand new, with Agents being added daily to bring targets into monitoring. If your Enterprise Manager configuration has reached a steady state over a period of time, where the number of new inbound targets is relatively small and the metric collection settings are fairly uniform and standardised (using Templates and Template Collections), you’re likely to see a trend of space allocation that plateaus. The table below the trend chart shows the Top 20 Tables/Indexes sorted descending by order of space consumed. You can switch the trend view chart and corresponding detail table by choosing a different tablespace in the EM Repository using the drop-down picker on the top right of this region. The last region to highlight on this page shows information about the Purge policies in effect in the EM Repository. This information is useful to illustrate to EM Administrators the default purge policies in effect for the different categories of information available in the EM Repository. Of course, it’s also been a long-requested feature to have the ability to modify these default retention periods, and you can do this using this screen. As there are interdependencies between some data elements, you can’t modify retention policies on a feature-by-feature basis. Instead, retention policies take categories of information and bundle them together in Groups, and retention policies are modified at the Group level. Understanding the impact of this really deserves a blog post all of its own, as modifying these can have a significant impact on both the EM Repository’s storage footprint and its performance. For now, we’re just highlighting these features' visibility on these new pages. If you're a user of EM12c, we hope the new features you see here address some of the feedback that’s been given on these pages over the past few releases. We’ll look out for any comments or feedback you have on these pages!
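
    Outside the console, the same three Repository Details checks (archive log mode, flashback, last backup) can be sanity-checked directly on the repository database host; a rough sketch, assuming SYSDBA access with sqlplus and rman on the path:

        echo "select log_mode, flashback_on from v\$database;" | sqlplus -s / as sysdba
        echo "list backup summary;" | rman target /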

    Read the article

  • Progress 4GL and DB to Oracle and cloud

    - by llaszews
    Getting from client/server-based 4GLs and databases, where the 4GL is tightly linked to the database, to Oracle and the cloud is not easy. The least risky and least expensive option (in the short term) is to use the Progress OpenEdge DataServer for Oracle: Progress OpenEdge DataServer. This eliminates the need to migrate the Progress 4GL to Java/J2EE. The database can be migrated using Ispirer SQLWays: Ispirer SQLWays Progress DB migration tool. The Progress 4GL can remain as is. In order to get the application on the cloud there are a few approaches: 1. VDI - Virtual Desktop Infrastructure is a way to put all of the users' desktops in a centralized environment off the desktop. This is great in cases where it is not just one client/server application that the user needs access to. In many cases, users will utilize MS Access, MS Excel, Crystal Reports and other tools to get at the Progress DB and other centralized databases. VMware's acquisition of Wanova shows how VDI is growing in usage. Citrix is the 800-pound gorilla in the VDI space with Citrix WinFrame (now called XenDesktop). Oracle offers a VDI solution that it picked up when it acquired Sun. 2. Hypervisor server virtualization - Of course you can place applications written in client/server languages like Progress 4GL by using server virtualization from Oracle, VMware, Microsoft, Citrix and others. 3. Microsoft Remote Desktop Services (aka Terminal Services Client) - The entire idea is to eliminate all the client/server desktop devices and connections which require desktop software and database drivers. A solution for removing database drivers from the desktop is to use DataDirect SQLLink.

    Read the article

  • Windows Intune, Cloud Desktop management

    - by David Nudelman
    As a part of Microsoft's cloud computing strategy, the Windows Intune beta was released today. Here’s a quick overview of what customers and IT consultants can do with the cloud service component of Windows Intune: Manage PCs through a web-based console: Windows Intune provides a web-based console for IT to administer their PCs. Administrators can manage PCs from anywhere. Manage updates: Administrators can centrally manage the deployment of Microsoft updates and service packs to all PCs. Protection from malware: Windows Intune helps protect PCs from the latest threats with malware protection built on the Microsoft Malware Protection Engine that you can manage through the web-based console. Proactively monitor PCs: Receive alerts on updates and threats so that you can proactively identify and resolve problems with your PCs—before they impact end users and your business. Provide remote assistance: Resolve PC issues, regardless of where you or your users are located, with remote assistance. Track hardware and software inventory: Track hardware and software assets used in your business to efficiently manage your assets, licenses, and compliance. Set security policies: Centrally manage update, firewall, and malware protection policies, even on remote machines outside the corporate network. And here is a quick video about Windows Intune. For support and questions go to: TechNet Forums for Intune. Regards, David Nudelman

    Read the article

  • Cloud Computing and Data Utilization

    - by Ahsan Alam
    Someone recently asked me “is cloud computing going to change the way we perceive data?”. My first instinct was “of course”; but I restrained myself and thought for a moment. Then my answer was “no”. Why do I feel that way? Technology and business have evolved quite a bit in the past few decades; however, the need to effectively view and utilize data hasn't changed. It is not uncommon to see organizations rely on multiple database management systems (DBMS). Applications and systems are often built to utilize information from all these data sources and effectively present it to users. In addition to multiple DBMS, corporations are also housing their systems across numerous data centers. In fact, systems and data can reside anywhere around the world with the advent of globalization. Cloud-based systems have simply provided us with a different place to maintain our data, nothing more. Hosting costs, security and accessibility are different issues; however, the way we utilize and view these data remains the same.

    Read the article

  • Is DQS-in-the-cloud on its way?

    - by jamiet
    LinkedIn profiles are always a useful place to find out what's really going on in Microsoft. Today I stumbled upon this little nugget from former SSIS product team member Matt Carroll: March 2012 – December 2012 (10 months), Redmond, WA Took ownership of the SQL 2012 Data Quality Services box product and re-architected and extended it to become a cloud service. Led team and managed product to add dynamic scale, security, multi-tenancy, deployment, logging, monitoring, and telemetry as well as creating new Excel add-in and new ecosystem experience around easily sharing and finding cleansing agents. Personally designed, coded, and unit tested in-memory trigram matching algorithm core to better performance, scale and maintainability. Delivered and supported successful private preview of the new service prior to SQL wide reorganization. http://www.linkedin.com/profile/view?id=9657184 Sounds as though a Data-Quality-Services-in-the-cloud (which I spoke of as being a useful addition to Microsoft's BI portfolio in my previous blog post Thoughts on Power BI for Office 365) might be on its way some time in the future. And what's this SQL wide reorganization? Interesting stuff. @Jamiet

    Read the article

  • TechEd 2012: A Little Cloud And Too Little Windows Phone

    - by Tim Murphy
    It is Monday afternoon and the last couple of sessions have been disappointing. I started out in the Nokia: Learning to Tile session. I guess I should have read the summary more closely because it turned out to be more of a Nokia/WP7 history and sales pitch. “I’m outa here!” I made a quick venue change and now we are learning about Private Cloud Architecture. The topic and the material were very informative. The speaker even had a couple of quotable statements. The first quote was “You can trust me … I’m a doctor”. The second was a new acronym (at least for me): CAVE – committee against virtually everything. I am sure I have dealt with them more than once in my career. Unfortunately he didn’t just have a doctorate; the presentation was overdone like a medical journal. While I didn’t enjoy the presentation, I am looking forward to getting my hands on the slides to review. Here is looking forward to the next sessions. del.icio.us Tags: Windows Phone,Cloud,Architecture

    Read the article

  • Oracle Enterprise Manager 12c R3 introduces advancements in cloud lifecycle and operations management

    - by Anand Akela
    Oracle Enterprise Manager 12c Release 3 (R3) was announced (Press Release) earlier today. It is now available for download at OTN. This latest release features improvements in several areas, including: improvements to Private Cloud and Engineered Systems management, expanded middleware and application management capabilities, and efficiency gains for Enterprise Manager users in EM’s enterprise-ready framework. You can learn more about what's new in Oracle Enterprise Manager 12c R3 in the Enterprise Manager 12c documentation. You will see more blogs and details about the new features during the next few weeks; please let us know what you think. On July 18th, you can join us at a webcast to hear Thomas Kurian, EVP of Product Development, on what Oracle Engineering has achieved with Oracle Enterprise Manager 12c Release 3 to address these challenges. Later during this webcast, Oracle experts will discuss the latest capabilities in Oracle Enterprise Manager 12c Release 3 for cloud lifecycle and operations management. The presentation will be followed by a live Q&A session with Oracle experts. You can also join us online on Twitter to get your specific questions answered; please use the hash tag #em12c to join the conversation. Register Now for the Webcast! Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Oracle Cloud and Oracle Platinum Services Announcements

    - by kellsey.ruppel
    Live Webcast - Oracle Cloud and Oracle Platinum Services Announcements. Wednesday, June 06, 2012, 1:00 p.m. PT – 2:30 p.m. PT. Please join Larry Ellison and Mark Hurd for important Oracle announcements. Be among the first to learn about new developments in Oracle’s cloud strategy and game-changing advances in Oracle Support. Register Now! Are you based in the San Francisco Bay Area? Register to attend the live event in Redwood Shores. Stay Connected: Join the conversation: #oraclecloud #oraclesupport

    Read the article

  • Oracle as a Service in the Cloud

    - by Jason Williamson
    This should really be a tweet, but I guess I'm writing a bit more. In the theme of data migration and legacy modernization, I am seeing more and more of a push to consolidate data sources, especially from non-Oracle to Oracle, in an effort to save dollars. From a modernization perspective, this fits in quite well. We are able to migrate things like Teradata, Sybase and DB2 and put that all into an Oracle database, and then provide that as an OaaS (Oracle as a Service) cloud offering. This seems to be a growing trend, not unlike the cool Amazon RDS database cloud being built on Oracle as well. We again find ourselves back in the familiar theme of migration, however. The target technology sounds like a winner (COBOL to Java/SOA... DB2/Datacom/Adabas to Oracle) but the age-old problem of how to get there still persists. It is not trivial to migrate large amounts of pre-relational or "devolved" relational data. To do this, we again must fall back on a tight roadmap for migration and leverage the growing set of tools and services that we have. I'm working on a couple of new sections and chapters for a book that Tom and Prakash and I are writing around database migration and consolidation. I'll share some snippets shortly.

    Read the article

  • So You Want To Build a SPARC Cloud

    - by user12601629
    Did you ever wish you could get the industrial-strength power of UNIX/RISC with the flexibility of cloud computing? Well, now you can! With recent advances from Oracle it's possible to build an incredibly high-performance, flexible, available virtualized infrastructure based on Solaris and SPARC. Here's the recipe! Authored in collaboration across the Oracle "Systems Group" team, we now have a complete best-practice guide for you. Click below to download it: Best Practices for Building a Virtualized SPARC Computing Environment. Inside you'll find recommendations for how and when to leverage technologies like: SPARC T4, the OVM for SPARC hypervisor (version 2.2 and newer), Solaris 11, Ops Center 12c, the ZFS Storage Appliance, and Oracle network switches. By following these best practices, you'll be able to construct a dynamic, virtualized infrastructure that allows for easy, GUI-based provisioning of new VMs; automated HA failover in the event of physical server failures; automatic load balancing across a cluster of VM hosts; and complete end-to-end monitoring. You should download this paper and check it out. Even if you aren't planning on buying all new hardware, and instead want to transform some existing gear into a dynamic virtualized environment, this paper will give you concrete info on what to do and the trade-offs you'll make. Have fun getting started on your journey to build a SPARC cloud!
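
    To make the provisioning piece a little more concrete, the kind of guest-domain creation that Ops Center drives through its GUI can also be sketched at the OVM for SPARC (Logical Domains) command line on the control domain; names and sizes below are placeholders, and a virtual disk service and virtual switch are assumed to already exist in the primary domain:

        ldm add-domain ldg1
        ldm add-vcpu 16 ldg1
        ldm add-memory 16G ldg1
        ldm add-vnet vnet0 primary-vsw0 ldg1
        ldm add-vdisk vdisk0 ldg1-vol@primary-vds0 ldg1
        ldm bind-domain ldg1
        ldm start-domain ldg1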

    Read the article

  • Many users, many cpus, no delays. Good for cloud?

    - by Eric
    I wish to set up a CPU-intensive, time-sensitive query service for users on the internet. A usage scenario is described below. Is cloud computing the right way to go for such an implementation? If so, which cloud vendor(s) cater to this type of application? I ask specifically in terms of: 1) pricing; 2) latency resulting from slow CPUs, instance creation, JIT compiles, etc., from internal management and communication of processes inside the cloud (e.g. a queuing process and a calculation process), and from communication between the cloud and the end user; 3) ease of deployment. The usage scenario I am expecting: a typical user sends a query (XML of around 1K) once every 30 seconds on average; each query requires a numerical computation of average time 0.2 sec and max time 1 sec on a 1 GHz Pentium; the computation requires no data other than the query itself and is performed by the same piece of code each time; the delay a user experiences between sending a query and receiving a response should be on average no more than 2 seconds and in general no more than 5 seconds; a background save of the response to a DB should occur (not time-critical); and there can be up to 30000 simultaneous users - i.e., on average 1000 queries a second, each requiring an average 0.2 sec calculation, so that would necessitate around 200 CPUs. Currently I'm looking at GAE Java (for quicker deployment and less IT hassle) and EC2 (speed and price optimization) as options. Where can I learn more about the right way to set up such a system - past threads, different blogs, books, etc.? BTW, if my terminology is wrong or confusing, please let me know. I'd greatly appreciate any help.

    Read the article

  • Can I get a faster output pipe than /dev/null ?

    - by naugtur
    Hi, I am running a huge task [automated translation scripted with perl + database etc.] that will run for about 2 weeks non-stop. While thinking about how to speed it up, I saw that the translator outputs everything (all translated sentences, all info on the way) to STDOUT all the time. This makes it work visibly slower when I get the output on the console. I obviously piped the output to /dev/null, but then I thought "could there be something even faster?" It's so much output that it'd really make a difference. And that's the question I'm asking you, because as far as I know there is nothing faster... (But I'm far from being a guru, having used Linux on a daily basis for only the last 3 years.)
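
    For what it's worth, /dev/null itself is about as cheap as a write target gets - the kernel simply discards the bytes - so the remaining cost is the program formatting and issuing the writes at all. A quick way to see that, and the usual redirection (the script name below is just a placeholder):

        # writes to /dev/null are essentially free compared with a terminal
        time dd if=/dev/zero of=/dev/null bs=1M count=4096
        # discard both stdout and stderr; the bigger win is a quiet flag or removing the prints
        perl translate.pl > /dev/null 2>&1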

    Read the article

  • Steve Ballmer on Cloud Computing – We’re all in

    - by Eric Nelson
    Steve spoke last week (March 4th, 2010) on the possibilities of the Cloud and its importance to Microsoft. Of our 40,000 people building software, 70% of the people at Microsoft today are working on the Cloud – 90% in a year's time. In other words: we're all in. The video is 85 minutes of Steve, and there is an easy way of navigating to some soundbites on PressPass. I also like the new website that simplifies our story and commitment around the cloud, http://www.microsoft.com/cloud/, which includes an easy mapping between well-known product offerings from Microsoft and their Cloud “equivalents”.

    Read the article

  • How Ubuntu cloud version enforces the "no root login" over ssh ?

    - by Maxim Veksler
    Hello, I'm looking to tweak the Ubuntu cloud version's default setup, which denies root login. Attempting to connect to such a machine yields:

        maxim@maxim-desktop:~/workspace/integration/deployengine$ ssh [email protected]
        The authenticity of host 'ec2-204-236-252-95.compute-1.amazonaws.com (204.236.252.95)' can't be established.
        RSA key fingerprint is 3f:96:f4:b3:b9:4b:4f:21:5f:00:38:2a:bb:41:19:1a.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added 'ec2-204-236-252-95.compute-1.amazonaws.com' (RSA) to the list of known hosts.
        Please login as the ubuntu user rather than root user.
        Connection to ec2-204-236-252-95.compute-1.amazonaws.com closed.

    I would like to know where this is set up and how I can change the printed message. Thank you, Maxim.
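
    On the Ubuntu cloud images this message doesn't come from sshd itself: cloud-init's disable_root handling plants a forced command in root's authorized_keys that prints the text and closes the connection. A sketch of where to look, with paths as on the stock images (adjust if yours differs):

        sudo grep 'command=' /root/.ssh/authorized_keys   # the forced command that echoes the message
        sudo grep -i disable_root /etc/cloud/cloud.cfg     # the cloud-init option that installs it

    Editing the echoed string changes the message, and removing the command="..." prefix (or setting disable_root to false before instances are built) allows direct root logins again - though keeping root locked down and using the ubuntu user with sudo is generally the safer default.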

    Read the article

  • Way to auto resize photos before uploaded to cloud service?

    - by AndroidHustle
    I love using auto-syncing services to have the photos taken with my smartphone stored on a cloud storage service. One problem, though, is that the photos are uploaded in high resolution and take up a lot of space on the drive. I wonder if anyone knows of a service or strategy to have the auto-uploaded photos resized so they occupy less space when they are stored. That is, without me having to take the photos at a lower quality - I still want to shoot at the highest quality, since I may take a photo I really like.
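
    If nothing off-the-shelf fits, one low-tech strategy - a sketch using ImageMagick and inotify-tools, with made-up folder names, and assuming the sync runs through a desktop or home server rather than straight from the phone - is to keep the full-quality originals where the camera import drops them and let a small watcher copy downscaled versions into the folder the cloud client actually syncs:

        #!/bin/sh
        # watch the camera import folder and drop resized copies into the synced folder
        inotifywait -m -e close_write --format '%w%f' ~/Pictures/camera | while read -r photo; do
            convert "$photo" -resize '1600x1600>' -quality 85 ~/CloudSync/photos/"$(basename "$photo")"
        done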

    Read the article

  • How do I make a Windows virtual machine replicate to another datacenter/cloud?

    - by zippy
    We have a Windows 2008 VM running IIS and SQL Server Express (it's an all-in-one web application). We need to have another copy at our secondary datacenter site. What is the best way to do this? It doesn't have to be running all the time, but it has to hold an almost up-to-date copy of the current VM. I took a look at VMware Fault Tolerance and, after the heart attack at the price, I started looking for another solution. If need be I wouldn't mind copying it over to a cloud VM provider, if I can find one that lets me upload my own VMs and start them up without any conversion process.
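
    If true real-time replication turns out to be overkill, one rough low-budget pattern - a sketch with placeholder hostnames and paths, assuming ovftool and rsync are available and a brief export window (or exporting a cloned copy) is acceptable - is a scheduled export-and-copy so the secondary site always holds a recent, importable image:

        # export the (powered-off or cloned) VM to an OVA package
        ovftool "vi://administrator@vcenter.example.com/DC1/vm/webapp01" /exports/webapp01.ova
        # ship it to the secondary datacenter
        rsync -avP /exports/webapp01.ova backup@dr-site.example.com:/vm-imports/
        # at the DR site the OVA can be deployed on demand with ovftool or the vSphere client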

    Read the article

  • IP not detected in terremark enterprise cloud server - how to install VMware on instance?

    - by JohnMerlino
    Using Terremark Enterprise Cloud, when you create a server you assign an IP address to it, and that IP is visible under Detected IP when you select the server. However, I created a server with an IP address, created an internet service, and connected it to a node. I used protocol TCP and mapped it to port 3001. But I notice that when I select my server, the IP address doesn't display under Detected IP; and when I VPN Connect, launch a terminal and try to SSH to my server with that IP, I get connection timed out. I presume the reason is that the IP address is not being detected. Someone suggested that my VMware Tools is out of date, and in fact the server instance does say "out of date" for VMware Tools. I'm not sure how to mount the tools on the instance and install them. I am using Mac OS X. Someone said that it will only work on a PC running IE.
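
    On the VMware Tools side, the "Detected IP" column in vCloud-style portals is typically populated from what the Tools report from inside the guest, so an out-of-date or stopped Tools install would explain the blank field. The usual Linux procedure, once console (not SSH) access is available and "Install VMware Tools" has been chosen for the instance, is roughly the following; paths vary by distribution, and a packaged open-vm-tools is often the easier route where available:

        sudo mount /dev/cdrom /mnt
        tar xzf /mnt/VMwareTools-*.tar.gz -C /tmp
        sudo /tmp/vmware-tools-distrib/vmware-install.pl --default
        # or, on distributions that package it:
        sudo apt-get install open-vm-tools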

    Read the article

  • Cloud storage services offering one-time download links? [closed]

    - by TARehman
    Is anyone aware of consumer-targeted cloud storage services that allow users to generate a one-time download link for hosted files? Case in point: I have an encrypted container with some documents I need to send to a vendor. I would prefer to give them a one-time download link, so that I know when they have accessed the file, and then inform them of the passphrase by phone. I have heard that MediaFire offers 1-time links, but that they are buried in tons of advertising. At the moment, I'm not sure that I consider MediaFire fully legitimate; I'm more interested in solutions with Google Drive, Box.net, DropBox, etc.

    Read the article
