Search Results

Search found 3000 results on 120 pages for 'computing capacity'.


  • After force-installing a 32-bit deb failed, how can I install the 64-bit version?

    - by TryTryAgain
    I tried to dpkg -i --force-architecture google-earth-stable_i386.deb and it failed. But now when I try to install the amd64.deb it fails saying dpkg: error processing google-earth-stable_current_amd64.deb (--install): google-earth-stable: 6.2.2.6613-r0 (Multi-Arch: no) is not co-installable with google-earth-stable:i386 6.2.2.6613-r0 (Multi-Arch: no) which is currently installed Errors were encountered while processing: google-earth-stable_current_amd64.deb somehow it thinks the i386 version is installed. No google-earth files or directories even exist. sudo dpkg --configure -a outputs: dpkg: dependency problems prevent configuration of google-earth-stable:i386: google-earth-stable:i386 depends on lsb-core (= 3.2). dpkg: error processing google-earth-stable:i386 (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: google-earth-stable:i386 so it does exist in some capacity. sudo apt-get -f install does nothing out of the ordinary: Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. The weird thing is that synaptic doesn't show any google earth package available let alone installed, nothing under the broken filter either. I have also tried sudo apt-get autoremove and sudo apt-get autoclean So, my question: How can I get rid of this issue?
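    A hedged way to clear a half-registered package like this (assuming the leftover really is just a broken dpkg database entry, which is what the --configure output suggests) is to force-remove the i386 entry and then install the amd64 package:
      sudo dpkg --remove --force-remove-reinstreq --force-depends google-earth-stable:i386
      sudo dpkg -i google-earth-stable_current_amd64.deb
      sudo apt-get -f install    # let apt pull in lsb-core and any other missing dependencies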

    Read the article

  • Announcing: Great Improvements to Windows Azure Web Sites

    - by ScottGu
    I’m excited to announce some great improvements to the Windows Azure Web Sites capability we first introduced earlier this summer.  Today’s improvements include: a new low-cost shared mode scaling option, support for custom domains with shared and reserved mode web-sites using both CNAME and A-Records (the later enabling naked domains), continuous deployment support using both CodePlex and GitHub, and FastCGI extensibility.  All of these improvements are now live in production and available to start using immediately. New “Shared” Scaling Tier Windows Azure allows you to deploy and host up to 10 web-sites in a free, shared/multi-tenant hosting environment. You can start out developing and testing web sites at no cost using this free shared mode, and it supports the ability to run web sites that serve up to 165MB/day of content (5GB/month).  All of the capabilities we introduced in June with this free tier remain the same with today’s update. Starting with today’s release, you can now elastically scale up your web-site beyond this capability using a new low-cost “shared” option (which we are introducing today) as well as using a “reserved instance” option (which we’ve supported since June).  Scaling to either of these modes is easy.  Simply click on the “scale” tab of your web-site within the Windows Azure Portal, choose the scaling option you want to use with it, and then click the “save” button.  Changes take only seconds to apply and do not require any code to be changed, nor the app to be redeployed: Below are some more details on the new “shared” option, as well as the existing “reserved” option: Shared Mode With today’s release we are introducing a new low-cost “shared” scaling mode for Windows Azure Web Sites.  A web-site running in shared mode is deployed in a shared/multi-tenant hosting environment.  Unlike the free tier, though, a web-site in shared mode has no quotas/upper-limit around the amount of bandwidth it can serve.  The first 5 GB/month of bandwidth you serve with a shared web-site is free, and then you pay the standard “pay as you go” Windows Azure outbound bandwidth rate for outbound bandwidth above 5 GB. A web-site running in shared mode also now supports the ability to map multiple custom DNS domain names, using both CNAMEs and A-records, to it.  The new A-record support we are introducing with today’s release provides the ability for you to support “naked domains” with your web-sites (e.g. http://microsoft.com in addition to http://www.microsoft.com).  We will also in the future enable SNI based SSL as a built-in feature with shared mode web-sites (this functionality isn’t supported with today’s release – but will be coming later this year to both the shared and reserved tiers). You pay for a shared mode web-site using the standard “pay as you go” model that we support with other features of Windows Azure (meaning no up-front costs, and you pay only for the hours that the feature is enabled).  A web-site running in shared mode costs only 1.3 cents/hr during the preview (so on average $9.36/month). Reserved Instance Mode In addition to running sites in shared mode, we also support scaling them to run within a reserved instance mode.  When running in reserved instance mode your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other customers run within it).  You can run any number of web-sites within a VM, and there are no quotas on CPU or memory limits. 
You can run your sites using either a single reserved instance VM, or scale up to have multiple instances of them (e.g. 2 medium sized VMs, etc).  Scaling up or down is easy – just select the “reserved” instance VM within the “scale” tab of the Windows Azure Portal, choose the VM size you want, the number of instances of it you want to run, and then click save.  Changes take effect in seconds: Unlike shared mode, there is no per-site cost when running in reserved mode.  Instead you pay only for the reserved instance VMs you use – and you can run any number of web-sites you want within them at no extra cost (e.g. you could run a single site within a reserved instance VM or 100 web-sites within it for the same cost).  Reserved instance VMs start at 8 cents/hr for a small reserved VM.  Elastic Scale-up/down Windows Azure Web Sites allows you to scale-up or down your capacity within seconds.  This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need to – without you having to change any code or redeploy your application. If your site traffic starts to drop off, you can scale back down the number of reserved instances you are using, or scale down to the shared mode tier – all within seconds and without having to change code, redeploy, or adjust DNS mappings.  You can also use the “Dashboard” view within the Windows Azure Portal to easily monitor your site’s load in real-time (it shows not only requests/sec and bandwidth but also stats like CPU and memory usage). Because of Windows Azure’s “pay as you go” pricing model, you only pay for the compute capacity you use in a given hour.  So if your site is running most of the month in shared mode (at 1.3 cents/hr), but there is a weekend when it gets really popular and you decide to scale it up into reserved mode to have it run in your own dedicated VM (at 8 cents/hr), you only have to pay the additional pennies/hr for the hours it is running in the reserved mode.  There is no upfront cost you need to pay to enable this, and once you scale back down to shared mode you return to the 1.3 cents/hr rate.  This makes it super flexible and cost effective. Improved Custom Domain Support Web sites running in either “shared” or “reserved” mode support the ability to associate custom host names to them (e.g. www.mysitename.com).  You can associate multiple custom domains to each Windows Azure Web Site.  With today’s release we are introducing support for A-Records (a big ask by many users). With the A-Record support, you can now associate ‘naked’ domains to your Windows Azure Web Sites – meaning instead of having to use www.mysitename.com you can instead just have mysitename.com (with no sub-name prefix).  Because you can map multiple domains to a single site, you can optionally enable both a www and naked domain for a site (and then use a URL rewrite rule/redirect to avoid SEO problems). We’ve also enhanced the UI for managing custom domains within the Windows Azure Portal as part of today’s release.  Clicking the “Manage Domains” button in the tray at the bottom of the portal now brings up custom UI that makes it easy to manage/configure them: As part of this update we’ve also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains to Windows Azure Web Sites with no downtime. 
Continuous Deployment Support with Git and CodePlex or GitHub One of the more popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems like TFS and Git.  This provides a really powerful way to manage your application deployments using source control.  It is really easy to enable this from a website’s dashboard page: The TFS option we shipped earlier this summer provides a very rich continuous deployment solution that enables you to automate builds and run unit tests every time you check in your web-site, and then if they are successful automatically publish to Azure. With today’s release we are expanding our Git support to also enable continuous deployment scenarios and integrate with projects hosted on CodePlex and GitHub.  This support is enabled with all web-sites (including those using the “free” scaling mode). Starting today, when you choose the “Set up Git publishing” link on a website’s “Dashboard” page you’ll see two additional options show up when Git based publishing is enabled for the web-site: You can click on either the “Deploy from my CodePlex project” link or “Deploy from my GitHub project” link to walkthrough a simple workflow to configure a connection between your website and a source repository you host on CodePlex or GitHub.  Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a checkin occurs.  This will then cause Windows Azure to pull the source and compile/deploy the new version of your app automatically.  The below two videos walkthrough how easy this is to enable this workflow and deploy both an initial app and then make a change to it: Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes) Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes) This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git: Note: today’s release supports establishing connections with public GitHub/CodePlex repositories.  Support for private repositories will be enabled in a few weeks. Support for multiple branches Previously, we only supported deploying from the git ‘master’ branch.  Often, though, developers want to deploy from alternate branches (e.g. a staging or future branch). This is now a supported scenario – both with standalone git based projects, as well as ones linked to CodePlex or GitHub.  This enables a variety of useful scenarios.  For example, you can now have two web-sites - a “live” and “staging” version – both linked to the same repository on CodePlex or GitHub.  You can configure one of the web-sites to always pull whatever is in the master branch, and the other to pull what is in the staging branch.  This enables a really clean way to enable final testing of your site before it goes live. This 1 minute video demonstrates how to configure which branch to use with a web-site. Summary The above features are all now live in production and available to use immediately.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it. 
We’ll have even more new features and enhancements coming in the weeks ahead – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month).  Keep an eye out on my blog for details as these new features become available. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
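    As a rough sketch of the Git publishing flow described above (the remote URL is a placeholder for whatever the portal's "Set up Git publishing" page generates for your site), publishing from a local repository boils down to:
      git init
      git add . && git commit -m "initial deployment"
      # the portal shows the real remote URL for your web-site once Git publishing is enabled
      git remote add azure https://username@yoursite.scm.azurewebsites.net/yoursite.git
      git push azure master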

    Read the article

  • Growing mobile developers inside a web development org

    - by Arkaaito
    I work for a "mature web startup" as a web developer (mainly using PHP). Our main site has about 8 million registered members at the moment. However, the site is basically impossible to use on anything that's not a real computer. One of our most-requested features, if not the most requested, is a mobile app or mobile version of the site. I think we need to do it. Management thinks we need to do it. In fact, everyone in the company thinks we need to do it. But it's nigh impossible to hire someone with iPhone/Android skills in the present market. I'm the only person at the company with any level of mobile development experience currently, and I'm not that good (yet), so I'm seeking comments on how to bootstrap a capacity for mobile development. Anything from general tips (should I focus on developing my personal skills first or try to pick up a more experienced mobile dev?) to specific recommendations on training, etc., may be helpful, as long as it doesn't reduce to "sucks to be you." :-)

    Read the article

  • Learning to be a good developer: what parts can you skip over?

    - by Andrew M
    I have set myself the goal of becoming a decent developer by this time next year. By this I mean full experience of the development 'lifecycle,' a few good apps/sites/webapps under my belt, and most importantly being able to work at a steady pace without getting sidelined for hours by some should-know-this-already technique. I'm not starting from scratch. I've written a lot of html/css, SQL, javascript, python and VB.net, and studied other languages like C and Java. I know about things like OOP, design patterns, TDD, complexity, computational linguistics, pointers/references, functional programming, and other academic/theoretical matters. It's just I can't say I've really done these things yet. So I want to get up to speed, and I want to know what things I can leave till a later date. For instance, studying algorithms and the maths behind them is interesting and all, but so far I've hardly needed to write anything but the most basic nested loops. Investigating Assembly to have a clearer picture of low-level operations would be cool... but I imagine rarely infringes on daily work. On the other hand, looking at a functional programming language might help me write programs that are more comprehensible and less prone to hidden failures (at the moment I'm finding the biggest difficulty is when the complexity of the app exceeds my capacity to understand it - for instance passing data around was fine... until I had to start doing it with AJAX, which was a painful step up). I could spend time working through case studies of design patterns, but I'm not sure how many of them get used in 'real life.' I'm a programmer with basic abilities - what skills should I focus on developing? (also my Unix skills are very weak, and also knowledge of Windows configuration... not sure how much time I should spend on that)

    Read the article

  • Gnome-Network-Manager Config File Migration

    - by Jorge
    I think I have an issue with gnome-network-manager, I used to have a lot of connections configured, Wireless, Wired and VPN. After upgrading to 12.04 (from 11.10) I lost every configuration. I realized that the configs that used to be saved in $HOME/.gconf/system/networking/connections now are being saved in /etc/NetworkManager/system-connections/. I don't know how to migrate my settings to the new config file format Can anybody help me? jorge@thinky:~$ sudo lshw -C network *-network description: Ethernet interface product: 82566MM Gigabit Network Connection vendor: Intel Corporation physical id: 19 bus info: pci@0000:00:19.0 logical name: eth0 version: 03 serial: 00:1f:e2:14:5a:9b capacity: 1Gbit/s width: 32 bits clock: 33MHz capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=1.5.1-k firmware=0.3-0 latency=0 link=no multicast=yes port=twisted pair resources: irq:46 memory:fe000000-fe01ffff memory:fe025000-fe025fff ioport:1840(size=32) *-network description: Wireless interface product: PRO/Wireless 4965 AG or AGN [Kedron] Network Connection vendor: Intel Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 61 serial: 00:21:5c:32:c2:e5 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwl4965 driverversion=3.2.0-23-generic-pae firmware=228.61.2.24 ip=192.168.2.103 latency=0 link=yes multicast=yes wireless=IEEE 802.11abgn resources: irq:47 memory:df3fe000-df3fffff jorge@thinky:~$ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 12.04 LTS Release: 12.04 Codename: precise jorge@thinky:~$ uname -a Linux thinky 3.2.0-23-generic-pae #36-Ubuntu SMP Tue Apr 10 22:19:09 UTC 2012 i686 i686 i386 GNU/Linux jorge@thinky:~$ dpkg -l | grep -i firm ii linux-firmware 1.79 Firmware for Linux kernel drivers
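    For reference, the per-connection files under /etc/NetworkManager/system-connections/ use a simple keyfile format; a minimal sketch for a WPA wireless connection looks roughly like this (the SSID and passphrase are placeholders, the file must be owned by root with mode 600, and a uuid= line is normally present as well):
      [connection]
      id=MyHomeWifi
      type=802-11-wireless

      [802-11-wireless]
      ssid=MyHomeWifi
      security=802-11-wireless-security

      [802-11-wireless-security]
      key-mgmt=wpa-psk
      psk=your-passphrase-here

      [ipv4]
      method=auto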

    Read the article

  • Installation Problems & Antivirus

    - by Sagar Walvekar
    We have just migrated our systems from Windows to Ubuntu. We have more than 50 systems previously running Windows XP & Windows 7. After the release of 12.04 we decided to switch to Ubuntu. While installing Ubuntu 12.04 on Pentium 4 systems we have faced some issues. We tried it with a USB flash drive & a CD. The typical configuration of the systems is as follows: Pentium 4 with HT & 512 MB RAM, 40/80 GB IDE/SATA HDD. In Windows there were several partitions and data on all the drives in all the systems. We addressed this by backing up the C: drive onto the other drives, and while installing Ubuntu we deleted the C: drive and recreated Linux partitions as follows - /root, /boot, /home & swap (in 10 GB). The problem is that Ubuntu installs without any error. However, when I restart the system after the installation it hangs while detecting the HDD in the BIOS, and it fails to start until I remove the HDD. When I installed the previous Ubuntu version it worked fine, without any hassle. Also, if we install Ubuntu 12.04 on the same HDD in any other higher-capacity system and reconnect it to the old system, it works fine. How can I fix this problem? Also, is there any antivirus which will give me real-time protection on Ubuntu 11 and higher versions, with both 32 & 64 bits?
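    On the antivirus question, the commonly suggested option on Ubuntu is ClamAV; note that it is primarily an on-demand scanner rather than true real-time protection, so treat this as a sketch of the usual starting point rather than a full answer:
      sudo apt-get install clamav          # pulls in freshclam for definition updates
      sudo freshclam                       # update the virus definitions
      clamscan -r --infected /home         # recursively scan home directories, listing only infected files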

    Read the article

  • Nokia Lumia Windows Phones Coming To India On Nov 14

    - by Gopinath
    Nokia released its first set of smartphones powered by Microsoft’s Windows Phone OS, the Lumia 800 & Lumia 710, a few weeks ago in Europe. India being one of its favourite markets, Nokia is all set to launch the Lumia on November 14 in New Delhi. Unlike Apple, which releases iPhones in India very late, Nokia is planning to bring its flagship smartphones very early to the Indian market. This is a good move by Nokia to keep its existing market share, which is continuously challenged by Android smartphones from various manufacturers. The Nokia Lumia 710 runs on version 7.5 of the Windows Phone OS (nicknamed Mango) with 512 MB RAM, 8 GB of internal storage, a 3.7″ WVGA TFT display, WiFi, GPS, and a 5 MP camera. Pricing details are not yet available, but to be competitive in the market it should be priced somewhere between 25,000 and 30,000. Let’s wait two more days and we will get the full details after the press release on Nov 14th. source: nirmaltv This article, titled Nokia Lumia Windows Phones Coming To India On Nov 14, was originally published at Tech Dreams.

    Read the article

  • connecting to freenx server : configuration error

    - by Sandeep
    I am not able to pin point what is missing. I have configured freenx-server (useradd, passwd etc). However, server drops the connection after authentication. Please note my server is ubuntu 10.04 and client recent version. Below is the error log NX> 203 NXSSH running with pid: 6016 NX> 285 Enabling check on switch command NX> 285 Enabling skip of SSH config files NX> 285 Setting the preferred NX options NX> 200 Connected to address: 192.168.2.2 on port: 22 NX> 202 Authenticating user: nx NX> 208 Using auth method: publickey HELLO NXSERVER - Version 3.2.0-74-SVN OS (GPL, using backend: 3.5.0) NX> 105 hello NXCLIENT - Version 3.2.0 NX> 134 Accepted protocol: 3.2.0 NX> 105 SET SHELL_MODE SHELL NX> 105 SET AUTH_MODE PASSWORD NX> 105 login NX> 101 User: sandeep NX> 102 Password: NX> 103 Welcome to: ubuntu user: sandeep NX> 105 listsession --user="sandeep" --status="suspended,running" --geometry="1366x768x24+render" --type="unix-gnome" NX> 127 Sessions list of user 'sandeep' for reconnect: Display Type Session ID Options Depth Screen Status Session Name ------- ---------------- -------------------------------- -------- ----- -------------- ----------- ------------------------------ NX> 148 Server capacity: not reached for user: sandeep NX> 105 startsession --link="lan" --backingstore="1" --encryption="1" --cache="16M" --images="64M" --shmem="1" --shpix="1" --strict="0" --composite="1" --media="0" --session="t" --type="unix-gnome" --geometry="1366x768" --client="linux" --keyboard="pc105/us" --screeninfo="1366x768x24+render" ssh_exchange_identification: Connection closed by remote host NX> 280 Exiting on signal: 15
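    The ssh_exchange_identification error at the end is the SSH layer on the server closing the new connection, so a hedged first diagnostic (assuming a stock Ubuntu 10.04 sshd) is to watch the auth log while reproducing the failure and to rule out TCP-wrapper blocking:
      sudo tail -f /var/log/auth.log            # watch sshd messages while the NX client connects
      cat /etc/hosts.deny /etc/hosts.allow      # make sure sshd is not being blocked for localhost
      grep -i maxstartups /etc/ssh/sshd_config  # a low MaxStartups can also drop new connections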

    Read the article

  • Missing Wireless access point

    - by nvcnvn
    I have an: Ubuntu 12.04 with proprietary Boardcom STA Wireless driver installed BCM43224 802.11a/b/g/n wireless card TP-LINK TD-W8101G 54Mbps Wireless ADSL2+ Modem Router Yesterday before I went to sleep, I still connect to my wireless access point, this morning when I start my laptop I don't see it in the list - there are some neighbors listed but not mine. The WLAN is green, with my old Nokia E72, I still see my access point with 100% signal. I have tried to restart my laptop and turn the firmware off/on by the switch but no help. I have read the WirelessTroubleShootingGuide but I can do anything. Here is some of my card info: *-network description: Wireless interface product: BCM43224 802.11a/b/g/n vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:04:00.0 logical name: eth1 version: 01 serial: c0:cb:38:06:5d:53 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11abgn resources: irq:17 memory:f0500000-f0503fff *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:09:00.0 logical name: eth0 version: 03 serial: f0:4d:a2:42:ab:e4 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168d-1.fw ip=192.168.1.3 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:42 ioport:5000(size=256) memory:f0404000-f0404fff memory:f0400000-f0403fff memory:f0420000-f043ffff
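    A reasonable first step for a vanished access point is to confirm whether the card can still see any networks at all and that nothing is blocked (eth1 is the wireless interface in the lshw output above):
      rfkill list                          # check for soft or hard blocks on the Broadcom card
      sudo iwlist eth1 scan | grep ESSID   # list every access point the card can currently see
      iwconfig eth1                        # confirm the interface is up and in managed mode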

    Read the article

  • ArchBeat Link-o-Rama for 11/30/2011

    - by Bob Rhubart
    Coding - the new Latin | @BBCRoryCJ BBC Technology Correspondent Rory Cellan-Jones reports on why "the campaign to boost the teaching of computer skills - particularly coding - in schools is gathering force." BPM Business Value Patterns | SOA Partner Community Blog Juergen Kress shares the presentation he and Matthias Ziegler from Accenture delivered at the SOA & BPM Integration Days event in Germany in October. Coherence 3.7.1 Resources Busy blogger Juergen Kress shares links to screencasts and other resources for those interested in Oracle Coherence 3.7.1. OBIEE 11.1.1 - Introduction to OBIEE 11g Full Sample App "The OBIEE 11g Full Sample App (FSA) is a comprehensive collection of examples designed to demonstrate the latest Oracle BIEE 11g capabilities and design best practices." Solaris 11 Customer Maintenance Lifecycle | Gerry Haskins Gerry Haskins launches a new blog devoted to Solaris "policies, best practices, clarifications, and lots of other stuff." Harnessing Business Events for Predictive Decision Making - part 1 / 3 | Sanjeev Sharma "Data growth is outpacing storage capacity by a factor of two and computing power is still very much bounded by Moore's Law, doubling only every 18 months," says Sanjeev Sharma. The Latest Research from the SEI | Douglas C. Schmidt Schmidt shares information on several recently published Software Engineering Institute (SEI) technical reports that "highlight the latest work of SEI technologists in Agile methods, insider threat, the SMART Grid Maturity Model, acquisition, and CMMI." Tiger/Line Shape Files and Oracle | Bradley D. Brown "Have you ever needed to load an ESRI "shape file" and wondered if that's an easy effort or a difficult effort? I know I have and I assumed that it was a pretty difficult effort. However, I learned today that's actually pretty easy!" -- Oracle ACE Director Bradley Brown of TUSC. Webcast: Enterprise Clouds with Oracle VM Tuesday, December 6, 2011, 9:00 am PT / Noon ET. Featuring Adam Hawley (Senior Director of Product Management, Oracle) and Dan Herrup (Principal Systems Engineer, Oracle Corporate Citizenship). SOA Made Simple; Architects in AZ; Cloud Migration Introduction This week on the Architect Home Page on OTN.

    Read the article

  • Ask the Readers: How Do You Deal with Bacn?

    - by Jason Fitzpatrick
    Most people get their fair share of email they want, email they don’t want at all (Spam), and a healthy dose of Bacn–email they want but not right now. How do you deal with your daily dose of Bacn? While Spam is unsolicited garbage you don’t ever want, Bacn is email content you’ve actively selected to receive (weather updates, coupons from your favorite retailers, web site digests, etc.) that isn’t as important as email from friends and coworkers. It’s email that you want but not right now. This week we want to hear all about your methods for wrangling Bacn so you can enjoy it when you’re in the mood but it doesn’t clutter up your inbox when you aren’t. Sound off in the comments with your Bacn handling tips and then check back in on Friday for the What You Said roundup to see how your fellow readers handle things.

    Read the article

  • Play Updated Retro Arcade Games for Free Courtesy of Microsoft

    - by Jason Fitzpatrick
    As part of a promotion for Internet Explorer 10, Microsoft commissioned Atari to update several classic retro games from the arcade and the Atari consoles. Fortunately for those of us looking for a retro gaming fix, you can play the games in any HTML5-enabled browser. While game play is really smooth on most games, there are a few little quirks that using Internet Explorer 10 does take care of. First, if you’re not on IE10, you’ll see a little advertisement before you begin playing each game. Second, some games call on some of the new touch/motion variable functionality built into IE10 for the Windows 8 tablet experience, and they’ll stall out when they reach the point in the game where they need to call that variable. That said, we played quite a few games without any hiccups at all. Hit up the link below to play classics like Asteroids, Lunar Lander, Pong, Super Breakout, and more. Atari IE10 Promo Gallery [Atari]

    Read the article

  • Beyond S&OP: Integrated Business Planning

    - by Paul Homchick
    In most corporations, planning is done at the department level — leaving disconnects and gaps across different departments. Finance sets revenue and profit goals with minimum validation from Manufacturing that the company has the resources, material, capacity, or demand to reach these goals. On the operations side, Manufacturing is developing plans to balance demand and supply but seldom knows if the resulting "plan" will meet the budgets on which the company's revenue and profit goals are based. The Sales department agrees to quotas that meet Finance's revenue goals without a complete understanding of what manufacturing can deliver. Integrated Business Planning (IBP) bridges these gaps in corporate planning systems. Integrated Business Planning integrates the financial planning provided by EPM systems with operations planning provided by Sales and Operations Planning solutions. This means that revenue goals and budgets are validated against a bottom-up operating plan, and that the operating plan is reconciled against financial goals. When detailed changes are made to the operations plan, planners can immediately see the big picture impact of the changes. IBP also addresses one the CFO's big concerns—the reliability of the revenue forecast. Operating plans are updated daily or weekly from a precise forecast based on current market conditions. These updated plans are then made available so that financial analysts are working with data that best represents what is going to happen - not what they projected would happen based on last quarter's data. For a discussion in more depth, see my article: Improve Reliability of Financial Forecasts with Integrated Business Planning in Supply & Demand Chain-Executive Magazine.

    Read the article

  • my LaCie 500 Gb not mounted on 11.10

    - by pooo
    My external USB drive was recognized with 10.x versions of Ubuntu but since 11.x I am getting stuck, I had tried everything I read in forums but still the same error: 4956.401052] usb 2-1.4: new high speed USB device number 14 using ehci_hcd [ 4956.539216] scsi14 : uas [ 4956.740955] scsi 14:0:0:0: Direct-Access LaCie Rugged FW USB3 1081 PQ: 0 ANSI: 4 [ 4963.256055] scsi 14:0:0:0: uas_eh_abort_handler tag 0 [ 4963.256076] scsi 14:0:0:0: uas_eh_device_reset_handler tag 0 [ 4963.256085] scsi 14:0:0:0: uas_eh_target_reset_handler tag 0 [ 4963.256091] scsi 14:0:0:0: uas_eh_bus_reset_handler tag 0 [ 4963.328122] usb 2-1.4: reset high speed USB device number 14 using ehci_hcd [ 4963.468743] scsi 14:0:0:0: Device offlined - not ready after error recovery [ 4963.468813] scsi 14:0:0:0: rejecting I/O to offline device [ 4963.468831] scsi 14:0:0:0: rejecting I/O to offline device [ 4963.469204] scsi 14:0:0:1: uas_sense_old: urb length 26 disagrees with IU sense data length 510, using 18 bytes of sense data [ 4963.512104] sd 14:0:0:0: Attached scsi generic sg3 type 0 [ 4994.253779] sd 14:0:0:0: uas_eh_abort_handler tag 0 [ 4994.253802] sd 14:0:0:0: uas_eh_device_reset_handler tag 0 [ 4994.253809] sd 14:0:0:0: uas_eh_target_reset_handler tag 0 [ 4994.253815] sd 14:0:0:0: uas_eh_bus_reset_handler tag 0 [ 4994.325880] usb 2-1.4: reset high speed USB device number 14 using ehci_hcd [ 4994.466488] sd 14:0:0:0: Device offlined - not ready after error recovery [ 4994.466555] sd 14:0:0:0: rejecting I/O to offline device [ 4994.466573] sd 14:0:0:0: rejecting I/O to offline device [ 4994.466582] sd 14:0:0:0: rejecting I/O to offline device [ 4994.466588] sd 14:0:0:0: [sdc] READ CAPACITY failed [ 4994.466593] sd 14:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 4994.466600] sd 14:0:0:0: [sdc] Sense not available. [ 4994.466608] sd 14:0:0:0: rejecting I/O to offline device [ 4994.466616] sd 14:0:0:0: [sdc] Write Protect is off [ 4994.466622] sd 14:0:0:0: [sdc] Mode Sense: 00 00 00 00 [ 4994.466629] sd 14:0:0:0: rejecting I/O to offline device [ 4994.466635] sd 14:0:0:0: [sdc] Asking for cache data failed [ 4994.466640] sd 14:0:0:0: [sdc] Assuming drive cache: write through [ 4994.467003] sd 14:0:0:0: [sdc] Attached SCSI disk if I am trying on an old ubuntu, the drive is mounted,
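    The repeated uas_eh_* resets in that log point at the newer uas driver rather than the drive itself; a commonly reported workaround (an assumption here, not a verified fix for this particular enclosure) is to blacklist uas so the kernel falls back to the plain usb-storage driver:
      echo "blacklist uas" | sudo tee /etc/modprobe.d/blacklist-uas.conf
      sudo modprobe -r uas        # unload it now if nothing is holding it
      # unplug and replug the drive so it is re-detected through usb-storage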

    Read the article

  • How to partition Seagate FreeAgent GoFlex 2TB hard disk?

    - by balki
    Hi, I bought a new Seagate 2TB external hard disk. I opened the drive's application in my virtual Windows machine and did the product registration using the application present on it. I have a few questions on how best to use it. The drive by default has some files and folders - setup.exe, System Volume Information, USB 3.0 PC Card Adapter, etc. I copied all the files to my laptop. Is it safe to delete these files? It has a dashboard for Windows which allows you to tune power options, test the drive, etc. Will I be able to use the dashboard if I put back all these files and mount it on Windows again? I want to partition and format the hard disk. The data I'd like to store is: around 10 to 20 GB files - VirtualBox images; around 4 GB files - DVD images; other movies and personal files. What is the best filesystem for storing very large files like these 10 to 20 GB files, so that they are written and accessed fast and the drive's capacity is best used? If I leave one of the partitions as NTFS and format the others with different filesystems, will it still mount on Windows, and will I be able to use the device's dashboard? Note: I don't need any encryption for my data. Any other advice on using the hard disk is also welcome.
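    As one hedged way to lay this out from Ubuntu (the device name /dev/sdb and the 50/50 split are placeholders - check the real device with sudo fdisk -l first): keeping one NTFS partition preserves Windows access and, most likely, the Seagate dashboard, while ext4 handles 10-20 GB VirtualBox and DVD images comfortably:
      sudo parted /dev/sdb mklabel msdos
      sudo parted /dev/sdb mkpart primary ntfs 1MiB 50%
      sudo parted /dev/sdb mkpart primary ext4 50% 100%
      sudo mkfs.ntfs -Q -L shared /dev/sdb1    # quick-format the Windows-visible partition
      sudo mkfs.ext4 -L data /dev/sdb2         # Linux partition for the large images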

    Read the article

  • What stands out on a Juniors CV

    - by DeanMc
    Hi all, I am looking for advice, hopefully from people more experienced than me. I am going to start applying for junior positions in .NET development in the summer. At the moment the only thing on my CV is a project I have done for Windows Phone 7 called Sprout SMS. It allows the user to send webtexts as provided by Irish operators. The app is doing well and is in the top ten in my local marketplace (Ireland). By trade I am a salesman and that is the extent of most of my employment. I haven't been to college, and due to financial commitments I would not be able to go down the road of full-time education. I have kept up to date with various .NET-related tech in a junior capacity and am now looking to change careers. What I am looking to learn is what stands out on a junior's CV. My lack of education counts against me, so I am looking to offset this with practical experience. I am open to all suggestions, and from the end of this month I am free to pursue "notches" which will make my CV stand out. So, in short, if you were hiring a junior, what would you like to see that would make you take a second look or request an interview? NOTE: I do fear this is a subjective subject; rather than debate what the best items to have on a junior's CV are, I would like to concentrate on what info you would give to a junior who is looking to apply for a job this year. Thanks to all who respond.

    Read the article

  • MightyMintyBoost Is a 3-in-1 Gadget Charger

    - by ETC
    If you’re looking for a versatile battery booster, this DIY 3-in-1 solar/USB/wall current charger known as the MightyMintyBoost will top off your phone, mp3 player, and other gadgets with ease. Instructables user Honus didn’t just build the MightyMintyBoost to geek out and show off his electronics project skills (although it’s certainly a nifty little project to do so), he’s serious about solar power and the impact clean energy has: Apple has sold over 30 million iPod Touch/iPhone units - imagine charging all of them via solar power…. If every iPhone/iPod Touch sold was fully charged every day (averaging the battery capacity) via solar power instead of fossil fuel power we would save approximately 50.644 GWh of energy, roughly equivalent to 75,965,625 lbs. of CO2 in the atmosphere per year. Granted that’s a best case scenario (assuming you can get enough sunlight per day and approximately 1.5 lbs. of CO2 produced per kWh used.) Of course, that doesn’t even figure in all the other iPods, cell phones, PDAs, microcontrollers (I use it to power my Arduino projects) and other USB devices that can be powered by this charger - one little solar cell charger may not seem like it can make a difference, but add all those millions of devices together and that’s a lot of energy! His MightyMintyBoost is a battery booster for devices that can charge via USB, and it accepts incoming current from the solar panel on top (or, on cloudy days, can be charged via a wall charger or the USB port on your computer). Hit up the link below to see his full build guide and create your own MightyMintyBoost. MightyMintyBoost [Instructables]
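    For what it's worth, the math behind those figures appears to assume roughly 4.6 Wh per device per daily charge (that per-charge figure is an assumption on our part, not stated in the quote): 30,000,000 devices x ~4.6 Wh x 365 days is about 50.6 GWh per year, and 50,644,000 kWh x 1.5 lbs. of CO2 per kWh works out to roughly the 75.9 million lbs. quoted.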

    Read the article

  • two possible wifi devices competing, one is hard blocked - unable to connect wireless

    - by patrickmw
    blacklisted acer_wmi because that was showing up in the rfkill list then ideapad_wlan was listed $ rfkill list wifi 1: ideapad_wlan: Wireless LAN Soft blocked: no Hard blocked: no 3: brcmwl-0: Wireless LAN Soft blocked: no Hard blocked: yes $ lshw -C network *-network description: Ethernet interface product: AR8131 Gigabit Ethernet vendor: Atheros Communications physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: c0 serial: f0:de:f1:12:21:e9 size: 1Gbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI duplex=full firmware=N/A ip=192.168.1.139 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s resources: irq:42 memory:f0400000-f043ffff ioport:2000(size=128) *-network description: Wireless interface product: BCM4313 802.11b/g/n Wireless LAN Controller vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:04:00.0 logical name: eth1 version: 01 serial: ac:81:12:38:ba:89 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11 resources: irq:17 memory:f0500000-f0503fff contents of /var/lib/NetworkManager/NetworkManager.state [main] NetworkingEnabled=true WirelessEnabled=true WWANEnabled=true I'm not sure how to disable the wifi devices independently. I'm also not sure which device is the correct one. I think its the brcmw device. Any suggestions?
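    Since the hard block on brcmwl-0 usually just mirrors what the laptop's wireless switch/hotkey driver reports, a hedged thing to try (assuming the physical switch really is on) is to reload the ideapad platform driver and re-check, rather than blacklisting anything else:
      sudo rfkill unblock all           # clears soft blocks only; a hard block needs the switch/driver
      sudo modprobe -r ideapad_laptop   # remove the Lenovo hotkey/rfkill driver...
      sudo modprobe ideapad_laptop      # ...and reload it so the switch state is re-read
      rfkill list                       # check whether brcmwl-0 is still hard blocked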

    Read the article

  • Is there any evidence that drugs can actually help programmers produce "better" code? [closed]

    - by sytycs
    I just read this quote from Steve Jobs: "Doing LSD was one of the two or three most important things I have done in my life." Also a quote from that article: He was hardly alone among computer scientists in his appreciation of hallucinogenics and their capacity to liberate human thought from the prison of the mind. Now I'm wondering if there's any evidence to support the theory that drugs can help make a "better" programmer. Has there ever been a study where programmers have been given drugs to see if they could produce "better" code? Is there a well-known programming concept or piece of code which originated from people who were on drugs? EDIT So I did a little more research and it turns out Dennis R. Wier actually documented how he took LSD to wrap his head around a coding project: "At one point in the project I could not get an overall viewpoint for the operation of the entire system. It really was too much for my brain to keep all the subtle aspects and processing nuances clear so I could get a processing and design overview. After struggling with this problem for a few weeks, I decided to use a little acid to see if it would enable a breakthrough, because otherwise, I would not be able to complete the project and be certain of a consistent overall design"[1] There is also an interesting article on wired about Kevin Herbet, who used LSD to solve tough technical problems and chemist Kary Mullis even said "...that LSD had helped him develop the polymerase chain reaction that helps amplify specific DNA sequences." [2]

    Read the article

  • Java stored procedures in Oracle, a good idea?

    - by Scott A
    I'm considering using a Java stored procedure as a very small shim to allow UDP communication from a PL/SQL package. Oracle does not provide a UTL_UDP to match its UTL_TCP. There is a 3rd party XUTL_UDP that uses Java, but it's closed source (meaning I can't see how it's implemented, not that I don't want to use closed source). An important distinction between PL/SQL and Java stored procedures with regards to networking: PL/SQL sockets are closed when dbms_session.reset_package is called, but Java sockets are not. So if you want to keep a socket open to avoid the tear-down/reconnect costs, you can't do it in sessions that are using reset_package (like mod_plsql or mod_owa HTTP requests). I haven't used Java stored procedures in a production capacity in Oracle before. This is a very large, heavily-used database, and this particular shim would be heavily used as well (it serves as a UDP bridge between a PL/SQL RFC 5424 syslog client and the local rsyslog daemon). Am I opening myself up for woe and horror, or are Java stored procedures stable and robust enough for usage in 10g? I'm wondering about issues with the embedded JVM, the jit, garbage collection, or other things that might impact a heavily used database.

    Read the article

  • WolframAlpha Can Now Do In-depth Analysis of Your Facebook Account

    - by Jason Fitzpatrick
    If you’re a big fan of WolframAlpha’s ability to crunch the numbers on just about anything–and we certainly are–you’ll likely be just as delighted as we were to watch it massage the data from your Facebook account. Find out your most liked, discussed, and shared posts, see your Facebook habits, and other neat trends. I unleashed it on my account this morning, not sure what to expect from the results. Within the results tabulation WolframAlpha provided me with all sorts of neat data breakdowns. I now know exactly how many days it is to my next birthday, the composition of my aggregate posting habits (how many posts are status updates, links, or photos), the time of day when I do the most posting (and what the composition of those posts is), and my average post length. I also know my most liked post and my most commented on post. It will even crunch the numbers on your network of friends (60.6% of my friends are married, for example). By far one of the more interesting analyses it does on the friendship data, however, is organizing all your friends into relationship clusters so you can see who in your Facebook network is friends with other people in your Facebook network. The service from WolframAlpha is free: simply visit the WolframAlpha search portal and type in “Facebook report” to start the process. You’ll be prompted to create a WolframAlpha account if you don’t have one and to authorize the WolframAlpha Facebook app to access your data. Your Facebook data is cached to your WolframAlpha account for one hour in order to crunch the numbers and display the results. WolframAlpha

    Read the article

  • Common Live Upgrade problems

    - by user12611829
    As I have worked with customers deploying Live Upgrade in their environments, several problems seem to surface over and over. With this blog article, I will try to collect these troubles, as well as suggest some workarounds. If this sounds like the beginnings of a Wiki, you would be right. At present, there is not enough material for one, so we will use this blog for the time being. I do expect new material to be posted on occasion, so if you wish to bookmark it for future reference, a permanent link can be found here. Live Upgrade copies over ZFS root clone This was introduced in Solaris 10 10/09 (u8) and the root of the problem is a duplicate entry in the source boot environments ICF configuration file. Prior to u8, a ZFS root file system was not included in /etc/vfstab, since the mount is implicit at boot time. Starting with u8, the root file system is included in /etc/vfstab, and when the boot environment is scanned to create the ICF file, a duplicate entry is recorded. Here's what the error looks like. # lucreate -n s10u9-baseline Checking GRUB menu... System has findroot enabled GRUB Analyzing system configuration. Comparing source boot environment file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment. Updating boot environment description database on all BEs. Updating system configuration files. Creating configuration for boot environment . Source boot environment is . Creating boot environment . Creating file systems on boot environment . Creating file system for in zone on . The error indicator ----- /usr/lib/lu/lumkfs: test: unknown operator zfs Populating file systems on boot environment . Checking selection integrity. Integrity check OK. Populating contents of mount point . This should not happen ------ Copying. Ctrl-C and cleanup If you weren't paying close attention, you might not even know this is an error. The symptoms are lucreate times that are way too long due to the extraneous copy, or the one that alerted me to the problem, the root file system is filling up - again thanks to a redundant copy. This problem has already been identified and corrected, and a patch (121431-58 or later for x86, 121430-57 for SPARC) is available. Unfortunately, this patch has not yet made it into the Solaris 10 Recommended Patch Cluster. Applying the prerequisite patches from the latest cluster is a recommendation from the Live Upgrade Survival Guide blog, so an additional step will be required until the patch is included. Let's see how this works. # patchadd -p | grep 121431 Patch: 121429-13 Obsoletes: Requires: 120236-01 121431-16 Incompatibles: Packages: SUNWluzone Patch: 121431-54 Obsoletes: 121436-05 121438-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWluu SUNWlur # unzip 121431-58 # patchadd 121431-58 Validating patches... Loading patches installed on the system... Done! Loading patches requested to install. Done! Checking patches that you specified for installation. Done! Approved patches will be installed in this order: 121431-58 Checking installed patches... Executing prepatch script... Installing patch packages... Patch 121431-58 has been successfully installed. See /var/sadm/patch/121431-58/log for details Executing postpatch script... Patch packages installed: SUNWlucfg SUNWlur SUNWluu # lucreate -n s10u9-baseline Checking GRUB menu... System has findroot enabled GRUB Analyzing system configuration. INFORMATION: Unable to determine size or capacity of slice . 
Comparing source boot environment file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment. INFORMATION: Unable to determine size or capacity of slice . Updating boot environment description database on all BEs. Updating system configuration files. Creating configuration for boot environment . Source boot environment is . Creating boot environment . Cloning file systems from boot environment to create boot environment . Creating snapshot for on . Creating clone for on . Setting canmount=noauto for in zone on . Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev. Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev. Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev. File propagation successful Copied GRUB menu from PBE to ABE No entry for BE in GRUB menu Population of boot environment successful. Creation of boot environment successful. This time it took just a few seconds. A cursory examination of the offending ICF file (/etc/lu/ICF.3 in this case) shows that the duplicate root file system entry is now gone. # cat /etc/lu/ICF.3 s10u8-baseline:-:/dev/zvol/dsk/panroot/swap:swap:8388608 s10u8-baseline:/:panroot/ROOT/s10u8-baseline:zfs:0 s10u8-baseline:/vbox:pandora/vbox:zfs:0 s10u8-baseline:/setup:pandora/setup:zfs:0 s10u8-baseline:/export:pandora/export:zfs:0 s10u8-baseline:/pandora:pandora:zfs:0 s10u8-baseline:/panroot:panroot:zfs:0 s10u8-baseline:/workshop:pandora/workshop:zfs:0 s10u8-baseline:/export/iso:pandora/iso:zfs:0 s10u8-baseline:/export/home:pandora/home:zfs:0 s10u8-baseline:/vbox/HardDisks:pandora/vbox/HardDisks:zfs:0 s10u8-baseline:/vbox/HardDisks/WinXP:pandora/vbox/HardDisks/WinXP:zfs:0 Solaris 10 9/10 introduces new autoregistration file This one is actually mentioned in the Oracle Solaris 9/10 release notes. I know, I hate it when that happens too. Here's what the "error" looks like. # luupgrade -u -s /mnt -n s10u9-baseline System has findroot enabled GRUB No entry for BE in GRUB menu Copying failsafe kernel from media. 61364 blocks miniroot filesystem is Mounting miniroot at ERROR: The auto registration file does not exist or incomplete. The auto registration file is mandatory for this upgrade. Use -k argument along with luupgrade command. autoreg_file is path to auto registration information file. See sysidcfg(4) for a list of valid keywords for use in this file. The format of the file is as follows. oracle_user=xxxx oracle_pw=xxxx http_proxy_host=xxxx http_proxy_port=xxxx http_proxy_user=xxxx http_proxy_pw=xxxx For more details refer "Oracle Solaris 10 9/10 Installation Guide: Planning for Installation and Upgrade". As with the previous problem, this is also easy to work around. Assuming that you don't want to use the auto-registration feature at upgrade time, create a file that contains just autoreg=disable and pass the filename on to luupgrade. Here is an example. # echo "autoreg=disable" /var/tmp/no-autoreg # luupgrade -u -s /mnt -k /var/tmp/no-autoreg -n s10u9-baseline System has findroot enabled GRUB No entry for BE in GRUB menu Copying failsafe kernel from media. 61364 blocks miniroot filesystem is Mounting miniroot at ####################################################################### NOTE: To improve products and services, Oracle Solaris communicates configuration data to Oracle after rebooting. 
You can register your version of Oracle Solaris to capture this data for your use, or the data is sent anonymously. For information about what configuration data is communicated and how to control this facility, see the Release Notes or www.oracle.com/goto/solarisautoreg. INFORMATION: After activated and booted into new BE , Auto Registration happens automatically with the following Information autoreg=disable ####################################################################### Validating the contents of the media . The media is a standard Solaris media. The media contains an operating system upgrade image. The media contains version . Constructing upgrade profile to use. Locating the operating system upgrade program. Checking for existence of previously scheduled Live Upgrade requests. Creating upgrade profile for BE . Checking for GRUB menu on ABE . Saving GRUB menu on ABE . Checking for x86 boot partition on ABE. Determining packages to install or upgrade for BE . Performing the operating system upgrade of the BE . CAUTION: Interrupting this process may leave the boot environment unstable or unbootable. The Live Upgrade operation now proceeds as expected. Once the system upgrade is complete, we can manually register the system. If you want to do a hands off registration during the upgrade, see the Oracle Solaris Auto Registration section of the Oracle Solaris Release Notes for instructions on how to do that. Technocrati Tags: Oracle Solaris Patching Live Upgrade var sc_project=1193495; var sc_invisible=1; var sc_security="a46f6831";
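    Condensed, the auto-registration workaround above is just two commands (the redirect is what actually creates the keyword file):
      # create a file containing only the line autoreg=disable
      echo "autoreg=disable" > /var/tmp/no-autoreg
      # hand it to luupgrade with -k so the upgrade proceeds without auto registration
      luupgrade -u -s /mnt -k /var/tmp/no-autoreg -n s10u9-baseline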

    Read the article

  • Donkey Kong Wall Shelves [DIY Project Inspiration]

    - by Asian Angel
    Are you looking for inspiration for a geeky DIY project to get into over the holiday weekend? Then take a look at this fantastic looking set of Donkey Kong wall shelves created by artist Igor Chak! From the website: So here is a Donkey Kong wall, strong, good looking but still has its character. The wall is made out of individual sections; each section is made out of durable but light carbon fiber, anodized aluminum pixels that are joined with strong stainless steel rods and toughened glass tops. The special mounts themselves are made out of steel and can support up to 60 lbs. Igor’s notes and additional images for the project can be found approximately half way down the webpage linked below. If Donkey Kong is not your favorite game, this could still inspire a shelving project focused on the one you like best! Donkey Kong Wall [via Neatorama]

    Read the article

  • Four Proven Advantages of Online Learning | Outside Cost, Accessibility or Flexibility

    - by Mohit Phogat
    Coursera believes that online courses complement and supplement traditional education (versus a common misconception online will “replace” traditional.) Our research shows that Coursera’s platform, when used concurrently with a traditional classroom setup, is ideal for “blended learning” (i.e., students watch lectures pre-class, then class-time focuses on interactive work and discussion.) Additionally, we agree with Brad Zomick of SkilledUp—an online learning aggregator—who acknowledges an online course “isn’t an alternative at all but rather a different path with its own rewards.” The advantages of Coursera and our apps for mobile were straightforward and conspicuous from the start: we’re free, open, and flexible to learners’ unique needs and style. Over the past two years, however, the evidence proves there are many more tangible benefits to open, online learning. In SkilledUp’s “The Advantages of Online Courses [Infographic]”–crafted from findings of leading educational research–four observations stand out from the overt characteristics: Speedier Learning - “Research shows that online students achieve same or better learning results in about half the time as those in traditional courses” More Active, Engaged & Motivated - Learners thrive “when working with coursework that is challenging but within their capacity to master.” Tangible Skill Building - with an “improved attitude toward learning” Better Teaching Quality - Courses are taught by experts, with various multimedia and cutting-edge technology, and “are usually better organized than traditional courses” This is only the beginning, Courserians! Everyday we hear your incredible stories on how open online courses enrich your lives and enhance your careers. Meanwhile we study the steady stream of scientific, big-data research proving their worth on a large-scale (such as UPenn’s latest research on the welcomed diversity in Coursera-hosted Wharton MBA courses.) Our motto “Learning without Limits” reminds us that open, online courses give tremendous opportunity to those that might not otherwise have access (or time, or money) to study at a high-caliber institution. Source: Coursera

    Read the article
