Search Results

Search found 13776 results on 552 pages for 'high technology'.

Page 92/552

  • SQLAuthority News TechEd India April 12-14, 2010 Bangalore An Unforgettable Experience An Opport

    TechEd India was one of the largest Technology events in India led by Microsoft. This event was attended by more than 3,000 technology enthusiasts, making it one of the most well-organized events of the year. Though I attempted to attend almost all the technology events here, I have not seen any bigger or better event [...]

    Read the article

  • The True Cost of a Solution

    - by D'Arcy Lussier
    I had a Twitter chat recently with someone suggesting Oracle and SQL Server were losing out to OSS (Open Source Software) in the enterprise due to their issues with scaling or being too generic (one size fits all). I challenged that a bit, as my experience with enterprise-sized clients has been different – averse to OSS but receptive to an established vendor. The response I got was: "Found it easier to influence change by showing how X can't solve our problems or X is extremely costly to scale. Money talks." I think this is definitely the right approach for anyone pitching an alternate or alien technology as part of a solution: identify the issue, identify the solution, then present pros and cons including a cost/benefit analysis. What can happen, though, is that we get tunnel vision and don't present a full view of the costs associated with a solution.

    An "Acura"te Example (I'm so clever...)
    This is my dream vehicle: a Crystal Black Pearl coloured Acura MDX with the SH-AWD package! We're a family of 4 (5 if my daughters ever get their wish of adding a dog), and I've always wanted a luxury type of vehicle, so this is a perfect replacement in a few years when our Rav4 has hit the 8–10 year mark. MSRP – $62,890. But as we all know, that's not *really* the cost of the vehicle. There are taxes and fees added on, there's the extended warranty if I choose to purchase it, and there's the finance rate that needs to be factored in...
    MSRP – $62,890
    Taxes – $7,546
    Warranty – $2,500
    SubTotal – $72,936
    Finance Charge – $1,094.04
    Grand Total – $74,030
    Well! Glad we did that exercise – we discovered an extra $11k added on to the MSRP! Now we have our true price... or do we?

    Lifetime of the Vehicle
    I'm expecting to have this vehicle for 7–10 years. While the hard cost of the vehicle is known and dealt with, the costs to run and maintain the vehicle come on top of it. I did some research, and here's what I found:

    Fuel and Mileage
    Gas prices are high as it is for regular fuel, but getting into an MDX will require that I *only* purchase premium fuel, which comes at a premium price. I need to expect my bill at the pump to be higher. Comparing the MDX to my 2007 Rav4 also shows I'll be gassing up more often. The Rav4 has a city MPG of 21, while the MDX plummets to 16! The MDX does have a bigger fuel tank though, so all in all the number of times I hit the pumps might even out. Still, I estimate I'll be spending approximately $8,000–$10,000 more on gas over a 10-year period than with my current Rav4.

    Service Options Limited
    Although I have options with my Toyota here in Winnipeg (we have 4 Toyota dealerships), I do go to my original dealer for any service work. Still, I like the fact that I have options. However, there's only one Acura dealership in all of Winnipeg! So if, for whatever reason, I'm not satisfied with the level of service, I'm stuck.

    Non-Warranty Service Work
    Let's also not forget that there's a bulk of work required every year that is *not* covered under warranty – oil changes, tire rotations, brake pads, etc. I expect I'll need to get new tires at the 5-year mark as well, which can easily be $1,200–$1,500 (I just paid $1,000 for new tires for the Rav4, and we're at the 5-year mark). Now, these aren't *new* costs that I'm not used to from our existing vehicles, but they should still be factored in. I'd budget $500/year, or $5,000 over the 10 years I'll own the vehicle.

    Final Assessment
    So let's re-assess the true cost of my dream MDX:
    MSRP – $62,890
    Taxes – $7,546
    Warranty – $2,500
    Finance Charge – $1,094
    Gas – $10,000
    Service Work – $5,000
    Grand Total – $89,030
    So now I have a better idea of the overall 10-year cost, and I've identified some concerns with local service availability. There is now much more to consider than the original $62,890 price tag.

    Tying This Back to Technology Solutions
    The process we just went through is no different from what organizations do when considering implementing a new system, technology, or technology-based solution within their environments. It's easy to tout the short-term cost savings of a particular product/platform/technology in a vacuum. But it's when you consider the wider impact that the true cost comes into play. Let's create a scenario: a company is not happy with its current data reporting suite. An employee suggests moving to an open source solution. The selling points are:
    - Because it's open source, it's free
    - The organization would have access to the source code, so they could alter it however they wished
    - It provides features not available with the current reporting suite
    At first this sounds great to management and the executive, but then they start asking some questions and uncover more information:
    - The OSS product is built on a technology not used anywhere within the organization
    - There are no vendors offering product support for the OSS product
    - The OSS product requires a specific server platform to operate on, one that's not standard in the organization
    All of a sudden, the true cost of implementing this solution starts to become clearer. The company might save money on licensing costs, but its training costs would increase significantly – developers would need to learn how to develop in the technology the OSS solution was built on, IT staff must learn how to set up and maintain a new server platform within their existing infrastructure, and if a problem were found there would be no vendor to contact for support. The true cost of implementing a "free" OSS solution is actually spinning up a project to implement it within the organization – no small cost. And that's just the short-term cost. Now the organization must ensure it maintains trained staff who can make changes to the OSS reporting solution and IT staff who stay knowledgeable in the new server platform. If those skills are very niche, then higher labour costs could be incurred if those people are hard to find, or if trained employees use that knowledge as leverage for higher pay. Maybe a vendor exists that will contract out support, but then there are those costs to consider as well. And let's not forget end-user training – in our example, anyone who runs reports will need to be trained on how to use the new system.

    Here's the Point
    We still tend to look at software in an "off the shelf" kind of way. It's very easy to say "oh, this product is better than vendor X's product – and it's free because it's OSS!" but the reality is that implementing any new technology within an organization has a cost regardless of the retail price of the product. Training, integration, support – these are real costs that impact an organization and span multiple departments. Whether you're pitching an improved business process, a new system, or a new technology, you need to consider the bigger-picture costs of implementation. What you define as success (in our example, having better reporting functionality) might not be what others define as success if implementing your solution causes them issues. A true enterprise solution needs to consider the entire enterprise.
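    The same running tally is easy to express as a few lines of arithmetic. Below is a minimal sketch in Python of the total-cost-of-ownership sums from the example above; the figures come straight from the post, while the dictionary names and the total() helper are purely illustrative.

    # Rough total-cost-of-ownership tally for the MDX example (figures from the post).
    purchase_costs = {
        "MSRP": 62_890,
        "Taxes": 7_546,
        "Warranty": 2_500,
        "Finance charge": 1_094,
    }
    ownership_costs = {               # estimated over a 10-year ownership period
        "Gas (premium fuel delta)": 10_000,
        "Non-warranty service work": 5_000,
    }

    def total(costs):
        return sum(costs.values())

    print("Sticker price:     ", purchase_costs["MSRP"])                         # 62890
    print("Price at purchase: ", total(purchase_costs))                          # 74030
    print("True 10-year cost: ", total(purchase_costs) + total(ownership_costs)) # 89030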

    Read the article

  • Were the first assemblers written in machine code?

    - by The111
    I am reading the book The Elements of Computing Systems: Building a Modern Computer from First Principles, which contains projects encompassing the build of a computer from boolean gates all the way to high level applications (in that order). The current project I'm working on is writing an assembler using a high level language of my choice, to translate from Hack assembly code to Hack machine code (Hack is the name of the hardware platform built in the previous chapters). Although the hardware has all been built in a simulator, I have tried to pretend that I am really constructing each level using only the tools available to me at that point in the real process. That said, it got me thinking. Using a high level language to write my assembler is certainly convenient, but for the very first assembler ever written (i.e. in history), wouldn't it need to be written in machine code, since that's all that existed at the time? And a correlated question... how about today? If a brand new CPU architecture comes out, with a brand new instruction set, and a brand new assembly syntax, how would the assembler be constructed? I'm assuming you could still use an existing high level language to generate binaries for the assembler program, since if you know the syntax of both the assembly and machine languages for your new platform, then the task of writing the assembler is really just a text analysis task and is not inherently related to that platform (i.e. needing to be written in that platform's machine language)... which is the very reason I am able to "cheat" while writing my Hack assembler in 2012, and use some preexisting high level language to help me out.
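    As a rough illustration of why writing an assembler is "really just a text analysis task", here is a minimal sketch in Python that translates Hack A-instructions (the @value form) into 16-bit machine words. It is only a fragment of the project's assembler, written under the assumption of the standard Hack A-instruction encoding (a leading 0 bit followed by a 15-bit value); C-instructions, labels, and symbols are omitted.

    # Minimal sketch: translate Hack A-instructions ("@21") into 16-bit binary words.
    # Illustrative only - a real assembler also handles C-instructions, labels and symbols.
    def assemble_a_instruction(line):
        value = int(line.strip()[1:])         # drop the leading '@'
        return format(value, "016b")          # 0vvvvvvvvvvvvvvv

    for instruction in ["@2", "@3", "@0"]:
        print(assemble_a_instruction(instruction))
    # @2 -> 0000000000000010, @3 -> 0000000000000011, @0 -> 0000000000000000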

    Read the article

  • lm-sensors and CPU temperatures

    - by nalsanj
    I am on Ubuntu Precise Pangolin. The processor is an Intel i3; it's a desktop. I installed lm-sensors and below is the report that "sensors" gave:

    coretemp-isa-0000
    Adapter: ISA adapter
    Core 0:       +30.0°C  (high = +89.0°C, crit = +105.0°C)
    Core 2:       +33.0°C  (high = +89.0°C, crit = +105.0°C)

    w83627dhg-isa-0a10
    Adapter: ISA adapter
    Vcore:        +0.93 V  (min = +0.00 V, max = +1.74 V)
    in1:          +0.75 V  (min = +1.99 V, max = +1.99 V)  ALARM
    AVCC:         +3.36 V  (min = +2.98 V, max = +3.63 V)
    +3.3V:        +3.36 V  (min = +2.98 V, max = +3.63 V)
    in4:          +1.30 V  (min = +0.90 V, max = +1.77 V)
    in5:          +0.76 V  (min = +1.15 V, max = +0.90 V)  ALARM
    in6:          +1.06 V  (min = +0.94 V, max = +2.03 V)
    3VSB:         +3.36 V  (min = +2.98 V, max = +3.63 V)
    Vbat:         +3.36 V  (min = +2.70 V, max = +3.30 V)  ALARM
    fan1:           0 RPM  (min = 3515 RPM, div = 128)  ALARM
    fan2:           0 RPM  (min = 10546 RPM, div = 128)  ALARM
    fan3:           0 RPM  (min = 10546 RPM, div = 128)  ALARM
    fan5:           0 RPM  (min = 10546 RPM, div = 128)  ALARM
    temp1:        +39.0°C  (high = -121.0°C, hyst = +9.0°C)  ALARM  sensor = diode
    temp2:        +39.0°C  (high = +80.0°C, hyst = +75.0°C)  sensor = diode
    temp3:       +127.0°C  (high = +80.0°C, hyst = +75.0°C)  ALARM  sensor = thermistor
    cpu0_vid:     +2.050 V
    intrusion0:   OK

    radeon-pci-0100
    Adapter: PCI adapter
    temp1:        +70.5°C

    The fan sensors are detecting 0 RPM and some temperatures are out of range (the ALARMs above), but I don't understand it very well. Can someone help out?
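    The numbers above come from the kernel's hwmon interface, which lm-sensors reads and compares against each sensor's limits before flagging an ALARM. As a minimal sketch (not an answer to the fan/ALARM question), the Python snippet below walks the standard /sys/class/hwmon layout and prints each temperature next to its "high" (tempX_max) limit; which files actually exist depends on the driver.

    # Minimal sketch: read hwmon temperatures and compare them to their "high" limits.
    # Uses the standard sysfs hwmon layout; not every driver exposes every file.
    import glob
    import os

    for temp_input in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        base = temp_input[: -len("_input")]                # e.g. .../hwmon0/temp1
        with open(temp_input) as f:
            temp_c = int(f.read()) / 1000.0                # values are in millidegrees C
        high_c = None
        if os.path.exists(base + "_max"):                  # the "high" limit, if present
            with open(base + "_max") as f:
                high_c = int(f.read()) / 1000.0
        high_str = f"{high_c:.1f}" if high_c is not None else "n/a"
        flag = "  OVER LIMIT" if high_c is not None and temp_c > high_c else ""
        name = f"{os.path.basename(os.path.dirname(base))}/{os.path.basename(base)}"
        print(f"{name}: {temp_c:+.1f}°C (high = {high_str}°C){flag}")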

    Read the article

  • Want to develop my own primitive physics engine, don't know how to start with its high-level architecture. Suggestions?

    - by Violet Giraffe
    A few years ago I tried to make a simple 3D game – billiards. I completed about 50% of it but got stuck on the physics. Basically, I only need to calculate balls rolling over a flat surface, but it would be nice to make something more flexible. I know all the formulas and laws (most of them, anyway); the problem is I have no idea how to structure a good physics engine architecture-wise. I tried Google and other forums but didn't find what I was looking for. The only suggestion was to look at an open-source engine, but I'm not a good enough programmer to make heads or tails of it...
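    A common way to start an engine like this is a plain simulation loop: keep a list of ball states, integrate velocity and position each time step, apply a rolling-resistance term, then resolve collisions. Below is a minimal sketch in Python of that structure; the class, constants, and collision handling are illustrative simplifications (equal masses, no spin, no cushion bounces), not a recommendation of any particular engine design.

    # Minimal physics-loop sketch: balls rolling on a flat table (no spin, no cushions).
    import math
    from dataclasses import dataclass

    @dataclass
    class Ball:
        x: float
        y: float
        vx: float
        vy: float
        radius: float = 0.028      # metres, roughly a billiard ball

    FRICTION = 0.8                 # crude rolling-resistance damping per second

    def step(balls, dt):
        # integrate positions and apply simplified rolling resistance
        for b in balls:
            b.x += b.vx * dt
            b.y += b.vy * dt
            b.vx -= b.vx * FRICTION * dt
            b.vy -= b.vy * FRICTION * dt
        # ball-ball collisions: equal masses, elastic -> exchange normal velocity components
        for i in range(len(balls)):
            for j in range(i + 1, len(balls)):
                a, c = balls[i], balls[j]
                dx, dy = c.x - a.x, c.y - a.y
                dist = math.hypot(dx, dy)
                if 0 < dist < a.radius + c.radius:
                    nx, ny = dx / dist, dy / dist
                    rel = (c.vx - a.vx) * nx + (c.vy - a.vy) * ny
                    if rel < 0:    # only respond if the balls are approaching
                        a.vx += rel * nx; a.vy += rel * ny
                        c.vx -= rel * nx; c.vy -= rel * ny

    if __name__ == "__main__":
        cue = Ball(0.0, 0.0, vx=1.5, vy=0.0)
        for _ in range(100):       # one second of simulation at 10 ms steps
            step([cue], 0.01)
        print(round(cue.x, 3), round(cue.vx, 3))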

    Read the article

  • Introducing Sreelatha Doma, Guest Author

    - by Steven Chan
    I'm very pleased to welcome Sreelatha Doma to this blog's panel of guest authors.  Sreelatha Doma is a Principal Engineer - Database Administration in the Oracle Applications Technology Integration team, with a current focus on database technology.  She has been with Oracle since October 2005.  She was an EBS technology stack certification engineer for four years, and was involved in various technology product certifications for databases, RAC, browsers, Forms and middleware products. Prior to joining Oracle, she worked as a database administrator and Senior Technical Officer in Electronics and Communications India Limited (ECIL) and the Department of Atomic Energy.  She started her career as a software developer. Sreelatha has been in the IT industry for over 13 years, and holds a B.Tech in Computer Science and Engineering.

    Read the article

  • DotNetNuke Website Developers in India

    DotNetNuke is a relatively new technology here in India, and developers providing services through it are available in reasonable numbers. The technology has been brought into effect over a period... [Author: John Anthony - Web Design and Development - May 18, 2010]

    Read the article

  • Laptop runs HOT after 12.10 upgrade!

    - by dinkelk
    I was running 12.04 for 6 months; my laptop ran almost silently and cool enough to hold on my lap. I updated to 12.10 and now my computer gets too hot to hold on my lap and the fan is constantly running on full blast. This is the output of sensors:

    acpitz-virtual-0
    Adapter: Virtual device
    temp1:         +84.0°C  (crit = +99.0°C)

    coretemp-isa-0000
    Adapter: ISA adapter
    Physical id 0: +84.0°C  (high = +86.0°C, crit = +100.0°C)
    Core 0:        +74.0°C  (high = +86.0°C, crit = +100.0°C)
    Core 1:        +72.0°C  (high = +86.0°C, crit = +100.0°C)
    Core 2:        +75.0°C  (high = +86.0°C, crit = +100.0°C)
    Core 3:        +84.0°C  (high = +86.0°C, crit = +100.0°C)

    radeon-pci-0100
    Adapter: PCI adapter
    temp1:         +76.0°C

    I have an HP Pavilion dv6 with an i7 and AMD Radeon graphics. Please let me know if you need additional information. What could be different between the two Ubuntu editions that caused such a drastic change?

    Edit 1: Per @Paul's suggestion, I ran htop to try to narrow down the problem. This is about 10 minutes after boot-up; htop, yakuake, and a Chrome page with one tab opened to this question are all that I have manually opened. The most taxing program on the CPU is htop itself. I think the problem must lie elsewhere; my temps are already up to ~65°C for the CPU and ~69°C for the GPU, with nearly 0% CPU usage.

    Read the article

  • Root cause for high CPU usage; which measurement to trust more: Windows Task Manager or Process Explorer?

    - by p.campbell
    Consider this Windows 8.1 machine (an in-place upgrade from Windows 8) with differing reports on its CPU usage. The machine is idle, and has been for 3 days; no CPU-intensive tasks have run currently or over the 3-day idle period. Windows Task Manager constantly reports an incredibly high CPU usage value of around 75% (and it increases over time!). Process Explorer from SysInternals reports a much lower figure of around 42%. How does Process Explorer report 42.14% usage while its columns report Idle at 57%, with the sum of the other processes not even approaching 10%? Which of these two values should I trust more, and why should it be trusted over the other measurement? How can I actually determine which process is causing Task Manager to report its values? These Process Explorer metrics were taken with Administrator privileges and with the option 'Show Details for All Processes' enabled.
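    One way to sanity-check both tools is to take an independent sample of overall CPU usage and of the per-process breakdown and compare the two totals. The Python sketch below does this with the third-party psutil package (an assumption on my part: it is not mentioned in the question and must be installed separately); per-process figures are normalized by the core count so they are comparable to the machine-wide percentage.

    # Minimal sketch: compare machine-wide CPU usage with the sum of per-process usage.
    # Requires the third-party psutil package (pip install psutil).
    import time
    import psutil

    psutil.cpu_percent(interval=None)                      # prime the machine-wide counter
    for p in psutil.process_iter():
        try:
            p.cpu_percent(interval=None)                   # prime each per-process counter
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

    time.sleep(2)                                          # measurement window

    overall = psutil.cpu_percent(interval=None)
    per_process = []
    for p in psutil.process_iter(["name"]):
        try:
            per_process.append((p.cpu_percent(interval=None), p.info["name"]))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

    cores = psutil.cpu_count()
    print(f"Machine-wide CPU:   {overall:.1f}%")
    print(f"Sum over processes: {sum(c for c, _ in per_process) / cores:.1f}%")
    for cpu, name in sorted(per_process, key=lambda item: item[0], reverse=True)[:5]:
        print(f"  {name}: {cpu / cores:.1f}%")             # top consumers, whole-machine %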

    Read the article

  • New Whitepaper - Exalogic Virtualization Architecture

    - by Javier Puerta
    One of the key enhancements in the current generation of Oracle Exalogic systems, and the focus of this whitepaper, is Oracle's incorporation of virtualized InfiniBand I/O interconnects using Single Root I/O Virtualization (SR-IOV) technology. This allows the system to share the internal InfiniBand network and storage fabric among as many as 63 virtual machines per physical server node with near-native performance, enabling high performance and high workload consolidation at the same time. Download it here: An Oracle White Paper - November 2012 - Oracle Exalogic Elastic Cloud: Advanced I/O Virtualization Architecture for Consolidating High-Performance Workloads

    Read the article

  • XDIME for Mobile Applications

    - by Carlos Gavidia
    I'm involved in a project that requires mobile-enabling some previously developed Portlets. The Portlets are deployed in WebSphere Portal, and the container offers a technology called IBM Mobile Portal Accelerator that uses XDIME to render mobile pages according to the device. I'm trying to read up on the technology and I'm having a bad time: Google only shows some outdated sites from IBM and even older posts from Volantis, another company involved in the technology (Amazon shows no related books). So... what's the current status of that technology, actually? Does it have some decent level of adoption?

    Read the article

  • Discover 25 Years of SPARC Innovation

    - by Cinzia Mascanzoni
    Over the last 25 years SPARC technology has led the field in enterprise IT innovation, providing world-record performance to data centers across the globe. Discover how the history of SPARC has formed the IT landscape of today, and how upcoming improvements to this industry-leading technology will continue to shape the future. Register now to hear the story of SPARC from the people who shaped the past, present, and future of this remarkable technology.

    Read the article

  • SQL Server 2012 AlwaysOn: Multisite Failover Cluster Instance

    SQL Server Failover Clustering, which includes support for both local and multisite failover configurations, is part of the SQL Server 2012 AlwaysOn implementation suite, designed to provide high availability and disaster recovery for SQL Server. The multisite failover clustering technology has been enhanced significantly in SQL Server 2012. This paper focuses primarily on the multisite failover cluster architecture, the enhancements to the technology in SQL Server 2012, and some best practices to help with its deployment.

    Read the article

  • "Growing Green"

    Organizations are managing more information, reducing fuel consumption, and developing clean energy with Oracle technology.

    Read the article
