Search Results


  • What incidentals do server maintenance people/devs need?

    - by SeniorShizzle
    I'm trying to put together a thank-you package for clients who are server-side developers and in charge of their company's servers and databases. Since I've never been in that line of work before, I would like to know what it is like and get inside your heads, for instance. My first thought was (don't hate me) a few patch cables. I don't know if you guys actually need these often or anything. What are some similar things that you like or need for your lives at work? Small incidentals under $10 are preferred, like coffee, a notebook, or screwdrivers.

    Read the article

  • Week in Geek: New Security Hole Found Just Hours After Latest Java Update Released

    - by Asian Angel
    Our first edition of WIG for September is filled with news link coverage on topics such as the new command line feature for developers in Firefox 16 Beta, Google restoring passwords lost using the Chrome iOS app, new password-stealing malware targeting Linux & Mac OS X users, and more. Special Note: The title refers to the latest security update of Java, just released this past Thursday. Please refer to our article on disabling Java here. Skull and crosshair targeting scope clipart courtesy of Clker.com.

    Read the article

  • Juju bootstrap fails with "Temporary failure in name resolution" using Amazon AWS

    - by Will
    I have followed the instructions over at https://juju.ubuntu.com/docs/config-aws.html to try and set myself up with a juju environment. It all seemed to set up alright: SSH keys, generated config, repository, adding access ID and secret keys to the environments.yaml file. Although my key file from the AWS IAM management console was called credentials.csv rather than rootkey, I couldn't find the link described in the documentation. When I run the command juju bootstrap from the testing page, it fails with the error: juju bootstrap ERROR Get https://s3-us-west-1.amazonaws.com/juju-gobblygookmynukmbersinhere/provider-state: lookup s3-us-west-1.amazonaws.com: Temporary failure in name resolution (note: I just replaced my numbers for this posting; my actual terminal has the real values). This is my first attempt at any EC2 work, so I have gone in and created new IAM profiles. What have I done wrong? Any help would be great. I think I'm in over my head!

    Read the article

  • Number crunching algo for learning multithreading?

    - by Austin Henley
    I have never really implemented anything dealing with threads; my only experience with them is reading about them in my undergrad. So I want to change that by writing a program that does some number crunching, but splits it up into several threads. My first ideas for this hopefully simple multithreaded program were: Beal's Conjecture brute force based on my SO question. Bailey-Borwein-Plouffe formula for calculating Pi. Prime number brute force search As you can see I have an interest in math and thought it would be fun to incorporate it into this, rather than coding something such as a server which wouldn't be nearly as fun! But the 3 ideas don't seem very appealing and I have already done some work on them in the past so I was curious if anyone had any ideas in the same spirit as these 3 that I could implement?
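
    One way to picture the third idea (a prime-number brute force split across threads) is the minimal Java sketch below. It is not from the original question; the PrimeCounter class and the chunking scheme are illustrative assumptions. Each worker counts primes in its own sub-range via an ExecutorService and the partial counts are summed at the end.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class PrimeCounter {
            // Naive trial-division primality test; deliberately simple, since the
            // point of the exercise is the threading, not the number theory.
            static boolean isPrime(long n) {
                if (n < 2) return false;
                for (long d = 2; d * d <= n; d++) {
                    if (n % d == 0) return false;
                }
                return true;
            }

            public static void main(String[] args) throws Exception {
                final long limit = 10_000_000L;          // search range [2, limit)
                final int threads = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(threads);

                // Split the range into one chunk per thread; each chunk is an
                // independent Callable, so there is no shared mutable state.
                long chunk = limit / threads;
                List<Future<Long>> results = new ArrayList<>();
                for (int i = 0; i < threads; i++) {
                    final long start = 2 + i * chunk;
                    final long end = (i == threads - 1) ? limit : start + chunk;
                    results.add(pool.submit(() -> {
                        long count = 0;
                        for (long n = start; n < end; n++) {
                            if (isPrime(n)) count++;
                        }
                        return count;
                    }));
                }

                long total = 0;
                for (Future<Long> f : results) total += f.get();   // blocks until each chunk finishes
                pool.shutdown();
                System.out.println("Primes below " + limit + ": " + total);
            }
        }

    Timing the same range with one thread and then with several is an easy way to see whether the split actually buys anything.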

    Read the article

  • Smart Grid Gateway and New Meter Data Management released

    - by Anthony Shorten
    Two products have just been released and are available from edelivery.oracle.com: Smart Grid Gateway 2.0.0, a new product to integrate with Smart Grid networks, and Meter Data Management 2.0.1, a new version of the Meter Data Management product. These are the first products to use the brand new version of the Oracle Utilities Application Framework (V4.1). The new framework builds on FW2.2 and FW4.0.2 to add exciting new features (this is just a subset): support for Database Vault; enhancements to Business Object maintenance; a Batch Statistics portal for benchmarking; custom template user exit support; file permissions now consistent with other Oracle products; use of Universal Connection Pool for all database pool access; and the ability to manage the batch data cache. Over the next few weeks I will be publishing articles and updates to existing whitepapers to highlight all the new features.

    Read the article

  • More Quick Interview Tips

    - by Ajarn Mark Caldwell
    In the last couple of years I have conducted a lot of interviews for application and database developers for my company, and I can tell you that the little things can mean a lot.  Here are a few quick tips to help you make a good first impression. A year ago I gave you my #1 interview tip: Do some basic research!  And a year later, I am still stunned by how few technical people do the most basic of research.  I can only guess that it is because it is so engrained in our psyche that technical competence is everything (see How to Manage Technical Employees for more on this idea) that we forget or ignore the importance of soft skills and the art of the interview.  Or maybe it is because we have heard the stories of the uber-geek who has zero personal skills but still makes a fortune working for Microsoft.  Well, here’s another quick tip:  You’re probably not as good as he is; and a large number of companies actually run small to medium sized teams and can’t really afford to have the social outcast in the group.  In a small team, everyone has to get along well, and that’s an important part of what I’m evaluating during the interview process. My #2 tip is to act alive!  I typically conduct screening interviews by phone before I bring someone in for an in-person.  I don’t care how laid-back you are or if you have a “quiet personality”, when we are talking, ACT like you are happy I called and you are interested in getting the job.  If you sound like you are bored-to-death and that you would be perfectly happy to never work again, I am perfectly happy to help you attain that goal, and I’ll move on to the next candidate. And closely related to #2, perhaps we’ll call it #2.1 is this tip:  When I call you on the phone for the interview, don’t answer your phone by just saying, “Hello”.  You know that the odds are about 999-to-1 that it is me calling for the interview because we have specifically arranged this time slot for the call.  And you can see on the caller ID that it is not one of your buddies calling, so identify yourself.  Don’t make me question whether I dialed the right number.  Answer your phone with a, “Hello, this is ___<your full name preferred, but at least your first name>___.”.  And when I say, “Hi, <your name>, this is Mark from <my company>” it would be really nice to hear you say, “Hi, Mark, I have been expecting your call.”  This sets the perfect tone for our conversation.  I know I have the right person; you are professional enough and interested enough in the job or contract to remember your appointments; and now we can move on to a little intro segment and get on with the reason for our call. As crazy as it sounds, I’ve actually had phone interviews that went like this: <Ring…> You:  “Hello?” Me:  “Hi, this is Mark from _______” You:  “Yeah?” Me:  “Is this <your name>?” You:  “Yeah.” Me:  “I had this time in my calendar for us to talk…were you expecting my call?” You:  “Oh, yeah, sure…” I used to be nice and would try to go ahead with the interview even after this bad start, thinking I was giving the candidate the benefit of the doubt…a second chance…but more often than not it was a struggle and 10 minutes into what was supposed to be a 45-minute call, I’m looking for a way to hang up without being rude myself.  It never worked out.  I never brought that person in for an in-person interview, much less offered them the job or contract.  Who knows, maybe they were some sort of wunderkind that we missed out on.  
What I know is that they would never fit in with the rest of the team, and around here that is absolutely critical. So, in conclusion… Act alive!  Identify yourself!  And do at least the most basic of research.

    Read the article

  • What should I learn to create web-services like ones listed? [closed]

    - by Gerald Blizz
    I am very inspired by websites like imgur, dropbox, screencloud, maybe w3schools...you get my point. Fresh web services with some new idea, not big portals but something simple yet useful and used by many people, something simple and new. What aspects of my developer career should I focus on to be able to build such things on my own if I have enough ideas? (Sure, if it ends up being popular I can get more developers to help me and so on, but at first I can do it alone, right?) I am currently a PHP web developer, and I know HTML+CSS+JS+AJAX+jQuery. But even so there is still web design, and there are a lot of paths: websites for enterprises, startups, web services, entertainment websites and serious bank/document-flow systems, frameworks used for big systems, different approaches for little ones, etc. Which path should I take to be able to start my own projects like the ones I listed above that inspire me?

    Read the article

  • What is a non-committal approach to software analysis?

    - by dsjbirch
    When I think about software analysis, the first things that come to mind are SSADM and UML. But what I want is a high-level view of the system before I commit to a programming paradigm. Where am I going wrong? How do I approach a problem in a high-level and generic way before I commit to a paradigm? What are the diagrams/tools available to support me? Edit: Some examples of tools that appear to be what I'm after are a block diagram (http://en.wikipedia.org/wiki/Block_diagram) and a data flow diagram (http://en.wikipedia.org/wiki/Data_flow_diagram).

    Read the article

  • Burning Snow Leopard DMG on Ubuntu

    - by Caitlann Lloyd
    So I have a copy of Mac OS X Snow Leopard.DMG and I am running Ubuntu 12.04. I tried to burn the DMG to a DVD using CD/DVD Creator; however, when I try, it comes up with a message saying 'The size of the file is over 2 GiB. Files larger than 2 GiB are not supported by the ISO9660 standard in its first and second versions (the most widespread ones). It is recommended to use the third version of the ISO9660 standard, which is supported by most operating systems, including Linux and all versions of Windows™. However, Mac OS X cannot read images created with version 3 of the ISO9660 standard.' How do I bypass this so my DVD is bootable on my iMac? By the way, I'm running Ubuntu on the iMac and it is my only partition; no other OS is installed.

    Read the article

  • Declarative Architectures in Infrastructure as a Service (IaaS)

    - by BuckWoody
    I deal with computing architectures by first laying out requirements, and then laying in any constraints for its success. Only then do I bring in computing elements to apply to the system. As an example, a requirement might be "world-wide availability" and a constraint might be "with less than 80ms response time and full HA" or something similar. Then I can choose from the best fit of technologies which range from full-up on-premises computing to IaaS, PaaS or SaaS. I also deal in abstraction layers - on-premises systems are fully under your control, in IaaS the hardware is abstracted (but not the OS, scale, runtimes and so on), in PaaS the hardware and the OS are abstracted and you focus on code and data only, and in SaaS everything is abstracted - you merely purchase the function you want (like an e-mail server or some such) and simply use it. When you think about solutions this way, the architecture becomes the primary factor in your decision. It's problem-first architecting, and then laying in whatever technology or vendor best fixes the problem. To that end, most architects design a solution using a graphical tool (I use Visio) and then create documents that let the rest of the team (and business) know what is required. It's the template, or recipe, for the solution. This is extremely easy to do for SaaS - you merely point out what the needs are, research the vendor and present the findings (and bill) to the business. IT might not even be involved there. In PaaS it's not much more complicated - you use the same Application Lifecycle Management and design tools you always have for code, such as Visual Studio or some other process and toolset, and you can "stamp out" the application in multiple locations, update it and so on. IaaS is another story. Here you have multiple machines, operating systems, patches, virus scanning, run-times, scale-patterns and tools and much more that you have to deal with, since essentially it's just an in-house system being hosted by someone else. You can certainly automate builds of servers - we do this as technical professionals every day. From Windows to Linux, it's simple enough to create a "build script" that makes a system just like the one we made yesterday. What is more problematic is being able to tie those systems together in a coherent way (as a solution) and then stamp that out repeatedly, especially when you might want to deploy that solution on-premises, or in one cloud vendor or another. Lately I've been working with a company called RightScale that does exactly this. I'll point you to their site for more info, but the general idea is that you document out your intent for a set of servers, and it will deploy them to on-premises clouds, Windows Azure, and other cloud providers all from the same script. In other words, it doesn't contain the images or anything like that - it contains the scripts to build them on-premises or on a cloud vendor like Microsoft. Using a tool like this, you combine the steps of designing a system (all the way down to passwords and accounts if you wish) and then the document drives the distribution and implementation of that intent. As time goes on and more and more companies implement solutions on various providers (perhaps for HA and DR) then this becomes a compelling investigation. The RightScale information is here, if you want to investigate it further. Yes, there are other methods I've found, but most are tied to a single kind of cloud, and I'm not into vendor lock-in.
Poppa Bear Level - Hands-on: Evaluate RightScale at no cost. Just bring your Windows Azure credentials and follow these tutorials: Sign Up for Windows Azure; Add Windows Azure to a RightScale Account; Windows Azure Virtual Machines 3-tier Deployment. Momma Bear Level - Just the right level... ;0): Windows Azure Evaluation Guide - if you are new to Windows Azure Virtual Machines and new to RightScale, we recommend that you read the entire evaluation guide to gain a more complete understanding of the Windows Azure + RightScale solution; Windows Azure Support Page @ support.rightscale.com - FAQs, tutorials, etc. for Windows Azure Virtual Machines (work in progress). Baby Bear Level - Marketing: Windows Azure Page @ www.rightscale.com - find overview information including solution briefs and presentation & demonstration videos; Scale and Automate Applications on Windows Azure Solution Brief - how RightScale makes Windows Azure Virtual Machines even better; SQL Server on Windows Azure Solution Brief - Run Highly Available SQL Server on Windows Azure Virtual Machines.

    Read the article

  • Nearly technical books that you enjoyed reading

    - by pablo
    I've seen questions about "What books do you recommend" in several of the Stack Exchange verticals. Perhaps these two questions (a and b) are the most popular. But I'd like to ask for recommendations of a different kind of book. I have read "The Passionate Programmer" and I am now reading "Coders at Work". Both of them, I would argue, are almost biographies (or a collection of biographies, in the case of "Coders at Work"), or even a bit of a "self-help" book (that is more the case with "The Passionate Programmer"). And please don't get me wrong: I loved reading the first one, and I am loving reading the second one. There's a lot of value in them, mostly in a "lessons of the trade" kind of way. So here is what I'd like to know: what other books have you read that are similar to these ones in intent, and that you enjoyed? Why?

    Read the article

  • Fixing LINQ Error: Sequence contains no elements

    - by ChrisD
    I’ve read some posts regarding this error when using the First() or Single() command. They suggest using FirstOrDefault() or SingleOrDefault() instead. But I recently encountered it when using a Max() command in conjunction with a Where():

        var effectiveFloor = policies.Where(p => p.PricingStrategy == PricingStrategy.EstablishFloor).Max(p => p.Amount);

    When the Where() function eliminated all the items in the policies collection, the Max() command threw the “Sequence contains no elements” exception. Inserting the DefaultIfEmpty() command between the Where() and Max() prevents this error:

        var effectiveFloor = policies.Where(p => p.PricingStrategy == PricingStrategy.EstablishFloor).DefaultIfEmpty().Max(p => p.Amount);

    but it now throws a NullReferenceException, because DefaultIfEmpty() supplies a single null element when the filtered sequence is empty. The Fix: Using a combination of DefaultIfEmpty() and a null check inside the Max() command solves this problem entirely:

        var effectiveFloor = policies.Where(p => p.PricingStrategy == PricingStrategy.EstablishFloor).DefaultIfEmpty().Max(p => p == null ? 0 : p.Amount);

    Read the article

  • Enterprise Java and Hibernate [closed]

    - by KyelJmD
    I am now done learning Java SE. Now I want to jump out of my comfort zone and learn how to program/create web applications using Java. First, my question: what is the difference between Enterprise Java and Hibernate? Are they alike, or is Hibernate a framework for Java? Now that I am done learning "Core" Java I need to move ahead. What are the things I need to learn to use Hibernate and to keep learning Java? As of now I have no idea where I should go after learning "core Java", and I don't know what path to take.
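
    For orientation, here is a minimal, hypothetical sketch (not from the original question) of what Hibernate adds on top of core Java: a plain class mapped to a database table with JPA annotations, persisted through a SessionFactory. The Book entity and the hibernate.cfg.xml file it assumes are invented for illustration, and the API shown is the Hibernate 5 style.

        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.cfg.Configuration;

        // A plain Java class mapped to a database table through JPA annotations.
        @Entity
        class Book {
            @Id
            @GeneratedValue
            private Long id;
            private String title;

            protected Book() {}                        // Hibernate requires a no-arg constructor
            Book(String title) { this.title = title; }
        }

        public class HibernateSketch {
            public static void main(String[] args) {
                // Connection settings come from a hibernate.cfg.xml assumed to be on the classpath.
                SessionFactory factory = new Configuration()
                        .configure()
                        .addAnnotatedClass(Book.class)
                        .buildSessionFactory();
                Session session = factory.openSession();
                try {
                    session.beginTransaction();
                    session.save(new Book("Core Java"));   // Hibernate generates the INSERT
                    session.getTransaction().commit();
                } finally {
                    session.close();
                    factory.close();
                }
            }
        }

    Java EE (Enterprise Java) is a broad set of specifications for building server-side applications, while Hibernate is a single ORM framework (and a JPA implementation) of the kind sketched above.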

    Read the article

  • Installing netcdf c++ interface on ubuntu 12.04.1 LTS

    - by iluvatar
    I am using code which employs the modern netcdf C++ interface (netcdf namespace, an include file called just netcdf without .h or similar, the ncFile class, etc.) and have just switched to Ubuntu 12.04.1 LTS. I installed netcdf and libnetcdf6 with apt-get, but I still get the "old" headers in /usr/local/include (netcdf.h, netcdfcpp.h, etc.). In Ubuntu, the library version for netcdf is 4.1.1, while on my own computer with Mac OS X (where I have the right netcdf include file) the version is 4.2.1.1. I cannot modify the source code I am using. I would like to know if there is a way to upgrade the netcdf library on Ubuntu to support the modern C++ interface, or, if I have to compile it manually, whether you think that using src2pkg is a good idea. This is my first experience with Ubuntu. Thanks in advance.

    Read the article

  • Financial Management: Why Move to the Cloud?

    - by Kathryn Perry
    A guest post by Terrance Wampler, Vice President, Financials Product Strategy, Oracle I’ve spent my career designing and developing financial management systems, most of it at Oracle. Every single day I either meet with our customers or talk to them on the phone. The time is usually spent discussing various business challenges facing CFOs and Controllers, who are running Oracle’s Financials. Lately, we’ve been talking a lot about cloud computing and whether it makes sense for finance to go to the cloud. Here are some pros and cons that might help you make that decision. Let’s start with the benefits of cloud solutions. The first is savings. With cloud services, you pay only for those commodities that you use. That makes you feel like you're getting better value for your money. Plus, you can preserve your cash for your core business and you can get a better matching of expenses and revenues. So, at the top of the list is lower total cost of ownership. The second point has to do with optimization. With cloud services, you’ll need less IT infrastructure so you can optimize your IT resources for better-value, higher-end projects. This also leads to greater financial visibility, where there's a clear cost for the set of services or features replaced by cloud services. And, the last benefit is what I call acceleration. You can save money by speeding up the initialization and deployment of the project. You don't have to deal with IT infrastructure and you can start implementing right away. We did a quick survey of about 70 CFOs at the CFO Summit last month in New York City. We asked them why they were looking at cloud services, and not necessarily just for financials. The No. 1 response was perceived lower cost of ownership. But of course there are risks to consider. The first thing most people think about in the cloud is security and ownership of data. So, will your data really be safe? Can you meet your own privacy policy requirements? Do you really want your private financial data exposed? Do you trust the provider? Is what you see really your data? Do you own it or is it managed by someone else? Security is a big concern that comes with an emotional component. The next thing in the risk category is reliability. Is the provider proven? You’re taking what you have control over – for example, standards and policies and internal service level agreements – away from your IT department and giving it to someone else. Will you still be able to adapt to shifts in your business? Will the provider be able to grow with your business effectively? Reliability means having a provider that can give you the service infrastructure that you need. And then there’s performance, which has two components in terms of risk. Going forward, will the provider be able to scale the infrastructure or service level if you have new employees or new businesses? And second, will the price you negotiate and the rate you lock in cover additional costs and rising service fees? Another piece is cost. What happens if you don't get the service level you want? What if you end the service? What happens, if after a few years, you send the service out for bid and change service? Can you move your data? Can you move the applications? Do the integrations work? These are cost components people don’t always take into account. And, the final piece is the business case. The perception is that you can get started really quickly with cloud. It has a perceived lower cost of total ownership and it feels cool because it's cloud. 
But do you have a good business case for moving to the cloud? Your total cost of ownership is over three years; then you’ll renew it, so your TCO is six years. Have you compared that to other internal services that you’re offering? You might already have a product that you can run this new business or division on. In that same survey at the CFO Summit, the execs thought the biggest perceived risks were security of data, the ability to move data back, and the ability to create a business case to actually justify the risks. So that’s the list of pros and cons. Not to leave you hanging, I will do another post on how to balance these pros and cons and make the right decision for your business.

    Read the article

  • Java EE and GlassFish Server Roadmap Update

    - by John Clingan
    2013 has been a stellar year for both the Java EE and GlassFish Server communities. On June 12, Oracle and its partners announced the release of Java EE 7, which delivers on three major themes – HTML5, developer productivity, and meeting enterprise demands. The online event attracted over 10,000 views in the first two days! During the online event, Oracle also announced the availability of GlassFish Server Open Source Edition 4, the world's first Java EE 7 compatible application server. The primary role of GlassFish Server Open Source Edition has been, and continues to be, driving adoption of the latest release of the Java Platform, Enterprise Edition. Oracle also announced the Java EE 7 SDK, which bundles GlassFish Server Open Source Edition 4, as a Java EE 7 learning aid. Last, Oracle publicly announced the Java EE 7 reference implementation based on GlassFish Server Open Source Edition 4. Java EE is a popular platform, as evidenced by the 20+ Java EE 6 compatible implementations available to choose from. After the launch of Java EE 7 and GlassFish Server Open Source Edition 4, we began planning the Java EE 8 roadmap, which was covered during the JavaOne Strategy Keynote. To summarize, there is a lot of interest in improving HTML5 support, Cloud, and investigating NoSQL support. We received a lot of great feedback from the community and customers on what they would like to see in Java EE 8. As we approached JavaOne 2013, we started planning the GlassFish Server roadmap. What we announced at JavaOne was that GlassFish Server Open Source Edition 4.1 is scheduled for 2014. Here is an update to that roadmap: GlassFish Server Open Source Edition 4.1 is scheduled for 2014; we are planning updates as needed to GlassFish Server Open Source Edition, which is commercially unsupported; as we head towards Java EE 8, the trunk will eventually transition to GlassFish Server Open Source Edition 5 as a Java EE 8 implementation, and the Java EE 8 Reference Implementation will be derived from GlassFish Server Open Source Edition 5 (this replicates what has been done in past Java EE and GlassFish Server releases); Oracle will no longer release future major releases of Oracle GlassFish Server with commercial support – specifically, Oracle GlassFish Server 4.x with commercial Java EE 7 support will not be released, commercial Java EE 7 support will be provided from WebLogic Server, and Oracle GlassFish Server will not be releasing a 4.x commercial version. Expanding on that last point, new and existing Oracle GlassFish Server 2.1.x and 3.1.x commercial customers will continue to be supported according to the Oracle Lifetime Support Policy. Oracle recommends that existing commercial Oracle GlassFish Server customers begin planning to move to Oracle WebLogic Server, which is a natural technical and license migration path forward: applications developed to Java EE standards can be deployed to both GlassFish Server and Oracle WebLogic Server; GlassFish Server and Oracle WebLogic Server have implementation-specific deployment descriptor interoperability (here and here); and GlassFish Server 3.x and Oracle WebLogic Server share quite a bit of code, so there are quite a few configuration and (extended) feature similarities. Shared code includes JPA, JAX-RS, WebSockets (pre JSR 356 in both cases), CDI, Bean Validation, JAX-WS, JAXB, and WS-AT.
Both Oracle GlassFish Server 3.x and Oracle WebLogic Server 12c support Oracle Access Manager, Oracle Coherence, Oracle Directory Server, Oracle Virtual Directory, Oracle Database, Oracle Enterprise Manager and are entitled to support for the underlying Oracle JDK. To summarize, Oracle is committed to the future of Java EE.  Java EE 7 has been released and planning for Java EE 8 has begun. GlassFish Server Open Source Edition continues to be the strategic foundation for the Java EE reference implementation going forward. And for developers, updates will be delivered as needed to continue to deliver a great developer experience for GlassFish Server Open Source Edition. We are planning for GlassFish Server Open Source Edition 5 as the foundation for the Java EE 8 reference implementation, as well as bundling GlassFish Server Open Source Edition 5 in a Java EE 8 SDK, which is the most popular distribution of GlassFish. This will allow GlassFish releases to be more focused on the Java EE platform and community-driven requirements. We continue to encourage community contributions, bug reports, participation on the GlassFish forum, etc. Going forward, Oracle WebLogic Server will be the single strategic commercially supported application server from Oracle. Disclaimer: The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.

    Read the article

  • Is it a good idea to create shared UI library that would render natively on different platforms?

    - by Maciej Donajski
    I am designing an application that has the following flow: (1) a user designs a form using a web application (J2EE backend application); (2) the form is sent to a mobile device (Android); (3) the mobile device user fills out the form designed in step 1; (4) results are synced with the backend. One of my ideas is to create a common Java UI library for creating the type of forms that I need. This library would also have native renderers for different platforms (web and Android would be implemented first). The whole point of it is to have a native experience on both the web and Android side. Are there any existing solutions that meet the requirements I have? Is this a good approach to achieve them?
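
    One way to sketch the "shared model, native renderers" idea in Java (this is an illustrative assumption, not something from the original post) is to keep the form definition platform-neutral and hide each platform behind a renderer interface. All of the names below (FormDefinition, FormRenderer, and the two renderer classes) are hypothetical, and the Android renderer is only a stub so the example runs on a plain JVM.

        import java.util.ArrayList;
        import java.util.List;

        // Platform-neutral form model shared by the backend and the Android client.
        class FormDefinition {
            final String title;
            final List<String> fieldLabels = new ArrayList<>();
            FormDefinition(String title) { this.title = title; }
            FormDefinition addField(String label) { fieldLabels.add(label); return this; }
        }

        // Each platform implements this interface with its own native widgets.
        interface FormRenderer {
            void render(FormDefinition form);
        }

        // Web-side renderer: emits HTML to be served by the J2EE backend.
        class HtmlFormRenderer implements FormRenderer {
            @Override
            public void render(FormDefinition form) {
                StringBuilder html = new StringBuilder("<form><h1>" + form.title + "</h1>");
                for (String label : form.fieldLabels) {
                    html.append("<label>").append(label).append("<input type=\"text\"/></label>");
                }
                html.append("</form>");
                System.out.println(html);
            }
        }

        // The Android renderer would build native Views instead; shown here as a stub.
        class AndroidFormRendererStub implements FormRenderer {
            @Override
            public void render(FormDefinition form) {
                System.out.println("Would inflate a native layout titled '" + form.title
                        + "' with " + form.fieldLabels.size() + " EditText fields");
            }
        }

        public class RendererDemo {
            public static void main(String[] args) {
                FormDefinition form = new FormDefinition("Site Survey").addField("Inspector").addField("Date");
                new HtmlFormRenderer().render(form);        // same model, web output
                new AndroidFormRendererStub().render(form); // same model, native output
            }
        }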

    Read the article

  • Organization & Architecture UNISA Studies – Chap 5

    - by MarkPearl
    Learning Outcomes: describe the operation of a memory cell; explain the difference between DRAM and SRAM; discuss the different types of ROM; explain the concepts of a hard failure and a soft error respectively; describe SDRAM organization.
    Semiconductor Main Memory: The two traditional forms of RAM used in computers are DRAM and SRAM; RAM is divided into two technologies, dynamic and static.
    DRAM (Dynamic RAM): Dynamic RAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural tendency to discharge, dynamic RAM requires periodic charge refreshing to maintain data storage. The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied. Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analogue device. The capacitor can store any charge value within a range; a threshold value determines whether the charge is interpreted as a 1 or 0.
    SRAM (Static RAM): SRAM is a digital device that uses the same logic elements used in the processor. In SRAM, binary values are stored using traditional flip-flop logic configurations. SRAM will hold its data as long as power is supplied to it. Unlike DRAM, no refresh is required to retain data.
    SRAM vs. DRAM: DRAM is simpler and smaller than SRAM, thus it is more dense and less expensive than SRAM. The cost of the refreshing circuitry for DRAM needs to be considered, but if the machine requires a large amount of memory, DRAM turns out to be cheaper than SRAM. SRAMs are somewhat faster than DRAM, thus SRAM is generally used for cache memory and DRAM is used for main memory.
    Types of ROM: Read Only Memory (ROM) contains a permanent pattern of data that cannot be changed. ROM is non-volatile, meaning no power source is required to maintain the bit values in memory. While it is possible to read a ROM, it is not possible to write new data into it. An important application of ROM is microprogramming; other applications include library subroutines for frequently wanted functions, system programs, and function tables. A ROM is created like any other integrated circuit chip, with the data actually wired into the chip as part of the fabrication process. To reduce costs of fabrication, we have PROMs, which are written only once, non-volatile, and written after fabrication. Another variation of ROM is the read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations, but for which non-volatile storage is required. There are three common forms of read-mostly memory, namely EPROM, EEPROM, and flash memory.
    Error Correction: Semiconductor memory is subject to errors, which can be classed into two categories: a hard failure is a permanent physical defect so that the memory cell or cells cannot reliably store data; a soft failure is a random error that alters the contents of one or more memory cells without damaging the memory (common causes include power supply issues, etc.). Most modern main memory systems include logic for both detecting and correcting errors.
    Error detection works as follows: when data is to be written into memory, a calculation is performed on the data to produce a code, and both the code and the data are stored. When the previously stored word is read out, the code is used to detect and possibly correct errors. The error checking provides one of 3 possible results: no errors are detected and the fetched data bits are sent out; an error is detected and it is possible to correct it, in which case the data bits plus error correction bits are fed into a corrector, which produces a corrected set of bits to be sent out; or an error is detected but it is not possible to correct it, and this condition is reported.
    Hamming Code: See wiki for a detailed explanation. We will probably need to know how to do a Hamming code – refer to the textbook (pg. 188 – 189).
    Advanced DRAM Organization: One of the most critical system bottlenecks when using high-performance processors is the interface to main memory. This interface is the most important pathway in the entire computer system. The basic building block of main memory remains the DRAM chip. In recent years a number of enhancements to the basic DRAM architecture have been explored, and some of these are now on the market, including SDRAM (Synchronous DRAM), DDR-DRAM, and RDRAM.
    SDRAM (Synchronous DRAM): SDRAM exchanges data with the processor synchronized to an external clock signal and running at the full speed of the processor/memory bus without imposing wait states. SDRAM employs a burst mode to eliminate the address setup time and row and column line precharge time after the first access; in burst mode a series of data bits can be clocked out rapidly after the first bit has been accessed. SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism. SDRAM performs best when it is transferring large blocks of data serially. There is now an enhanced version of SDRAM known as double data rate SDRAM or DDR-SDRAM that overcomes the once-per-cycle limitation of SDRAM.
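
    The encode-on-write, check-on-read flow described under Error Correction can be made concrete with a small example. The following is a minimal Hamming(7,4) sketch in Java (an illustration added here, not taken from the chapter or the textbook): three parity bits protect four data bits, and the syndrome recomputed on read points at the single flipped bit, if any.

        public class Hamming74 {
            // Encode 4 data bits (d[0..3]) into a 7-bit codeword, positions 1..7.
            // Positions 1, 2 and 4 hold parity; positions 3, 5, 6, 7 hold data.
            static int[] encode(int[] d) {
                int[] c = new int[8];                    // index 0 unused
                c[3] = d[0]; c[5] = d[1]; c[6] = d[2]; c[7] = d[3];
                c[1] = c[3] ^ c[5] ^ c[7];               // covers positions 1,3,5,7
                c[2] = c[3] ^ c[6] ^ c[7];               // covers positions 2,3,6,7
                c[4] = c[5] ^ c[6] ^ c[7];               // covers positions 4,5,6,7
                return c;
            }

            // Recompute the parity checks on read; the syndrome is 0 if no error,
            // otherwise it is the position of the single flipped bit.
            static int syndrome(int[] c) {
                int s1 = c[1] ^ c[3] ^ c[5] ^ c[7];
                int s2 = c[2] ^ c[3] ^ c[6] ^ c[7];
                int s4 = c[4] ^ c[5] ^ c[6] ^ c[7];
                return s1 + 2 * s2 + 4 * s4;
            }

            public static void main(String[] args) {
                int[] stored = encode(new int[] {1, 0, 1, 1});
                stored[6] ^= 1;                          // simulate a soft error in bit 6
                int pos = syndrome(stored);
                System.out.println("Syndrome points at position " + pos);            // prints 6
                if (pos != 0) stored[pos] ^= 1;          // correct the flipped bit
                System.out.println("After correction, syndrome = " + syndrome(stored)); // prints 0
            }
        }

    Real memory uses wider words and codes, but the principle of recomputing a check code and using the syndrome is the same.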

    Read the article

  • What is needed for a networked home printer?

    - by Jay
    I've seen this question: Home network printer recommendations but I think I'm asking something more basic. I'm not really familiar with networked printers or how they work or what they do really. What I'd like is to have a printer that is accessible to anyone connected to my home network, without having to plug into the printer itself. An alternative setup might be to have a printer that is always hooked up to one computer, like a desktop, that is almost always on and allows other computers connected on the network to print to it as well. I believe the first option is called a networked printer and the second is printer sharing. But again I'm new to this so I don't really know the details or if I'm using the correct terminology. I was wondering if someone might be able to shed some light on this and let me know what is needed for either of these setups. Thanks.

    Read the article

  • Can WinRT really be used at just the boundaries?

    - by Bret Kuhns
    Microsoft (chiefly, Herb Sutter) recommends when using WinRT with C++/CX to keep WinRT at the boundaries of the application and keep the core of the application written in standard ISO C++. I've been writing an application which I would like to leave portable, so my core functionality was written in standard C++, and I am now attempting to write a Metro-style front end for it using C++/CX. I've had a bit of a problem with this approach, however. For example, if I want to push a vector of user-defined C++ types to a XAML ListView control, I have to wrap my user-defined type in a WinRT ref/value type for it to be stored in a Vector^. With this approach, I'm inevitably left with wrapping a large portion of my C++ classes with WinRT classes. This is the first time I've tried to write a portable native application in C++. Is it really practical to keep WinRT along the boundaries like this? How else could this type of portable core with a platform-specific boundary be handled?

    Read the article

  • The best Windows 7 virtual desktop tool by far… Dexpot

    - by Eric Nelson
    [Oh – and Windows XP, Vista, etc.] Every so often I yearn for the virtual desktop functionality that is implemented so well under Linux. Unfortunately, every time I start looking for a great tool for Windows I ultimately end up disappointed. But… I think this time around I have actually found one that will outlast the first day or two and become a must-have. Check out http://www.dexpot.de/. So far this is 100% stable, 100% sensible and offers awesome functionality, yet is still very simple to use. There is a detailed look at the many features on the site, but a couple that do it for me: the Desktop Manager and next/previous tray icons make it easy to navigate around; a desktop is announced as it takes focus; and, best of all, Windows 7 preview integration. And… it is FREE for private use, and you get 30 days to try it out for professional use (e.g. me).

    Read the article

  • How Easy is it to Code In-Built Videos?

    - by Alan Parker
    First time poster so please don't bite my head off. Basically, I'm having a site built for me and I don't really know anything about coding but I'm not too sure if I trust my web developer. I asked him recently about adding a feature where I could display built-in videos like the following page - http://www.ejot.co.uk/buildingfasteners.odl and he quoted me quite a high amount for it. I just wanted to double check with you guys whether this is a difficult feature to add in and whether it justifies a reasonable amount of money on top of what I'm already paying him. Thanks in advance for your help, Alan

    Read the article

  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them. As for specs: Any record generated must be able to be connected to any other record in any other user table (excluding itself...the record, not the table). These "connections" are directional, and the list of connections a record has is user ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others. The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended. A record's field can also include aggregate information from it's connections (like obtaining average, sum, etc) that must be updated on change from another record it's connected to. To conserve memory, only relevant information must be loaded at any one time (can't load the entire database in memory at load and go from there). I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote db. Neither the user tables, connections or records are known at design time as they are user generated. I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt on this, I had one object managing all a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became....onerous (ie...a huge spaghettified mess). Tracing dependencies using this method almost became impossible. Instead, I've settled on a distributed graph model where each record and connection is 'aware' of what's around it by managing it own data and connections to other records. Doing this increases my memory footprint but also let me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: trace dependencies, eliminate cycling recursive updates, etc. My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas so I wanted to ask and see if anybody else has any ideas of how this should be structured. Connections are fairly simple. They contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far: Store all the connections in one big table. If I do this, either I load all connections at once (one big db call) or make a call every time a user table is loaded. The big issue here: the size of the connections table has the potential to be huge, and I'm afraid it would slow things down. Store in separate tables all the outgoing connections for each user table. This is probably the worst idea I've had. Now my connections are 'spread out' over multiple tables (one for each user table), which means I have to make a separate DB called to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it. Store in separate tables all outgoing AND incoming connections for each user table (using a flag to distinguish between incoming vs outgoing). 
This is the idea I'm leaning towards, but it will essentially double the total DB storage for all the connections (as each connection will be stored in two tables). It also means I have to make sure connection information is kept in sync in both places. This is obviously not ideal but it does mean that when I load a user table, I only need to load one 'connection' table and have all the information I need. This also presents a separate problem, that of connection object creation. Since each user table has a list of all connections, there are two opportunities for a connection object to be made. However, connections objects (designed to facilitate communication between records) should only be created once. This means I'll have to devise a common caching/factory object to make sure only one connection object is made per connection. Does anybody have any ideas of a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
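
    The "common caching/factory object" mentioned at the end can be sketched briefly in Java. This is only an illustrative assumption, not the author's design: a canonical key identifies each directional connection, and a ConcurrentHashMap-backed factory guarantees that both record endpoints receive the same Connection instance. (A Java 16+ record is used for the key purely for brevity; all names here are hypothetical.)

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Identifies one directional connection between two records.
        record ConnectionKey(long fromTableId, long fromRecordId, long toTableId, long toRecordId) {}

        class Connection {
            final ConnectionKey key;
            int fromOrder, toOrder;                  // user-defined ordering on each side
            Connection(ConnectionKey key) { this.key = key; }
        }

        // Shared factory: no matter which of the two record endpoints asks first,
        // both ends get the same Connection instance for a given key.
        class ConnectionFactory {
            private final Map<ConnectionKey, Connection> cache = new ConcurrentHashMap<>();

            Connection get(ConnectionKey key) {
                return cache.computeIfAbsent(key, Connection::new);
            }

            // Drop a connection from the cache once neither endpoint is loaded,
            // so the faulting scheme keeps the memory footprint small.
            void evict(ConnectionKey key) {
                cache.remove(key);
            }
        }

        public class FactoryDemo {
            public static void main(String[] args) {
                ConnectionFactory factory = new ConnectionFactory();
                ConnectionKey key = new ConnectionKey(1, 42, 2, 7);
                Connection seenFromSource = factory.get(key);
                Connection seenFromTarget = factory.get(key);
                System.out.println(seenFromSource == seenFromTarget);   // true: one object per connection
            }
        }

    The same factory can sit in front of the faulting layer, so a connection row is only loaded from the backing store the first time its key is requested.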

    Read the article

  • Presentations about SARGability tomorrow

    - by Rob Farley
    Tomorrow, via LiveMeeting, I’m giving two presentations about SARGability. That’s April 27th, for anyone reading this later. The first will be at the Adelaide SQL Server User Group. If you’re in Adelaide, you should definitely come along. Both meetings are being broadcast to the PASS AppDev Virtual Chapter. The second will be in the Adelaide evening at 9:30pm, and will just be me and my computer. The audience will be on the other end of a phone line, which is very different for me. I’m very much...(read more)

    Read the article

  • Dell whitepaper on PowerEdge R810 R910 and M910 Memory Architecture

    - by jchang
    The Dell PowerEdge 11th Generation Servers: R810, R910 and M910 Memory Guidance whitepaper seems to have caused some confusion. I believe the source is an error in the paper. In the section on FlexMem Bridge Technology, the Dell whitepaper says this applies to the R810 and the M910. The Dell M910 is a 4-way blade server for the Xeon 7500 series processor line. First, a brief recap: the R810 is a 2-way server, by which I mean it has two sockets regardless of the number of cores on each processor....(read more)

    Read the article
