Search Results

Search found 44910 results on 1797 pages for 'breadth first traversal'.


  • Organization &amp; Architecture UNISA Studies &ndash; Chap 5

    - by MarkPearl
    Learning Outcomes: describe the operation of a memory cell; explain the difference between DRAM and SRAM; discuss the different types of ROM; explain the concepts of a hard failure and a soft error; describe SDRAM organization.

    Semiconductor Main Memory
    The two traditional forms of RAM used in computers are DRAM and SRAM.

    DRAM (Dynamic RAM)
    Dynamic RAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural tendency to discharge, dynamic RAM requires periodic charge refreshing to maintain data storage. The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied. Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analogue device: the capacitor can store any charge value within a range, and a threshold value determines whether the charge is interpreted as a 1 or a 0.

    SRAM (Static RAM)
    SRAM is a digital device that uses the same logic elements used in the processor. In SRAM, binary values are stored using traditional flip-flop logic configurations. SRAM holds its data as long as power is supplied to it; unlike DRAM, no refresh is required to retain data.

    SRAM vs. DRAM
    A DRAM cell is simpler and smaller than an SRAM cell, so DRAM is denser and less expensive than SRAM. The cost of the refreshing circuitry for DRAM needs to be considered, but if the machine requires a large amount of memory, DRAM turns out to be cheaper than SRAM. SRAMs are somewhat faster than DRAM, so SRAM is generally used for cache memory and DRAM for main memory.

    Types of ROM
    Read Only Memory (ROM) contains a permanent pattern of data that cannot be changed. ROM is non-volatile, meaning no power source is required to maintain the bit values in memory. While it is possible to read a ROM, it is not possible to write new data into it. An important application of ROM is microprogramming; other applications include library subroutines for frequently wanted functions, system programs, and function tables. A ROM is created like any other integrated circuit chip, with the data actually wired into the chip as part of the fabrication process. To reduce fabrication costs there are PROMs, which are non-volatile, written only once, and written after fabrication. Another variation of ROM is the read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations, but for which non-volatile storage is required. There are three common forms of read-mostly memory: EPROM, EEPROM, and flash memory.

    Error Correction
    Semiconductor memory is subject to errors, which can be classed into two categories: a hard failure is a permanent physical defect such that the affected memory cell or cells cannot reliably store data; a soft error is a random event that alters the contents of one or more memory cells without damaging the memory (common causes include power supply issues). Most modern main memory systems include logic for both detecting and correcting errors.
    Error detection works as follows: when data is to be written into memory, a calculation is performed on the data to produce a code, and both the code and the data are stored. When the previously stored word is read out, the code is used to detect and possibly correct errors. The check provides one of three possible results: no error is detected, and the fetched data bits are sent out; an error is detected and can be corrected, in which case the data bits plus error-correction bits are fed into a corrector, which produces a corrected set of bits to be sent out; or an error is detected but cannot be corrected, and this condition is reported.

    Hamming Code
    See the wiki for a detailed explanation. We will probably need to know how to compute a Hamming code – refer to the textbook (pg. 188–189).

    Advanced DRAM Organization
    One of the most critical system bottlenecks when using high-performance processors is the interface to main memory; this interface is the most important pathway in the entire computer system. The basic building block of main memory remains the DRAM chip. In recent years a number of enhancements to the basic DRAM architecture have been explored, and some are now on the market, including SDRAM (synchronous DRAM), DDR-SDRAM, and RDRAM.

    SDRAM (Synchronous DRAM)
    SDRAM exchanges data with the processor synchronized to an external clock signal, running at the full speed of the processor/memory bus without imposing wait states. SDRAM employs a burst mode to eliminate the address setup time and row and column line precharge time after the first access: in burst mode a series of data bits can be clocked out rapidly after the first bit has been accessed. SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism, and it performs best when transferring large blocks of data serially. There is now an enhanced version of SDRAM known as double data rate SDRAM (DDR-SDRAM) that overcomes the once-per-cycle limitation of SDRAM.
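    A minimal sketch of the Hamming idea (illustrative Python, not taken from the textbook pages cited): Hamming(7,4) places parity bits at positions 1, 2 and 4, and the recomputed parity groups form a syndrome equal to the position of a single-bit error.

        # Hamming(7,4): 4 data bits -> 7 code bits; positions are 1-indexed.
        def hamming74_encode(d):
            d1, d2, d3, d4 = d
            p1 = d1 ^ d2 ^ d4              # covers positions 1, 3, 5, 7
            p2 = d1 ^ d3 ^ d4              # covers positions 2, 3, 6, 7
            p3 = d2 ^ d3 ^ d4              # covers positions 4, 5, 6, 7
            return [p1, p2, d1, p3, d2, d3, d4]

        def hamming74_syndrome(c):
            # Re-derive each parity group; a non-zero sum is the error position.
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
            return s1 + 2 * s2 + 4 * s3

        code = hamming74_encode([1, 0, 1, 1])
        code[5] ^= 1                        # simulate a soft error in bit 6
        pos = hamming74_syndrome(code)      # -> 6: error detected and located
        code[pos - 1] ^= 1                  # corrected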

    Read the article

  • How Easy is it to Code In-Built Videos?

    - by Alan Parker
    First time poster, so please don't bite my head off. Basically, I'm having a site built for me; I don't really know anything about coding, and I'm not sure I trust my web developer. I recently asked him about adding a feature where I could display built-in videos like the following page - http://www.ejot.co.uk/buildingfasteners.odl - and he quoted me quite a high amount for it. I just wanted to double-check with you guys whether this is a difficult feature to add and whether it justifies a significant amount of money on top of what I'm already paying him. Thanks in advance for your help, Alan

    Read the article

  • The best Windows 7 virtual desktop tool by far… Dexpot

    - by Eric Nelson
    [Oh – and Windows XP, Vista etc] Every so often I yearn for the virtual desktop functionality that is implemented so well under Linux. Unfortunately, every time I start looking for a great tool for Windows I ultimately end up disappointed. But… I think this time around I have actually found one that will outlast the first day or two and become a must-have. Check out http://www.dexpot.de/ So far this is 100% stable, 100% sensible and offers awesome functionality, yet is still very simple to use. There is a detailed look at the many features on the site, but a couple that do it for me: the Desktop Manager and next/previous tray icons make it easy to navigate around; an announcement of the desktop name as a desktop takes focus; and best of all, Windows 7 preview integration. And… it is FREE for private use, and you get 30 days to try it out for professional use (e.g. me).

    Read the article

  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them. As for specs:

    Any record generated must be able to be connected to any other record in any other user table (excluding itself - the record, not the table). These "connections" are directional, and the list of connections a record has is user-ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others. The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended. A record's field can also include aggregate information from its connections (like average, sum, etc.) that must be updated when another record it's connected to changes. To conserve memory, only relevant information must be loaded at any one time (I can't load the entire database into memory at load and go from there). I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote db. Neither the user tables, connections nor records are known at design time, as they are user generated.

    I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt, I had one object managing all a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became... onerous (i.e. a huge spaghettified mess). Tracing dependencies using this method almost became impossible. Instead, I've settled on a distributed graph model where each record and connection is 'aware' of what's around it by managing its own data and connections to other records. Doing this increases my memory footprint but also let me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: trace dependencies, eliminate cyclic recursive updates, etc.

    My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas, so I wanted to ask and see if anybody else has any ideas of how this should be structured. Connections are fairly simple. They contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far:

    1. Store all the connections in one big table (see the sketch at the end of this post). If I do this, either I load all connections at once (one big db call) or make a call every time a user table is loaded. The big issue here: the size of the connections table has the potential to be huge, and I'm afraid it would slow things down.

    2. Store in separate tables all the outgoing connections for each user table. This is probably the worst idea I've had. Now my connections are 'spread out' over multiple tables (one for each user table), which means I have to make a separate DB call to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it.

    3. Store in separate tables all outgoing AND incoming connections for each user table (using a flag to distinguish between incoming vs outgoing). This is the idea I'm leaning towards, but it will essentially double the total DB storage for all the connections (as each connection will be stored in two tables). It also means I have to make sure connection information is kept in sync in both places. This is obviously not ideal, but it does mean that when I load a user table, I only need to load one 'connection' table and have all the information I need.

    This also presents a separate problem, that of connection object creation. Since each user table has a list of all connections, there are two opportunities for a connection object to be made. However, connection objects (designed to facilitate communication between records) should only be created once. This means I'll have to devise a common caching/factory object to make sure only one connection object is made per connection.

    Does anybody have any ideas of a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
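    Below is a minimal sketch of the "one big table" option (illustrative Python with SQLite; the table and column names are made up). The point is that with one index per direction, loading every connection that touches a given user table is two cheap range scans rather than a full-table load:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE connection (
            from_table_id  INTEGER NOT NULL,
            from_record_id INTEGER NOT NULL,
            from_order     INTEGER NOT NULL,
            to_table_id    INTEGER NOT NULL,
            to_record_id   INTEGER NOT NULL,
            to_order       INTEGER NOT NULL
        );
        CREATE INDEX ix_conn_from ON connection (from_table_id, from_record_id);
        CREATE INDEX ix_conn_to   ON connection (to_table_id, to_record_id);
        """)

        def connections_for_table(table_id):
            # Outgoing plus incoming in one round trip; each half of the UNION
            # hits its own index, so only the relevant slice is ever read.
            return con.execute(
                "SELECT * FROM connection WHERE from_table_id = ? "
                "UNION ALL "
                "SELECT * FROM connection WHERE to_table_id = ? AND from_table_id <> ?",
                (table_id, table_id, table_id)).fetchall()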

    Read the article

  • What is needed for a networked home printer?

    - by Jay
    I've seen this question: Home network printer recommendations but I think I'm asking something more basic. I'm not really familiar with networked printers or how they work or what they do, really. What I'd like is to have a printer that is accessible to anyone connected to my home network, without having to plug into the printer itself. An alternative setup might be to have a printer that is always hooked up to one computer, like a desktop, that is almost always on and allows other computers connected on the network to print to it as well. I believe the first option is called a networked printer and the second is printer sharing. But again, I'm new to this, so I don't really know the details or if I'm using the correct terminology. I was wondering if someone might be able to shed some light on this and let me know what is needed for either of these setups. Thanks.

    Read the article

  • Presentations about SARGability tomorrow

    - by Rob Farley
    Tomorrow, via LiveMeeting, I’m giving two presentations about SARGability. That’s April 27th, for anyone reading this later. The first will be at the Adelaide SQL Server User Group. If you’re in Adelaide, you should definitely come along. Both meetings are being broadcast to the PASS AppDev Virtual Chapter. The second will be in the Adelaide evening at 9:30pm, and will just be me and my computer. The audience will be on the other end of a phone-line, which is very different for me. I’m very much...(read more)

    Read the article

  • Can WinRT really be used at just the boundaries?

    - by Bret Kuhns
    Microsoft (chiefly, Herb Sutter) recommends when using WinRT with C++/CX to keep WinRT at the boundaries of the application and keep the core of the application written in standard ISO C++. I've been writing an application which I would like to leave portable, so my core functionality was written in standard C++, and I am now attempting to write a Metro-style front end for it using C++/CX. I've had a bit of a problem with this approach, however. For example, if I want to push a vector of user-defined C++ types to a XAML ListView control, I have to wrap my user-defined type in a WinRT ref/value type for it to be stored in a Vector^. With this approach, I'm inevitably left with wrapping a large portion of my C++ classes with WinRT classes. This is the first time I've tried to write a portable native application in C++. Is it really practical to keep WinRT along the boundaries like this? How else could this type of portable core with a platform-specific boundary be handled?

    Read the article

  • "Updating VMware Tools for Linux" spins forever

    - by Nicolas Raoul
    I installed Ubuntu 11.04 inside VMware Player 3.1.4 inside Ubuntu 10.10. When first booting from the ISO image I was asked if I wanted to download the VMware Tools for Linux, and I accepted. It downloaded for a few minutes, and then it has been "updating" for an hour already. Even shutting down the VM does not stop it. If I try to exit VMware, I am told "Cannot exit while still downloading. Cancel all downloads and try again." but the dialog's Cancel button is greyed out. Nothing special in vmware.log. I ended up killing VMware. A few days later I restarted it; it asked about the VMware Tools again, I accepted again, and got the same problem...

    Read the article

  • SEO and unique IDs in URLs

    - by kokoko
    I have a website whose home page is at http://domain.com. When a new user first hits the site, I create a unique id of 5 chars for them (for example 'abcde') and redirect them to http://domain.com/abcde so they can later bookmark and return to their workspace. My question is what's the best approach for SEO purposes: I need the main URL domain.com to be indexed, but Google will also get the redirect and will not index the main page. I know about canonical URLs, but this applies only when the domain.com URL does not redirect. Also, should I use a 301 or a 302 code in the redirect?
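    A minimal sketch of the setup being described (illustrative Python, standard library only; the id generation is a stand-in). One common answer to the question: a per-user redirect like this is temporary content, so 302 fits, while rel=canonical on the workspace pages points crawlers back at the root so the main URL stays the indexed one:

        from http.server import BaseHTTPRequestHandler, HTTPServer
        import secrets

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                if self.path == "/":
                    # 302, not 301: every visitor gets a different target, so
                    # crawlers should keep treating the root as the real page.
                    self.send_response(302)
                    self.send_header("Location", "/" + secrets.token_urlsafe(8)[:5])
                    self.end_headers()
                else:
                    self.send_response(200)
                    self.send_header("Content-Type", "text/html")
                    self.end_headers()
                    # rel=canonical folds every workspace URL back into the root.
                    self.wfile.write(b'<link rel="canonical" href="http://domain.com/">'
                                     b'<p>workspace</p>')

        HTTPServer(("", 8000), Handler).serve_forever()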

    Read the article

  • Shader optimization – Cg/HLSL pseudo-booleans via multiplication

    - by teodron
    Since HLSL/Cg do not allow texture fetching inside conditional blocks, I first check a variable and perform some computations, then set a float flag to 0.0 or 1.0 depending on the results. I'd like to trigger a texture fetch only if the flag is 1.0 (or non-null, for that matter). I kind of hoped this would do the trick:
    float4 TU0_atlas_colour = pseudoBool * tex2Dlod(TU0_texture, float4(tileCoord, 0, mipLevel));
    That is, if pseudoBool is 0, will the texture fetch function still be called and produce overhead? I was hoping to prevent it from being executed via this trick that usually works in plain C/C++.
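    A plain-Python analogue of what multiply-by-flag actually buys (the fetch function is a hypothetical stand-in): both operands of * are evaluated before the multiply, so the fetch still runs even when the flag is 0 - the multiplication only zeroes the result, it does not skip the call. The same holds for ordinary arithmetic in C/C++, and as a rule in HLSL/Cg both sides of such a branchless select are computed; the compiler may sometimes optimize, but the multiply cannot be relied on to skip the fetch:

        def expensive_fetch(coord):
            print("fetch executed")        # stands in for tex2Dlod's memory traffic
            return 0.5

        pseudo_bool = 0.0
        colour = pseudo_bool * expensive_fetch((0.25, 0.75))
        # prints "fetch executed" and colour == 0.0: the cost was paid anyway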

    Read the article

  • How to disable tap to click in Lubuntu 13.10

    - by radiomasten
    Tap to click is usually the first thing I disable when I have installed a new OS, but this time I couldn't get rid of it. In earlier versions of Lubuntu, I was able to disable it by writing "@synclient MaxTapTime=0" to /etc/xdg/lxsession/Lubuntu/autostart and saving. But in Lubuntu 13.10 this method doesn't work any more, and I can't find any solution on the internet either. (If there were a checkbox in the "mouse and keyboard" preferences in LXDE to turn tap to click on/off permanently, like in Unity, it would make both lovers and haters of this divisive feature happy. I don't understand how this feature could be thought of as something everybody wants.)

    Read the article

  • Learn About HTML5 and the Future of the Web

    San Francisco Java, PHP, and HTML5 user groups hosted an event on May 11th, 2010 on HTML5, with three amazing speakers: Brad Neuberg from Google, Giorgio Sardo from Microsoft, and Peter Lubbers from Kaazing. In this first of the three videos, Brad Neuberg from Google (formerly an HTML5 advocate and currently a Software Engineer on the Google Buzz team) explains why HTML5 matters - to consumers as well as developers! His overview of HTML5 included SVG/Canvas rendering, CSS transforms, app-cache, local databases, web workers, and much more. He also identified the scope and practical implications of the changes that are coming with HTML5 support in modern browsers. This event was organized by Marakana, Michael Tougeron from Gamespot, and Bruno Terkaly from Microsoft. Microsoft hosted the event, and Marakana, Gamespot, Medallia, TEKsystems, and Guidewire Software sponsored it. (Video: 50:44)

    Read the article

  • Simple Project Templates

    - by Geertjan
    The NetBeans sources include a module named "simple.project.templates". In the module sources, Tim Boudreau turns out to be the author of the code, so I asked him what it was all about, and if he could provide some usage code. His response, from approximately this time last year (because it's been sitting in my inbox for a while), is below.

    Sure - though I think the javadoc in it is fairly complete. I wrote it because I needed to create a bunch of project templates for Javacard, and all of the ways that is usually done were grotesque and complicated. I figured we already have the ability to create files from templates, and we already have the ability to do substitutions in templates, so why not have a single file that defines the project as a list of file templates to create (with substitutions in the names) and some definitions of what should be in project properties. You can also add files to the project programmatically if you want.

    Basically, a template for an entire project is a .properties file. Any line which doesn't have the prefix 'pp.' or 'pvp.' is treated as the definition of one file which should be created in the new project. Any such line where the key ends in * means that file should be opened once the new project is created. So, for example, in the nodejs module, the definition looks like:

    {{projectName}}.js*=Templates/javascript/HelloWorld.js
    .npmignore=node_hidden_templates/npmignore

    So, the first line means:
    - Create a file with the same name as the project, using the HelloWorld template. I.e. the left side of the line is the relative path of the file to create, and the right side is the path in the system filesystem for the template to use. If the template is not one you normally want users to see, just register it in the system filesystem somewhere other than Templates/ (but remember to set the attribute that marks it as a template).
    - Include that file in the set of files which should be opened in the editor once the new project is created.

    To actually create a project, first you just create a new ProjectCreator:

    ProjectCreator gen = new ProjectCreator(parentFolderOfNewProject);

    Now, if you want to programmatically generate any files, in addition to those defined in the template, you can:

    gen.add(new FileCreator("nbproject", "project.xml", false) {
        public DataObject create(FileObject project, Map<String,String> substitutions) throws IOException {
            ...
        }
    });

    Then pass the FileObject for the project template (the properties file) to the ProjectCreator's createProject method (hmm, maybe it should be the string path to the project template instead, to save the caller trouble looking up the FileObject for the template). That method looks like this:

    public final GeneratedProject createProject(final ProgressHandle handle, final String name, final FileObject template, final Map<String, String> substitutions) throws IOException

    The name parameter should be the directory name for the new project; the map is the strings you gathered in the wizard which should be used for substitutions. createProject should be called on a background thread (i.e. use a ProgressInstantiatingIterator for the wizard iterator and just pass in the ProgressHandle you are given). The return value is a GeneratedProject object, which is just a holder for the created project directory and the set of DataObjects which should be opened when the wizard finishes.
    I'd love to see simple.project.templates moved out of the javacard cluster, as it is really useful and much simpler than any of the stuff currently done for generating projects. It would also be possible to do much richer tools for creating projects in apisupport - i.e. choose (or create in the wizard) the templates you want to use, generate a skeleton wizard with a UI for all the properties you'd like to substitute, etc.

    Here is a partial project template from Javacard - for example usage, see org.netbeans.modules.javacard.wizard.ProjectWizardIterator in javacard.project (or the much simpler one in contrib/nodejs).

    #This properties file describes what to create when a project template is
    #instantiated.  The keys are paths on disk relative to the project root.
    #The values are paths to the templates to use for those files in the system
    #filesystem.  Any string inside {{ and }}'s will be substituted using properties
    #gathered in the template wizard.
    #Special key prefixes are
    #  pp. - indicates an entry for nbproject/project.properties
    #  pvp. - indicates an entry for nbproject/private/private.properties

    #File templates, in format [path-in-project=path-to-template]
    META-INF/javacard.xml=org-netbeans-modules-javacard/templates/javacard.xml
    META-INF/MANIFEST.MF=org-netbeans-modules-javacard/templates/EAP_MANIFEST.MF
    APPLET-INF/applet.xml=org-netbeans-modules-javacard/templates/applet.xml
    scripts/{{classnamelowercase}}.scr=org-netbeans-modules-javacard/templates/test.scr
    src/{{packagepath}}/{{classname}}.java*=Templates/javacard/ExtendedApplet.java
    nbproject/deployment.xml=org-netbeans-modules-javacard/templates/deployment.xml

    #project.properties contents
    pp.display.name={{projectname}}
    pp.platform.active={{activeplatform}}
    pp.active.device={{activedevice}}
    pp.includes=**
    pp.excludes=

    I will be using the above info in an upcoming blog entry and provide step-by-step instructions showing how to use them. However, anyone else out there should have enough info from the above to get started yourself!

    Read the article

  • What is the best free program to burn DVD movies from DVD movie files that are present on the hard drive the way ImgBurn does it in Windows?

    - by cipricus
    Evaluating burning programs is difficult: it takes a lot of time, and errors are very unpleasant. I have looked at AcetoneISO, Brasero, and K3b (which many recommend), but they seem more limited than Nero, CDBurnerXP and especially ImgBurn on Windows. They all seem to work (I have not tested them fully myself), but at a more basic level (create ISOs, burn them, etc.). When it comes to something like creating a DVD movie disc from DVD movie files present on the hard drive, it seems to me that I would have to create the ISO first and then burn it, which involves using more hard drive space and more time. Looking in the Windows direction, there is a Nero for Linux; it costs money. ImgBurn is said to work well in Wine, but as yet I have not tested it enough. What other options are there?

    Read the article

  • Oracle Linked Servers on Windows Server 2008 R2

    - by John Paul Cook
    Oracle hasn’t yet released versions of its client software for Windows Server 2008 R2. If you need to create an Oracle linked server, that’s a problem. You’ll see this installation block when attempting to install the Oracle client software for Windows Server 2008: It’s very simple to fix. Check the first checkbox to make the installer ignore the version check. Click Next and ignore the warning you’ll see. The installation should complete successfully. Windows does offer various strategies for mitigating...(read more)

    Read the article

  • The Recovery: New Challenges for your Supply Chain!

    - by [email protected]
    Nearly half of CFOs are planning to reduce their inventory during the first half of 2010, in part due to supply chain improvements that allow them to hold less product, but also because of reduced demand, according to Kate O'Sullivan, Sr. Editor at CFO Magazine. Her view is based on this quarter's Duke University Global Business Outlook Survey. Highlights: Employment will be a drag on the economy - full-time employment to increase by 1%, temp hiring to grow <1%, outsourcing 4%. 70% of CFOs at SMEs say credit conditions are worse than 12 months ago, placing strains on inventory growth. Asia and China finance execs are more optimistic than their EMEA or US counterparts and expect stronger growth in capital spending, with a 16% gain. Source: "Slouching Towards Recovery", CFO Magazine, April 2010, pgs 19-20

    Read the article

  • Do employers prefer software engineering over CS majors?

    - by Joey Green
    I'm in grad school at a university that was one of the first to have an accredited software engineering program. My undergrad is in CS. An employer recently recruited at our university and hired 5 SE majors; none of them were CS. Do employers prefer software engineering majors? The reason I ask is because I can focus on many different areas during my graduate studies and really want to take the classes that will help me land a great job. Right now I'm either going to use CUDA to parallelize an advanced ray-tracer for a graduate project or do research on non-photo-realistic rendering in augmented reality. Pursuing these would leave very few SE classes in my schedule. If I went the software engineering route, I would probably do research into either data-oriented programming or software design complexity. Sometimes I think: when I'm 40 and look back, will it matter at all? For some reason I'm thinking not.

    Read the article

  • junior / professional / senior categorization

    - by oozoo
    Hey guys, is it just me or is the categorization of developer levels highly subjective? I get the feeling that every company tries to hire experienced developers as juniors because they don't know $technology. For example, my own career: I switched technologies a couple of times while sticking to Java as a programming language. I first worked for 3 years using JavaSE technologies; the next company I worked for hired me as a junior because I didn't have JavaEE experience - while still selling me at professional level to customers (I work in consulting). The next company hired me as a junior again because I didn't have SAP experience - they mostly work with SAP Java technologies, which is definitely a niche. Still, they sell all their technology consultants for exactly the same rate while paying them significantly different wages. Now, when switching jobs again, I feel like this whole thing is going to start all over because I don't have Spring experience or Oracle knowledge. tl;dr = is my observation totally off base, or are companies just using these categorizations as a means to keep down wages?

    Read the article

  • The Arab HEUG is now a reality, and other random thoughts

    - by user9147039
    I just returned from Doha, Qatar, where the first of its kind HEUG (Higher Education User Group) meeting for institutions in the Middle East and North Africa was held at Qatar University and jointly hosted by Dammam University from Saudi Arabia. Over 80 delegates attended, including representation from education institutions in Oman, Saudi Arabia, Lebanon, and Qatar. There are many other regional HEUG organizations in place (in Australia/New Zealand, APAC, EMEA, as well as smaller regional HEUGs in the Netherlands, South Africa, and in regions of the US), but it was truly an accomplishment to see this Middle East/North Africa group organize and launch their chapter with a meeting of this quality. To be known as the Arab HEUG going forward, I am excited about the prospects for sharing between the institutions and for the growth of Oracle solutions in the region. In particular, the hosts for the event (Qatar University) did a masterful job with logistics and organization, and the quality of the event was a testament to their capabilities. Among the more interesting and enlightening presentations I attended were one from Dammam University on the lessons learned from their implementation of Campus Solutions and transition off of Banner, as well as one on Qatar University's use of E-Business Suite for grants management (both pre- and post-award). The most notable fact coming from this latter presentation was the fit (89%) of e-Business Suite Grants to the university's requirements.

    In a few weeks' time we will be convening the 5th meeting of the Oracle Education & Research Industry Strategy Council in Redwood Shores (the 5th since my advent into my current role). The main topics of discussion will be around our Higher Education Applications Strategy for the future, including cloud approaches to ERP (HCM, Finance, and Student Information Systems) and some case studies on the benefits of leveraging delivered functionality and extensibility in the software (versus customization). On the second day of the event we will turn our attention to Oracle in Research, and also to budgeting and planning in higher education. Both of these sessions will include significant participation from council members in the form of panel discussions. Our EVPs for Systems (John Fowler) and for Global Cloud Services and North America application sales (Joanne Olson) will join us for the discussion.

    I recently read a couple of articles that were surprising to me. The first was from Inside Higher Ed on October 15, entitled "As colleges prepare for major software upgrades, Kuali tries to woo them from corporate vendors." It continues to be a disappointment that after all this time we are still debating whether it is better to build enterprise software through open or community source initiatives when fully functional, flexible, supported, and widely adopted options exist in the marketplace. A decade or more ago, when these solutions were relatively immature and there was a great deal of turnover in the market, I could appreciate initiatives like Kuali. But let's not kid ourselves - the real objective of this movement is to counter a perceived predatory commercial software industry. Again, when commercial solutions are deployed as written without significant customization, and standard business processes are adopted, the cost of these solutions (relative to the value delivered) is quite low, and certainly much lower than the massive investment (and risk) in in-house developers to support a bespoke community source system.
    In this era of cost pressures in education and the need to refocus resources on teaching, learning, and research, I believe it's bordering on irresponsible to continue to pursue open-source ERP. Many of the adopters' total costs are staggering, with little to show for their efforts and expended resources.

    The second article was recently in the Chronicle of Higher Education, entitled "'Big Data' Is Bunk, Obama Campaign's Tech Guru Tells University Leaders." This one was so outrageous I almost don't want to legitimize it by referencing it here. In the article the writer relays statements made by Harper Reed, President Obama's former CTO for his 2012 re-election campaign, that big data solutions in education have no relevance and are akin to snake oil. He goes on to state that while he's a fan of data-driven decision making in education, most of the necessary analysis can be accomplished in Excel spreadsheets. Yeah… right. This is exactly what ails education (higher education in particular): dozens of shadow and siloed systems running on spreadsheets, with limited-to-no enterprise-wide initiatives to harness the data-rich environment that is a higher ed institution and transform the data into usable information. I'll grant Mr. Reed that "Big Data" is overused and hackneyed, but imperatives like improving student success in higher education are classic big data problems that data mining and predictive analytics can address. Further, higher ed needs to produce far more data scientists and analysts than are currently in the pipeline, to further this discipline and the application of these tools to many other problems across multiple industries.

    Read the article

  • The Art of Computer Programming - To read or not to read?

    - by Zannjaminderson
    There are lots of books about programming out there, and it seems Code Complete is pretty much at the top of most people's list of "must-read programming books", but what about The Art of Computer Programming by Donald Knuth? I'm a busy person, between work and a young family I don't have a ton of free time, so I have to be picky about how I use it. I'm wondering - has anybody here read 'TAOCP'? If so, is it worth making time to read or would some other book or more on-the-side programming like pet projects or contributing to open source be a better use of my time in terms of professional development? DISCLAIMER - For those of you who sport "Knuth is my homeboy" t-shirts, don't get me wrong - I want to read it, but I'm just wondering if it should be right at the top of my priority list or if something else should come first.

    Read the article

  • SQLAuthority News – A Conversation with an Old Friend – Sri Sridharan

    - by pinaldave
    Sri Sridharan is my old friend and we often talk on GTalk. The subject varies from life in India/USA, movies, music, and of course SQL. We have our differences when we talk about food or movies, but we always agree when we talk about SQL. Yesterday while chatting with him we talked about SQLPASS and the conversation lasted for a long time. Here is the conversation between us on GTalk. I have removed a few of the personal talks and formatted it into paragraphs, as GTalk often shows stuff out of formatting.

    Pinal: Sri, congrats on running for the PASS BoD again. You were so close last year. What made you decide to run again this year?

    Sri: Thank you Pinal for your leadership in the PASS India Community and all the things you do out there. After coming so close last year, there was no doubt in my mind that I would run again. I was truly humbled by the support I got from the community. Growing up in India for over 25 years, you are brought up in a very competitive part of the world. Right from the pressure of staying at the top of the class from kindergarten to your graduation, and the relentless push from your parents about studying and getting good grades (and nothing else matters), you essentially end up living in a pressure cooker. To survive that relentless pressure, you need to have a thick skin and the ability to stand up for who you really are and what you want to accomplish, and in the process stay true to those values. I am striving for a greater cause: to make PASS an organization that can help people with their SQL Server careers, to make PASS relevant to its chapter members, to make PASS an organization that every SQL professional in the world wants to be connected with. Just because I did not get elected or appointed last year does not mean that these causes are not worth fighting for. Giving up upon failing the first time is simply not in me. If I did that, what message would I send to those who voted for me? What message would I send to my kids?

    Pinal: As someone who has such strong roots in India, what can the Indian PASS community expect from you?

    Sri: First of all, I think fostering regional leadership is something PASS must encourage as part of its global growth plan. For PASS global, being able to understand all the issues in a region of the world and make sound decisions on a continuous basis will be a tough thing to do. I expect people like you, chapter leaders, regional mentors, and MVPs of the region to start playing a bigger role in shaping the next generation of PASS. That is something I said in my campaign and I still stand by it. I would like to see growth in the number of chapters in India. The current count does not truly represent the full potential of that region. I was pretty thrilled to see the Bangalore SQLSaturday happen early this year. I would like to see more SQLSaturday events, at least in the major metro cities. I know the issues in India are very different from the rest of the world, so the formula needs to be tweaked a little for it to work better there. Once the SQLSaturday model is vetted out, maybe there could be enough justification to have SQLRally India. PASS needs to have a premier SQL event in that region. Going to the USA, or Europe for that matter, is incredibly hard due to VISA issues etc. So this could be a case of PASS coming closer to where the community is.

    Pinal: What portfolio would you take on if you are elected to the PASS Board?

    Sri: There are some very strong folks on the PASS Board today. The President discusses the portfolios with the group and makes the final call on the portfolios. I am also a fan of having a team associated with the portfolios. In that case, one person is the primary for a portfolio but secondary on a couple of other portfolios. This way people on the board have a direct vested interest in a few portfolios. Personally, I know I would do these portfolios good justice - Chapters, Global Growth and Events (SQLSat, SQLRally). I would try to see if we can get a director to focus on Volunteers. To me that is very critical for growth in the international regions.

    Pinal: This is an interesting conversation with you, Sri. I have known you a long time, but this is indeed inspiring to many. India is a big country and we appreciate your thoughts.

    Sri: Thank you very much for taking the time to chat with me today. Cheers.

    There are pretty strong candidates for the SQLPASS Board elections this year. I know all of them in person, and honestly it is going to be extremely difficult to pick one over another; I am indeed in a crunch right now over how to do it. Though the choice is difficult, I encourage you to vote for them. I am extremely confident that the new board of directors will all have the same goal - a better SQL Server community. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, DBA, PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • How to disable cryptswap?

    - by mit
    How can I disable cryptswap? I would like an unencrypted swap like before. This is on an Ubuntu 9.10 system. It worked: first I removed the lines from /etc/fstab and /etc/crypttab. But it was not possible (and maybe not necessary?) to use the command
    sudo cryptsetup remove cryptswap1
    Before rebooting it was not possible (because cryptswap1 was still in use), and after rebooting cryptswap1 was already inactive. I removed cryptsetup from the system afterwards:
    sudo aptitude remove cryptsetup

    Read the article

  • Live boot from hard disk problem

    - by user172277
    I've installed Ubuntu Desktop 13.04 32-bit. Next I configured /etc/grub.d/40_custom to boot a live system from ubuntu.iso (also Desktop 13.04 32-bit). I used this configuration:
    menuentry "Ubuntu 13.04 Desktop" {
        loopback loop /boot/ubuntu.iso
        linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=/boot/ubuntu.iso noeject noprompt splash --
        initrd (loop)/casper/initrd.lz
    }
    and it works OK. Later I made some changes on my installed Ubuntu: some configuration, additional packages, and so on. After that I made a backup using the remastersys tool, which gave me a new ISO file. So I wanted to use it. And here was the first problem: remastersys creates only an initrd.gz file, so I changed the grub configuration from:
    initrd (loop)/casper/initrd.lz
    to:
    initrd (loop)/casper/initrd.gz
    But after that, when I reboot my system I get the error:
    /init: line 3: can't open /dev/sr0: No medium found
    Any ideas how to fix it? Best Regards, Bartosz

    Read the article

  • USB modeswitch to mass storage device

    - by Andrew
    I'm trying to install a Sierra AirCard 320U (branded as "Telstra USB 4G") into a VirtualBox Windows XP machine, on an Ubuntu 10.10 host. usb_modeswitch offers me a way to disable the "mass storage device" option, but I can't see a way to permanently re-enable it so it's usable by VirtualBox (device filters briefly detect it but then it disappears again). lsusb shows the device as:
    0f3d:68aa Airprime, Incorporated
    When I first insert the device, it shows as:
    1199:0fff Sierra Wireless, Inc.
    Is there any way to re-enable the storage device so VirtualBox can see it?

    Read the article

  • Source Control and SQL Development – Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been focusing specifically on the latest version of SQL Source Control by Red Gate Software.  But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio.  “So, how does that work?” you may wonder.  Well, let me share some of the details of how we do it where I work…

    The key to this approach is that everything is done via Transact-SQL script files: either natively written T-SQL, or generated.  My preference is to write all my code by hand, which forces you to become better at your SQL syntax.  But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards, and store those in your source control system.  You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer and choosing Tasks, Generate Scripts (see figure 1 to the left).  You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production and you need to make just a simple change, such as adding a new column or index.  In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button.  Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change.  I believe that it is important to actually execute the script rather than just click the Save button, because this is your first test that your change script is working and you didn’t somehow lose a portion of the change.

    As you can imagine, all this generating of scripts can get tedious and tempting to skip entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code, and then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program.

    So, now that you have all of these script files, what do you do with them?  Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system.  ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution.  Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash nor corrupt anything; but these scripts are really intended to be run only once in a database.

    Once you have your initial set of scripts loaded into source control, making changes, such as altering a stored procedure, becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control.  Of course, this is where the lack of integration for source control systems within SSMS becomes an irritation, because it means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in.  And when you have 800+ procedures like we do, it can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, make the change, save it, and then go back to source control to check in.  Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide.  It is worth the effort, and this is how I have been doing development for the last several years.

    Remember that everything that SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing.  And now I have shown you how you can do it all without spending any extra money.  You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn’t it), and of course Management Studio is free with your SQL Server database engine software.  So, whether you spend the money on tools to make it easier or not, you now have no excuse for not using source control with your SQL development.

    * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that followed by a section that assigns permissions.  This allows me to run the same script regardless of whether the procedure previously existed in the database.  If the script were only an ALTER PROCEDURE, it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it in if it did not exist.  There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that, whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD.  If you do, then you have broken the integrity of your deployment process, because what you deployed to PROD was not exactly the same as what was tested in TEST, so you have effectively released untested code into PROD.
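    A minimal sketch of the script layout the footnote describes, generated here from a Python template (the procedure name, body, and role are made-up placeholders). The point is that the whole IF EXISTS…DROP…CREATE…GRANT unit travels as one file through every environment:

        # Hypothetical generator for the idempotent deployment scripts described above.
        TEMPLATE = """\
        IF EXISTS (SELECT * FROM sys.objects
                   WHERE object_id = OBJECT_ID(N'{name}') AND type = N'P')
            DROP PROCEDURE {name};
        GO
        CREATE PROCEDURE {name}
        AS
        BEGIN
        {body}
        END
        GO
        GRANT EXECUTE ON {name} TO {grantee};
        GO
        """

        script = TEMPLATE.format(
            name="dbo.GetCustomer",
            body="    SELECT CustomerID, Name FROM dbo.Customer;",
            grantee="app_role",
        )
        print(script)   # check this output into source control; run the same file in TEST and PROD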

    Read the article
