Search Results

Search found 18860 results on 755 pages for 'enigma machine'.


  • Chunking a List - .NET vs Python

    - by Abhijeet Patel
    Chunking a List As I mentioned last time, I'm knee deep in Python these days. I come from a statically typed background so it's definitely a mental adjustment. List comprehensions are BIG in Python, and having worked with a few of them I can see why. Let's say we need to chunk a list into sublists of a specified size. Here is how we'd do it in C#:

        static class Extensions
        {
            public static IEnumerable<List<T>> Chunk<T>(this List<T> l, int chunkSize)
            {
                if (chunkSize <= 0)
                {
                    throw new ArgumentException("chunkSize must be greater than zero", "chunkSize");
                }
                for (int i = 0; i < l.Count; i += chunkSize)
                {
                    yield return new List<T>(l.Skip(i).Take(chunkSize));
                }
            }
        }

        static void Main(string[] args)
        {
            var l = new List<string> { "a", "b", "c", "d", "e", "f", "g" };
            foreach (var list in l.Chunk(7))
            {
                string str = list.Aggregate((s1, s2) => s1 + "," + s2);
                Console.WriteLine(str);
            }
        }

    A little wordy, but still pretty concise thanks to LINQ. On each iteration we skip the first i elements, take the next chunkSize elements, and yield them out as a new List. (The guard also rejects a chunkSize of zero, since i += chunkSize would otherwise never advance and the loop would never terminate.) The Python implementation is a bit more terse:

        def chunkIterable(iter, chunkSize):
            '''Chunks an iterable object into lists of the specified chunkSize'''
            assert hasattr(iter, "__iter__"), "iter is not an iterable"
            for i in xrange(0, len(iter), chunkSize):
                yield iter[i:i + chunkSize]

        if __name__ == '__main__':
            l = ['a', 'b', 'c', 'd', 'e', 'f']
            generator = chunkIterable(l, 2)
            try:
                while(1):
                    print generator.next()
            except StopIteration:
                pass

    xrange takes a start, a stop and a step and lazily generates the indices in that range, which can be consumed in a for loop (much like using a C# iterator in a foreach loop). Since chunkIterable has a yield statement, it becomes a generator as well. iter[i:i + chunkSize] slices the list based on the current iteration index and chunkSize, creating a new list that we yield out to the caller one chunk at a time. A generator, much like an iterator, is a state machine: each subsequent call to it remembers where the last call left off and resumes execution from that point. The caveat to keep in mind is that since variables are not explicitly typed, we need to check the argument ourselves; hasattr(iter, "__iter__") ensures the object passed in is iterable, which is very similar to accepting an IEnumerable in .NET land. Strictly speaking, though, the body also relies on len() and slicing, so this version only works on sequences such as lists, not on arbitrary iterables.
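    If you want to chunk a true iterable (a generator, a file, a network stream) rather than a sliceable sequence, itertools.islice can do the job. The sketch below is my own addition rather than part of the original post; it uses Python 3 syntax, and the name chunk_any is made up for illustration:

        import itertools

        def chunk_any(iterable, chunk_size):
            """Yield lists of up to chunk_size items from any iterable (illustrative sketch)."""
            if chunk_size <= 0:
                raise ValueError("chunk_size must be greater than zero")
            it = iter(iterable)  # works for generators, files, sockets, ...
            while True:
                chunk = list(itertools.islice(it, chunk_size))
                if not chunk:
                    break
                yield chunk

        if __name__ == '__main__':
            for chunk in chunk_any((c for c in "abcdefg"), 3):
                print(",".join(chunk))  # prints a,b,c then d,e,f then g

    The trade-off is that islice consumes its source as it goes, so unlike the slicing version it makes a single pass and never needs to know the length up front.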

    Read the article

  • Oracle VM Deep Dives

    - by rickramsey
    "With IT staff now tasked to deliver on-demand services, datacenter virtualization requirements have gone beyond simple consolidation and cost reduction. Simply provisioning and delivering an operating environment falls short. IT organizations must rapidly deliver services, such as infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS). Virtualization solutions need to be application-driven and enable:" "Easier deployment and management of business critical applications" "Rapid and automated provisioning of the entire application stack inside the virtual machine" "Integrated management of the complete stack including the VM and the applications running inside the VM." Application Driven Virtualization, an Oracle white paper That was published in August of 2011. The new release of Oracle VM Server delivers significant virtual networking performance improvements, among other things. If you're not sure how virtual networks work or how to use them, these two articles by Greg King and friends might help. Looking Under the Hood at Virtual Networking by Greg King Oracle VM Server for x86 lets you create logical networks out of physical Ethernet ports, bonded ports, VLAN segments, virtual MAC addresses (VNICs), and network channels. You can then assign channels (or "roles") to each logical network so that it handles the type of traffic you want it to. Greg King explains how you go about doing this, and how Oracle VM Server for x86 implements the network infrastructure you configured. He also describes how the VM interacts with paravirtualized guest operating systems, hardware virtualized operating systems, and VLANs. Finally, he provides an example that shows you how it all looks from the VM Manager view, the logical view, and the command line view of Oracle VM Server for x86. Fundamental Concepts of VLAN Networks by Greg King and Don Smerker Oracle VM Server for x86 supports a wide range of options in network design, varying in complexity from a single network to configurations that include network bonds, VLANS, bridges, and multiple networks connecting the Oracle VM servers and guests. You can create separate networks to isolate traffic, or you can configure a single network for multiple roles. Network design depends on many factors, including the number and type of network interfaces, reliability and performance goals, the number of Oracle VM servers and guests, and the anticipated workload. The Oracle VM Manager GUI presents four different ways to create an Oracle VM network: Bonds and ports VLANs Both bond/ports and VLANS A local network This article focuses the second option, designing a complex Oracle VM network infrastructure using only VLANs, and it steps through the concepts needed to create a robust network infrastructure for your Oracle VM servers and guests. More Resources Virtual Networking for Dummies Download Oracle VM Server for x86 Find technical resources for Oracle VM Server for x86 -Rick Follow me on: Blog | Facebook | Twitter | Personal Twitter | YouTube | The Great Peruvian Novel

    Read the article

  • links for 2010-04-14

    - by Bob Rhubart
    Why business needs should shape IT architecture - McKinsey Quarterly - Business Technology - Organization
    "Too often, efforts to fix architecture issues remain rooted in a company's IT practices, culture, and leadership. The reason, in part, is that the chief architect—the overall IT-architecture program leader—is frequently selected from within the technical ranks, bringing deep IT know-how but little direct experience or influence in leading a business-wide change program. A weak linkage to the business creates a void that limits the quality of the resulting IT architecture and the organization's ability to enforce and sustain the benefits of implementation over time." -- Helge Buckow and Stéphane Rey
    (tags: architecture it technology enterprise mckinsey)

    Eric Maurice: April 2010 Critical Patch Update Released
    Eric Maurice offers the details on the April 2010 Critical Patch Update (CPUApr2010), "the first one to include security fixes for Oracle Solaris".
    (tags: oracle otn database fusionmiddleware peoplesoft security)

    @shivmohan: Oracle – OAF – Oracle Application Framework – OA Framework
    "For all the PL/SQL and Oracle Forms developers out there, start planning your evolution. Sure PL/SQL and Forms will be around for some time, but you need to add more skills to your stack if you want to stay current (employable)." -- Shivmohan Purohit
    (tags: oracle otn application framework)

    @ORACLENERD: APEX Architecture
    Oracle ACE Chet Justice offers a "short list of potential architectures" for Oracle APEX, based on his experience with a client.
    (tags: oracle otn oracleace apex architecture)

    Luis Moreno Campos: Why is Exadata so fast?
    "You could find a lot of tech doc around oracle.com, but the bottom line is that the vision to even build a V2 and place it as an OLTP and DW (general purpose) machine is just pure genius." -- Luis Moreno Campos
    (tags: oracle otn exadata database)

    Edwin Biemond: Resetting Weblogic datasources with ANT
    Oracle ACE and Whitehorses architect Edwin Biemond shares an ANT script "to fire some WLST and Python commandos" to correct invalid database session states.
    (tags: oracle otn oracleace database ANT Python)

    @deltalounge: The future of MySQL with Oracle
    Peter Paul van de Beek has compiled an informative collection of Edward Screven quotes, from various publications, on Oracle's plans for MySQL.
    (tags: oracle otn database mysql)

    Cristobal Soto: Coherence Special Interest Group: First Meeting in Toronto, Upcoming Events in New York and California
    Cameron Purdy, Patrick Peralta, and others are speaking at upcoming Coherence SIG events. Cristobal Soto shares the details.
    (tags: oracle otn coherence sig grid appserver)

    Read the article

  • SQL SERVER – Working with FileTables in SQL Server 2012 – Part 1 – Setting Up Environment

    - by pinaldave
    Filestream is a very interesting feature, and FileTable, which builds on Filestream, is equally exciting. Today in this post, we will learn how to set up the FileTable environment in SQL Server. The major advantage of FileTable is that it provides Windows API compatibility for file data stored within a SQL Server database. In simpler words, FileTables remove a barrier so that SQL Server can be used for the storage and management of unstructured data that is currently residing as files on file servers. Another advantage is Windows application compatibility: existing Windows applications can see this data as ordinary files in the file system. This way, you can use SQL Server to access the data using T-SQL enhancements, and Windows can access the files using its applications. So for the first step, you will need to enable the Filestream feature on the instance and create a Filestream-enabled database in order to use FileTable.

        -- Enable Filestream
        EXEC sp_configure filestream_access_level, 2
        RECONFIGURE
        GO

        -- Create Database
        CREATE DATABASE FileTableDB
        ON PRIMARY
            (Name = FileTableDB, FILENAME = 'D:\FileTable\FTDB.mdf'),
        FILEGROUP FTFG CONTAINS FILESTREAM
            (NAME = FileTableFS, FILENAME = 'D:\FileTable\FS')
        LOG ON
            (Name = FileTableDBLog, FILENAME = 'D:\FileTable\FTDBLog.ldf')
        WITH FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'FileTableDB');
        GO

    Now, you can run the following code and figure out if Filestream options are enabled at the database level.

        -- Check the Filestream Options
        SELECT DB_NAME(database_id), non_transacted_access, non_transacted_access_desc
        FROM sys.database_filestream_options;
        GO

    The above query returns a resultset like the following image shows. As you can see, the file level access is set to 2 (filestream enabled). Now let us create the FileTable in the newly created database.

        -- Create FileTable Table
        USE FileTableDB
        GO

        CREATE TABLE FileTableTb AS FileTable
        WITH (FileTable_Directory = 'FileTableTb_Dir');
        GO

    Now you can select data using a regular SELECT statement.

        SELECT *
        FROM FileTableTb
        GO

    It will return all the important columns which are related to the file. It will provide details like file size, archived, file types etc. You can also see the FileTable in SQL Server Management Studio. Go to Databases >> Newly Created Database (FileTableDB) >> Expand Tables. Here, you will see a new folder which says "FileTables". When expanded, it gives the name of the newly created FileTableTb. You can right click on the newly created table and click on "Explore FileTable Directory". This will open up the folder where the FileTable data will be stored. When you click on the option, it will open up the following folder on my local machine where the FileTable data will be stored:

        \\127.0.0.1\mssqlserver\FileTableDB\FileTableTb_Dir

    In tomorrow's blog post, Part 2, we will go over two methods of inserting data into this FileTable.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Filestream
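    Because the FileTable directory is exposed as a Windows share, anything that can write to a UNC path can add files to it. Below is a minimal Python sketch of my own (not part of the original post); it assumes the share \\127.0.0.1\mssqlserver is reachable, that NON_TRANSACTED_ACCESS is FULL as configured above, and that a local file C:\Temp\example.txt exists:

        import shutil

        # UNC path of the FileTable directory, as shown above (assumed reachable).
        filetable_dir = r"\\127.0.0.1\mssqlserver\FileTableDB\FileTableTb_Dir"

        # Copying a file into the share should make it appear as a row in FileTableTb.
        shutil.copy(r"C:\Temp\example.txt", filetable_dir)

    After the copy, SELECT * FROM FileTableTb should show the new file's name, size, and type alongside the other metadata columns.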

    Read the article

  • Haskell's cabal dependency problem with happy

    - by wirrbel
    I have problems installing ghc-mod on my linux machine. cabal complains about "happy" not being available in version >= 1.17:

        $ cabal install ghc-mod
        Resolving dependencies...
        [1 of 1] Compiling Main ( /tmp/haskell-src-exts-1.14.0-1357/haskell-src-exts-1.14.0/Setup.hs, /tmp/haskell-src-exts-1.14.0-1357/haskell-src-exts-1.14.0/dist/setup/Main.o )
        Linking /tmp/haskell-src-exts-1.14.0-1357/haskell-src-exts-1.14.0/dist/setup/setup ...
        Configuring haskell-src-exts-1.14.0...
        setup: The program happy version >=1.17 is required but it could not be found.
        Failed to install haskell-src-exts-1.14.0
        cabal: Error: some packages failed to install:
        ghc-mod-3.1.3 depends on haskell-src-exts-1.14.0 which failed to install.
        haskell-src-exts-1.14.0 failed during the configure step. The exception was:
        ExitFailure 1
        hlint-1.8.53 depends on haskell-src-exts-1.14.0 which failed to install.

    However, happy is in fact installed, in version 1.19, as you can see here:

        $ cabal install happy
        Resolving dependencies...
        [1 of 1] Compiling Main ( /tmp/happy-1.19.0-1124/happy-1.19.0/Setup.lhs, /tmp/happy-1.19.0-1124/happy-1.19.0/dist/setup/Main.o )
        Linking /tmp/happy-1.19.0-1124/happy-1.19.0/dist/setup/setup ...
        Configuring happy-1.19.0...
        Building happy-1.19.0...
        Preprocessing executable 'happy' for happy-1.19.0...
        [ 1 of 18] Compiling NameSet ( src/NameSet.hs, dist/build/happy/happy-tmp/NameSet.o )
        [ 2 of 18] Compiling Target ( src/Target.lhs, dist/build/happy/happy-tmp/Target.o )
        [ 3 of 18] Compiling AbsSyn ( src/AbsSyn.lhs, dist/build/happy/happy-tmp/AbsSyn.o )
        [ 4 of 18] Compiling ParamRules ( src/ParamRules.hs, dist/build/happy/happy-tmp/ParamRules.o )
        [ 5 of 18] Compiling GenUtils ( src/GenUtils.lhs, dist/build/happy/happy-tmp/GenUtils.o )
        [ 6 of 18] Compiling ParseMonad ( src/ParseMonad.lhs, dist/build/happy/happy-tmp/ParseMonad.o )
        [ 7 of 18] Compiling Lexer ( src/Lexer.lhs, dist/build/happy/happy-tmp/Lexer.o )
        [ 8 of 18] Compiling Parser ( dist/build/happy/happy-tmp/Parser.hs, dist/build/happy/happy-tmp/Parser.o )
        [ 9 of 18] Compiling AttrGrammar ( src/AttrGrammar.lhs, dist/build/happy/happy-tmp/AttrGrammar.o )
        [10 of 18] Compiling AttrGrammarParser ( dist/build/happy/happy-tmp/AttrGrammarParser.hs, dist/build/happy/happy-tmp/AttrGrammarParser.o )
        [11 of 18] Compiling Grammar ( src/Grammar.lhs, dist/build/happy/happy-tmp/Grammar.o )
        [12 of 18] Compiling First ( src/First.lhs, dist/build/happy/happy-tmp/First.o )
        [13 of 18] Compiling LALR ( src/LALR.lhs, dist/build/happy/happy-tmp/LALR.o )
        [14 of 18] Compiling Paths_happy ( dist/build/autogen/Paths_happy.hs, dist/build/happy/happy-tmp/Paths_happy.o )
        [15 of 18] Compiling ProduceCode ( src/ProduceCode.lhs, dist/build/happy/happy-tmp/ProduceCode.o )
        [16 of 18] Compiling ProduceGLRCode ( src/ProduceGLRCode.lhs, dist/build/happy/happy-tmp/ProduceGLRCode.o )
        [17 of 18] Compiling Info ( src/Info.lhs, dist/build/happy/happy-tmp/Info.o )
        [18 of 18] Compiling Main ( src/Main.lhs, dist/build/happy/happy-tmp/Main.o )
        Linking dist/build/happy/happy ...
        Installing executable(s) in /home/hope/.cabal/bin
        Installed happy-1.19.0

    Any ideas? cabal-install version 1.16.0.2, using version 1.16.0 of the Cabal library.

    Read the article

  • .NET development on a “Retina” MacBook Pro

    - by Jeff
    The rumor that Apple would release a super high resolution version of its 15" laptop has been around for quite a while, and it's one I watched closely. After more than three years with a 17" MacBook Pro, and all of the screen real estate it offered, I was ready to replace it with something much lighter. It was a fantastic machine, still doing 6 or 7 hours after 460 charge cycles, but I wanted lighter and faster. With the SSD I put in it, I was able to sell it for $750. The appeal of higher resolution goes way back to when I would plug into a projector and scale up. Consolas, as it turns out, is a nice looking font for code when it's bigger. While I'm mostly indifferent to iOS, I have to admit that a higher dot pitch on the iPhone and iPad is pretty to look at. So I ordered the new 15" "Retina" model as soon as the Apple Store went live with it, and got it seven days later.

    I've been primarily using Parallels as my VM of choice from OS X for about five years. They recently put out an update for compatibility with the display, though I'm not entirely sure what that means. I figured there would have to be some messing around to get the VM to look right. The combination that seems to work best is this: Set the display in OS X to "more room," which is roughly the equivalent of the 1920x1200 that my 17" did. It's not as stunning as the text at the default 1440x900 equivalent (in OS X), but it's still quite readable. Parallels still doesn't entirely know what to do with the high resolution, though what it should do is somehow treat it as native. That flaw aside, I set the Windows 7 scaling to 125%, and it generally looks pretty good. It's not really taking advantage of the display for sharpness, but hopefully that's something that Parallels will figure out.

    Screen tweaking aside, I got the base model with 16 gigs of RAM, so I give the VM 8. I can boot a Windows 7 VM in 9 seconds. Nine seconds! The Windows Experience Index scores are all 7 and above, except for graphics, which are both at 6. Again, that's in a VM. It's hard to believe there's something so fast in a little slim package like that. Hopefully this one gets me at least three years, like the last one.

    Read the article

  • Finally, I have my HP 6910p laptop running with 8Gb RAM

    - by Liam Westley
    Today, I received two Corsair Value Select 4Gb DDR SO-DIMMs (from overclock.co.uk) for my aging HP 6910p to give it the extra lease of life to keep it going until the end of 2010. And here is the proof that Windows 7 64-bit happily sees all 8Gb. No 4Gb modules are officially supported for the HP 6910p (they didn't exist when it was first built). I was taking a bit of a gamble, and relying on the UK distance selling regulations, which meant that even if they didn't work I'd be able to send them back, getting a full refund and only paying for the return postage. I'd read Keith Combs' blog back in 2008 (http://blogs.technet.com/b/keithcombs/archive/2008/07/05/loading-a-hp-6910p-with-8gb-of-ram.aspx), where he mentioned 'trying' out 4Gb samples of SO-DIMMs in an HP 6910p laptop, but there still appears to be no mention of running this configuration in any other blog.

    Seeing how the 8Gb of memory is used is made easier with the new Resource Monitor available in Windows 7. With two copies of Visual Studio 2008, Outlook, Firefox (with 30+ tabs), TweetDeck (an infamous memory hog) and VMWare Workstation running a virtual machine allocated 2Gb of memory, you might have no 'free' memory remaining, but the standby memory is an awesome 2.4Gb, and once the VM is up and running the Hard Faults/sec hovers around zero. It's the hard fault figure which really counts, because reducing that value means that you are preventing the Windows 7 system drive from being used for virtual memory paging operations. Even after only a few hours of use it's noticeable that disc access has been reduced and applications feel more responsive and 'snappy'.

    I did consider the option of purchasing an SSD to replace the main drive, rather than go for 8Gb of RAM, but I think I've probably made the correct decision. Given my hobby topic of virtualisation, I take the view that you can never have too much memory. It was also a decision made easier by the price differential between 8Gb of RAM and a decent size SSD. In the 18 months since Keith Combs tested the first 4Gb SO-DIMMs they have plummeted in price; at just under £100 per 4Gb, they are around a fifth of their launch price. So if you ever wondered if an HP 6910p can handle 8Gb, now you know.

    Read the article

  • Visual Studio 10 crashed when tried to open one of solutions

    - by Michael Freidgeim
    Visual Studio 2010 crashed when I tried to open one of my solutions. Closing Visual Studio and rebooting the machine didn't help. The error message that was logged (see below) didn't give any useful ideas. Finally it was fixed after I deleted the MySolution.suo file, which was quite big, and also the ReSharper folders.

        Log Name:      Application
        Source:        Application Error
        Event ID:      1000
        Task Category: (100)
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Description:
        Faulting application name: devenv.exe, version: 10.0.40219.1, time stamp: 0x4d5f2a73
        Faulting module name: msenv.dll, version: 10.0.40219.1, time stamp: 0x4d5f2d48
        Exception code: 0xc0000005
        Fault offset: 0x00355770
        Faulting process id: 0x1dc0
        Faulting application start time: 0x01cd1836888599f4
        Faulting application path: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe
        Faulting module path: c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\msenv.dll
        Report Id: 9924b2f9-844e-11e1-bc19-782bcba513ea
        Event Xml:
        <Event>
          <System>
            <Provider Name="Application Error" />
            <EventID Qualifiers="0">1000</EventID>
            <Level>2</Level>
            <Task>100</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2012-04-12T03:21:31.000000000Z" />
            <EventRecordID>401998</EventRecordID>
            <Channel>Application</Channel>
            <Security />
          </System>
          <EventData>
            <Data>devenv.exe</Data>
            <Data>10.0.40219.1</Data>
            <Data>4d5f2a73</Data>
            <Data>msenv.dll</Data>
            <Data>10.0.40219.1</Data>
            <Data>4d5f2d48</Data>
            <Data>c0000005</Data>
            <Data>00355770</Data>
            <Data>1dc0</Data>
            <Data>01cd1836888599f4</Data>
            <Data>C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe</Data>
            <Data>c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\msenv.dll</Data>
            <Data>9924b2f9-844e-11e1-bc19-782bcba513ea</Data>
          </EventData>
        </Event>
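    If this kind of corruption keeps coming back, the cleanup can be scripted. The Python snippet below is my own illustration, not something from the original post: it removes .suo files and any _ReSharper* cache folders under a solution directory, assuming the caches sit next to the .sln in folders named _ReSharper.<SolutionName> (names vary between ReSharper versions) and that Visual Studio is closed so nothing is locked:

        import os
        import shutil

        solution_dir = r"C:\Projects\MySolution"  # hypothetical path, adjust as needed

        for root, dirs, files in os.walk(solution_dir):
            # Remove solution user options files; Visual Studio rebuilds them automatically.
            for name in files:
                if name.lower().endswith(".suo"):
                    os.remove(os.path.join(root, name))
            # Remove ReSharper cache folders such as _ReSharper.MySolution.
            for name in list(dirs):
                if name.startswith("_ReSharper"):
                    shutil.rmtree(os.path.join(root, name))
                    dirs.remove(name)  # don't walk into the deleted folder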

    Read the article

  • Big Data – Interacting with Hadoop – What is PIG? – What is PIG Latin? – Day 16 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of HIVE in the Big Data story. In this article we will understand what PIG and PIG Latin are in the Big Data story. Yahoo started working on Pig for their application deployment on Hadoop. Yahoo's goal was to manage their unstructured data.

    What is Pig and What is Pig Latin?

    Pig is a high level platform for creating MapReduce programs used with Hadoop, and the language we use for this platform is called Pig Latin. Pig was designed to make Hadoop more user-friendly and approachable by power-users and non-developers. PIG is an interactive execution environment supporting the Pig Latin language. The Pig Latin language supports the loading and processing of input data with a series of transformations to produce the desired results. PIG has two different execution environments: 1) Local Mode – in this case all the scripts run on a single machine. 2) Hadoop – in this case all the scripts run on a Hadoop cluster.

    Pig Latin vs SQL

    Pig essentially creates a set of map and reduce jobs under the hood. Because of this, users do not have to write, compile and build their own MapReduce solutions for Big Data. Pig is very similar to SQL in many ways. The Pig Latin language provides an abstraction layer over the data. It focuses on the data and not the structure under the hood. Pig Latin is a very powerful language and it can do various operations like loading and storing data, streaming data, and filtering data, as well as various data operations related to strings. The major difference between SQL and Pig Latin is that PIG is procedural and SQL is declarative. In simpler words, Pig Latin is very similar to a SQL execution plan, and that makes it much easier for programmers to build various processes. Whereas SQL handles trees naturally, Pig Latin follows a directed acyclic graph (DAG). DAGs are used to model several different kinds of structures in mathematics and computer science.

    Tomorrow

    In tomorrow's blog post we will discuss a very important component of the Big Data ecosystem – Zookeeper.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Open World Day 1 Continued

    - by Antony Reynolds
    A Day in the Life of an Oracle OpenWorld Attendee Part II

    A couple of things I forgot to mention about yesterday's OpenWorld. First I attended a presentation on SOA Suite and Virtualization which explained how Oracle Virtual Assembly Builder (OVAB) can be used to accelerate the deployment of an Enterprise Deployment Guide (EDG) compliant SOA Suite infrastructure. OVAB provides the ability to introspect a deployed software component such as WebLogic Server, SOA Suite or other components and extract the configuration and package it up for rapid deployment into an Oracle Virtual Machine. OVAB allows multiple machines to be configured and connections made between the machines and outside resources such as databases. That by itself is pretty cool and has been available for a while in OVAB. What is new is that Oracle has done this for an EDG compliant installation and made it available as an OVAB assembly for customers to use, significantly accelerating the deployment of an EDG deployment. A real help for customers standing up EDG environments, particularly in test, dev and QA environments.

    The other thing I forgot to mention was the most memorable demo I saw at OpenWorld. This was done by my co-author Matt Wright, who was showcasing the products of his company Rubicon Red. They showed a really cool application called OneSpot which puts all the information about a single user's business processes in one spot! Apparently a customer suggested the name. It allows business flows to be defined that map onto events. As events occur the status of the business flow is updated to reflect the change. The interface is strongly reminiscent of social media sites and provides a graphical view of business flows. So how does this differ from BPEL and BPM process flows? The OneSpot process flow is more like a BAM process flow: it is based on events arriving from multiple sources, and is focused on the client's view of the process, not the actual business process. This is important because it allows an end user to get a view of where his current business flow is and what actions, if any, are required of him. This by itself is great, but better still is that OneSpot has a real time updating view of events that have occurred (BAM style, no need to refresh the browser). This means that as new events occur the end user can see them and jump to the business flow or take other appropriate actions. Under the covers OneSpot makes use of Oracle Human Workflow to provide a forms interface, but this is not the HWF GUI you know! The HWF GUI screens are much prettier and have more of a social media feel about them due to their use of images and pulling in relevant related information. If you are at OOW I strongly recommend you visit Matt or John at the Rubicon Red stand and ask, no, demand, a demo of OneSpot!

    Read the article

  • Microsoft Desktop Player is a Valuable Tool for IT Pro’s

    - by Mysticgeek
    If you are an IT professional, Microsoft has introduced a new education tool worth knowing about: the MS Desktop Player. Today we take a look at what it has to offer, from webcasts, white papers, and training videos to more.

    Microsoft Desktop Player

    You can run the player from the website (shown here) or download the application for use on your local machine (link below). It allows you to easily access MS training and information in a central interface. To get the desktop version, download the .msi file from the site… And run through the installer…

    When you first start out, enter whether you're an IT Pro or a Developer, and your role. Then you can decide on the resources you're looking for, such as Exchange Server, SharePoint, Windows 7, Security…etc. Here is an example of checking out a podcast on Office 2007 setup and configuration from TechNet Radio. Under Settings you can customize your search results and local resources. This helps you narrow down pertinent information for your needs. If you find something you really like, hover the pointer over the screen and you can add it to your library, share it, send feedback, and check for additional resources. If you don't need items in your library they can be easily deleted. Under the News tab you get previews of Microsoft news items; clicking on one will open the full article in a separate browser. While you're watching a presentation you can show or hide the details related to it.

    Conclusion

    Microsoft Desktop Player is currently in Beta, but has a lot of cool features to offer for your learning needs. You can easily find podcasts, webcasts, and more without having to browse all over the place. In our experience we didn't notice any bugs, and what it offers so far works well. If you're a geek who's constantly browsing TechNet and other Microsoft learning sites, this helps keep everything consolidated in one app.

    Download Microsoft Desktop Player

    Read the article

  • Introduction to Lean Software Development and Kanban Systems

    - by Ben Griswold
    Last year I took myself through a crash course on Lean Software Development and Kanban Systems in preparation for an in-house presentation. I learned a bunch. In this series, I'll be sharing what I learned with you.

    If your career looks anything like mine, you have probably been affiliated with a company or two which pushed requirements gathering and documentation to the nth degree. To add insult to injury, they probably added planning processes (documentation, requirements, policies, meetings, committees) to the extent that they quite possibly hindered any progress. In my opinion, the typical company resembles the quote from Tom DeMarco: It isn't enough just to do things right – we also had to say in advance exactly what we intended to do and then do exactly that.

    In the 1980s, Toyota turned the tables and revolutionized the automobile industry with their approach of "Lean Manufacturing." A massive paradigm shift hit factories throughout the US and Europe. Mass production and scientific management techniques from the early 1900s were questioned as Japanese manufacturing companies demonstrated that 'Just-in-Time' was a better paradigm. The widely adopted Japanese manufacturing concepts came to be known as 'lean production'. Lean Thinking capitalizes on the intelligence of frontline workers, believing that they are the ones who should determine and continually improve the way they do their jobs. Lean puts its main focus on people and communication – if the people who produce the software are respected and they communicate efficiently, it is more likely that they will deliver a good product and the final customer will be satisfied. In time, the abstractions behind lean production spread to logistics, and from there to the military, to construction, and to the service industry. As it turns out, principles of lean thinking are universal and have been applied successfully across many disciplines. Lean has been adopted by companies including Dell, FedEx, Lens Crafters, LLBean, SW Airlines, Digital River and eBay.

    Lean thinking got its name from a 1990s best seller called The Machine That Changed the World: The Story of Lean Production. This book chronicles the movement of automobile manufacturing from craft production to mass production to lean production. The application of lean principles to software is most closely associated with Tom and Mary Poppendieck. Here's one of their books: Implementing Lean Software Development: From Concept to Cash.

    Our in-house presentations are supposed to run no more than 45 minutes. I really cranked and got through my 87 slides in just under an hour. Of course, I had to cheat a little – I only covered the 7 principles and a single practice. In the next part of the series, we'll dive into Principle #1: Eliminate Waste. And I am going to be a little obnoxious about listing my Lean and Kanban references with every series post. The references are great and they deserve this sort of attention.

    Read the article

  • VS.NET 2010 SP1, Win 7, Parallels, and a MBP–Hell, my friends…HELL!

    - by D'Arcy Lussier
    LightSwitch Beta 2 is out. That's how all this started. All I wanted was to install it on my MBP's Win7 Parallels VM. But as I'm finding with running a Win7 VM on a MBP, nothing is as easy as it should be.

    First my MBP froze during the SP1 installation. Not my VM crashing, the entire machine freezing…no mouse, nothing. Had to do a hard reset. BLECH. Then we're back and I try to re-install SP1 (since the first try obviously failed). I get met with a dialog asking me where silverlight_sdk.msi was. It was *nowhere*! So I hit the net and download it from Microsoft's site. Unfortunately, it only downloads an exe and not the individual files which would include the msi. Here's what I did:

    - Download the tools for Silverlight 4 (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=b3deb194-ca86-4fb6-a716-b67c2604a139&displaylang=en)
    - Run it, but don't hit the install or next button when the dialog comes up
    - Look in your file structure for a folder with a weird name…bunch of numbers and letters. This is a temp folder that the exe creates and dumps all the necessary setup files into, and clears away after it's done.
    - Inside this folder you'll find the silverlight_sdk.msi (hooray!). Just copy it to a different location on the C drive. You can then cancel installation.

    Ok, so that takes care of that…but then running the SP1 installer I get hit with *another* dialog asking for the WCF RIA Services SP1 msi. Now it looks like this MSI is part of the Silverlight Tools package because you'll see the MSI, but the VS.NET 2010 SP1 installer will thumb its nose at this unworthy msi…for whatever reason. So instead, go here: http://www.silverlight.net/getstarted/riaservices/ …and click on the "Install WCF Ria Services Sp1…" option. This downloads the msi, which you should save to your C drive and direct the VS.NET 2010 SP1 installer to.

    Then, if you've done all that, been good all year, and not made any little children cry, you *might* just be able to install VS.NET 2010 SP1 on your Parallels VM. If you were playing that "Take a shot every time he writes VS.NET 2010 Sp1" drinking game, then you're drunk…which is a better place to be than where I am right now: watching the installation progress bar slowly creep to completion, hoping there's no more surprises in store. D
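    The manual hunt for the installer's temporary folder can also be scripted. The snippet below is a hypothetical Python helper of my own, not something from the post: it assumes the extracted folder lands directly under the root of the C: drive or under %TEMP%, and it simply looks for a subfolder containing silverlight_sdk.msi and copies the file somewhere safe before you cancel the installer:

        import os
        import shutil

        destination = r"C:\Installers"  # hypothetical drop location
        search_roots = ["C:\\", os.environ.get("TEMP", r"C:\Windows\Temp")]

        os.makedirs(destination, exist_ok=True)
        for root in search_roots:
            for entry in os.listdir(root):
                candidate = os.path.join(root, entry, "silverlight_sdk.msi")
                if os.path.isfile(candidate):
                    shutil.copy(candidate, destination)
                    print("Copied", candidate, "to", destination)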

    Read the article

  • links for 2011-03-15

    - by Bob Rhubart
    Dr. Frank Munz: Resize AWS EC2 Cloud Instances
    Dr. Munz says: "You cannot dynamically resize a running cloud instance. E.g. there is no API call to ask for 2.2 GHz CPU speed instead of 1.8 GHz or to dynamically add another 3.5 GB of RAM."
    (tags: oracle cloud amazon ec2)

    Roddy Rodstein: Oracle VM Manager Architecture and Scalability
    Rodstein says: "Oracle VM Manager can be installed in an all-in-one configuration using the default Oracle 10g Express Database or in a more traditional two tier architecture with an OC4J web tier and a 10 or 11g database tier."
    (tags: oracle otn virtualization oraclevm)

    Mark Nelson: Getting started with Continuous Integration for SOA projects
    Nelson says: "I am exploring how to use Maven and Hudson to create a continuous integration capability for SOA and BPM projects. This will be the first post of several on this topic, and today we will look at setting up some simple continuous integration for a single SOA project."
    (tags: oracle maven hudson soa bpm)

    5 New Java Champions (The Java Source)
    Tori Wieldt shares the big news. Congratulations to new Java Champs Jonas Bonér, James Strachan, Rickard Oberg, Régina ten Bruggencate, and Clara Ko.
    (tags: oracle java)

    Alert for Forms customers running Oracle Forms 10g (Grant Ronald's Blog)
    Ronald says: "While you might have been happily running your Forms 10g applications for about 5 years or so now, the end of premier support is creeping up and you need to start planning for a move to Oracle Forms 11g."
    (tags: oracle oracleforms)

    Brenda Michelson: Enterprise Architecture Rant #4,892
    "I'm increasingly concerned about the macro-direction of our field, as we continue to suffer ivory tower enterprise architecture punditry, rigid frameworks and endless philosophical waxing." - Brenda Michelson
    (tags: entarch enterprisearchitecture ivorytower)

    Amitabh Apte: Enterprise Architecture - Different Perspectives
    "Business does not need Enterprise Architecture," says Apte, "it needs value and outcomes from the EA function."
    (tags: entarch enterprisearchitecture)

    First Ever MySQL on Windows Online Forum - March 16, 2011 (Oracle's MySQL Blog)
    Monica Kumar shares the details.
    (tags: oracle mysql mswindows)

    Jeff Davies: Running Multiple WebLogic and OSB Domains
    "There is a small 'gotcha' if you want to create multiple domains on a development machine," says Jeff Davies. But don't worry - there's a solution.
    (tags: oracle soa osb weblogic servicebus)

    The Arup Nanda Blog: Good Engineering
    "Engineering is not about being superficially creative," Nanda says, "it's about reliability and trustworthiness."
    (tags: oracle engineering software technology)

    Welcome to the SOA & E2.0 Partner Community Forum (SOA Partner Community Blog)
    (tags: ping.fm)

    Read the article

  • You do not need a separate SQL Server license for a Standby or Passive server - this Microsoft White Paper explains all

    - by tonyrogerson
    If you were in any doubt at all about whether you need to license Standby / Passive Failover servers, then the White Paper "Do Not Pay Too Much for Your Database Licensing" will settle those doubts. I've had debates before with people thinking you can only have a single instance as a standby machine; that's just wrong. You could have a scenario where you have a 2 node active/passive cluster with database mirroring and log shipping (a total of 4 SQL Server instances) – in that set up you only need to buy one physical license, so long as the standby nodes have the same or fewer physical processors (cores are irrelevant). So next time your supplier suggests you need a license for your standby box, tell them you don't and educate them by pointing them to the white paper. For clarity I've copied the extract below from the White Paper.

    Extract from "Do Not Pay Too Much for Your Database Licensing"

    Standby Server: Customers often implement a standby server to make sure the application continues to function in case the primary server fails. The standby server continuously receives updates from the primary server and will take over the role of primary server in case of failure in the primary server. Following are comparisons of how each vendor supports standby server licensing.

    SQL Server: Customers do not need to license the standby (or passive) server provided that the number of processors in the standby server is equal to or less than those in the active server.

    Oracle DB: Oracle requires the customer to fully license both active and standby servers even though the standby server is essentially idle most of the time.

    IBM DB2: IBM licensing on the standby server is quite complicated and is different for every edition of DB2. For Enterprise Edition, a minimum of 100 PVUs or 25 Authorized Users is needed to license the standby server.

    The following graph compares prices based on a database application with two processors (dual-core) and 25 users with one standby server. [chart snipped]

    Note: All prices are based on newest Intel Xeon Nehalem processor database pricing for purchases within the United States and are in United States dollars. Pricing is based on information available on vendor Web sites for Enterprise Edition.

    Microsoft SQL Server Enterprise Edition: 25 users (CALs) x $164 / CAL + $8,592 / Server = $12,692 (no need to license standby server)

    Oracle Enterprise Edition (base license without options): Named User Plus minimum (25 Named Users Plus per Core) = 25 x 2 = 50 Named Users Plus x $950 / Named User Plus x 2 servers = $95,000

    IBM DB2 Enterprise Edition (base license without feature pack): Need to purchase 125 Authorized Users (400 PVUs / 100 PVUs = 4 x 25 = 100 Authorized Users + 25 Authorized Users for standby server) = 125 Authorized Users x $1,040 / Authorized User = $130,000
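    The arithmetic in the extract is easy to check. Here is a small Python sketch of my own, using only the list prices quoted above, that reproduces the three totals:

        # List prices as quoted in the white paper extract above.
        users = 25

        # SQL Server EE: per-user CALs plus one server license; the standby server needs no license.
        sql_server = users * 164 + 8592

        # Oracle EE: 25 Named Users Plus per core x 2 cores, licensed on both servers.
        oracle = (25 * 2) * 950 * 2

        # IBM DB2 EE: 100 Authorized Users for the primary (400 PVUs / 100 PVUs x 25)
        # plus 25 Authorized Users for the standby server.
        db2 = (100 + 25) * 1040

        print(sql_server, oracle, db2)  # 12692 95000 130000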

    Read the article

  • Remove the Lock Icon from a Folder in Windows 7

    - by Trevor Bekolay
    If you've been playing around with folder sharing or security options, then you might have ended up with an unsightly lock icon on a folder. We'll show you how to get rid of that icon without over-sharing it. The lock icon in Windows 7 indicates that the file or folder can only be accessed by you, and not any other user on your computer. If this is desired, then the lock icon is a good way to ensure that those settings are in place. If this isn't your intention, then it's an eyesore. To remove the lock icon, we have to change the security settings on the folder to allow the Users group to, at the very least, read from the folder. Right-click on the folder with the lock icon and select Properties. Switch to the Security tab, and then press the Edit… button. A list of groups and users that have access to the folder appears. Missing from the list will be the "Users" group. Click the Add… button. The next window is a bit confusing, but all you need to do is enter "Users" into the text field near the bottom of the window. Click the Check Names button. "Users" will change to the location of the Users group on your particular computer. In our case, this is PHOENIX\Users (PHOENIX is the name of our test machine). Click OK. The Users group should now appear in the list of Groups and Users with access to the folder. You can modify the specific permissions that the Users group has if you'd like – at the minimum, it must have Read access. Click OK. Keep clicking OK until you're back at the Explorer window. You should now see that the lock icon is gone from your folder! It may be a small aesthetic nuance, but having that one folder stick out in a group of other folders is needlessly distracting. Fortunately, the fix is quick and easy, and does not compromise the security of the folder!

    Read the article

  • Grub2 -- Dualboot Ubuntu LTS 12.04 and Windows 7 -- Detects two Windows 7 (loader) entries

    - by DarkIron112
    this is the first question I have ever asked the Ubuntu Community. :D I'm fairly new to Ubuntu, but I understand the basics and know how to navigate the Terminal. I also know how to research my problems before asking for help. I have scoured the internet high and low and learned much of how Grub2 works. But nothing has helped me to solve my problem.

    My problem is this: I have a computer that has three hard drives. It previously had Windows XP, but I upgraded to Windows 7. I also installed Ubuntu 12.04 LTS (Precise Pangolin). During my installation of Windows 7, there was a failure and I had to restart the installation. Afterwards, I installed Ubuntu. After some trouble removing all traces of the XP OS (Ubuntu auto-detected it, but not Windows 7) I got the two OSes working flawlessly. Or, almost.

    When booting up, Grub2 used to display Ubuntu, Ubuntu Recovery Mode, Other Versions of Linux, memtest, followed by "Windows 7 (loader) on /dev/sda1" and "Windows 7 (loader) on /dev/sdb1". I eventually removed Recovery Mode, Other Versions, and Memtest. Now, when I run:

        sudo update-grub

    I get this print-out:

        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-3.2.0-26-generic
        Found initrd image: /boot/initrd.img-3.2.0-26-generic
        Found Windows 7 (loader) on /dev/sda1
        Found Windows 7 (loader) on /dev/sdb1

    I would like to remove "Windows 7 (loader) on /dev/sda1", as it is a broken entry that shouldn't exist, and must have been installed during my first Windows 7 attempt. I cannot find a Windows 7 entry in /etc/grub.d... And I don't know where to look. Here is a layout of my hard drives:

        /dev/sda1/ (1.82 TiB), NTFS ("Media")
        /dev/sdb1/ (100 MiB), NTFS ("System Reserved")
        /dev/sdb2/ (149 GiB), NTFS ("Windows 7")
        /dev/sdb3/ (149 GiB), Extended (" ")
        /dev/sdb4/ (145 GiB), ext4 (" ")
        /dev/sdb5/ (4 GiB), linux-swap (" ")
        /dev/sdc1/ (488.28 GiB), NTFS ("Downloads")
        /dev/sdc2/ (488.28 GiB), NTFS ("AltMedia")
        /dev/sdc3/ (886.45 GiB), NTFS ("Personal")
        unallocated (2.09 MiB), unallocated

    What I think has happened: Windows 7 installed first and badly. I installed it again. The first time, there was Windows XP to guide where the bootloader went, so it was put on /dev/sdb1/. But the second time no such guide existed, so the machine put another bootloader on /dev/sda1/. sda1, by the way, is the only partition on a 2TB drive. No boot record partition appears to exist according to gedit. I'm not sure where Grub2 is getting this information from. But, there it is.

    Is there anything somebody can do to help me? Or, is there any more information I should add? Thank you, community!

    Read the article

  • clear explanation sought: throw() and stack unwinding

    - by Jerry Gagelman
    I'm not a programmer but have learned a lot watching others. I am writing wrapper classes to simplify things with a really technical API that I'm working with. Its routines return error codes, and I have a function that converts those to strings:

        static const char* LibErrString(int errno);

    For uniformity I decided to have members of my classes throw an exception when an error is encountered. I created a class:

        struct MyExcept : public std::exception {
            const char* errstr_;
            const char* what() const throw() { return errstr_; }
            MyExcept(const char* errstr) : errstr_(errstr) {}
        };

    Then, in one of my classes:

        class Foo {
        public:
            void bar() {
                int err = SomeAPIRoutine(...);
                if (err != SUCCESS)
                    throw MyExcept(LibErrString(err));
                // otherwise...
            }
        };

    The whole thing works perfectly: if SomeAPIRoutine returns an error, a try-catch block around the call to Foo::bar catches a standard exception with the correct error string in what(). Then I wanted the member to give more information:

        void Foo::bar() {
            char adieu[128];
            int err = SomeAPIRoutine(...);
            if (err != SUCCESS) {
                std::strcpy(adieu, "In Foo::bar... ");
                std::strcat(adieu, LibErrString(err));
                throw MyExcept((const char*)adieu);
            }
            // otherwise...
        }

    However, when SomeAPIRoutine returns an error, the what() string returned by the exception contains only garbage. It occurred to me that the problem could be due to adieu going out of scope once the throw is called. I changed the code by moving adieu out of the member definition and making it an attribute of the class Foo. After this, the whole thing worked perfectly: a try-catch block around a call to Foo::bar that catches an exception has the correct (expanded) string in what().

    Finally, my question: what exactly is popped off the stack (in sequence) when the exception is thrown in the if-block and the stack "unwinds"? As I mentioned above, I'm a mathematician, not a programmer. I could use a really lucid explanation of what goes onto the stack (in sequence) when this C++ gets converted into running machine code.

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
    - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
    - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    - To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
    - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
    - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations:

    - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
    - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
    - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object?
    - It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.
    - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    - Present the same set of global symbols, with the same ELF versioning, as the real object.
    - Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
    - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
    - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
    - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
    - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
    - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
    - Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
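    As a toy illustration of the first script's job, here is roughly what "read the stylized comments and emit C glue" might look like. This is my own sketch in Python rather than the actual perl script, and it glosses over real-world details such as the leading-underscore aliasing visible in the example above, bss placement, and ELF versioning:

        import re
        import sys

        # Match "# DATA(arch1,arch2) name size" comments and plain "name;" symbol lines.
        DATA_RE = re.compile(r'^#\s*DATA\(([^)]*)\)\s+(\S+)\s+(0[xX][0-9a-fA-F]+|\d+)')
        SYM_RE = re.compile(r'^\s*([A-Za-z_][A-Za-z0-9_]*)\s*;')

        def emit_glue(lines, arch):
            data_sizes = {}
            for line in lines:
                m = DATA_RE.match(line)
                if m and arch in [a.strip() for a in m.group(1).split(',')]:
                    data_sizes[m.group(2)] = int(m.group(3), 0)
            for line in lines:
                m = SYM_RE.match(line)
                if not m:
                    continue
                name = m.group(1)
                if name in data_sizes:
                    # Dummy data definition with the real object's size.
                    print('char %s[%d];' % (name, data_sizes[name]))
                else:
                    # An empty function body is enough for a stub.
                    print('void %s(void) { }' % name)

        if __name__ == '__main__':
            with open(sys.argv[1]) as f:
                emit_glue(f.readlines(), sys.argv[2])

    The generated C file would then be compiled and linked into the stub shared object and thrown away, as the plan above describes.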
Ultimately though, the result was unsatisfactory as a basis for a real product. There were several issues:

- The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern.
- The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code.
- A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
- A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that it wasn't a real object. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try to run a program that loads one.

At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

    6916788 ld version 2 mapfile syntax
    PSARC/2009/688 Human readable and extensible ld mapfile syntax

In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place to ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

    6916796 OSnet mapfiles should use version 2 link-editor syntax

That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

    We anticipate adding additional features to the new mapfile language that will be
    applicable to ON, and which will require all sharable object mapfiles to use the
    new syntax.
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

    % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
        -ztext -zdefs -Bdirect ...

    real        0.019708910
    user        0.010101680
    sys         0.008528431

In order to go from prototype to an integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features...

...and so, I backed away, put it down for a few months and did other work...

...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

Without stubs, the following gives a simplified high level view of how Solaris is built:

- An initially empty directory, known as the proto and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
- A top level setup rule creates the proto area and performs the operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
- Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
- Subsequent passes run lint and do packaging.

Given this structure, the additions to use stub objects are:

- A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
- A new target, called stub, is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
- A new target, called stubinstall, is added to the Makefiles. It delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the plumbing used by the existing install rule.
- The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
- All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and it needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just assume. From first principles, based on all those removed dependency rules in the library makefiles, the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

- Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
- It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings for the stub and non-stub cases were just too suspiciously identical.
- Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

    6993877 ld should produce stub objects
    PSARC/2010/397 ELF Stub Objects

followed by the work to convert the ON consolidation in snv_161 (February 2011) with

    7009826 OSnet should use stub objects
    4631488 lib/Makefile is too patient: .WAITs should be reduced

This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions were discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward

Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
    In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost performance of simple RSS-feed aggregator. I will use here only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how every optimization affects performance of feed aggregator. Feed aggregator Our feed aggregator works as follows: Load list of blogs Download RSS-feed Parse feed XML Add new posts to database Our feed aggregator is run by task scheduler after every 15 minutes by example. We will start our journey with serial implementation of feed aggregator. Second step is to use task parallelism and parallelize feeds downloading and parsing. And our last step is to use data parallelism to parallelize database operations. We will use Stopwatch class to measure how much time it takes for aggregator to download and insert all posts from all registered blogs. After every run we empty posts table in database. Serial aggregation Before doing parallel stuff let’s take a look at serial implementation of feed aggregator. All tasks happen one after other. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();           for (var index = 0; index <blogs.Count; index++)         {              ImportFeed(blogs[index]);         }     }       private void ImportFeed(BlogDto blog)     {         if(blog == null)             return;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                 }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)         {             SaveRssFeedItem(item, blog.Id, blog.CreatedById);         }     }       private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } Serial implementation of feed aggregator downloads and inserts all posts with 25.46 seconds. Task parallelism Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from MSDN page Task Parallelism (Task Parallel Library) and Wikipedia page Task parallelism. Although finding parts of code that can run safely in parallel without synchronization issues is not easy task we are lucky this time. Feeds import and parsing is perfect candidate for parallel tasks. We can safely parallelize feeds import because importing tasks doesn’t share any resources and therefore they don’t also need any synchronization. 
After getting the list of blogs we iterate through the collection and start new TPL task for each blog feed aggregation. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index <blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {          var uri = new Uri(blog.RssUrl);          var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)          {              SaveRssFeedItem(item, blog.Id, blog.CreatedById);          }     }     private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } You should notice first signs of the power of TPL. We made only minor changes to our code to parallelize blog feeds aggregating. On my machine this modification gives some performance boost – time is now 17.57 seconds. Data parallelism There is one more way how to parallelize activities. Previous section introduced task or operation based parallelism, this section introduces data based parallelism. By MSDN page Data Parallelism (Task Parallel Library) data parallelism refers to scenario in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel – imported feed entries. As checking for feed entry existence and inserting it if it is missing from database doesn’t affect other entries the imported feed entries collection is ideal candidate for parallelization. 
internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index <blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           feed.Channel.Items.AsParallel().ForAll(a =>         {             SaveRssFeedItem(a, blog.Id, blog.CreatedById);         });      }        private void ImportAtomFeed(BlogDto blog)      {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           feed.Entries.AsParallel().ForAll(a =>         {              SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);         });      } } We did small change again and as the result we parallelized checking and saving of feed items. This change was data centric as we applied same operation to all elements in collection. On my machine I got better performance again. Time is now 11.22 seconds. Results Let’s visualize our measurement results (numbers are given in seconds). As we can see then with task parallelism feed aggregation takes about 25% less time than in original case. When adding data parallelism to task parallelism our aggregation takes about 2.3 times less time than in original case. More about TPL and PLINQ Adding parallelism to your application can be very challenging task. You have to carefully find out parts of your code where you can safely go to parallel processing and even then you have to measure the effects of parallel processing to find out if parallel code performs better. If you are not careful then troubles you will face later are worse than ones you have seen before (imagine error that occurs by average only once per 10000 code runs). Parallel programming is something that is hard to ignore. Effective programs are able to use multiple cores of processors. Using TPL you can also set degree of parallelism so your application doesn’t use all computing cores and leaves one or more of them free for host system and other processes. And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In next major version all .NET languages will have built-in support for parallel programming. There will be also new language constructs that support parallel programming. 
Currently you can download Visual Studio Async to get some idea about what is coming.

Conclusion

Parallel programming is very challenging, but the good tools offered by Visual Studio and the .NET Framework make it much easier for us. In this posting we started with a feed aggregator that imports feed items in serial mode. In two steps we parallelized feed importing and entry inserting, gaining a 2.3-times improvement in performance. Although this number is specific to my test environment, it clearly shows that parallel programming can raise the performance of your application significantly.

    Read the article

  • TechEd 2012: Day 3 &ndash; Morning TFS

    - by Tim Murphy
My morning sessions for day three were dominated by Team Foundation Server.  This has been a hot topic for our clients lately, so it really struck a chord.

The speaker for the first session was from Boeing.  It was nice to hear how a company mixes both agile and waterfall project management.  The approaches that he presented were very pragmatic.  For their needs, reporting was the crucial part of their decision to use TFS.  This was interesting, since this is probably the last aspect that most shops would think about.

The challenge of getting users to adopt TFS was brought up by the audience.  As with the other discussion points, he took a very level headed stance.  The approach he was prescribing was to eat the elephant a bite at a time instead of all at once.  If you try to convert your entire shop at once, the culture shock will most likely kill the effort.

Another key point he reminded us of is that you need to make sure that standards and compliance are taken into account when you set up TFS.  If you don't implement a tool, and processes around it, that comply with the standards bodies that govern your business, you are in for a world of hurt.

Ultimately, the reason they chose TFS was that it was the first tool that incorporated all the ALM features they needed.  It also reduced licensing costs, since they would otherwise have had to buy several different tools to complete the same tasks.  They got to this point by doing an industry evaluation.  Although TFS came out on top, he said that it still has a big gap in the Java area.  Of course, in this market there are vendors helping to close that gap.

The second session was on how continuous feedback in agile is a new focus in VS2012.  The problems they intended to address included cycle time, average time to repair, and root cause analysis.  The speakers fired features at us as if they were firing a machine gun.  I will just say that I am looking forward to digging into the product after seeing this presentation.  Beyond that, I will simply list some of the key features that caught my attention:

- Ability to link documents into tasks as artifacts
- Web access portal
- PowerPoint storyboards
- Exploratory testing
- Request feedback (allows users to record notes, screen shots and video/audio)

See you after the second half.

del.icio.us Tags: TechEd,TechEd 2012,TFS,Team Foundation Server

    Read the article

  • Looking for best practice for version numbering of dependent software components

    - by bit-pirate
We are trying to decide on a good way to do version numbering for software components that depend on each other. Let's be more specific: software component A is firmware running on an embedded device and component B is its respective driver for a normal PC (Linux/Windows machine). They communicate with each other using a custom protocol. Since our product is also targeted at developers, we will offer stable and unstable (experimental) versions of both components (the firmware is closed-source, while the driver is open-source). Our biggest difficulty is how to handle API changes in the communication protocol. While we were implementing a compatibility check in the driver - it checks whether the firmware version is compatible with the driver's version; a rough sketch follows at the end of this question - we started to discuss multiple ways of version numbering. We came up with one solution, but we also felt like we were reinventing the wheel. That is why I'd like to get some feedback from the programmer/software developer community, since we think this is a common problem.

So here is our solution: we plan to follow the widely used major.minor.patch version numbering and to use even/odd minor numbers for the stable/unstable versions. If we introduce changes in the API, we will increase the minor number. This convention leads to the following example situation: the current stable branch is 1.2.1 and unstable is 1.3.7. Now, a new patch for unstable changes the API, which causes the new unstable version number to become 1.5.0. Once the unstable branch is considered stable, let's say at 1.5.3, we will release it as 1.4.0.

I would be happy about an answer to any of the related questions below:

- Can you suggest a best practice for handling the issues described above?
- Do you think our "custom" convention is good?
- What changes would you apply to the described convention?

Thanks a lot for your feedback!

PS: Since I'm new here, I can't create new tags (e.g. best-practice). So, I'm wondering if best-pactice is just misspelled or I don't get its meaning.
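For reference, here is a minimal sketch of the kind of driver-side compatibility check we have in mind. All names are illustrative, and the rule shown (major and minor must match, since any minor bump signals an API change under our scheme) is a simplification rather than what we would actually ship:

    #include <stdbool.h>
    #include <stdint.h>

    /* Version triple reported by the firmware over the custom protocol. */
    struct fw_version {
        uint16_t major;
        uint16_t minor;   /* even = stable branch, odd = unstable branch */
        uint16_t patch;
    };

    /* Protocol level this driver build was compiled against. */
    #define DRV_PROTO_MAJOR  1
    #define DRV_PROTO_MINOR  4

    /*
     * Accept the firmware only if major and minor match: under our scheme
     * an API change bumps the minor number, so a mismatch means the
     * protocol differs. Patch levels are ignored for compatibility.
     */
    static bool fw_is_compatible(const struct fw_version *fw)
    {
        return fw->major == DRV_PROTO_MAJOR && fw->minor == DRV_PROTO_MINOR;
    }

Whether the stable release (e.g. 1.4.x) and the unstable branch it was promoted from (1.5.x) should count as the same protocol level is exactly the kind of detail we are unsure about.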

    Read the article

  • Ubuntu Preseed set Norwegian Keyboard?

    - by Vangelis Tasoulas
I have been trying for a couple of days now to build a fully automated unattended installation. I managed to make it work with Ubuntu/Cobbler and a preseed file, but I cannot set the correct keyboard layout, which is Norwegian in this case. I am doing the tests on a virtual machine, and when I go through a normal manual installation (no preseed) everything works fine. When I use the preseed file, I always end up with an "English (US)" keyboard no matter which of the many different options I have tried. I can change it manually with the "dpkg-reconfigure keyboard-configuration" command, but that is not the point: it should be handled automatically using the preseed file. I am using DEBCONF_DEBUG=5 when the grub is loading, and as I see in the "/var/log/installer/syslog" file after the installation has finished, the preseeding commands are accepted. Can anyone help on this? The preseed file I am using follows:

    d-i debian-installer/country string NO
    d-i debian-installer/language string en_US:en
    d-i debian-installer/locale string en_US.UTF-8
    d-i console-setup/ask_detect boolean false
    d-i keyboard-configuration/layout select Norwegian
    d-i keyboard-configuration/variant select Norwegian
    d-i keyboard-configuration/modelcode string pc105
    d-i keyboard-configuration/layoutcode string no
    d-i keyboard-configuration/xkb-keymap select no
    d-i netcfg/choose_interface select auto
    d-i netcfg/get_hostname string myhostname
    d-i netcfg/get_domain string simula.no
    d-i hw-detect/load_firmware boolean true
    d-i mirror/country string manual
    d-i mirror/http/hostname string ftp.uninett.no
    d-i mirror/http/directory string /ubuntu
    d-i mirror/http/proxy string http://10.0.1.253:3142/
    d-i mirror/codename string precise
    d-i mirror/suite string precise
    d-i clock-setup/utc boolean true
    d-i time/zone string Europe/Oslo
    d-i clock-setup/ntp boolean true
    d-i clock-setup/ntp-server string 10.0.1.254
    d-i partman-auto/method string lvm
    partman-auto-lvm partman-auto-lvm/new_vg_name string vg0
    d-i partman-auto/purge_lvm_from_device boolean true
    d-i partman-lvm/device_remove_lvm boolean true
    d-i partman-md/device_remove_md boolean true
    d-i partman-lvm/confirm boolean true
    d-i partman-lvm/confirm_nooverwrite boolean true
    d-i partman-auto-lvm/guided_size string max
    d-i partman-auto/choose_recipe select 30atomic
    d-i partman/default_filesystem string ext4
    d-i partman-partitioning/confirm_write_new_label boolean true
    d-i partman/choose_partition select finish
    d-i partman/confirm boolean true
    d-i partman/confirm_nooverwrite boolean true
    d-i partman/mount_style select uuid
    d-i passwd/root-login boolean false
    d-i passwd/make-user boolean true
    d-i passwd/user-fullname string vangelis
    d-i passwd/username string vangelis
    d-i passwd/user-password-crypted password $6$asdafdsdfasdfasdf
    d-i passwd/user-uid string
    d-i user-setup/allow-password-weak boolean false
    d-i passwd/user-default-groups string adm cdrom dialout lpadmin plugdev sambashare
    d-i user-setup/encrypt-home boolean false
    d-i apt-setup/restricted boolean true
    d-i apt-setup/universe boolean true
    d-i apt-setup/backports boolean true
    d-i apt-setup/services-select multiselect security
    d-i apt-setup/security_host string security.ubuntu.com
    d-i apt-setup/security_path string /ubuntu
    tasksel tasksel/first multiselect Basic Ubuntu server, OpenSSH server
    d-i pkgsel/include string build-essential htop vim nmap ntp
    d-i pkgsel/upgrade select safe-upgrade
    d-i pkgsel/update-policy select none
    d-i pkgsel/updatedb boolean true
    d-i grub-installer/only_debian boolean true
    d-i grub-installer/with_other_os boolean true
    d-i finish-install/keep-consoles boolean false
    d-i finish-install/reboot_in_progress note
    d-i cdrom-detect/eject boolean true
    d-i debian-installer/exit/halt boolean false
    d-i debian-installer/exit/poweroff boolean false

    Read the article

  • SQLAuthority News – A Quick Note on @Pluralsight Video – Call Me Maybe Developer Way

    - by pinaldave
I write a lot about how important learning and training are.  Any of my readers will know that I think the key to success is staying current with your education and taking every opportunity to increase your “tool kit” of skills.  I hope that I have not given the impression that it is all in the employees' hands to make sure they are happy and satisfied at their jobs. I also firmly believe that a good boss will make good employees.  A boss who is good at communicating and leading, who knows how to nip problems in the bud and allocate resources wisely, will have a well-oiled machine.  This means happy employees and a great work environment.

It is important to have a healthy work environment because you will not succeed without one.  Successful businesses will always have the type of environment that fosters creativity and has efficient employees.  A healthy environment doesn't force employees to produce results, but allows them to progress and create the results themselves. The result of a healthy work environment is that employees will enjoy their work and then work harder.  This can bring the company more revenue, and hopefully the employees will see the result of their hard work in bonuses and raises.  Money is important, but it is certainly secondary – the important part is the dedication of the employees to their work and to their company.  This is the true key to success.

Any employee who recognizes this description as their working environment should consider themselves fortunate.  They are allowed to grow and do better, and employees being treated fairly can be a rarity in this world.  One company that I believe adheres to this principle is Pluralsight – as evidenced by this fun video. I have blogged about it earlier (check out my cameo at 0:37).

It was great fun to work with the employees at Pluralsight while making this video.  They are a great bunch and clearly have a great work environment – we wouldn't have had this much fun otherwise!  I have to tell you a little bit about making this video.  My wife shot it with her mobile phone, which was certainly a different but exciting experience!  It was hard to get the look of the video right, since I was trying to portray a body builder – this was a little outside of my own personal experience.  I have what I like to call a “healthy” body type, so trying to look extremely fit like some of the other “actors” in this video was a challenge – but I do hope that you all think I succeeded.  All in all, it was great fun to participate in this video and I hope to see my friends at Pluralsight again soon.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Trouble compiling MonoDevelop 4 on Ubuntu 12.04

    - by Mehran
    I'm trying to compile the latest version of MonoDevelop (4.0.9) on my Ubuntu 12.04 and I'm facing errors I can not overcome. Here are my machine's configurations: OS: Ubuntu 12.04 64-bit Mono: version 3.0.12 And here are the commands that I ran to download MonoDevelop: $ git clone git://github.com/mono/monodevelop.git $ cd monodevelop $ git submodule init $ git submodule update And afterwards to compile: ./configure --prefix=`pkg-config --variable=prefix mono` --profile=stable make Then I faced the following errors (sorry if it's long): ... Building ./Main.sln xbuild /verbosity:quiet /nologo /property:CodePage=65001 ./Main.sln /property:Configuration=Debug /home/mehran/git/monodevelop/main/Main.sln: warning : Don't know how to handle GlobalSection MonoDevelopProperties.Debug, Ignoring. : warning CS1685: The predefined type `System.Runtime.CompilerServices.ExtensionAttribute' is defined in multiple assemblies. Using definition from `mscorlib' /usr/lib/mono/4.0/Microsoft.CSharp.targets: error : Compiler crashed with code: 1. : warning CS1685: The predefined type `System.Runtime.CompilerServices.ExtensionAttribute' is defined in multiple assemblies. Using definition from `mscorlib' Editor/IDocument.cs(98,30): warning CS0419: Ambiguous reference in cref attribute `GetOffset'. Assuming `ICSharpCode.NRefactory.Editor.IDocument.GetOffset(int, int)' but other overloads including `ICSharpCode.NRefactory.Editor.IDocument.GetOffset(ICSharpCode.NRefactory.TextLocation)' have also matched PatternMatching/INode.cs(51,37): warning CS1574: XML comment on `ICSharpCode.NRefactory.PatternMatching.PatternExtensions.Match(this ICSharpCode.NRefactory.PatternMatching.INode, ICSharpCode.NRefactory.PatternMatching.INode)' has cref attribute `PatternMatching.Match.Success' that could not be resolved TextLocation.cs(35,23): warning CS0419: Ambiguous reference in cref attribute `Editor.IDocument.GetOffset'. Assuming `ICSharpCode.NRefactory.Editor.IDocument.GetOffset(int, int)' but other overloads including `ICSharpCode.NRefactory.Editor.IDocument.GetOffset(ICSharpCode.NRefactory.TextLocation)' have also matched TypeSystem/FullTypeName.cs(87,24): warning CS0419: Ambiguous reference in cref attribute `ReflectionHelper.ParseReflectionName'. Assuming `ICSharpCode.NRefactory.TypeSystem.ReflectionHelper.ParseReflectionName(string)' but other overloads including `ICSharpCode.NRefactory.TypeSystem.ReflectionHelper.ParseReflectionName(string, ref int)' have also matched TypeSystem/INamedElement.cs(59,24): warning CS0419: Ambiguous reference in cref attribute `ReflectionHelper.ParseReflectionName'. 
Assuming `ICSharpCode.NRefactory.TypeSystem.ReflectionHelper.ParseReflectionName(string)' but other overloads including `ICSharpCode.NRefactory.TypeSystem.ReflectionHelper.ParseReflectionName(string, ref int)' have also matched TypeSystem/IType.cs(50,26): warning CS1584: XML comment on `ICSharpCode.NRefactory.TypeSystem.IType' has syntactically incorrect cref attribute `IEquatable{IType}.Equals(IType)' TypeSystem/IType.cs(319,38): warning CS1580: Invalid type for parameter `1' in XML comment cref attribute `GetMethods(Predicate{IUnresolvedMethod}, GetMemberOptions)' TypeSystem/TypeKind.cs(61,17): warning CS1580: Invalid type for parameter `1' in XML comment cref attribute `IType.GetNestedTypes(Predicate{ITypeDefinition}, GetMemberOptions)' TypeSystem/SpecialType.cs(50,52): warning CS1580: Invalid type for parameter `1' in XML comment cref attribute `IType.GetNestedTypes(Predicate{ITypeDefinition}, GetMemberOptions)' /usr/lib/mono/4.0/Microsoft.CSharp.targets: error : Compiler crashed with code: 1.

    Read the article
