Search Results

Search found 5262 results on 211 pages for 'operation'.


  • Column order can matter

    - by Dave Ballantyne
    Ordinarily, the column order of a SQL statement does not matter: "Select a,b,c from table" will produce the same execution plan as "Select c,b,a from table". However, sometimes it can make a difference. Consider this statement (maxdop is used to make a simpler plan and has no impact on the main point):
    select SalesOrderID, CustomerID, OrderDate, ROW_NUMBER() over (Partition By CustomerId order by OrderDate asc) as RownAsc, ROW_NUMBER() over (Partition By CustomerId order by OrderDate Desc) as RownDesc from Sales.SalesOrderHeader order by CustomerID, OrderDate option(maxdop 1)
    If you look at the execution plan, you will see three sorts: one for RownAsc, one for RownDesc, and the final one for the 'Order by' clause. Sorting is an expensive operation and one that should be avoided if possible. So with this in mind, it may come as some surprise that the optimizer does not re-order operations to group them together when the incoming data is in a similar (if not exactly the same) sorted sequence. A simple change to swap the RownAsc and RownDesc columns produces this statement:
    select SalesOrderID, CustomerID, OrderDate, ROW_NUMBER() over (Partition By CustomerId order by OrderDate Desc) as RownDesc, ROW_NUMBER() over (Partition By CustomerId order by OrderDate asc) as RownAsc from Sales.SalesOrderHeader order by CustomerID, OrderDate option(maxdop 1)
    This results in a different and more efficient query plan with one less sort. The optimizer, although unable to automatically re-order operations, has taken advantage of the data ordering when it is already as required. This is well worth taking advantage of if you have different sorting requirements in one statement: try grouping the functions that require the same order together and save yourself a few extra sorts.

    Read the article

  • asp.net mvc vs angular.js model binding

    - by aw04
    So I've noticed a trend lately of .net web developers using angular.js on the client side of applications and I've become more curious as I play around with angular and compare it to how I would do things in asp.net mvc. I'll give a quick example of what really got me thinking. I recently came across a situation at work (I work in a .net environment) where I needed to create a table bound to a collection of objects that had the ability to add and remove rows/items from the collection. I had an add button that created a new object and appended a row to the end of the table, and a remove button in each row to remove a particular object/row. Using asp.net mvc, I first found myself making an ajax call to the server for each operation, updating the server side model, and refreshing part of the page to show the result in the table. This worked but I didn't really like the idea of calling the server to update the model each time, so I tried to come up with a solution to do this on the client side. It turned out to be quite a task, as I had to generate the html on add with validation and all and the correct indexing for the model binding to work. It got worse on remove, as I ended up with a crazy string replace function to recreate the indexes on each item to satisfy the binding requirements (if an item other than the last is removed, the indexes are no longer correct). Now out of curiosity, I tried to recreate this at home in angular (which I had no experience with) and it took me all of about 10 minutes with simple functions to add and remove items from the client side model. This is just one example, but it seems to me that I'm able to achieve the same results with far fewer calls to the server in angular because of the fact that it binds to a client side model. So my question is, is this a distinct advantage of using a javascript mvc framework or am I somehow under utilizing the power of asp.net mvc and am I right in thinking that these operations should be done on the client and have no business requiring calls to the server?

    Read the article

  • Package manager borked with gforge

    - by Leif Andersen
    I've been having a problem with the package manager. I seemed to have installed gforge, partially, but it keeps giving me errors whenever I install something. (Note that the thing I'm trying to install actually does get installed, but there is always an error returned). Here it is: Creating /etc/gforge/httpd.conf Creating /etc/gforge/httpd.secrets Creating /etc/gforge/local.inc Creating other includes invoke-rc.d: unknown initscript, /etc/init.d/postgresql-8.4 not found. dpkg: error processing gforge-db-postgresql (--configure): subprocess installed post-installation script returned error exit status 100 Errors were encountered while processing: gforge-db-postgresql E: Sub-process /usr/bin/dpkg returned an error code (1) When I try to remove it with: sudo apt-get purge gforge-common I get this: Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: gforge-common* gforge-db-postgresql* 0 upgraded, 0 newly installed, 2 to remove and 9 not upgraded. 1 not fully installed or removed. After this operation, 5,853kB disk space will be freed. Do you want to continue [Y/n]? (Reading database ... 717305 files and directories currently installed.) Removing gforge-db-postgresql ... Replacing config file /etc/postgresql/8.4/main/pg_hba.conf with new version invoke-rc.d: unknown initscript, /etc/init.d/postgresql-8.4 not found. dpkg: error processing gforge-db-postgresql (--purge): subprocess installed pre-removal script returned error exit status 100 Removing gforge-common ... Purging configuration files for gforge-common ... Processing triggers for man-db ... Errors were encountered while processing: gforge-db-postgresql E: Sub-process /usr/bin/dpkg returned an error code (1) And it complains until I do a: sudo apt-get install -f At which point gforge is re-installed. I'm out of ideas, does anyone else have any other ideas with what might be wrong, and more importantly, how I can fix it? Thank you.

    Read the article

  • Handling changes to data types and entries in a database migration

    - by jandjorgensen
    I'm fully redesigning a site that indexes a number of articles with basic search functionality. The previous site was written about a decade ago, and I'm salvaging about 30,000 entries with data stored in less-than-ideal formats. While I'm moving from MSSQL to MySQL, I don't need to make any "live" changes, so this is not a production-level migration issue so much as a redesign. For instance, dates are stored the same as tags/subjects about the articles, but in strings as "YYYYMMDDd" (the lowercase d stands for "date" in the string). Essentially, before or after I move from the previous database format to a new one, I'm going to need to do a lot of replacement of individual entries. While I understand how to do operations with regular expressions in non-database issues, my database experience isn't robust enough to know the best way to handle this. What is the best (or standard) way to handle major changes like this? Is there an SQL operation I should be looking into? Please let me know if the problem isn't clear--I'm not entirely sure what kind of answer I'm looking for.
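    One way to approach this kind of cleanup is to do the transformation in the migration script itself rather than trying to express the string surgery in SQL alone. Below is a minimal Python sketch of that idea; the "YYYYMMDDd" pattern comes from the post, but the comma-separated tag layout and the helper names are assumptions for illustration only.

```python
import re
from datetime import datetime

# Matches the legacy "YYYYMMDDd" date-as-tag format described above,
# e.g. "20040317d" -> 2004-03-17. The trailing "d" marks it as a date.
DATE_TAG = re.compile(r"^(\d{4})(\d{2})(\d{2})d$")

def split_tags(raw_tags):
    """Separate real subject tags from legacy date tags.

    Returns (date_or_None, [remaining tags]). Assumes tags are
    comma-separated in the old schema, which is an assumption.
    """
    date_value = None
    subjects = []
    for tag in (t.strip() for t in raw_tags.split(",") if t.strip()):
        m = DATE_TAG.match(tag)
        if m:
            date_value = datetime(int(m.group(1)), int(m.group(2)), int(m.group(3))).date()
        else:
            subjects.append(tag)
    return date_value, subjects

# Example: a row pulled from the old MSSQL export
date_value, subjects = split_tags("history,20040317d,architecture")
print(date_value, subjects)   # 2004-03-17 ['history', 'architecture']
```

    Running something like this once over the exported rows (or as a fixup pass after a straight dump into MySQL) keeps the per-entry replacements out of the new schema entirely.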

    Read the article

  • Error with APE Server Installation

    - by sadmicrowave
    I was trying to install APE-Server from the .deb file at the ape-server homepage (www.ape-project.org) and I ran into an error so wanted to try removing the installation and reinstalling. I did a sudo apt-get remove ape-server which ran successfully but left ape-server folders in my /etc/ and /etc/init.d locations. Me being an idiot new comer to linux decided that manually delete those folders. Now when I reinstall the ape-server those folders don't get recreated and therefore I cannot send the /etc/init.d/ape-server [option] command because the folder is not found. When I try to sudo apt-get purge (or remove) ape-server I get the following sudo apt-get purge ape-server Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: ape-server* 0 upgraded, 0 newly installed, 1 to remove and 92 not upgraded. 1 not fully installed or removed. After this operation, 1,753kB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 43924 files and directories currently installed.) Removing ape-server ... invoke-rc.d: unknown initscript, /etc/init.d/ape-server not found. dpkg: error processing ape-server (--purge): subprocess installed pre-removal script returned error exit status 100 update-rc.d: /etc/init.d/ape-server: file does not exist dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: ape-server E: Sub-process /usr/bin/dpkg returned an error code (1) My question is; how do I remove all of the ape-server installation packages that were installed so I can reinstall from scratch?

    Read the article

  • Slides and Pictures from PowerShell Saturday Columbus 2012

    - by Brian Jackett
    On March 10th, 2012 the first ever PowerShell Saturday conference took place in Columbus, OH and I couldn’t be happier with the outcome.  We had 100 attendees from 10 different states (the biggest surprise to me) come to see 6 speakers present on a variety of PowerShell topics: introduction, WMI, SharePoint, Active Directory, Exchange, 3rd party products and more. A big thank you also goes out to a number of people.
    Planning committee: Wes Stahler (lead organizer of PowerShell Saturday Columbus, president of Central Ohio PowerShell User Group), Ed “Microsoft Scripting Guy” Wilson, Teresa “The Scripting Wife” Wilson, Ashley McGlone, Brian T. Jackett (myself).
    Speakers: Ed Wilson, Ashley McGlone, James Brundage, Trevor Sullivon, Daniel Cruz.
    Volunteer: Lisa Gardner, fellow Microsoft PFE, who volunteered her time on a Saturday to assist with smooth operation of the day.
    Facility Coordination: Debbie Carrier, facilities coordinator for the Columbus Microsoft Office, who helped us out greatly with the venue.
    Slides and Script Samples: I presented my session on “PowerShell for the SharePoint 2010 Developer”.  Below you can download the slides and script samples.
    Photos: I wasn’t able to take many pictures (only 3) as I was busy doing my presentation, answering questions, and taking care of random items throughout the day. Pictures on Facebook: click here. Pictures on SkyDrive (higher res): PowerShell Saturday Columbus Mar '12.
    Conclusion: I’m very happy that this first ever PowerShell Saturday was a success.  My fellow PFE and speaker Ashley McGlone also has a short write-up on his blog about the event (click here).  I have heard rumors that there are other cities starting to plan their own local events.  When I hear more details I’ll spread the word here and on Twitter. -Frog Out

    Read the article

  • Best practice for setting Effect parameters in XNA

    - by hichaeretaqua
    I want to ask if there is a best practice for setting effect parameters in XNA. Or in other words, what exactly happens when I call pass.Apply()? I can imagine multiple scenarios: Each time Apply() is called, all effect parameters are transferred to the GPU, and therefore it makes no real difference how often I set a parameter. Each time Apply() is called, only the parameters that were set since the last Apply() are transferred, so Set operations that don't actually change a value should be avoided. Each time Apply() is called, only the parameters whose values actually changed are transferred, so caching Set operations is useless. Or the whole question is moot because none of the mentioned ways has any noteworthy impact on game performance. So the final question: is it useful to implement some caching of Set operations like this?
    private Matrix _world;
    public Matrix World
    {
        get { return _world; }
        set
        {
            if (value == _world) return;
            _effect.Parameters["xWorld"].SetValue(value);
            _world = value;
        }
    }
    Thanking you in anticipation

    Read the article

  • Can't boot Windows XP after installing Ubuntu 12.04

    - by Omul Neted
    Here's the situation. I installed Ubuntu using the alongside option. Everything went OK. When I restarted I went straight to Ubuntu and it worked beautifully. When I restarted and tried to enter Windows, the loading screen appeared, and after 3-4 seconds it restarted again. No error, no cursor waiting, no nothing. I looked on the internet for help and found several resources. I tried lilo first, since it seems that many people had their issues solved with it. After lilo, neither Ubuntu nor Windows would start. I installed and used bootinfoscript. The RESULTS.txt can be seen below: https://www.dropbox.com/sh/r3luoa672qe73uq/Mob13HhNiB After that I looked at Boot-Repair. I did as instructed here (Can't boot XP after Ubuntu Installation, how to fix?), meaning I redid the MBR of my Ubuntu install using a generic MBR, with no success. The results of Boot-Repair are in the first link. Now when I restart my computer I don't even get the Windows loading screen, just "Missing operating system / Missing operation system / Operating system not found" and that's it. I did not use the fixboot or fixmbr options because I don't have a Windows CD with drivers capable of seeing my HDD. The usual XP Windows setup tells me that I have no HDD. Please help, I don't know what to do next. This is my first time with Ubuntu or any Linux OS.

    Read the article

  • Can web apps allow fast data-typists to "type-ahead"?

    - by user61852
    In some data entry contexts, I've seen data typists type really fast and know the app they use so well, with such a mechanical quality to their work, that they can "type ahead", i.e. continue typing and "tab-bing" and "enter-ing" faster than the display updates, so that on many occasions they are typing in the data for the next form before it draws itself. Then when this next entry form appears, their keystrokes fill the text boxes and they continue typing, selecting, etc. In contexts like this, this speed is desirable, since these people are really productive. I think this "type ahead of time" is only possible in desktop apps, but I may be wrong. My question is whether this way of handling the keyboard buffer (which in desktop apps requires no extra programming) is achievable in web apps, or is this impossible because of the way web apps work, handle sessions, etc. (network latency and the overhead of generating new web pages)? Edit: By "type ahead" I mean "keyboard type ahead" (typing faster than the next entry form can load), not Google-style suggest-as-you-type. Typeahead is a feature of computers and software (and some typewriters) that enables users to continue typing regardless of program or computer operation—the user may type at whatever speed he or she desires, and if the receiving software is busy at the time it will be called to handle this later. Often this means that keystrokes entered will not be displayed on the screen immediately. This programming technique for handling user input is known as a keyboard buffer.

    Read the article

  • Is NAN suitable for communicating that an invalid parameter was involved in a calculation?

    - by Arman
    I am currently working on a numerical processing system that will be deployed in a performance-critical environment. It takes inputs in the form of numerical arrays (these use the eigen library, but for the purpose of this question that's perhaps immaterial), and performs some range of numerical computations (matrix products, concatenations, etc.) to produce outputs. All arrays are allocated statically and their sizes are known at compile time. However, some of the inputs may be invalid. In these exceptional cases, we still want the computation to run, and we still want the outputs that are not "polluted" by invalid values to be usable. To give an example, let's take the following trivial case (this is pseudo-code):
    Matrix a = {1, 2, NAN, 4}; // this is the "input" matrix
    Scalar b = 2;
    Matrix output = b * a; // this results in {2, 4, NAN, 8}
    The idea here is that 2, 4 and 8 are usable values, but the NAN should signal to the recipient of the data that that entry was involved in an operation with an invalid value and should be discarded (this will be detected via a std::isfinite(value) check before the value is used). Is this a sound way of communicating and propagating unusable values, given that performance is critical and heap allocation is not an option (and neither are other resource-consuming constructs such as boost::optional or pointers)? Are there better ways of doing this? At this point I'm quite happy with the current setup, but I was hoping to get some fresh ideas or productive criticism of the current implementation.
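    Since quiet NaNs propagate through IEEE-754 arithmetic, the behaviour described above is easy to demonstrate. The snippet below is a numpy analogue of the Eigen pseudo-code, an illustration of the idea rather than the poster's actual code.

```python
import numpy as np

# NaN propagates through arithmetic exactly as described above:
# any entry that touches an invalid input stays NaN in the output.
a = np.array([1.0, 2.0, np.nan, 4.0])   # "input" vector with one invalid entry
b = 2.0

output = b * a
print(output)                  # [ 2.  4. nan  8.]

# The consumer filters with an isfinite check before using values,
# mirroring the std::isfinite(value) guard mentioned in the question.
usable = output[np.isfinite(output)]
print(usable)                  # [2. 4. 8.]
```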

    Read the article

  • Is it better to load up a class with methods or extend member functionality in a local subclass?

    - by Calvin Fisher
    Which is better? Class #1: public class SearchClass { public SearchClass (string ProgramName) { /* Searches LocalFile objects, handles exceptions, and puts results into m_Results. */ } DateTime TimeExecuted; bool OperationSuccessful; protected List<LocalFile> m_Results; public ReadOnlyCollection<LocalFile> Results { get { return new ReadOnlyCollection<LocalFile>(m_Results); } } #region Results Filters public DateTime OldestFileModified { get { /* Does what it says. */ } } public ReadOnlyCollection<LocalFile> ResultsWithoutProcessFiles() { return new ReadOnlyCollection<LocalFile> ((from x in m_Results where x.FileTypeID != FileTypeIDs.ProcessFile select x).ToList()); } #endregion } Or class #2: public class SearchClass { public SearchClass (string ProgramName) { /* Searches LocalFile objects, handles exceptions, and puts results into m_Results. */ } DateTime TimeExecuted; bool OperationSuccessful; protected List<LocalFile> m_Results; public ReadOnlyCollection<LocalFile> Results { get { return new ReadOnlyCollection<LocalFile>(m_Results); } } public class SearchResults : ReadOnlyCollection<LocalFile> { public SearchResults(IList<LocalFile> iList) : base(iList) { } #region Results Filters public DateTime OldestFileModified { get { /* Does what it says. */ } } public ReadOnlyCollection<LocalFile> ResultsWithoutProcessFiles() { return new ReadOnlyCollection<LocalFile> ((from x in this where x.FileTypeID != FileTypeIDs.ProcessFile select x).ToList()); } #endregion } } ...with the implication that OperationSuccessful is accompanied by a number of more interesting properties on how the operation went, and OldestFileModified and ResultsWithoutProcessFiles() also have several more siblings in the Results Filters section.

    Read the article

  • Programmer + Drugs =? [closed]

    - by sytycs
    I just read this quote from Steve Jobs: "Doing LSD was one of the two or three most important things I have done in my life." Now I'm wondering: Has there ever been a study where programmers have been given drugs to see if they could produce "better" code? Is there a programming concept which originated from people who were drug users? Do you know of a piece of code which was written by someone under the influence? EDIT So I did a little more research and it turns out Dennis R. Wier actually documented how he took LSD to wrap his head around a coding project: "At one point in the project I could not get an overall viewpoint for the operation of the entire system. It really was too much for my brain to keep all the subtle aspects and processing nuances clear so I could get a processing and design overview. After struggling with this problem for a few weeks, I decided to use a little acid to see if it would enable a breakthrough, because otherwise, I would not be able to complete the project and be certain of a consistent overall design"[1] There is also an interesting article on Wired about Kevin Herbet, who used LSD to solve tough technical problems, and chemist Kary Mullis even said "...that LSD had helped him develop the polymerase chain reaction that helps amplify specific DNA sequences." [2]

    Read the article

  • Constructs for wrapping a hardware state machine

    - by Henry Gomersall
    I am using a piece of hardware with a well defined C API. The hardware is stateful, with the relevant API calls needing to be in the correct order for the hardware to work properly. The API calls themselves will always return, passing back a flag that advises whether the call was successful, or if not, why not. The hardware will not be left in some ill-defined state. In effect, the API calls advise indirectly of the current state of the hardware if the state is not correct to perform a given operation. It seems to be a pretty common hardware API style. My question is this: Is there a well established design pattern for wrapping such a hardware state machine in a high level language, such that consistency is maintained? My development is in Python. Ideally I would like the hardware state machine to be abstracted to a much simpler state machine and wrapped in an object that represents the hardware. I'm not sure what should happen if an attempt is made to create multiple objects representing the same piece of hardware. I apologise for the slight vagueness; I'm not very knowledgeable in this area and so am fishing for assistance with the description as well!
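    A common way to wrap this kind of API is a thin object that tracks an abstracted state and checks transitions before delegating to the C calls. Here is one Python sketch of that idea, assuming a hypothetical low-level binding whose methods return an (ok, reason) tuple; the method names and the simplified OFF/CONFIGURED/RUNNING states are illustrative, not from any real driver.

```python
import threading

class HardwareError(RuntimeError):
    """Raised when the C API reports failure or a call is out of order."""

class Device:
    """Wraps the stateful C API behind a simplified OFF -> CONFIGURED -> RUNNING machine."""

    _instances = {}           # one wrapper per physical device id
    _lock = threading.Lock()

    # allowed transitions of the simplified state machine
    _transitions = {"configure": ("OFF", "CONFIGURED"),
                    "start":     ("CONFIGURED", "RUNNING"),
                    "stop":      ("RUNNING", "CONFIGURED")}

    def __new__(cls, device_id, api):
        # enforce a single wrapper object per piece of hardware
        with cls._lock:
            if device_id not in cls._instances:
                inst = super().__new__(cls)
                inst.device_id, inst.api, inst.state = device_id, api, "OFF"
                cls._instances[device_id] = inst
            return cls._instances[device_id]

    def _call(self, name, *args):
        expected, nxt = self._transitions[name]
        if self.state != expected:
            raise HardwareError(f"{name} not allowed in state {self.state}")
        ok, reason = getattr(self.api, name)(self.device_id, *args)  # hypothetical C binding
        if not ok:
            raise HardwareError(f"{name} failed: {reason}")
        self.state = nxt

    def configure(self, **settings):
        self._call("configure", settings)

    def start(self):
        self._call("start")

    def stop(self):
        self._call("stop")
```

    Whether a second construction should hand back the same wrapper (as the __new__ canonicalisation above does) or raise an error is exactly the open question in the post; the sketch simply picks one option.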

    Read the article

  • A good tool for browser automation/client-side Web scripting

    - by hardmath
    I'm interested in adopting a tool/scripting language to automate some daily tasks connected with fighting forum spammers. A brief overview of these tasks: analyze new registrations and posts on a phpBB forum, and delete or deactivate spammers using a website/community that collects such spam reports. Typically such automation is integrated into the phpBB installation itself, which certainly has its advantages. My approach has the advantage of independent operation, etc. One way to think about this is in terms of browser automation. I've used iOpus iMacros for Firefox (the free version) in the past to respond to individual spammers, but current attacks are highly distributed. My "logic" for pigeonholing spammers vs. nonspammers seems beyond the easy reach of the free version of iMacros. From a more technical perspective one can think about dispensing with the browser altogether and programming GET/POST requests directed to my forum and other Web-based resources. I'm familiar with some scripting languages like Ruby and Lua, but I could be persuaded that a compiled application is better suited for these tasks. However in my experience the dynamic flexibility of interpreted environments is very useful in prototyping and debugging the application logic. So I'm leaning in the direction of scripting languages. Among browsers I favor Firefox and Chrome. I use both Windows and Linux platforms, and if the tool can adapt to an Android platform, it would make a neat demonstration of skills, yes? Thanks in advance for your suggestions!
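    If you go the GET/POST route instead of driving a browser, a scripting language keeps the prototyping loop short. Below is a minimal Python sketch using the requests library; the base URL, form fields and the spam heuristic are placeholders for illustration, not guaranteed to match a real phpBB installation.

```python
import requests

FORUM = "https://example.com/forum"        # placeholder base URL
session = requests.Session()

def login(username, password):
    # Hypothetical login form fields; a real phpBB install may differ.
    session.post(f"{FORUM}/ucp.php?mode=login",
                 data={"username": username, "password": password, "login": "Login"})

def looks_like_spammer(profile_html):
    # Placeholder heuristic: flag profiles whose page contains a link.
    return "http://" in profile_html or "https://" in profile_html

def review_new_members(member_ids):
    flagged = []
    for member_id in member_ids:
        page = session.get(f"{FORUM}/memberlist.php",
                           params={"mode": "viewprofile", "u": member_id})
        if looks_like_spammer(page.text):
            flagged.append(member_id)
    return flagged
```

    The "logic" for pigeonholing spammers then lives in ordinary code you can grow over time, which is where the flexibility of an interpreted environment mentioned in the post pays off.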

    Read the article

  • E-Business Suite - Cloning Basics & AMP Cloning - EMEA/APAC

    - by Annemarie Provisero
    ADVISOR WEBCAST: E-Business Suite - Cloning Basics & AMP Cloning - EMEA/APAC PRODUCT FAMILY: EBS – ATG - Utilities July 19, 2011 at 10:00 am CET, 01:30 pm India, 05:00 pm Japan, 06:00 pm Australia This 1.5-hour session is recommended for technical and functional Users who are interested to get an generic overview about the Cloning functionality available in the E-Business Suite Release. We are going to talk about the generic Cloning options and will then go into depth about the cloning scenario when using AMP (Applications Management Pack) within the Enterprise Manager. TOPICS WILL INCLUDE: Cloning Overview Rapidclone steps in Details Rapidclone limitations EM Grid Setup with AMP for Cloning Advantages of Cloning with AMP Cloning Procedures available with AMP Monitoring Clone Operation Few things to remember before Cloning A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session ------------------------------------------------------------------------------------------------------------- The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule.Click here to visit the E-Business Communities in My Oracle Support Note that all links require access to My Oracle Support.

    Read the article

  • "Size mismatch" apt error when installing openJDK

    - by siddanth
    when i try install openjdk-7-jre-headless i am getting the following error: Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: ca-certificates-java icedtea-7-jre-jamvm java-common libcups2 libjpeg62 liblcms2-2 libnspr4 libnss3 libnss3-1d openjdk-7-jre-lib tzdata tzdata-java Suggested packages: default-jre equivs cups-common liblcms2-utils libnss-mdns sun-java6-fonts ttf-dejavu-extra ttf-baekmuk ttf-unfonts ttf-unfonts-core ttf-sazanami-gothic ttf-kochi-gothic ttf-sazanami-mincho ttf-kochi-mincho ttf-wqy-microhei ttf-wqy-zenhei ttf-indic-fonts-core ttf-telugu-fonts ttf-oriya-fonts ttf-kannada-fonts ttf-bengali-fonts The following NEW packages will be installed: ca-certificates-java icedtea-7-jre-jamvm java-common libcups2 libjpeg62 liblcms2-2 libnspr4 libnss3 libnss3-1d openjdk-7-jre-headless openjdk-7-jre-lib tzdata-java The following packages will be upgraded: tzdata 1 upgraded, 12 newly installed, 0 to remove and 122 not upgraded. Need to get 41.2 MB/43.5 MB of archives. After this operation, 64.0 MB of additional disk space will be used. Get:5 http://in.archive.ubuntu.com/ubuntu/ oneiric/main java-common all 0.42ubuntu2 [62.4 kB] Fetched 41.1 MB in 4min 5s (167 kB/s) Failed to fetch http://in.archive.ubuntu.com/ubuntu/pool/main/j/java-common/java-common_0.42ubuntu2_all.deb Size mismatch E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? am unable to solve this. Am i missing something? please help me out in solving this.

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial to enable many companies to adhere to many of the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley.)   From something as simple as relating Tasks to Check-in’s or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release. All that information is available and more in TFS. Although all of this tradability is available within TFS you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changes all the way up to requirements and back down through Test Cases to the Test Results. Figure: The only thing missing is Build In order to build the relationship model below we need to examine how each of the relationships get there. Each member of your team from programmer to tester and Business Analyst to Business have their roll to play to knit this together. Figure: The relationships required to make this work can get a little confusing If Build is added to this to relate Work Items to Builds and with knowledge of which builds are in which environments you can easily identify what is contained within a Release. Figure: How are things progressing Along with the ability to produce the progress and trend reports the tractability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, lets look at each of the important Artifacts and how they are associated with each other… Requirements – The root of all knowledge Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRD’s) but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so lets look at the model: Figure: If the centre of the world was a requirement We can track which releases Requirements were scheduled in, but this can change over time as more details come to light. Figure: Who edited the Requirement and when There is also the ability to query Work Items based on the History of changed that were made to it. This is particularly important with Requirements. It might not be enough to say what Requirements were completed in a given but also to know which Requirements were ever assigned to a particular release. Figure: Some magic required, but result still achieved As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add a “Asof” clause at the end to query historical data in the operational store for TFS. select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>] Figure: Work Item Query Language (WIQL) format In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in notepad, but one imported into TFS you run it any time you want. 
Figure: Saving Queries locally can be useful All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS. Tasks – Where the real work gets done Tasks are the work horse of the development team, but they only as useful as Excel if you do not relate them properly to other Artifacts. Figure: The Task Work Item Type has its own relationships Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team but however it happens all of the Tasks create should be a Child of a Requirement Work Item Type. Figure: Tasks are related to the Requirement Tasks should be used to track the day-to-day activities of the team working to complete the software and as such they should be kept simple and short lest developers think they are more trouble than they are worth. Figure: Task Work Item Type has a narrower purpose Although the Task Work Item Type describes the work that will be done the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset which is committed to TFS in a single operation. During this operation developers can associate Work Item with the Changeset. Figure: Tasks are associated with Changesets   Changesets – Who wrote this crap Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task. Figure: Changesets are linked by Tasks and Builds   Figure: Changesets tell us what happened to the files in Version Control Although comments can be changed after the fact, the inventory and Work Item associations are permanent which allows us to Audit all the way down to the individual change level. Figure: On Check-in you can resolve a Task which automatically associates it Because of this we can view the history on any file within the system and see how many changes have been made and what Changesets they belong to. Figure: Changes are tracked at the File level What would be even more powerful would be if we could view these changes super imposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system. Figure: Annotate shows the lines the Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Label’s is that they can be changed after they have been created with no tractability. This makes them practically useless for any sort of compliance audit. So what do you use? Branches – And why we need them Branches are a really powerful tool for development and release management, but they are most important for audits. Figure: One way to Audit releases The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed of for release. However it is still possible that someone changed the Label between this time and its creation. Another better method can be to explicitly link the Build output to the Build. 
Builds – Lets tie some more of this together Builds are the glue that helps us enable the next level of tractability by tying everything together. Figure: The dashed pieces are not out of the box but can be enabled When the Build is called and starts it looks at what it has been asked to build and determines what code it is going to get and build. Figure: The folder identifies what changes are included in the build The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build. Figure: What changes have been made since the last successful Build It will then use that information to identify the Work Items that are associated with all of the Changesets Changesets are associated with Build and change the “Integrated In” field of those Work Items . Figure: Find all of the Work Items to associate with The “Integrated In” field of all of the Work Items identified by the Build Agent as being integrated into the completed Build are updated to reflect the Build number that successfully integrated that change. Figure: Now we know which Work Items were completed in a build Now that we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing and back down to the Build that contains the finished Requirement. But how do we know wither that Requirement has been fully tested or even meets the original Requirements? Test Cases – How we know we are done The only way we can know wither a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them it makes it both difficult for them to reject a Requirement when it passes all of the tests, but also provides a level of tractability and validation for audit that a feature has been built and tested to order. Figure: You can have many Acceptance Criteria for a single Requirement It is crucial for this to work that someone from the Business has to sign-off on the Test Case moving from the  “Design” to “Ready” states. The Second is the ability to associate an MS Test test with the Test Case thereby tracking the automated test. This is useful in the circumstance when you want to Track a test and the test results of a Unit Test designed to test the existence of and then re-existence of a a Bug. Figure: Associating a Test Case with an automated Test Although it is possible it may not make sense to track the execution of every Unit Test in your system, there are many Integration and Regression tests that may be automated that it would make sense to track in this way. Bug – Lets not have regressions In order to know wither a Bug in the application has been fixed and to make sure that it does not reoccur it needs to be tracked. Figure: Bugs are the centre of their own world If the fix to a Bug is big enough to require that it is broken down into Tasks then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build. 
You would also have one or more Test Cases to prove the fix for the Bug. Figure: Bugs have many associations This allows you to track Bugs / Defects in your system effectively and report on them. Change Request – I am not a feature In the CMMI Process template Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately as an Auditor may want to know what was changed and who authorised it. Again and similar to Bugs, if the Change Request is big enough that it would require to be broken down into Tasks it is in reality a new feature and should be tracked as a Requirement. Figure: Make sure your Change Requests only Affect Requirements and not rewrite them Conclusion Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation but most of that information is captured for you as long a you do it right. You don’t even need every team member to understand it all as each of the Artifacts are relevant to a different type of team member. Business Analysts manage Requirements and Change Requests Programmers manage Tasks and check-in against Change Requests and Bugs Testers manage Bugs and Test Cases Build Masters manage Builds Although there is some crossover there are still rolls or “hats” that are worn. Do you thing this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • Mailbox move issue from Exchange 2003 to Exchange 2010

    - by Ryan Roussel
    Today while moving mailboxes between Exchange 2003 and Exchange 2010, I hit an issue with a couple of mailboxes.  These mailboxes all popped access denied errors or more exactly: Insufficient Access Rights to perform the operation.   The cause was similar to the mail flow issue in that inheritable permissions were not turned on for the user object in Active Directory.  This also presented it’s own unique problem in that since the initial move request failed because of permissions, it had to be cleared before a new move request could be created. On top of that, the request did not show up in the EMC.  I used the following process to clear the request, assign permission, then create a new request:   1. First you need to know the ExchangeGUID of the mailbox for the remove-moverequest command.  To quickly get the GUID for a mailbox simply run:         2. Next we need to clear out the move request using PowerShell by running: [PS] c:\>Remove-moverequest -moverequestqueue "mailbox database 1030639620" -mailboxguid 8525686f-d4d3-42b7-92f1-46d77ea841a3   3. Then to re-establish inheritable permissions. This can be done by using AD Users and Computers, switching to View Advanced Features, then under the Security tab of the object.  Click Advanced, then check “allow inheritable permissions of parent to propagate to this object”   4. Once the Inheritable permissions are restored, we need to create a new move request: NOTE:  The EMC can also be used to initiate the Move Request once the permissions are corrected. [PS] c:\>New-moverequest –identity jyoung  -baditemlimit 100 -targetdatabase "mailbox database 1030639620"   And that’s it.  The mailbox should move over smoothly with no access denied error.

    Read the article

  • Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter

    - by Kellsey Ruppel
    Register Now for this Webcast. Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter Los Angeles Department of Building & Safety (LADBS) is one of the largest construction permitting departments in the country, serving over 350,000 walk-in and 530,000 phone customers, and issuing over 110,000 permits worth $3 Billion every year. LADBS needed a way to migrate walk-in and phone transactions to customer self-service, so they turned to Oracle WebCenter and teamed with Oracle Partner 3Di to deliver a customer self-service portal to lower their cost of customer service operation, while increasing customer satisfaction. Attend this Webcast to learn how Oracle WebCenter has allowed Los Angeles Department of Building & Safety to: Deliver a state of the art customer self-service portal Reduce traffic on high cost, low satisfaction customer service channels Integrate business workflows and legacy applications Register Now for this Webcast. REGISTER NOW Register now for this exclusive event. Wednesday, November 14, 2012 10 a.m. PT / 1 p.m. ET Presented by: Giovani DacumosDirector of Systems, Los Angeles Department of Building & Safety Jing ReyesApplications Development Group Manager, Los Angeles Department of Building & Safety Rajiv Desai CEO, 3Di Sheetal ParanjpyeProject Manager, 3Di Presented by: Copyright © 2012, Oracle. All rights reserved. Contact Us | Legal Notices and Terms of Use | Privacy Statement

    Read the article

  • How to fix E: Internal Error, No file name for libc6

    - by Loren Ramly
    How to fix E: Internal Error, No file name for libc6, Like that will show If I do: $ sudo apt-get upgrade or $ sudo apt-get install package This is example : $ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done The following packages have been kept back: ginn hplip hplip-data libdrm-dev libdrm-intel1 libdrm-nouveau1a libdrm-radeon1 libdrm2 libgrip0 libhpmud0 libkms1 libsane-hpaio libunity-2d-private0 libunity-core-5.0-5 linux-generic-pae linux-headers-generic-pae linux-image-generic-pae printer-driver-hpcups printer-driver-hpijs unity unity-2d-common unity-2d-panel unity-2d-shell unity-2d-spread unity-common unity-services The following packages will be upgraded: alsa-base firefox firefox-globalmenu firefox-gnome-support firefox-locale-en icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-7-jre-jamvm libdbus-glib-1-2 libdbus-glib-1-dev libgnutls-dev libgnutls-openssl27 libgnutls26 libgnutlsxx27 libssl-dev libssl-doc libssl1.0.0 linux-sound-base openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openjdk-7-jdk openjdk-7-jre openjdk-7-jre-headless openjdk-7-jre-lib openssl sudo 27 upgraded, 0 newly installed, 0 to remove and 26 not upgraded. 3 not fully installed or removed. Need to get 0 B/126 MB of archives. After this operation, 3,072 B of additional disk space will be used. Do you want to continue [Y/n]? y E: Internal Error, No file name for libc6 I have follow instruction from here E: Internal Error, No file name for libssl1.0.0 . Which do: sudo apt-get update sudo apt-get clean sudo apt-get install -fy sudo dpkg -i /var/cache/apt/archives/*.deb sudo dpkg --configure -a sudo apt-get install -fy sudo apt-get dist-upgrade But stuck with same error E: Internal Error, No file name for libc6 when do command sudo apt-get install -fy. And I've been looking on google, but have not been successful until now. Thanks.

    Read the article

  • Effective way to check if an Entity/Player enters a region/trigger

    - by Chris
    I was wondering how multiplayer games detect if you enter a special region. Let's assume there is a huge map, so big that simply checking every trigger region on every move would become a huge performance issue. I've seen Bukkit (a modding API for Minecraft servers) fire an Event on every single move. I don't think that larger games do the same, because even if you have only a few coordinates you are interested in, you have to loop through a few trigger zones to see if the player is inside your region - for every player. This seems like an extremely CPU-intensive operation to me, even though I've never developed something like that. Is there a special algorithm that is used by larger games to accomplish this? The only thing I could imagine is to split the world up into multiple parts and to register the event not on the movement itself but on all the parts that are covered by your area, and then only check areas that are registered in the current part. And another thing I would like to know: how could you detect when someone must have entered a trigger but you never saw him directly in it, since his client only sent you a move packet shortly before entering and after leaving the trigger area? Drawing a line and calculating all colliding parts seems rather CPU-intensive if you have to perform it every time.
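    For what it's worth, the "split the world into parts" idea described in the question is essentially a uniform spatial grid. A minimal Python sketch of it follows; the cell size, rectangular triggers and the sampling of the movement segment are arbitrary illustrative choices, not how any particular engine does it.

```python
from collections import defaultdict

CELL = 16  # cell size in world units; tune to typical trigger size

def cell_of(x, y):
    return (int(x) // CELL, int(y) // CELL)

class RectTrigger:
    def __init__(self, name, bounds):
        self.name, self.bounds = name, bounds   # bounds = (min_x, min_y, max_x, max_y)

    def contains(self, x, y):
        min_x, min_y, max_x, max_y = self.bounds
        return min_x <= x <= max_x and min_y <= y <= max_y

class TriggerIndex:
    def __init__(self):
        self.by_cell = defaultdict(list)        # cell -> triggers overlapping it

    def add(self, trigger):
        min_x, min_y, max_x, max_y = trigger.bounds
        for cx in range(int(min_x) // CELL, int(max_x) // CELL + 1):
            for cy in range(int(min_y) // CELL, int(max_y) // CELL + 1):
                self.by_cell[(cx, cy)].append(trigger)

    def triggers_hit(self, x0, y0, x1, y1, steps=8):
        """Sample the movement segment so fast movers can't skip over a trigger."""
        hit = set()
        for i in range(steps + 1):
            t = i / steps
            x, y = x0 + (x1 - x0) * t, y0 + (y1 - y0) * t
            for trig in self.by_cell.get(cell_of(x, y), ()):
                if trig.contains(x, y):
                    hit.add(trig)
        return hit
```

    The second question in the post (a client that passes through a small trigger between two updates) is what the segment sampling in triggers_hit gestures at; a real engine would more likely intersect the movement segment with the candidate regions analytically, but only for the few triggers registered in the cells the segment touches.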

    Read the article

  • Is there any evidence that drugs can actually help programmers produce "better" code? [closed]

    - by sytycs
    I just read this quote from Steve Jobs: "Doing LSD was one of the two or three most important things I have done in my life." Also a quote from that article: He was hardly alone among computer scientists in his appreciation of hallucinogenics and their capacity to liberate human thought from the prison of the mind. Now I'm wondering if there's any evidence to support the theory that drugs can help make a "better" programmer. Has there ever been a study where programmers have been given drugs to see if they could produce "better" code? Is there a well-known programming concept or piece of code which originated from people who were on drugs? EDIT So I did a little more research and it turns out Dennis R. Wier actually documented how he took LSD to wrap his head around a coding project: "At one point in the project I could not get an overall viewpoint for the operation of the entire system. It really was too much for my brain to keep all the subtle aspects and processing nuances clear so I could get a processing and design overview. After struggling with this problem for a few weeks, I decided to use a little acid to see if it would enable a breakthrough, because otherwise, I would not be able to complete the project and be certain of a consistent overall design"[1] There is also an interesting article on wired about Kevin Herbet, who used LSD to solve tough technical problems and chemist Kary Mullis even said "...that LSD had helped him develop the polymerase chain reaction that helps amplify specific DNA sequences." [2]

    Read the article

  • I can't install Ubuntu on my Dell Inspiron 15R at all

    - by Kieran Rimmer
    I'm attempting to install Ubuntu 12.04LTS, 64 bit onto a Dell Inspiron 15R laptop. I've shrunk down one of the windows partitions and even used gparted to format the vacant space as ext4. However, the install disk simply does not present any options when it comes to the partitioning step. What I get is a non-responsive blank table As well as the above, I've changed the BIOS settings so that USB emulation is disabled (as per Can't install on Dell Inspiron 15R), and changed the SATA Operation setting to all three possible options. Anyway, the install CD will bring up the trial version of ubuntu, and if I open terminal and type sudo fdisk -l, I get this: Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0xb4fd9215 Device Boot Start End Blocks Id System /dev/sda1 63 80324 40131 de Dell Utility Partition 1 does not start on physical sector boundary. /dev/sda2 * 81920 29044735 14481408 7 HPFS/NTFS/exFAT /dev/sda3 29044736 1005142015 488048640 7 HPFS/NTFS/exFAT /dev/sda4 1005154920 1953520064 474182572+ 83 Linux Disk /dev/sdb: 32.0 GB, 32017047552 bytes 255 heads, 63 sectors/track, 3892 cylinders, total 62533296 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xb4fd923d Device Boot Start End Blocks Id System /dev/sdb1 2048 16775167 8386560 84 OS/2 hidden C: drive If I type 'sudo parted -l', I get: Model: ATA WDC WD10JPVT-75A (scsi) Disk /dev/sda: 1000GB Sector size (logical/physical): 512B/4096B Partition Table: msdos Number Start End Size Type File system Flags 1 32.3kB 41.1MB 41.1MB primary fat16 diag 2 41.9MB 14.9GB 14.8GB primary ntfs boot 3 14.9GB 515GB 500GB primary ntfs 4 515GB 1000GB 486GB primary ext4 Model: ATA SAMSUNG SSD PM83 (scsi) Disk /dev/sdb: 32.0GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 8589MB 8588MB primary Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only. Error: Can't have a partition outside the disk! I've also tried a Kubuntu 12.04 and Linuxmint install disks, wityh the same problem. I'm completely lost here. Cheers, Kieran

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet.  We demonstrated the calculations against sample data for a small set of accounts.  While this worked fine, in the real-world the problem is much bigger because the amount of data is much bigger.  So much bigger that our approach in the previous post won’t scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer: The actual data that needs to be used lives in a database, not in a spreadsheet The actual data is much, much bigger- too big to fit into the normal R memory space and too big to want to move across the network The overall process needs to run fast- much faster than a single processor The actual data needs to be kept secured- another reason to not want to move it from the database and across the network And the process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRR’s can be calculated as part of the data warehouse refresh processes In this post, we will show how we moved from sample data environment to working with full-scale data.  This post is based on actual work we did for a financial services customer during a recent proof-of-concept. Getting started with the Database At this point, we have some sample data and our IRR function.  We were at a similar point in our customer proof-of-concept exercise- we had sample data but we did not have the full customer data yet.  So our database was empty.  But, this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the).  The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create().  The code also shows how we can access the database table IRR_DATA as if it was a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table: At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA.  So, we now proceeded to test our R function working with database data. As our first test, we retrieved the data from a single account from the IRR_DATA table, pull it into local R memory, then call our IRR function.  This worked.  No SQL coding required! Going from Crawling to Walking Now that we have shown using our R code with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts.  In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series.  Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply().  You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1 Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via  “R CMD INSTALL myIRR”). 
The interesting thing about ore.groupApply is that the calculation is not actually performed in my desktop R environment from which I am running.  What actually happens is that ore.groupApply uses the Oracle database to perform the work.  And the Oracle database is what actually splits the IRR_DATA table by ACCOUNT.  Then the Oracle database takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function.  Then the Oracle database combines all the individual results from the calls to the R function. This is significant because now the embedded R engine only needs to deal with the data for a single account at a time.  Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care.  Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program.  Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server- this is both a performance benefit because network transmission of large amounts of data take time and a security benefit because it is harder to protect private data once you start shipping around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once. From Walking to Running ore.groupApply is rather nice, but it still has the drawback that I run this from a desktop R instance.  This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation.  But, this is not an issue for ORE.  Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations.  That is extremely exciting and the way we actually did these calculations in the customer proof. As part of Oracle R Enterprise, it provides a SQL equivalent to ore.groupApply which it refers to as “rqGroupEval”.  To use rqGroupEval via SQL, there is a bit of simple setup needed.  Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database’s pipeline table function mechanisms. Here is the setup script: At this point, our initial setup of rqGroupEval is done for the IRR_DATA table.  The next step is to define our R function to the database.  We do that via a call to ORE’s rqScriptCreate. Now we can test it.  The SQL you use to run rqGroupEval uses the Oracle database pipeline table function syntax.  The first argument to irr_dataGroupEval is a cursor defining our input.  You can add additional where clauses and subqueries to this cursor as appropriate.  The second argument is any additional inputs to the R function.  The third argument is the text of a dummy select statement.  The dummy select statement is used by the database to identify the columns and datatypes to expect the R function to return.  The fourth argument is the column of the input table to split/group by.  
The Real-World Results

In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.

For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features. Let's look at the SQL used in the customer proof: notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function. That is all we need to do to enable Oracle to use parallel R engines.

Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran it during the proof of concept (you might need to right-click on these images and view them full-screen to see the entire image). From the screenshots, you can notice a few things (numbers 1 through 5 below correspond to the highlighted numbers on the images):

- The SQL completed in 110 seconds (1.8 minutes).
- We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation).
- We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation).
- We ran with 72 degrees of parallelism spread across 4 database servers.
- Most of our 110 seconds was spent in the "External Procedure call" event.
- On average, we performed 8,200 executions of our R function per second (911k accounts / 110s).
- On average, each execution was passed about 110 rows of data (103m detail rows / 911k accounts).
- On average, we did 41,000 single time-period rate of return calculations per second (each of the 8,200 executions of our R function calculated rates of return for 5 time periods).
- On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s).

A quick arithmetic check of these averages follows.
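As a sanity check, the averages quoted above can be reproduced from the headline numbers (a sketch; the inputs are the rounded figures from the post, so the results are approximate):

    # Back-of-the-envelope check of the throughput figures quoted above.
    accounts    <- 911e3    # accounts processed (rows returned by IRRSTAGEGROUPEVAL)
    detail_rows <- 103e6    # cash flow / market value rows accessed (IRR_STAGE2)
    elapsed_s   <- 110      # wall-clock seconds
    periods     <- 5        # time periods calculated per account

    accounts / elapsed_s               # ~8,300 R function executions per second
    detail_rows / accounts             # ~113 rows passed to each execution
    periods * accounts / elapsed_s     # ~41,000 single time-period calculations per second
    detail_rows / elapsed_s            # ~936,000 database rows processed in R per second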
R + Oracle R Enterprise: Best of R + Best of Oracle Database

This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate rates of return for 5 time periods for almost a million individual accounts in less than 2 minutes.

In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article

  • Ubuntu 12.04 OpenCL with Intel and Radeon?

    - by Steve
    I want to setup my Ubuntu 12.04 with OpenCL(Open Computing Language) support for i7 2600k and Radeon HD5870. My Monitor is connected to the integrated Graphics of the i7. Intel OpenCL SDK is installed and working. Iteration of avaliable OpenCL devices shows 2 entries for Intel. As recommended I installed AMD APP SDK 2.6 first and then the fglrx driver. I installed fglrx from Ubuntu repositories. This works fine till here. When I run aticonfig --inital -f and restart the system I get into trouble. Xorg starts only in low-graphics mode. cat /var/log/Xorg.0.log [ 21.201] X.Org X Server 1.12.2 Release Date: 2012-05-29 [ 21.201] X Protocol Version 11, Revision 0 [ 21.201] Build Operating System: Linux 2.6.24-29-xen x86_64 Ubuntu [ 21.201] Current Operating System: Linux chimera 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_ [ 21.201] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-24-generic root=UUID=c137757b-486b-4514-9dfe-00c97662 [ 21.201] Build Date: 05 June 2012 08:35:55AM [ 21.201] xorg-server 2:1.12.2+git20120605+server-1.12-branch.aaf48906-0ubuntu0ricotz~precise (For technical suppor [ 21.201] Current version of pixman: 0.26.0 [ 21.201] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 21.201] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 21.201] (==) Log file: "/var/log/Xorg.0.log", Time: Fri Jun 8 14:22:36 2012 [ 21.247] (==) Using config file: "/etc/X11/xorg.conf" [ 21.247] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 21.450] (==) ServerLayout "aticonfig Layout" [ 21.450] (**) |-->Screen "aticonfig-Screen[0]-0" (0) [ 21.450] (**) | |-->Monitor "aticonfig-Monitor[0]-0" [ 21.451] (**) | |-->Device "aticonfig-Device[0]-0" [ 21.451] (==) Automatically adding devices [ 21.451] (==) Automatically enabling devices [ 21.466] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 21.466] Entry deleted from font path. [ 21.466] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist. [ 21.466] Entry deleted from font path. [ 21.466] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 21.466] Entry deleted from font path. [ 21.473] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist. [ 21.473] Entry deleted from font path. [ 21.473] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 21.473] Entry deleted from font path. [ 21.473] (WW) The directory "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType" does not exist. [ 21.473] Entry deleted from font path. [ 21.473] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/Type1, built-ins [ 21.473] (==) ModulePath set to "/usr/lib/x86_64-linux-gnu/xorg/extra-modules,/usr/lib/xorg/extra-modules,/usr/lib [ 21.473] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. 
[ 21.473] (II) Loader magic: 0x7f0ad3b9ab00 [ 21.473] (II) Module ABI versions: [ 21.473] X.Org ANSI C Emulation: 0.4 [ 21.473] X.Org Video Driver: 12.0 [ 21.473] X.Org XInput driver : 16.0 [ 21.473] X.Org Server Extension : 6.0 [ 21.473] (--) PCI:*(0:0:2:0) 8086:0122:1458:d000 rev 9, Mem @ 0xfb800000/4194304, 0xe0000000/268435456, I/O @ 0x00 [ 21.473] (--) PCI: (0:1:0:0) 1002:6898:1787:2289 rev 0, Mem @ 0xd0000000/268435456, 0xfbdc0000/131072, I/O @ 0x000 [ 21.473] (II) Open ACPI successful (/var/run/acpid.socket) [ 21.473] (II) "extmod" will be loaded by default. [ 21.473] (II) "dbe" will be loaded by default. [ 21.473] (II) "glx" will be loaded. This was enabled by default and also specified in the config file. [ 21.473] (II) "record" will be loaded by default. [ 21.473] (II) "dri" will be loaded by default. [ 21.473] (II) "dri2" will be loaded by default. [ 21.473] (II) LoadModule: "glx" [ 21.732] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/extensions/libgl [ 21.934] (II) Module glx: vendor="Advanced Micro Devices, Inc." [ 21.934] compiled for 6.9.0, module version = 1.0.0 [ 21.934] (II) Loading extension GLX [ 21.934] (II) LoadModule: "extmod" [ 22.028] (II) Loading /usr/lib/xorg/modules/extensions/libextmod.so [ 22.041] (II) Module extmod: vendor="X.Org Foundation" [ 22.041] compiled for 1.12.2, module version = 1.0.0 [ 22.041] Module class: X.Org Server Extension [ 22.041] ABI class: X.Org Server Extension, version 6.0 [ 22.041] (II) Loading extension MIT-SCREEN-SAVER [ 22.041] (II) Loading extension XFree86-VidModeExtension [ 22.041] (II) Loading extension XFree86-DGA [ 22.041] (II) Loading extension DPMS [ 22.041] (II) Loading extension XVideo [ 22.041] (II) Loading extension XVideo-MotionCompensation [ 22.041] (II) Loading extension X-Resource [ 22.041] (II) LoadModule: "dbe" [ 22.041] (II) Loading /usr/lib/xorg/modules/extensions/libdbe.so [ 22.066] (II) Module dbe: vendor="X.Org Foundation" [ 22.066] compiled for 1.12.2, module version = 1.0.0 [ 22.066] Module class: X.Org Server Extension [ 22.066] ABI class: X.Org Server Extension, version 6.0 [ 22.066] (II) Loading extension DOUBLE-BUFFER [ 22.066] (II) LoadModule: "record" [ 22.066] (II) Loading /usr/lib/xorg/modules/extensions/librecord.so [ 22.077] (II) Module record: vendor="X.Org Foundation" [ 22.077] compiled for 1.12.2, module version = 1.13.0 [ 22.077] Module class: X.Org Server Extension [ 22.077] ABI class: X.Org Server Extension, version 6.0 [ 22.077] (II) Loading extension RECORD [ 22.077] (II) LoadModule: "dri" [ 22.077] (II) Loading /usr/lib/xorg/modules/extensions/libdri.so [ 22.082] (II) Module dri: vendor="X.Org Foundation" [ 22.082] compiled for 1.12.2, module version = 1.0.0 [ 22.082] ABI class: X.Org Server Extension, version 6.0 [ 22.082] (II) Loading extension XFree86-DRI [ 22.082] (II) LoadModule: "dri2" [ 22.082] (II) Loading /usr/lib/xorg/modules/extensions/libdri2.so [ 22.083] (II) Module dri2: vendor="X.Org Foundation" [ 22.083] compiled for 1.12.2, module version = 1.2.0 [ 22.083] ABI class: X.Org Server Extension, version 6.0 [ 22.083] (II) Loading extension DRI2 [ 22.083] (II) LoadModule: "fglrx" [ 22.083] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/drivers/fglrx_dr [ 22.399] (II) Module fglrx: vendor="FireGL - ATI Technologies Inc." 
[ 22.399] compiled for 1.4.99.906, module version = 8.96.4 [ 22.399] Module class: X.Org Video Driver [ 22.399] (II) Loading sub module "fglrxdrm" [ 22.399] (II) LoadModule: "fglrxdrm" [ 22.399] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/linux/libfglrxdr [ 22.445] (II) Module fglrxdrm: vendor="FireGL - ATI Technologies Inc." [ 22.445] compiled for 1.4.99.906, module version = 8.96.4 [ 22.445] (II) ATI Proprietary Linux Driver Version Identifier:8.96.4 [ 22.445] (II) ATI Proprietary Linux Driver Release Identifier: 8.96.7 [ 22.445] (II) ATI Proprietary Linux Driver Build Date: Mar 12 2012 13:06:50 [ 22.445] (++) using VT number 7 [ 22.445] (WW) Falling back to old probe method for fglrx [ 23.043] (II) Loading PCS database from /etc/ati/amdpcsdb [ 23.082] (--) Chipset Supported AMD Graphics Processor (0x6898) found [ 23.107] (WW) fglrx: No matching Device section for instance (BusID PCI:0@1:0:1) found [ 23.107] (II) fglrx: intel VGA device detected, load intel driver. [ 23.107] (II) LoadModule: "intel" [ 23.211] (II) Loading /usr/lib/xorg/modules/drivers/intel_drv.so [ 23.475] (II) Module intel: vendor="X.Org Foundation" [ 23.475] compiled for 1.12.2, module version = 2.19.0 [ 23.475] Module class: X.Org Video Driver [ 23.475] ABI class: X.Org Video Driver, version 12.0 [ 23.476] ukiDynamicMajor: found major device number 249 [ 23.476] ukiDynamicMajor: found major device number 249 [ 23.476] ukiOpenByBusid: Searching for BusID PCI:1:0:0 [ 23.476] ukiOpenDevice: node name is /dev/ati/card0 [ 23.476] ukiOpenDevice: open result is 8, (OK) [ 23.476] ukiOpenByBusid: ukiOpenMinor returns 8 [ 23.476] ukiOpenByBusid: ukiGetBusid reports PCI:1:0:0 [ 23.540] (WW) PowerXpress feature is not supported [ 23.540] (EE) No devices detected. 
[ 23.540] (==) Matched intel as autoconfigured driver 0 [ 23.540] (==) Matched vesa as autoconfigured driver 1 [ 23.540] (==) Matched fbdev as autoconfigured driver 2 [ 23.540] (==) Assigned the driver to the xf86ConfigLayout [ 23.540] (II) LoadModule: "intel" [ 23.540] (II) Loading /usr/lib/xorg/modules/drivers/intel_drv.so [ 23.540] (II) Module intel: vendor="X.Org Foundation" [ 23.540] compiled for 1.12.2, module version = 2.19.0 [ 23.540] Module class: X.Org Video Driver [ 23.540] ABI class: X.Org Video Driver, version 12.0 [ 23.540] (II) UnloadModule: "intel" [ 23.540] (II) Unloading intel [ 23.540] (II) Failed to load module "intel" (already loaded, 32522) [ 23.540] (II) LoadModule: "vesa" [ 23.583] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 23.620] (II) Module vesa: vendor="X.Org Foundation" [ 23.620] compiled for 1.12.2, module version = 2.3.1 [ 23.620] Module class: X.Org Video Driver [ 23.620] ABI class: X.Org Video Driver, version 12.0 [ 23.620] (II) LoadModule: "fbdev" [ 23.620] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 23.661] (II) Module fbdev: vendor="X.Org Foundation" [ 23.661] compiled for 1.12.2, module version = 0.4.2 [ 23.661] Module class: X.Org Video Driver [ 23.661] ABI class: X.Org Video Driver, version 12.0 [ 23.661] (II) ATI Proprietary Linux Driver Version Identifier:8.96.4 [ 23.661] (II) ATI Proprietary Linux Driver Release Identifier: 8.96.7 [ 23.661] (II) ATI Proprietary Linux Driver Build Date: Mar 12 2012 13:06:50 [ 23.661] (II) intel: Driver for Intel Integrated Graphics Chipsets: i810, i810-dc100, i810e, i815, i830M, 845G, 854, 852GM/855GM, 865G, 915G, E7221 (i915), 915GM, 945G, 945GM, 945GME, Pineview GM, Pineview G, 965G, G35, 965Q, 946GZ, 965GM, 965GME/GLE, G33, Q35, Q33, GM45, 4 Series, G45/G43, Q45/Q43, G41, B43, B43, Clarkdale, Arrandale, Sandybridge Desktop (GT1), Sandybridge Desktop (GT2), Sandybridge Desktop (GT2+), Sandybridge Mobile (GT1), Sandybridge Mobile (GT2), Sandybridge Mobile (GT2+), Sandybridge Server, Ivybridge Mobile (GT1), Ivybridge Mobile (GT2), Ivybridge Desktop (GT1), Ivybridge Desktop (GT2), Ivybridge Server, Ivybridge Server (GT2) [ 23.661] (II) VESA: driver for VESA chipsets: vesa [ 23.661] (II) FBDEV: driver for framebuffer: fbdev [ 23.661] (++) using VT number 7 [ 23.661] (WW) xf86OpenConsole: setpgid failed: Operation not permitted [ 23.661] (WW) xf86OpenConsole: setsid failed: Operation not permitted [ 23.661] (WW) Falling back to old probe method for fglrx [ 23.661] (II) Loading PCS database from /etc/ati/amdpcsdb [ 23.661] (WW) Falling back to old probe method for vesa [ 23.661] (WW) Falling back to old probe method for fbdev [ 23.661] (EE) No devices detected. [ 23.661] Fatal server error: [ 23.661] no screens found [ 23.661] Please consult the The X.Org Foundation support at http://wiki.x.org for help. [ 23.661] Please also check the log file at "/var/log/Xorg.0.log" for additional information. 
[ 23.661]

xorg.conf:

cat /etc/X11/xorg.conf
Section "ServerLayout"
    Identifier "aticonfig Layout"
    Screen 0 "aticonfig-Screen[0]-0" 0 0
EndSection
Section "Module"
    Load "glx"
EndSection
Section "Monitor"
    Identifier "aticonfig-Monitor[0]-0"
    Option "VendorName" "ATI Proprietary Driver"
    Option "ModelName" "Generic Autodetecting Monitor"
    Option "DPMS" "true"
EndSection
Section "Device"
    Identifier "aticonfig-Device[0]-0"
    Driver "fglrx"
    BusID "PCI:1:0:0"
EndSection
Section "Screen"
    Identifier "aticonfig-Screen[0]-0"
    Device "aticonfig-Device[0]-0"
    Monitor "aticonfig-Monitor[0]-0"
    DefaultDepth 24
    SubSection "Display"
        Viewport 0 0
        Depth 24
    EndSubSection
EndSection

Is there a way to get the Radeon to work in a hybrid configuration or to use the Radeon as an OpenCL only device?

    Read the article

< Previous Page | 99 100 101 102 103 104 105 106 107 108 109 110  | Next Page >