Search Results

Search found 2950 results on 118 pages for 'andrew martin'.

Page 14/118 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • Is php|architect any good?

    - by Andrew Heath
    Kind of a hard topic to search for, as 'architect' turns up a lot about software architects instead. After 8 months of PHP self-study, I finally stumbled across the php|architect site. The length of time it took me to find it makes me suspicious of its quality. Three related questions: do professional PHP coders read or care about php|architect? Is it a good source for PHP beginners? Assuming yes to either of the above, how far back in the archives do articles remain relevant? (For example, does material written about PHP4 still matter?)

    Read the article

  • White Paper on Parallel Processing

    - by Andrew Kelly
    I just ran across what I think is a newly published white paper on parallel processing in SQL Server 2008 R2. The date is October 2010, but this is the first time I have seen it, so I am not 100% sure how new it really is. Maybe you have seen it already, but if not I recommend taking a look. So far I haven't had time to read it extensively, but from a cursory look it appears to be quite informative, and this is one of the areas least understood by a lot of DBAs. It was authored by Don Pinto...(read more)

    Read the article

  • How do I write a good talk proposal for a FOSS programming conference?

    - by Andrew Grimm
    I'm hoping to give a talk at RubyKaigi this year, and I'd like to know what makes a good talk proposal. RubyKaigi is a conference run by Ruby enthusiasts (as opposed to a trade conference or an academic conference). The proposal form can be seen here. So far, my draft proposal about a program I'm working on mentions:

      - What the program is useful for and why it is relevant.
      - How it works.
      - What topics it touches upon (such as metaprogramming and testing).

    Is there anything else that I should mention in my proposal? Also, how thorough should I be in my "Details of your talk" section? Should I be exhaustive, or only have a couple of short paragraphs?

    Read the article

  • Transition from maintenance programming to design

    - by andrew wang
    What do people do to develop a design for software, given a set of requirements? Like many people, I joined a semiconductor MNC and got stuck in maintenance for a couple of years. My work was usually changing a few lines of code in Windows drivers supplied by my company, or writing a couple of small script-like C programs for validating hardware. As a result I developed the bad habit of 'programming by coincidence', and I have not developed the ability to design tools or programs from scratch. I was the only software member of the local team, and so some grunt work from the company's well-established other site came to be done by me. Now I have moved to a different company, and I am finding developing from scratch very difficult. How do I unlearn my bad habit and develop this ability of designing software and then coding it?

    Read the article

  • IRM and Consumerization

    - by martin.abrahams
    As the season of rampant consumerism draws to its official close on Twelfth Night, it seems a fitting time to discuss consumerization - whereby technologies from the consumer market, such as Android and the iPad, are adopted by business organizations. I expect many of you will have received a shiny new mobile gadget for Christmas - and will be expecting to use it for work as well as leisure in 2011. In my case, I'm just getting to grips with my first Android phone. This trend developed so much during 2010 that a number of my customers have officially changed their stance on consumer devices - accepting consumerization as something to embrace rather than resist.

    Clearly, consumerization has significant implications for information control, as corporate data is distributed to consumer devices whether the organization is aware of it or not. I daresay that some DLP solutions can limit distribution to some extent, but this creates a conflict between accepting consumerization and frustrating it. So what does Oracle IRM have to offer the consumerized enterprise?

    First and foremost, consumerization does not automatically represent great additional risk - if an enterprise seals its sensitive information. Sealed files are encrypted, and that fundamental protection is not affected by copying files to consumer devices. A device might be lost or stolen, and the user might not think to report the loss of a personally owned device, but the data and the enterprise that owns it are protected. Indeed, the consumerization trend is another strong reason for enterprises to deploy IRM - to protect against this expansion of channels by which data might be accidentally exposed. It also enables encryption requirements to be met even though the enterprise does not own the device and cannot enforce device encryption.

    Moving on to the usage of sealed content on such devices, some of our customers are using virtual desktop solutions such that, in truth, the sealed content is being opened and used on a PC in the normal way, and the user is simply using their device for display purposes. This has several advantages:

      - The sensitive documents are not actually on the devices, so device loss and theft are even less of a worry.
      - The enterprise has another layer of control over how and where content is used, as access to the virtual solution involves another layer of authentication and authorization - defence in depth.
      - It is a generic solution, which means the enterprise does not need to actively support the ever-expanding variety of consumer devices - the enterprise just manages some virtual access to traditional systems using something like Citrix or Remote Desktop services.
      - It is a tried and tested way of accessing sealed documents. People have been using Oracle IRM in conjunction with Citrix and Remote Desktop for several years.

    For some scenarios, we also have the "IRM wrapper" option that provides a simple app for sealing and unsealing content on a range of operating systems. We are busy working on other ways to support the explosion of consumer devices, but this blog is not a proper forum for talking about them at this time. If you are an Oracle IRM customer, we will be pleased to discuss our plans and your requirements with you directly on request. You can be sure that the blog will cover the new capabilities as soon as possible.

    Read the article

  • What's a good 2D animation program for Linux (an alternative to e.g. Flash CS)?

    - by Martin Zeltin
    I don't mean the Flash player here; I'm talking about a program I can make animations with, like Adobe Flash CS (formerly known as Macromedia Flash). Is there a program on Linux that I can make animations with? I want to make a movie like Animator vs. Animation. I used Easy GIF Animator on Windows - it was a bit harder than Flash - but I'm on Linux now and I'd like to know what it has to offer. Worst case scenario, what GIF animators are there on Linux? :) Thanks!

    Read the article

  • Android development life cycle model query [closed]

    - by Andrew Rose
    I have been researching Google and their approach to marketing the Android OS, primarily using an open-source technique with the Open Handset Alliance and outsourcing through third-party developers. I'm now keen to investigate their approach in terms of the various development life cycle models - waterfall, spiral, scrum, agile, etc. - and I'm curious to have some feedback from professionals on what approach they think Google would use to have a positive effect on their business. Thanks for your time. Andy Rose

    Read the article

  • Will Bingbot index pages with invalid SSL certificates?

    - by Martin
    Bingbot and Yahoo Slurp do not support SNI (Server Name Indication) when using SSL. Ignoring other workarounds (multi-domain certificates, non-SSL content, etc.), will Bingbot index pages that have an invalid SSL certificate, e.g. one issued for example.net but used on example.com? If possible, please provide an example from Yahoo or Bing. I have found websites in Bing that use self-signed certificates and are indexed correctly, but what about invalid certificates?
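
    A quick way to compare what an SNI-capable crawler and a non-SNI client such as Bingbot would each receive is to connect with and without a server name (a diagnostic sketch using OpenSSL; example.com stands in for the real host):

        # Certificate presented to SNI-capable clients
        openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -subject
        # Certificate presented to non-SNI clients such as Bingbot (no -servername)
        openssl s_client -connect example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject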

    Read the article

  • USB modeswitch to mass storage device

    - by Andrew
    I'm trying to install a Sierra AirCard 320U (branded as "Telstra USB 4G") into a VirtualBox Windows XP machine on an Ubuntu 10.10 host. usb_modeswitch offers me a way to disable the "mass storage device" option, but I can't see a way to permanently re-enable it so that it is usable by VirtualBox (device filters briefly detect it, but then it disappears again). lsusb shows the device as: 0f3d:68aa Airprime, Incorporated. When I first insert the device, it shows as 1199:0fff Sierra Wireless, Inc. Is there any way to re-enable the storage device so VirtualBox can see it?
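
    One avenue worth trying is to have VirtualBox capture the device by ID in both of its modes, so the guest grabs it before the host-side mode switch hides it again - a sketch using VirtualBox's VBoxManage CLI, where the VM name "WinXP" is hypothetical and the IDs are the ones from lsusb above:

        # Capture the device in its initial mass-storage mode
        VBoxManage usbfilter add 0 --target "WinXP" --name "AirCard storage" \
            --vendorid 1199 --productid 0fff
        # Also capture it after usb_modeswitch flips it to modem mode
        VBoxManage usbfilter add 1 --target "WinXP" --name "AirCard modem" \
            --vendorid 0f3d --productid 68aa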

    Read the article

  • Has anyone else read "Programming Video Games for the Evil Genius"?

    - by Martin
    I bought this book called "Programming Video Games for the Evil Genius" by Ian Cinnamon. If there is anyone who has read or is familiar with this book, I am wondering if they think it is worth reading. I am interested in making video games. I have already taken intro courses in C++, Java and Python and got through okay. I've been going through this book for about a month now (SLOWLY). All I have to do is type the code exactly as in the book, BUT a lot of the code is not clearly explained. I do some research online, but I usually still have some trouble answering my questions. Then I found Stack Overflow. It's been a ton of help. Right now I am trying to make a racing game right out of this book, and I got to a point where the author left a bunch of errors in his code. One of the members of this website fixed it up for me, but added some stuff that I'm having trouble understanding. I spend more time trying to figure out the author's errors and fix them, or getting someone to help me fix them, than I actually do learning code. I REALLY want to learn how to do this and I am ready and willing to put in the time, but I'm not sure if my time would be better spent learning from a different source. Are there any veterans out there who are familiar with this book and think it's worth it or not? Should I try to move on to another book? Any advice for a fresh start for someone who wants to learn some video game programming?

    Read the article

  • Is there a language between C and C++?

    - by Robert Martin
    I really like the simple and transparent nature of C: when I write C code I feel unencumbered by "leaky abstractions" and can almost always make a shrewd guess as to the assembly I'm producing. I also like the simple, familiar syntax of C. However, C doesn't have the simple, helpful doodads that C++ offers, like classes, simplified non-C-string handling, etc. I know that it's all possible to implement in C using jump tables and the like, but that's a bit wordy at times, and not very type-safe for various reasons. I'm not a fan of the over-emphasis on objects in C++, though, and I'm gun-shy of the 'new' operator and the like. C++ seems to have just a few too many hiccups to, for instance, be used as a systems programming language. Does there exist a language that sits between C and C++ on the scale of widgets and doodads? Disclaimer: I mean this as a purely factual question. I do not intend to anger you because I don't share your view that C{,++} is good enough to do whatever I'm planning.
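
    For reference, the kind of hand-rolled jump-table "class" the question alludes to might look like this - a minimal C sketch (type and function names invented for illustration) showing why it gets wordy and loses type safety compared with C++ virtual dispatch:

        #include <stdio.h>

        /* A hand-rolled "class": data plus a slot for a function pointer. */
        typedef struct Shape Shape;
        struct Shape {
            double (*area)(const Shape *self);  /* the "virtual" method */
            double w, h;
        };

        static double rect_area(const Shape *s) { return s->w * s->h; }
        static double tri_area(const Shape *s)  { return s->w * s->h / 2.0; }

        int main(void) {
            Shape shapes[] = { { rect_area, 3.0, 4.0 }, { tri_area, 3.0, 4.0 } };
            /* Dispatch through the function pointer, as C++ would via a vtable. */
            for (int i = 0; i < 2; i++)
                printf("area %d: %.1f\n", i, shapes[i].area(&shapes[i]));
            return 0;
        }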

    Read the article

  • IRM Item Codes - what are they for?

    - by martin.abrahams
    A number of colleagues have been asking about IRM item codes recently - what are they for, when are they useful, how can you control them to meet some customer requirements? This is quite a big topic, but this article provides a few answers.

    An item code is part of the metadata of every sealed document - unless you define a custom metadata model. The item code is defined when a file is sealed, and usually defaults to a timestamp/filename combination. This time/name combo tends to make item codes unique for each new document, but actually item codes are not necessarily unique, as will become clear shortly.

    In most scenarios, item codes are not relevant to the evaluation of a user's rights - the context name is the critical piece of metadata, as a user typically has a role that grants access to an entire classification of information regardless of item code. This is key to the simplicity and manageability of the Oracle IRM solution. Item codes are occasionally exposed to users in the UI, but most users probably never notice and never care. Nevertheless, here is one example of where you can see an item code - when you hover the mouse pointer over a sealed file. As you see, the item code for this freshly created file combines a timestamp with the file name.

    But what are item codes for? The first benefit of item codes is that they enable you to manage exceptions to the policy defined for a context. Thus, I might have access to all oracle - internal files - except for 2011_03_11 13:33:29 Board Minutes.sdocx. This simple mechanism enables Oracle IRM to provide file-by-file control where appropriate, whilst offering the scalability and manageability of classification-based control for the majority of users and content. You really don't want to be managing each file individually, but never say never.

    Item codes can also be used for the opposite effect - to include a file in a user's rights when their role would ordinarily deny access. So, you can assign a role that allows access only to specified item codes. For example, my role might say that I have access to precisely one file - the one shown above.

    So how are item codes set? In the vast majority of scenarios, item codes are set automatically as part of the sealing process. The sealing API uses the timestamp and filename as shown, and the user need not even realise that this has happened. This automatically creates item codes that are for all practical purposes unique - and that are also intelligible to users who might want to refer to them when viewing or assigning rights in the management UI. It is also possible for suitably authorised users and applications to set the item code manually or programmatically if required.

    Setting the item code manually using the IRM Desktop

    The manual process is a simple extension of the sealing task. An authorised user can select the Advanced... sealing option, and will see a dialog that offers the option to specify the item code. To see this option, the user's role needs the Set Item Code right - you don't want most users to give any thought at all to item codes, so by default the option is hidden.

    Setting the item code programmatically

    A more common scenario is that an application controls the item code programmatically. For example, a document management system that seals documents as part of a workflow might set the item code to match the document's unique identifier in its repository. This offers the option to tie IRM rights evaluation directly to the security model defined in the document management system. Again, the sealing application needs to be authorised to Set Item Code.

    The Payslip Scenario

    To give a concrete example of how item codes might be used in a real-world scenario, consider a Human Resources workflow such as payslips. The goal might be to allow the HR team to have access to all payslips, but each employee to have access only to their own payslips. To enable this, you might have an IRM classification called Payslips. The HR team have a role in the normal way that allows access to all payslips. However, each employee would have an Item Reader role that only allows them to access files that have a particular item code - and that item code might match the employee's payroll number. So, employee number 123123123 would have access to items with that code. This shows why item codes are not necessarily unique - you can deliberately set the same code on many files for ease of administration.

    The employees might have the right to unseal or print their payslip, so the solution acts as a secure delivery mechanism that allows payslips to be distributed via corporate email without any fear that they might be accessed by IT administrators, or forwarded accidentally to anyone other than the intended recipient. All that remains is to ensure that as each user's payslip is sealed, it is assigned the correct item code - something that is easily managed by a simple IRM sealing application. Each month, an employee's payslip is sealed with the same item code, so you do not need to keep amending the list of items that the user has access to - they have access to all documents that carry their employee code.
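
    To make that monthly workflow concrete, here is a minimal Python sketch of what such a sealing application might do. The seal() helper and its parameters are hypothetical stand-ins, not the actual Oracle IRM sealing API:

        # Hypothetical sealing helper; the real Oracle IRM sealing API differs.
        def seal(path, context, item_code):
            """Seal 'path' into 'context', stamping it with 'item_code'."""
            print(f"sealing {path} into {context} with item code {item_code}")

        # Each month, every payslip is sealed with the employee's payroll
        # number as its item code, so an Item Reader role keyed to that code
        # keeps granting access with no further rights administration.
        payroll = {"123123123": "payslip_2011_01_alice.pdf",
                   "456456456": "payslip_2011_01_bob.pdf"}
        for employee_id, payslip in payroll.items():
            seal(payslip, context="Payslips", item_code=employee_id)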

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial to enable many companies to adhere to many of the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes-Oxley).

    From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release - all that information is available, and more, in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well... I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting.

    Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements, and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play to knit this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other.

    Requirements - The root of all knowledge

    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this, it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "Asof" clause at the end to query historical data in the operational store for TFS.

        select <fields> from WorkItems
        [where <condition>]
        [order by <fields>]
        [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
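
    As a worked example of the Asof clause (field names are from the standard TFS process templates; the date is illustrative):

        SELECT [System.Id], [System.Title], [System.State]
        FROM WorkItems
        WHERE [System.WorkItemType] = 'Requirement'
        ASOF '2010-06-01'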
    Figure: Saving Queries locally can be useful

    All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks - Where the real work gets done

    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group, or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets - Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to Audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history on any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches - And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. Another, better method can be to explicitly link the Build output to the Build.

    Builds - Let's tie some more of this together

    Builds are the glue that helps us enable the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets, associate them with the Build, and update the "Integrated In" field of those Work Items.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?

    Test Cases - How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them, it makes it difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business has to sign off on the Test Case moving from the "Design" to "Ready" states.

    The second is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful when you want to track a test, and its test results, for a Unit Test written to reproduce a Bug and then verify that it stays fixed.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many Integration and Regression tests that may be automated that it would make sense to track in this way.

    Bugs - Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build. You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Requests - I am not a feature

    In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:

      - Business Analysts manage Requirements and Change Requests
      - Programmers manage Tasks and check in against Change Requests and Bugs
      - Testers manage Bugs and Test Cases
      - Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • Configuring UCM content cache invalidation for a custom portal application

    - by Martin Deh
    Recently, I blogged about enabling the UCM content cache invalidator for Spaces (found here). This can also be enabled for a WebCenter Custom Portal application. The much-overlooked setting is done through the Content Repository connection definition in the JDeveloper Application Resources section. Enabling the cache invalidator "sweeper" can be invaluable where UCM content is being updated from within UCM (console) and not within the portal.

    Read the article

  • What could be causing this long waiting time on page load?

    - by Andrew Findlay
    What could be causing a 1.18s wait time when my page loads? Just to make sure I did not have any conflicting or parallel scripts loading, I completely deleted all the script on my home page and ran the speed test again. Although I had a blank website and a 5kb file size, there was still a 900ms "waiting" time. I'm wondering if it could be my server? Any other thoughts or suggestions, as it doesn't seem to be scripts? EDIT - Just ran a DNS test on Pingdom and here are my results. Does this tell me anything? No nameservers found at child?
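
    One way to break that wait down is curl's built-in timing variables (a diagnostic sketch; substitute your own URL). A large gap between "connect" and "first byte" points at server-side processing rather than DNS or scripts:

        curl -o /dev/null -s -w "dns: %{time_namelookup}s\nconnect: %{time_connect}s\nfirst byte: %{time_starttransfer}s\ntotal: %{time_total}s\n" http://example.com/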

    Read the article

  • Should GeeksWithBlogs move to the Wordpress Platform?

    - by Martin Hinshelwood
    Geekswithblogs was my first ever blog, and my first post was on 22nd June 2006. Since then very little functionality has been added. This is not a complaint, but rather an observation that it is very hard to keep up with all of the blogging capabilities that people want. My point would be: "Why bother!" Vote now for a migration of the awesome Geekswithblogs content from SubText to Wordpress.

    Having been a long-time user of GWB, I have been worried of late by my envy of other blogging platforms. I made a number of requests around 10 months ago for things that almost all blogging platforms provide, but which are not available on GWB:

      - Support other comment frameworks - the current comment system is so antiquated that it does not even have the common filters like Facebook, Twitter or Google login to help prevent spam.
      - Tags are not listed in the RSS - this can prevent you from getting the Google juice that you deserve.
      - 301 redirects to a single URL - if you use a custom URL then all your posts are split between both the GWB URL and the custom one. This is VERY bad for SEO.

    I realise that it is difficult to find time to add all of the features that all the uber geeks on this site want, so why bother... let's move to the most popular and most modded platform available and allow everyone to add whatever widgets they like.

    Figure: Vote for Geekswithblogs moving to Wordpress

    Why would I want this?

      - There are over 13k plugins available
      - Easy augmentation model
      - Full mobile support
      - Regular releases
      - lots more...

    This could be a turning point in the legendary history of Geeks With Blogs; be part of it... What can I do to make this happen? I need your help to make this happen:

      - Vote for it
      - Discuss it

    Read the article

  • Allow JMX connection on JVM 1.6.x

    - by Martin Müller
    While trying to monitor a JVM on a remote system using VisualVM, the activation of JMX gave me some challenges. Dr Google and my employer's documentation quickly revealed some -D opts needed for JMX, but strangely it only worked for a Solaris 10 system (my setup: a MacOS laptop monitoring SPARC Solaris based JVMs). On S11 with the same opts I saw "my" JVM listening on port 3000 (which I chose for JMX), but VisualVM was not able to get a connection. Finally I found out that at least my S11 installation needed an explicit setting of the RMI host name. This is what finally worked:

        -Dcom.sun.management.jmxremote=true \
        -Dcom.sun.management.jmxremote.ssl=false \
        -Dcom.sun.management.jmxremote.authenticate=false \
        -Dcom.sun.management.jmxremote.port=3000 \
        -Djava.rmi.server.hostname=s11name.us.oracle.com \

    Maybe this post saves someone else the time I spent on research.
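
    For a quick check that the JMX endpoint is reachable before firing up VisualVM, the JDK's jconsole can connect to the same host and port (hostname as in the example above):

        jconsole s11name.us.oracle.com:3000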

    Read the article

  • Radeon HD 2000, 3000, 4000 on 12.10 Quantal: fglrx (legacy) 12.6 unsupported, what to do?

    - by Andrew Mao
    After upgrading to 12.10 Quantal, the packaged version of fglrx no longer works. I discovered that this is because there is a separate 'legacy' fglrx driver for the HD 2k-4k series cards, but it is incompatible with the xorg server on 12.10. This is the most current version of the driver for HD 2000 through HD 4000 series cards. You can't use the non-legacy fglrx driver, but you can use the open-source radeon driver if you prefer your WM compositing to be laggy and your YouTube videos to play like they would on a Pentium MMX series: http://support.amd.com/us/kbarticles/Pages/catalyst126legacyproducts.aspx

    Usually this driver can be installed in the following way, necessary because apt-get install fglrx would pull in the non-legacy driver:

        wget http://www2.ati.com/drivers/legacy/amd-driver-installer-12.6-legacy-x86.x86_64.zip
        unzip amd-driver-installer-*
        sudo sh ./amd-driver-installer-*.run --buildpkg Ubuntu/quantal
        sudo dpkg -i fglrx*.deb
        sudo aticonfig --initial -f

    If you use a different version of fglrx (for example, a newer 12.9 that doesn't support those cards) then the final command will give you an error, "no supported hardware detected" or something similar. However, everything works at this point and you will get a reasonable xorg.conf:

        ... other stuff
        Section "Device"
            Identifier "aticonfig-Device[0]-0"
            Driver "fglrx"
            BusID "PCI:1:5:0"
        EndSection
        ... other stuff

    At this point you're supposed to reboot and everything will be working with the fglrx driver. However, upon rebooting, you'll be treated to the following errors in Xorg.0.log when fglrx attempts to load:

        (EE) Failed to load /usr/lib/xorg/modules/drivers/fglrx_drv.so: /usr/lib/xorg/modules/drivers/fglrx_drv.so: undefined symbol: noXFree86DRIExtension

    Some searching around will show that this is a problem with the legacy ATI drivers not supporting xserver 1.13 or newer. (Arch Linux thread) ATI has released a fixed driver for its most recent (HD 5000 series or later) cards, but not for the 'legacy' cards yet. The non-legacy ATI drivers can't be used with the old cards.

    What should an Ubuntu user, using one of these HD 2000-4000 series cards, do?

      - Wait for an updated 'legacy' ATI driver that properly works with xserver 1.13?
      - Downgrade back to 12.04 Precise, which uses xserver 1.11?
      - Try to downgrade xserver on 12.10 Quantal to 1.12, which could possibly break Unity and GNOME?
      - Forced upgrade to an HD 5000 series or later card? (Not possible with integrated graphics...)
      - Some other 1337 action that fixes this problem painlessly?

    Read the article

  • What tasks should be explicitly mentioned in a job reference? [closed]

    - by Martin
    Glossary

    A job reference (see also the German version) is a letter from the (former) employer that states what the employee did, and how well he did it. There are oh so weird rules here on how to phrase stuff therein, but this is not what this question is about.

    Question

    I hope this can even be generally answered, but even if it is country/region specific, I think there is enough international know-how on this site to get useful answers for different regions. I was wondering how detailed the tasks a programmer / developer did should be spelled out in a job reference. (After all, they can be spelled out in all detail in a CV when applying for a new job.) So how much detail is usual for a job reference?

    Example

        Developed Windows applications in C++

    or

        Developed Windows Desktop Applications using C++ with MS Visual Studio 2005 and MFC, utilising Boost 1.47 and specific library xyz, focusing on subsystem abc for numerical calculations of ... etc.

    What makes more sense?

    Read the article

  • ASP.NET MVC Cookbook - public review

    - by Andrew Siemer - www.andrewsiemer.com
    I have recently started writing another book. The topic of this book is ASP.NET MVC. This book differs from my previous book in that rather than working towards building one project from end to end, this book will demonstrate specific topics from end to end. It is a recipe book (hence the cookbook name) and will be part of the Packt Publishing cookbook series. Example recipes in this book might be how to consume JSON, creating a master/details page, jQuery modal popups, custom ActionResults, etc. Basically, anything recipe-oriented around the topic of ASP.NET MVC might be acceptable. If you are interested in helping out with the review process, you can join the "ASP.NET MVC 2 Cookbook-review" group on Google here: http://groups.google.com/group/aspnet-mvc-2-cookbook-review. Currently the suggested TOC for the project is listed. Also, chapters 1, 2, and most of 8 are posted. Chapter 5 should be available tonight or tomorrow. In addition to reporting any errors that you might find (much appreciated), I am very interested in hearing about recipes that you want included, expanded, or removed (as being redundant or overly simple). Any input is appreciated! Hearing user feedback after the book is complete is a little late, in my opinion (unless it is positive feedback, of course). Thank you!

    Read the article

  • Change from tri-boot to dual-boot

    - by Andrew Robinson
    I have been tri-booting Windows 7, Windows 8 Release Candidate and Ubuntu 12.04 LTS for a few months now. I have decided that, since I have no touch screen, I will not purchase Win 8. I now want to get rid of the Win 8 RC, then add that partition space to my Ubuntu partition, but have no idea how to accomplish this. Do I need to uninstall Win 8 RC from within Windows first? The grub loader sends me to the Win 8 loader, where I have Win 7 as the default. Does that complicate things? Any assistance anyone can give would be greatly appreciated.
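
    For what it's worth, the usual outline for this kind of change (a sketch of the generic approach, not tested against this exact setup) is below; there is nothing to uninstall from within Windows, since Win 8 RC lives entirely in its own partition:

        # From an Ubuntu live USB (a mounted root partition cannot be resized):
        # 1. In GParted, delete the Win 8 RC partition, then grow the Ubuntu
        #    partition into the freed space (only if the two are adjacent).
        # 2. Boot back into the installed Ubuntu and refresh the boot menu so
        #    the stale Win 8 entry disappears:
        sudo update-grub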

    Read the article

  • Smarty: Configurable Comments and Code Templates

    - by Martin Fousek
    Hello, today we would like to show you a few improvements we have prepared in the PHP Smarty Framework for NetBeans 7.3. So let's talk about the adjustable toggle comment action and code templates.

    Configurable Comments

    As some of you requested, we implemented a toggle comment action with adjustable behavior. In NetBeans 7.3 you can choose in Options between commenting as "Smarty comments everywhere" or "Language sensitive comments" in Smarty templates.

    Toggle comment language sensitive:

    Toggle comment as Smarty comment everywhere:

    Code Templates

    In NetBeans 7.3 we will provide by default many code templates inside Smarty templates or directly inside Smarty tags. Code templates should be available for all built-in or custom functions and modifiers of Smarty 3.x. Besides that, you should be able to define additional custom templates easily in Options -> Editor -> Code Templates for "Smarty Templates" or directly for "Smarty Markup" (which means code templates inside a Smarty tag). You can also take advantage of the selection templates, which are able to wrap your code with a chosen Smarty tag.

    That's all for today. As always, please test it and report all the issues or enhancements you find in NetBeans BugZilla (component php, subcomponent Smarty).
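
    To illustrate what the two comment modes produce (a small sketch; {$title} is an invented variable), language sensitive commenting uses the comment syntax of the surrounding context, while Smarty-everywhere always uses Smarty's own syntax:

        <!-- language sensitive: an HTML comment, delivered to the browser as-is -->
        {* Smarty comment everywhere: stripped by the engine, never sent to the browser *}
        <h1>{$title}</h1>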

    Read the article

  • Do you know the minimum builds to create on any branch?

    - by Martin Hinshelwood
    You should always have three builds on your team project. These should be set up and tested using an empty solution before you write any code at all.

    Figure: Three builds named in the format [TeamProject].[AreaPath]_[Branch].[Gate|CI|Nightly] for every branch.

    These builds should use the same XAML build workflow; however, you may set them up to run a different set of tests depending on the time it takes to run a full build (example names follow the list below):

      - Gate - only needs to run the smallest set of tests, but should run most if not all of the Unit Tests. This is run before developers are allowed to check in.
      - CI - this should run all Unit Tests and all of the automated UI tests. It is run after a developer check-in.
      - Nightly - the Nightly build should run all of the Unit Tests, all of the automated UI tests, and all of the Load and Performance tests. The Nightly build is time consuming and will run but once a night. Packaging of your product for testing the next day may be done at this stage as well.

    Figure: You can control what tests are run and what data is collected while they are running.

    Note: We do not run all the tests every time because of the time-consuming nature of running some tests, but ALL tests should be run overnight.

    Note: If you had a really large project with thousands of tests, including long-running Load tests, you may need to add a Weekly build to the mix.

    Figure: Bad example, you can't tell what these builds do if they are in a larger list

    Figure: Good example, you know exactly what project, branch and type of build these are for.

    Technorati Tags: SSW, SSW Rules, VS2010, VS ALM, Team Build 2010, Team Build
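
    Applied to a hypothetical team project, the naming format resolves to something like this (project "Northwind", area path "Web", and branches "Main" and "Release" are invented for illustration):

        Northwind.Web_Main.Gate
        Northwind.Web_Main.CI
        Northwind.Web_Main.Nightly
        Northwind.Web_Release.CI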

    Read the article

  • Partial recalculation of visibility on a 2D uniform grid

    - by Martin Källman
    Problem

    Imagine that we have a 2D uniform grid of dimensions N x N. For this grid we have also pre-computed a visibility look-up table, e.g. with DDA, which answers the boolean query "is cell X visible from cell Y?". The look-up table is a complete graph K_N over the cells V in the grid, with each edge E being a binary value denoting the visibility between its vertices.

    Question

    If any given cell has its visibility modified, is it possible to extract the subset E_delta of edges which must have their visibility recomputed due to the change, so as to avoid a full-on recomputation for the entire grid? (Which is N(N-1)/2 or N^2, depending on the implementation.)

    Update

    If it is not possible to solve this in closed form, then maintaining a separate mapping from each cell to every cell pair whose line intersects said cell might also be an option. This obviously consumes more memory, but the data is static. The increased memory requirement could be reduced by introducing a hierarchy, subdividing the grid into smaller parts, and by doing so the above mapping can be reused for each sub-grid. This would come at a cost in terms of increased computation relative to the number of subdivisions; it would also require a resumable ray-casting algorithm.
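
    A minimal Python sketch of the mapping idea from the update (the grid size is illustrative, and a Bresenham walk stands in for whichever DDA traversal the table was built with): for each cell pair, register the pair against every cell its sight line crosses, so when a cell changes, only the pairs registered on it need recomputation.

        from collections import defaultdict
        from itertools import combinations

        def line_cells(a, b):
            """Cells crossed by the sight line from a to b (Bresenham walk)."""
            (x0, y0), (x1, y1) = a, b
            dx, dy = abs(x1 - x0), -abs(y1 - y0)
            sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
            err, cells = dx + dy, []
            while True:
                cells.append((x0, y0))
                if (x0, y0) == (x1, y1):
                    return cells
                e2 = 2 * err
                if e2 >= dy: err += dy; x0 += sx
                if e2 <= dx: err += dx; y0 += sy

        N = 8  # illustrative grid size
        cells = [(x, y) for x in range(N) for y in range(N)]
        affected = defaultdict(set)  # cell -> pairs whose sight line crosses it
        for a, b in combinations(cells, 2):
            for c in line_cells(a, b):
                affected[c].add((a, b))

        # When one cell changes, only these edges need their visibility redone:
        changed = (3, 4)
        print(len(affected[changed]), "of", N*N*(N*N-1)//2, "pairs to recompute")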

    Read the article
