Search Results

Search found 2844 results on 114 pages for 'daniel martin'.

Page 15/114 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • How to install Radeon 3670 HD graphics drivers for Ubuntu 10.04 64 bit with OpenGL 2.0 support?

    - by Daniel
    I've been having trouble getting graphics drivers to work that support OpenGL 2.0. I've had some luck with the Ubuntu drivers, but these only support OpenGL 1.3. I thought I would document the methods that I have tried, both to see if anyone else has ideas and to save time for people with a similar problem.

    System details:
    - Ubuntu 10.04 (Lucid) 64 bit
    - Kernel Linux 2.6.32-44-generic
    - GNOME 2.30.2
    - ATI Mobility Radeon HD 3670

    Attempted methods:

    1. Installing proprietary drivers using the "Hardware Drivers" (Jockey) GUI. This GUI offers an "ATI/AMD proprietary FGLRX graphics driver", but any attempt to install it results in a "Sorry, installation of this driver failed" error. The log file is here. There is an Ask Ubuntu question that covers this scenario and notes that there is a known bug with Jockey.

    2. Installing the proprietary drivers manually. The answer to the question above linked to this wiki page, which gives instructions for installing Catalyst 12.6. The supported hardware list states that the 3670 is not supported in 12.6 and that 12.4 must be used. This is somewhat confusing, as AMD's website suggests that the 12.6 driver should be installed for the 3670. There have been user reports that the R600 (the GPU inside the 3670 card) doesn't work with 12.6, so I'm sticking with 12.4. I'm following these instructions to install the proprietary drivers on Lucid. I downloaded the 12.4 driver from the AMD website. Building the package worked fine, generating the fglrx, fglrx-dev, fglrx-amdcccle, and fglrx-modaliases deb packages successfully. However, when I try to install these using dpkg it gives me these errors. The make log referenced in the error is here.

    Ask Ubuntu references:
    - What is the correct way to install ATI Catalyst Video Drivers?
    - Cannot install ATI/AMD FGLRX restricted graphic drivers
    - Is my ATI graphics card supported in Ubuntu?

    Read the article

  • How to perform regular expression based replacements on files with MSBuild

    - by Daniel Cazzulino
    And without a custom DLL with a task, too. The example at the bottom of the MSDN page on MSBuild inline tasks already provides pretty much all you need for that: a TokenReplace task that receives a file path, a token, and a replacement, and uses string.Replace with them. Similar in spirit, but way more useful in its implementation, is the RegexTransform in NuGet's Build.tasks. It's much better not only because it supports full regular expressions, but also because it receives items, which makes it very amenable to batching (applying the transforms to multiple items). You can read about how to use it for updating assemblies with a version number, for example. I recently had a need to also supply RegexOptions to the task, so I extended the metadata and a little bit of the inline task so that it can parse the optional flags. So when using the task, I can pass the flags as item metadata as follows: ... Read full article
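
    The flag-parsing idea itself is language-agnostic. As a rough sketch of what the inline task has to do (the original is a C# MSBuild task; Java stands in here, and the option names are the ones RegexOptions uses), parse a comma-separated options string from the item metadata into the engine's flag bits:

        import java.util.regex.Pattern;

        // Illustrative only: maps option names as they might appear in item
        // metadata (e.g. "IgnoreCase, Multiline") onto regex flag bits.
        public class RegexFlagParser {
            public static int parse(String options) {
                int flags = 0;
                if (options == null || options.trim().isEmpty()) {
                    return flags;
                }
                for (String name : options.split(",")) {
                    switch (name.trim()) {
                        case "IgnoreCase": flags |= Pattern.CASE_INSENSITIVE; break;
                        case "Multiline":  flags |= Pattern.MULTILINE; break;
                        case "Singleline": flags |= Pattern.DOTALL; break;
                        default: throw new IllegalArgumentException("Unknown option: " + name.trim());
                    }
                }
                return flags;
            }
        }

    A caller would then compile with Pattern.compile(expression, RegexFlagParser.parse(metadata)), which is the same shape as passing parsed RegexOptions into Regex.Replace inside the task.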

    Read the article

  • What are the options for simple Ajax calls for a Java webapp?

    - by Cedric Martin
    I've got a very simple need and I don't know what the options are. If I simplify, users see a web page like this, served by a Java webapp server:

        [-] red
        [x] green
        [-] blue
        [-] yellow

        The selected color is green

    Then I want the user to be able to select the yellow color and have the part of the page containing the relevant text change to:

        [-] red
        [-] green
        [-] blue
        [x] yellow

        The selected color is yellow

    Basically I want something a bit more user-friendly than simply using HTTP GET all the time. There will be a lot of options the user can select from, and they will affect an (HTML formatted) text displayed on the page. I want the user to see his change as soon as possible, without the page fully reloading and without being redirected to another page. There will be a client/server round-trip (the information to display, which depends on the options selected, isn't available on the client side, so I cannot do it all in JavaScript in the browser). I'd like to use Ajax requests but I don't know which way to go: jQuery, GWT, or something else. What are my options, and what would be the pros and cons of the various approaches?

    P.S.: I'm very familiar with Java (SCP since the last century and basically being a Java programmer for the last 12 years or so) but not familiar at all with JavaScript (though I did hack a few Ajaxy calls years ago, way before the great libraries existed).
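
    Whichever library ends up on the client, the server side of the round-trip can stay a plain servlet that returns just the HTML fragment to swap in. A minimal sketch (the servlet class, URL and parameter names are invented for the example; on the client, a single jQuery call such as $('#colors').load('colors?selected=yellow') would replace the fragment in place):

        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical endpoint, e.g. mapped to /colors, that renders only the
        // fragment for the option list, so the page itself never reloads.
        public class ColorFragmentServlet extends HttpServlet {
            private static final String[] COLORS = {"red", "green", "blue", "yellow"};

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                String selected = req.getParameter("selected"); // e.g. "yellow"
                resp.setContentType("text/html;charset=UTF-8");
                PrintWriter out = resp.getWriter();
                for (String color : COLORS) {
                    // Mark only the currently selected color.
                    out.printf("[%s] %s<br/>%n", color.equals(selected) ? "x" : "-", color);
                }
                out.printf("<p>The selected color is %s</p>%n", selected);
            }
        }

    jQuery would keep the client side to a few lines and has the gentlest learning curve coming from zero JavaScript; GWT would instead let you write the client in Java, at the cost of a much heavier toolchain for such a small need.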

    Read the article

  • What micro web-framework has the lowest overhead but includes templating

    - by Simon Martin
    I want to rewrite a simple, small (10 page) website, and apart from a contact form it could be written in pure HTML. It is currently built with classic ASP and Dreamweaver templates. The reason I'm not simply writing 10 HTML pages is that I want to keep the layout all in one place, so I need either includes or a master page. I don't want to use Dreamweaver templates, or batch processing (like org-mode), because I want to be able to edit using Notepad (or Visual Studio), since occasionally I might need to edit a file on the server (Go Daddy's IIS admin interface will let me edit text). I don't want to use ASP.NET MVC or WebForms (which I use in my day job) because I don't need all the overhead they bring with them when essentially I'm serving up 9 static files, 1 contact form and 1 list of clubs (that I aim to filter with jQuery). The shared hosting package I have on Go Daddy seems to take a long time to spin up when serving aspx files.

    Currently the clubs page is driven from an MS SQL database that I try to keep up to date by manually checking the dojo locator on the main HQ pages and editing the entries myself, which is again way over the top. I aim to get a text file with the club details (probably in JSON or XML format) and use that as the source for the clubs page. There will need to be a bit of programming for this, as the HQ site is unable to provide an extract / feed, so something will have to scrape the site periodically to update my clubs persistence file. I'd like that to be automated, but I'm happy to have it triggered by a visit to the clubs page so I don't need to worry about scheduling a job. I would probably have a separate process that updates the persistence file and has nothing to do with the rest of the site.

    Ideally I'd like to use Mercurial (or Git) to publish. I know Bitbucket (and GitHub) both serve static-page sites, so they wouldn't work in this scenario (dynamic pages and a contact form), but that's the model I'd like to use if there is such a thing.

    My requirements are:
    - Simple templating system: one place to define headers, footers, menus etc., that can be edited using just Notepad.
    - Very minimal / lightweight framework. I don't need a monster for 10 pages.
    - Must run either on IIS7 (shared Go Daddy Windows hosting) or on another free host.

    Read the article

  • Granular Clipboard Control in Oracle IRM

    - by martin.abrahams
    One of the main leak-prevention controls that customers are looking for is clipboard control. After all, there is little point in controlling access to a document if authorised users can simply make unprotected copies via the cut and paste mechanism. Oddly, for such a fundamental requirement, many solutions only offer very simplistic clipboard control, and require the customer to make an awkward choice between usability and security.

    In many cases, clipboard control is simply an ON-OFF option. By turning the clipboard OFF, you disable one of the most valuable edit functions known to man. Try working for any length of time without copying and pasting, and you'll soon appreciate how valuable that function is. Worse, some solutions disable the clipboard completely - not just for the protected document but for all of the various applications you have open at the time. Normal service is only resumed when you close the protected document. In this way, policy enforcement bleeds out of the particular assets you need to protect and interferes with the entire user experience.

    On the other hand, turning the clipboard ON satisfies a fundamental usability requirement - but also makes it really easy for users to create unprotected copies of sensitive information, maliciously or otherwise. All they need to do is paste into another document. If creating unprotected copies is this simple, you have to question how much you are really gaining by applying protection at all. You may not be allowed to edit, forward, or print the protected asset, but all you need to do is create a copy and work with that instead - and that activity would not be tracked in any way. So, a simple ON-OFF control creates a real tension between usability and security. If you are only using IRM on a small scale, perhaps security can outweigh usability - the business can put up with the restriction if it only applies to a handful of important documents. But try extending protection to large numbers of documents and large user communities, and the restriction rapidly becomes really unwelcome.

    I am aware of one solution that takes a different tack. Rather than disable the clipboard, pasting is always permitted, but protection is automatically applied to any document that you paste into. At first glance this sounds great - protection travels with the content. However, at any scale this model may not be so appealing once you've had to deal with support calls from users who have accidentally applied protection to documents that really don't need it - which would be all too easily done. This may help control leakage, but it also pollutes the system with documents that have policies applied with no obvious rhyme or reason, and it can seriously inconvenience the business by making non-sensitive documents difficult to access. And which policy applies if you paste some protected content into an already protected document? There are no prizes for guessing that Oracle IRM takes a rather different approach.

    The Oracle IRM Approach

    Oracle IRM offers a spectrum of clipboard controls between the extremes of ON and OFF, and it leverages the classification-based rights model to give granular control that satisfies both security and usability needs. Firstly, we take it for granted that if you have EDIT rights, of course you can use the clipboard within a given document. Why would we force you to retype a piece of content that you want to move from HERE... to HERE...? If the pasted content remains in the same document, it is equally well protected whether it be at the beginning, middle, or end - or all three. So, the first point is that Oracle IRM always enables the clipboard if you have the right to edit the file.

    Secondly, whether we enable or disable the clipboard, we only affect the protected document. That is, you can continue to use the clipboard in the usual way for unprotected documents and applications regardless of whether the clipboard is enabled or disabled for the protected document(s). And if you have multiple protected documents open, each may have the clipboard enabled or disabled independently, according to whether you have Edit rights for each. So, even for the simplest cases - the ON-OFF cases - Oracle IRM adds value by containing the effect to the protected documents rather than the whole desktop environment.

    Now to the granular options between ON and OFF. Thanks to our classification model, we can define rights that enable pasting between documents in the same classification - i.e. between documents that are protected by the same policy. So, if you are working on this month's financial report and you want to pull some data from last month's report, you can simply cut and paste between the two documents. The two documents are classified the same way, subject to the same policy, so the content is equally safe in both. However, if you try to paste the same data into an unprotected document or a document in a different classification, you can be prevented. Thus, the control balances the legitimate user requirement to allow pasting with the legitimate information security concern to keep data protected.

    We can take this further. You may have the right to paste between related classifications of document. So, the CFO might want to copy some financial data into a board document, where the two documents are sealed to different classifications. The CFO's rights may well allow this, as it is a reasonable thing for a CFO to want to do. But policy might prevent the CFO from copying the same data into a classification that is accessible to external parties.

    The option to copy between classifications may be for specific classifications or open-ended. That is, your rights might enable you to go from A to B but not to C, or you might be allowed to paste into any classification subject to your EDIT rights. As for so many features of Oracle IRM, our classification-based rights model makes this type of granular control really easy to manage - you simply define that pasting is permitted between classifications A and B, but omit C. Or you might define that pasting is permitted between all classifications, but not to unprotected locations. The classification model enables millions of documents to be controlled by a few such rules.

    Finally, you MIGHT have the option to paste anywhere, such that unprotected copies may be created. This is rare, but a legitimate configuration for some users, some use cases, and some classifications - but not something that you have to permit simply because the alternative is too restrictive. As always, these rights are defined in user roles, so different users are subject to different clipboard controls as required in different classifications.

    So, where most solutions offer just two clipboard options - ON-OFF or ON-but-encrypt-everything-you-touch - Oracle IRM offers real granularity that leverages our classification model. Indeed, I believe it is the lack of a classification model that makes such granularity impractical for other IRM solutions: the matrix of rules for controlling pasting would be impossible to manage, given how many documents there are to consider, with more being created all the time.
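
    To make the classification-based rule model concrete, here is a minimal sketch of the kind of check described above. This is not Oracle's API; every name is invented for illustration:

        import java.util.Map;
        import java.util.Set;

        // Illustrative model only: pasting is allowed within a classification,
        // between explicitly permitted classifications, and (rarely) into
        // unprotected locations.
        public class PastePolicy {
            // For each source classification, the target classifications a role
            // may paste into, e.g. "A" -> {"B"} but not "C".
            private final Map<String, Set<String>> allowedTargets;
            private final boolean mayPasteToUnprotected;

            public PastePolicy(Map<String, Set<String>> allowedTargets,
                               boolean mayPasteToUnprotected) {
                this.allowedTargets = allowedTargets;
                this.mayPasteToUnprotected = mayPasteToUnprotected;
            }

            // A null target means an unprotected document or application.
            public boolean canPaste(String source, String target) {
                if (target == null) {
                    return mayPasteToUnprotected;
                }
                if (source.equals(target)) {
                    return true; // same classification, same policy
                }
                Set<String> targets = allowedTargets.get(source);
                return targets != null && targets.contains(target);
            }
        }

    The point of the classification model is visible in the data shape: a handful of map entries covers every document in those classifications, rather than one rule per document.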

    Read the article

  • Setting Up IRM Test Content

    - by martin.abrahams
    A feature of the 11g IRM Server that sometimes gets overlooked is the ability to set up some test content that any IRM user can access to verify that their IRM Desktop can reach the server, authenticate successfully, and render protected content successfully. Such test content is useful for new users, and in troubleshooting scenarios. Here's how to set up some test content...

    In the management console, go to IRM - Administration - Test Content, as shown. The console will display a list of test content - initially an empty list. Use the Add option to specify the URL of a document or image, and define one or more labels for the test content in whichever languages your users favour.

    Note that you do not need to seal the image or document in order to use it as test content. Nor do you need to set up any rights for the test content. The IRM Server will handle the sealing and rights assignment automatically such that all authenticated users are authorised to view the test content. Repeat this process for as many different types of content as you would like to offer for test purposes - perhaps a Word document, a PDF document, and an image.

    To keep things simple the first time I did this, I used the URL of one of the images in the IRM Server's UI - so there was no problem with the IRM Server being able to reach that image. Whatever content you want to use, the IRM Server needs to be able to reach it at the URL you specify.

    Using Test Content

    Open a browser and browse to the URL that the IRM Desktop normally uses to access the IRM Server, for example: http://irm11g.oracle.com/irm_desktop

    If you are not sure, you can find this URL in the Servers tab of the IRM Options dialog. Go to the Test tab, and you will see your test content listed. By opening one of the items, you can verify that your IRM Desktop is healthy and that you can authenticate to the IRM Server.

    Read the article

  • Ghost team foundation build controllers

    - by Martin Hinshelwood
    Quite often after an upgrade there are things left over. Most of the time they are easy to delete, but sometimes it takes a little effort. Even rarer are those times when something just will not go away no matter how hard you try. We have had a ghost team build controller hanging around for a while now, and it had defeated my best efforts to get rid of it. The build controller was from our old TFS server, from before our TFS 2010 Beta 2 upgrade, and was really starting to annoy me. Every time I try to delete it I get the message:

        "Controller cannot be deleted because there are build in progress" - Manage Build Controller dialog

    Figure: Deleting a ghost controller does not always work.

    I ended up checking all of our 172 Team Projects for the build that was queued, but did not find anything. Jim Lamb pointed me to the "tbl_BuildQueue" table in the Team Project Collection database, and sure enough there was the nasty little beggar.

    Figure: The ghost build was easily spotted.

    Adam Cogan asked me: "Why did you suspect this one?" Well, there are a number of things that led me to suspect it:
    - QueueId is very low: look at the other items; they are in the thousands, not single digits.
    - ControllerId: I know there is only one legitimate controller, and I am assuming that 6 relates to "zzUnicorn".
    - DefinitionId: this is a very low number, and when I looked it up in "tbl_BuildDefinition" it did not exist.
    - QueueTime: as we did not upgrade to TFS 2010 until late 2009, a date of 2008 for a queued build is very suspect.
    - Status: a status of 2 means that it is still queued.

    This build must have been queued long ago when we were using TFS 2008, probably a beta, and it never got cleaned up. As controllers are new in TFS 2010, the upgrade would have created the "zzUnicorn" controller to handle any build servers that already existed. I had previously deleted the agent, but leaving the controller just looks untidy. Now that the ghost build has been identified there are two options:

    1. Delete the row. I would not recommend ever deleting anything from the database to achieve something in TFS. It is really not supported.
    2. Set the Status to cancelled (recommended). This is the best option, as TFS will then clean it up itself.

    So I set the Status of this build to cancelled, and sure enough it disappeared after a couple of minutes; I was then able to delete the "zzUnicorn" controller.

    Figure: Almost completely clean.

    Now all I have to do is get rid of that untidy "zzBunyip" agent, but that will require rewriting one of our build scripts, which will have to wait for now.

    Technorati Tags: ALM, TFBS, TFS 2010

    Read the article

  • Jupiter in Ubuntu 13.10 (Laptop Overheating)

    - by Daniel Pacheco
    I was wondering if Jupiter (an interface for display, power and device control) will work in Ubuntu 13.10, because my laptop (Toshiba Satellite C855D, AMD A6-4400M with Radeon HD Graphics, running Ubuntu 13.04 x64) keeps overheating. I tried some other tools, like laptop-mode-tools and TLP, but none of them work at all. Jupiter was the only option, and it's supposedly discontinued; the version I'm using is maintained by the JoliCloud team, but they told me they're not sure if it will work with 13.10... If it doesn't work, I'm definitely not upgrading, since overheating is a major issue for me... Thanks in advance!

    Read the article

  • Hancon / Hanwang Graphics Tablet not recognised

    - by Martin Kyle
    I'm totally lost. I've just built a new system and installed Ubuntu 12.04. It's my first time with Linux, and getting into the terminal / command line for the first time since IBM DOS 5 and Windows 3.1 has been a steep learning curve. However, the interface works beautifully, except that it doesn't recognize my Hanvon Artmaster AM1209. I have sent diagnostics to Digimend, and Favux was kind enough to advise that the tablet should be using the Wacom X driver, as the Hanvon is actually a Hanwang and these should be supported.

    lsusb reports:

        ID 0b57:8501 Beijing HanwangTechnology Co., Ltd

    xinput list reports:

        ⎡ Virtual core pointer                  id=2   [master pointer  (3)]
        ⎜   ↳ Virtual core XTEST pointer        id=4   [slave  pointer  (2)]
        ⎜   ↳ PS/2+USB Mouse                    id=8   [slave  pointer  (2)]
        ⎣ Virtual core keyboard                 id=3   [master keyboard (2)]
            ↳ Virtual core XTEST keyboard       id=5   [slave  keyboard (3)]
            ↳ Power Button                      id=6   [slave  keyboard (3)]
            ↳ Power Button                      id=7   [slave  keyboard (3)]
            ↳ Eee PC WMI hotkeys                id=9   [slave  keyboard (3)]
            ↳ AT Translated Set 2 keyboard      id=10  [slave  keyboard (3)]

    Favux suggested inspecting /var/log/Xorg.0.log for the tablet, but I cannot see any mention of it, and that is as far as I have got. I've tried researching the problem, but I am struggling with all the new terminology and the fact that I want the PC to be a means to an end, not the end in itself where I spend the rest of my days tweaking and testing rather than just using it. Hope there is some help out there.

    Read the article

  • Keep Your Eye on the Ball

    - by [email protected]
    With the FIFA World Cup 2010 in South Africa almost a week underway, soccer fans all around the world are talking about at least two things: that typical vuvuzela sound, and the new Jabulani ball, which is said to move unpredictably and be difficult to handle, with the altitude of the World Cup stadiums somehow also seeming to be a contributing factor. (Picture taken from http://www.flickr.com/photos/warrenski/4143923059/ under a Creative Commons license.)

    Although FIFA states that it hasn't received any official complaints, the end users don't seem to be very happy with this new ball. This brings me to a comparison with IT management and testing. When you're introducing a new product - in IT terms, introducing a new application - you would like to test all possible scenarios that your end users could be using and experiencing. However, that's a very time and resource intensive process to repeat for every application change or update. It's like getting ready for the big game with no game plan.

    That's why a new approach has been developed, one that's based on the 80/20 rule. Testing 80% of the application will cost about 20% of the effort. The remaining 20% of your application will not be tested before deployment, but monitored with a real user monitoring solution immediately after deployment. These tools track all user experiences, including error messages and the performance and availability metrics from an end user perspective. Should any anomaly occur, you would be able to repair it quickly so you and your end users can get back into the game. These real user sessions can easily be converted into testing scripts, so the 80% of the application that is tested can be complemented with the remaining 20%.

    The Oracle Enterprise Manager 11g group of products offers both the real user monitoring solution, with Oracle Real User Experience Insight, and the required testing solution, with Oracle Application Testing Suite. Visit our Oracle Enterprise Manager 11g resource center and find out how its Business-Driven IT Management approach will help you keep your eye on your business ball. Happy World Cup.

    Read the article

  • Are project managers useful in Scrum?

    - by Martin Wickman
    There are three roles defined in Scrum: Team, Product Owner and Scrum Master. There is no project manager; instead, the project manager's job is spread across the three roles. For instance:

    - The Scrum Master: responsible for the process; removes impediments.
    - The Product Owner: manages and prioritizes the list of work to be done to maximize ROI; represents all interested parties (customers, stakeholders).
    - The Team: self-manages its work by estimating and distributing it among themselves; responsible for meeting their own commitments.

    So in Scrum there is no longer a single person responsible for project success. There is no command-and-control structure in place. That seems to baffle a lot of people, specifically those not used to agile methods and, of course, PMs. I'm really interested in this and in what your experiences are, as I think this is one of the things that can make or break a Scrum implementation. Do you agree with Scrum that a project manager is not needed? Do you think such a role is still required? Why?

    Read the article

  • Jerome has written a nice article on integrating SceneBuilder with several IDEs

    - by daniel
    My colleague Jerome Cambon has written a very nice article about how to get SceneBuilder working with several IDEs. The JavaFX SceneBuilder is at root a stand-alone tool, but there are various tweaks and tricks that you can use to make using it in conjunction with your favorite IDE a more enjoyable experience. In his article, Jerome shows how this can be done with NetBeans (7.3), with Eclipse and Tom's excellent e(fx)clipse plugin, and with IntelliJ IDEA. Good work Jerome!

    Read the article

  • Display a JSON-string as a table

    - by Martin Aleksander
    I'm totally new to JSON, and I have a JSON string I need to display as a user-friendly table. I have this file, http://ish.tek.no/json_top_content.php?project_id=11&period=week, which shows ID numbers for products (title) and the number of views. The title ID should be connected to this file, http://api.prisguide.no/export/product.php?id=158200, so I can get a table like this:

        ID     | Product Name        | Views
        158200 | Samsung Galaxy SIII | 21049

    How can I do this?

    Read the article

  • Hide collision layer in libgdx with TiledMap?

    - by Daniel Jonsson
    I'm making a 2D game with libgdx, and I'm using its TileMapRenderer to render my map, which I have made in the map editor Tiled. In Tiled I have a dedicated collision layer. However, I can't figure out how I'm supposed to hide it and its tiles in the game. This is how a map is loaded:

        TiledMap map = TiledLoader.createMap(Gdx.files.internal("maps/map.tmx"));
        TileAtlas atlas = new TileAtlas(map, Gdx.files.internal("maps"));
        tileMapRenderer = new TileMapRenderer(map, atlas, 32, 32);

    Currently the collision tiles are rendered on top of everything else, as I see them in the map editor.
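
    In case upgrading libgdx is an option: the old TiledLoader/TileMapRenderer classes were later replaced by TmxMapLoader and OrthogonalTiledMapRenderer, and with that newer API hiding a layer is a one-liner. A sketch against the newer API (the layer name "collision" is an assumption; use whatever the layer is called in Tiled):

        import com.badlogic.gdx.maps.MapLayer;
        import com.badlogic.gdx.maps.tiled.TiledMap;
        import com.badlogic.gdx.maps.tiled.TmxMapLoader;
        import com.badlogic.gdx.maps.tiled.renderers.OrthogonalTiledMapRenderer;

        public class MapSetup {
            // Loads the map and hides the collision layer; the renderer then
            // skips it, while collision code can still read its cells.
            public static OrthogonalTiledMapRenderer load() {
                TiledMap map = new TmxMapLoader().load("maps/map.tmx");
                MapLayer collisionLayer = map.getLayers().get("collision");
                collisionLayer.setVisible(false);
                return new OrthogonalTiledMapRenderer(map);
            }
        }

    Alternatively, OrthogonalTiledMapRenderer.render(int[] layers) lets you whitelist the layer indices to draw, which achieves the same result without touching visibility flags.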

    Read the article

  • How can architects work with self-organizing Scrum teams?

    - by Martin Wickman
    An organization with a number of agile Scrum teams also has a small group of people appointed as "enterprise architects". The EA group acts as control and gatekeeper for quality and adherence to decisions. This leads to overlaps between team decisions and EA decisions. For instance, the team might want to use library X, or to use REST instead of SOAP, but the EA does not approve. Now, this can lead to frustration when team decisions are overruled. Taken far enough, it can potentially lead to a situation where the EA people "grab" all power and the team ends up feeling demotivated and not very agile at all. The Scrum guide has this to say about it:

        Self-organizing: No one (not even the Scrum Master) tells the Development Team how to turn Product Backlog into Increments of potentially releasable functionality.

    Is that reasonable? Should the EA team be disbanded? Should the teams refuse, or simply comply?

    Read the article

  • Windows expand over 2 monitors in quad-monitor setup

    - by Martin
    I just installed Ubuntu 11.10 with my previous hardware setup: 4 monitors and 2 identical NVIDIA graphics cards. Dragging windows around all 4 monitors works nicely, but when I maximize a window it always expands over 2 screens (2x TwinView). I had a workaround for this in 11.04 but can't remember what it was... Maybe one of you has quad monitors up and running with window maximizing on only one screen. My xorg.conf looks like this:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 280.13 (buildd@allspice) Thu Aug 11 20:54:45 UTC 2011
        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 280.13 ([email protected]) Wed Jul 27 17:15:58 PDT 2011

        Section "ServerLayout"
            # Removed Option "Xinerama" "1"
            # Removed Option "Xinerama" "0"
            Identifier  "Layout0"
            Screen   0  "Screen0" 0 0
            Screen   1  "Screen1" 0 1080
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option      "Xinerama" "1"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier  "Mouse0"
            Driver      "mouse"
            Option      "Protocol" "auto"
            Option      "Device" "/dev/psaux"
            Option      "Emulate3Buttons" "no"
            Option      "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier  "Keyboard0"
            Driver      "kbd"
        EndSection

        Section "Monitor"
            Identifier  "Monitor0"
            VendorName  "Unknown"
            ModelName   "Samsung SMB2220N"
            HorizSync   31.0 - 80.0
            VertRefresh 56.0 - 75.0
            Option      "DPMS"
        EndSection

        Section "Monitor"
            Identifier  "Monitor1"
            VendorName  "Unknown"
            ModelName   "Samsung SMB2220N"
            HorizSync   31.0 - 80.0
            VertRefresh 56.0 - 75.0
            Option      "DPMS"
        EndSection

        Section "Device"
            Identifier  "Device0"
            Driver      "nvidia"
            VendorName  "NVIDIA Corporation"
            BoardName   "GeForce GTX 550 Ti"
            BusID       "PCI:2:0:0"
        EndSection

        Section "Device"
            Identifier  "Device1"
            Driver      "nvidia"
            VendorName  "NVIDIA Corporation"
            BoardName   "GeForce GTX 550 Ti"
            BusID       "PCI:3:0:0"
        EndSection

        Section "Screen"
            # Removed Option "TwinView" "True"
            # Removed Option "MetaModes" "nvidia-auto-select, nvidia-auto-select"
            # Removed Option "metamodes" "CRT-0: nvidia-auto-select 1920x1080 +0+0, CRT-1: nvidia-auto-select 1920x1080 +1920+0"
            Identifier   "Screen0"
            Device       "Device0"
            Monitor      "Monitor0"
            DefaultDepth 24
            Option       "TwinView" "1"
            Option       "TwinViewXineramaInfoOrder" "CRT-0"
            Option       "metamodes" "CRT-0: nvidia-auto-select +0+0, CRT-1: nvidia-auto-select +1920+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            # Removed Option "TwinView" "True"
            # Removed Option "MetaModes" "nvidia-auto-select, nvidia-auto-select"
            # Removed Option "metamodes" "CRT-0: nvidia-auto-select 1920x1080 +0+0, CRT-1: nvidia-auto-select 1920x1080 +1920+0"
            Identifier   "Screen1"
            Device       "Device1"
            Monitor      "Monitor1"
            DefaultDepth 24
            Option       "TwinView" "1"
            Option       "TwinViewXineramaInfoOrder" "CRT-0"
            Option       "metamodes" "CRT-0: nvidia-auto-select +0+0, CRT-1: nvidia-auto-select +1920+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Extensions"
            Option "Composite" "Disable"
        EndSection

    Read the article

  • Amazon SOA: database as a Service

    - by Martin Lee
    There is an interesting interview with Werner Vogels which is partly about how Amazon does service-oriented architecture:

        For us service orientation means encapsulating the data with the business logic that operates on the data, with the only access through a published service interface. No direct database access is allowed from outside the service, and there's no data sharing among the services.

    I do not understand that. Why do they need to 'wrap' a database in a service layer if it can already be consumed as a service by other services through database adaptors? Does Amazon do that just because they need to expose the database to third parties, or is there another reason? Why is no direct database access allowed? What are the advantages of such an architectural decision?
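
    To illustrate the distinction the quote is drawing (this is an illustrative sketch, not Amazon's actual code): consumers call a published interface and never see the tables behind it, so the owning team can change the schema or storage engine without breaking anyone.

        import java.util.Optional;

        // The only published way for other services to reach order data.
        // All names here are invented for illustration.
        public interface OrderService {
            Optional<Order> findOrder(String orderId);
            void placeOrder(Order order);
        }

        // Consumers see this value object, never the underlying tables, so the
        // owning team can re-shard or swap the storage engine behind the
        // interface without coordinating with every caller.
        class Order {
            final String orderId;
            final String customerId;
            final long totalCents;

            Order(String orderId, String customerId, long totalCents) {
                this.orderId = orderId;
                this.customerId = customerId;
                this.totalCents = totalCents;
            }
        }

    Contrast that with shared-database integration, where any consumer can couple itself to table layouts; once dozens of services do that, no schema change is safe.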

    Read the article

  • How to use T4 templates in WP7, Silverlight, Desktop or even MonoDroid apps

    - by Daniel Cazzulino
    In other words, how to use T4 templates without ANY runtime dependencies? Yes, it is possible, and quite simple and elegant actually. In a desktop project, just open the Add New Item dialog and search for "text template". From the two available templates, the one that gives you a zero-dependency runtime-usable template is the first one: Preprocessed Text Template. Once unfolded, you get the .tt file, but also a dependent .cs file automatically generated. Note the Custom Tool associated with the file. If you open up the .cs file, you will see that it doesn't contain the rendered "Hello World!!!" I added in the .tt, but rather a full class named after the template file itself:

        namespace ConsoleApplication1
        {
            using System;

        #line 1 "C:\Temp\ConsoleApplication1\ConsoleApplication1\PreTextTemplate1.tt"
            [System.CodeDom.Compiler.GeneratedCodeAttribute("Microsoft.VisualStudio.TextTemplating", "10.0.0.0")]
            public partial class PreTextTemplate1 : PreTextTemplate1Base
            {
                public virtual string TransformText()
                {
                    this.GenerationEnvironment = null;
                    this.Write("Hello World!!!");
                    return this.GenerationEnvironment.ToString();
                }
            }
            #region Base class
            ...
            #endregion
        }

    ... Read full article

    Read the article

  • mpirun -np N, what if N is larger than my core number?

    - by Daniel
    Say I have a 4-core workstation. What would Linux (Ubuntu) do if I execute mpirun -np 9 XXX?

    Q1. Will all 9 processes run together immediately, or will they run 4 after 4?

    Q2. I suppose that using 9 is not good because of the remainder of 1: will it confuse the computer at all, or will the "head" of the computer decide which core among the 4 will be used? Or will it be picked randomly? Who decides which core to call?

    Q3. If my CPU is not bad, my RAM is okay and large enough, and my case is not very big, is it a good idea, in order to fully use my CPU and RAM, to run mpirun -np 8 XXX, or even mpirun -np 12 XXX?

    Q4. Who decides all of this efficiency optimization: Ubuntu, Linux, the motherboard, or the CPU?

    Your enlightenment would be really appreciated.

    Read the article

  • Why are two indicator-network versions being worked on?

    - by Daniel Rodrigues
    Some months ago, on the road to Ubuntu Maverick, a new system indicator, indicator-network (with connman as a backend), started to be developed. The plan was to get it into UNE and release it with no notification area. Unfortunately it didn't make it into the final version. However, continued efforts are still being made to improve it, and I'm getting regular updates. From a blueprint from the last UDS, I read that the plan was to ship no notification area and only indicators. For that, it was decided that nm-applet (backend: NetworkManager) should be ported to the appindicator library. Today I discovered that those efforts are going on and that an initial version is available for testing, from Matt Trudel's PPA (Natty only). So, my question, to whoever has the necessary info, is: wouldn't it be easier to join efforts and concentrate the work on just one version (probably the NetworkManager backend, as that's the official plan), instead of splitting those efforts and hampering both testing and development? Both indicators are being developed by Canonical engineers, and that really doesn't make much sense. Is any Canonical engineer willing to clarify this?

    Read the article

  • Creating a branch for every Sprint

    - by Martin Hinshelwood
    There are a lot of developers using version control these days, but a feature of version control called branching is very poorly understood and remains unused by most developers in favour of labels. Most developers think that branching is hard and complicated. It's not! What is hard and complicated is a bad branching strategy. Just like a bad software architecture, a bad branch architecture, or one that is not adhered to, can prove fatal to a project.

    When I was at Aggreko we had a fairly successful feature-branching strategy (although the developers hated it) that meant we could have multiple feature teams working at the same time without impacting each other. This had to be carefully orchestrated, as it was a Business Intelligence team and many of the BI artefacts do not lend themselves to merging.

    Today at SSW I am working on a Scrum team delivering a product that will be used by many hundreds of developers. SSW SQL Deploy takes much of the pain out of upgrading production databases when you are not using the database projects in Visual Studio. With Scrum, each Scrum team works for a fixed period of time on a single sprint. You can have one or more Scrum teams involved in delivering a product, but all the work must be merged and tested, ready to be shown to the Product Owner at the Sprint Review meeting at the end of the current sprint.

    So, what does this mean for a branching strategy? We have been using a "Main" (sometimes called "Trunk") line and doing a branch for each sprint. It's like feature branching, but with only ONE feature in operation at any one time, so there are no conflicts.

    Figure: DEV folder containing the development branches.

    I know that some folks advocate applying a label at the start of each sprint and then rolling back if you need to, but I have always preferred the security of a branch.

    Like:
    - Being able to create a release from Main that has Sprint 3 code even while Sprint 4 is being worked on.
    - Being sure I can always create a stable build on request.
    - Being able to guarantee a version (labels are not auditable).
    - Being able to abandon the sprint without having to delete the code (rare, I know, but it would be a mess if it happened).
    - Being able to see the flow of change sets through to a safe release.
    - It helps you find invalid dependencies when merging to Main, as there may be some file that is in everyone's sprint branch but never got checked in. (We had this at the merge of Sprint 2.)
    - If you are always operating this way as a standard, it makes it easier to add more Scrum teams in the future.
    - Muscle memory of this way of working.

    Don't like:
    - Additional DB space for the branches.
    - Baseless merging between sprint branches when changes are directly ported. Note: I do not think we will ever attempt this!
    - Maybe a bit tougher to see the history between sprint branches, since the changes go up through Main and down to another sprint branch. Note: What you would have to do is see which sprint the changes were made in and then check the history of the same file in that sprint - a little added complexity that you would have to deal with anyway with multiple teams.
    - Over time, you can end up with a lot of old unused sprint branches. Perhaps destroy with /keephistory can help in this case. Note: We ALWAYS delete the sprint branch after it has been merged into Main. That is the theory anyway, and as you can see from the images, Sprint 2 has already been deleted.

    Why take the chance of having a problem rolling back, or wanting to keep some of the code, when you can just abandon a branch and start a new one? It just seems easier and less painful to use a branch to me! What do you think?

    Technorati Tags: TFS, TFS2010, Software Development, ALM, Branching

    Read the article

  • Geographically limited / gradual release process

    - by daniel.sedlacek
    I am looking for more information on a gradual release process - that is, when you release a new version of a piece of software only to a certain set of end users, mostly geographically limited (or limited by the reach of a particular server). Google seems to be blind to this term, which indicates that's not what it's called. What's the name then?

    EDIT: An example of what I mean is when Facebook rolled out new image galleries: they were first visible to certain users only, then to the whole US, and then to the rest of the world.
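
    Whatever the name, the mechanics are usually a deterministic gate evaluated per user. A minimal sketch (all names invented for illustration), assuming each user has a stable ID and a known region:

        import java.util.Set;

        // Hypothetical gate for a geographically limited, gradual rollout:
        // a feature is enabled for whole regions first, then ramped up
        // elsewhere by deterministically bucketing users into 0..99.
        public class RolloutGate {
            private final Set<String> enabledRegions;
            private final int percentEnabledElsewhere;

            public RolloutGate(Set<String> enabledRegions, int percentEnabledElsewhere) {
                this.enabledRegions = enabledRegions;
                this.percentEnabledElsewhere = percentEnabledElsewhere;
            }

            public boolean isEnabled(String userId, String region) {
                if (enabledRegions.contains(region)) {
                    return true;
                }
                // Stable bucket per user, so the same user keeps getting the
                // same answer as the percentage is ramped up.
                int bucket = Math.floorMod(userId.hashCode(), 100);
                return bucket < percentEnabledElsewhere;
            }
        }

    Raising percentEnabledElsewhere from 0 to 100 over a few days gives exactly the Facebook-style rollout described in the question.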

    Read the article

  • rewrite rule if iphone?

    - by daniel Crabbe
    Hello there. I just need one URL on my site to check whether the client is a mobile device and then rewrite the URL accordingly. I want to rewrite /play-reel/miranda-bowen/playpeaches-and-cream to /mobile/play-reel/miranda-bowen/playpeaches-and-cream

        RewriteCond %{HTTP_USER_AGENT} ^.*iPhone.*$ [NC]
        # Note: the original had a stray backslash before the closing $,
        # which made the pattern require a literal "$" and never match.
        RewriteRule ^play-reel(.*)$ mobile/play-reel$1 [R=302,NC,L]
        RewriteRule ^mobile/play-reel/([a-zA-Z0-9\-]+)/([a-zA-Z0-9\-]+)$ play-reel-new-html5-02.php?director=$1&video=$2 [L]

    The 3rd line works, but I can't get the URL to change so that it gets picked up. Can anyone see what's wrong? There's no error. Best, Dan

    Read the article

  • AS3 Stage3D Mouse click problem?

    - by Martin K
    I have a problem with mouse interaction and Stage3D. The only way I have found to listen for mouse clicks and interact with Stage3D is to add a mouse event listener directly to the .stage. However, this means that any time I click anywhere in the Flash application the mouse click will fire, even if there is an overlaid 2D menu where the user intended to click. I.e. I have a 3D application running in the background, which listens for clicks, and I have some floating user-interface elements in the foreground; ideally, clicking a button in the foreground would NOT fire a click event that the Stage3D layer would register. Any idea how to solve this problem?

    Read the article
