Search Results

Search found 20021 results on 801 pages for 'software engeneering learner'.

Page 548/801

  • Kinect will recognise your finger movement

    - by Boonei
    Sources inside Microsoft suggest the MS guys are trying to improve the motion controller. This could be a huge boost to gaming. There are quite a few things that we do with our fingers when playing games like driving, shooting and sports; if fingers are captured, then the Xbox can render a more realistic version of our avatar. Eurogamer has also suggested the same, according to their sources. It would be only (mostly) a software update and would not require a new camera, because the USB controller interface currently in place in Kinect can take in data at up to 35 MB/s, and the current utilization is only around half of that, so there is headroom to send more data. A little more tech data: Kinect currently transmits 320×240 pixels at 30 fps; if the device could capture and transfer 640×480 pixels, the higher resolution could detect finer movements than the current level of detection. Let's wait and watch! This article, titled "Kinect will recognise your finger movement", was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.
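    As a rough sanity check of those numbers, here is a back-of-the-envelope sketch in Python (the 2 bytes per depth pixel is our assumption; the article only gives the resolutions, the frame rate and the 35 MB/s ceiling):

        # Rough bandwidth estimate for the Kinect depth stream alone.
        # Assumption (not from the article): 16 bits = 2 bytes per depth pixel.
        BYTES_PER_PIXEL = 2
        FPS = 30
        USB_CEILING = 35_000_000  # ~35 MB/s, per the article

        def stream_bytes_per_sec(width, height):
            return width * height * BYTES_PER_PIXEL * FPS

        current = stream_bytes_per_sec(320, 240)   # ~4.6 MB/s
        upgraded = stream_bytes_per_sec(640, 480)  # ~18.4 MB/s
        print(f"320x240: {current / 1e6:.1f} MB/s")
        print(f"640x480: {upgraded / 1e6:.1f} MB/s (ceiling ~{USB_CEILING / 1e6:.0f} MB/s)")

    Even at 640×480 the depth stream alone stays well under the USB ceiling; the remaining headroom is shared with the colour and audio streams.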

    Read the article

  • Running programs in cache and registers

    - by OSX NINJA
    In my operating systems class we were shown a picture depicting a hierarchy of memory, starting from the most expensive and fastest at the top and the least expensive and slowest at the bottom. At the very top were registers, and underneath them was cache. The professor said that the best place to run programs is in cache. I was wondering why programs can't be run in registers? Also, how can a program load itself into cache? Isn't the cache something that's controlled by the CPU and works automatically without software control?
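    As an aside (our illustration, not part of the question), a tiny experiment hints at why the cache matters so much: traversing the same data in a cache-friendly order versus a cache-hostile order does the same work but with a very different memory access pattern. A sketch in Python; interpreter overhead blurs the effect, and in C the gap is far more dramatic:

        # Sum the same 2048x2048 matrix twice: row-major order walks memory
        # sequentially (cache-friendly); column-major jumps ~8 KB between
        # consecutive accesses (cache-hostile). Same work, different pattern.
        import array
        import time

        N = 2048
        matrix = [array.array("i", range(N)) for _ in range(N)]

        def traverse(row_major):
            start = time.perf_counter()
            total = 0
            for a in range(N):
                for b in range(N):
                    total += matrix[a][b] if row_major else matrix[b][a]
            return time.perf_counter() - start

        print(f"row-major:    {traverse(True):.2f}s")
        print(f"column-major: {traverse(False):.2f}s")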

    Read the article

  • ASUS EAH4670 vs. ASUS EAH4670 V2 -- What's the difference?

    - by roosteronacid
    I've been offered a used ASUS EAH4670/DI/1GD3 graphics card. I went price-scouting to see if the offer I was given was fair, and I found out that there's a similar card, labelled ASUS EAH4670/DI/1GD3 V2. The question is: what's the difference? What's with the V2? What does it mean? Is it just a BIOS upgrade that I can do myself? Updated driver software that I can download myself? Or is the card actually a better version--faster, more reliable (physical changes to the print), etc.?

    Read the article

  • Remote login/access on Windows

    - by acidzombie24
    Hi, I was wondering what software I can use to access my own and other machines remotely? I've used SSH, which is nice, but I don't know what the equivalent would be like on Windows. (I assume it's the same idea, but with a Windows console instead of a bash terminal?) Windows has a lot of applications that require a GUI and mouse clicks. Actually, I don't know of a single command-line SSH or VPN installer, not that I'm complaining (but it would be helpful if you could mention some). I haven't used a VPN; is this taking control of a user's screen/session? Or is it another instance/session, as if you had logged in as a different user on that box? What solutions are at my disposal for Windows 7?

    Read the article

  • Is my disk hot or not?

    - by adriangrigore
    I am trying to figure out the temperature of my dedicated server's hard disk. For this purpose, I've downloaded HDTune to monitor the S.M.A.R.T. status. The problem is that the current temperature reported for "C2 Temperature" is 73 degrees Celsius, but the little thermometer on top of the window shows 27 degrees Celsius. See this screenshot: http://screencast.com/t/OGJhZGIxND Another monitoring tool (Anfibia Reactor) shows similar behavior: disk temperature is around 30 degrees, but it says that the disk is too hot. So, is my disk hot or not? Since this is a dedicated server, I can't just open the case and put a thumb on it.
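    One plausible reading (our aside, not from the question): S.M.A.R.T. tools show two numbers for attribute C2/194 (Temperature_Celsius), a vendor-normalized "value" and a raw value, and only the raw value is in degrees Celsius; the "73" may simply be the normalized score sitting next to a real 27°C reading. A sketch that pulls the raw value with smartmontools (assumes smartctl is installed; the device name is a placeholder):

        # Sketch: read the raw value of SMART attribute 194 (0xC2) via smartctl.
        # Assumes smartmontools is installed; /dev/sda is an example device.
        import subprocess

        def disk_temperature(device="/dev/sda"):
            out = subprocess.run(["smartctl", "-A", device],
                                 capture_output=True, text=True, check=True).stdout
            for line in out.splitlines():
                fields = line.split(None, 9)  # 10 columns; raw value is last
                if fields and fields[0] == "194":
                    # RAW_VALUE holds degrees C, e.g. "27" or "27 (Min/Max 20/45)".
                    # The VALUE column is vendor-normalized and is NOT a temperature.
                    return int(fields[9].split()[0])
            return None

        print(disk_temperature())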

    Read the article

  • Gateway NE56R & mouse swipe gesture...how to disable?

    - by Anders
    Can anyone tell me how to shut off the swipe gesture mechanism on my computer? It's driving me crazy. I cannot use a single application without having my computer screen minimize every time I move my hand over the mouse following a mouse-click, pointer movement, etc. Having to maximize my spreadsheets, documents, and applications so much is undercutting my productivity. How some software engineer/inventor imagined that this mouse/gesture swipe gimmick would be helpful to computer users is inconceivable to me. It is a massive annoyance. I've found instructions online for disabling this obnoxious feature, but all the instructions involve messing with my registry, which I don't want to do. I will be SO GRATEFUL to any techie who can tell me how to disable this horrible mouse swipe mechanism without having to alter my registry! I'm using a Gateway NE56R Notebook, Windows 7 operating system, and an Inland USB Mouse (model no. 37535). Thank you in advance!

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done.

    'Pure' SOA involves the modeling of an organisation's processes, the so-called 'Top Down' approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom-up approach. This usually involves services that simply start popping up in the organisation, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, clearly have little relation to process-driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA.

    These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on 'Succeeding with SOA'. Organisations who focus too much on bottom-up development, or who waste too much time and money on top-down approaches that don't produce results, are often recommended to attempt an 'agile' (Erl) or 'middle-out' (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn't the aim of the project. If a project is started with the simple aim of 'Succeeding with SOA', then the reasons for the project's existence probably need to be questioned.

    There are a number of things we can be sure of:

    · An organisation will have a number of disparate IT systems
    · Some of these systems will have redundant data and functionality
    · Integration will give considerable ROI
    · Integration will already be under way
    · Services will already exist in the organisation
    · These services will be inconsistent in their implementation and in their governance

    So there are three goals here:

    1. Alignment between the business and IT
    2. Integration of disparate systems
    3. Management of services

    2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. 2 and 3 are ongoing, and they will continue happening even if a large project to produce a SOA metamodel is started. The result will then be an unstructured gaggle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with far-reaching consequences for all our current LOB systems. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never; that's the point of integration). We will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large.

    Accepting that an object can have more than one model over time, with perhaps more than one model in use at any given time, will help us realise the limitations of the top-down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.
    So, instead of trying to constantly force these goals into a straight line, why not let them happen in parallel, and manage the changes in each layer?

    If company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department. If company B's IT department recognises the problem of wild services springing up everywhere, and decides to do something about it by designing a platform and processes for the introduction of services, is this not a valid approach?

    With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models are guaranteed to change in the not so near future. To 'Succeed with SOA', company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.

    Could the problem be that there are two different problem domains? And that the whole concept of SOA, as it is being described by clever salespeople today, creates an example of the oft-dreaded 'tight coupling' between these two domains? Could it be that we have taken two large problem areas, and bundled the solution together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists?

    Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organisation. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren't focusing on their actual goals. If company A starts building services from incomplete models, without a game plan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A. Now we have two companies, who a short while ago had one problem each, and now have two problems each. This has happened because of a focus on 'Succeeding with SOA', rather than solving the problem at hand.

    This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organisation. But only if the organisation realises this and can develop such a strategy. This strategy cannot be bought in a box.

    Anyone who has worked with SOA for a while will be used to analysing the solutions to a problem and judging the solution's level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture). The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management and implementation of services.
    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other. This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top-down or bottom-up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling.

    How do we do this? The concept of service virtualization has been around for a while, and is instantly realisable in Microsoft's Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst's view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services, with their beautifully modeled interfaces, as 'SOA services', and the implementations as simple integration 'adapter' services providing an interface to a technical implementation. The Managed Services Engine also provides policy-based control over services, regardless of where they are deployed, simplifying the handling of security, logging, exception handling, etc.

    This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can't easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independent of each other. The link between business modeling and service implementation is not one that is obvious to many organisations, and requires a certain maturity to realise and drive forward. It is also completely possible that a company can benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI.

    Of course there are disadvantages to this. The biggest one is the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redevelopment of the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the services according to the model. However, if that approach worked we wouldn't be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together, forever bound to the model that only applied at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made.
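    To make the 'virtual service plus adapter' idea concrete, here is a minimal illustrative sketch (all names are invented; the Managed Services Engine itself does this declaratively through configuration and policy, not through hand-written code like this):

        # Illustrative sketch only: a "virtual service" exposing the modeled
        # contract, routing and transforming messages to a technical adapter.
        class CrmAdapterService:
            """Thin adapter over a proprietary CRM API (the 'implementation')."""
            def fetch_customer_record(self, legacy_id: str) -> dict:
                # Imagine a SOAP or database call here.
                return {"CUST_ID": legacy_id, "CUST_NM": "ACME Corp"}

        class CustomerVirtualService:
            """The analyst-facing contract: stable even if the adapter changes."""
            def __init__(self, adapter: CrmAdapterService):
                self._adapter = adapter

            def get_customer(self, customer_id: str) -> dict:
                raw = self._adapter.fetch_customer_record(customer_id)
                # Transformation layer: map implementation fields onto the
                # business-model message shape.
                return {"id": raw["CUST_ID"], "name": raw["CUST_NM"]}

        service = CustomerVirtualService(CrmAdapterService())
        print(service.get_customer("42"))

    The point of the shape is that the analyst-facing contract (get_customer) can stay stable while the adapter and its transformation evolve underneath it.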
    The choice is between something that will initially be of low quality but will work, or something that may well be impossible to achieve in most situations. In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force something on both that neither want, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?

    Read the article

  • Laptop Suggestions and Alternatives [closed]

    - by Pennf0lio
    Hi, I need some suggestions and alternatives for buying laptops. My girlfriend and I are planning to buy laptops. Her budget is $600 and mine is $850. We both love Apple products because of their elegant yet powerful design, but we can't yet afford Mac laptops, so we are looking for alternatives. One we've seen is the stylish Sony E-Series (white version); see it here. To give you an idea: the software she frequently uses is Photoshop, MS Office and Firefox. Mine is Adobe CS4, Cinema 4D, MS Office and Firefox. We are looking for a light, slick laptop with a clean design (very close to Mac notebooks). Thanks!

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information is available, and more, in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well... I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements, and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this, to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other.

    Requirements – The root of all knowledge

    Requirements are the things that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this, it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS.

    select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
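    For example, an illustrative historical query in that format (the field names are from the standard TFS work item schema; the date is made up) that lists Requirements as they stood on a past day:

        select [System.Id], [System.Title], [System.State]
        from WorkItems
        where [System.WorkItemType] = 'Requirement'
        order by [System.Id]
        asof '2010-06-01'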
    Figure: Saving Queries locally can be useful

    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team works from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. Another, better method is to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together

    Builds are the glue that helps us enable the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets, associate the Changesets with the Build, and update the "Integrated In" field of those Work Items.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement once it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states. The second is the ability to associate an MSTest test with the Test Case, thereby tracking the automated test. This is useful in the circumstance where you want to track a test, and its results, for a unit test designed to prove the existence of a Bug and then guard against its reappearance.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every unit test in your system; there are, however, many integration and regression tests that may be automated and that it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:

    Business Analysts manage Requirements and Change Requests
    Programmers manage Tasks and check in against Change Requests and Bugs
    Testers manage Bugs and Test Cases
    Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • links for 2011-02-17

    - by Bob Rhubart
    ArchitectACEs - Oracle Wiki Putting a Face on the Architect ACE The Oracle ACEs listed here have identified themselves, or have been identified by fellow ACEs, as software architects. As... (tags: ping.fm)
    Debra's thoughts on Oracle and User Groups: I did it - I did the Fusion UX Demo Oracle ACE Director Debra Lilley shares her experience in presenting a Fusion Applications demo at RMOUG. (tags: oracle otn oracleace)
    The Blas from Pas: JRuby Script to Monitor an Oracle WebLogic GridLink Data Source Remotely "In WebLogic 10.3.4 release, a single data source implementation has been introduced to support Oracle RAC cluster. To simplify and consolidate its support for Oracle RAC, WebLogic Server has provided a single data source that is enhanced to support the capabilities of Oracle RAC." (tags: oracle otn weblogic)
    Show Notes: Bob Hensle on IT Strategies from Oracle (ArchBeat) In Part 1 Bob Hensle talked about the various documents in the IT Strategies from Oracle library. In Part 2 (now available) Bob talks about how SOA and other factors are reflected in those documents. (tags: oracle otn entarch podcast)
    PODCAST: Examining the state of EA and findings of recent survey | Open Group Blog A transcript of a podcast panel discussion on the findings from a study on the current state and future direction of enterprise architecture, from The Open Group Conference, San Diego 2011. (tags: entarch opengroup)
    A Virtual Dilemma (Antony Reynolds' Blog) SOA author Antony Reynolds shares a solution. (tags: oracle otn soa)
    Webcast: Live Online Forum: Oracle Security - February 24, 9:00am PT Speakers: Mary Ann Davidson, Chief Security Officer, Oracle; Tom Kyte, Senior Technical Architect, Oracle; Jeff Margolies, Partner, Security Practice, Accenture; Vipin Samar, VP, Database Security Product Development, Oracle; and Nishant Kaushik, Chief Strategist, Identity and Access Management. (tags: oracle security)
    Obama banks on cloud, consolidation, to hold down IT costs | Computerworld NZ President Obama's fiscal 2012 budget proposal keeps IT spending almost flat compared to fiscal 2010, mostly due to the consolidation of data centers and a shift to cloud computing systems. (tags: ping.fm)

    Read the article

  • Show Notes: Debra Lilley on Fusion Applications

    - by Bob Rhubart
    The latest ArchBeat program features a three-part interview with Oracle ACE Director Debra Lilley (ACE Profile). Debra is Oracle Alliance Director at Fujitsu, Executive Member at the International Oracle Users Group Community (IOUG), Director and Deputy Chair at the UK Oracle Users Group (UKOUG), and a partner at Oracle UK. So yeah, she's connected. In this interview Debra talks about her connection to Oracle Fusion Applications.

    Listen to Part 1: Debra talks about her role as the Director and Deputy Chairperson of the UKOUG, and about the UKOUG development group's involvement in Oracle Fusion Applications.
    Listen to Part 2 (March 9): Debra shares her insight into what Fusion Applications will bring to Enterprise Architecture, and the importance of user experience in enterprise architecture.
    Listen to Part 3 (March 16): Debra discusses the need to close the gap between IT and business, and how business users should be able to use applications without having to think about the underlying technology.

    Debra is very active in social networks, so if you have questions or comments you can connect with her via the following:
    Blog: http://www.debrasoracle.blogspot.com/
    Twitter: @debralilley
    LinkedIn: http://uk.linkedin.com/pub/debra-lilley/1/438/bba
    And if you'd like to learn more about Oracle Fusion Applications: http://www.oracle.com/us/products/applications/fusion/index.html

    Coming Soon
    Dr. Frank Munz, author of Middleware and Cloud Computing: Oracle Fusion Middleware on Amazon Web Services and Rackspace Cloud.
    Andy MacMillan (VP, Enterprise 2.0, Oracle) on the socialization of the enterprise.
    A panel discussion on "Who gets to be a software architect?"
    Stay tuned: RSS

    Technorati Tags: oracle,fusion applications,enterprise architecture,IOUG,UKOUG del.icio.us Tags: oracle,fusion applications,enterprise architecture,IOUG,UKOUG

    Read the article

  • Video from VHS via USB capture device on Linux

    - by Juanlu001
    I want to transfer video from a VHS tape to my computer with Arch Linux, using a USB capture device that comes with Roxio Easy VHS to DVD, which I recently bought. I tried to plug in the device, and it was properly recognized. From /var/log/messages.log:

        em28xx #0: Identified as Pinnacle Dazzle DVC 90/100/101/107 / Kaiser Baas Video to DVD maker / Kworld DVD Maker 2 (card=9)
        saa7115 3-0025: saa7113 found (1f7113d0e100000) @ 0x4a (em28xx #0)
        em28xx #0: Config register raw data: 0x50
        em28xx #0: AC97 vendor ID = 0xffffffff
        em28xx #0: AC97 features = 0x6a90
        em28xx #0: Empia 202 AC97 audio processor detected
        em28xx #0: v4l2 driver version 0.1.2
        em28xx #0: V4L2 video device registered as video1
        em28xx #0: V4L2 VBI device registered as vbi0

    Is there any software that enables me to capture video coming from this device? I don't mind whether it is console or GUI based.

    Read the article

  • Can't ssh to instance

    - by megas
    I have a Linode instance, and I was successfully connecting to it via ssh. But I decided to rebuild my instance, and now I cannot connect to that instance via ssh. The Linode works correctly, because I can get access via Lish (Linode ssh). I've tried to clear known_hosts with:

        ssh-keygen -R 212.71.xxx.xx

    But I'm still getting this message:

        ssh [email protected] -v
        OpenSSH_5.9p1 Debian-5ubuntu1.1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Connecting to 212.71.238.74 [212.71.238.74] port 22.
        debug1: Connection established.
        debug1: identity file /home/megas/.ssh/id_rsa type 1
        debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
        debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
        debug1: identity file /home/megas/.ssh/id_rsa-cert type -1
        debug1: identity file /home/megas/.ssh/id_dsa type -1
        debug1: identity file /home/megas/.ssh/id_dsa-cert type -1
        debug1: identity file /home/megas/.ssh/id_ecdsa type -1
        debug1: identity file /home/megas/.ssh/id_ecdsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1.1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1.1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1.1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: sending SSH2_MSG_KEX_ECDH_INIT
        debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
        debug1: Server host key: ECDSA c5:c3:a7:c0:5a:25:a1:64:c4:04:0c:42:bb:46:f6:96
        debug1: Host '212.71.238.74' is known and matches the ECDSA host key.
        debug1: Found key in /home/megas/.ssh/known_hosts:1
        debug1: ssh_ecdsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /home/megas/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,password
        debug1: Trying private key: /home/megas/.ssh/id_dsa
        debug1: Trying private key: /home/megas/.ssh/id_ecdsa
        debug1: No more authentication methods to try.
        Permission denied (publickey,password).

    How do I resolve this problem? Thanks

    Read the article

  • Compiz Fusion And Unity Tool

    - by David Michael Rice
    I tried to install Compiz Fusion on Ubuntu Studio 13 and all I got was this:

        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         compiz-fusion-plugins-extra:i386 : Depends: compiz-core:i386 but it is not going to be installed
                                            Depends: compiz-fusion-plugins-main:i386 but it is not going to be installed
                                            Recommends: compizconfig-settings-manager:i386 but it is not going to be installed
         compiz-gnome : Depends: libcompizconfig0 but it is not going to be installed
         compizconfig-settings-manager : Depends: python-compizconfig (>= 1:0.9.9~daily13.04.18.1~13.04-0ubuntu1) but it is not going to be installed
         libcompizconfig-backend-gconf:i386 : Depends: libcompizconfig0:i386 but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.
        david@Nebuchadnezzar:~$

    Also, I installed Unity Tweak Tool through the Software Center, and now I cannot find it at all, not even in search apps. I feel like every time I try to install something I have to jump through flaming hoops...

    Read the article

  • Providing a public Web-Hosting service on MS Windows - advice and resources

    - by crgnz
    Are there any resources offering advice on how to set up a Microsoft-based web-hosting service? I currently offer LAMP hosting with cPanel, but there is some demand for IIS & SQL Server. As far as I can tell, MS Windows Web Server 2008 R2 edition allows unlimited IIS connections, and a per-processor license for MS SQL Server Web Edition 2008 also permits unlimited connections. Where I am falling down is that I can't figure out how to get "unlimited" Active Directory users. I can't use the 2008 R2 Web Server edition for AD, so I will need the 2008 R2 Standard edition, I think. Does Microsoft have a provision for using AD in an ISP scenario? I am looking at using the cPanel Enkompas system to manage the Windows software, and Enkompas requires AD for user authentication. Any advice would be greatly appreciated!

    Read the article

  • How to share problem solving knowledge in a multiteam group?

    - by jonathan
    I've been working in multi-team groups for as long as I've been a web developer. For me, a team can be a lone soldier or several people; generally a company will have multiple teams working on different projects, and once a project is out in the wild, any team can perform the maintenance. This is only part of the picture, since I'm not talking only about project-wise knowledge but also "craft-wise" knowledge, but it shows how I'm used to working. So: since we work in modularised teams, sometimes I feel the teams are too tightly enclosed in their projects. I've seen cases where, after an hour of discussion, someone asked the question aloud and a totally unrelated person answered it in a much simpler fashion. The problem is not so simple to solve, as people tend not to be available all the time; also, sometimes people can't afford the time to go through a problem with the asker, but could solve it alone. I've thought about software-based solutions, something along the lines of SE, but I'd like to know other programmers' opinions on the subject. EDIT: I don't know if this is a Wikipedia complex, but I feel that wikis don't encourage users to actually ask questions, but rather to write articles; and sometimes we don't know what knowledge we need before needing it.

    Read the article

  • Preparing Automatic Repair appears out of nowhere & takes forever?

    - by Jörg B.
    I shut down my work machine (Windows 8.1 Pro) without any errors appearing (and without installing any new software, and especially no new drivers, in the last couple of days) yesterday evening, turned it on this morning, and it automatically started into "Preparing Automatic Repair" mode. First off: is there any way to get more information about WHY this is appearing out of the blue? And secondly: how long is this usually supposed to take? This is a rather beefy machine (Intel 3770K / 4 cores at 3.9 GHz, Intel SSDs only and 32 GB of RAM), but this screen has been "loading" for almost 1.5 hours now. Is there any sort of debug/verbose mode that would give me ANY indication of what's going on?

    Read the article

  • Mac OS needs Windows Live Writer – badly!

    - by digitaldias
    I recently bought a new MacBook Pro (the 13” one) to dive into a new world of programming challenges, as well as to get a more powerful netbook than the Packard Bell Dot I've been using since last year. I've had immense pleasure using the netbook format and its small size in meetings (taking notes with XMind), surfing "anywhere", and, of course, blogging with Windows Live Writer. So far the Mac is holding up; it's sleek, responsive, and I've even begun looking at coding in Objective-C with it. But in one arena it is severely lacking: blogging software. There is nothing that even comes close to Live Writer for getting your blog posts out. The few blogging applications that do exist on the Mac both look and feel medieval in comparison, AND some even cost money! It looks like some Mac users actually install a virtual machine on their Mac to run Windows XP just so they can use WLW. I'm not that extreme; instead, I'm hoping that the WLW team will write its awesome application as a Silverlight 4 app. That way, it would run on Mac and Windows (as a desktop app). I wonder if it will ever happen though… PS: The image is of me; I took it with the built-in camera on the Mac and emailed it to the Windows PC that I am writing on :)

    Read the article

  • Anyone tried dd'ing RAID members?

    - by DusteD
    I want to replace all the disks in a 10-disk RAID 6 (Linux software RAID). I could do this by pulling a disk, letting the array rebuild, rinse, repeat. But this would take a very long time and cause 10 rebuilds, which would most likely stress all 10 disks much more than simply reading each disk through once. My question is thus: could I just shut down the array, dd each old disk to a new disk, and then start the array with the 10 new disks? In an ideal world, I would build another server and just copy the data via network, but this is not an ideal world.

    Read the article

  • Tips for switching jobs and moving into web based programming?

    - by JerryC
    I graduated in 2006 with a computer science degree and got solid grades (3.5 overall, 3.8 in my major). For the past 4.5 years I've been working as a Software Engineer doing primarily rich-client development. Most of my experience is with Java, Swing and C++. I've done a lot of network programming and I have acquired some skill working in and debugging distributed environments. I would like to switch jobs and move into a role where I can get exposure to some new technologies and frameworks. I would like to move into a more web-development role, but I find my lack of web development experience is hurting me. 90% of the jobs I see advertised are looking for one of two skill sets: 1) the stereotypical server-side Java web developer, with experience in Spring, Hibernate, J2EE, etc.; or 2) the stereotypical front-end web developer, with experience in JavaScript, jQuery, HTML5, GWT, CSS, etc. I find most of these companies are looking really specifically for this experience, and they are not willing to take on good programmers with solid CS fundamentals who lack experience with this stuff. I would love to get a job doing stuff like this, but have my skills become out of date and unmarketable? Any opinions on ways to sell myself to help get a new position?

    Read the article

  • Work Item Keyboard Shortcuts, Resolving Mercurial Work Items, WikiPlex 2.0

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed the latest version of the CodePlex software yesterday.

    Keyboard Shortcuts

    With this release, we have added a set of keyboard shortcuts for common tasks in the Issue Tracker. This feature is a popular request in the CodePlex Issue Tracker. The CodePlex team visits the issue tracker frequently when researching and considering new features. If you haven't visited it recently, please take a few moments to log an idea or vote for the features you would most like to see implemented on CodePlex. To view the available shortcuts, type ? from any page within the issue tracker to see this help dialog. You can see what each shortcut invokes below. Please give us feedback on this feature and let us know what additional shortcuts would be useful.

    Resolve Work Items When Pushing Mercurial Changes

    Another feature we added is the ability to resolve work items when pushing changes to your Mercurial repository, which has been available to our TFS / SVN users for quite some time. The required format is identical to the SVN format listed here. When committing your changes locally, add "Work Items: Id, AnotherId" to your commit message. When you push, CodePlex will detect this comment, add the commit message to the work item, and resolve it.

    WikiPlex Goes 2.0!

    CodePlex continues to improve WikiPlex, our open source wiki engine. WikiPlex hit another major milestone today with the release of version 2.0! We have added several new features, including: interleaving ordered and unordered lists, specifying the height and width for images, a multi-line indentation macro, and a restructuring of some of the API. Visit Matt's announcement for more information on the release or grab the binaries via NuGet or CodePlex.
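    For example (an illustrative commit with a made-up work item id), committing locally with hg commit -m "Fixed null reference in parser. Work Items: 1234" and then running hg push against your CodePlex repository would link the commit to work item 1234 and resolve it.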

    Read the article

  • Satellites now think for themselves, Skynet becomes self-aware

    - by iamjames
    From the movies-become-reality department comes this little gem: New control system will allow satellites to 'think for themselves' "...engineers from the University of Southampton have developed what they say is the world's first control system for programing satellites to think for themselves. It's a cognitive software agent called sysbrain, and it allows satellites to read English-language technical documents, which in turn instruct the satellites on how to do things such as autonomously identifying and avoiding obstacles." Gee, why does this sound so incredibly familiar? Skynet (Terminator) "In the Terminator storyline, Skynet was originally installed into the U.S. military mainframe to control the national arsenal on August 4, 1997. On August 29 it gained self-awareness[1] and the panicking operators, realizing the extent of its abilities, attempted to shut it down. Skynet perceived the attempt to deactivate it as an attack and came to the conclusion that all of humanity would attempt to destroy it. To defend itself, it determined that humanity should be exterminated." Alright, so it's not in control of the national arsenal, but it's only a matter of time before one of these satellites reads Snooki's book and convinces a military satellite that we need to be exterminated.

    Read the article

  • Weird internet connection behavior

    - by ShadowBroker
    I'm having a strange problem with my internet connection. At the moment my ISP is Fastweb (an Italian provider), and I've noticed that sometimes I can't load some sites; then all I have to do is wait 5 minutes and I can load the site again. Some other times, instead, images won't load on certain sites (this always happens with 500px.com, and rarely with Facebook). I've noticed that if I use VPN software like Privitize VPN or Spotflux, everything works like a charm. Is something wrong with my internet configuration? Thank you.

    Read the article

  • General questions regarding open-source licensing

    - by ndg
    I'm looking to release an open-source iOS software project, but I'm very new to the licensing side of things. While I'm aware that the majority of answers here will not be from lawyers, I'd appreciate it if anyone could steer me in the right direction. With the exception of the following requirements, I'm happy for developers to largely do whatever they want with the project's source code. I'm not interested in any copyleft licensing schemes, and while I'd like to encourage attribution in derivative works, it is not required. As such, my requirements are as follows: Original source can be distributed and redistributed (verbatim) both commercially and non-commercially, as long as the original copyright information, website link and license are maintained. I wish to retain rights to any of the multimedia distributed as part of the project (sound effects, graphics, logo marks, etc.). Such assets will be included to allow other developers to easily execute the project, but cannot be redistributed in any manner. I wish to retain rights to the application's name and branding. Further to selecting an applicable license, I have the following questions: The project makes use of a number of third-party libraries (all licensed under variants of the MIT license). I've included the individual licenses within the source (and application) and believe I've met all requirements expressed in these licenses, but is there anything else that needs to be done before distributing them as part of my open-source project? Also included in my project is a single proprietary, closed-source library that's used to power a small part of the application. I'm obviously unable to include this in the source release, but what's the best way of handling this? Should I simply weak-link the project and exclude it entirely from the Git project?

    Read the article

  • Integrating with Oracle Fusion Applications: Discovering Integration Artifacts

    - by Lionel Dubreuil
    Oracle Enterprise Repository serves as the core element of the Oracle SOA Governance solution. An industry-leading metadata repository, Oracle Enterprise Repository provides a solid foundation for delivering governance throughout the service-oriented architecture (SOA) lifecycle by acting as the single source of truth for information surrounding SOA assets and their dependencies. For Fusion Applications, the use of OER has been extended to include other integration asset types such as interface tables, and other technical information such as data models, tables, views, lookups, profile options, et cetera. E-Business Suite users familiar with iRepository or eTRM will recognize the functionality in Fusion Applications OER. Oracle Enterprise Repository for Fusion Applications provides a common catalog of technical information, searchable using many different mechanisms. Customers can locate technical information by the name, description or keyword of the information they are looking for. They can also search by the type of asset they are trying to locate and/or where the asset sits in the product taxonomy. They can also see how the asset dances in the choreography of some illustrative co-existence scenarios. These scenarios are laid out as both functional flow diagrams and technical interaction diagrams. Rajesh Raheja, software architect at Oracle, has recently posted an article on this topic: visibility and control are the key tenets of SOA governance, and the first step in integrating with Oracle Fusion Applications is to find out what integration options are available. Oracle Enterprise Repository, an industry-leading metadata repository, provides this visibility. You can find his full blog post here.

    Read the article
