Search Results

Search found 3459 results on 139 pages for 'if modified since'.

Page 31/139

  • Tutorial (or livedisk) for multiseat setup with dual-head display supporting openGL direct rendering?

    - by Tobias Kienzler
    I'm currently running Ubuntu 10.10 (64-bit); the GPU is an ATI Radeon HD 4290 onboard an ASUS M4A89GTD PRO/USB3 mainboard. The Ubuntu wiki doesn't cover 10.10 yet, and I also don't know if that method would support direct rendering. I've heard mentions of Xephyr and Xgl; what are the differences? Where should I start? I tried the MDM livedisk but that doesn't boot. I'm also willing to try a different distribution if necessary. Edit 1: Will the HowTo "A well performing, full eye-candy, accelerated pseudo-multiseat setup on a single dualhead GPU" for Ubuntu 9.04 still work? I'm afraid it omits how to set up two input devices and sound, however... Edit 2: http://multiseatonlinux.blogspot.com/2010/06/part-1-setting-up-base.html covers 10.10 but requires pinning gdm. Can that be circumvented? Also, how does that setup have to be modified for dual-head?

    Read the article

  • Casing of COM-Interop registered components

    - by Marko Apfel
    During a refactoring I realized that renaming components which are registered for COM Interop must be done carefully. In my case I changed the casing of XyzToolbar to XyzToolBar. On the development machine everything worked fine, but after installing the modified bits on the production machine, the toolbar was not visible. Running regasm against the new assemblies helped, and that was the hint: we use WiX to build the setup, and during setup development the heat tool extracted the needed registry keys. Those keys still contained the old name XyzToolbar. Refreshing the names corrected the problem.
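
    For context, the name that ends up in those registry keys comes from the managed type (or an explicit ProgId attribute), so a casing change ripples into every key that heat harvested earlier. A minimal C# sketch of where that name comes from, with a placeholder GUID and a hypothetical ProgId (neither taken from the article):

        using System.Runtime.InteropServices;

        [ComVisible(true)]
        [Guid("11111111-2222-3333-4444-555555555555")] // placeholder GUID, not the real component's
        [ProgId("Xyz.Addins.XyzToolBar")]              // hypothetical ProgId reflecting the new casing
        public class XyzToolBar
        {
            // toolbar implementation; after a rename like this, re-run heat (or regasm /regfile)
            // so the harvested registry keys pick up the new name instead of the stale XyzToolbar
        }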

    Read the article

  • Websites or tools similar to Ginwiz (mobile website creator)

    - by t3st
    I have a website which I want to make more mobile friendly (currently it's not). While searching about this I found this awesome website Ginwiz; my website can be modified into a mobile friendly site without any additional coding. But I find two disadvantages with this website (free version): 1) we can't add our own domain to it without upgrading (I don't have enough money to pay for it); 2) we can only "Advanced edit" one page. Do you know any website which is similar to Ginwiz but can use our domain address instead of theirs (in the free version)? Do you have any idea about any tools which could also be used to convert my website to a mobile website by trimming my current website easily?

    Read the article

  • Pong Collision Help in C# w/ XNA

    - by Ramses Brown
    Edit: My goal is to have it function like this: ball hits 1st quarter = rebounds higher (aka Y++); ball hits 2nd quarter = rebounds higher (using a random value); ball hits 3rd quarter = rebounds lower (using a random value); ball hits 4th quarter = rebounds lower (aka Y--). I'm currently using rectangle collision for my collision detection, and it's worked. Now I wish to expand it. Instead of simply detecting whether or not the paddle and ball intersect, I want it to determine which section of the paddle gets hit. I want it in 4 parts, each having a different reaction to impact. My first thought is to base it on the ball's Y position compared to the paddle's Y position, but since I want it in 4 parts, I don't know how to do that. So it would essentially be if (ball.Y > Paddle.Y) { PaddleSection1 = true; } except modified so that instead of being top half/bottom half, it's 1st quarter, etc.
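
    One way to get the four sections is to divide the paddle's height into quarters and index into it with the ball's centre, instead of chaining comparisons. A minimal C#/XNA sketch (paddleBounds, ballCenterY and the usage names below are hypothetical, not from the post):

        using System;
        using Microsoft.Xna.Framework;

        static class PaddleSections
        {
            // Returns 0..3 for the quarter of the paddle containing the ball's centre
            // (0 = top quarter, 3 = bottom quarter).
            public static int GetSection(Rectangle paddleBounds, float ballCenterY)
            {
                float quarterHeight = paddleBounds.Height / 4f;
                int section = (int)((ballCenterY - paddleBounds.Top) / quarterHeight);
                return Math.Max(0, Math.Min(3, section)); // clamp hits on the exact edges
            }
        }

        // Usage after the existing Rectangle.Intersects check:
        //   switch (PaddleSections.GetSection(paddleRect, ballRect.Center.Y))
        //   {
        //       case 0: /* 1st quarter: rebound higher */        break;
        //       case 1: /* 2nd quarter: higher, random amount */ break;
        //       case 2: /* 3rd quarter: lower, random amount */  break;
        //       case 3: /* 4th quarter: rebound lower */         break;
        //   }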

    Read the article

  • Can I use GPL software in a commercial application

    - by Petah
    I have 3 questions about the GPL here: If I use GPL software in my application, but don't modify or distribute it, do I have to release my application under the GPL? What if I modify some software that my application uses? Do I then have to release my application under the GPL, or can I just supply the modified software under the GPL's terms? And what if I use GPL software but don't modify it, can I distribute it with my application? My case in point: I have a PHP framework in which I use the GeSHi library to highlight some output. Because GeSHi is GPL, does my framework have to be GPL? Can I modify GeSHi for particular use cases of my application if I supply the modifications back to the GeSHi maintainers? Can I redistribute my framework with GeSHi?

    Read the article

  • synchronization web service methodologies or papers

    - by Grady Player
    I am building a web service (PHP+JSON) to sync with my iPhone app. The main goals are: backup; provide a web view for printing, sorting and manipulating; allow a group to sync up and down. I am aware of the logic problems with all of these items, i.e. if one person deletes something, do you persist this change to other users, collisions, etc. I am looking for any book or scholarly work, or even words of wisdom, that addresses the common issues: when to detect changes of data with hashes vs. modified dates, or a combination; how to address consolidation of sequential IDs originating on different client nodes (this can be sidestepped in my context, but it would be interesting); dealing with collisions (is there a universally safe way to do so?); general best practices; how to structure the actual data transaction (ask for the whole list, then detect changes...).
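
    On the hashes-versus-modified-dates point, one common compromise is to treat the timestamp only as a cheap pre-filter and let a content hash decide whether a record actually changed. A minimal sketch, written in C# purely for illustration (the service described above is PHP+JSON, and every name here is hypothetical):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public class SyncRecord
        {
            public string Id;
            public string ContentJson;   // serialised record content
            public DateTime ModifiedUtc; // client-reported modification time
            public string StoredHash;    // hash recorded at the last successful sync
        }

        public static class ChangeDetection
        {
            public static string Hash(string content)
            {
                using (var sha = SHA256.Create())
                {
                    var bytes = sha.ComputeHash(Encoding.UTF8.GetBytes(content));
                    return BitConverter.ToString(bytes).Replace("-", "");
                }
            }

            public static bool HasChanged(SyncRecord record, DateTime lastSyncUtc)
            {
                // Timestamps alone are unreliable (device clock skew), so only use them
                // to skip hashing records that clearly have not moved since the last sync.
                if (record.ModifiedUtc <= lastSyncUtc && record.StoredHash != null)
                    return false;
                return Hash(record.ContentJson) != record.StoredHash;
            }
        }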

    Read the article

  • Why does Ubuntu not use the kernels installed by automatic update?

    - by Guillaume Coté
    I used the script described in this question to list the kernels installed on the computer: How do I remove or hide old kernel versions, to clean up the boot menu? In the 3.2.0 series, I have 33, 34, 35, 36, 37, 38, 39, 41, 43, 44, 45 and 48. I would expect to be running 3.2.0-48 after a reboot, but I am still running 3.2.0-32. Why are the kernels installed by automatic update not used? [I am running 12.04 LTS.] /boot/grub/menu.lst was last modified on June 16, 2013; it contains 3.2.0-32-generic, 2.6.32-45-generic, 2.6.32-44-generic, 2.6.32-43-generic, a recovery entry for each of those, and a memtest. I would have expected the kernels between 3.2.0-33 and 3.2.0-48 to be in this file before 3.2.0-32.

    Read the article

  • Disaster Recovery Discovery

    - by Rodney Landrum
    Last weekend I joined several of my IT staff on a mission to perform a DR test in our remote CoLo center in a large South East city of the US. Can I be more obtuse? The goal was simple for me as the sole DBA in a throng of Windows, Storage, Network and SAN admins – restore the databases and make them work. There were 4 applications that back-ended to 7 SQL Server databases on 4 different SQL Server instances. We would maintain the original server names, but beyond that it was fair game. We had time to prepare so I was able to script out or otherwise automate the recovery process. I used sp_help_revlogin for three of the servers, a bit of a cheat actually because restoring the Master database on the target DR servers was the specified course of action according to the DR procedures (the caveat “IF REQUIRED” left it open to interpretation). I really wanted to avoid the step of restoring Master for a number of reasons but mainly because I did not want to deal with issues starting SQL Services afterward. Having to account for the location of TempDB and the version conflicts of the resource DBs were just two of the battles I chose not to fight. Not to mention other system database location problems that might arise and prevent SQL from starting. I was going to have to restore all of the user databases anyway, so I would not really gain any benefit, outside of logins, from taking the time to restore the source Master database over the newly installed one on the fresh server. What I wanted was the ability to restore the Master database as a user database, call it Master_Mine, from a backup on the source system and then use that restored database to script the SQL Logins and passwords on the DR systems. While I did not attempt this on the trip, the thought stuck in my mind and this past week I succeeded at scripting user accounts and passwords using only a restored copy of the Master database. Granted there were several challenges to overcome. Also, as is usual for any work like this, the usual disclaimers apply: This is not something that I would imagine Microsoft would condone or support and this was really only an experiment for me to learn if it was even possible. While I have tested the process with success, I do not know that I would use this technique in a documented procedure because future updates for SQL Server will render this technique non-functional. I thought at first, incorrectly of course, that I could use sp_help_revlogin on a restored copy of the master database I named Master_Mine. Since sp_help_revlogin uses system schema objects, sys.syslogins and sys.server_principals, this was not going to work because all results would come from the main Master database. To test this I added a SQL login via SSMS, backed up Master, restored it as Master_Mine, and then deleted the login. Even though I had deleted the login, the test account should presumably still be in the Master_Mine database, and I should be able to get to it and script out its creation with its password hash so that I would not need to know the password; any applications that stored that password would not have to be altered in the DR scenario. They would just work as expected. Once I realized that would not work I began looking deeper. Knowing that sys.syslogins and sys.server_principals are system views, their underlying code should be available with sp_helptext, right? It was. And this led me to discover the two tables sys.sysxlgns and sys.sysprivs, where the data I needed was stored.
These tables existed in both the real Master and the restored copy, Master_Mine. I used this information to tweak the sp_help_revlogin stored procedure to use these tables instead when creating the logins cursor. For the password hash, sp_help_revlogin uses the function LoginProperty(), which takes a user name and the option ‘passwordhash’ to return the hash for the user. Unfortunately, it requires the login to exist in the Master database. This would not work. So another slight modification I had to make was to pull the password hash itself (pwdhash from sys.sysxlgns) into the logins cursor and comment out the section of sp_help_revlogin that uses LoginProperty. Instead, I pass the pwdhash value as the variable @PWD_varbinary to the sp_hexadecimal stored procedure which is also created by and used within the code provided by Microsoft in the link above for sp_help_revlogin. The final challenge: sys.sysxlgns and sys.sysprivs are visible only within a Dedicated Administrator Connection (DAC) query window in SSMS or within SQLCMD. To open a DAC connection you have to be logged in on the SQL Server itself, via RDP in my case, and you preface the server name in the query connection with ADMIN:, so that the server connection looks like ADMIN:ServerName. From there you can create the modified stored procedure in the restored copy of a Master database from a source system under whatever name you like, and then run the modified stored procedure. I named my new stored procedure usp_help_revlogin_MyMaster. Upon execution I was happy to see the logins and password hashes that I needed to apply from the source Master database without having to restore over the new Master system database and without the need to access the original server (assuming it was down due to whatever disaster put it in that state). You will note that I am not providing full code samples here of the modifications. I will say that it was a slight bit of work and anyone who needed to do this for whatever reason could fairly easily roll their own solution with the information provided herein. My goal, as I said, was to prove that this could be done and provide another option if required to ease the burden of getting SQL Servers up and available in an emergency situation where alternatives may be more challenging or otherwise unavailable.

    Read the article

  • Compact DIY Office-in-a-Cart Packs Away Into a Closet

    - by Jason Fitzpatrick
    Many geeks know the pain of losing a home office when a new baby comes along, but not many of them go to such lengths to miniaturize their offices like this. With a little ingenuity an entire home office now fits inside a heavily modified IKEA work table. Ian, an IKEAHacker reader and Los Angeles area geek, explains the motivation for the build: I had to surrender my home office to make room for my new baby boy ;) I took an Ikea stainless steel kitchen “work table”, some Ikea computer tower desk trays, two steel tabletops, and two grated steel shelves to make an “office” that I could pack away into a closet. Hit up the link below to check out the full photo set; the build includes quite a few clever design choices like mounted monitors, a ventilation system, and more. Home Office In A Box [IKEAHacker]

    Read the article

  • How to automatically change volume level when un-/plugging headphones?

    - by htorque
    What I want is the following: When I plug in my headphones, I want the sound to be un-muted and set to a specific volume level. When I unplug my headphones, I want the sound to be muted (or set to a specific volume level). Setting the volume levels isn't the problem, but I somehow need to do this when un-/plugging the headphones, so I'm looking for a way to get notified of those events. I quickly found /proc/asound/card0/codec#0 to indicate whether headphones are plugged in or not, so I tried to monitor it using inotifywait and change the volume level based on modified notifications. Unfortunately inotifywait failed because proc isn't an ordinary filesystem. Are there other ways to do this (maybe via PulseAudio)? Audio device: Intel HDA, audio codec: Conexant CX20585.

    Read the article

  • Is making my own copyright licence safe?

    - by abcd
    I've seen various open source libraries (and actually assets as well) with a home-baked license in the following manner: SomeGuy's License: 1. You can use this code freely in commercial projects and modify it as you wish, but not sell it. 2. If you want to sell a modified version, drop me an email first, or give credit to me. Edit: The above example is ambiguous, so I am giving another one; I want to know if a 3-line license holds any ground: SomeGuy's License: 1. You can use this code in a commercial project as a 3rd party library. 2. You can't sell it as a derivative work. I know that such a license is not polished at all; for example, the Creative Commons set of licenses seem short but actually have some large legal text underneath them, but I wonder if at least some level of protection can be gained with a hobby license like that? My question is, could this hold any ground in court, or would the corporate lawyers of company X tear it apart?

    Read the article

  • Why fork a library for your own application?

    - by Mr. Shickadance
    Why should a programmer ever fork a library for inclusion in a widely used application? I ask this question because I was reading an article about why Chromium isn't packaged for many Linux distros like Fedora. Apparently it's largely due to the fact that Google has forked a number of libraries, modified them, and included them in Chromium. This has driven up the complexity of packaging releases. There are a number of reasons why this can be a bad thing, but how strong a case can you actually make for doing so in a large, widely used application such as Chromium? The original article: http://ostatic.com/blog/making-projects-easier-to-package-why-chromium-isnt-in-fedora Isn't it usually worth the effort to make slight modifications to your own program in order to use a popular and well-developed library?

    Read the article

  • How to Banish Duplicate Photos with VisiPic

    - by Jason Fitzpatrick
    You meant well, you intended to be a good file custodian, but somewhere along the way things got out of hand and you’ve got duplicate photos galore. Don’t be afraid to delete them and lose important photos; read on as we show you how to clean up safely. Deleting duplicate files, especially important ones like personal photos, makes a lot of people quite anxious (and rightfully so). Nobody wants to be the one to realize that they deleted all the photos of their child’s first birthday party during a hard drive purge gone wrong. In this tutorial we’re going to show you how to go beyond the limited reach of tools which simply compare file names and file sizes. Instead we’ll be using a program that combines that kind of comparison with actual image analysis to help you weed out not just perfect 1:1 file duplicates but also those piles of resized-for-email images, cropped images, and other modified images that might be cluttering up your hard drive.

    Read the article

  • Isolated Unit Tests and Fine Grained Failures

    - by Winston Ewert
    One of the reasons often given to write unit tests which mock out all dependencies, and are thus completely isolated, is to ensure that when a bug exists, only the unit tests for that bug will fail. (Obviously, an integration test may fail as well.) That way you can readily determine where the bug is. But I don't understand why this is a useful property. If my code were undergoing spontaneous failures, I could see why it's useful to readily identify the failure point. But if I have a failing test, it's either because I just wrote the test or because I just modified the code under test. In either case, I already know which unit contains the bug. What is the use of ensuring that a test only fails due to bugs in the unit under test? I don't see how it gives me any more precision in identifying the bug than I already had.

    Read the article

  • Cheerp -- C++ for web: advance or regression?

    - by Henrique Barcelos
    Recently I've run into Cheerp, a C++ to JavaScript compiler, which uses a modified version of clang to generate JavaScript code from C++ sources. That makes me wonder: why in the seven kingdoms would someone in their right mind do this? I mean: why would you take a language that is not designed for the web at all, that is far more convoluted and bureaucratic, write your code in it and then compile it into JavaScript? Can anybody see any advantages in doing so? We can surely discard performance as a reason, because in the end it generates pure JavaScript code. Is there anyone here who has real experience with this? P.S.: I'm not sure if this is an on-topic question, but this is the most general forum about programming that I could find in the StackExchange network. Edit: Although this seems like a subjective question, it is not. I am asking for reasons why this tool could be useful. I got interested at first, but started wondering why someone would use it.

    Read the article

  • Software license restricting commercial usage like CC BY-NC-SA

    - by Nick
    I want to distribute my software under a license like the Creative Commons Attribution - Non commercial - Share Alike license, i.e.: redistribution of source code and binaries is allowed freely; modified versions of the program have to be distributed under the same license; attribution to the original project has to be supplied; any kind of commercial usage is restricted. However, CC does not recommend using their licenses for software. Is there a software license of this kind I could apply? Preferably a public license, but as far as I know US law says that only an EULA can restrict usage of a received copy?

    Read the article

  • Unexpected advantage of Engineered Systems

    - by user12244672
    It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex systems used by customers wanting extreme database performance, app performance, and cost-saving server consolidation. A SPARC SuperCluster consists of 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general purpose storage, and ultra-fast Infiniband switches, each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes, a modified version of Solaris 11 in the ZFS Storage Appliance, a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells, another Linux derivative in the Infiniband switches, etc. It has an Infiniband data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful. In another, complex. The system is highly Engineered. But it's designed to run general purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, Operating System versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience as what the customer runs leverages our technical know-how and best practices and is what we've tested intensely within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or Systems Integrator had assembled such a complex system themselves from the constituent components. For example, there are myriad network protocols which could be used with Infiniband, myriad ways the components could be interconnected, myriad tunable settings, etc. But what has really surprised me - and I've been working in this area for 15 years now - is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities for sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with 3rd party storage, issues with 3rd party multi-pathing products, etc., are simply taken out of the equation. All those error opportunities for making an issue unique to a particular set-up, the "why aren't we seeing this on any other system?" type questions, the doubts, just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here are a couple of examples from the last month, one found in-house by my team, one found by a customer: Example 1: We found a node eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up). We quickly established that an enhancement in SRU12 enabled an 11gR2 process to query Infiniband's Subnet Manager, replacing a fallback mechanism it had used previously. Under abnormally heavy load, the query could return results which were misinterpreted, resulting in node eviction.
In several daily joint debugging sessions between the Solaris, Infiniband, and 11gR2 teams, the issue was fully root-caused, evaluated, and a fix agreed upon. That fix went back into all Solaris releases the following Monday. From initial issue discovery to the fix being put back into all Solaris releases was just 10 days. Example 2: A customer reported sporadic performance degradation. The reasons were unclear and the information sparse. The SPARC SuperCluster Engineered Systems support team, which comprises both SPARC/Solaris and Database/Exadata experts, worked to root-cause the issue. A number of contributing factors were discovered, including tunable parameters. An intense collaborative investigation between the engineering teams identified the root cause as a CPU-bound networking thread which was being starved of CPU cycles under extreme load. Workarounds were identified. Modifications have been put back into 11gR2 to alleviate the issue, and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side. The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented. The customer is now extremely happy with performance and robustness. Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers. This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers. If this had occurred in a "home grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue. But because it was an Engineered System, known, understood, and qualified by both the Solaris and Database teams, we were able to collaborate closely to identify cause and effect and expedite a solution for the customer. That is a key advantage of Engineered Systems which should not be underestimated. Indeed, the initial issue mitigation on the Database side, followed by the final fix on the Solaris side, highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams. It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.

    Read the article

  • How can I downgrade my version of Evolution to the one used in Ubuntu 11.04?

    - by Johnny
    I just upgraded, and like a few other users I had issues with Evolution email after the upgrade from 11.04 to 11.10 and then 12.04. I know to make backups, but in this case I stupidly didn't think that the program would be changed (Firefox wasn't modified at all), and so I failed to make a backup. Three days later I am still having issues, and recovering the emails is proving to be difficult, with only partial recovery or it not working at all. My question is: can I add a source to get the 11.04 version of Evolution, since that version was working fine and would know what to do with the current files (Inbox, Outbox, etc.)? I also noticed that Evolution's restore feature said it changed the way emails are handled, so it seems like a downgrade could put everything back to normal. Worst case scenario, I start over, but I wanted to try everything first. Thanks in advance! I'm also open to any suggestions for restoring the old email files to the current version of Evolution.

    Read the article

  • How can I find files quicker than find or locate?

    - by Chaitanya
    I have been using the find command to find files on my 1 TB hard disk, but it takes very long. Then I used locate, which proved to be faster with regular updates using updatedb. But the limitation of locate is that I cannot find files with a certain size or modified/created time. Can you suggest any ideas on how to find files faster, or alternatively how to pipe the output of the locate command in a way that all the other information like size, time, etc. can be displayed or redirected to a file?
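
    If neither find nor locate fits, another option is to script a single scan that records path, size and modification time, then filter or redirect that listing however you like. A minimal sketch, in C# only for illustration (the search root, size and age thresholds are assumptions, not from the question; unreadable directories are not handled):

        using System;
        using System.IO;
        using System.Linq;

        class FindBySizeAndTime
        {
            static void Main()
            {
                var root = new DirectoryInfo("/data");    // assumed search root
                var cutoff = DateTime.UtcNow.AddDays(-7); // "modified in the last week"
                long minSize = 100L * 1024 * 1024;        // "at least 100 MB"

                var matches = root.EnumerateFiles("*", SearchOption.AllDirectories)
                                  .Where(f => f.Length >= minSize && f.LastWriteTimeUtc >= cutoff)
                                  .Select(f => $"{f.Length}\t{f.LastWriteTimeUtc:u}\t{f.FullName}");

                File.WriteAllLines("matches.txt", matches); // redirect the results to a file
            }
        }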

    Read the article

  • How do I efficiently code both the client and server at the same time?

    - by liamzebedee
    I'm coding my game using a client-server model. When playing in singleplayer, the game starts a local server and interacts with it just like a remote server (multiplayer). I have done this to avoid coding separate singleplayer and multiplayer code. I have just started coding and have encountered a major problem. Currently I'm developing the game in Eclipse, having all the game classes organized into packages. Then, in my server code, I just use all the classes in the client packages. The problem is, these client classes have variables that are specific to rendering, which obviously wouldn't be performed on a server. Should I create modified versions of the client classes to use in the server? Or should I just modify the client classes with a boolean, to indicate if it's the client or the server using them? Are there any other options? I just had a thought about maybe using the server class as the core class, then extending it with rendering stuff?
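
    Rather than a boolean flag or duplicated classes, one option is to keep simulation state in shared classes and put rendering-only data in a client-side wrapper that the server never loads. A minimal sketch (in C# with XNA-style types purely for illustration; the project described above is Java, and all names are hypothetical):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Shared by client and server: simulation state and game rules only.
        public class Player
        {
            public float X, Y;

            public void Update(float dt)
            {
                // movement, collision, game rules - identical on client and server
            }
        }

        // Client only: wraps a shared entity and owns the rendering resources.
        public class PlayerView
        {
            private readonly Player player;
            private readonly Texture2D sprite;

            public PlayerView(Player player, Texture2D sprite)
            {
                this.player = player;
                this.sprite = sprite;
            }

            public void Draw(SpriteBatch batch)
            {
                batch.Draw(sprite, new Vector2(player.X, player.Y), Color.White);
            }
        }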

    Read the article

  • How to remove java.sql.BatchUpdateException in Grails? [closed]

    - by aman.nepid
    I have a domain like this:

        class BusinessOrganization {
            static hasMany = [organizationBusinessTypes: OrganizationBusinessType]
            String name
            String icon
            static constraints = {
                name(blank: false, unique: true)
                icon(unique: true)
            }
            String toString() { return "${name}" }
        }

    When I save some data for the first time it works fine, but the next time it shows this error:

        Error 500: Internal Server Error
        URI: /nLocatePortal/businessOrganization/save
        Class: java.sql.BatchUpdateException
        Message: Batch entry 0 insert into business_organization (version, icon, name, id) values ('0', '', 'dddd', '2') was aborted. Call getNextException to see the cause.

    Around line 24 of grails-app/controllers/com/nlocate/portal/BusinessOrganizationController.groovy:

        21:
        22:     def save() {
        23:         def businessOrganizationInstance = new BusinessOrganization(params)
        24:         if (!businessOrganizationInstance.save(flush: true)) {
        25:             render(view: "create", model: [businessOrganizationInstance: businessOrganizationInstance])
        26:             return
        27:         }

    Please can someone help me understand why this is happening? I am new to Grails. I have not modified the controllers, but I still get this error.

    Read the article

  • Finding files at more speed

    - by Chaitanya
    I have been using the find command to find files on my 1 TB hard disk, but it takes very long. Then I used locate, which proved to be faster with regular updates using updatedb. But the limitation of locate is that I cannot find files with a certain size or modified/created time. Can you suggest any ideas on how to find files faster, or alternatively how to pipe the output of the locate command in a way that all the other information like size, time, etc. can be displayed or redirected to a file?

    Read the article

  • SharePoint Client Object Model: Step One

    - by PeterBrunone
    I almost didn't make it out alive. I followed the instructions in every piece of sample code and every forum post by someone who had no idea why their client OM code wasn't working, and my code still wouldn't get past the page load. I kept getting "'Type' is undefined" errors when sp.core.js tried to register the SP namespace. As it turns out, you need the help of the default master page (or one like it) to get the object model loaded. Once I told my sample page to use the default master and modified everything accordingly, it hooked up and ran just fine. Now I can finally get some work done.

    Read the article

  • Most appropriate OSS license for infrastructure code

    - by Richard Szalay
    I'm looking into potentially releasing some infrastructure code (related to automated builds and deployments) as OSS, and I'm curious about how the various OSS licenses affect it. Specifically, the LGPL prevents the code itself (in part or whole) from being modified into a commercial product (which is what I'm after), but allows it to be "linked to" in the creation of commercial products (also OK). How does the "linked to" clause relate to infrastructure code, which is not deployed with the product itself? Would the application still be required to provide "appropriate legal notices" (which I'm not fussed over)? Would I be better off looking at the Eclipse Public License?

    Read the article

  • Unity dash search

    - by To Do
    I have my personal folders on a different partition, not in home, and I had a series of symlinks in my home folder pointing to the folders on the other partition. This was causing multiple entries in Dash search, so I modified my ~/.config/user-dirs.dirs file to point them to the folders on the second partition. The issue is that when I search for any one of these folders in Dash, I still get two entries: one pointing to the folder and another that points to the /home/username/Documents folder. If I click on this link I get a "Could not find /home/username/Documents" error. Why is this, and how do I delete these entries from Dash's records? If deleting records is not possible, is there a way to "reindex" the Dash search database?

    Read the article
