Search Results

Search found 14125 results on 565 pages for 'apache commons io'.


  • How does I/O work for large graph databases?

    - by tjb1982
    I should preface this by saying that I'm mostly a front-end web developer, trained as a musician, but over the past few years I've been getting more and more into computer science. One idea I had for a fun toy project to learn about data structures and C programming was to design and implement my own very simple database that would manage an adjacency list of posts. I don't want SQL (maybe I'll write my own query language? I'm just having fun). It should support ACID, and it should be capable of storing, say, 1 TB.

    With that in mind, I was trying to think of how a database even stores data, without regard to data structures necessarily. I'm working on Linux, and I've read that in that world "everything is a file," including hardware (like /dev/*), so that obviously has to apply to a database too, and it clearly does: whether it's MySQL, PostgreSQL, or Neo4j, the database itself is a collection of files you can see in the filesystem. That said, there would come a point of scale where loading the entire database into primary memory just wouldn't work, so it doesn't make sense to design with that mindset (I assume). However, reading from secondary memory is much slower, and regardless, some portion of the database has to be in primary memory for you to be able to do anything with it.

    I read this post: Why use a database instead of just saving your data to disk? I found it difficult to understand how other databases, like SQLite or Neo4j, read and write from secondary memory and are still very fast (faster, it would seem, than simply writing files to the filesystem, as that question suggests). The key seems to be indexing. But even indexes need to be stored in secondary memory. They are inherently smaller than the database itself, but the indexes of a very large database might be prohibitively large too.

    So my question is: how is I/O generally done in a large database like the one described above, storing a big adjacency list at 1 TB or more? If indexing is more or less the answer, how exactly does indexing work, and what data structures should be involved?
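
    For illustration, a minimal sketch (not from the post) of the core idea an index serves: keep a map from key to file offset so a lookup is one seek instead of a full scan. Real databases persist the index itself in pages, typically as a B-tree, so only the needed index nodes are ever read into primary memory. The names and record format below are hypothetical.

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.nio.charset.StandardCharsets;
        import java.util.HashMap;
        import java.util.Map;

        // Toy key-value store: append-only record file plus an in-memory
        // key -> file-offset index. get() seeks directly to the record.
        public class ToyStore implements AutoCloseable {
            private final RandomAccessFile file;
            private final Map<String, Long> index = new HashMap<>();

            public ToyStore(String path) throws IOException {
                this.file = new RandomAccessFile(path, "rw");
            }

            public void put(String key, String value) throws IOException {
                long offset = file.length();
                file.seek(offset);
                byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
                file.writeInt(bytes.length);   // record header: payload size
                file.write(bytes);             // payload
                index.put(key, offset);        // remember where the record lives
            }

            public String get(String key) throws IOException {
                Long offset = index.get(key);
                if (offset == null) return null;
                file.seek(offset);             // one seek, no scan of the whole file
                byte[] bytes = new byte[file.readInt()];
                file.readFully(bytes);
                return new String(bytes, StandardCharsets.UTF_8);
            }

            @Override
            public void close() throws IOException { file.close(); }
        }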


  • MacOSX VirtualHost: "You don't have permission to access / on this server" error

    - by David Casillas
    The Apache installation on Mac OS X is running OK. I have tried to create a VirtualHost called test.local, but as soon as I uncomment the line Include /private/etc/apache2/extra/httpd-vhosts.conf in /private/etc/apache2/httpd.conf and try to access the test.local virtual host, I get the error "You don't have permission to access / on this server". The VirtualHost configuration in /private/etc/apache2/extra/httpd-vhosts.conf is:

        <VirtualHost *:80>
            ServerName test.local
            DocumentRoot "/Users/username/Sites/Test/public"
            <Directory "/Users/username/Sites/Test/public">
                Options Indexes FollowSymLinks Includes
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I have also added the virtual host to the hosts file: 127.0.0.1 test.local


  • responsibility for storage

    - by Stefano Borini
    A colleague and I were brainstorming about where to put the responsibility for an object to store itself on disk in our own file format. There are basically two choices:

        object.store(file)
        fileformatWriter.store(object)

    The first gives the responsibility for serialization to disk to the object itself. This is similar to the approach used by Python's pickle. The second puts the representation responsibility on a file-format writer object; the data object is just a plain data container (possibly with additional methods not relevant to storage). We agreed on the second approach, because it separates the writing logic from the generic data. We also have cases of objects implementing complex logic that need to store information while the logic is in progress. For those cases, the fileformatwriter object can be passed in and used as a delegate, with storage operations called on it. With the first pattern, the complex-logic object would instead accept the raw file and implement the writing logic itself. The first method, however, has the advantage that the object knows how to write and read itself from any file containing it, which may also be convenient. I would like to hear your opinion before starting a rather complex refactoring.
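
    For illustration, the two shapes side by side in a minimal Java sketch (class names and the line-based format are hypothetical):

        import java.io.IOException;
        import java.io.Writer;

        // Option 1: the object serializes itself (pickle-style).
        class DocumentV1 {
            private final String title;
            DocumentV1(String title) { this.title = title; }

            void store(Writer out) throws IOException {  // object owns the format
                out.write("title=" + title + "\n");
            }
        }

        // Option 2: a dedicated writer owns the format; the object is plain data.
        class DocumentV2 {
            private final String title;
            DocumentV2(String title) { this.title = title; }
            String getTitle() { return title; }
        }

        class FileFormatWriter {
            void store(DocumentV2 doc, Writer out) throws IOException {
                out.write("title=" + doc.getTitle() + "\n"); // format logic lives here
            }
        }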


  • Symbolic link not allowed or link target not accessible: /var/www on Ubuntu 11.04

    - by Jamie Hutber
    I am getting a 403 when I access http://mayfieldafc.local/. Looking in the Apache logs, I see: [Wed Nov 16 12:32:59 2011] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www. I have what I believe to be the correct permissions set on /var/www: hutber (my user) can create and delete files, and I can also execute as program on this folder. In Mayfield's vhost it's:

        <Directory /var/www/mayfieldafc/docroot>
            Options +FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    I am pulling my hair out not being able to work on my sites with my work Ubuntu install. I know of nothing else that could be affecting this. Any ideas?


  • How can I recursively change the permissions of files and directories?

    - by Nikhil
    I have Ubuntu installed on my local computer with Apache / PHP / MySQL. I have a directory at /var/www, inside which I have several of my ongoing projects. I also work with open-source software (Drupal, Magento, SugarCRM). The problem I am facing is changing file permissions from the terminal. Sometimes I need to change the permissions of an entire folder and all of its sub-folders and files, and I have to change each one individually using sudo chmod 777 foldername. How can I do this recursively? Also, why do I always have to use 777? I tried 755 for folders and 644 for files, but that won't work.
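
    For reference, a sketch of the usual commands (the path is hypothetical): chmod's -R flag applies a mode recursively, and find can give directories and files different modes:

        # one mode for everything, recursively:
        sudo chmod -R 755 /var/www/myproject

        # or 755 for directories and 644 for files, separately:
        sudo find /var/www/myproject -type d -exec chmod 755 {} +
        sudo find /var/www/myproject -type f -exec chmod 644 {} +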


  • Apache2 Allowing Unwanted Proxy Requests

    - by Kevin
    I'm not sure if this is the right place, but this is fairly urgent. I have completely removed all traces of mod_proxy and its related proxy modules, yet the Apache server continues to allow proxy requests. I have restarted numerous times, and I have shut the server down until I can find an answer. I've noticed lots of requests from IPs in and around China to external sites, such as free movie downloads. I'd like to prevent this from happening. I'd be grateful for any help.


  • How are a Java ByteBuffer's limit and position variables updated?

    - by Dummy Derp
    There are two scenarios: writing and reading.

    Writing: Whenever I write something to the ByteBuffer by calling its put(byte[]) method, the position variable is incremented by the size of the byte[], and the limit stays at the maximum. If, however, I put the data in a view buffer, then I have to calculate and update the position manually. Before I call the channel's write(ByteBuffer) method to write something, I have to flip() the ByteBuffer so that position points to zero and limit points to the last byte that was written to the ByteBuffer.

    Reading: Whenever I call the channel's read(ByteBuffer) method to read something, the position variable stays at 0, and the ByteBuffer's limit variable points to the last byte that was read. So, if the ByteBuffer is smaller than the file being read, the limit variable is pushed to the maximum. This means that the ByteBuffer is already flipped and I can proceed to extracting the values from it.

    Please correct me where I am wrong :)
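
    For reference, a small runnable demonstration of the write-side bookkeeping described above (allocate, put, then flip):

        import java.nio.ByteBuffer;

        public class BufferDemo {
            public static void main(String[] args) {
                ByteBuffer buf = ByteBuffer.allocate(16);
                // freshly allocated: position=0, limit=capacity=16
                System.out.println(buf.position() + " / " + buf.limit()); // 0 / 16

                buf.put(new byte[]{1, 2, 3, 4});
                // put() advanced position by the array length; limit unchanged
                System.out.println(buf.position() + " / " + buf.limit()); // 4 / 16

                buf.flip();
                // flip(): limit = old position, position = 0, ready for a write()
                System.out.println(buf.position() + " / " + buf.limit()); // 0 / 4
            }
        }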


  • CDN for site with target market in Australia

    - by Jae Choi
    I was told that http://www.edgecast.com/ is a very good CDN provider for the Australian market. I have a cloud server based in Sydney, Australia, but I was wondering whether it's even worth getting a CDN, as my target market is only Australia-based as well. Would I see any performance gain if I used the above CDN service, or would this be more for sites that target international visitors? I have Apache installed on our server, but I would like to install Nginx. Would I see more performance gain from that change than from a CDN, or should I go for both, since they're both beneficial?


  • Fork a dead SVN-based project on GitHub

    - by Quinn Bailey
    I previously asked this on Stack Overflow, but it was closed, I believe because Programmers is a more appropriate venue for this question. I have done some work on the SVN Importer project (Apache license), which appears to be effectively dead (no published changes in 5 years). I have a login to their svn server but do not have commit rights. At any rate, I'd like to convert this project to Git and push my own changes to GitHub. The GitHub site suggests the svn2git tool for converting svn projects to Git, so I was planning to convert the SVN repository to Git, add my changes, and then push the Git repository to GitHub. I'm wondering: what are the legal requirements and common conventions for this process? Is it acceptable to clone the entire history of the project and move it to GitHub? Also, even though this is essentially a dead project, once I've converted the repository to Git, should I put all of my commits on a non-master branch, or is it acceptable to use master in this case?
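
    For reference, a sketch of the conversion flow under discussion (the repository URL and GitHub remote are hypothetical; assumes the svn2git gem and a standard trunk/branches/tags layout):

        # convert the SVN history into a local Git repository:
        svn2git http://svn.example.com/svnimporter

        # then publish the converted history, branches and tags to GitHub:
        git remote add origin git@github.com:youruser/svn-importer.git
        git push origin --all
        git push origin --tags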


  • My server suddenly crashes every 2 days or so and my programmer has no idea; please help me find the cause. Here is the top output

    - by Alex
    Every couple of days my server suddenly crashes and I must request a hardware reset at the data center to get it running again. Today I came back to my shell and saw the server was dead with "top" running on it; see below for the "top" output right before the crash. I opened /var/log/messages and scrolled to the reboot time, and I see nothing, no errors prior to the hard reboot. (I checked /etc/syslog.conf and I see "*.info;mail.none;authpriv.none;cron.none /var/log/messages"; isn't this good enough to log all problems?) Usually when I look at top, the swap is never used up like this! I also don't know why mysqld is at 323% CPU (the server only runs Drupal and it's never slow or overloaded). Solver is my application. I don't know what that 'sh' or 'dovecot' is doing. This has been driving me crazy over the last month; please help me solve this mystery and stop my downtime.

        top - 01:10:06 up 6 days, 5 min, 3 users, load average: 34.87, 18.68, 9.03
        Tasks: 500 total, 19 running, 481 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 96.6%sy, 0.0%ni, 1.7%id, 1.8%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 8165600k total, 8139764k used, 25836k free, 428k buffers
        Swap: 2104496k total, 2104496k used, 0k free, 8236k cached

          PID USER     PR NI  VIRT  RES  SHR S  %CPU %MEM   TIME+ COMMAND
         4421 mysql    15  0  571m 105m  976 S 323.5  1.3 9:08.00 mysqld
          564 root     20 -5     0    0    0 R  99.5  0.0 2:49.16 kswapd1
        25767 apache   19  0  399m 8060  888 D  79.3  0.1 0:06.64 httpd
        25781 apache   19  0  398m 5648  492 R  79.0  0.1 0:08.21 httpd
        25961 apache   25  0  398m 5700  560 R  76.7  0.1 0:17.81 httpd
        25980 apache   25  0 10816  668  520 R  75.0  0.0 0:46.95 sh
          563 root     20 -5     0    0    0 D  71.4  0.0 3:12.37 kswapd0
        25766 apache   25  0  399m 7256  756 R  69.7  0.1 0:39.83 httpd
        25911 apache   25  0  398m 5612  480 R  58.8  0.1 0:17.63 httpd
        25782 apache   25  0  440m  38m  648 R  55.2  0.5 0:18.94 httpd
        25966 apache   25  0  398m 5640  556 R  55.2  0.1 0:48.84 httpd
         4588 root     25  0 74860  596  476 R  53.9  0.0 0:37.90 crond
        25939 apache   25  0  2776  172   84 R  48.9  0.0 0:59.46 solver
         4575 root     25  0  397m 6004 1144 R  48.6  0.1 1:00.43 httpd
        25962 apache   25  0  398m 5628  492 R  47.9  0.1 0:14.58 httpd
        25824 apache   25  0  440m  39m  680 D  47.3  0.5 0:57.85 httpd
        25968 apache   25  0  398m 5612  528 R  46.6  0.1 0:42.73 httpd
         4477 root     25  0  6084  396  280 R  46.3  0.0 0:59.53 dovecot
        25982 root     25  0  397m 5108  240 R  45.9  0.1 0:18.01 httpd
        25943 apache   25  0  2916  172    8 R  44.0  0.0 0:53.54 solver
        30687 apache   25  0  468m  63m 1124 D  42.3  0.8 0:45.02 httpd
        25978 apache   25  0  398m 5688  600 R  23.8  0.1 0:40.99 httpd
        25983 root     25  0  397m 5272  384 D  14.9  0.1 0:18.99 httpd
          935 root     10 -5     0    0    0 D  14.2  0.0 1:54.60 kjournald
        25986 root     25  0  397m 5308  420 D   8.9  0.1 0:04.75 httpd
         4011 haldaemo 25  0 31568 1476  716 S   5.6  0.0 0:24.36 hald
        25956 apache   23  0  398m 5872  644 S   5.6  0.1 0:13.85 httpd
        18336 root     18  0 13004 1332  724 R   0.3  0.0 1:46.66 top
            1 root     18  0 10372  212  180 S   0.0  0.0 0:05.99 init
            2 root     RT -5     0    0    0 S   0.0  0.0 0:00.95 migration/0
            3 root     34 19     0    0    0 S   0.0  0.0 0:00.01 ksoftirqd/0
            4 root     RT -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/0
            5 root     RT -5     0    0    0 S   0.0  0.0 0:00.15 migration/1
            6 root     34 19     0    0    0 S   0.0  0.0 0:00.06 ksoftirqd/1

    Here is a normal top, when the server is working fine:

        top - 01:50:41 up 21 min, 1 user, load average: 2.98, 2.70, 1.68
        Tasks: 271 total, 2 running, 269 sleeping, 0 stopped, 0 zombie
        Cpu(s): 15.0%us, 1.1%sy, 0.0%ni, 81.4%id, 2.4%wa, 0.1%hi, 0.0%si, 0.0%st
        Mem: 8165600k total, 2035856k used, 6129744k free, 60840k buffers
        Swap: 2104496k total, 0k used, 2104496k free, 283744k cached

          PID USER     PR NI  VIRT  RES  SHR S  %CPU %MEM   TIME+ COMMAND
         2204 apache   17  0  466m  83m  19m S  25.9  1.0 0:22.16 httpd
        11347 apache   15  0  466m  83m  19m S  25.9  1.0 0:26.10 httpd
        18204 apache   18  0  481m  97m  19m D  25.2  1.2 0:13.99 httpd
         4644 apache   18  0  481m 100m  19m D  24.6  1.3 1:17.12 httpd
         4727 apache   17  0  481m  99m  19m S  24.3  1.2 1:10.77 httpd
         4777 apache   17  0  482m 102m  21m S  23.6  1.3 1:38.27 httpd
         8924 apache   15  0  483m  99m  19m S  22.3  1.3 1:13.41 httpd
         9390 apache   18  0  483m  99m  19m S  18.9  1.2 1:05.35 httpd
         4728 apache   16  0  481m 101m  19m S  14.3  1.3 1:12.50 httpd
         4648 apache   15  0  481m 107m  27m S  12.6  1.4 1:18.62 httpd
        24955 apache   15  0  467m  82m  19m S   3.3  1.0 0:21.80 httpd
         4722 apache   15  0  503m 118m  19m R   1.7  1.5 1:17.79 httpd
         4647 apache   15  0  484m 105m  20m S   1.3  1.3 1:40.73 httpd
         4643 apache   16  0  481m 100m  20m S   0.7  1.3 1:11.80 httpd
         1561 root     15  0 12900 1264  828 R   0.3  0.0 0:00.54 top
         4434 mysql    15  0  496m  55m 4812 S   0.3  0.7 0:06.69 mysqld
         4646 apache   15  0  481m 100m  19m S   0.3  1.3 1:25.51 httpd
            1 root     18  0 10372  692  580 S   0.0  0.0 0:02.09 init
            2 root     RT -5     0    0    0 S   0.0  0.0 0:00.03 migration/0
            3 root     34 19     0    0    0 S   0.0  0.0 0:00.00 ksoftirqd/0
            4 root     RT -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/0
            5 root     RT -5     0    0    0 S   0.0  0.0 0:00.00 migration/1
            6 root     34 19     0    0    0 S   0.0  0.0 0:00.00 ksoftirqd/1
            7 root     RT -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/1
            8 root     RT -5     0    0    0 S   0.0  0.0 0:00.00 migration/2
            9 root     34 19     0    0    0 S   0.0  0.0 0:00.00 ksoftirqd/2
           10 root     RT -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/2
           11 root     RT -5     0    0    0 S   0.0  0.0 0:00.00 migration/3
           12 root     34 19     0    0    0 S   0.0  0.0 0:00.00 ksoftirqd/3
           13 root     RT -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/3
           14 root     RT -5     0    0    0 S   0.0  0.0 0:00.03 migration/4
           15 root     34 19     0    0    0 S   0.0  0.0 0:00.00 ksoftirqd/4
           16 root     RT -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/4
           17 root     RT -5     0    0    0 S   0.0  0.0 0:00.02 migration/5
           18 root     34 19     0    0    0 S   0.0  0.0 0:00.00 ksoftirqd/5
           19 root     RT -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/5
           20 root     RT -5     0    0    0 S   0.0  0.0 0:00.01 migration/6
           21 root     34 19     0    0    0 S   0.0  0.0 0:00.00 ksoftirqd/6
           22 root     RT -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/6
           23 root     RT -5     0    0    0 S   0.0  0.0 0:00.00 migration/7


  • Is there an easier way to implement 301 redirects when converting a site to WordPress?

    - by Amanda
    I have just converted a website to WordPress. The old site has hundreds of hard-coded HTML files, and the new site does not match the old site's directory structure or file-naming scheme (the original site had bad SEO), so I can't put any "blanket" 301 redirects in place. It's been at least 2 months, and the old links are still appearing in Google searches, despite a Google-friendly sitemap.xml. Do I need to hard-code a 301 for every individual page in my .htaccess file, or am I just misunderstanding 301s and Apache? Is there some other way I can tell Google that my entire site structure has changed?
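
    For reference, the shape of per-page redirects with mod_alias (the paths here are hypothetical); when groups of old and new URLs do share structure, RedirectMatch can collapse a whole pattern into one rule:

        # one rule per legacy file:
        Redirect 301 /old-about.html /about/
        Redirect 301 /old-contact.html /contact/

        # or a pattern, where structure allows it:
        RedirectMatch 301 ^/articles/(.*)\.html$ /blog/$1/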


  • Quick Tip - Speed a Slow Restore from the Transaction Log

    - by KKline
    Here's a quick tip for you: During some restore operations on Microsoft SQL Server, the transaction log redo step might be taking an unusually long time. Depending somewhat on the version and edition of SQL Server you've installed, you may be able to increase performance by tinkering with the readahead performance for the redo operations. To do this, you should use the MAXTRANSFERSIZE parameter of the RESTORE statement. For example, if you set MAXTRANSFERSIZE=1048576, it'll use 1MB buffers. If you...(read more)
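
    Completing the tip's example into a full statement (the database name and backup path are hypothetical):

        -- restore with 1MB transfer buffers to speed the redo readahead:
        RESTORE DATABASE Sales
        FROM DISK = N'D:\Backups\Sales.bak'
        WITH MAXTRANSFERSIZE = 1048576;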


  • Using components with different permissive licenses in a commercial app. How to display copyright correctly?

    - by Ivaylo Slavov
    I am writing a commercial application that will make use of some open-source libraries under different licenses. For example, one library is licensed under the Apache 2.0 license, while another uses the LGPL. Both licenses allow usage in commercial applications, but they differ in how attribution for the licensed work must be given. This is my first commercial application that uses 3rd-party libraries, and I want to do the right thing so that the 3rd-party licenses are satisfied. I am asking not only what I should do, but also what I must not do.


  • Design pattern for an automated mechanical test bench

    - by JJS
    Background: I have a test fixture with a number of communication/data-acquisition devices on it that is used as an end-of-line test for a product. Because of all the various sensors used in the bench and the need to run the test procedure in near real time, I'm having a hard time structuring the program to be friendlier to modify later on. For example, a National Instruments USB data-acquisition device is used to control an analog output (load) and monitor an analog input (current); a digital scale with a serial data interface measures position; an air-pressure gauge has a different serial data interface; and the product itself is interfaced through a proprietary DLL that handles its own serial communication.

    The hard part: The "real-time" aspect of the program is my biggest tripping point. For example, I need to time how long the product takes to go from position 0 to position 10,000, to the tenth of a second. While it's traveling, I need to ramp up an output of the NI DAQ when it reaches position 6,000 and ramp it down when it reaches position 8,000. This sort of control looks easy from browsing NI's LabVIEW docs, but I'm stuck with C# for now. All external communication is done by polling, which makes for lots of annoying loops. I've slapped together a loose producer-consumer model where the producer thread loops through reading the sensors and setting the outputs, and the consumer thread executes functions containing timed loops that poll the producer for current data and execute movement commands as required. The UI thread polls both threads to update some gauges indicating current test progress.

    Unsure where to start: Is there a more appropriate pattern for this type of application? Are there any good resources for writing control loops in software (non-LabVIEW) that interface with external sensors and whatnot?
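
    For illustration, a language-agnostic sketch (rendered in Java rather than the asker's C#) of the producer-consumer shape described: readPositionFromScale and setAnalogOutput are hypothetical stand-ins for the serial scale and the NI DAQ, and the 6,000 / 8,000 / 10,000 thresholds are the numbers from the post.

        import java.util.concurrent.atomic.AtomicInteger;

        public class BenchSketch {
            private static final AtomicInteger position = new AtomicInteger();
            private static final AtomicInteger simulated = new AtomicInteger();

            // Stand-in for the serial-scale read described in the post.
            private static int readPositionFromScale() { return simulated.addAndGet(25); }

            // Stand-in for the NI DAQ analog output.
            private static void setAnalogOutput(double volts) {
                System.out.println("analog out -> " + volts);
            }

            public static void main(String[] args) throws InterruptedException {
                Thread producer = new Thread(() -> {   // polls sensors, publishes latest value
                    while (!Thread.currentThread().isInterrupted()) {
                        position.set(readPositionFromScale());
                        try { Thread.sleep(1); } catch (InterruptedException e) { return; }
                    }
                });
                producer.start();

                long start = System.nanoTime();        // time the 0 -> 10,000 travel
                boolean rampedUp = false, rampedDown = false;
                int p;
                while ((p = position.get()) < 10_000) { // consumer's timed control loop
                    if (!rampedUp && p >= 6_000)  { setAnalogOutput(1.0); rampedUp = true; }
                    if (!rampedDown && p >= 8_000) { setAnalogOutput(0.0); rampedDown = true; }
                }
                producer.interrupt();
                producer.join();
                System.out.printf("travel time: %.2f s%n", (System.nanoTime() - start) / 1e9);
            }
        }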


  • Port numbers appended to anchor tags

    - by glifchits
    I've built a static site. Locally, when I serve the content with python -m SimpleHTTPServer, everything behaves normally, but when I copy the HTML onto the server and browse the site at the server's URL, some links have a port number appended to the domain. For example: url.com:84/path, where the correct path is url.com/path. The port number is usually different, always between 81 and 85. It is an Apache server. I'm not experienced with web-server configuration, and I'm not the admin of the server. Let me know if there is more information that can help solve my problem.

        ~> cat /etc/*release*
        SuSE SLES-8 (i386)
        VERSION = 8.1
        UnitedLinux 1.0 (i586)
        VERSION = 1.0
        LSB_VERSION="1.2"
        DISTRIB_ID="UnitedLinux"
        DISTRIB_RELEASE="1.0"
        DISTRIB_DESCRIPTION="UnitedLinux 1.0 (i586)"


  • Gzip compress offline?

    - by shoosh
    I've configured my site to serve compressed content by putting this line in .htaccess:

        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript text/css application/javascript application/json

    This works perfectly for almost all files, except a few large JSON files that are above 200 KB. For some reason they are not being compressed; I can see they aren't via the Net tab in Firebug and the Network section in Chrome. So, as a workaround, I thought I could compress these files offline and have Apache serve them already compressed. What tool should I use? Is the Linux gzip the right one? Any special flags I should use? And what should I put in .htaccess so that the server knows to serve these files with Content-Encoding: gzip?
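
    One common recipe, as a sketch (it assumes mod_rewrite and mod_headers are enabled, and the file names are hypothetical): pre-compress the files, then rewrite matching requests to the .gz variant when the client accepts gzip:

        # pre-compress on the command line (leaves the original in place):
        gzip -9 -c big-data.json > big-data.json.gz

        # in .htaccess: serve the .gz variant when one exists and the
        # client sends Accept-Encoding: gzip
        RewriteEngine On
        RewriteCond %{HTTP:Accept-Encoding} gzip
        RewriteCond %{REQUEST_FILENAME}.gz -f
        RewriteRule ^(.+)\.json$ $1.json.gz [L]

        <FilesMatch "\.json\.gz$">
            Header set Content-Encoding gzip
            ForceType application/json
        </FilesMatch>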


  • Webmin - Setting up multiple virtual hosts - Subdomains

    - by Aaron
    Can someone please help me use Webmin to set up virtual hosts? My current domain www.MYDOMAINLOLFAKE.com currently works. The settings are as follows:

        Apache - handles the name-based server www.MYDOMAINLOLFAKE.com on all addresses
            Address: Any
            Port: 80
            Server Name: www.MYDOMAINLOLFAKE.com
            Document Root: /var/www/html

        BIND DNS Server - master zone MYDOMAINLOLFAKE.com
            ns1.mydomainlolfake.com IPHERE - works
            ns2.mydomainlolfake.com IPHERE - works
            mydomainlolfake.com IPHERE - works
            www.mydomainlolfake.com IPHERE - works
            mail.mydomainlolfake.com IPHERE - works
            ftp.mydomainlolfake.com IPHERE - works

    What I need: something.mydomainlolfake.com. I CAN'T GET THIS TO WORK.

    What I tried: creating a new virtual host that handles the name-based server something.mydomainlolfake.com
            Address: something.mydomainlolfake.com
            Port: 81
            Document Root: /var/www/vhosts/something

    What happens: I create the new vhost and then ALL addresses try to go to that new document root. I need different addresses to go to their respective folders. Can someone please give me better instructions on how to set that up using Webmin? TL;DR: how do I make a something.mydomainlolfake.com subdomain work in Webmin on my CentOS 6 web server?
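
    For reference, the shape of a name-based pair in plain Apache terms, which is what Webmin's vhost fields map onto: both hosts answer on the same port and Apache selects between them by ServerName; a virtual host bound to a different port (such as 81) will not match requests arriving on port 80.

        <VirtualHost *:80>
            ServerName www.mydomainlolfake.com
            DocumentRoot /var/www/html
        </VirtualHost>

        <VirtualHost *:80>
            ServerName something.mydomainlolfake.com
            DocumentRoot /var/www/vhosts/something
        </VirtualHost>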


  • At what point is asynchronous reading of disk I/O more efficient than synchronous?

    - by blesh
    Assuming there is some bit of code that reads files for multiple consumers, and the files are of any arbitrary size: at what size does it become more efficient to read the file asynchronously? Or, to put it another way, how small must a file be for it to be faster just to read it synchronously? I've noticed (and perhaps I'm incorrect) that when reading very small files, it takes longer to read them asynchronously than synchronously (in particular with .NET). I'm assuming this has to do with setup time for things like I/O completion ports, threads, etc. Is there any rule of thumb to help out here? Or is it dependent on the system and the environment?


  • iPhone in-app purchasing for Ecommerce [closed]

    - by Kyle B.
    This may not be the appropriate place for this question, but I'm asking in the hope that an iOS developer familiar with the rules and regulations can comment. I would like to develop an iOS app that performs ecommerce transactions. If I roll my own payment processor and checkout process: 1) is this allowed by Apple's rules, and 2) would I be required to remit 30% of each transaction's sale to Apple?


  • How can I determine the trending pages on my site?

    - by Dogweather
    I'm looking to find out what the "hot" pages are on one of my sites. I want to see, for various timeframes, what the top-50 pages are. I'm going to create a data feed with this info, which will be the input to another app. I have Apache logs and complete control of the machine to install what I want. I'm mostly wondering if there's something out there already that I can use, or, if I have to implement it myself, what good algorithms or strategies might be. Thanks.
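
    For illustration, a minimal sketch of the count-per-path approach (it assumes Apache's common/combined log format, where the request line such as "GET /path HTTP/1.1" is the first quoted field; the log path comes from the command line). A real feed would additionally filter on the timestamp field to get per-timeframe counts.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.Comparator;
        import java.util.HashMap;
        import java.util.Map;

        public class TopPages {
            public static void main(String[] args) throws IOException {
                Map<String, Integer> hits = new HashMap<>();
                for (String line : Files.readAllLines(Paths.get(args[0]))) {
                    int q1 = line.indexOf('"');           // start of quoted request line
                    int q2 = line.indexOf('"', q1 + 1);
                    if (q1 < 0 || q2 < 0) continue;       // skip malformed lines
                    String[] req = line.substring(q1 + 1, q2).split(" ");
                    if (req.length < 2) continue;
                    hits.merge(req[1], 1, Integer::sum);  // req[1] is the path
                }
                hits.entrySet().stream()                  // print the top 50 by hit count
                    .sorted(Map.Entry.<String, Integer>comparingByValue(Comparator.reverseOrder()))
                    .limit(50)
                    .forEach(e -> System.out.println(e.getValue() + "\t" + e.getKey()));
            }
        }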


  • Ubuntu 12.04 LTS Desktop 64 bit user permissions or apache2 rewrite problem

    - by mtm
    I have installed Ubuntu 12.04 Desktop 64-bit and manually installed LAMP, phpMyAdmin, php5-dev, PEAR, PECL, APC, and SSH. I created a user to own /var/www/ and transferred 3 sites into /www/. The sites are in subfolders, with sites-available all configured and enabled. One site is pure HTML; the other two are PHP. I enabled curl. phpMyAdmin and the PHP sites worked at first, then stopped working (they show blank pages), and the sites report that clean URLs cannot be enabled. The HTML site is still working. Where is the problem, and why did the PHP sites stop working? In all Apache .conf files, AllowOverride is set to All, and the PHP sites have .htaccess files. This configuration worked with Ubuntu 10.04.


  • Automatically generated code: "derived work"?

    - by Peregring-lk
    For example, suppose I have GPL software and I'm the author of it. This GPL software has Doxygen comments among its code. These Doxygen comments are written to generate a CC-BY-SA HTML page, so that I can publish the generated documentation on my project website under the CC-BY-SA license. But is the Doxygen documentation output a "derivative work"? After all, this documentation is based on my GPL source code. In that case, the documentation must be GPL, but I want the documentation to be CC-BY-SA, because it is documentation. The GFDL doesn't help: GPL code can't become GFDL (the opposite is possible). If this output really is a derivative work, I think it creates a strange situation: if I distribute my work, the recipients can't legally distribute the generated documentation. While I can do what I want with my own work, the users can't; they have to distribute any derivative work under the same license I offer them. What is the solution?


  • Ubuntu 13.XX unable to mount USB HDD. Tried everything. I/O error boot sector/file system

    - by XaviGG
    I know that there are many related posts, but none of them helped me. I will jump to the last test, because it is the one that should work, but doesn't. I have an external HDD with a single partition, slow-formatted as NTFS in Windows, empty and clean; checking it for errors reports that no errors were found. Moving to Ubuntu 13.04: GParted throws the first error when trying to read the disk: Input/output error. The content of the disk appears as unknown. I am unable to create a partition table or format it, and I get the same error when trying. If I try to mount it in the terminal, it tells me the same, specifying that there is also an I/O error reading the boot sector. I have had this problem since I upgraded (always with a fresh install) to 13.04. I thought it would be solved by 13.10, but it has the same behavior. I tried with two different drives (an HD and an SSHD) that work perfectly in Windows 7. In 13.04 it at least tried to mount: the icon of the drive kept appearing and disappearing until it finally disappeared. But now it doesn't even try. Possible causes: the HDD was my old main HDD, so it had WIN, RECOVERY, SYSTEM, UBU, and SWAP partitions. Maybe the way or place where the partition table is defined is not the best for an external HDD, but I don't know much about that topic. I would appreciate it a lot if someone could give me a guideline to convert one of these drives into a working external HDD. There are no files to recover, nothing to care about: just format the disk completely and let me use it for storing backups, without having to move the files first to the Windows partition, load Windows, and then copy them to the external HDD. I want to use a file comparator for the backups. Thanks a lot.

    Edit 1: I found an option in Windows to convert it to a dynamic HDD, which warns me that I won't be able to run an O.S. after changing. I suppose that is what I need, because in the current mode I cannot safely extract it. But it gave me an error saying it couldn't change the mode.


  • How many disks should I use to meet the capacity and IOPS needs?

    - by facebook-100005613813158
    An application needs 1.6 TB of storage capacity and performs 1000 IOPS. How many disks are required to meet the application's requirements and offer acceptable response time? The disk specifications are as follows:

        Drive capacity = 100 GB
        15K RPM
        Each disk can perform 50 IOPS

    There are 4 candidate answers: 10, 12, 16, and 20. Which one is most likely? In my opinion, 16 disks can only meet the capacity need but cannot meet the IOPS need, so the right answer should be 20 disks. Right?
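
    Written out, the arithmetic sketched in the question is two ceilings and a max:

        disks for capacity = ceil(1.6 TB / 100 GB) = ceil(1600 / 100) = 16
        disks for IOPS     = ceil(1000 / 50)       = 20
        disks required     = max(16, 20)           = 20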

