Search Results

Search found 17924 results on 717 pages for 'order by'.


  • License compatibility question

    - by Ivaylo Slavov
    I have a question regarding software licenses. I plan to put a license on a framework that I have written. My intention is that the license should be open, in order to maintain a community. I also want to control when a new version is released and which changes it will include. The license should allow the framework to be used with commercial products, therefore respecting their own licenses. I have done some quick research and decided to dual-license my work under the Apache License 2.0 (ASL) and the Eclipse Public License (EPL). My thinking is that the EPL will let me control the release cycle as well as contributions to the project, while the Apache license will take care of any patents a third party might want to use in a derived work. Both are also open licenses. My question is related to the GPL and LGPL licenses: if my framework carries the above licenses, will it be possible, and legal, for someone to create a derived work of my framework that is also a derived work of, or links to, a library under the LGPL? Thanks in advance.

    EDIT: To be clear, I will explain how I expect things to work. The framework will define a common API for certain functionality, as well as a Wrapper class that invokes an implementation of that API. The Wrapper will be part of the framework, but it will internally call the actual implementation. This implementation should live in a separate library, and such libraries are what I would like the community to develop and maintain. The community will certainly need access to the framework, but I want to limit changes to the framework itself while providing freedom for any API implementation (a derived work of the framework). The framework will provide flexible configuration mechanisms that specify which implementation of an API will be used.


  • What is the best way to store a table in C++

    - by Topo
    I'm programming a decision tree in C++ using a slightly modified version of the C4.5 algorithm. Each node represents an attribute, or a column of your data set, and it has a child per possible value of the attribute. My problem is how to store the training data set, bearing in mind that I have to use a subset for each node, so I need a quick way to select only a subset of rows and columns. The main goal is to do it in the most memory- and time-efficient way possible (in that order of priority). The best way I have thought of is to have an array of arrays (or std::vector), or something like that, and for each node keep a list (array, vector, etc.) of the column,line pairs (probably a tuple) that are valid for that node. I know there should be a better way to do this; any suggestions?

    UPDATE: What I need is something like this. In the beginning I have this data:

        Paris     4  5.0  True
        New York  7  1.3  True
        Tokio     2  9.1  False
        Paris     9  6.8  True
        Tokio     0  8.4  False

    But for the second node I just need this data:

        Paris     4  5.0
        New York  7  1.3
        Paris     9  6.8

    And for the third node:

        Tokio     2  9.1
        Tokio     0  8.4

    But with a table of millions of records and up to hundreds of columns. What I have in mind is to keep all the data in a matrix, and then for each node keep the info of the current columns and rows. Something like this:

        Paris     4  5.0  True
        New York  7  1.3  True
        Tokio     2  9.1  False
        Paris     9  6.8  True
        Tokio     0  8.4  False

        Node 2: columns = [0,1,2], rows = [0,1,3]
        Node 3: columns = [0,1,2], rows = [2,4]

    This way, in the worst-case scenario I just have to waste size_of(int) * (number_of_columns + number_of_rows) per node, which is a lot less than having an independent data matrix for each node.
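
    As a sketch of that idea in C++ (one shared matrix plus per-node row/column index vectors; all names here are illustrative, not from any particular library):

        #include <string>
        #include <vector>

        // One shared data matrix; each node stores only the indices of the
        // rows and columns that are still relevant to it.
        struct DataSet {
            std::vector<std::vector<std::string>> cells;  // cells[row][col]
        };

        struct NodeView {
            const DataSet* data;     // shared, never copied
            std::vector<int> rows;   // row indices valid for this node
            std::vector<int> cols;   // column indices valid for this node

            const std::string& at(int r, int c) const {
                return data->cells[rows[r]][cols[c]];
            }

            // Child view: keep rows whose value in local column `splitCol`
            // equals `value`, and drop that column from the child.
            NodeView split(int splitCol, const std::string& value) const {
                NodeView child{data, {}, {}};
                for (int r : rows)
                    if (data->cells[r][cols[splitCol]] == value)
                        child.rows.push_back(r);
                for (int i = 0; i < static_cast<int>(cols.size()); ++i)
                    if (i != splitCol)
                        child.cols.push_back(cols[i]);
                return child;
            }
        };

    Storing plain int indices keeps the per-node overhead at exactly the size_of(int) * (columns + rows) the question estimates; for millions of rows, a bitmask or sorted ranges would be a more compact variant of the same design.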


  • Ubuntu installer shows three small screens

    - by Sylan
    I'm trying to install Ubuntu (or Kubuntu, which I tried first) 13.04 on a new laptop. I need to install in UEFI mode in order to properly dual-boot with Windows 8. I've managed to overcome most of the UEFI issues up to the installer, which appeared as a black screen until I used nomodeset. Now the installer appears, but it does not fit my screen: instead, it shows up as three small identical screens across the top of the monitor. I thought the problem could be solved by changing the display resolution in GRUB via the vga number, but this simply expanded the width of the three screens. While I could install at this point, the identical screens are too small for me to read the installer. Also, the "Try Ubuntu" option simply gets stuck at a black screen. I'm afraid these problems may persist through the installation if I attempt to continue. Additional information: the laptop is a Lenovo IdeaPad Y580 with an i7-3630QM processor and a GeForce GTX 660M graphics card which works alongside an Intel HD 4000 integrated card via Optimus.


  • How to forward traffic using iptables rules?

    - by ProbablePattern
    I am new to iptables and I have been doing Google searches for a few days now without finding a good solution to this problem. I have computer A with a public IP address (say 192.0.2.1) that can access the Internet unrestricted. I have another computer B with a private IP address (192.168.1.1) that can only access computer A. How do I use iptables to forward network traffic from B through A to the Internet? I need http, ftp, and https to work in order to use apt-get with sudo. Both computers run Ubuntu Linux. I have tried using Squid, but I think it is far too complicated for what I need to do.
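
    A minimal sketch of what such forwarding usually looks like on A, assuming A reaches the Internet on eth0 and reaches B on eth1 (interface names are illustrative, not from the question):

        # on A: allow the kernel to forward packets
        sysctl -w net.ipv4.ip_forward=1

        # rewrite B's traffic so replies come back through A (NAT)
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # let B's packets through the FORWARD chain, plus return traffic
        iptables -A FORWARD -i eth1 -o eth0 -s 192.168.1.0/24 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

    B would then use A's LAN address as its default gateway (e.g. ip route add default via that address), which covers http, https, and ftp for apt-get without any proxy.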


  • Use Amazon EC2 as a backup server

    - by MikeMurko
    I would like to use Amazon EC2 as an emergency backup database+web server in the event our primary host becomes unavailable. I feel like I wouldn't have trouble setting up a Windows instance, installing SQL Server, and getting the web server up and running (it would take a few hours, plus installing various libraries, our source code, etc.). My question relates to pricing. If I simply "stop" the instance rather than "terminate" it, does that stop counting "instance-hours"? I would prefer not to terminate the instance and lose all the work I spent setting it up. If I must "terminate" in order to stop the billing: is it possible to make an image of the server after I have set it all up, and then save that image somewhere (S3?)? Is this something that people do regularly? Ideally this instance would just be waiting in the wings for an issue with our host, costing us nothing except perhaps data storage.
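
    For reference, a sketch of the imaging step with the modern AWS CLI (the instance ID, AMI ID, and names are placeholders; stopped EBS-backed instances stop accruing instance-hours, though the attached EBS volumes are still billed for storage):

        # create an AMI from the fully configured instance
        aws ec2 create-image --instance-id i-0123456789abcdef0 \
            --name "standby-web-db" --description "pre-built emergency server"

        # later, launch a fresh copy from that image when needed
        aws ec2 run-instances --image-id ami-0123456789abcdef0 \
            --instance-type m1.large --count 1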


  • Dual boot: Windows 7 partition deleted after Kubuntu 14.04 install... Weird!

    - by user292152
    I bought two new SSDs in order to install Kubuntu on one and Windows 7 on the other. Before that I had Linux Mint and Windows 7 together on just one SSD. So first I installed Windows 7, as recommended, and then used the guided installer to install Kubuntu. I selected the second SSD and chose the option "use entire disk and install", but to my surprise, after rebooting and selecting the Windows 7 boot loader from GRUB 2, I got a prompt that my Windows installation is damaged and I need to run the repair option from the installation disk. So I booted into Kubuntu again, fired up kparted, and saw that my Windows partition had indeed been deleted, all except the recovery partition. I don't understand what happened. I am not new to this topic, and this was not my first time installing Ubuntu alongside Windows; I have never had this problem before. What can I do to make sure this won't happen again, so I won't waste another two hours of my life? Thanks a lot!


  • Edit write-protected files by breaking hard links

    - by Taymon
    A directory which I own and can write to contains hard links to files that I don't own and don't have write permission for. I want to open and edit these files in Emacs. When I save my changes, Emacs should rename the existing hard link by appending ~, then write my new version of the file as a new file owned by me. I was under the impression that Emacs could just do this (because of the way it does backups), but it's not working; when I save, it attempts to change the file's permissions in order to write to it (and fails because I don't own the file). How do I make this happen?
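
    For reference, a sketch of the Emacs variables that govern this rename-versus-copy behavior (these are the standard knobs, offered as a hedged suggestion rather than a verified fix for this particular setup):

        ;; back up by renaming the old file rather than copying it,
        ;; and allow saves to break hard links when Emacs must
        (setq backup-by-copying nil
              backup-by-copying-when-linked nil)
        (setq break-hardlink-on-save t)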


  • Virtualise Excel in a browser

    - by Macros
    Is it possible to give users access to a virtualised instance of Excel? I don't want to give them access to a full OS (although this will clearly be running in the background; all they can access is Excel, and they don't even see any other screens). Secondly, if it is possible, is it possible to do this within a browser?

    Edit: I am building a system designed to test candidates' skills in Excel, which for this reason needs to use the full desktop version and not a web app. I don't want to have to ensure Excel is installed on the client machine, as there would be issues around differing versions, and security, since the workbook(s) used in the test rely heavily on VBA to customise and mark the exercises. Ideally my web app would be able to open a session to the server which then just puts the user into an instance of Excel without ever seeing a desktop. I would also need to be able to pass in command-line parameters in order to define which workbook to open, and to pass in a unique token identifying the user.
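
    One commonly cited building block for the "Excel without a desktop" part is an RDP RemoteApp session; a sketch of a published-app .rdp file follows (the server name, program alias, and workbook path are all placeholders, and this assumes a Remote Desktop Session Host with Excel published as a RemoteApp):

        full address:s:rds.example.com
        remoteapplicationmode:i:1
        remoteapplicationprogram:s:||EXCEL
        remoteapplicationcmdline:s:C:\tests\candidate-12345.xlsm

    Browser delivery would still need something on top of this, such as an HTML5 RDP gateway.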


  • local wordpress installation not accessible from the outside world

    - by hello
    I have a working installation of WordPress located in /var/www/html/wordpress. It is accessible on my local network at [local-machine-ip]/wordpress/. There is also a test page located in /var/www/html/test.html; it is likewise accessible on my local network at [local-machine-ip]. I would like the WordPress website to be accessible from the outside world. I know that my ISP blocks incoming requests on port 80, so I set my router to redirect requests from port 8080 to 80. This feature appears to be working correctly, since I can access the test.html page using my public IP address as follows: [public-ip]:8080. However, I cannot access [public-ip]:8080/wordpress. Here is my Apache config:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/html
            ServerName [my.domain.com]
            <Directory /var/www/html/>
                Options FollowSymLinks Indexes MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Thanks!
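
    A detail worth knowing when debugging this kind of setup: WordPress redirects requests to the URL stored in its siteurl/home settings, so a site reached through a forwarded port often breaks even when plain static files work. A sketch of pinning those values in wp-config.php (the address is a placeholder; this is a general WordPress mechanism, not a confirmed diagnosis here):

        define('WP_HOME',    'http://[public-ip]:8080/wordpress');
        define('WP_SITEURL', 'http://[public-ip]:8080/wordpress');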


  • Upgraded to Mountain Lion, now 127.0.0.1 is not resolving

    - by Shanimal
    I used to be able to type 127.0.0.1 (or my network IP, 10.10.53.32) and it would resolve to my "default" virtual host. 127.0.0.1/~Shanimal and shanimal.dev both resolve to their appropriate folders. localhost and 127.0.0.1 give me a 404: "Not Found. The requested URL / was not found on this server." Basically, my "It works!" screen no longer works.

    /private/etc/apache2/Shanimal.conf:

        <Directory "/Users/Shanimal/Sites/_www">
            Options Indexes Multiviews
            AllowOverride AuthConfig Limit
            Order allow,deny
            Allow from all
        </Directory>

    hosts:

        127.0.0.1 localhost
        127.0.0.1 shanimal.dev


  • Use SSH reverse tunnel to bypass VPN [on hold]

    - by John J. Camilleri
    I have shell access to a server M, but I need to log into a VPN on my machine L in order to access it. I want to be able to get around this VPN, and I've heard I can do this by creating a reverse SSH tunnel through an intermediate server E (which I can access without the VPN). This is what I am trying:

        1. Turn on the VPN on L, open an SSH session to M
        2. On M, execute the command: ssh -f -N -T -R 22222:localhost:22 user@E
        3. From L, try to open an SSH session to E on port 22222, hoping to end up at M

    Step 2 seems to work without any complaint, but on step 3 I keep getting "connection refused". I have made sure that port 22222 is open on E:

        7 ACCEPT tcp -- anywhere anywhere tcp dpt:22222

    I'm pretty new to SSH tunnelling and not sure what the problem could be. Any ideas what I can try?
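
    A hedged note on one common cause of exactly this symptom: by default sshd binds remote forwards (-R) to E's loopback interface only (GatewayPorts no), so connections to E's public address on 22222 are refused even though the tunnel is up. Two sketches, with user names illustrative:

        # Option 1: hop through E, then into the loopback-bound tunnel
        ssh -t user@E ssh -p 22222 m_user@localhost

        # Option 2: in E's sshd_config, let -R forwards bind to all interfaces,
        # then re-establish the tunnel and connect to E:22222 directly
        GatewayPorts yes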


  • Is it okay to just add a PHP page or two to add some functionality to a Drupal site?

    - by Zaemz
    I'm not familiar with Drupal, really. I can dig around the admin interface and navigate the directories and find the files that I need just fine. What I'm really not familiar with is adding modules or extending modules.

    The site currently takes an order and sets up recurring payments through Ubercart, using Authorize.net as a gateway. Right now, when a payment fails, a single e-mail gets sent to the admin. We'd like to extend this to send an e-mail to the user and let them change their payment information through another page on the site. Authorize.net has a service called Silent Post URL that basically posts a carbon copy of the transaction in XML to whatever URL you give it. We'd like to accept that XML, deserialize it, parse the data, send a notice to the user, and give them the page for updating their information. So I guess it'll be two PHP pages: one for the XML API call from Authorize.net, and one for the page where users update their payment information.

    Could I just create two simple pages, each handling its own task, or should I look into properly extending a module? If it's appropriate to write the pages without hooking them into a module, what would be the best way to set up what needs to get done?

    (The most experience I've had with extending a PHP site has been hacking away at someone else's poorly constructed, custom framework, so if anyone has good resources, perhaps on PHP best practices, that they could share through a PM or a comment, I'd appreciate it.)

    (Also, I'm still getting the hang of Stack Exchange, so if this isn't appropriate please let me know and I'll delete it.)
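
    For what it's worth, the usual alternative to standalone .php files is a tiny custom module; a sketch for Drupal 7 follows (module, path, and function names are made up for illustration):

        // mymodule.module
        function mymodule_menu() {
          $items['authnet/silent-post'] = array(
            'page callback'   => 'mymodule_silent_post',
            'access callback' => TRUE,  // Authorize.net posts without a session
            'type'            => MENU_CALLBACK,
          );
          return $items;
        }

        function mymodule_silent_post() {
          $xml = file_get_contents('php://input'); // raw Silent Post body
          // parse the XML, look up the user, send the notice...
          drupal_exit();
        }

    Keeping the pages inside Drupal's bootstrap this way means user lookup and mail functions are already available rather than reinvented.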


  • Software Architecture: How to divide work to a network of computers?

    - by Morpork
    Imagine a scenario as follows: let's say you have a central computer which generates a lot of data. This data must go through some processing, which unfortunately takes longer than the generation. In order for the processing to catch up with real time, we plug in more slave computers. Further, we must take into account the possibility of slaves dropping out of the network mid-job, as well as additional slaves being added. The central computer should ensure that all jobs are finished to its satisfaction, and that jobs dropped by a slave are retasked to another. The main question is: what approach should I use to achieve this? Perhaps the following would help me arrive at an answer:

        1. Is there a name or design pattern for what I am trying to do?
        2. What domain of knowledge do I need to get these computers talking to each other? (E.g., will a database, which I have some knowledge of, be enough, or will this involve sockets, which I have yet to learn?)
        3. Are there any examples of such a system?

    The main question is a bit general, so it would be good to have a starting/reference point. Note that I am assuming the constraints of C++ and Windows, so solutions pointing in that direction would be appreciated.
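
    What the question describes is usually called the master/worker (or work-queue) pattern. Below is a compact C++ sketch of the coordinator's bookkeeping under that pattern, with the transport (sockets, a polled database table, or a message queue) deliberately left abstract; all names are illustrative:

        #include <chrono>
        #include <map>
        #include <queue>
        #include <string>

        using Clock = std::chrono::steady_clock;

        struct Job { int id; std::string payload; };

        class Master {
            std::queue<Job> pending;                 // not yet handed out
            struct Lease { Job job; int worker; Clock::time_point since; };
            std::map<int, Lease> inFlight;           // keyed by job id
        public:
            void submit(Job j) { pending.push(std::move(j)); }

            // A worker asks for work; returns false when the queue is empty.
            bool assign(int worker, Job& out) {
                if (pending.empty()) return false;
                out = pending.front(); pending.pop();
                inFlight[out.id] = {out, worker, Clock::now()};
                return true;
            }

            void complete(int jobId) { inFlight.erase(jobId); }

            // Called periodically: any job whose lease expired (worker died
            // or dropped off the network) goes back for retasking.
            void reapExpired(std::chrono::seconds lease) {
                auto now = Clock::now();
                for (auto it = inFlight.begin(); it != inFlight.end(); ) {
                    if (now - it->second.since > lease) {
                        pending.push(it->second.job);
                        it = inFlight.erase(it);
                    } else {
                        ++it;
                    }
                }
            }
        };

    The lease-and-reap structure is what makes dropped slaves harmless: a job is only ever considered done when complete() arrives, so a silent worker just means the job re-enters the queue.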


  • "ssh root@server" hangs indefinitely long

    - by Thibaut
    Hi, sometimes my SSH client will take forever to log in. This happens when the server is not responding (overloaded, killed processes, ...). My automated scripts will then fail because the ssh process never exits. Is there an ssh configuration value to set a timeout, so that ssh fails if it can't log in after a predefined number of seconds? I know there are knobs on the server side, but I have to set this on the client side, as the sshd process is not responding or is responding incorrectly. Thanks!
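
    For reference, the client-side OpenSSH options that cover this (all standard options):

        # fail the TCP connect after 10 s, and drop a session whose server
        # stops answering keepalives for ~30 s
        ssh -o ConnectTimeout=10 \
            -o ServerAliveInterval=15 -o ServerAliveCountMax=2 \
            root@server

    Note that ConnectTimeout only bounds the connection phase; a server that accepts the TCP connection but then stalls during authentication is better handled by wrapping the whole call, e.g. timeout 60 ssh ... with GNU coreutils.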


  • What's the best way to handle numerous recurring log entries in game loop?

    - by Kaa
    I have a custom logging system, use of which is scattered all over the engine and game. The system is linked to a "LogStore" that has an std::vector<string> logs[NUM_LOG_TYPES]; each vector corresponds to its log type (info, error, debug, etc.). There's one extra std::vector that has "coordinates" to all log entries in the order they were received.

    Now, all the logging output is also displayed inside my development console in the game. The game console is handled by an HTML-type GUI and therefore requires a new <p> element to be added for each log output. My problem is that log entries generated in the main loop each frame freeze the engine, because they keep adding elements to the in-game console, and if the console or GUI itself generates a warning, that creates an infinite logging loop. I want to solve it by handling recurring log entries in an elegant way that lets you know something is critically wrong but won't freeze the engine, like displaying the count of errors in the last 60 frames instead of displaying the errors themselves.

    But how do you guys handle this? Does anyone know any nifty tricks? I understand the question may sound vague, but if someone has come across this type of issue, I'm sure they know exactly what's happening. Example problematic log entries: OpenGL warnings (I actually do check for errors every frame in many places), and really any prints anywhere in the main loop (may be debugging, may be warnings).
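
    One approach along exactly the lines the question sketches, as a hedged example (names are illustrative): collapse repeats by message key and only flush a per-message count once per interval.

        #include <chrono>
        #include <string>
        #include <unordered_map>

        // Collapse identical messages: the first occurrence in a window is
        // emitted immediately; repeats only bump a counter that is later
        // flushed as a single summary line.
        class LogThrottle {
            struct Entry { int suppressed = 0; };
            std::unordered_map<std::string, Entry> seen;
            std::chrono::steady_clock::time_point windowStart =
                std::chrono::steady_clock::now();
        public:
            // Returns true if the message should be printed now.
            bool shouldEmit(const std::string& msg) {
                auto [it, isNew] = seen.try_emplace(msg);
                if (isNew) return true;      // first time this window: emit
                ++it->second.suppressed;     // repeat: count it instead
                return false;
            }

            // Call once per frame; every `window` it reports what was swallowed.
            template <class Sink>
            void flush(std::chrono::seconds window, Sink&& emitSummary) {
                auto now = std::chrono::steady_clock::now();
                if (now - windowStart < window) return;
                for (auto& [msg, e] : seen)
                    if (e.suppressed > 0)
                        emitSummary(msg, e.suppressed);  // e.g. "GL warning (x240)"
                seen.clear();
                windowStart = now;
            }
        };

    Routing the GUI console through the same throttle also breaks the feedback loop, since a console warning that repeats every frame collapses into one summary entry.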


  • Adding custom script on ESXi 5.0

    - by Quzar
    I have an ESXi server that I would like to run a custom script on every boot; the script contains esxcli and other commands. I have tried adding the script to init.d and creating an rc.local.d folder with a script in it, but the etc folder gets rebuilt on startup. I've also tried modifying state.tgz and local.tgz in the /bootbank folder in order to force these files to appear, but that does not seem to work either. Is there any way I can run custom commands on boot? Note: I've tried the advice in "ESXi boot process / state storage" to no avail. It seems the system changed between 4.1 and 5.0.
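
    A hedged sketch of the mechanism usually suggested for 5.0 (verify against VMware's documentation for your exact build):

        # ESXi 5.0: add commands to /etc/rc.local (later releases use
        # /etc/rc.local.d/local.sh instead), e.g. your esxcli calls
        vi /etc/rc.local

        # then persist the change into the boot bank so it survives reboot
        /sbin/auto-backup.sh

    The key point is that /etc is rebuilt from the boot-bank archives at startup, so edits only stick once auto-backup.sh has folded them back in.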


  • Solr performance (tomcat) - High load

    - by Ward Loockx
    I'm relatively new to Solr. I have a production site running on a VPS, but now I'm having serious load issues, and I don't know where to start in order to get the load down...

    VPS specs (linode.com 512):

        512 MB RAM
        4 CPU (1x priority)

    It looks like my Solr server (Tomcat) is using a lot of CPU power. You can find my solrconfig.xml at http://pastebin.com/qdfi8Med and my schema.xml at http://pastebin.com/rRusDP8b. I've tried to increase the cache size, but this didn't do anything to the load. You can see the stats page below.

    EDIT - Because the screenshot was unclear, I took smaller screenshots of what (I think) is important: the Dismax query handler stats and the cache stats. Thanks for the help!


  • X11 forwarding through SSH

    - by martinjlowm
    I have been playing around with X11 forwarding for the past few hours, and so far I've managed to forward my desktop PC's X server to my laptop, using x11vnc as the server and x2vnc as the client. x2vnc uses Xinerama to provide dual-screen-like behavior between my laptop and my desktop PC. It's actually really great! I know that most Linux systems run Xorg and desktop environments on TTY7. I was therefore thinking: is it in any way possible to have the VNC tunnel tied to its own TTY? It would be great to be able to switch back and forth between two TTYs in order to choose which machine to manage; I would like that approach more than using Xinerama or a GUI.
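
    A sketch of the usual way to pin a program to its own virtual terminal (the viewer binary, display number, and host name are illustrative):

        # start a second X server on display :1, bound to VT8, running only a
        # VNC viewer; switch hosts with Ctrl+Alt+F7 / Ctrl+Alt+F8
        xinit /usr/bin/vncviewer -fullscreen desktop-host:0 -- :1 vt8

    This gives the remote session its own switchable TTY-like screen rather than stitching the two desktops together with Xinerama.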


  • How to schedule download of Windows 7 updates?

    - by atoMerz
    To put it short: I'd like to schedule my Windows updates to start/stop at certain times of day. How can I do this? More explanation: my Internet traffic is limited by my ISP, and it's free only during a specific period of the day (2:00am-7:00am). I've set the Windows Update setting to check for updates but notify me before downloading, in order to prevent it from automatically using up my traffic. But then I have to manually tell it when to start downloading, and I obviously don't want to stay up that late just to push a button. So again: how can I schedule Windows updates to start/stop at specified times?
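
    One commonly suggested workaround, sketched with placeholder task names and assuming automatic downloading is enabled (e.g. "download updates but let me choose whether to install them"): let the Windows Update service run only inside the free window.

        :: run from an elevated prompt; tasks fire as SYSTEM at 02:00 / 07:00
        schtasks /Create /TN "WU window open"  /SC DAILY /ST 02:00 /RU SYSTEM /TR "net start wuauserv"
        schtasks /Create /TN "WU window close" /SC DAILY /ST 07:00 /RU SYSTEM /TR "net stop wuauserv"

    An in-progress download stops when the service stops and resumes (via BITS) the next night.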


  • Slurm: How to find out how much memory is not allocated on a given node

    - by PlagTag
    I am new to Slurm. I am searching for a comfortable way to see how much memory on a node/nodelist is available for my srun allocation. I have already played around with sinfo, scontrol, and sstat, but none of them gives me the information I need in one comfortable overview. I had the idea to write a shell script to fetch all fields of all jobs from scontrol and sum them up, but there must be an easier way. It would be great if anyone has a hint or an idea!
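
    For reference, a sketch of the built-in views (the FreeMem/AllocMem fields require a reasonably recent Slurm release; the node name is a placeholder):

        # per node: name, total memory (MB), free memory (MB)
        sinfo -N -o "%N %m %e"

        # a single node's memory accounting
        scontrol show node somenode | grep -oE '(RealMemory|AllocMem|FreeMem)=[0-9]+'

    Note that FreeMem is the OS-level free memory, while RealMemory minus AllocMem is what the scheduler still considers allocatable.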


  • Managing TFS Workspaces

    - by Enrique Lima
    You are the administrator (or perhaps just the one who knows the most about TFS), and you need to do some cleanup on what is connected, and perhaps even clean up after people who have left the organization with code still checked out in their workspaces.

    What permissions do I need? You will need the Administer Workspaces permission to perform the following tasks.

    The commands. In order to execute the commands, open a Visual Studio Command Prompt; once there you will be able to use the tf command. This has a nice set of options, which I will provide a listing for later on in another post.

    To list all workspaces registered:

        tf workspaces /collection:<url to your TPC> <workspace>;<owner>

    To delete a specific workspace:

        tf workspace /delete /server:<url to your TPC> <workspace>;<owner>

    If for any reason a workspace name has embedded spaces, then surround it with "" (double quotes).
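
    A hedged, concrete example of the shape these commands take (the server URL, workspace name, and account are invented for illustration):

        tf workspaces /collection:http://tfs.example.com:8080/tfs/DefaultCollection
        tf workspace /delete "BUILD-PC1;DOMAIN\jdoe" /collection:http://tfs.example.com:8080/tfs/DefaultCollection

    Note that newer tf versions take /collection: where older ones used /server:.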


  • How do I get 12.04 to recognize swap partition so that I can hibernate?

    - by Kayla
    I just installed 12.04 and used GParted to erase and enlarge my swap partition. When I rebooted, GParted said that the file system of the swap partition was unknown. GParted doesn't let me change the file system to "linux-swap". It does let me change it to NTFS, but when I reboot, it goes back to "unknown". Thanks in advance for your help.

    Output from sudo swapon -s:

        Filename                Type       Size     Used  Priority
        /dev/mapper/cryptswap1  partition  9025532  0     -1

    Output from sudo fdisk -l:

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x9d63ac84

        Device Boot      Start        End     Blocks   Id  System
        /dev/sda1   *     2048     2459647    1228800   7  HPFS/NTFS/exFAT
        /dev/sda2      2459648   197836472   97688412+  7  HPFS/NTFS/exFAT
        /dev/sda3    466890752   488395119   10752184   7  HPFS/NTFS/exFAT
        /dev/sda4    197836798   466890751  134526977   5  Extended
        /dev/sda5    197836800   448837631  125500416  83  Linux
        /dev/sda6    448839680   466890751    9025536  82  Linux swap / Solaris

        Partition table entries are not in disk order

        Disk /dev/mapper/cryptswap1: 9242 MB, 9242148864 bytes
        255 heads, 63 sectors/track, 1123 cylinders, total 18051072 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x951b7f53

        Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table
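
    A relevant detail, offered as a hedged pointer rather than a diagnosis: the active swap above is /dev/mapper/cryptswap1, i.e. encrypted swap mapped over the raw partition, which GParted reports as "unknown" because the partition holds dm-crypt data rather than a plain swap signature. The mapping can be inspected like so:

        cat /etc/crypttab    # e.g. cryptswap1 /dev/sda6 /dev/urandom swap,...
        sudo swapon -s       # shows the mapper device, not /dev/sda6

    Note that swap encrypted with a fresh random key each boot cannot be used for hibernation, since the hibernation image is unreadable after a reboot.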


  • How can I get rid of the motd message "*** /dev/sdb1 will be checked for errors at next reboot ***"? [duplicate]

    - by kmm
    This question already has an answer here: Persistent "disk will be checked..." in the message of the day (motd) even after reboot (3 answers)

    My motd persistently has:

        *** /dev/sdb1 will be checked for errors at next reboot ***

    The problem is that I don't have /dev/sdb1 on my system. I only have /dev/sdb2 (mounted as /) and /dev/sda1, which mounts to /media/backup. I delete that line from /etc/motd, but it reappears after reboot. Here's my df output:

        Filesystem  Size  Used  Avail  Use%  Mounted on
        /dev/sdb2    73G  3.7G    66G    6%  /
        udev        490M  4.0K   490M    1%  /dev
        tmpfs       200M  760K   199M    1%  /run
        none        5.0M     0   5.0M    0%  /run/lock
        none        498M     0   498M    0%  /run/shm
        /dev/sda1   1.9T  429G   1.4T   25%  /media/backup

    Update: here is the output of sudo fdisk -l:

        Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003dfc2

        Device Boot  Start         End      Blocks    Id  System
        /dev/sda1       63  3907024064  1953512001    83  Linux

        Disk /dev/sdb: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00049068

        Device Boot      Start        End    Blocks  Id  System
        /dev/sdb1    152301568  156301311   1999872  82  Linux swap / Solaris
        /dev/sdb2   *     2048  152301567  76149760  83  Linux

        Partition table entries are not in disk order

    I guess /dev/sdb1 is my swap space.
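
    For background (a hedged pointer, in line with the linked duplicate): on Ubuntu the motd is assembled at login by the scripts in /etc/update-motd.d/, so editing /etc/motd directly never sticks. The line in question comes from the fsck checker, which can be inspected and re-run by hand:

        ls /etc/update-motd.d/                    # 98-fsck-at-reboot generates that line
        sudo /etc/update-motd.d/98-fsck-at-reboot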


  • Vagrant sahara plugin - multiple snapshots

    - by BazZy
    How can I make more than one snapshot while in sahara sandbox mode? Or can I see a list of all commits I've ever made and roll back to any of them? Why do I need all this? I just want to set up an initial VM. After that I want to compile a number of packages from source, which takes a pretty long time. So right now I'm thinking of this order:

        1. Set up an initial Ubuntu 12.04 Vagrant box
        2. Snapshot this state
        3. Compile sources, install system-wide rbenv (it also compiles)
        4. Snapshot the second state
        5. Start all my infrastructure experiments
        6. Roll back to any of my previous states, or commit a third state

    Thanks a lot!
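
    For orientation, sahara's surface area is small, so a sketch of what it does and does not cover (with vagrant package as a coarse stand-in for a second snapshot, since sahara keeps only one rollback point):

        vagrant sandbox on          # enter sandbox mode (one implicit snapshot)
        vagrant sandbox rollback    # discard changes since 'on' or the last commit
        vagrant sandbox commit      # fold changes into the single baseline

        # a heavier-weight "named snapshot": export the compiled state as a box
        vagrant package --output after-compile.box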


  • Fedora: "Login Incorrect"

    - by darkblackcorner
    I've just set up a minimal install on my netbook (the default was too resource-hungry, so I figured I'd customize the install and learn something about Linux at the same time!). No problems logging in as root, but when I create a new user and try to log in as them I just get the "Login incorrect" error. I'm certain the password is correct, though the secure log shows an authentication error. Am I missing a permission somewhere?

        useradd test
        usermod -p [pwd] test

    The shell is added automatically, I think (checking the password file says the shell is /bin/bash). I've tried adding the user to the sudoers group:

        usermod -a -G wheel test

    which doesn't help. I've kept the password simple in order to rule out human error.
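
    Worth checking, as a hedged pointer: usermod -p expects an already-encrypted password hash, so passing a plain string writes an entry that can never match at login, which would produce exactly this symptom. Setting the password the usual way avoids it:

        passwd test

        # or, if it must be non-interactive, hand usermod a real hash:
        usermod -p "$(openssl passwd -1 's3cret')" test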

