Search Results

Search found 5279 results on 212 pages for 'execution counter'.

Page 90/212

  • How to SET TIMING ON for parallel upgrades to 12c?

    - by Mike Dietrich
    Have you ever asked yourself how to get timings for all statements during an Oracle Database 12c upgrade? When you run the parallel upgrade via catctl.pl, the Perl script that drives the parallel upgrade in Oracle Database 12c, you may also want timings written to your logfile during execution. As catctl.pl does not offer an option for this yet, the best way to achieve it is to edit the catupses.sql script in $ORACLE_HOME/rdbms/admin, as this script gets called over and over again throughout all steps of the upgrade run. Just add the lines below (the SET TIMING ON block, originally marked in red) to catupses.sql and start your upgrade:

        Rem =============================================
        Rem Call Common session settings
        Rem =============================================
        @@catpses.sql

        Rem =============================================
        Rem Set Timing On during the Upgrade
        Rem =============================================
        SET TIMING ON;

        Rem =============================================
        Rem Turn off PL/SQL event used by APPS
        Rem =============================================
        ALTER SESSION SET EVENTS='10933 trace name context off';

    -Mike
    PS: This may become the default in a future patch set

    Read the article

  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In What would you choose for your project between .NET and Java at this point in time? I say that I would consider "Will you always deploy to Windows?" the single most important (EDIT: technical) decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is "If we ever want to run on Linux/OS X/whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons:
    - OpenJDK and all the vendor-supplied JVMs have passed the official Sun TCK, ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK.
    - Mono trails the .NET releases. What .NET level is currently fully supported? Do all GUI elements (WinForms?) work correctly in Mono?
    - Businesses may not want to depend on open source frameworks as the official plan B.
    I am aware that with the new governance of Java by Oracle, the future is unsafe, but e.g. IBM provides JDKs for many platforms, including Linux. They are just not open sourced. So, under which circumstances is Mono a valid business strategy for .NET applications? Edit: Mark H summarized it as: "If the claim is that 'I have a Windows application written in .NET, it should run on Mono', then no, it's not a valid claim - but Mono has made efforts to make porting such applications simpler."

    Read the article

  • Scene graphs and spatial partitioning structures: What do you really need?

    - by tapirath
    I've been fiddling with 2D games for a while and I'm trying to move into 3D game development, so I thought I should get my basics right first. From what I've read, scene graphs hold your game objects/entities and their relations to each other, like 'a tire' being the child of 'a vehicle'. They are mainly used for frustum/occlusion culling and for minimizing the collision checks between objects. Spatial partitioning structures, on the other hand, are used to divide a big game object (like the map) into smaller parts, so that you gain performance by drawing only the relevant polygons and, again, by limiting collision checks to those polygons only. A spatial partitioning data structure can also be used as a node in a scene graph. But... I've been reading about both subjects and I've seen a lot of "scene graphs are useless" and "BSP performance gain is irrelevant with modern hardware" kinds of articles. Also, some of the game engines I've checked, like gameplay3d and jMonkeyEngine, use only a scene graph (that may also be because they don't want to limit developers), whereas games like Quake and Half-Life use only spatial partitioning. I'm aware that the usage of these structures depends very much on the type of game you're developing, so for the sake of clarity let's assume the game is an FPS like Counter-Strike with some better outdoor environment capabilities (like a terrain). The obvious question is which one is needed and why (considering modern hardware capabilities). Thank you.
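
    To make "dividing the map to limit collision checks" concrete, here is a minimal Python sketch of one spatial partitioning scheme, a uniform grid (an illustration only; all names are hypothetical, and real engines use BSPs, octrees, and the like):

        # Bucket objects into fixed-size grid cells; collision tests then only
        # need to consider objects sharing a cell (or its neighbours).
        CELL = 32.0

        def cell_of(pos):
            return (int(pos[0] // CELL), int(pos[1] // CELL))

        def build_grid(objects):
            grid = {}
            for name, pos in objects:
                grid.setdefault(cell_of(pos), []).append(name)
            return grid

        grid = build_grid([("player", (10, 5)), ("crate", (12, 7)), ("tree", (200, 340))])
        print(grid[cell_of((10, 5))])   # ['player', 'crate'] -- the only pair worth testing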

    Read the article

  • A short but intense GCC Gathering in London

    - by user817571
    About one week ago I joined many long-time GCC friends and acquaintances in London for a gathering organized by Google (thanks in particular, I guess, to Diego and Ian). It was only a weekend, and I wasn't able to attend on Sunday morning, but it was a very good occasion to raise some issues in a very relaxed way, in particular those at the border between areas of competence, which are the most difficult to discuss during normal work days. If you are interested in a general overview and some notes, this is a good link: http://gcc.gnu.org/wiki/GCCGathering2011 As you may easily guess, the third topic is mine, which I managed to have up quite early on Friday morning thanks to the votes of some good friends like Dodji (the ordering of the topics resulted from democratic voting on Friday evening!). I learned a lot from the discussion: for example, that the new C++11 'final' should certainly be exploited extensively in the C++ front end; the various reasons why devirtualization can be quite tricky (but I'm really confident that Martin and Honza are going to make good progress, also based on a set of short testcases which I promised to collect); and that, as explained by Ian, the gold linker already implements the nice --icf (Identical Code Folding) facility, which some friends of mine are definitely going to like (however, see: http://sourceware.org/bugzilla/show_bug.cgi?id=12919). I also enjoyed the observations made by Lawrence, who remarked that in C++11 we are going to see more pointer iterations implicitly produced by the new range-based for-loop, and we really want to make sure the loop optimizers are able to deal with those as well as with loops explicitly using a counter. All in all, I really hope we are going to do it again!

    Read the article

  • Navigant Consulting Implements Oracle's PeopleSoft Enterprise 9.1 to Integrate Financial and HR Information

    - by jay.richey
    Integration to Help Global Consultancy Increase Business Productivity and Streamline Operations
    Redwood Shores, Calif. - Dec. 15, 2010. "Our business is based on the seamless execution and expertise of our highly trained consultants, and we're always seeking ways to improve processes so they can focus on providing excellent client service," said Changappa Kodendera, CIO, Navigant Consulting. "Our phased implementation of Oracle's PeopleSoft Enterprise 9.1 will provide us with a solid technology foundation that we can rely on to support our global consulting business, with a scalable platform that facilitates further improvement."
    Read the press release | Watch their video

    Read the article

  • DTLoggedExec 1.0.0.2 Released

    - by Davide Mauri
    These last days have been full of work, and the coming days, up until the end of July, will follow the same ultra-busy scheme. This makes the improvement of DTLoggedExec a little slower than I would like, but nonetheless on Friday I was able to release an updated version of the tool that fixes a bug and adds a very convenient option to make the creation of execution logs even more straightforward:
    - [bugfix] Fixed a bug that prevented loading packages from the SSIS Package Store
    - [new] Added support for the {filename} placeholder in both the Data Flow Profiling and CSV Log providers
    The added feature allows generating Data Flow profile logs and CSV logs that have the same name as the package that generated them, e.g.:

        DTLoggedExec.exec /FILE:"MyPackage.dtsx" /LPA:"FILE=C:\Log\{filename}_{date}_{time}.dtsCSVLog"

    Read the article

  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: Rebuild indexes in SQL Server. This can be done one at a time, or with the example script below to rebuild all indexes for a specified table or for all tables in a given database.

    Why? The data in indexes gets fragmented over time. That means that as the index grows, newly added rows are physically stored in other sections of the allocated database storage space. It is kind of like when you load your Christmas shopping into the trunk of your car: once the trunk is full, you continue to load some onto the back seat. In the same way, some storage buffer is created for your index, but once that runs out the data is stored in other storage space, and the data in your index is no longer stored in contiguous physical pages. To access the index, the database manager has to "string together" disparate fragments to create the full index as one contiguous set of pages. Defragmentation fixes that.

    What does the fragmentation affect?
    Depending, of course, on how large the table is and how fragmented the data is, fragmentation can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

    Which index to rebuild?
    As a rule, consider that when you rebuild a table's clustered index, all other non-clustered indexes on that same table will automatically be rebuilt. A table can only have one clustered index.

    How to rebuild all the indexes for one table:
    Run DBCC DBREINDEX against the table, passing a blank index name (' ') so that all of the table's indexes are rebuilt. Note that DBCC DBREINDEX will not automatically cover all tables in a database; for that you can loop over the tables with a cursor, as below.

    How to rebuild all indexes for all tables in a given database:

        USE [myDB]    -- enter your database name here
        DECLARE @tableName varchar(255)
        DECLARE TableCursor CURSOR FOR
        SELECT table_name FROM information_schema.tables
        WHERE table_type = 'base table'
        OPEN TableCursor
        FETCH NEXT FROM TableCursor INTO @tableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC DBREINDEX(@tableName, ' ', 90)    -- a fill factor of 90%
            FETCH NEXT FROM TableCursor INTO @tableName
        END
        CLOSE TableCursor
        DEALLOCATE TableCursor

    What does this script do?
    It reindexes all indexes in all tables of the given database. Each index is rebuilt with a fill factor of 90%. While DBCC DBREINDEX runs and rebuilds the indexes, the table becomes temporarily unavailable to your users until the rebuild has completed, so don't do this during production hours: it creates a shared lock on the tables, although it does allow read-only uncommitted data reads, i.e. SELECT.

    What is the fill factor?
    It is the percentage of space on each index page used for storing data when the index is created or rebuilt. It replaces the fill factor specified when the index was created, becoming the new default for the index and for any other non-clustered indexes rebuilt because a clustered index is rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index. This value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified. If fillfactor is not specified, the default fill factor, 100, is used.

    How do I determine the level of fragmentation?
    Run the DBCC SHOWCONTIG command. However, this requires you to specify the IDs of both the table and the index being examined. To make it a lot easier, requiring only the table name and/or index name, you can run this script:

        DECLARE @ID int, @IndexID int, @IndexName varchar(128)
        -- Specify the table and index names
        SELECT @IndexName = 'index_name'    -- name of the index
        SET @ID = OBJECT_ID('table_name')   -- name of the table
        SELECT @IndexID = IndID
        FROM sysindexes
        WHERE id = @ID AND name = @IndexName
        -- Show the level of fragmentation
        DBCC SHOWCONTIG (@ID, @IndexID)

    Here is an example:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 915
        - Extents Scanned..............................: 119
        - Extent Switches..............................: 281
        - Avg. Pages per Extent........................: 7.7
        - Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
        - Logical Scan Fragmentation ..................: 16.28%
        - Extent Scan Fragmentation ...................: 99.16%
        - Avg. Bytes Free per Page.....................: 2457.0
        - Avg. Page Density (full).....................: 69.64%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's important here?
    The Scan Density. Ideally it should be 100%; as time goes by it drops as fragmentation occurs. When the level drops below 75%, you should consider re-indexing. Here are the results for the same table and clustered index after running the rebuild script:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 692
        - Extents Scanned..............................: 87
        - Extent Switches..............................: 86
        - Avg. Pages per Extent........................: 8.0
        - Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
        - Logical Scan Fragmentation ..................: 0.00%
        - Extent Scan Fragmentation ...................: 22.99%
        - Avg. Bytes Free per Page.....................: 639.8
        - Avg. Page Density (full).....................: 92.10%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's different?
    The Scan Density has increased from 40.78% to 100%: no fragmentation on the clustered index. Note that since we rebuilt the clustered index, all other indexes were also rebuilt.

    Read the article

  • What are best practices for testing programs with stochastic behavior?

    - by John Doucette
    Doing R&D work, I often find myself writing programs that have some large degree of randomness in their behavior. For example, when I work in Genetic Programming, I often write programs that generate and execute arbitrary random source code. A problem with testing such code is that bugs are often intermittent and can be very hard to reproduce. This goes beyond just setting a random seed to the same value and starting execution over. For instance, code might read a message from the kernal ring buffer, and then make conditional jumps on the message contents. Naturally, the ring buffer's state will have changed when one later attempts to reproduce the issue. Even though this behavior is a feature it can trigger other code in unexpected ways, and thus often reveals bugs that unit tests (or human testers) don't find. Are there established best practices for testing systems of this sort? If so, some references would be very helpful. If not, any other suggestions are welcome!
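
    As an illustration of the seed-replay baseline the poster mentions (a sketch only, not from the post; file and argument names are hypothetical), the bare minimum is to record the seed of every run so that at least the pseudo-random part of a failure can be replayed:

        import random
        import sys

        # Usage:
        #   python mytest.py           -> picks and logs a fresh seed
        #   python mytest.py 1234567   -> replays the run with seed 1234567
        seed = int(sys.argv[1]) if len(sys.argv) > 1 else random.randrange(2**32)
        print("seed =", seed)          # log it with any failure report
        random.seed(seed)

        # ... generate and execute random programs here ...

    External state such as the kernel ring buffer is, as the poster notes, exactly what this does not capture; recording those inputs alongside the seed is the usual next step.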

    Read the article

  • Plan variable and call dependencies

    - by Gerenuk
    I'd like to write down the design of my program to understand the dependencies and calls better. I know there are class diagrams, which show inheritance and attribute variables. However, I'd also like to document the input parameters of method functions and, in particular, which calls each method executes internally (e.g. on its input parameters). Also, it might sometimes be useful to show how the actual objects are connected (if there is a standard structure). This way I can have a better understanding of the modules and the design before starting to program. Can you suggest a method to do this software design? It should map one-to-one to the programming code structure, so that I really notice all the quirks beforehand (instead of a high-level design whose parts are hard to implement without further work). Maybe some special diagram or tool, or a combination? It is static dependency and call design rather than time-dependent execution monitoring. (I use Python, if you have any specialized recommendations.)
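
    Since the poster mentions Python, one low-effort way to keep such a design document honest is to extract call dependencies statically with the standard ast module; a rough sketch (the file name is hypothetical, and dynamic dispatch is invisible to this kind of analysis):

        import ast

        # For each function in a module, list the names it calls.
        with open("module.py") as f:
            tree = ast.parse(f.read())

        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                calls = set()
                for inner in ast.walk(node):
                    if isinstance(inner, ast.Call):
                        fn = inner.func
                        if isinstance(fn, ast.Attribute):
                            calls.add(fn.attr)      # method-style call: obj.foo(...)
                        elif isinstance(fn, ast.Name):
                            calls.add(fn.id)        # plain call: foo(...)
                print(node.name, "->", sorted(calls))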

    Read the article

  • Should components have sub-components in a component-based system like Artemis?

    - by Daniel Ingraham
    I am designing a game using Artemis, although this is more of a philosophical question about component-based design in general. Let's say I have non-primitive data which applies to a given component (a component "animal" may have qualities such as "teeth" or "diet"). There are three ways to approach this in data-driven design, as I see it:
    1) Generate classes for these qualities using "traditional" OOP. I imagine this has negative implications for performance, as systems then must be made aware of these qualities in order to process them. It also seems counter to the overall philosophy of data-driven design.
    2) Include these qualities as sub-components. This seems off, in that we are now confusing the role of components with that of entities. Moreover, out of the box, Artemis isn't capable of mapping these sub-components onto their parent components.
    3) Add "teeth", "diet", etc. as components to the overall entity, alongside "animal". While this feels odd hierarchically, it may simply be a peculiarity of component-based systems.
    I suspect 3 is the correct way to think about things, but I was curious about other ideas.
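
    For concreteness, here is option 3 in plain data terms, sketched in Python (the component names come from the question; the storage scheme is hypothetical and far simpler than Artemis's):

        # Option 3: "teeth" and "diet" sit beside "animal" as flat components.
        # entities maps an entity id to its dict of components.
        entities = {
            1: {
                "animal": {"species": "wolf"},
                "teeth": {"count": 42, "sharpness": 0.9},
                "diet": {"kind": "carnivore"},
            },
        }

        # A system declares only which components it needs; no hierarchy exists.
        def dental_system(entities):
            for eid, comps in entities.items():
                if "animal" in comps and "teeth" in comps:
                    comps["teeth"]["sharpness"] *= 0.99   # wear over time

        dental_system(entities)
        print(entities[1]["teeth"]["sharpness"])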

    Read the article

  • How to add a new developer to the team

    - by lortabac
    I run a small company composed of only 2 developers. For one of our clients we are building a very big application, whose development has gone on for 1.5 years. Now this client has found an important sponsorship, and they are organizing some events related to this project, so we have a deadline in 2 months and we can't miss it. We are thinking of adding a new developer to the team, and I am wondering what we can do to help his integration. This is the situation:
    - We are approaching the threshold of Brooks's law, the point at which adding new developers becomes counter-productive.
    - The application is relatively well designed, but the implementation is chaotic in some places (especially the older code).
    - There are unit tests only for more recent code. When this project started, we didn't have the habit of writing tests.
    - Documentation and comments are incomplete.
    - The application is both large and complex.
    - The client has written down almost every detail of his project, in a very clear and "programmer-friendly" way.
    Is it a good idea to add a person now? If so, what can we do to help the new developer integrate into the team?

    Read the article

  • Control convention for circular movement?

    - by Christian
    I'm currently doing a kind of training project in Unity (still a beginner). It's supposed to be somewhat like Breakout, but instead of just going left and right I want the paddle to circle around the center point. This is all fine and dandy, but the problem I have is: how do you control this with a keyboard or gamepad? For touch and mouse control I could work around the problem by letting the paddle follow the cursor/finger, but with the other control methods I'm a bit stumped. With a keyboard, for example, I could either make it so that the Left arrow always moves the paddle clockwise (it starts at the bottom of the circle), or I could link it to the actual direction - meaning that if the paddle is at the bottom, it goes left and up along the circle, or, if it's in the upper hemisphere, it moves left and down, both times toward the leftmost point of the circle. Both feel kind of weird. With the first one, it can be counter-intuitive to press Left to move the paddle right when it's in the upper area, while with the second method you'd need to constantly switch buttons to keep moving. So, long story short: is there any kind of existing standard, convention or accepted example for this type of movement and the corresponding controls? I didn't really know what to google for ("control conventions for circular movement" was one of the searches I tried, but it didn't give me much), and I also didn't really find anything about this on here. If there is a question that I simply didn't see, please excuse the duplicate.
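
    For reference, the two mappings described differ only in the sign fed to the angular velocity; here is a minimal Python sketch of the first variant (all names are hypothetical; in Unity this logic would live in an Update method):

        import math

        # Option 1: Left/Right always map to a fixed angular direction,
        # regardless of where the paddle currently sits on the circle.
        def update_angle(angle, direction, dt, angular_speed=2.0):
            # direction: -1 while Left is held, +1 while Right is held, 0 otherwise
            return angle + direction * angular_speed * dt

        def paddle_position(angle, cx, cy, radius):
            # Screen coordinates with y pointing down; angle in radians
            return (cx + radius * math.cos(angle), cy + radius * math.sin(angle))

        angle = math.pi / 2                        # start at the bottom of the circle
        angle = update_angle(angle, -1, 1 / 60)    # one frame of holding Left
        print(paddle_position(angle, cx=400, cy=300, radius=200))

    The second mapping would instead pick the sign each frame from the paddle's current hemisphere, which is exactly where the button-switching awkwardness comes from.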

    Read the article

  • Techniques for Working Without a Debugger [closed]

    - by ashes999
    Possible Duplicate: How to effectively do manual debugging?
    Programming with a debugger is ideal. When I say a debugger, I mean something that will allow you to:
    - Pause execution in the middle of some code (like a VM)
    - Inspect variable values
    - Optionally set variable values and call methods
    Unfortunately, we're not always blessed to work in environments that have debuggers. This can be for reasons such as:
    - The debugger is far too slow (Flash circa Flash 8)
    - Interpreted language (Ruby, PHP)
    - Scripting language (e.g. inside RPG Maker XP)
    My question is: what is an effective way to debug without a debugger? The old method of interleaving code with print statements is time-consuming and not sufficient.
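
    As one concrete alternative to scattered print statements (an illustration, not from the post): in an interpreted language you can often hook the interpreter itself and get a crude line-by-line trace with variable values; a minimal Python sketch:

        import sys

        def tracer(frame, event, arg):
            # Print each executed line number plus the locals at that point
            if event == "line":
                print(f"{frame.f_code.co_name}:{frame.f_lineno} locals={frame.f_locals}")
            return tracer

        def buggy(n):
            total = 0
            for i in range(n):
                total += i * i
            return total

        sys.settrace(tracer)
        buggy(3)
        sys.settrace(None)   # stop tracing

    Ruby (set_trace_func) and PHP (Xdebug's tracing) offer comparable hooks, so the same trick carries over to the other environments listed above.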

    Read the article

  • Am I an idealist?

    - by ereOn
    This is not only a question, it is also a call for help. Since I started my career as a programmer, I have always tried to learn from my mistakes. I worked hard to learn best practices, and while I don't consider myself a C++ expert, I still believe I'm not a beginner either. I was recently hired into a company for C++ development. There I was told that my way of working was "against the rules" and that I would have to change my mind. Here are the points where I disagree with my hierarchy (their words):
    - "You should not use separate header files for your different classes. One big header file is both easier to read and faster to compile."
    - "Trying to use different headers is counter-productive: use the same super-set of headers everywhere, and enforce the use of #pragma hdrstop to hasten compilation."
    - "You may not use Boost or any other library that uses nested directories to organize its files. Our build machine doesn't work with nested directories. Moreover, you don't need Boost to create great software."
    One might think I'm somehow exaggerating things, but the sad truth is that I'm not. Those are their actual words. I believe that having separate files enhances maintainability and code correctness, and can speed up compilation through the use of proper includes. Have you been in a similar situation? What should I do? I feel like it's actually impossible for me to work that way, and day after day my frustration grows.

    Read the article

  • Oracle R Enterprise 1.1 Download Available

    - by Sherry LaMonica
    Oracle just released the latest update to Oracle R Enterprise, version 1.1. This release includes the Oracle R Distribution (based on open source R, version 2.13.2), an improved server installation, and much more. The key new features include:
    - Extended Server Support: New support for Windows 32- and 64-bit server components, as well as continuing support for Linux 64-bit server components
    - Improved Installation: Linux 64-bit server installation now provides robust status updates and prerequisite checks
    - Performance Improvements: Improved performance for embedded R script execution
    In addition, the updated ROracle package, which is used with Oracle R Enterprise, now reads date data by conversion to character strings. We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for R-related software: Oracle R Distribution, Oracle R Enterprise, ROracle, Oracle R Connector for Hadoop. As always, we welcome comments and questions on the Oracle R Forum.

    Read the article

  • C# output of running command prompt

    - by Kaushal Singh
    In the following code, whenever I send any command like dir, the function gets stuck in the while loop... I want the output of the command prompt each time I send it a command to execute, without closing the process after the execution of the command.

        public void shell(String cmd)
        {
            ProcessStartInfo PSI = new ProcessStartInfo();
            PSI.FileName = "c:\\windows\\system32\\cmd.exe";
            PSI.RedirectStandardInput = true;
            PSI.RedirectStandardOutput = true;
            PSI.RedirectStandardError = true;
            PSI.UseShellExecute = false;

            Process p = Process.Start(PSI);
            StreamWriter CSW = p.StandardInput;
            StreamReader CSR = p.StandardOutput;

            CSW.WriteLine(cmd);
            CSW.Flush();

            // Blocks here: EndOfStream only becomes true when cmd.exe exits,
            // so the loop never finishes while the process stays open
            while (!(CSR.EndOfStream))
            {
                Console.WriteLine(CSR.ReadLine());
            }
            CSR.Close();
        }

    Read the article

  • Running a startup program in terminal as sudo

    - by Brandon
    I need to run a Python script, myscript.py, in a terminal at startup (on Lubuntu). This script requires root. I've set up a .desktop file that runs the following command:

        lxterminal --command="python /home/d/Jarvis/alarm.py && /bin/bash"

    The terminal window opens at startup and runs the script, but then closes when the Python script returns an error (because it's not being run as root). When I change the Exec= line to this...

        lxterminal --command="sudo python /home/d/Jarvis/alarm.py && /bin/bash"

    ...(prefixing the command with 'sudo') it works. However, the terminal opens on startup and displays the [sudo] password for d: prompt, requiring me to input my password. I would like the execution of the Python script at startup to be completely automatic, with no user interaction. How can I accomplish this?

    Read the article

  • Why appending to a list in Scala should have O(n) time complexity?

    - by Jubbat
    I am learning Scala at the moment, and I just read that the execution time of the append operation for a list (:+) grows linearly with the size of the list. Appending to a list seems like a pretty common operation; why should the idiomatic way to do this be prepending the elements and then reversing the list? It can't be a design failure either, as the implementation could be changed at any point. From my point of view, both prepending and appending should be O(1). Is there any legitimate reason for this?
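
    To see why, it helps to model the data structure. Here is a sketch in Python of an immutable singly linked (cons) list, which is the shape behind Scala's List (illustrative only):

        # An immutable singly linked list modeled as nested pairs (head, tail);
        # None plays the role of Nil.

        def prepend(x, lst):        # O(1): wrap a new head around the old list
            return (x, lst)

        def append(lst, x):         # O(n): every cell must be copied to keep immutability
            if lst is None:
                return (x, None)
            head, tail = lst
            return (head, append(tail, x))

        def reverse(lst):           # O(n), once
            out = None
            while lst is not None:
                head, lst = lst
                out = (head, out)
            return out

        # The idiom: n cheap prepends plus one reverse is O(n) overall,
        # whereas n appends would be O(n^2).
        built = None
        for x in [1, 2, 3, 4]:
            built = prepend(x, built)
        print(reverse(built))       # (1, (2, (3, (4, None))))

    Sharing is the key point: prepend can reuse the entire old list as its tail, while append cannot reuse anything without mutating cells that other references may still see.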

    Read the article

  • Developing Functional Specifications based on the UML Model

    A few days ago I found this white paper I did around 2004, way before I started really blogging:
    The Process Overview
    Use-case to Specifications is a process using UML use-cases to identify user requirements and model systems in order to properly define functionality. This document is intended to serve as an execution-based walk-through of this process. As background: The Unified Modeling Language (UML) is a language for specifying, visualizing, constructing, and documenting the artifacts of software...

    Read the article

  • Avoiding lag when rendering Texture2D for first time

    - by Emir Lima
    I have found a similar question here, but it is about playing sounds. I am using 2048 x 2048 textures for sprite sheets, and every time I call spriteBatch.Draw using a sheet for the first time in the game's execution, it causes considerable lag. The lag doesn't appear the next times. Has anyone faced this problem before? What can I do to overcome it? Update: I inserted code at the end of the content-load routine that draws EVERY Texture2D loaded into the ContentManager before moving on to the game screen. This works well: no lag occurs when different textures are rendered over time, EXCEPT if IsFullScreen is changed. Apparently, changing this property makes the textures loaded on the GPU go away. Is that correct?

    Read the article

  • How should I architect a personal schedule manager that runs 24/7?

    - by Crawford Comeaux
    I've developed an ADHD management system for myself that attempts to change multiple habits at once. I know this is counter to conventional wisdom, but I've tried the conventional approach for years and am now trying it my way. (I just wanted to say that to try to prevent it from distracting people from the actual question.) Anyway, I'd like to write something that runs on a remote server, monitors me, helps me build/avoid certain habits, etc. What this amounts to is a system that:
    - runs 24/7
    - may have multiple independent tasks to run at once
    - may have tasks that require other tasks to run first
    - lets tasks be scheduled by specific time, recurrence (i.e. "run every 5 mins"), or interval (i.e. "run from 2pm to 3pm")
    My first naive attempt at this was just a single PHP script scheduled to run every minute by cron (the language was chosen in order to use a certain library, but that is no longer necessary). The logic behind when to run this or that portion of code got hairy pretty quickly. So my question is how I should approach this from here. I'm not tied to any one language, though I'm partial to Python/JavaScript. Thoughts so far:
    - It could be done as a set of scripts that include a scheduling mechanism, with one script per bit of logic... but the idea just feels wrong to me.
    - Building it as a daemon could be helpful, but I'm still unsure what to do about the dozens of if-else statements for detecting the current time.
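
    As one possible starting point (a sketch only, in Python since the poster is partial to it): the standard library's sched module can express recurring tasks without a pile of if-else time checks, by having each task re-schedule itself:

        import sched
        import time

        s = sched.scheduler(time.time, time.sleep)

        def every(interval_seconds, action):
            """Run action every interval_seconds, starting one interval from now."""
            def run():
                action()
                s.enter(interval_seconds, 1, run)   # the task re-schedules itself
            s.enter(interval_seconds, 1, run)

        # Hypothetical habit-monitoring tasks
        every(300, lambda: print("check habit A"))   # "run every 5 mins"
        every(60, lambda: print("nag about habit B"))

        s.run()   # blocks, dispatching tasks indefinitely

    The part this sketch does not cover is dependencies between tasks ("require other tasks to run first"); that usually pushes a design toward a proper job queue or a topological sort over a task graph.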

    Read the article

  • Is there a name for this functional programming construct/pattern?

    - by dietbuddha
    I wrote a function and I'd like to find out whether it is an implementation of some functional programming pattern or construct, and if so, what that pattern or construct is called (if it exists). I have a function which takes a list of functions and does this to them:

        wrap(fn1, fn2, fn3, fn4)
        # returns partial(fn4, partial(fn3, partial(fn2, fn1)))

    There are strong similarities to compose, reduce, and other FP metaprogramming constructs, since the functions are being arranged together and returned as one function. It also has strong similarities to decorators and Python context managers, since it provides a way to encapsulate pre- and post-execution behaviors in one function. That was the impetus for writing this function: I wanted the ability that context managers provide, but I wanted to be able to have it defined in one function, and to be able to layer function after function on top.
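
    For reference, the described behavior can be reconstructed in a couple of lines (a sketch matching the example above, not the poster's actual code):

        from functools import partial, reduce

        def wrap(*fns):
            # wrap(f1, f2, f3, f4) -> partial(f4, partial(f3, partial(f2, f1)))
            return reduce(lambda inner, outer: partial(outer, inner), fns)

        def core():
            print("core work")

        def layer(inner):
            # Each layer receives the wrapped callable and may run code around it,
            # much like a decorator or a context manager's enter/exit.
            print("before")
            inner()
            print("after")

        wrap(core, layer, layer)()   # before / before / core work / after / after

    Seen this way, it is essentially a left fold of reversed-order function composition, which is why it smells like compose and reduce at the same time.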

    Read the article

  • sell applications that run only on a GPL v2 Server

    - by gadri mabrouk
    I have been testing the Pentaho BI Server community edition, and after reading its licensing terms I found out that the community edition is under the GPLv2. As you may know, the server is intended to host different types of files and applications (like reports, OLAP cubes, etc.). My question is: are we allowed to sell applications that run on the GPLv2 server? We may modify the server source code a bit, but we won't charge our clients for that; thus the modified GPL server would be just an execution environment, or a container, for the reports and applications that we intend to sell. This means our clients would install the GPLv2 edition first and then buy from us the reports that work on it, but not their source code. Thanks in advance.

    Read the article

  • Browser Alert -- cannot download links using Internet Explorer

    - by user554629
    Internet Explorer (IE8, IE9) is mangling downloads from this blog. Links to files on this blog (e.g., dirstats) are typically downloaded using the browser: R-click, Save As. This works fine in Chrome, Firefox and Safari. Internet Explorer does not handle the html reference to the file, and adds .html to the filename. The file will be saved in an incorrect format. This is relatively harmless for a script file that is plain text, but binary files like obiaix.tar.gz will be corrupted, and there is nothing you can do about it. "Don't get corrupted, get rid of cable Internet Explorer, use Firefox" (sorry, US TV advert reference). The useful part of the compressed tar file is that you don't have to worry about Windows line-end characters corrupting the scripts, and you don't have to change execution permissions to get the scripts to work, i.e. you are spared the usual:

        dos2unix dirstats
        chmod +x dirstats

    Read the article
