Search Results

Search found 30555 results on 1223 pages for 'closed source'.

  • Modelio moves to version 2.1.1, the modeling tool increases its openness to open source

    Modelio moves to version 2.1.1; the modeling tool increases its openness to open source. Modelio, the modeling tool for software development, business process management, and systems engineering, moves to version 2.1.1 and strengthens its open source orientation. For example, Modelio is now available natively in 64-bit or 32-bit builds on the RedHat, Ubuntu, and Debian Linux platforms, and handles Libre Office documentation (in addition to HTML and Microsoft Word). The result of more than 20 years of proprietary development, the open source Modelio 2 environment is available under the GPL v2 license and features a modular architecture, ...

    Read the article

  • Data Source Security Part 4

    - by Steve Felts
    So far, I have covered the Client Identity and Oracle Proxy Session features, with WLS or database credentials. This article will cover one more feature, identity-based pooling. Then there is one more topic to cover - how these options play with transactions.

    Identity-based Connection Pooling

    An identity-based pool creates a heterogeneous pool of connections. This allows applications to use a JDBC connection with a specific DBMS credential by pooling physical connections with different DBMS credentials. The DBMS credential is based on either the WebLogic user mapped to a database user or the database user directly, based on the “use database credentials” setting as described earlier. Using this feature with “use database credentials” enabled seems to be what is proposed in the JDBC standard: basically a heterogeneous pool with users specified by getConnection(user, password).

    The allocation of connections is more complex if the Enable Identity Based Connection Pooling attribute is enabled on the data source. When an application requests a database connection, the WebLogic Server instance selects an existing physical connection or creates a new physical connection with the requested DBMS identity. Heterogeneous connections are created as follows:

    1. At connection pool initialization, the physical JDBC connections, based on the configured or default “initial capacity”, are created with the configured default DBMS credential of the data source.
    2. An application tries to get a connection from a data source.
    3a. If “use database credentials” is not enabled, the user specified in getConnection is mapped to a DBMS credential, as described earlier. If the credential map doesn't have a matching user, the default DBMS credential is used from the data source descriptor.
    3b. If “use database credentials” is enabled, the user and password specified in getConnection are used directly.
    4. The connection pool is searched for a connection with a matching DBMS credential.
    5. If a match is found, the connection is reserved and returned to the application.
    6. If no match is found, a connection is created or reused based on the maximum capacity of the pool:
       - If the maximum capacity has not been reached, a new connection is created with the DBMS credential, reserved, and returned to the application.
       - If the pool has reached maximum capacity, a physical connection is selected from the pool based on the least recently used (LRU) algorithm and destroyed. A new connection is then created with the DBMS credential, reserved, and returned to the application.

    It should be clear that finding a matching connection is more expensive than in a homogeneous pool, and destroying a connection to create a new one is very expensive. If you can use a normal homogeneous pool or one of the lightweight options (client identity or an Oracle proxy connection), those should be used instead of identity-based pooling.

    Regardless of how physical connections are created, each physical connection in the pool has its own DBMS credential information maintained by the pool. Once a physical connection is reserved by the pool, it does not change its DBMS credential, even if the current thread changes its WebLogic user credential and continues to use the same connection.

    To configure this feature, select Enable Identity Based Connection Pooling.
    See "Enable identity-based connection pooling for a JDBC data source" in the Oracle WebLogic Server Administration Console Help: http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableIdentityBasedConnectionPooling.html

    You must make the following changes to use Logging Last Resource (LLR) transaction optimization with identity-based pooling, to get around the problem that multiple users will be accessing the associated transaction table:
    - You must configure a custom schema for LLR using a fully qualified LLR table name. All LLR connections will then use the named schema rather than the default schema when accessing the LLR transaction table.
    - Use database-specific administration tools to grant permission to access the named LLR table to all users that could access this table via a global transaction. By default, the LLR table is created during boot by the user configured for the connection in the data source. In most cases, the database will only allow access to this user and not allow access to mapped users.

    Connections within Transactions

    Now that we have covered the behavior of all of these various options, it's time to discuss the exception to all of the rules. When you get a connection within a transaction, it is associated with the transaction context on a particular WLS instance.

    When getting a connection with a data source configured with non-XA LLR or 1PC (using the JTS driver) with global transactions, the first connection obtained within the transaction is returned on subsequent connection requests, regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection when using LLR or 1PC.

    For XA data sources, the first connection obtained within the global transaction is returned on subsequent connection requests within the application server, regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection within a global transaction within the application server/JVM.
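
    Going back to the allocation steps in the identity-based pooling section above, here is a minimal sketch of a heterogeneous, credential-keyed pool with LRU eviction. This is illustrative C# of the algorithm as described, not WebLogic's actual implementation; all type and member names are invented for the example, and eviction here only considers idle connections.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical stand-in for a physical DBMS connection.
    class PhysicalConnection
    {
        public string Credential;   // DBMS user this connection was opened as
        public DateTime LastUsed;   // for LRU eviction
        public bool Reserved;
    }

    class IdentityBasedPool
    {
        private readonly List<PhysicalConnection> pool = new List<PhysicalConnection>();
        private readonly int maxCapacity;

        public IdentityBasedPool(int initialCapacity, int maxCapacity, string defaultCredential)
        {
            this.maxCapacity = maxCapacity;
            // Step 1: initial capacity is created with the default DBMS credential.
            for (int i = 0; i < initialCapacity; i++)
                pool.Add(new PhysicalConnection { Credential = defaultCredential, LastUsed = DateTime.UtcNow });
        }

        // Steps 2-6: reserve a connection for the requested DBMS credential.
        public PhysicalConnection GetConnection(string credential)
        {
            // Steps 4-5: look for an unreserved connection with a matching credential.
            var match = pool.FirstOrDefault(c => !c.Reserved && c.Credential == credential);
            if (match == null)
            {
                if (pool.Count >= maxCapacity)
                {
                    // Step 6: at max capacity, destroy the least recently used idle connection.
                    var lru = pool.Where(c => !c.Reserved).OrderBy(c => c.LastUsed).First();
                    pool.Remove(lru); // expensive: a physical connection is torn down
                }
                // Step 6: create a new physical connection with the requested credential.
                match = new PhysicalConnection { Credential = credential };
                pool.Add(match);
            }
            match.Reserved = true;
            match.LastUsed = DateTime.UtcNow;
            return match; // keeps this DBMS credential for its whole lifetime
        }
    }

    The point of the sketch is the cost model: a credential-keyed lookup on every request, plus a destroy-and-recreate cycle whenever the pool is full and no credential matches, which is exactly why a homogeneous pool or the lighter options are preferred when they fit.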

    Read the article

  • WCF - "The underlying connection was closed: The connection was closed unexpectedly"

    - by SumGuy
    Hi there. I'm receiving that wonderfully ambiguous error message when using one of my web methods on my WCF web service. As that error message doesn't provide any explanation whatsoever, allow me to post my theory. I believe it may have something to do with the return type I'm using. I have a Types DLL which is referenced in both the web service and the client. In this DLL is the base class ExceptionMessages. There is a child of this class called DrawingExceptions. Here is some code:

    public class ExceptionMessages
    {
        public object[] ReturnValue { get; set; }
    }

    public class DrawingExceptions : ExceptionMessages
    {
        private List<DrawingException> des = new List<DrawingException>();
    }

    public class DrawingException
    {
        public Exception ExceptionMsg { get; set; }
        public List<object> Errors { get; set; }
    }

    The using code:

    [OperationContract]
    ExceptionMessages createNewBom(Bom bom, DrawingFiles dfs);

    public ExceptionMessages createNewBOM(Bom bom, DrawingFiles dfs)
    {
        return insertAssembly(bom, dfs);
    }

    public DrawingExceptions insertAssembly(Bom bom, DrawingFiles dfs)
    {
        DrawingExceptions des = new DrawingExceptions();
        foreach (DrawingFile d in dfs.drawingFiles)
        {
            DrawingException temp = insertNewDrawing(bom, d);
            if (temp != null)
                des.addDrawingException(temp);
            if (d.Child != null)
                des.addDrawingException(insertAssembly(bom, d.Child));
        }
        return des;
    }

    Returns to:

    ExceptionMessages ems = client.createNewBom(bom, currentDFS);
    if (ems is DrawingExceptions)
    {
    }

    Basically, the return type from the web method is ExceptionMessages; however, I would usually be sending the child class back instead. My only idea is that it's the child that's causing the error, but as far as I've read, this should have no effect. Has anyone got any ideas what could be going wrong here? If any more info is required, just ask :) Thanks.
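
    A likely culprit, consistent with the theory above: WCF's DataContractSerializer only emits types it has been told about, so returning a DrawingExceptions where the contract declares ExceptionMessages can tear down the channel with exactly this error. A hedged sketch of the usual fix, assuming the types are data contracts (KnownType/ServiceKnownType are real WCF attributes; Bom and DrawingFiles are the question's own types):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Tell the serializer about derived types that may travel as the base type.
    [DataContract]
    [KnownType(typeof(DrawingExceptions))]
    public class ExceptionMessages
    {
        [DataMember]
        public object[] ReturnValue { get; set; }
    }

    // Members must be [DataMember] to cross the wire; a private field without
    // the attribute (like the List in the question) will not round-trip.
    [DataContract]
    public class DrawingExceptions : ExceptionMessages { }

    // Alternatively, declare the derived type per operation:
    [ServiceContract]
    public interface IBomService
    {
        [OperationContract]
        [ServiceKnownType(typeof(DrawingExceptions))]
        ExceptionMessages createNewBom(Bom bom, DrawingFiles dfs);
    }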

    Read the article

  • PHP: How to find connections between users so I can create a closed friend circle?

    - by CuSS
    Hi all. First of all, I'm not trying to create a social network; Facebook is big enough! (comic) I've chosen this question as an example because it fits exactly what I'm trying to do. Imagine that I have in MySQL a users table and a user_connections table with 'friend requests'. It would be something like this:

    Users table:

    userid  username
    1       John
    2       Amalia
    3       Stewie
    4       Stuart
    5       Ron
    6       Harry
    7       Joseph
    8       Tiago
    9       Anselmo
    10      Maria

    User connections table:

    userid_request  userid_accepted
    2               3
    7               2
    3               4
    7               8
    5               6
    4               5
    8               9
    4               7
    9               10
    6               1
    10              7
    1               2

    Now I want to find circles between friends, create a structure array, and put that circle in the database (none of the arrays can include the same friends that another one already has). Return example:

    // First circle of friends
    Circleid => 1
    CircleStructure => Array(
        1 => 2,
        2 => 3,
        3 => 4,
        4 => 5,
        5 => 6,
        6 => 1,
    )

    // Second circle of friends
    Circleid => 2
    CircleStructure => Array(
        7 => 8,
        8 => 9,
        9 => 10,
        10 => 7,
    )

    I'm trying to think of an algorithm to do that, but I think it will take a lot of processing time because it would randomly search the database until it 'closes' a circle.

    PS: The minimum structure length of a circle is 3 connections and the limit is 100 (so the daemon doesn't search the entire database).

    EDIT: I've thought of something like this:

    function browse_user($userget = 'random', $users_history = array()) {
        $user = user::get($userget);
        $users_history[] = $user['userid'];
        $connections = user::connection::getByUser($user['userid']);
        foreach ($connections as $connection) {
            $userid = ($connection['userid_request'] != $user['userid'])
                ? $connection['userid_request']
                : $connection['userid_accepted'];
            // Start the circle array
            if (in_array($userid, $users_history))
                return array($user['userid'] => $userid);
            $res = browse_user($userid, $users_history);
            if ($res !== false) {
                // Continue the circle array
                return $res + array($user['userid'] => $userid);
            }
        }
        return false;
    }

    while (true) {
        $res = browse_user();
        // Yuppy, friend circle found!
        if ($res !== false) {
            user::circle::create($res);
        }
        // Start from scratch again!
    }

    The problem with this function is that it could search the entire database without finding the biggest circle, or the best match.
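
    A deterministic alternative to the random probing above: treat the connections as an undirected graph and extract vertex-disjoint cycles with a depth-first search. A minimal sketch (in C# purely for illustration; the edge list is the table above, and claiming each user for at most one circle enforces the disjointness rule):

    using System;
    using System.Collections.Generic;

    class CircleFinder
    {
        static void Main()
        {
            var edges = new (int, int)[] { (2,3),(7,2),(3,4),(7,8),(5,6),(4,5),
                                           (8,9),(4,7),(9,10),(6,1),(10,7),(1,2) };
            // Build an adjacency list for the undirected friend graph.
            var adj = new Dictionary<int, List<int>>();
            foreach (var (a, b) in edges)
            {
                if (!adj.ContainsKey(a)) adj[a] = new List<int>();
                if (!adj.ContainsKey(b)) adj[b] = new List<int>();
                adj[a].Add(b);
                adj[b].Add(a);
            }

            var claimed = new HashSet<int>();   // users already placed in a circle
            foreach (int start in adj.Keys)
            {
                if (claimed.Contains(start)) continue;
                var path = new List<int>();
                if (Dfs(start, -1, adj, claimed, new HashSet<int>(), path))
                    Console.WriteLine("Circle: " + string.Join(" -> ", path));
            }
        }

        // Walks from v; returns true when it closes a cycle, leaving the
        // cycle's vertices in path and marking them claimed.
        static bool Dfs(int v, int parent, Dictionary<int, List<int>> adj,
                        HashSet<int> claimed, HashSet<int> visiting, List<int> path)
        {
            visiting.Add(v);
            path.Add(v);
            foreach (int n in adj[v])
            {
                if (n == parent || claimed.Contains(n)) continue;
                if (visiting.Contains(n))
                {
                    // Back-edge: the circle runs from n along the current path to v.
                    int i = path.IndexOf(n);
                    var cycle = path.GetRange(i, path.Count - i); // length >= 3 in a simple graph
                    foreach (int u in cycle) claimed.Add(u);
                    path.Clear();
                    path.AddRange(cycle);
                    return true;
                }
                if (Dfs(n, v, adj, claimed, visiting, path)) return true;
            }
            visiting.Remove(v);           // backtrack: v is no longer on the path
            path.RemoveAt(path.Count - 1);
            return false;
        }
    }

    On the sample data this prints the two circles from the return example. Because claimed vertices are skipped, each user appears in at most one circle, mirroring the rule that no two arrays may share a friend.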

    Read the article

  • Open Source Code Integrity - How does quality assurance work?

    - by rockinthesixstring
    I've thought about this before, and this topic has often steered me away from open source projects. Recently, DotNetPanel changed its name to WebSitePanel and went open source. The rumor mill is speculating that Microsoft is behind this. My question (in multiple parts) is quite simple. Can somebody please explain to me how quality assurance works on open source projects? How can a closed application get "only better" once open sourced? Doesn't the "too many cooks in the kitchen" theory apply when too many developers contribute (possibly bad) code to a project?

    Read the article

  • How can I get the source code for ASTassistant?

    - by cyclotis04
    I'm trying to develop an application similar to ASTassistant, and in the article the author says that he included "the source code with the binaries." After downloading the ZIP folder, however, I've found no source. The program is written in REAL Basic, which I don't know anything about. Do I need to purchase REAL Basic to view ASTassistant's source code, or is it somewhere I haven't looked? Thanks

    Read the article

  • Is it OK to put any existing open-source project on GitHub?

    - by Sébastien Le Callonnec
    This question is more about open-source etiquette, and the new approach that the likes of GitHub and Gitorious give to collaboration and source ownership. Can you just take any open-source project from somewhere else (e.g. SourceForge, with a clear project team and community) and put it into your own GitHub repository, provided that you respect the terms of the original license? And if yes, do you keep your version under the same name, or change it? I somehow have this nagging feeling that this is rude, and yet it is open source after all...

    Read the article

  • What are the books about Open-Source that everyone interested in should read?

    - by Edu Zamora
    Currently I am working on and running my first open-source project, and though I am quite happy with how things are working so far, I have the feeling that a lot of things could be done better. So, what books about open source would you recommend to help fill this gap and make things better every day? What are the books that influenced you the most? I am especially interested in:
    - How to organize and run an open-source project
    - Best practices
    - How to manage and involve users and developers
    - How to announce and do releases
    - Legal issues

    Read the article

  • What does an open source license (like the GNU GPL) mean?

    - by Hemant
    I am looking to use an open source product which has a GNU GPL-like license, and it says that if I use that product, I must share the source code of my application. I am slightly confused about it. I understand that Linux is available under the GNU GPL license as well. Does that mean ALL Linux applications are, and have to be, open source? Does it mean I can ask for the source code of the complete Oracle DB from Oracle Corp (at least the part that runs on Linux)?

    Read the article

  • SSAS: Utility to export SQL code from your cube's Data Source View (DSV)

    - by DrJohn
    When you are working on a cube, particularly in a multi-person team, it is sometimes necessary to review what changes have been made to the SQL queries in the cube's data source view (DSV). This can be a problem, as the SQL editor in the DSV is not the best interface for reviewing code. Now of course you can cut and paste the SQL into SSMS, but you have to do each query one-by-one. What is worse, your DBA is unlikely to have BIDS installed, so you will have to manually export all the SQL yourself and send him the files. To make it easy to get hold of the SQL in a Data Source View, I developed a C# utility which connects to an OLAP database and uses Analysis Services Management Objects (AMO) to obtain and export all the SQL to a series of files. The added benefit of this approach is that these SQL files can be placed under source code control, which means the DBA can easily compare one version with another.

    The Trick

    When I came to implement this utility, I quickly found that the AMO API does not give direct access to anything useful about the tables in the data source view. Iterating through the DSVs and tables is easy, but getting to the SQL proved to be much harder. My Google searches returned little of value, so I took a look at the idea of using the XmlDom to open the DSV's XML and obtaining the SQL from that. This is when the breakthrough happened. Inspecting the DSV's XML, I saw the things I was interested in were called:

    - TableType
    - DbTableName
    - FriendlyName
    - QueryDefinition

    Searching Google for FriendlyName returned this page: Programming AMO Fundamental Objects, which hinted at the fact that I could use something called ExtendedProperties to obtain these XML attributes. This simplified my code tremendously and made the implementation almost trivial. So here is my code with appropriate comments. The full solution can be downloaded from here: ExportCubeDsvSQL.zip

    using System;
    using System.Data;
    using System.IO;
    using Microsoft.AnalysisServices;

    // ... class code removed for clarity

    // connect to the OLAP server
    Server olapServer = new Server();
    olapServer.Connect(config.olapServerName);
    if (olapServer != null)
    {
        // connected to server OK, so obtain reference to the OLAP database
        Database olapDatabase = olapServer.Databases.FindByName(config.olapDatabaseName);
        if (olapDatabase != null)
        {
            Console.WriteLine(string.Format("Successfully connected to '{0}' on '{1}'",
                config.olapDatabaseName, config.olapServerName));

            // export SQL from each data source view (usually only one, but can be many!)
            foreach (DataSourceView dsv in olapDatabase.DataSourceViews)
            {
                Console.WriteLine(string.Format("Exporting SQL from DSV '{0}'", dsv.Name));

                // for each table in the DSV, export the SQL in a file
                foreach (DataTable dt in dsv.Schema.Tables)
                {
                    Console.WriteLine(string.Format("Exporting SQL from table '{0}'", dt.TableName));

                    // get name of the table in the DSV
                    // use the FriendlyName as the user inputs this and therefore has control of it
                    string queryName = dt.ExtendedProperties["FriendlyName"].ToString().Replace(" ", "_");
                    string sqlFilePath = Path.Combine(targetDir.FullName, queryName + ".sql");

                    // delete the sql file if it exists
                    // ... file deletion code removed for clarity

                    // write out the SQL to a file
                    if (dt.ExtendedProperties["TableType"].ToString() == "View")
                    {
                        File.WriteAllText(sqlFilePath, dt.ExtendedProperties["QueryDefinition"].ToString());
                    }
                    if (dt.ExtendedProperties["TableType"].ToString() == "Table")
                    {
                        File.WriteAllText(sqlFilePath, dt.ExtendedProperties["DbTableName"].ToString());
                    }
                }
            }
            Console.WriteLine(string.Format("Successfully written out SQL scripts to '{0}'", targetDir.FullName));
        }
    }

    Of course, if you are following industry best practice, you should be basing your cube on a series of views. This will mean that this utility is of limited practical value, unless of course you are inheriting a project and want to check whether someone did the implementation correctly.

    Read the article

  • routing based on source IP

    - by user1977050
    I am trying to do source-based routing, following this question: http://unix.stackexchange.com/questions/131527/routing-based-on-source-ip. The source IP is a floating one, assigned to a cluster (consisting of 2 servers). Let's say the physical IP on server1 is 192.0.2.1, on server2 it's 192.0.2.2, and the virtual IP is 192.0.2.3 (and this should be the source IP for outgoing traffic). How can I configure static source IP routing for this in RHEL?
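
    In case it helps future readers, the usual iproute2 approach is a separate routing table selected by source address. A sketch only; the table name/number, gateway 192.0.2.254, and interface eth0 are placeholders to adapt:

    # register a custom routing table (name and number are arbitrary)
    echo "100 vipout" >> /etc/iproute2/rt_tables

    # send traffic sourced from the virtual IP through that table
    ip rule add from 192.0.2.3/32 lookup vipout

    # default route for the table; gateway and interface are placeholders
    ip route add default via 192.0.2.254 dev eth0 table vipout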

    Read the article

  • Changing Word mail merge data source locations in bulk?

    - by Daft Viking
    I've just moved a number of Word mail merge files, and a number of Excel spreadsheets that are the data sources for the mail merges, from a Windows XP computer to a Windows 7 computer, and now all the paths for the merge sources are incorrect (used to be c:\documents and settings\user\my documents.... now c:\users\documents....). While I can correct the path of the data source in each file individually, I was hoping that there would be some way of updating the files in bulk, as there are a relatively large number of them. Word 2007 is what is being used, but the documents are all in the previous DOC format (not DOCX).
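
    One scriptable route, offered as an untested sketch: Word's COM automation can repoint each document's mail merge data source, so a small program can sweep a whole folder. It assumes the Microsoft.Office.Interop.Word assembly is referenced and that the spreadsheets keep their file names under the new profile path; the folder paths are placeholders.

    using System;
    using System.IO;
    using Word = Microsoft.Office.Interop.Word;

    class FixMergeSources
    {
        static void Main()
        {
            var word = new Word.Application { Visible = false };
            try
            {
                // placeholder folders - adjust to your own locations
                foreach (string docPath in Directory.GetFiles(@"C:\Users\user\Documents\Merges", "*.doc"))
                {
                    Word.Document doc = word.Documents.Open(docPath);
                    // repoint the merge at the spreadsheet's new location,
                    // keeping the original file name
                    string newSource = @"C:\Users\user\Documents\Data\" +
                                       Path.GetFileName(doc.MailMerge.DataSource.Name);
                    doc.MailMerge.OpenDataSource(Name: newSource);
                    doc.Save();
                    doc.Close();
                }
            }
            finally
            {
                word.Quit();   // always release the Word instance
            }
        }
    }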

    Read the article

  • How to keep programs built from source up to date?

    - by wizard
    I'm designing a new server setup for hosting multiple websites (shared hosting for my clients over at SliceHost). I've recently moved away from the traditional LAMP setup and chosen Ubuntu, Nginx, php-fpm and MySQL. I like it a lot better than my old Apache, suphp, MySQL setup. It works great, provides encapsulation between sites and uses substantially less memory. However, I have one major maintenance problem. In order to have a recent version of Nginx and in order to use php-fpm, I've had to compile these programs from source. The reason I see this as a problem is that keeping track of updates and build configurations will end up being a lot of work. For two programs (and a patch) I can handle it, but it seems like this setup would not scale with many packages and servers. Are there good ways to manage this situation? I'm sure people do this all the time.

    Read the article

  • Backup util for binary/media files (to use with source control)

    - by acidzombie24
    I am using git for my source control. I don't back up media such as GIFs, PNGs, etc. I am thinking that every time I tag a release it would be a good idea to back up the media files as well. But I don't want to make several copies of the same file each time I create a tag. I'd like an app that handles checking whether a file already exists and handles restoring everything to a version I like. What utility might I use to do this? I'm using Windows 7.

    Read the article

  • Do You Develop Your PL/SQL Directly in the Database?

    - by thatjeffsmith
    I know this sounds like a REALLY weird question for many of you. Let me make one thing clear right away though: I am NOT talking about creating and replacing PL/SQL objects directly in a production environment. Do we really need to talk about developers in production again? No, what I am talking about is a developer doing their work from start to finish in a development database. These are generally available to a development team for building the next and greatest version of your databases and database applications. And of course you are using a third-party source control system, right?

    Last week I was in Tampa, FL presenting at the monthly Suncoast Oracle User's Group meeting. Had a wonderful time, great questions and back-and-forth. My favorite heckler was there, @oraclenered, AKA Chet Justice. I was in the middle of talking about how it's better to do your PL/SQL work in the Procedure Editor when Chet pipes up:

    Don't do it that way, that's wrong. Just press play to edit the PL/SQL directly in the database.

    Or something along those lines. I didn't get what the heck he was talking about. I had been showing how the Procedure Editor gives you much better feedback and support when working with PL/SQL. After a few back-and-forths I got to what Chet's main objection was, and again I'm going to paraphrase:

    You should develop offline in your SQL worksheet. Don't do anything in the database until it's done.

    I didn't understand. Were developers expected to be able to internalize and mentally model the PL/SQL engine, see where their errors were, etc., in these offline scripts? No, please give Chet more credit than that.

    What is the ideal Oracle Development Environment?

    If I were back in the 'real world' of database development, I would do all of my development outside of the 'dev' instance. My development process looks a little something like this:

    1. Do I have a program that already does something like this - copy and paste
    2. Has some smart person already written something like this - copy and paste
    3. Start typing in the white-screen-of-panic and bungle along until I get something that half-works
    4. Tweak, debug, test until I have fooled my subconscious into thinking that it's 'good'

    As you might understand, I don't want my co-workers to see the evolution of my code. It would seriously freak them out and I probably wouldn't have a job anymore (don't remind me that I already worked myself out of development.) So here's what I like to do:

    Run a Local Instance of Oracle on my Machine and Develop My Code Privately

    I take a copy of development - that's what source control is for after all - and run it where no one else can see it. I now get to be my own DBA. If I need a trace, no problem. If I want to run an ASH report, no worries. If I need to create a directory or run some DataPump jobs, that's all on me.

    Now when I get my code 'up to snuff,' then I will check it into source control and compile it into the official development instance. So my teammates suddenly go from seeing no program, to a mostly complete program. Is this right? If not, it doesn't seem wrong to me. And after talking to Chet in the car on the way to the local cigar bar, it seems that he's of the same opinion.

    So what's so wrong with coding directly into a development instance? I think 'wrong' is a bit strong here. But there are a few pitfalls that you might want to look out for. A few come to mind, and I'm sure Chet could add many more as my memory fails me at the moment. But here goes:

    - The development instance isn't properly backed up - would hate to lose that work
    - Development is wiped once a week and copied over from Prod - don't laugh
    - Someone clobbers your code
    - You accidentally-on-purpose clobber someone else's code
    - The more developers you have in a single fish pond, the greater chance something 'bad' will happen

    This Isn't One of Those Posts Where I Tell You What You Should Be Doing

    I realize many shops won't be open to allowing developers to stage their own local copies of Oracle. But I would at least be aware that many of your developers are probably doing this anyway, with or without your tacit approval. SQL Developer can do local file tracking, but you should be using source control too! I will say that I think it's imperative that you control your source code outside the database, even if your development team is comprised of a single developer. Store your source code in a file, and control that file in something like Subversion. You would be shocked at the number of teams that do not use a source control system. I know I continue to be shocked no matter how many times I meet another team running by the seat of their pants.

    I'd love to hear how your development process works. And of course I want to know how SQL Developer and the rest of our tools can better support your processes. And one last thing: if you want a fun and interactive presentation experience, be sure to have Chet in the room.

    Read the article

  • On Her Majesty's Secret Source Code: .NET Reflector 7 Early Access Builds Now Available

    - by Bart Read
    Dodgy Bond references aside, I'm extremely happy to be able to tell you that we've just released our first .NET Reflector 7 Early Access build. We're going to make these available over the coming weeks via the main .NET Reflector download page at: http://reflector.red-gate.com/Download.aspx

    Please have a play and tell us what you think in the forum we've set up. Also, please let us know if you run into any problems in the same place.

    The new version so far comes with numerous decompilation improvements, including (after 5 years!) support for iterator blocks - i.e., the yield statement first seen in .NET 2.0. We've also done a lot of work to solidify the support for .NET 4.0. Clive's written about the work he's done to support iterator blocks in much more detail here, along with the odd problem he's encountered when dealing with compiler-generated code: http://www.simple-talk.com/community/blogs/clivet/96199.aspx.

    On the UI front we've started what will ultimately be a rewrite of the entire front-end, albeit broken into stages over two or three major releases. The most obvious addition at the moment is tabbed browsing, which you can see in Figure 1.

    Figure 1. .NET Reflector's new tabbed decompilation feature. Use CTRL+Click on any item in the assembly browser tree, or any link in the source code view, to open it in a new tab.

    This isn't by any means finished. I'll be tying up loose ends for the next few weeks, with a major focus on performance and resource usage. .NET Reflector has historically been a largely single-threaded application, which has been fine up until now but, as you might expect, the addition of browser-style tabbing has pushed this approach somewhat beyond its limit. You can see this if you refresh the assemblies list by hitting F5. This shows up another problem: we really need to make Reflector remember everything you had open before you refreshed the list, rather than just the last item you viewed. I discovered that it's always done the latter, but it used to hide all panes apart from the treeview after a Refresh, including the decompiler/disassembler window.

    Ultimately I've got plans to add the whole VS/Chrome/Firefox-style ability to drag a tab into the middle of nowhere to spawn a new window, but I need to be mindful of the add-ins, amongst other things, so it's possible that might slip to a 7.5 or 8.0 release.

    You'll also notice that .NET Reflector 7 now needs .NET 3.5 or later to run. We made this jump because we wanted to offer ourselves a much better chance of adding some really cool functionality to support newer technologies, such as Silverlight and Windows Phone 7. We've also taken the opportunity to start using WPF for UI development, which has frankly been a godsend. The learning curve is practically vertical but, I kid you not, it's just a far better world. Really. Stop using WinForms. Now. Why are you still using it? I had to go back and work on an old WinForms dialog for an hour or two yesterday and it really made me wince. The point is we'll be able to move the UI in some exciting new directions that will make Reflector easier to use whilst continuing to develop its functionality without (and this is key) cluttering the interface. The 3.5 language enhancements should also enable us to be much more productive over the longer term.
    I know most of you have .NET Fx 3.5 or 4.0 already but, if you do need to install a new version, I'd recommend you jump straight to 4.0 because, for one thing, it's faster, and if you're starting afresh there's really no reason not to. Despite the Fx version jump, the Visual Studio add-in should still work fine in Visual Studio 2005, and obviously will continue to work in Visual Studio 2008 and 2010. If you do run into problems, again, please let us know here. As before, we continue to support every edition of Visual Studio except the Express Editions.

    Speaking of Visual Studio, we've also been improving the add-in. You can now open and explore decompiled code for any referenced assembly in any project in your solution. Just right-click on the reference, then click Decompile and Explore on the context menu. Reflector will pop up a progress box whilst it decompiles your assembly (Figure 2) - you can move this out of the way whilst you carry on working.

    Figure 2. Decompilation progress. This isn't modal so you can just move it out of the way and carry on working.

    Once it's done you can explore your assembly in the Reflector treeview (Figure 3), also accessible via the .NET Reflector > Explore Decompiled Assemblies main menu item. Double-click on any item to open decompiled source in the Visual Studio source code view. Use right-click and Go To Definition on the source view context menu to navigate through the code.

    Figure 3. Using the .NET Reflector treeview within Visual Studio. Double-click on any item to open decompiled source in the source code view.

    There are loads of other changes and fixes that have gone in, often under the hood, which I don't have room to talk about here, and plenty more to come over the next few weeks. I'll try to keep you abreast of new functionality and changes as they go in. There are a couple of smaller things worth mentioning now though. Firstly, we've reorganised the menus and toolbar in Reflector itself to more closely mirror what you might be used to in other applications. Secondly, we've tried to make some of the functionality more discoverable. For example, you can now switch the decompilation target framework version directly from the toolbar - and the default is now .NET 4.0.

    I think that about covers it for the moment. As I said, please use the new version, and send us your feedback. Here's that download URL again: http://reflector.red-gate.com/Download.aspx. Until next time!

    Technorati Tags: .net reflector, 7, early access, new version, decompilation, tabbing, visual studio, software development, .net, c#, vb

    Read the article
