Search Results

Search found 90607 results on 3625 pages for 'new features'.


  • Quick introduction to the Web Load Test features of Visual Studio 2010

    Many developers are not even aware that you can set up and run some very sophisticated web load tests for an ASP.NET application right from within Visual Studio. This article provides a quick introduction to the Web Load Test features of Visual Studio 2010. Read more by Peter Bromberg.

    Read the article

  • Need help understanding "TypeError: default __new__ takes no parameters" error in python

    - by Gordon Fontenot
    For some reason I am having trouble getting my head around __init__ and __new__. I have a bunch of code that runs fine from the terminal, but when I load it as a plugin for Google Quick Search Box, I get the error TypeError: default __new__ takes no parameters. I have been reading about the error, and it's kind of making my brain spin. As it stands I have 3 classes, with no sub-classes, and each class has its own defs. I never use def __init__ or def __new__, but I have gotten the distinct feeling that these are the functions (or the lack thereof) that would be giving me the error. I have no idea how to summarize the code down to a snippet that would be helpful here, since I'm a bit over my head, but the entire script can be found at github. I'm not expecting anyone to bugfix my code for me; I am just at my wit's end on this. A simple explanation (in plain English, not the quote from the Python docs which I have read 20 times and still don't really understand) of why this error would pop up, or why I should be, or not be, using the __init__ and/or __new__ functions would be seriously appreciated. Thanks in advance for any help you can give.

    Read the article

  • error C3662: override specifier 'new' only allowed on member functions of managed classes

    - by William
    Okay, so I'm trying to override a function in a parent class, and getting some errors. Here's a test case:

        #include <iostream>
        using namespace std;

        class A{
        public:
            int aba;
            void printAba();
        };

        class B: public A{
        public:
            void printAba() new;
        };

        void A::printAba(){ cout << "aba1" << endl; }
        void B::printAba() new{ cout << "aba2" << endl; }

        int main(){
            A a = B();
            a.printAba();
            return 0;
        }

    And here are the errors I'm getting:

        Error 1 error C3662: 'B::printAba' : override specifier 'new' only allowed on member functions of managed classes c:\users\test\test\test.cpp 12 test
        Error 2 error C2723: 'B::printAba' : 'new' storage-class specifier illegal on function definition c:\users\test\test\test.cpp 19 test

    How the heck do I do this?
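
    For context, the 'new' in that error message is the C++/CLI override specifier for managed (ref) classes. A minimal sketch of how the same intent is usually written in standard C++ (assuming the goal is runtime polymorphism through the base class):

        #include <iostream>

        class A {
        public:
            virtual ~A() = default;                 // virtual destructor for a polymorphic base
            virtual void printAba() { std::cout << "aba1\n"; }
        };

        class B : public A {
        public:
            void printAba() override { std::cout << "aba2\n"; }  // 'override' is the C++11 keyword, not 'new'
        };

        int main() {
            B b;
            A& a = b;        // a reference (or pointer) to A; 'A a = B();' would slice away the B part
            a.printAba();    // prints "aba2"
            return 0;
        }

    Note that initializing A a = B(); copies only the A sub-object, so the call has to go through a reference or pointer to A for the derived version to be used.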

    Read the article

  • The new operator in C# isn't overriding base class member

    - by Dominic Zukiewicz
    I am confused as to why the new operator isn't working as I expected it to. Note: all classes below are defined in the same namespace, and in the same file.

    This class allows you to prefix any content written to the console with some provided text:

        public class ConsoleWriter
        {
            private string prefix;
            public ConsoleWriter(string prefix) { this.prefix = prefix; }
            public void Write(string text) { Console.WriteLine(String.Concat(prefix, text)); }
        }

    Here is a base class:

        public class BaseClass
        {
            protected static ConsoleWriter consoleWriter = new ConsoleWriter("");
            public static void Write(string text) { consoleWriter.Write(text); }
        }

    Here is an implemented class:

        public class NewClass : BaseClass
        {
            protected new static ConsoleWriter consoleWriter = new ConsoleWriter("> ");
        }

    Now here's the code to execute this:

        class Program
        {
            static void Main(string[] args)
            {
                BaseClass.Write("Hello World!");
                NewClass.Write("Hello World!");
                Console.Read();
            }
        }

    So I would expect the output to be:

        Hello World!
        > Hello World!

    But the output is:

        Hello World
        Hello World

    I do not understand why this is happening. Here is my thought process as to what is happening. First, the CLR calls the BaseClass.Write() method; the CLR initialises the BaseClass.consoleWriter member; the method is called and executed with the BaseClass.consoleWriter variable. Then the CLR calls NewClass.Write(); the CLR initialises the NewClass.consoleWriter object; the CLR sees that the implementation lies in BaseClass, but the method is inherited through; the CLR executes the method locally (in NewClass) using the NewClass.consoleWriter variable.

    I thought this is how the inheritance structure works? Please can someone help me understand why this is not working?

    Read the article

  • Set a callback function to a new window in javascript

    - by SztupY
    Is there an easy way to set a "callback" function for a new window that is opened in JavaScript? I'd like to run a function of the parent from the new window, but I want the parent to be able to set the name of this particular function (so it shouldn't be hardcoded in the new window's page). For example, in the parent I have:

        function DoSomething { alert('Something'); }
        ...
        <input type="button" onClick="OpenNewWindow(linktonewwindow,DoSomething);" />

    And in the child window I want to:

        <input type="button" onClick="RunCallbackFunction();" />

    The question is how to create these OpenNewWindow and RunCallbackFunction functions. I thought about sending the function's name as a query parameter to the new window (where the server-side script generates the appropriate function calls in the generated child's HTML), which works, but I was wondering whether there is another, or better, way to accomplish this, maybe something that doesn't even require server-side tinkering. Pure JavaScript, server-side solutions and jQuery (or other frameworks) are all welcome.

    Read the article

  • Problems with classes (super new)

    - by user260036
    Hi, I'm having trouble figuring out what's happening in the following exercise. I'm learning Smalltalk, so I'm a newbie.

        Class A
        new
            ^super new initialize.
        A
        initialize
            a := 0.
        Class B
        new: aParameter
            |instance|
            instance := super new.
            instance b: instance a + aParameter.
            ^instance
        B
        initialize
            b := 0.

    The problem asks what happens when the following code is executed: B new: 10. But I can't figure out why the instance variable does not belong to class A. Thanks

    Read the article

  • The C++ 'new' keyword and C

    - by Florian
    In a C header file of a library I'm using, one of the variables is named 'new'. Unfortunately, I'm using this library in a C++ project, and the occurrence of 'new' as a variable name freaks out the compiler. I'm already using extern "C" { #include<... }, but that doesn't seem to help in this respect. Do I have to ask the library developer to change the name of that variable, even though from his perspective, as a C developer, the code is absolutely fine, since 'new' is not a C keyword?
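
    One workaround sometimes used for this situation is to rename the identifier with the preprocessor for the C++ translation unit only. This is a pragmatic hack rather than something the standard blesses (it redefines a keyword), and it only works cleanly when 'new' appears as a struct member or parameter name rather than as an exported global symbol. A rough sketch, assuming a hypothetical header legacy.h:

        // legacy.h might contain something like:
        //     struct widget { int new; };      /* fine in C, a syntax error in C++ */

        #define new new_          /* rename the offending identifier before the C header is parsed */
        extern "C" {
        #include "legacy.h"
        }
        #undef new                /* restore the C++ keyword afterwards */

        // C++ code then uses the substituted spelling; the struct layout is unchanged:
        //     widget w;
        //     w.new_ = 42;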

    Read the article

  • New Big Data Appliance Security Features

    - by mgubar
    The Oracle Big Data Appliance (BDA) is an engineered system for big data processing. It greatly simplifies the deployment of an optimized Hadoop Cluster – whether that cluster is used for batch or real-time processing. The vast majority of BDA customers are integrating the appliance with their Oracle Databases and they have certain expectations – especially around security. Oracle Database customers have benefited from a rich set of security features: encryption, redaction, data masking, database firewall, label based access control – and much, much more. They want similar capabilities with their Hadoop cluster.

    Unfortunately, Hadoop wasn't developed with security in mind. By default, a Hadoop cluster is insecure – the antithesis of an Oracle Database. Some critical security features have been implemented – but even those capabilities are arduous to setup and configure. Oracle believes that a key element of an optimized appliance is that its data should be secure. Therefore, by default the BDA delivers the "AAA of security": authentication, authorization and auditing.

    Security Starts at Authentication

    A successful security strategy is predicated on strong authentication – for both users and software services. Consider the default configuration for a newly installed Oracle Database; it's been a long time since you had a legitimate chance at accessing the database using the credentials "system/manager" or "scott/tiger". The default Oracle Database policy is to lock accounts thereby restricting access; administrators must consciously grant access to users.

    Default Authentication in Hadoop

    By default, a Hadoop cluster fails the authentication test. For example, it is easy for a malicious user to masquerade as any other user on the system. Consider the following scenario that illustrates how a user can access any data on a Hadoop cluster by masquerading as a more privileged user. In our scenario, the Hadoop cluster contains sensitive salary information in the file /user/hrdata/salaries.txt. When logged in as the hr user, you can see the following files. Notice, we're using the Hadoop command line utilities for accessing the data:

        $ hadoop fs -ls /user/hrdata
        Found 1 items
        -rw-r--r--   1 oracle supergroup         70 2013-10-31 10:38 /user/hrdata/salaries.txt
        $ hadoop fs -cat /user/hrdata/salaries.txt
        Tom Brady,11000000
        Tom Hanks,5000000
        Bob Smith,250000
        Oprah,300000000

    User DrEvil has access to the cluster – and can see that there is an interesting folder called "hrdata".

        $ hadoop fs -ls /user
        Found 1 items
        drwx------   - hr supergroup          0 2013-10-31 10:38 /user/hrdata

    However, DrEvil cannot view the contents of the folder due to lack of access privileges:

        $ hadoop fs -ls /user/hrdata
        ls: Permission denied: user=drevil, access=READ_EXECUTE, inode="/user/hrdata":oracle:supergroup:drwx------

    Accessing this data will not be a problem for DrEvil. He knows that the hr user owns the data by looking at the folder's ACLs. To overcome this challenge, he will simply masquerade as the hr user. On his local machine, he adds the hr user, assigns that user a password, and then accesses the data on the Hadoop cluster:

        $ sudo useradd hr
        $ sudo passwd
        $ su hr
        $ hadoop fs -cat /user/hrdata/salaries.txt
        Tom Brady,11000000
        Tom Hanks,5000000
        Bob Smith,250000
        Oprah,300000000

    Hadoop has not authenticated the user; it trusts that the identity that has been presented is indeed the hr user. Therefore, sensitive data has been easily compromised.
Clearly, the default security policy is inappropriate and dangerous to many organizations storing critical data in HDFS. Big Data Appliance Provides Secure Authentication The BDA provides secure authentication to the Hadoop cluster by default – preventing the type of masquerading described above. It accomplishes this thru Kerberos integration. Figure 1: Kerberos Integration The Key Distribution Center (KDC) is a server that has two components: an authentication server and a ticket granting service. The authentication server validates the identity of the user and service. Once authenticated, a client must request a ticket from the ticket granting service – allowing it to access the BDA’s NameNode, JobTracker, etc. At installation, you simply point the BDA to an external KDC or automatically install a highly available KDC on the BDA itself. Kerberos will then provide strong authentication for not just the end user – but also for important Hadoop services running on the appliance. You can now guarantee that users are who they claim to be – and rogue services (like fake data nodes) are not added to the system. It is common for organizations to want to leverage existing LDAP servers for common user and group management. Kerberos integrates with LDAP servers – allowing the principals and encryption keys to be stored in the common repository. This simplifies the deployment and administration of the secure environment. Authorize Access to Sensitive Data Kerberos-based authentication ensures secure access to the system and the establishment of a trusted identity – a prerequisite for any authorization scheme. Once this identity is established, you need to authorize access to the data. HDFS will authorize access to files using ACLs with the authorization specification applied using classic Linux-style commands like chmod and chown (e.g. hadoop fs -chown oracle:oracle /user/hrdata changes the ownership of the /user/hrdata folder to oracle). Authorization is applied at the user or group level – utilizing group membership found in the Linux environment (i.e. /etc/group) or in the LDAP server. For SQL-based data stores – like Hive and Impala – finer grained access control is required. Access to databases, tables, columns, etc. must be controlled. And, you want to leverage roles to facilitate administration. Apache Sentry is a new project that delivers fine grained access control; both Cloudera and Oracle are the project’s founding members. Sentry satisfies the following three authorization requirements: Secure Authorization:  the ability to control access to data and/or privileges on data for authenticated users. Fine-Grained Authorization:  the ability to give users access to a subset of the data (e.g. column) in a database Role-Based Authorization:  the ability to create/apply template-based privileges based on functional roles. With Sentry, “all”, “select” or “insert” privileges are granted to an object. The descendants of that object automatically inherit that privilege. A collection of privileges across many objects may be aggregated into a role – and users/groups are then assigned that role. This leads to simplified administration of security across the system. Figure 2: Object Hierarchy – granting a privilege on the database object will be inherited by its tables and views. Sentry is currently used by both Hive and Impala – but it is a framework that other data sources can leverage when offering fine-grained authorization. 
For example, one can expect Sentry to deliver authorization capabilities to Cloudera Search in the near future. Audit Hadoop Cluster Activity Auditing is a critical component to a secure system and is oftentimes required for SOX, PCI and other regulations. The BDA integrates with Oracle Audit Vault and Database Firewall – tracking different types of activity taking place on the cluster: Figure 3: Monitored Hadoop services. At the lowest level, every operation that accesses data in HDFS is captured. The HDFS audit log identifies the user who accessed the file, the time that file was accessed, the type of access (read, write, delete, list, etc.) and whether or not that file access was successful. The other auditing features include: MapReduce:  correlate the MapReduce job that accessed the file Oozie:  describes who ran what as part of a workflow Hive:  captures changes were made to the Hive metadata The audit data is captured in the Audit Vault Server – which integrates audit activity from a variety of sources, adding databases (Oracle, DB2, SQL Server) and operating systems to activity from the BDA. Figure 4: Consolidated audit data across the enterprise.  Once the data is in the Audit Vault server, you can leverage a rich set of prebuilt and custom reports to monitor all the activity in the enterprise. In addition, alerts may be defined to trigger violations of audit policies. Conclusion Security cannot be considered an afterthought in big data deployments. Across most organizations, Hadoop is managing sensitive data that must be protected; it is not simply crunching publicly available information used for search applications. The BDA provides a strong security foundation – ensuring users are only allowed to view authorized data and that data access is audited in a consolidated framework.

    Read the article

  • New SQLOS features in SQL Server 2012

    - by SQLOS Team
    Here's a quick summary of SQLOS feature enhancements going into SQL Server 2012. Most of these are already in the CTP3 pre-release, except for the Resource Governor enhancements, which will be in the release candidate. We've blogged about a couple of these items before. I plan to add detail. Let me know which ones you'd like to see more on:

    - Memory Manager Redesign: predictable sizing and governing of SQL memory consumption:
      sp_configure 'max server memory' now limits all memory committed by SQL Server
      Resource Governor governs all SQL memory consumption (other than special cases like buffer pool)
      Improved scalability of complex queries and operations that make >8K allocations
      Improved CPU and NUMA locality for memory accesses
      Single memory manager that handles page allocations of all sizes
      Consistent out-of-memory handling & management across different internal components
    - Optimized Memory Broker for Column Store indexes (Project Apollo)
    - Resource Governor:
      Support larger scale multi-tenancy by increasing the max. number of resource pools from 20 to 64 [for 64-bit]
      Enable predictable chargeback and isolation by adding a hard cap on CPU usage
      Enable vertical isolation of machine resources
      Resource pools can be affinitized to individual or groups of schedulers or to NUMA nodes
      New DMV for resource pool affinity
    - CLR 4 support, adds .NET Framework 4 advantages
    - sp_server_diagnostics:
      Captures diagnostic data and health information about SQL Server to detect potential failures
      Analyze internal system state
      Reliable when nothing else is working
    - New SQLOS DMVs (in 2008 R2 SP1):
      SQL Server related configuration - new DMV: sys.dm_server_services
      OS related resource configuration - new DMVs: sys.dm_os_volume_stats, sys.dm_os_windows_info, sys.dm_server_registry
      XEvents for SQL and OS related Perfmon counters
      Extend sys.dm_os_sys_info
      See previous blog posts here and here.
    - Scale / Mission critical:
      Increased scalability: support Windows 8 max memory and logical processors
      Dynamic Memory support in Standard Edition
      Hot-Add Memory enabled when virtualized
      Various Tier 1 performance improvements, including reduced instructions for superlatches.

    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • JavaScript: using constructor without operator 'new'

    - by GetFree
    Please help me understand why the following code works:

        <script>
        var re = RegExp('\\ba\\b') ;
        alert(re.test('a')) ;
        alert(re.test('ab')) ;
        </script>

    In the first line there is no new operator. As far as I know, a constructor in JavaScript is a function that initializes objects created by the new operator, and constructors are not meant to return anything.

    Read the article

  • c++ how to ? function_x ( new object1 )

    - by ismail marmoush
    Hi, instead of this:

        MyClass object;
        function_x (object);

    I want to do this:

        function_x ( new object );

    What should the structure of MyClass be so I can do that? If I just compile it as-is, it gives me a compile-time error.

    Answer: function_x (MyClass() )

    Edit: thanks for the quick answers. I asked the wrong question; I should have asked how temporary variables are created in C++.
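
    A minimal sketch (with hypothetical function_x, function_y and MyClass) of the two calling styles the question mixes together: passing an unnamed temporary by const reference, versus passing a pointer obtained with new:

        #include <iostream>

        struct MyClass {
            int value = 42;
        };

        // Takes the object by const reference, so an unnamed temporary binds to it.
        void function_x(const MyClass& obj) { std::cout << obj.value << '\n'; }

        // A pointer parameter would be needed if you really wanted to pass 'new MyClass'.
        void function_y(MyClass* obj) { std::cout << obj->value << '\n'; delete obj; }

        int main() {
            function_x(MyClass());      // temporary object, destroyed at the end of the full expression
            function_y(new MyClass());  // heap object; someone must delete it (here, function_y does)
            return 0;
        }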

    Read the article

  • Why do I need to call new?

    - by cam
    It seems like I could program something without ever using the word 'new', and I would never have to worry about deleting anything either. From what I understand, it's because I would run out of stack memory. Is this correct? I guess my main question is, when should I call 'new'?
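
    A small sketch of the usual rule of thumb: automatic (stack) objects are destroyed at the end of their scope with no delete needed, while new is for objects that must outlive the scope that created them, with containers handling the run-time-sized case for you:

        #include <vector>

        struct Widget { int data[1000]; };

        void automaticStorage() {
            Widget w;                      // stack object: destroyed automatically when the scope ends
            w.data[0] = 1;
        }                                  // no delete needed, but w cannot outlive this function

        Widget* dynamicStorage() {
            return new Widget();           // heap object: survives the function, so the caller owns it
        }

        int main() {
            automaticStorage();

            Widget* p = dynamicStorage();  // 'new' is needed when the object must outlive its creating scope
            delete p;                      // ...and with it comes the duty to delete

            std::vector<Widget> many(100); // run-time-sized storage: the container calls new/delete for you
            return 0;
        }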

    Read the article

  • Behaviour difference Dim oDialog1 as Dialog1 = New Dialog1 VS Dim oDialog1 as Dialog1 = Dialog1

    - by user472722
    VB.Net 2005 I have a now closed Dialog1. To get information from the Dialog1 from within a module I need to use Dim oDialog1 as Dialog1 = New Dialog1. VB.Net 2008 I have a still open Dialog1. To get information from the Dialog1 from within a module I need to use Dim oDialog1 as Dialog1 = Dialog1. VB.Net 2005 does not compile using Dim oDialog1 as Dialog1 = Dialog1 and insists on NEW What is going on and why do I need the different initialisation syntax?

    Read the article

  • Framework 4 Features: Support for Timed Jobs

    - by Anthony Shorten
    One of the new features of the Oracle Utilities Application Framework V4 is the ability for the batch framework to support Timed Batch. Traditionally, batch is associated with set processing in the background in a fixed time frame, for example billing customers. Over the last few versions, the products have needed functionality that calls for a more monitoring-style batch process. The monitor is a batch process that looks for specific business events based upon record status or other pieces of data. For example, the framework contains a fact monitor (F1-FCTRN) that can be configured to look for specific statuses or other conditions. The batch process then uses the instructions on the object to determine what to do. To support monitor-style processing, you need to run the process regularly a number of times a day (for example, every ten minutes). Traditional batch could support this but it was not as optimal as expected (if you are a site using the old Workflow subsystem, you understand what I mean).

    The batch framework was extended to add additional facilities to support timed batch (and continuous batch, which is another new feature for another blog entry). The new facilities include:

    - The batch control now defines the job as Timed or Not Timed. Non-Timed batch jobs are traditional batch jobs.
    - The timer interval (the interval between executions) can be specified.
    - The timer can be made active or inactive. Only active timers are executed. Setting Timer Active to inactive will stop the job at the next time interval. Setting Timer Active to active will start the execution of the timed job.
    - You can specify the credentials, the language used to view the messages, and an email address to send a summary of the execution to. The email address is optional and requires an email server to be specified in the relevant feature configuration.
    - You can specify the thread limits and commit intervals to be used for the multiple executions.

    Once a timer job is defined it will be executed automatically by the Business Application Server process if the DEFAULT threadpool is active. This threadpool can be started using the online batch daemon (for non-production) or externally using the threadpoolworker utility. At that time any batch process with Timer Active set to active and a Batch Control Type of Timed will begin executing. As Timed jobs are executed automatically, they do not appear in any external schedule and are not managed by an external scheduler (except via the DEFAULT threadpool itself, of course).

    If the job has no work to do when the timer interval is reached, that instance of the job is stopped and the next instance is started at the next timer interval. If there is still work to complete when the timer interval is reached, the instance will continue processing until the work is complete; then the instance will be stopped and the next instance scheduled for the next timer interval. One of the key ways of optimizing this processing is to set the timer interval correctly for the expected workload.

    This is an interesting new feature of the batch framework and we anticipate it will come in handy for specific business situations with the monitor processes.

    Read the article

  • SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014

    - by Pinal Dave
    SQL Server 2014 has new cardinality estimation logic. The cardinality estimation logic is responsible for the quality of query plans and is a major factor in query performance. This logic had not been updated for quite a while, but in the latest version, SQL Server 2014, it has been re-designed. The new logic incorporates various assumptions and algorithms for OLTP and warehousing workloads.

    "Cardinality estimates are a prediction of the number of rows in the query result. The query optimizer uses these estimates to choose a plan for executing the query. The quality of the query plan has a direct impact on improving query performance." (Source: MSDN)

    Let us see a quick example of how cardinality estimation improves performance for a query. I will be using the AdventureWorks database for my example. Before we start with this demonstration, remember that even if you have SQL Server 2014, to see the effect of the new cardinality estimates you will need your database compatibility level set to 120, which is for SQL Server 2014. If your server instance is SQL Server 2014 but you have set your database compatibility level to 110 or any earlier version, your query will perform as it would on an older version of SQL Server. Now we will execute the following query in two different compatibility modes and see its performance. (Note that my SQL Server instance is version 2014.)

        USE AdventureWorks2014
        GO
        -- -------------------------------
        -- NEW Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120
        GO
        EXEC [dbo].[uspGetManagerEmployees] 44
        GO
        -- -------------------------------
        -- Old Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110
        GO
        EXEC [dbo].[uspGetManagerEmployees] 44
        GO

    Result of Statistics IO

    Compatibility level 120:
        Table 'Person'. Scan count 0, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Compatibility level 110:
        Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Person'. Scan count 0, logical reads 137, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    You will notice that in the case of compatibility level 110 there are 137 logical reads from the Person table, whereas in the case of compatibility level 120 there are only 6 logical reads from the Person table. This drastically improves the performance of the query. If we enable the execution plan, we can see the same there as well. I hope you will find this quick example helpful. You can read more about this in my latest Pluralsight course.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • MySQL for Excel new features (1.2.0): Save and restore Edit sessions

    - by Javier Rivera
    Today we are going to talk about another new feature included in the latest MySQL for Excel release to date (1.2.0) which can be Installed directly from our MySQL Installer downloads page.Since the first release you were allowed to open a session to directly edit data from a MySQL table at Excel on a worksheet and see those changes reflected immediately on the database. You were also capable of opening multiple sessions to work with different tables at the same time (when they belong to the same schema). The problem was that if for any reason you were forced to close Excel or the Workbook you were working on, you had no way to save the state of those open sessions and to continue where you left off you needed to reopen them one by one. Well, that's no longer a problem since we are now introducing a new feature to save and restore active Edit sessions. All you need to do is in click the options button from the main MySQL for Excel panel:  And make sure the Edit Session Options (highlighted in yellow) are set correctly, specially that Restore saved Edit sessions is checked: Then just begin an Edit session like you would normally do, select the connection and schema on the main panel and then select table you want to edit data from and click over Edit MySQL Data. and just import the MySQL data into Excel:You can edit data like you always did with the previous version. To test the save and restore saved sessions functionality, first we need to save the workbook while at least one Edit session is opened and close the file.Then reopen the workbook. Depending on your version of Excel is where the next steps are going to differ:Excel 2013 extra step (first): In Excel 2013 you first need to open the workbook with saved edit sessions, then click the MySQL for Excel Icon on the the Data menu (notice how in this version, every time you open or create a new file the MySQL for Excel panel is closed in the new window). Please note that if you work on Excel 2013 with several workbooks with open edit sessions each at the same time, you'll need to repeat this step each time you open one of them: Following steps:  In Excel 2010 or previous, you just need to make sure the MySQL for Excel panel is already open at this point, if its not, please do the previous step specified above (Excel 2013 extra step). For Excel 2010 or older versions you will only need to do this previous step once.  When saved sessions are detected, you will be prompted what to do with those sessions, you can click Restore to continue working where you left off, click Discard to delete the saved sessions (All edit session information for this file will be deleted from your computer, so you will no longer be prompted the next time you open this same file) or click Nothing to continue without opening saved sessions (This will keep the saved edit sessions intact, to be prompted again about them the next time you open this workbook): And there you have it, now you will be able to save your Edit sessions, close your workbook or turn off your computer and you will still be able to reopen them in the future, to continue working right where you were. Today we talked about how you can save your active Edit sessions and restore them later, this is another feature included in the latest MySQL for Excel release (1.2.0). Please remember you can try this product and many others for free downloading the installer directly from our MySQL Installer downloads page.Happy editing !

    Read the article

  • 17 new features in Visual Studio 2010

    - by vik20000in
    Visual studio 2010 has been released to RTM a few days back. This release of Visual studio 2010 comes with a big number of improvements on many fronts. In this post I will try and point out some of the major improvements in Visual Studio 2010. 1)      Visual studio IDE Improvement. Visual studio IDE has been rewritten in WPF. The look and feel of the studio has been improved for improved readability. Start page has been redesigned and template so that anyone can change the start page as they wish. 2)      Multiple Monitor - Support for Multiple Monitor was already there in Visual studio. But in this edition it has been improved as much that we can now place the document, design and code window outside the IDE in another monitor. 3)      ZOOM in Code Editor – Making the editors in WPF has made significant improvement for them. The best one that I like is the ZOOM feature. We can now zoom in the code editor with the help of the ctrl + Mouse scroll. The zoom feature does not work on the Design surface or windows with icon like solution view and toolbox. 4)      Box Selection - Another Important improvement in the Visual studio 2010 is the box selection. We can select a rectangular by holding down the Alt Key and selecting with mouse.  Now in the rectangular selection we can insert text, Paste same code in different line etc. This is helpful if you want to convert a number of variables from public to private etc… 5)      New Improved Search – One of the best productivity improvements in Visual studio 2010 is its new search as you type support. This has been done in the Navigate To window which can be brought up by pressing (Ctrl + ,). The navigate To windows also take help of the Camel casing and will be able to search with the help of camel casing when character is entered in upper case. For example we can search AOH for AddOrederHeader. 6)      Call Hierarchy – This feature is only available to the Visual C# and Visual C++ editor. The call hierarchy windows displays the calls made to and from (yes both to and from) a selected method property or a constructor. The call hierarchy also shows the implementation of interface and the overrides of virtual or abstract methods. This window is very helpful in understanding the code flow, and evaluating the effect of making changes. The best part is it is available at design time and not at runtime only like a debugger. 7)      Highlighting references – One of the very cool stuff in Visual Studio 2010 is the fact if you select a variable then all the use of that variable will be highlighted alongside. This should work for all the result of symbols returned by Find all reference. This also works for Name of class, objects variable, properties and methods. We can also use the Ctrl + Shift + Down Arrow or Up Arror to move through them. 8)      Generate from usage - The Generate from usage feature lets you use classes and members before you define them. You can generate a stub for any undefined class, constructor, method, property, field, or enum that you want to use but have not yet defined. You can generate new types and members without leaving your current location in code, This minimizes interruption to your workflow.9)      IntelliSense Suggestion Mode - IntelliSense now provides two alternatives for IntelliSense statement completion, completion mode and suggestion mode. Use suggestion mode for situations where classes and members are used before they are defined. 
In suggestion mode, when you type in the editor and then commit the entry, the text you typed is inserted into the code. When you commit an entry in completion mode, the editor shows the entry that is highlighted on the members list. When an IntelliSense window is open, you can press CTRL+ALT+SPACEBAR to toggle between completion mode and suggestion mode. 10)   Application Lifecycle Management – A client application for management of application lifecycle like version control, work item tracking, build automation, team portal etc is available for free (this is not available for express edition.). 11)   Start Page – The start page has been redesigned with WPF for new functionality and look. Tabbed areas are provided for content from different source including MSDN. Once you open some project the start page closes automatically. The list of recent project also lets you remove project from the list. And above all the start page is customizable enough to be changed as per individual requirement. 12)   Extension Manager – Visual Studio 2010 has provided good ways to be extended. We can also use MEF to extend most of the features of Visual Studio. The new extension manager now can go the visual studio gallery and install the extension without even opening any explorer. 13)   Code snippets – Visual studio 2010 for HTML, Jscript and Asp.net also. 14)   Improved Intelligence for JavaScript has been improved vastly (around 2-5 times). Intelligence now also shows the XML documentation comment on the go. 15)   Web Deployment – Web Deployment has been vastly improved. We can package and publish the web application in one click. Three major supported deployment scenarios are Web packages, one click deployment and Web configuration Transformation. 16)   SharePoint - Visual Studio 2010 also brings vastly improved development experience for SharePoint. We can create, edit, debug, package, deploy and activate SharePoint project from within Visual Studio. Deployment of Site is as easy as hitting F5. 17)   Azure – Visual Studio 2010 also comes with handy improvement for developing on windows Azure environment. Vikram

    Read the article

  • Problem adding Contact with new API

    - by Mike
    Hello, I am trying to add a new contact to my contact list using the new ContactsContract API via my application. I have the following method based on the Contact Manager example on android dev:

        private static void addContactCore(Context context, String accountType, String accountName,
                String name, String phoneNumber, int phoneType)
                throws RemoteException, OperationApplicationException {
            ArrayList<ContentProviderOperation> ops = new ArrayList<ContentProviderOperation>();

            //Add contact type
            ops.add(ContentProviderOperation.newInsert(ContactsContract.RawContacts.CONTENT_URI)
                .withValue(ContactsContract.RawContacts.ACCOUNT_TYPE, accountType)
                .withValue(ContactsContract.RawContacts.ACCOUNT_NAME, accountName)
                .build());

            //Add contact name
            ops.add(ContentProviderOperation.newInsert(ContactsContract.Data.CONTENT_URI)
                .withValueBackReference(ContactsContract.Data.RAW_CONTACT_ID, 0)
                .withValue(ContactsContract.Data.MIMETYPE, ContactsContract.CommonDataKinds.StructuredName.CONTENT_ITEM_TYPE)
                .withValue(ContactsContract.CommonDataKinds.StructuredName.DISPLAY_NAME,
                    (!name.toLowerCase().equals("unavailable") && !name.equals("")) ? name : phoneNumber)
                .build());

            //Add phone number
            ops.add(ContentProviderOperation.newInsert(ContactsContract.Data.CONTENT_URI)
                .withValueBackReference(ContactsContract.Data.RAW_CONTACT_ID, 0)
                .withValue(ContactsContract.Data.MIMETYPE, ContactsContract.CommonDataKinds.Phone.CONTENT_ITEM_TYPE)
                .withValue(ContactsContract.CommonDataKinds.Phone.NUMBER, phoneNumber)
                .withValue(ContactsContract.CommonDataKinds.Phone.TYPE, phoneType)
                .build());

            //Add contact
            context.getContentResolver().applyBatch(ContactsContract.AUTHORITY, ops);
        }

    In one example I have the following values for the parameters:

        accountType: com.google
        accountName: (my google account email)
        name: Mike
        phoneNumber: 5555555555
        phoneType: 3

    The call to the function returns normally without any exception being thrown, however the contact is nowhere to be found in the contact manager on my phone. There is also no contact with that information on my phone already. Does anyone have any insight into what I might be doing wrong?

    Read the article

  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks, We have a linux system (kubuntu 7.10) that runs a number of CORBA Server processes. The server software uses glibc libraries for memory allocation. The linux PC has 4G physical memory. Swap is disabled for speed reasons. Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2G Bytes. It can be up to about 1.9G Bytes. When the request has completed, the buffer is released using 'delete'. This works fine for several consecutive requests that allocate buffers of the same size or if the request allocates a smaller size than the previous. The memory appears to be free'd ok - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard etc. The problem arises when a request requires a buffer larger than the previous. In this case, operator 'new' throws an exception. It's as if the memory that has been free'd from the first allocation cannot be re-allocated even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds. i.e. killing the process appears to fully release the freed memory back to the system. Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is being released to the system. BTW - I'm not sure if it's relevant to our problem, but the server uses Pthreads that get created and destroyed on each processing request. Cheers, Brian.
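
    Since the question mentions switching to malloc/free and mallopt, here is a glibc-specific sketch (the threshold values are illustrative assumptions, not a recommendation) that forces very large blocks to be served by mmap, so that free() can return them to the kernel immediately instead of leaving them stranded in the heap arena:

        #include <malloc.h>   // glibc mallopt, M_MMAP_THRESHOLD, M_TRIM_THRESHOLD
        #include <cstdlib>
        #include <cstring>

        int main() {
            // Any allocation larger than 64 MiB goes through mmap instead of the main heap.
            mallopt(M_MMAP_THRESHOLD, 64 * 1024 * 1024);
            // Trim the main arena back to the OS aggressively when blocks are freed.
            mallopt(M_TRIM_THRESHOLD, 1 * 1024 * 1024);

            const size_t big = 1200UL * 1024 * 1024;        // roughly 1.2G, as in the question
            char* buffer = static_cast<char*>(std::malloc(big));
            if (buffer) {
                std::memset(buffer, 0, big);                // touch the pages
                std::free(buffer);                          // the mmap'd block is unmapped here
            }
            return 0;
        }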

    Read the article

  • Should a new language compiler target the JVM?

    - by Pindatjuh
    I'm developing a new language. My initial target was to compile to native x86 for the Windows platform, but now I am in doubt. I've seen some new languages target the JVM (most notably Scala and Clojure). Of course it's not possible to port every language easily to the JVM; doing so may lead to small changes to the language and its design. That's the reason behind this doubt, and thus this question: Is targeting the JVM a good idea when creating a compiler for a new language? Or should I stick with x86? I have experience in generating JVM bytecode. Are there any workarounds for the JVM's GC? The language has deterministic implicit memory management. How do I produce JIT-friendly bytecode, such that it will get the highest speedup? Is it similar to compiling for IA-32, such as the 4-1-1 muops pattern on the Pentium?

    I can imagine some advantages (please correct me if I'm wrong):

    - JVM bytecode is easier than x86.
    - Like x86 communicates with Windows, the JVM communicates with the Java Foundation Classes to provide I/O, threading, GUI, etc.
    - Implementing "lightweight" threads. I've seen a very clever implementation of this at http://www.malhar.net/sriram/kilim/.
    - Most advantages of the Java Runtime (portability, etc.).

    The disadvantages, as I imagine them, are:

    - Less freedom? On x86 it'll be easier to create low-level constructs, while the JVM is a higher-level (more abstract) processor.
    - Most disadvantages of the Java Runtime (no native dynamic typing, etc.).

    Read the article

  • safe placement new & explicit destructor call

    - by uray
    This is an example of my code:

        template <typename T>
        struct MyStruct {
            T object;
        };

        template <typename T>
        class MyClass {
            MyStruct<T>* structPool;
            size_t structCount;

            MyClass(size_t count) {
                this->structCount = count;
                this->structPool = new MyStruct<T>[count];
                for( size_t i=0 ; i<count ; i++ ) {
                    //placement new to call constructor
                    new (&this->structPool[i].object) T();
                }
            }

            ~MyClass() {
                for( size_t i=0 ; i<this->structCount ; i++ ) {
                    //explicit destructor call
                    this->structPool[i].object.~T();
                }
                delete[] this->structPool;
            }
        };

    My question is: is this a safe way to do it? Am I making some hidden mistake under some condition? Will it work for every type of object (POD and non-POD)?
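
    One detail worth noting: new MyStruct<T>[count] already default-constructs every T member, so the placement-new loop constructs each object a second time, and the explicit ~T() followed by delete[] then destroys each one twice. The usual pattern starts from raw, uninitialized storage instead. A minimal sketch along those lines (ignoring exception safety and over-aligned types):

        #include <new>        // placement new, operator new[] / delete[]
        #include <cstddef>

        template <typename T>
        class Pool {
            unsigned char* raw;   // uninitialized bytes, so each T is constructed exactly once
            T* objects;
            std::size_t count;
        public:
            explicit Pool(std::size_t n) : count(n) {
                raw = static_cast<unsigned char*>(operator new[](n * sizeof(T)));
                objects = reinterpret_cast<T*>(raw);
                for (std::size_t i = 0; i < n; ++i)
                    new (objects + i) T();              // placement new on raw memory
            }
            ~Pool() {
                for (std::size_t i = 0; i < count; ++i)
                    objects[count - 1 - i].~T();        // explicit destructor calls, reverse order
                operator delete[](raw);                 // release the raw storage, no destructors run here
            }
            T& operator[](std::size_t i) { return objects[i]; }
        };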

    Read the article

  • C++ new memory allocation fragmentation

    - by tamulj
    I was trying to look at the behavior of the new allocator and why it doesn't place data contiguously. My code:

        struct ci {
            char c;
            int i;
        };

        template <typename T>
        void memTest() {
            T * pLast = new T();
            for(int i = 0; i < 20; ++i) {
                T * pNew = new T();
                cout << (pNew - pLast) << " ";
                pLast = pNew;
            }
        }

    So I ran this with char, int, and ci. Most allocations were a fixed length from the last; sometimes there were odd jumps from one available block to another.

        sizeof(char): 1                                      Average Jump: 64 bytes
        sizeof(int): 4                                       Average Jump: 16
        sizeof(ci): 8 (int has to be placed on a 4 byte alignment)   Average Jump: 9

    Can anyone explain why the allocator is fragmenting memory like this? Also, why is the jump for char so much larger than for int and for a structure that contains both an int and a char?
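
    One thing to keep in mind when reading those numbers: pNew - pLast on a T* is measured in units of sizeof(T), not bytes, so the char, int and ci runs are not directly comparable. A small sketch that prints the spacing in bytes instead (the gap is typically sizeof(T) rounded up to the allocator's minimum chunk size, plus per-allocation bookkeeping):

        #include <iostream>
        #include <cstddef>

        // Measures the distance in bytes between consecutive allocations, regardless of T,
        // by casting to char*. Objects are deliberately leaked, as in the original test.
        template <typename T>
        void gapTest(const char* label) {
            std::cout << label << ": ";
            char* last = reinterpret_cast<char*>(new T());
            for (int i = 0; i < 10; ++i) {
                char* next = reinterpret_cast<char*>(new T());
                std::cout << (next - last) << " ";   // byte distance, not element distance
                last = next;
            }
            std::cout << "\n";
        }

        struct ci { char c; int i; };

        int main() {
            gapTest<char>("char");
            gapTest<int>("int");
            gapTest<ci>("ci");
            return 0;
        }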

    Read the article
