Search Results

Search found 13869 results on 555 pages for 'memory dump'.


  • Working With Extended Events

    - by Fatherjack
    SQL Server 2012 has made working with Extended Events (XE) pretty simple when it comes to seeing what sessions you have on your servers, what options you have selected and so forth, but if you are like me then you still have some SQL Server instances that are 2008 or 2008 R2. For those servers there is no built-in way to view the Extended Event sessions in SSMS. I keep coming up against the same questions – Where are the xel log files? What events, actions or predicates are set for the sessions on the server? What sessions are there on the server already? I got tired of these being perpetual questions and wrote some TSQL to save as a snippet in SQL Prompt so that the details are permanently only a couple of clicks away.

    First, some history. If you just came here for the code, skip down a few paragraphs and it's all there. If you want a little time to reminisce about SQL Server then stick with me through the next paragraph or two. We are in a bit of a cross-over period currently: there are many versions of SQL Server, but I would guess that SQL Server 2008, 2008 R2 and 2012 comprise the majority of installations. With each of these comes a set of management tools, of which SQL Server Management Studio (SSMS) is one. In 2008 and 2008 R2, Extended Events made their first appearance and there was no way to work with them in the SSMS interface. At some point the Extended Events guru Jonathan Kehayias (http://www.sqlskills.com/blogs/jonathan/) created the SQL Server 2008 Extended Events SSMS Addin, which is really an excellent tool to ease XE session administration. This addin will install in SSMS 2008 or 2008 R2 but not SSMS 2012. If you use a compatible version of SSMS then I wholly recommend downloading and using it to make your work with XE much easier. If you have SSMS 2012 installed, and there is no reason not to as it will let you work with all versions of SQL Server, then you cannot install this addin. If you are working with SQL Server 2012 then SSMS 2012 has built-in functionality to manage XE sessions; this functionality does not apply to 2008 or 2008 R2 instances though. This means you are somewhat restricted and have to use TSQL to manage XE sessions on older versions of SQL Server.

    OK, those of you that skipped ahead for the code, you need to start from here: So, you are working with SSMS 2012 but have a SQL Server on an earlier version that needs an XE session created, or you think there is a session created but you aren't sure, or you know it's there but can't remember if it is running and where the output is going. How do you find out? Well, none of the information is hidden as such, but it is a bit of a wrangle to locate it, and the code needed is not the sort that sticks in your memory. I have created two pieces of code. The first examines the SYS.Server_Event_… catalog views in combination with the SYS.DM_XE_… management views to give the name of every session that exists on the server, regardless of whether it is running or not, along with two generated pieces of TSQL. One piece will alter the state of the session: if the session is running then the code will stop the session if executed, and vice versa. The other piece of code will drop the selected session; if the session is running then the code will stop it first. Do not execute the DROP code unless you are sure you have the CREATE code to hand. The session will be dropped from the server without a second chance to change your mind.
    /**************************************************************/
    /***   To locate and describe event sessions on a server   ***/
    /***                                                        ***/
    /***   Generates TSQL to start/stop/drop sessions          ***/
    /***                                                        ***/
    /***        Jonathan Allen - @fatherjack                   ***/
    /***                 June 2013                             ***/
    /***                                                        ***/
    /**************************************************************/
    SELECT  [EES].[name] AS [Session Name - all sessions] ,
            CASE WHEN [MXS].[name] IS NULL THEN 'Stopped'
                 ELSE 'Running'
            END AS SessionState ,
            CASE WHEN [MXS].[name] IS NULL
                 THEN 'ALTER EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER STATE = START;'
                 ELSE 'ALTER EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER STATE = STOP;'
            END AS ALTER_SessionState ,
            CASE WHEN [MXS].[name] IS NULL
                 THEN 'DROP EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER; -- This WILL drop the session. It will no longer exist. Don''t do it unless you are certain you can recreate it if you need it.'
                 ELSE 'ALTER EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER STATE = STOP; ' + CHAR(10)
                      + '-- DROP EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER; -- This WILL stop and drop the session. It will no longer exist. Don''t do it unless you are certain you can recreate it if you need it.'
            END AS DROP_Session
    FROM    [sys].[server_event_sessions] AS EES
            LEFT JOIN [sys].[dm_xe_sessions] AS MXS ON [EES].[name] = [MXS].[name]
    WHERE   [EES].[name] NOT IN ( 'system_health', 'AlwaysOn_health' )
    ORDER BY SessionState;
    GO

    I have excluded the system_health and AlwaysOn sessions as I don't want to accidentally execute the drop script for these sessions, which are created as part of the SQL Server installation. It is possible to recreate them but that is a whole lot of aggravation I'd rather avoid. The second piece of code gathers details of running XE sessions only and provides information on the events being collected, any predicates that are set on those events, the actions that are set to be collected, where the collected information is being logged and, if that logging is to a file target, where that file is located.
    /**********************************************/
    /***   Running Session summary              ***/
    /***                                        ***/
    /***   Details key values of XE sessions   ***/
    /***   that are in a running state         ***/
    /***                                        ***/
    /***        Jonathan Allen - @fatherjack   ***/
    /***        June 2013                      ***/
    /***                                        ***/
    /**********************************************/
    SELECT  [EES].[name] AS [Session Name - running sessions] ,
            [EESE].[name] AS [Event Name] ,
            COALESCE([EESE].[predicate], 'unfiltered') AS [Event Predicate Filter(s)] ,
            [EESA].[Action] AS [Event Action(s)] ,
            [EEST].[Target] AS [Session Target(s)] ,
            ISNULL([EESF].[value], 'No file target in use') AS [File_Target_UNC]
    -- select *
    FROM    [sys].[server_event_sessions] AS EES
            INNER JOIN [sys].[dm_xe_sessions] AS MXS ON [EES].[name] = [MXS].[name]
            INNER JOIN [sys].[server_event_session_events] AS [EESE] ON [EES].[event_session_id] = [EESE].[event_session_id]
            LEFT JOIN [sys].[server_event_session_fields] AS EESF ON ( [EES].[event_session_id] = [EESF].[event_session_id]
                                                                       AND [EESF].[name] = 'filename' )
            CROSS APPLY ( SELECT STUFF(( SELECT ', ' + sest.name
                                         FROM   [sys].[server_event_session_targets] AS SEST
                                         WHERE  [EES].[event_session_id] = [SEST].[event_session_id]
                                         FOR XML PATH('')
                                       ), 1, 2, '') AS [Target]
                        ) AS EEST
            CROSS APPLY ( SELECT STUFF(( SELECT ', ' + [sesa].NAME
                                         FROM   [sys].[server_event_session_actions] AS sesa
                                         WHERE  [sesa].[event_session_id] = [EES].[event_session_id]
                                         FOR XML PATH('')
                                       ), 1, 2, '') AS [Action]
                        ) AS EESA
    WHERE   [EES].[name] NOT IN ( 'system_health', 'AlwaysOn_health' ) /* Optional: exclude 'out-of-the-box' sessions */

    I hope that these scripts are useful to you and I would be obliged if you would keep my name in the script comments. I have no problem with you using them in production or personal circumstances; however, they come with no warranty or guarantee. Don't use them unless you understand them and are happy with what they are going to do. I am not ever responsible for the consequences of executing these scripts on your servers.
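    For reference, here is a minimal sketch of what a 2008-era session these scripts would report on can look like. This example is mine, not from the original post: the session name, event, actions and file path are all illustrative. (Note that on 2008/2008 R2 the file target is called package0.asynchronous_file_target; SQL Server 2012 renamed it to event_file.)

    CREATE EVENT SESSION [capture_errors] ON SERVER
    ADD EVENT sqlserver.error_reported
        ( ACTION ( sqlserver.session_id, sqlserver.sql_text )
          WHERE ( [severity] >= 16 ) )
    ADD TARGET package0.asynchronous_file_target
        ( SET filename = N'C:\XELogs\capture_errors.xel' )   -- illustrative path
    WITH ( STARTUP_STATE = OFF );
    GO
    -- The first script above would now list [capture_errors] as 'Stopped'
    -- and generate the matching ALTER ... STATE = START command:
    ALTER EVENT SESSION [capture_errors] ON SERVER STATE = START;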

    Read the article

  • Microsoft Business Intelligence Seminar 2011

    - by DavidWimbush
    I was lucky enough to attend the maiden presentation of this at Microsoft Reading yesterday. It was pretty gripping stuff, not only because of what was said but also because of what could only be hinted at. Here's what I took away from the day. (Disclaimer: I'm not a BI guru, just a reasonably experienced BI developer, so I may have misunderstood or misinterpreted a few things, particularly when so much of the talk was about the vision and subtle hints of what is coming. Please comment if you think I've got anything wrong. I'm also not going to even try to cover Master Data Services as I struggled to imagine how you would actually use it.)

    I was a bit worried when I learned that the whole day was going to be presented by one guy, but Rafal Lukawiecki is a very engaging speaker. He's going to be presenting this about 20 times around the world over the coming months. If you get a chance to hear him speak, I say go for it. No doubt some of the hints will become clearer as Denali gets closer to RTM.

    Firstly, things are definitely happening in the SQL Server Reporting and BI world. Traditionally IT would build a data warehouse, then cubes on top of that, and then publish them in a structured and controlled way. But, just as with many IT projects in general, by the time it's finished the business has moved on and the system no longer meets their requirements. This is not sustainable and something more agile is needed, but there has to be some control. Apparently we're going to be hearing the catchphrase 'Balancing agility with control' a lot.

    More users want more access to more data. Can they define what they want? Of course not, but they'll recognise it when they see it. It's estimated that only 28% of potential BI users have meaningful access to the data they need, so there is a real pent-up demand. The answer looks like: give them some self-service tools so they can experiment and see what works, and then IT can help to support the results. It's estimated that 32% of Excel users are comfortable with its analysis tools such as pivot tables. It's the power user's preferred tool. Why fight it? That's why PowerPivot is an Excel add-in and that's why they released a Data Mining add-in for it as well.

    It does appear that the strategy is going to be to use Reporting Services (in SharePoint mode), PowerPivot, and possibly something new (smiles and hints but no details) to create reports and explore data. Everything will be published and managed in SharePoint, which gives users the ability to mash-up, share and socialise what they've found out. SharePoint also gives IT tools to understand what people are looking at and where to concentrate effort. If PowerPivot report X becomes widely used, it's time to check that it shows what they think it does and perhaps get it a bit more under central control. There was more SharePoint detail that went slightly over my head regarding where Excel Services and Excel Web Application fit in, the differences between them, and the suggestion that it is likely they will one day become one (but not in the immediate future).

    That basic pattern is set to be expanded upon by further exploiting Vertipaq (the columnar indexing engine that enables PowerPivot to store and process a lot of data fast and in a small memory footprint) to provide scalability 'from the desktop to the data centre', and some yet-to-be-detailed advances in 'frictionless deployment' (part of which is about making the difference between local and the cloud pretty much irrelevant).
    Excel looks like becoming Microsoft's primary BI client. It already has:
    - the ability to consume cubes
    - strong visualisation tools
    - slicers (which are part of Excel, not PowerPivot)
    - a data mining add-in
    - PowerPivot

    A major hurdle for self-service BI is presenting the data in a consumable format. You can't just give users PowerPivot and a server with a copy of the OLTP database(s). Building cubes is labour intensive and doesn't always give the user what they need. This is where the BI Semantic Model (BISM) comes in. I gather it's a layer of metadata you define that can combine multiple data sources (and types of data source) into a clear 'interface' that users can work with. It comes with a new query language called DAX. SSAS cubes are unlikely to go away overnight because, with their pre-calculated results, they are still the most efficient way to work with really big data sets.

    A few other random titbits that came up:
    - Reporting Services is going to get some good new stuff in Denali.
    - Keep an eye on www.projectbotticelli.com for the slides. You can also view last year's seminar sessions, which covered a lot of the same ground as far as the overall strategy is concerned. They plan to add more material as Denali's features are publicly exposed.
    - Check out the PASS keynote address for a showing of Yahoo's SQL BI servers. Apparently they wheeled the rack out on stage still plugged in and running!
    - Check out the Excel 2010 Data Mining Add-Ins. 32-bit only at present but 64-bit is on the way.
    - There are lots of data sets, many of them free, at the Windows Azure Marketplace Data Market (where you can also get ESRI shape files).
    - If you haven't already seen it, have a look at the Silverlight Pivot Viewer (http://weblogs.asp.net/scottgu/archive/2010/06/29/silverlight-pivotviewer-now-available.aspx).
    - The Bing Maps Data Connector is worth a look if you're into spatial stuff (http://www.bing.com/community/site_blogs/b/maps/archive/2010/07/13/data-connector-sql-server-2008-spatial-amp-bing-maps.aspx).
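    To give a flavour of the DAX language mentioned above (this example is mine, not from the seminar; the table and column names are invented, and the syntax shown is the PowerPivot measure form):

    Total Sales := SUM ( Sales[SalesAmount] )
    UK Sales := CALCULATE ( [Total Sales], Geography[Country] = "United Kingdom" )

    The second measure shows the characteristic DAX pattern: take a base measure and re-evaluate it under a modified filter context.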

    Read the article

  • J2EE Applications, SPARC T4, Solaris Containers, and Resource Pools

    - by user12620111
    I've obtained a substantial performance improvement on a SPARC T4-2 Server running a J2EE Application Server Cluster by deploying the cluster members into Oracle Solaris Containers and binding those containers to cores of the SPARC T4 Processor. This is not a surprising result; in fact, it is consistent with other results that are available on the Internet. See the "references", below, for some examples. Nonetheless, here is a summary of my configuration and results.

    (1.0) Before deploying a J2EE Application Server Cluster into a virtualized environment, many decisions need to be made. I'm not claiming that all of the decisions that I have made will work well for every environment. In fact, I'm not even claiming that all of the decisions are the best possible for my environment. I'm only claiming that of the small sample of configurations that I've tested, this is the one that is working best for me. Here are some of the decisions that needed to be made:

    (1.1) Which virtualization option? There are several virtualization options and isolation levels available. Options include:
    - Hard partitions: Dynamic Domains on Sun SPARC Enterprise M-Series Servers
    - Hypervisor based virtualization such as Oracle VM Server for SPARC (LDOMs) on SPARC T-Series Servers
    - OS virtualization using Oracle Solaris Containers
    - Resource management tools in the Oracle Solaris OS to control the amount of resources an application receives, such as CPU cycles, physical memory, and network bandwidth
    Oracle Solaris Containers provide the right level of isolation and flexibility for my environment. To borrow some words from my friends in marketing, "The SPARC T4 processor leverages the unique, no-cost virtualization capabilities of Oracle Solaris Zones".

    (1.2) How to associate Oracle Solaris Containers with resources? There are several options available to associate containers with resources, including (a) resource pool association, (b) dedicated-cpu resources and (c) capped-cpu resources. I chose to create resource pools and associate them with the containers because I wanted explicit control over the cores and virtual processors.

    (1.3) Cluster topology? Is it best to deploy (a) multiple application servers on one node, (b) one application server on multiple nodes, or (c) multiple application servers on multiple nodes? After a few quick tests, it appears that one application server per Oracle Solaris Container is a good solution.

    (1.4) Number of cluster members to deploy? I chose to deploy four big 64-bit application servers. I would like to go back and test many 32-bit application servers, but that is left for another day.

    (2.0) Configuration tested.

    (2.1) I was using a SPARC T4-2 Server which has 2 CPUs and 128 virtual processors.
    To understand the physical layout of the hardware on Solaris 10, I used the OpenSolaris psrinfo perl script available at http://hub.opensolaris.org/bin/download/Community+Group+performance/files/psrinfo.pl:

    test# ./psrinfo.pl -pv
    The physical processor has 8 cores and 64 virtual processors (0-63)
      The core has 8 virtual processors (0-7)
      The core has 8 virtual processors (8-15)
      The core has 8 virtual processors (16-23)
      The core has 8 virtual processors (24-31)
      The core has 8 virtual processors (32-39)
      The core has 8 virtual processors (40-47)
      The core has 8 virtual processors (48-55)
      The core has 8 virtual processors (56-63)
        SPARC-T4 (chipid 0, clock 2848 MHz)
    The physical processor has 8 cores and 64 virtual processors (64-127)
      The core has 8 virtual processors (64-71)
      The core has 8 virtual processors (72-79)
      The core has 8 virtual processors (80-87)
      The core has 8 virtual processors (88-95)
      The core has 8 virtual processors (96-103)
      The core has 8 virtual processors (104-111)
      The core has 8 virtual processors (112-119)
      The core has 8 virtual processors (120-127)
        SPARC-T4 (chipid 1, clock 2848 MHz)

    (2.2) The "before" test: without processor binding. I started with a 4-member cluster deployed into 4 Oracle Solaris Containers. Each container used a unique gigabit Ethernet port for HTTP traffic. The containers shared a 10 gigabit Ethernet port for JDBC traffic.

    (2.3) The "after" test: with processor binding. I ran one application server in the Global Zone and another application server in each of the three non-global zones (NGZ).

    (3.0) Configuration steps. The following steps need to be repeated for all three Oracle Solaris Containers.
    (3.1) Stop the AppServers from the BUI.
    (3.2) Stop the NGZ: test# ssh test-z2 init 5
    (3.3) Enable resource pools: test# svcadm enable pools
    (3.4) Create the resource pool: test# poolcfg -dc 'create pool pool-test-z2'
    (3.5) Create the processor set: test# poolcfg -dc 'create pset pset-test-z2'
    (3.6) Specify the maximum number of CPUs that may be added to the processor set: test# poolcfg -dc 'modify pset pset-test-z2 (uint pset.max=32)'
    (3.7) bash syntax to add virtual CPUs to the processor set: test# (( i = 64 )); while (( i < 96 )); do poolcfg -dc "transfer to pset pset-test-z2 (cpu $i)"; (( i = i + 1 )) ; done
    (3.8) Associate the resource pool with the processor set: test# poolcfg -dc 'associate pool pool-test-z2 (pset pset-test-z2)'
    (3.9) Tell the zone to use the resource pool that has been created: test# zonecfg -z test-z2 set pool=pool-test-z2
    (3.10) Boot the Oracle Solaris Container: test# zoneadm -z test-z2 boot
    (3.11) Save the configuration to /etc/pooladm.conf: test# pooladm -s

    (4.0) Results. Using the resource pools improved both throughput and response time.

    (5.0) References:
    - System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones
    - Capitalizing on large numbers of processors with WebSphere Portal on Solaris
    - WebSphere Application Server and T5440 (Dileep Kumar's Weblog)
    - http://www.brendangregg.com/zones.html
    - Reuters Market Data System, RMDS 6 Multiple Instances (Consolidated), Performance Test Results in Solaris, Containers/Zones Environment on Sun Blade X6270, by Amjad Khan, 2009.
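    Collected into a single sketch, the whole per-zone sequence looks like this. It is assembled purely from the commands in steps 3.3-3.11 above, assuming the same zone/pool names and the 64-95 virtual-processor range; adjust both for each container:

    #!/usr/bin/bash
    # Bind zone test-z2 to a 32-vCPU processor set (names and ranges from the steps above)
    svcadm enable pools                                            # (3.3) enable resource pools
    poolcfg -dc 'create pool pool-test-z2'                         # (3.4) create the pool
    poolcfg -dc 'create pset pset-test-z2'                         # (3.5) create the processor set
    poolcfg -dc 'modify pset pset-test-z2 (uint pset.max=32)'      # (3.6) cap the pset size
    (( i = 64 ))                                                   # (3.7) add vCPUs 64-95
    while (( i < 96 )); do
        poolcfg -dc "transfer to pset pset-test-z2 (cpu $i)"
        (( i = i + 1 ))
    done
    poolcfg -dc 'associate pool pool-test-z2 (pset pset-test-z2)'  # (3.8) pool <-> pset
    zonecfg -z test-z2 set pool=pool-test-z2                       # (3.9) zone uses the pool
    zoneadm -z test-z2 boot                                        # (3.10) boot the container
    pooladm -s                                                     # (3.11) persist to /etc/pooladm.conf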

    Read the article

  • Dell Studio 1737 Overheating

    - by Sean
    I am using a Dell Studio 1737 laptop. I have been running Linux on it for a very long time, and have run Windows on it recently as well. I upgraded to the 10.10 distribution and since that distro, it seems that for some reason all Linuxes want to push my laptop to extremes. I recently upgraded to Ubuntu 12.04 since I heard that it contains kernel fixes for overheating issues. 12.04 will actually eventually cool the system, but only after the fans run to the point that it sounds like a jet aircraft taking off and the laptop makes my hands sweat.

    In trying to combat the heat problems I have done the following:

    I installed the proprietary driver for my ATI Mobility HD 3600. I have tried both the one in Additional Drivers and ATI's latest version. If I don't install this, my laptop will overheat and shut off in minutes. Both seem to perform similarly, but the heat problem remains.

    I have tried limiting the CPU by installing the CPUFreq Indicator. This does help keep the machine from shutting off, but the heat still makes the machine uncomfortable to be around. I usually run in power saver mode or run the CPU at 1.6 GHz just to err on the side of safety.

    I ran sensors-detect and here are the results:

    sean@sean-Studio-1737:~$ sudo sensors-detect
    # sensors-detect revision 5984 (2011-07-10 21:22:53 +0200)
    # System: Dell Inc. Studio 1737 (laptop)
    # Board: Dell Inc. 0F237N
    This program will help you determine which kernel modules you need to load to use lm_sensors most effectively. It is generally safe and recommended to accept the default answers to all questions, unless you know what you're doing.
    Some south bridges, CPUs or memory controllers contain embedded sensors. Do you want to scan for them? This is totally safe. (YES/no): y
    Module cpuid loaded successfully.
    Silicon Integrated Systems SIS5595... No
    VIA VT82C686 Integrated Sensors... No
    VIA VT8231 Integrated Sensors... No
    AMD K8 thermal sensors... No
    AMD Family 10h thermal sensors... No
    AMD Family 11h thermal sensors... No
    AMD Family 12h and 14h thermal sensors... No
    AMD Family 15h thermal sensors... No
    AMD Family 15h power sensors... No
    Intel digital thermal sensor... Success! (driver `coretemp')
    Intel AMB FB-DIMM thermal sensor... No
    VIA C7 thermal sensor... No
    VIA Nano thermal sensor... No
    Some Super I/O chips contain embedded sensors. We have to write to standard I/O ports to probe them. This is usually safe. Do you want to scan for Super I/O sensors? (YES/no): y
    Probing for Super-I/O at 0x2e/0x2f
    Trying family `National Semiconductor/ITE'... No
    Trying family `SMSC'... No
    Trying family `VIA/Winbond/Nuvoton/Fintek'... No
    Trying family `ITE'... No
    Probing for Super-I/O at 0x4e/0x4f
    Trying family `National Semiconductor/ITE'... Yes
    Found `ITE IT8512E/F/G Super IO' (but not activated)
    Some hardware monitoring chips are accessible through the ISA I/O ports. We have to write to arbitrary I/O ports to probe them. This is usually safe though. Yes, you do have ISA I/O ports even if you do not have any ISA slots! Do you want to scan the ISA I/O ports? (YES/no): y
    Probing for `National Semiconductor LM78' at 0x290... No
    Probing for `National Semiconductor LM79' at 0x290... No
    Probing for `Winbond W83781D' at 0x290... No
    Probing for `Winbond W83782D' at 0x290... No
    Lastly, we can probe the I2C/SMBus adapters for connected hardware monitoring devices. This is the most risky part, and while it works reasonably well on most systems, it has been reported to cause trouble on some systems. Do you want to probe the I2C/SMBus adapters now?
    (YES/no): y
    Using driver `i2c-i801' for device 0000:00:1f.3: Intel ICH9
    Module i2c-i801 loaded successfully.
    Module i2c-dev loaded successfully.
    Now follows a summary of the probes I have just done. Just press ENTER to continue:
    Driver `coretemp':
      * Chip `Intel digital thermal sensor' (confidence: 9)
    To load everything that is needed, add this to /etc/modules:
    #----cut here----
    # Chip drivers
    coretemp
    #----cut here----
    If you have some drivers built into your kernel, the list above will contain too many modules. Skip the appropriate ones! Do you want to add these lines automatically to /etc/modules? (yes/NO) y
    Successful!
    Monitoring programs won't work until the needed modules are loaded. You may want to run 'service module-init-tools start' to load them.
    Unloading i2c-dev... OK
    Unloading i2c-i801... OK
    Unloading cpuid... OK
    sean@sean-Studio-1737:~$ sudo service module-init-tools start
    module-init-tools stop/waiting

    I also tried installing i8k, but that didn't work since it didn't seem to be able to communicate with the hardware (it is probably intended for a different kind of device). I also ran acpi -V and here are the results:

    Battery 0: Full, 100%
    Battery 0: design capacity 613 mAh, last full capacity 260 mAh = 42%
    Adapter 0: on-line
    Thermal 0: ok, 49.0 degrees C
    Thermal 0: trip point 0 switches to mode critical at temperature 100.0 degrees C
    Thermal 1: ok, 48.0 degrees C
    Thermal 1: trip point 0 switches to mode critical at temperature 100.0 degrees C
    Thermal 2: ok, 51.0 degrees C
    Thermal 2: trip point 0 switches to mode critical at temperature 100.0 degrees C
    Cooling 0: LCD 0 of 15
    Cooling 1: Processor 0 of 10
    Cooling 2: Processor 0 of 10

    I have hit a wall and don't know what to do now. Any advice is appreciated.
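    Not part of the original question, but the frequency cap the poster applies through the CPUFreq indicator can also be set from a terminal. A minimal sketch, assuming the cpufrequtils package; adjust the core list and frequency step for your CPU:

    # cap the maximum frequency on each core and pick a conservative governor
    sudo apt-get install cpufrequtils
    for cpu in 0 1; do
        sudo cpufreq-set -c "$cpu" -g powersave   # -g selects the governor
        sudo cpufreq-set -c "$cpu" -u 1.6GHz      # -u sets the maximum frequency
    done
    cpufreq-info | grep "current policy"          # verify the new limits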

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly and finally outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved.

    What is an OLAP data mart? In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data, and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools.

    Why build OLAP data marts? Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever popular Microsoft Excel.

    Extension Attributes: The Challenge. One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge, as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube, so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be the subject of a future blog entry, but a sketch of the basic reshaping step follows below.
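    The staging table and the two attribute names in this sketch are hypothetical, purely for illustration; a real implementation would build the IN (...) column list dynamically from metadata so that each client's attribute set is handled without hand-editing:

    -- Hypothetical staging table: one row per (record, attribute name, attribute value)
    -- (assumes a schema named staging already exists)
    CREATE TABLE staging.ExtensionAttributes
    (
        RecordKey int           NOT NULL,
        AttrName  nvarchar(128) NOT NULL,
        AttrValue nvarchar(256) NULL
    );
    GO
    -- Pivot the name/value pairs into proper columns for the data mart
    SELECT  RecordKey, [Segment], [Region]
    FROM    ( SELECT RecordKey, AttrName, AttrValue
              FROM   staging.ExtensionAttributes ) AS src
    PIVOT   ( MAX(AttrValue) FOR AttrName IN ( [Segment], [Region] ) ) AS pvt;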
    What is involved in building an OLAP data mart? There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.

    The only permanent database is the metadata database (shown in orange in the original diagram), which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP data mart has been populated. The OLAP data mart comprises the two blue components: the data mart, which is a relational database, and the OLAP cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together, depending on their reporting requirements.

    So, in broad terms, the steps required to fulfil a client order are as follows:

    Step 1: Prepare metadata
    - Create a set of database names unique to the client's order
    - Modify all package connection strings to be used by SSIS to point to the new databases and file locations

    Step 2: Create relational databases
    - Create the staging and data mart relational databases using dynamic SQL, and set the database recovery mode to SIMPLE as we do not need the overhead of logging anything (see the sketch after this list)
    - Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases

    Step 3: Load staging database
    - Use SSIS to load all data files into the staging database in a parallel operation
    - Load extension files containing name/value pairs. These will provide client-specific attributes in the OLAP cube.

    Step 4: Load data mart relational database
    - Load the data from staging into the data mart relational database, again in parallel where possible
    - Allocate surrogate keys and use SSIS to perform surrogate key lookup during the load of fact tables

    Step 5: Load extension tables & attributes
    - Pivot the extension attributes from their native name/value pairs into proper relational tables
    - Add the extension attributes to the views used by the OLAP cube

    Step 6: Deploy & process OLAP cube
    - Deploy the OLAP database directly to the server using a C# script task in SSIS
    - Modify the connection string used by the OLAP cube to point to the data mart relational database
    - Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions
    - Remove any standard attributes that are not required
    - Process the OLAP cube

    Step 7: Backup and drop databases
    - Drop the staging database as it is no longer required
    - Backup the data mart relational and OLAP databases and ship these to the client's infrastructure
    - Drop the data mart relational and OLAP databases from the build server
    - Mark the order complete and start processing the next order, ad infinitum

    So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
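    As a hint of what Step 2 can look like in practice, here is a minimal sketch of creating an order-specific database with dynamic SQL and switching it to SIMPLE recovery. The naming convention and the metadata variable are invented for illustration, not taken from the author's system:

    -- Hypothetical: derive a database name from an order id held in the metadata database
    DECLARE @OrderId int = 42;                                            -- illustrative value
    DECLARE @DbName  sysname = N'DataMart_' + CAST(@OrderId AS nvarchar(10));
    DECLARE @Sql     nvarchar(max);

    SET @Sql = N'CREATE DATABASE ' + QUOTENAME(@DbName) + N';';
    EXEC sys.sp_executesql @Sql;

    SET @Sql = N'ALTER DATABASE ' + QUOTENAME(@DbName) + N' SET RECOVERY SIMPLE;';
    EXEC sys.sp_executesql @Sql;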

    Read the article

  • Guide to MySQL & NoSQL, Webinar Q&A

    - by Mat Keep
    Yesterday we ran a webinar discussing the demands of next generation web services and how blending the best of relational and NoSQL technologies enables developers and architects to deliver the agility, performance and availability needed to be successful. Attendees posted a number of great questions to the MySQL developers, serving to provide additional insights into areas like auto-sharding and cross-shard JOINs, replication, performance, client libraries, etc. So I thought it would be useful to post those below, for the benefit of those unable to attend the webinar.

    Before getting to the Q&A, there are a couple of other resources that may be useful to those looking at NoSQL capabilities within MySQL:
    - On-Demand webinar (coming soon!)
    - Slides used during the webinar
    - Guide to MySQL and NoSQL whitepaper
    - MySQL Cluster demo, including NoSQL interfaces, auto-sharding, high availability, etc.

    So here is the Q&A from the event.

    Q. Where does MySQL Cluster fit in to the CAP theorem?
    A. MySQL Cluster is flexible. A single Cluster will prefer consistency over availability in the presence of network partitions. A pair of Clusters can be configured to prefer availability over consistency. A full explanation can be found on the MySQL Cluster & CAP Theorem blog post.

    Q. Can you configure the number of replicas? (the slide used a replication factor of 1)
    A. Yes. A cluster is configured by an .ini file. The option NoOfReplicas sets the number of originals and replicas: 1 = no data redundancy, 2 = one copy etc. Usually there's no benefit in setting it >2.

    Q. Interestingly, most (if not all) of the NoSQL databases recommend having 3 copies of data (the replication factor).
    A. Yes, with configurable quorum based reads and writes. MySQL Cluster does not need a quorum of replicas online to provide service. Systems that require a quorum need >2 replicas to be able to tolerate a single failure. Additionally, many NoSQL systems take liberal inspiration from the original GFS paper, which described a 3 replica configuration. MySQL Cluster avoids the need for a quorum by using a lightweight arbitrator. You can configure more than 2 replicas, but this is a tradeoff between incrementally improved availability and linearly increased cost.

    Q. Can you have cross node group JOINs? Wouldn't that run into the risk of flooding the network?
    A. MySQL Cluster 7.2 supports cross-nodegroup joins. A full cross-join can require a large amount of data transfer, which may bottleneck on network bandwidth. However, for more selective joins, typically seen with OLTP and light analytic applications, cross node-group joins give a great performance boost and network bandwidth saving over having the MySQL Server perform the join.

    Q. Are the details of the benchmark available anywhere? According to my calculations it results in approx.
    350k ops/sec per processor, which is the largest number I've seen lately.
    A. The details are linked from Mikael Ronstrom's blog. The benchmark uses a benchmarking tool we call flexAsynch which runs parallel asynchronous transactions. It involved 100 byte reads, of 25 columns each. Regarding the per-processor ops/s, MySQL Cluster is particularly efficient in terms of throughput/node. It uses lock-free minimal copy message passing internally, and maximizes ID cache reuse. Note also that these are in-memory tables; there is no need to read anything from disk.

    Q. Is access control (like table) planned to be supported for NoSQL access mode?
    A. Currently we have not seen much need for full SQL-like access control (which has always been overkill for web apps and telco apps). So we have no plans, though especially with memcached it is certainly possible to turn on connection-level access control. But specifically table level controls are not planned.

    Q. How is the performance of the memcached API with MySQL against memcached+MySQL or any other object cache like Ecache with MySQL DB?
    A. With the memcache API we generally see a memcached response in less than 1 ms, and a small cluster with one memcached server can handle tens of thousands of operations per second.

    Q. Can .NET access the memcached API?
    A. Yes, just use a .NET memcache client such as the enyim or BeIT memcache libraries.

    Q. Is the row level locking applicable when you update a column through the memcached API?
    A. An update that comes through memcached uses a row lock and then releases it immediately. Memcached operations like "INCREMENT" are actually pushed down to the data nodes. In most cases the locks are not even held long enough for a network round trip.

    Q. Has anyone published an example using something like PHP? I am assuming that you just use the PHP memcached extension to hook into the memcached API. Is that correct?
    A. Not that I'm aware of, but absolutely you can use it with PHP or any of the other drivers.

    Q. For beginners, we need more examples.
    A. Take a look here for a fully worked example.

    Q. Can I access MySQL using Cobol (Open Cobol) or C, and if so where can I find the coding libraries etc?
    A. There is a Cobol implementation that works well with MySQL, but I do not think it is Open Cobol. Also there is a MySQL C client library that is a standard part of every MySQL distribution.

    Q. Is there a place to go to find help when testing and/or implementing the NoSQL access?
    A. If using Cluster then you can use the [email protected] alias or post on the MySQL Cluster forum.

    Q. Are there any white papers on this?
    A. Yes - there is more detail in the MySQL Guide to NoSQL whitepaper.

    If you have further questions, please don't hesitate to use the comments below!
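    To make the NoOfReplicas answer above concrete, here is a minimal config.ini sketch for a cluster with two data nodes. All hostnames, paths and memory sizes are invented placeholders, not values from the webinar:

    [ndbd default]
    # 2 = one original plus one replica of every partition
    NoOfReplicas=2
    DataMemory=2G
    IndexMemory=256M

    [ndb_mgmd]
    hostname=mgmt.example.com
    datadir=/var/lib/mysql-cluster

    [ndbd]
    hostname=data1.example.com

    [ndbd]
    hostname=data2.example.com

    [mysqld]
    hostname=sql1.example.com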

    Read the article

  • T4 Performance Counters explained

    - by user13346607
    Now that T4 has been out for a few months, some people might have wondered what details of the new pipeline you can monitor. A "cpustat -h" lists a lot of events that can be monitored, and only very few are self-explanatory. I will try to give some insight on all of them; some of these "PIC events" require an in-depth knowledge of the T4 pipeline. Over time I will try to explain those too, but for the time being they should simply be ignored. (Side note: some counters changed from tape-out 1.1, *only* used in the T4 beta program, to tape-out 1.2, used in the systems shipping today. The list below only covers the tape-out 1.2 counters.) For each pic name (as shown by cpustat), a prose comment:

    - Sel-pipe-drain-cycles, Sel-0-[wait|ready], Sel-[1,2]: Sel-0-wait counts cycles a strand waits to be selected. Some reasons can be counted in detail; these are: Sel-0-ready, cycles a strand was ready but not selected, which can signal pipeline oversubscription; Sel-1, cycles only one instruction or µop was selected; Sel-2, cycles two instructions or µops were selected; Sel-pipe-drain-cycles, cf. PRM footnote 8 to table 10.2.
    - Pick-any, Pick-[0|1|2|3]: Cycles one, two, three, no or at least one instruction or µop is picked.
    - Instr_FGU_crypto: Number of FGU or crypto instructions executed on that vcpu.
    - Instr_ld: dto. for loads.
    - Instr_st: dto. for stores.
    - SPR_ring_ops: dto. for SPR ring ops.
    - Instr_other: dto. for all other instructions not listed above; PRM footnote 7 to table 10.2 lists the instructions.
    - Instr_all: Total number of instructions executed on that vcpu.
    - Sw_count_intr: Number of S/W count instructions on that vcpu (sethi %hi(fc000),%g0 (whatever that is)).
    - Atomics: Number of atomic ops, which are LDSTUB/a, CASA/XA, and SWAP/A.
    - SW_prefetch: Number of PREFETCH or PREFETCHA instructions.
    - Block_ld_st: Block loads or stores on that vcpu.
    - IC_miss_nospec, IC_miss_[L2_or_L3|local|remote]_hit_nospec: Various I$ misses, distinguished by where they hit. All of these count per thread, but only primary events: T4 counts only the first occurrence of an I$ miss on a core for a certain instruction. If one strand misses in I$ this miss is counted, but if a second strand on the same core misses while the first miss is being resolved, that second miss is not counted. This flavour of I$ misses counts only misses that are caused by instructions that really commit (note the "_nospec").
    - BTC_miss: Branch target cache misses.
    - ITLB_miss: ITLB misses (counted synchronously).
    - ITLB_miss_asynch: dto. but counted asynchronously.
    - [I|D]TLB_fill_[8KB|64KB|4MB|256MB|2GB|trap]: H/W tablewalk events that fill the ITLB or DTLB with a translation for the corresponding page size. The "_trap" event occurs if the HWTW was not able to fill the corresponding TLB.
    - IC_mtag_miss, IC_mtag_miss_[ptag_hit|ptag_miss|ptag_hit_way_mismatch]: I$ micro tag misses, with some options for drill-down.
    - Fetch-0, Fetch-0-all: Fetch-0 counts cycles nothing was fetched for this particular strand; Fetch-0-all counts cycles nothing was fetched for all strands on a core.
    - Instr_buffer_full: Cycles the instruction buffer for a strand was full, thereby preventing any fetch.
    - BTC_targ_incorrect: Counts all occurrences of wrongly predicted branch targets from the BTC.
    - [PQ|ROB|LB|ROB_LB|SB|ROB_SB|LB_SB|RB_LB_SB|DTLB_miss]_tag_wait: (ST_q_tag_wait is listed under sl=20.) These counters monitor pipeline behaviour, therefore they are not strand specific. PQ_...: cycles the Rename stage waits for a Pick Queue tag (might signal a memory bound workload in single thread mode, cf. mail from Richard Smith). ROB_...: cycles the Select stage waits for a ROB (ReOrder Buffer) tag. LB_...: cycles the Select stage waits for a Load Buffer tag. SB_...: cycles the Select stage waits for a Store Buffer tag. Combinations of the above are allowed; although some of these events can overlap, the counter will only be incremented once per cycle if any of them occur. DTLB_...: cycles load or store instructions wait at the Pick stage for a DTLB miss tag.
    - [I|D]TLB_HWTW_[L2_hit|L3_hit|L3_miss|all]: Counters for HWTW accesses caused by either DTLB or ITLB misses. Can be further detailed by where they hit.
    - IC_miss_L2_L3_hit, IC_miss_local_remote_remL3_hit, IC_miss: I$ prefetches that were dropped because they miss in either L2$ or L3$. This variant counts misses regardless of whether the causing instruction commits or not.
    - DC_miss_nospec, DC_miss_[L2_L3|local|remote_L3]_hit_nospec: D$ misses, either in general or detailed by where they hit; cf. the explanation of IC_miss in two flavours for an explanation of _nospec and the reasoning for two DC_miss counters.
    - DTLB_miss_asynch: Counts all DTLB misses asynchronously; there is no way to count them synchronously.
    - DC_pref_drop_DC_hit, SW_pref_drop_[DC_hit|buffer_full]: L1-D$ h/w prefetches that were dropped because of a D$ hit, counted per core. The others count software prefetches per strand.
    - [Full|Partial]_RAW_hit_st_[buf|q]: Count events where a load wants data that has not yet been stored, i.e. it is still inside the pipeline. The data might be either still in the store buffer or in the store queue. If the load's data matches in the SB and in the store queue, the data in the buffer takes precedence of course, since it is younger.
    - [IC|DC]_evict_invalid, [IC|DC|L1]_snoop_invalid, [IC|DC|L1]_invalid_all: Counters for invalidated cache evictions, per core.
    - St_q_tag_wait: Number of cycles the pipeline waits for a store queue tag, of course counted per core.
    - Data_pref_[drop_L2|drop_L3|hit_L2|hit_L3|hit_local|hit_remote]: Data prefetches that can be further detailed by either why they were dropped or where they hit.
    - St_hit_[L2|L3], St_L2_[local|remote]_C2C, St_local, St_remote: Store events distinguished by where they hit or where they cause an L2 cache-to-cache transfer, i.e. either a transfer from another L2$ on the same die or from a different die.
    - DC_miss, DC_miss_[L2_L3|local|remote]_hit: D$ misses, either in general or detailed by where they hit. Cf. the explanation of IC_miss in two flavours: this variant counts misses regardless of whether the causing instruction commits or not.
    - L2_[clean|dirty]_evict: Per-core clean or dirty L2$ evictions.
    - L2_fill_buf_full, L2_wb_buf_full, L2_miss_buf_full: Per-core L2$ buffer events; all count the number of cycles that the state was present.
    - L2_pipe_stall: Per-core cycles the pipeline stalled because of L2$.
    - Branches: Counts branches (Tcc, DONE, RETRY, and SIT are not counted as branches).
    - Br_taken: Counts taken branches (Tcc, DONE, RETRY, and SIT are not counted as branches).
    - Br_mispred, Br_dir_mispred, Br_trg_mispred, Br_trg_mispred_[far_tbl|indir_tbl|ret_stk]: Counters for various branch misprediction events.
    - Cycles_user: Counts cycles; the attribute setting hpriv, nouser, sys controls the address space to count in.
    - Commit-[0|1|2], Commit-0-all, Commit-1-or-2: Number of times either no, one, or two µops commit for a strand. Commit-0-all counts the number of times no µop commits for the whole core; cf. footnote 11 to table 10.2 in the PRM for a more detailed explanation of how this counter interacts with the privilege levels.
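    As a usage hint (mine, not from the original post): cpustat takes one or more event specifications plus a sampling interval and count, so the counters above can be watched like this. Check cpustat(1M) on your system, as the exact event-specification syntax varies by processor:

    # sample two of the counters described above once per second, ten times
    cpustat -c pic0=Instr_all,pic1=DC_miss 1 10

    # the same counters for a single process, via the companion cputrack(1) command
    cputrack -T 1 -N 10 -c pic0=Instr_all,pic1=DC_miss -p 1234    # 1234 = an example pid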

    Read the article

  • DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

    - by Argenis
    Does your CHECKDB hurt, Argenis? There is a classic blog series by Paul Randal [blog|twitter] called “CHECKDB From Every Angle” which is pretty much mandatory reading for anybody who’s even remotely considering going for the MCM certification, or its replacement (the Microsoft Certified Solutions Master: Data Platform – makes my fingers hurt just from typing it). Of particular interest is the post “Consistency Options for a VLDB” – in it, Paul provides solid, timeless advice (I use the word “timeless” because it was written in 2007, and it all applies today!) on how to perform checks on very large databases.

    Well, here I was trying to figure out how to make CHECKDB run faster on a restored copy of one of our databases, which happens to exceed 7TB in size. The whole thing was taking several days on multiple systems, regardless of the storage used – SAS, SATA or even SSD…and I actually didn’t pay much attention to how long it was taking, or even bother to look at the reasons why, as long as it was finishing okay and found no consistency errors. Yes – I know. That was a huge mistake, as corruption found in a database several days after taking place could only allow for further spread of the corruption – and potentially large data loss. In the last two weeks I increased my attention towards this problem, as we noticed that CHECKDB was taking EVEN LONGER on brand new all-flash storage in the SAN! I couldn’t really explain it, and was almost ready to blame the storage vendor. The vendor told us that they could initially see the server driving decent I/O – around 450Mb/sec – and then it would settle at a very slow rate of 10Mb/sec or so. “Hum”, I thought – “CHECKDB is just not pushing the I/O subsystem hard enough”. Perfmon confirmed the vendor’s observations.

    Dreaded @BlobEater. What was CHECKDB doing all the time while doing so little I/O? Eating blobs. It turns out that CHECKDB was taking an extremely long time on one of our frankentables, which happens to have 35 billion rows (yup, with a b) and sucks up several terabytes of space in the database. We do have a project ongoing to purge/split/partition this table, so it’s just a matter of time before we deal with it. But the reality today is that CHECKDB is coming to a screeching halt in performance when dealing with this particular table.

    Checking sys.dm_os_waiting_tasks and sys.dm_os_latch_stats showed that LATCH_EX (DBCC_OBJECT_METADATA) was by far the top wait type. I remembered hearing recently about that wait from another post that Paul Randal made, but that was related to computed-column indexes, and in fact, Paul himself reminded me of his article via twitter. But alas, our pathological table had no non-clustered indexes on computed columns. I knew that latches are used by the database engine to do internal synchronization – but how could I help speed this up? After all, this is stuff that doesn’t have a lot of knobs to tweak. (There’s a fantastic level 500 talk by Bob Ward from Microsoft CSS [blog|twitter] called “Inside SQL Server Latches” given at PASS 2010 – and you can check it out here. DISCLAIMER: I assume no responsibility for any brain melting that might ensue from watching Bob’s talk!)
    Failed Hypotheses. Earlier this week I flew down to Palo Alto, CA, to visit our headquarters – and after having a great time with my Monkey peers, I was relaxing on the plane back to Seattle watching a great talk by SQL Server MVP and fellow MCM Maciej Pilecki [twitter] called “Masterclass: A Day in the Life of a Database Transaction”, where he discusses many different topics related to transaction management inside SQL Server. Very good stuff, and when I got home it was a little late – that slow DBCC CHECKDB that I had been dealing with was way in the back of my head. As I was looking at the problem at hand earlier this week, I thought “How about I set the database to read-only?” I remembered one of the things Maciej had (jokingly) said in his talk: “if you don’t want locking and blocking, set the database to read-only” (or something to that effect, pardon my loose memory). I immediately killed the CHECKDB which had been running painfully for days, and set the database to read-only mode. Then I ran DBCC CHECKDB against it. It started going really fast (even a bit faster than before), and then throttled down again to around 10Mb/sec. All sorts of expletives went through my head at the time. Sure enough, the same latching scenario was present. Oh well. I even spent some time trying to figure out if NUMA was hurting performance. Folks on Twitter made suggestions in this regard (thanks, Lonny! [twitter]).

    …Eureka? This past Friday I was still scratching my head about the whole thing; I was ready to start profiling with XPERF to see if I could figure out which part of the engine was to blame and then get Microsoft to look at the evidence. After getting a bunch of good news I’ll blog about separately, I sat down for a figurative smackdown with CHECKDB before the weekend. And then the light bulb went on. A sparse column. I thought that I couldn’t possibly be experiencing the same scenario that Paul blogged about back in March showing extreme latching with non-clustered indexes on computed columns. Did I even have a non-clustered index on my sparse column? As it turns out, I did. I had one filtered non-clustered index – with the sparse column as the index key (and only column). To prove that this was the problem, I went and set up a test.

    Yup, that'll do it. The repro is very simple for this issue: I tested it on the latest public builds of SQL Server 2008 R2 SP2 (CU6) and SQL Server 2012 SP1 (CU4). First, create a test database and a test table, which only needs to contain a sparse column:

    CREATE DATABASE SparseColTest;
    GO
    USE SparseColTest;
    GO
    CREATE TABLE testTable (testCol smalldatetime SPARSE NULL);
    GO
    INSERT INTO testTable (testCol)
    VALUES (NULL);
    GO 1000000

    That’s 1 million rows, and even though you’re inserting NULLs, that’s going to take a while. On my laptop, it took 3 minutes and 31 seconds. Next, we run DBCC CHECKDB against the database:

    DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;

    This runs extremely fast, at least on my test rig – 198 milliseconds. Now let’s create a filtered non-clustered index on the sparse column:

    CREATE NONCLUSTERED INDEX [badBadIndex]
    ON testTable (testCol)
    WHERE testCol IS NOT NULL;

    With the index in place, let’s run DBCC CHECKDB one more time:

    DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;

    On my test system this statement completed in 11433 milliseconds. 11.43 full seconds. Quite the jump from 198 milliseconds.
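    If you want to confirm the same latch pattern on your own repro, the DMV the author checked can be sampled around the CHECKDB run. A small sketch (DBCC SQLPERF accepts this DMV name for clearing its cumulative statistics):

    -- Reset cumulative latch statistics, then run the slow DBCC CHECKDB
    DBCC SQLPERF('sys.dm_os_latch_stats', CLEAR);
    GO
    DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;
    GO
    -- Top latch classes by accumulated wait time; expect DBCC_OBJECT_METADATA near the top
    SELECT TOP (10)
           latch_class,
           waiting_requests_count,
           wait_time_ms
    FROM   sys.dm_os_latch_stats
    ORDER BY wait_time_ms DESC;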
I went ahead and dropped the filtered non-clustered indexes on the restored copy of our production database, and ran CHECKDB against that. We went down from 7+ days to 19 hours and 20 minutes. Cue the “Argenis is not impressed” meme, please, Mr. LaRock. My pain is your gain, folks. Go check to see if you have any of such indexes – they’re likely causing your consistency checks to run very, very slow. Happy CHECKDBing, -Argenis ps: I plan to file a Connect item for this issue – I consider it a pretty serious bug in the engine. After all, filtered indexes were invented BECAUSE of the sparse column feature – and it makes a lot of sense to use them together. Watch this space and my twitter timeline for a link.

    Read the article

  • CodePlex Daily Summary for Tuesday, July 02, 2013

    CodePlex Daily Summary for Tuesday, July 02, 2013

    Popular Releases

    Mastersign.Expressions v0.4.2: added support for if(<cond>, <true-part>, <false-part>); fixed a multithreading issue with rand(); improved the demo application.

    NB_Store - Free DotNetNuke Ecommerce Catalog Module v2.3.6 Rel0: v2.3.6 is now DNN6 and DNN7 compatible. Important: during update this install will overwrite the menu.xml setting; if you have changed it, make a backup before you upgrade and reapply your changes after the upgrade. Please view the documentation (System Requirements, Skill requirements, Downloads and documents, Step by step guide to a working store) if you are installing and configuring this module for the first time. Please ask all questions in the Discussions tab.

    Document.Editor 2013.26: new Insert Chart; improved user interface; minor bug fixes, improvements and speed-ups.

    Wsus Package Publisher V1.2.1307.01: fixed an issue in the UI where approvals were not shown correctly in the 'Report' tab.

    DirectX Tool Kit (July 2013): VS 2013 Preview projects added and updates for DirectXMath 3.05 vectorcall; added use of sRGB WIC metadata for JPEG, PNG, and TIFF; SaveToWIC functions updated with a new optional setCustomProps parameter and an error check with optional targetFormat.

    Core Server 2012 Powershell Script Hyper-v Manager new_root.zip: Version 1.0.

    JSON Toolkit 4.1.736: improved stringify performance; new serializing feature; new anonymous type support in constructors.

    DotNetNuke® IFrame 04.05.00: new DNN6/7 manifest file and Azure compatibility.

    VidCoder 1.5.2 Beta: fixed a crash on presets with an invalid bitrate.

    Gardens Point LEX 1.2.1: the main distribution is a zip file containing the binary executable, documentation, source code and the examples. Version 1.2.1 has new facilities for defining and manipulating character classes, which make the construction of large Unicode character classes more convenient. The runtime code for performing automaton backup has been re-implemented and is now faster for scanners that need backup. The distribution contains a complete VS2010 project for the appli...

    ZXMAK2 2.7.5.7: fixed TZX emulation (Bruce Lee, Zynaps); fixed ATM 16 colors for border; added memory module PROFI 512K; added PROFI V03 rom image; fixed PROFI 3.XX config.

    Twitter Image Downloader 2 with Installer: application file with InstallShield and .NET 4.0 redistributable.

    Ultimate Music Tagger 1.0.0.0: first release of Ultimate Music Tagger.

    BlackJumboDog Ver5.9.2 (2013.06.28): Japanese release notes covering (1) an SMTP-related item and (2) an HTTPS-related item; the details are garbled in the source feed.

    Outlook 2013 Add-In (Configuration Form): refactored the code a bit; moved configuration out of the main form (to gain more space to display items) into a separate form, reachable via the little "gear" icon (still very simple); added an option to show past appointments from the selected day (previous in time, that is); added some tooltips. You will have to uninstall the previous version (add/remove programs) if you had installed it ...

    Terminals 3.0: changes since version 2.0: choose 100% portable or installed version; removed the connection warning when running the RDP 8 (Windows 8) client; fixed Active Directory search; extended Active Directory search with LDAP filters; fixed single instance mode when running on a Windows terminal server; merged the usage of Tags and Groups; added a column sorting option in tables; no UAC prompts on Windows 7; completely new file persistence data layer; new MS SQL persistence layer (store data in a SQL database)...

    NuGet 2.6: released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6

    Python Tools for Visual Studio 2.0 Beta: we're pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, and cross-platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...

    Player Framework by Microsoft v1.3 beta (Windows 8 and WP8): preview of a new MPEG DASH adaptive streaming plugin for Windows Azure Media Services; preview of a new Ultraviolet CFF plugin; preview of a new WP7 version with WP8 compatibility (source code only); source code is now available via CodePlex Git; misc bug fixes and improvements, including optional fullscreen and mute buttons in the default XAML (WP8 only) and protecting currentTime from returning infinity in JS (some videos would cause currentTime to be infinity, which could cause errors in plugins expectin...

    AssaultCube Reloaded 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as the release also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes - or better yet, try to compile it; if that fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compi...

    New Projects

    ALM Rangers DevOps Tooling and Guidance: practical tooling and guidance that will enable teams to realize faster deployment based on continuous feedback.
    Core Server 2012 Powershell Script Hyper-v Manager: free Core Server 2012 PowerShell scripts and batch files that replace the non-existent Hyper-V manager, vmconnect and mstsc.
    Enhanced Deployment Service (EDS): a web-service-based utility designed to extend the deployment capabilities of administrators with the Microsoft Deployment Toolkit.
    ExtendedDialogBox: a DialogBox library.
    Jazdy: this project is here only because we wanted to take advantage of a public git server.
    Mon Examen: this web interface is meant to make examinations.
    neet: summary
    Orchard Multi-Choice Voting: a multiple-choice voting Orchard module.
    Particle Swarm Optimization Solving Quadratic Assignment Problem: this project is submitted for the solving of the QAP using PSO algorithms with the addition of some modifications.
    Porjects: 23123123
    PPL Power Pack: PPL Power Pack.
    Property Builder: a Visual Studio tool for speeding up the coding of class property getters and setters.
    RedRuler for Redline: I tried some on-screen rulers, but none of them helped me measure UI elements quickly against a Redline, so I decided to create this handy RedRuler tool.
    Royale Living: Mahindra Royale community portal.
    Search and booking Hotel or Tours: a student research project (TDT) built on the MVC 4 pattern.
    SystemBuilder.Show: a helper tool for creating the respective objects and interface after you create your project in Visual Studio.
    TalentDesk: new project.
    Tcmplex: the Training Center teaches many different kinds of courses, such as English, French, computer hardware and computer software.
    TFS Reporting Guide: provides guidance and samples to enable TFS users to generate reports based on WIT data.
    Umbraco AdaptiveImages: an Adaptive Images package for Umbraco.
    VirtualNet - an IL code interpreter/emulator written in C++/Assembly: VirtualNet is an interpreter/emulator for running .NET code natively without having to install the .NET Framework.
    Visual Blocks: a visual IDE (description garbled in the source).
    Visual Studio and Cloud Based Mobile Device Testing: practical guidance enabling the field to remove blockers to adoption and to use and extend Perfecto Mobile cloud device testing within the context of VS.
    Windows 8 Time Picker for Windows Phone: a Windows Phone implementation of the Time Picker control found in the Windows 8.1 Alarms app.
    A SmallBasic-related project (name and description garbled in the source).

    Read the article

  • The challenge of communicating externally with IRM secured content

    - by Simon Thorpe
    I am often asked by customers how they should handle sending IRM-secured documents to external parties. Their concern is that when they use IRM to secure sensitive information they need to share outside their business, third parties may be unable to install the software that enables them to access that information. It is a very legitimate question, and one I've had to answer many times over the past 10 years whilst helping customers plan successful IRM deployments.

    The operating system does not provide the required level of content security

    The problem arises from what IRM delivers: persistent security for your sensitive information, wherever it resides and whenever it is in use. Oracle IRM gives customers an array of features that help ensure sensitive information in an IRM document or email is always protected and only accessed by authorized users using legitimate applications. Examples of such functionality are: control of the clipboard, either by disabling it completely in the opened document or by allowing the cutting and pasting of information between secured IRM documents but not into insecure applications; protection against programmatic access to the document (Office documents and PDF documents can be accessed by other applications and scripts, and with Oracle IRM we have to protect against this to ensure content cannot be leaked by someone writing a simple program); and securing of decrypted content in memory. At some point during the process of opening and presenting a sealed document to an end user, we must decrypt it and give it to the application (Adobe Reader, Microsoft Word, Excel etc). This process must be secure, so that someone cannot simply get access to the decrypted information. The operating system alone just doesn't have the functionality to deliver these types of features. This is why every IRM technology requires some extra software to be installed, and typically that software requires administrative rights. The fact is that if you want very strong security and access control over a document you are going to send to someone beyond your network infrastructure, there must be some software to provide that functionality.

    Simple installation with Oracle IRM

    The software used to control access to Oracle IRM sealed content is called the Oracle IRM Desktop. It is a small, free piece of software, roughly 12 MB in size. This software delivers the functionality for everything a user needs to work with an Oracle IRM solution: support for all the formats we handle, the storage and transparent synchronization of user rights and, unique to Oracle, the ability to search inside sealed files stored on the local computer. At Oracle we've made every technical effort to ensure that installing this software is as simple as possible. In situations where the user's computer is part of the enterprise, this software is typically deployed using existing technologies such as Systems Management Server from Microsoft or Active Directory Group Policies. However, when sending sealed content externally, you cannot automatically install software on the end user's machine; you need to rely on them to download and install it themselves. Again, we've made every effort for this manual install process to be as simple as we can, starting with the small download size of the software itself through to the simple installation process; most end users are able to install and access sealed content very quickly.
    You can see for yourself how easily this is done by walking through our free and easy self-service demonstration of using sealed content.

    How to handle objections and ensure there is value

    However, the fact still remains that end users may object to installing the software, or may simply be unable to install it themselves due to lack of permissions. This is often a problem with any technology that requires specialized software to access a new type of document. At Oracle, over the past 10 years, we've learned many ways to get over this barrier of getting software deployed by external users. First, and I would say most importantly, the content MUST have some value to the person you are asking to install the software. Without some type of value proposition you are going to find it very difficult to get past objections to installing the IRM Desktop. Imagine if you were going to secure the weekly campus restaurant menu and send it to contractors. Their initial response would be, "why on earth are you asking me to download some software just to access your menu!?". A valid objection... there is no value to the user in doing this. Now consider the scenario where you are sending one of your contractors their employment contract, which contains their address, social security number and bank account details. Are they likely to take 5 minutes to install the IRM Desktop? You bet they are, because there is real value in doing so and they understand why you are doing it. They want their personal information to be securely handled, and a quick download and install of some software is a small task in comparison to dealing with the loss of that information.

    Be clear in communicating this value

    So when sending sealed content to people externally, you must be clear in communicating why you are using an IRM technology and why they need to install some software to access the content. Do not try to avoid the issue; be clear and upfront about it. In doing so you will significantly reduce the "I didn't know I needed to do this..." responses, and also gain respect for being straightforward. One customer I worked with called me, 6 months after their initial deployment of Oracle IRM, panicking because a partner they had started to share their engineering documents with refused to install any software to access this highly confidential intellectual property. I explained they had to communicate to the partner why they were doing this. I told them to go back with the statement that "the company takes protecting its intellectual property seriously and has decided to use IRM to control access to engineering documents", and that if the partner didn't respect this decision, they would find another company that would. The result? A few days later the partner had made the Oracle IRM Desktop part of their approved list of software in the company.

    Companies are successful when sending sealed content to third parties

    We have many, many customers who send sensitive content to third parties. Some customers actually sell access to Oracle IRM protected content, and therefore 99% of their users are external to their business; one in particular has sold content to hundreds of thousands of external users. Oracle themselves use the technology to secure M&A documents, payroll data and security assessments which go beyond the traditional enterprise security perimeter.
    Pretty much every company that deploys Oracle IRM will at some point be sending documents to people outside the company; these customers must be successful, otherwise Oracle IRM wouldn't be successful. Because our software is used by a wide variety of companies, some of whom use it to sell content, I've often run into people I'm sharing a sealed document with who already have the IRM Desktop installed from accessing content from another company.

    The future

    In summary, I would say that yes, this is a hurdle that many customers are concerned about, but we see much evidence that in practice people leap that hurdle with relative ease, as long as they are good at communicating the value of using IRM and also take measures to ensure end users can easily go through the installation process. We are constantly developing new ideas to reduce this hurdle, and maybe one day the operating systems will give us rich enough security functionality to require no software installation. Until then, Oracle IRM is by far the easiest solution to balance security and usability for your business. If you would like to evaluate it for yourselves, please contact us.

    Read the article

  • Persisting Session Between Different Browser Instances

    - by imran_ku07
    Introduction:

    By default, the InProc session identifier cookie is saved in browser memory; this is known as a non-persistent cookie identifier. It simply means that if the user closes the browser, the cookie is immediately removed. Cookies which are stored on the user's hard drive and can be reused on later visits are called persistent cookies. Persistent cookies are used less often than non-persistent cookies for security reasons: non-persistent cookies make session hijacking attacks more difficult and more limited. If you are using a shared computer, there is a real chance that your persisted session will be reused by other users of that machine. That said, many users want their session to persist even when they open two instances of the same browser, or when they close and reopen the browser. So in this article I will show a very simple way to persist your session even after the browser is closed.

    Description:

    Let's create a simple ASP.NET Web Application. In this article I will use Web Forms, but it also works in MVC. Open Default.aspx.cs and add the following code to Page_Load:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (Session["Message"] != null)
                Response.Write(Session["Message"].ToString());
            Session["Message"] = "Hello, Imran";
        }

    This page simply shows a message if the session existed previously, and then sets the session value.

    Now just run the application; you will see an empty page on the first request. After refreshing the page you will see the message "Hello, Imran". Now close the browser and reopen it, or just open another browser instance: you will get exactly the same behavior as when you ran the application for the first time. Why is the session not persisted between browser instances? The simple reason is the non-persistent session cookie identifier: the session cookie is not shared between browser instances. Now let's make it persistent.

    To make your application share the session between different browser instances, add the following code to global.asax:

        protected void Application_PostMapRequestHandler(object sender, EventArgs e)
        {
            if (Request.Cookies["ASP.NET_SessionIdTemp"] != null)
            {
                if (Request.Cookies["ASP.NET_SessionId"] == null)
                    Request.Cookies.Add(new HttpCookie("ASP.NET_SessionId", Request.Cookies["ASP.NET_SessionIdTemp"].Value));
                else
                    Request.Cookies["ASP.NET_SessionId"].Value = Request.Cookies["ASP.NET_SessionIdTemp"].Value;
            }
        }

        protected void Application_PostRequestHandlerExecute(object sender, EventArgs e)
        {
            HttpCookie cookie = new HttpCookie("ASP.NET_SessionIdTemp", Session.SessionID);
            cookie.Expires = DateTime.Now.AddMinutes(Session.Timeout);
            Response.Cookies.Add(cookie);
        }

    In Application_PostRequestHandlerExecute (which runs after the HttpHandler), we add a persistent cookie, ASP.NET_SessionIdTemp, containing the current user's SessionID, with its expiry set to the current session timeout. In Application_PostMapRequestHandler (which runs just before the session is restored), we check whether the request contains an ASP.NET_SessionIdTemp cookie; if it does, we add or update the non-persistent ASP.NET_SessionId cookie with its value. So when a new browser instance is opened, the persistent cookie is found and the previous session identifier is restored.

    Run your application again: you will get the last closed browser session (provided it has not expired).

    Summary:

    A persistent session is a great way to improve usability, but always weigh the security implications before doing this. There are, however, cases in which you genuinely need a persistent session; in this article I showed a simple way to achieve it. Hopefully you will enjoy this simple article too.

    Read the article

  • Azure Diagnostics: The Bad, The Ugly, and a Better Way

    - by jasont
    If you’re a .Net web developer today, no doubt you’ve enjoyed watching Windows Azure grow up over the past couple of years. The platform has scaled, stabilized (mostly), and added a slew of great (and sometimes overdue) features. What was once just an endpoint to host a solution now gives developers tremendous flexibility and options in the platform. Organizations are building new solutions and offerings on the platform, and others have migrated, or are in the process of migrating, existing applications out of their own data centers into the Azure cloud. Whether doing new application development or migrating legacy systems, every development shop and IT organization needs to monitor their applications in the cloud, the same as they do on premises. Azure Diagnostics has some capabilities, but what I constantly hear from users is that it’s either (a) not enough, or (b) too cumbersome to set up. Today, Stackify is happy to announce that we fully support Azure deployments, just the same as your on-premises deployments. Let’s take a look below and compare and contrast the options.

    Azure Diagnostics

    Let’s crack open the Windows Azure documentation on Azure Diagnostics and see just how easy it is to use. The high-level steps are:

    Step 1: Import the Diagnostics module. Oh, I’ve already deployed my app without the diagnostics module. Guess I can’t do anything until I do this and re-deploy.

    Step 2: Configure the Diagnostics (multiple sub-steps). Do I want it all? Or just pieces of it? Whoops, forgot to include a specific performance counter; I guess I’ll have to deploy again. Wait a minute… I have to specifically code these performance counters into my role’s OnStart() method, compile and deploy again? And query and consume the data myself?

    Step 3: (Optional) Permanently store diagnostic data. Lucky for me, Azure storage has gotten pretty cheap. But how often should I move the data into storage? I want to see real-time data, so I guess that’s out as well.

    Step 4: (Optional) View stored diagnostic data. Optional? Of course I want to see it. Conveniently, Microsoft recommends 3 tools to do this with. Un-conveniently, none of these are web based, and they all just give you access to raw data with very little charting or real-time intelligence. Just….. data. Never mind that one product seems to have gone stale since a recent acquisition, and doesn’t even have screenshots!

    So, let’s summarize: lots of diagnostics data is available, but think realistically. Think DevOps. What happens when you are in the middle of a major production performance issue and you don’t have the diagnostics you need? You are redeploying an application to get data (and thankfully you have a great branching strategy, so you feel perfectly safe just willy-nilly launching code into prod, don’t you?), then shipping the data to storage, and then digging through it to find a needle in a haystack. Would you like to be able to troubleshoot a performance issue in the middle of the night, or on a weekend, from your iPad or home computer’s web browser? Forget it: the best you get is a spark line in the Azure portal. If it’s real pointy, you probably have an issue; but since there is no alert based on a threshold, your customers have likely already let you know. And high CPU, memory, I/O, or network doesn’t tell you anything about where the problem is.

    The Better Way – Stackify

    Stackify supports application and server monitoring in real time, all through a great web interface.
    All of the things that Azure Diagnostics provides, Stackify provides for your on-premises deployments, and you don’t need to know ahead of time that you’ll need it. It’s always there, it’s always on. Azure deployments are essentially no different from on-premises: it’s a Windows Server (or Linux) box in the cloud, behind a different firewall than your corporate servers. That’s it. Stackify can provide the same powerful tools for your Azure deployments in two simple steps.

    Step 1: Add a startup task to your web or worker role and deploy. If you can’t deploy and need it right now, no worries! Remote Desktop to the Azure instance and you can execute a PowerShell script to download and install Stackify.

    Step 2: Log in to your account at www.stackify.com and begin monitoring as much as you want, as often as you want, and see the results instantly. WMI? It’s there. Event Viewer? You’ve got it. File system access? Yes, please! Would love to make sure my web.config is correct. IIS / app pool info? Yep. You can even restart it. Running services? All of them; start and stop them to your heart’s content. SQL database access? You bet’cha. Alerts and notifications? Of course! You should know before your customers let you know. … and so much more.

    Conclusion

    Microsoft has shown, consistently, that they love developers, developers, developers. What every developer needs to realize from this is that they’ve given you a canvas, which is exactly what Azure is. It’s great infrastructure that is readily available, easy to manage, and fairly cost effective. However, the tooling is your responsibility, and what you get, at best, is bare bones. App and server diagnostics should be available when you need them. While we, as developers, try to plan for and think of everything ahead of time, there will come times when we need data that just isn’t available. Having to go through a lot of cumbersome steps to get that data, and then having to find a friendlier way to consume it… well, that just doesn’t make a lot of sense to me. I’d rather spend my time writing features and fixing bugs in my applications than writing code to monitor and diagnose them.

    Read the article

  • Profiling Startup Of VS2012 – SpeedTrace Profiler

    - by Alois Kraus
    SpeedTrace is a relatively unknown profiler made by a company called Ipcas. A single professional license costs €449 + VAT. For this test I used SpeedTrace 4.5, which is currently in beta. Although it is cheaper than dotTrace, it has by far the most options to influence how profiling works. First you need to create a tracing project, which configures tracing for one process type. You can start the application directly from the profiler or (much more interesting) have it attach to a specific process when that process is started. For this you need to check the “Trace the specified …” radio button and enter the process name in the “Process Name of the Trace” edit box. You can even selectively enable tracing for processes with a specific command line. Then you activate the trace project by pressing the Activate Project button, and you are ready to start VS as usual. If you want to profile the next 10 VS instances that you start, you can set the Number of Processes counter to e.g. 10. This is immensely helpful if you are trying to profile only the next 5 started processes. As you can see, there are many more tabs which allow you to influence tracing in a much more sophisticated way. SpeedTrace is the only profiler that does not rely entirely on the profiling API of .NET. Instead it modifies the IL code (instrumentation on the fly) to write tracing information to disk, which can later be analyzed. This approach is not only very fast, it also gives you unprecedented analysis capabilities. Once the traces are collected, they show up in your workspace, where you can open the trace viewer. I will skip the other windows because this view is by far the most useful one. You can sort the methods not only by wall clock time but also by CPU consumption and wait time, which none of the other products support in their views at the same time. If you want to optimize for CPU consumption, sort by CPU time. If you want to find out where most time is spent, you need Clock Total time and Clock Waiting. There you can directly see whether a method took long because it was waiting on something or because it really executed work that took that long. Once you have found a method you want to drill into, you can double-click on it to get to the Caller/Callee view, which is similar to the JetBrains Method Grid view. But this time you see much more. In the middle is the clicked method. Above are the methods that call it, and below are the methods that it directly calls. Normally you would then start digging deeper to find the end of the chain where the slow method worth optimizing is located. But there is a shortcut: you can press the magic button to calculate the aggregation of all called methods. This is displayed in the lower left window, where you can see each method call and how long it took. There you can also sort, to see whether a call stack contains only methods not worth optimizing (e.g. WCF connect calls which you cannot make faster). YourKit has a similar feature, where it is called the Callees List. In the Functions tab the context menu offers many other useful analysis options. One really outstanding feature is the View Call History Drilldown. When you select this, you get not a sum of all method invocations but a list with the duration of each individual method call. This is not surprising, since SpeedTrace uses tracing to get its timings. There you get many useful graphs of how this method behaved over time. Did it become slower at some point in time, or was only the first call slow?
    The diagrams and the list will tell you. That is all fine, but what should I do when one method call was slow? I want to see where it was coming from. No problem: select the method in the list, hit F10, and you get the call stack. This is a life saver if you are, for example, searching for serialization problems. Today serializers are used everywhere. You want to find out where a 5 s XmlSerializer.Deserialize call came from? Hit F10 and you get the call stack which invoked that 5 s Deserialize call. The CPU timeline tab is also useful to find out where long pauses or excessive CPU consumption happened. Click in the graph to get the Thread Stacks window, where you get a quick overview of what all threads were doing at that time. This looks like the Stack Traces feature in YourKit, only this time you get the last called method first, which helps you quickly see what all threads were executing at that moment. YourKit generates a rather long list, which can be hard to go through when you have many threads. The thread list in the middle does not give you call stacks or anything like that, but you do see which methods the profiler most often found executing code, which is a good indication of the methods consuming the most CPU time. Does this sound too good to be true? I have not told you the best part yet. The best thing about this profiler is the staff behind it. When I see a crash or some other odd behavior, I send a mail to Ipcas, and I usually get a mail the next day saying that the problem has been fixed, with a download link to the new version. The guys at Ipcas are even so helpful as to log in to your machine via a Citrix client to help you get started profiling the actual application you want to profile. After a 2h telco I was converted from a hater to a believer of this tool. The fast response time might also have something to do with the fact that they are actively working to get 4.5 out of the door. But still, the support is by far the best I have encountered so far. The only downside is that you should instrument your assemblies, including the .NET Framework, to get the most accurate numbers. You can profile without doing so, but then you will see very high JIT times in your process, which can severely affect the correctness of the measured timings. If you do not care about exact numbers, you can also enable logging of method arguments of primitive types in the Data Trace tab of the main UI. If you need to know which files were opened at which times by your application, you can find out without a debugger. Since SpeedTrace reads huge trace files in its reader, you should perhaps use a 64-bit machine to be able to analyze bigger traces. The memory consumption of the trace reader is too high for my taste, but they have promised something much improved for the next version.

    Read the article

  • Create and Backup Multiple Profiles in Google Chrome

    - by Asian Angel
    Other browsers such as Firefox and SeaMonkey allow you to have multiple profiles, but not Chrome… at least not until now. If you want to use multiple profiles and create backups for them, then join us as we look at Google Chrome Backup. Note: there is a paid version of this program available, but we used the free version for our article.

    Google Chrome Backup in Action

    During the installation process you will run across this particular window. It will have a default user name filled in, as shown here… you will not need to do anything except click on Next to continue installing the program. When you start the program for the first time, this is what you will see: your default Chrome profile will already be visible in the window. A quick look at the Profile menu… In the Tools menu you can go ahead and disable the “Start program at Windows startup” setting… the only time that you will need the program running is when you are creating or restoring a profile. When you create a new profile, the process starts with this window. You can access an Advanced Options mode if desired, but most likely you will not need it. Here is a look at the Advanced Options mode; it is mainly focused on adding switches to the new Chrome shortcut. The drop-down menu for the switches available… To create your new profile you will need to choose a profile location and a profile name (as you type/create the profile name it will automatically be added to the Profile Path), and make certain that the “Create a new shortcut to access new profile” option is checked. For our example we decided to try out the “Disable plugins” switch option… Click OK to create the new profile. Once you have created your new profile, you will find a new shortcut on the Desktop. Notice that the shortcut’s name will be “Google Chrome” plus the profile name that you chose. Note: on our system we were able to move the new shortcut to the “Start Menu” without problems. Clicking on our new profile’s shortcut opened up a fresh and clean-looking instance of Chrome. Just out of curiosity we decided to check the shortcut to see if the switch was set up correctly. Unfortunately it was not in this instance… so your mileage with the switches may vary. This was just a minor quirk and nothing to get excited or upset over… especially considering that you can create multiple profiles so easily. After opening up our default profile of Chrome you can see the individual profile icons (New & Default, in order) sitting in the Taskbar side by side. And our two profiles open at the same time on our Desktop…

    Backing Profiles Up

    For the next part of our tests we decided to create a backup for each of our profiles. Starting the wizard allows you to choose between creating and restoring a profile. Note: to create or restore a backup, click on Run Wizard. When you reach the second part of the process, you can go with the “Backup default profile” option or choose a particular one from a drop-down list using the “Select a profile to backup” option. We chose to back up the Default Profile first… In the third part of the process you will need to select a location to save the profile to. Once you have selected the location you will see the Target Path as shown here. You can choose your own name for the backup file… we decided to go with the default name instead, since it contained the backup’s calendar date. A very nice feature is the ability to have the cache cleared before creating the backup. We clicked on Yes… choose the option that best suits your needs.
    Once you have chosen either Yes or No, the backup will be created. Click Finish to complete the process. The backup file for our Default Profile came in at 14.0 MB, and the backup file for our Chrome Fresh Profile at 2.81 MB.

    Restoring Profiles

    For the final part of our tests we decided to do a restore. Select Restore and click Next to get the process started. In the second step you will need to browse for the Profile Backup File (and select the desired profile if you have created multiples). For our example we decided to overwrite the original Default Profile with the Chrome Fresh Profile. The third step lets you choose where to restore the chosen profile to… you can go with the Default Profile or choose one from the drop-down list using the “Restore to a selected profile” option. The final step will get you on your way to restoring the chosen profile. The program will conduct a check regarding the previous/old profile and ask if you would like to proceed with overwriting it. Definitely nice in case you change your mind at the last moment. Clicking Yes will finish the restoration. The only other odd quirk that we noticed while using the program was that the Next button did not function after restoring the profile. You can easily get around the problem by clicking to close the window. Which one is which? After the restore process we had identical twins.

    Conclusion

    If you have been looking for a way to create multiple profiles in Google Chrome, then you might want to add this program to your system.

    Links

    Download Google Chrome Backup

    Read the article

  • CodePlex Daily Summary for Monday, June 02, 2014

    CodePlex Daily Summary for Monday, June 02, 2014

    Popular Releases

    Portable Class Library for SQLite 3.8.4.4: this pull request from mattleibow addresses an issue with custom function creation (define functions in C# code and invoke them from SQLite as if they were regular SQL functions). Impact: Xamarin iOS.

    Tweetinvi (a friendly Twitter C# API) 0.9.3.x: Timelines: added all the parameters available from the Timeline endpoints in Tweetinvi; this is available for HomeTimeline, UserTimeline and MentionsTimeline.

        // Simple query
        var tweets = Timeline.GetHomeTimeline();
        // Create a parameter for queries with specific parameters
        var timelineParameter = Timeline.GenerateHomeTimelineRequestParameter();
        timelineParameter.ExcludeReplies = true;
        timelineParameter.TrimUser = true;
        var tweets = Timeline.GetHomeTimeline(timelineParameter);

    Tweets: add mis...

    Sandcastle Help File Builder v2014.5.31.0: IMPORTANT: on some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...

    Tooltip Web Preview: Version 1.0.

    Database Helper Release 1.0.0.0: first release of Database Helper.

    CoMaSy 1.0.2: Contact Management System.

    Image View Slider: a .NET component created with VB.NET. Here you can use an image viewer with several properties in your application form. We wish somebody to improve it freely. Try this out! Authors: Steven Renaldo Antony, Yustinus Arjuna Purnama Putra, Andre Wijaya P, Martin Lidau (PBK GENAP 2014 - TI UKDW).

    Aspose for Apache POI - Missing Features of Apache POI WP v1.1: the release contains the missing features of the Apache POI WP SDK in comparison with Aspose.Words for dealing with Microsoft Word. What's new: examples for Insert Picture in Word Document, Insert Comments, Set Page Borders, Mail Merge from XML Data Source and Moving the Cursor. Feedback and suggestions: many more examples are yet to come; keep visiting us and raise your queries and suggest more examples via Aspose Forums or via this social coding site.

    SEToolbox 01.032.014 Release 1: added a fix when loading game textures for icons causing 'Unable to read beyond the end of the stream'; added a new Resource Report that displays all in-game resources in a concise report; added a temp directory cleaner to keep excess files from building up; fixed the use of colors on the windows to work better with desktop schemes; added base support for multilingual resources, which will allow loading of the Space Engineers resources to show localized names and display localized date...

    ClosedXML - The easy way to OpenXML 0.71.2: more memory and performance improvements; fixed an issue with pivot table field order.

    Vi-AIO SearchBar: Version 1.0.

    Top Verses (Ayat Emas) Binary Top Verses: this is the bin folder of the component; the .dll component is inside.

    Traditional Calendar Component (Traditional Calendar Converter): Duta Wacana Christian University. This file contains the Traditional Calendar Component and a demo application that uses it. The component is made with .NET Framework 4 and the programming language is C#.

    SQLSetupHelper 1.0.0.0: first stable version of SQL Setup.

    Composite Iconote: a composite made with Microsoft Visual Studio 2013. Requirement: to develop this composite or use this component in your application, your computer must have .NET Framework 4.5 or newer.

    Magick.NET 6.8.9.101: Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes: int/short Set methods of WritablePixelCollection are now unsigned; the Q16 build no longer uses HDRI - switch to the new Q16-HDRI build if you need HDRI.

    fnr.exe - Find And Replace Tool 1.7: bug fixes; refactored the logic for encoding text values to the command line to handle common edge cases where a find/replace operation works in the GUI but not in the command line; fix for a bug where the selection in the Encoding drop-down was different when generating the command line in some cases (reported in https://findandreplace.codeplex.com/workitem/34); fix for "Backslash inserted before dot in replacement text" reported at https://findandreplace.codeplex.com/discussions/541024; fix for finding replacing...

    VG-Ripper & PG-Ripper - VG-Ripper 2.9.59: added support for 'GokoImage.com', 'ViperII.com', 'PixxxView.com', 'ImgRex.com', 'PixLiv.com', 'imgsee.me' and 'ImgS.it' links.

    Toolbox for Dynamics CRM 2011/2013 - XrmToolBox v1.2014.5.28: XrmToolBox updates: fixed connecting to a connection with custom authentication without a saved password. New tool: Solution Components Mover (v1.2014.5.22) transfers solution components from one solution to another. Import/Export NN relationships (v1.2014.3.7) allows you to import and export many-to-many relationships. Tool updates: Attribute Bulk Updater (v1.2014.5.28), Audit Center (v1.2014.5.28), View Layout Replicator (v1.2014.5.28), Scrip...

    Microsoft Ajax Minifier 5.10: fix for Issue #20875 (echo switch doesn't work for CSS); CSS should honor the SASS source-file comments; JS should allow multi-line comment directives.

    New Projects

    [ISEN] Rendu de projet Naughty3Dogs - Pong3D: Pong3D is a game that revisits the classic Pong principle in a 3D environment, using C# and the Unity3D engine.
    Bootstrap for MVC: Bootstrap for MVC.
    F. A. Q. - Najczesciej zadawane pytania: FAQ (the project name is Polish for "frequently asked questions").
    ForuMvc: Technifutur short project.
    homework456: no description.
    iStoody: a solution for organizing studies, available as an app for Windows and Windows Phone.
    liaoliao: (description garbled in the source).
    Price Tracker: allows a user to track prices based on parsed emails.
    RoslynEval: Roslyn.
    Rx Hub: Rx Hub provides server-side computation initiated by subscriber requests.
    Sharepoint Online AppCache Reset: we are an IT resource company providing virtual IT services and custom and open-source programs for everyday needs.
    UnitConversionLib (Smart Unit Conversion Library in C#): conversion of units, arithmetic operations and parsing quantities with their units at run time; a smart unit converter and conversion library for physical quantities.

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved by enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:

    · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)

    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This benefits workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:

    · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion
    · Crash Safe Slaves and Binlog
    · Optimized Row Based Replication
    · Replication Event Checksums
    · Time Delayed Replication

    These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article. Back to the benchmark - details are as follows.

    Environment

    The test environment consisted of two Linux servers: one running the replication master and one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration:

    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB

    Test Procedure

    Initial setup: two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:

    CREATE TABLE `sbtest` (
       `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
       `k` int(10) unsigned NOT NULL DEFAULT '0',
       `c` char(120) NOT NULL DEFAULT '',
       `pad` char(60) NOT NULL DEFAULT '',
       PRIMARY KEY (`id`),
       KEY `k` (`k`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog was noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. Each sysbench client executed 10,000 "update key" statements:

    UPDATE sbtest SET k=k+1 WHERE id = <random row>

    In total, this generated the 100,000 update statements to be replicated during the test itself.

    Test Methodology

    The number of slave workers to test with was configured using:

    SET GLOBAL slave_parallel_workers=<workers>

    Then the slave IO thread was started, and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS).

    Test Reset

    The test-reset cycle was implemented as follows: the slave was stopped; the slave data directory was replaced with the previous backup; and the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for 0-24 workers, and the QPS metric was calculated and averaged for each worker count.

    MySQL Configuration

    The relevant configuration settings used for MySQL are as follows:

    binlog-format=STATEMENT
    relay-log-info-repository=TABLE
    master-info-repository=TABLE

    As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is:

    0 worker threads:
       - current (i.e. single threaded) sequential mode
       - 1 x IO thread and 1 x SQL thread
       - the SQL thread both reads and executes the events

    1 worker thread:
       - sequential mode
       - 1 x IO thread, 1 x coordinator SQL thread and 1 x worker thread
       - the coordinator reads each event and hands it to the worker, who executes it

    2+ worker threads:
       - parallel execution
       - 1 x IO thread, 1 x coordinator SQL thread and 2+ worker threads
       - the coordinator reads events and hands them to the workers, who execute them

    Results

    Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads).

    Figure 1: 5x Higher Performance with Multi-Threaded Slaves

    The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post.

    Figure 2: Detailed Results

    As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion is therefore that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution; and, as expected, having more workers than schemas adds no visible benefit.

    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance:

    relay-log-info-repository=TABLE
    master-info-repository=TABLE

    For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.

    Conclusion

    As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions.

    Appendix – Detailed Results
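    For reference, the per-run measurement described under Test Methodology reduces to a handful of statements on the slave. A minimal sketch follows; the binlog file name and position are placeholders for the coordinates noted during the initial setup:

        -- choose the worker thread count for this run
        SET GLOBAL slave_parallel_workers = 10;

        -- copy the pending events from the master into the relay log
        START SLAVE IO_THREAD;

        -- start the benchmark clock, then apply the relay log events
        START SLAVE SQL_THREAD;

        -- block until the slave reaches the noted master coordinates
        SELECT MASTER_POS_WAIT('master-bin.000002', 120000000);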

    Read the article

  • How to access GNU Xnee

    - by Gaurav Butola
    I have installed GNU Xnee (Gnee, an OS X Automator alternative) from the Software Centre, but now I can't find it anywhere in the menus. Here is the output when I run gnee in the terminal:

    gaurav@gaurav-HCL-ME-Laptop:~$ gnee
    (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated
    [the same Gtk-WARNING is printed ten times in total]
    *** glibc detected *** gnee: free(): invalid next size (fast): 0x08afb638 ***
    ======= Backtrace: =========
    /lib/libc.so.6(+0x6c501)[0x53de501]
    /lib/libc.so.6(+0x6dd70)[0x53dfd70]
    /lib/libc.so.6(cfree+0x6d)[0x53e2e5d]
    gnee[0x804c9f5]
    /lib/libc.so.6(__libc_start_main+0xe7)[0x5388ce7]
    gnee[0x804c571]
    ======= Memory map: ========
    00110000-00112000 r-xp 00000000 08:01 2755679 /usr/lib/libgmodule-2.0.so.0.2600.0
    [the full process memory map, several hundred similar lines, is omitted here]
    Aborted

    Read the article

  • Developing Schema Compare for Oracle (Part 1)

    - by Simon Cooper
SQL Compare is one of Red Gate's most successful SQL Server tools; it allows developers and DBAs to compare and synchronize the contents of their databases. Although similar tools exist for Oracle, they are quite noticeably lacking in the usability and stability that SQL Compare is known for in the SQL Server world. We could see a real need for a usable schema comparison tool for Oracle, and so the Schema Compare for Oracle project was born. Over the next few weeks, as we come up to the release of v1, I'll be doing a series of posts on the development of Schema Compare for Oracle. For the first post, I thought I would start with the main pitfalls that we stumbled across when developing the product, especially from a SQL Server background.

1. Schemas and Databases

The most obvious difference is that the concept of a 'database' is quite different between Oracle and SQL Server. On SQL Server, one server instance has multiple databases, each with separate schemas. There is typically little communication between separate databases, and most databases are no more than about 1000-2000 objects. This means SQL Compare can register an entire database in a reasonable amount of time, and cross-database dependencies probably won't be an issue. It is a quite different scene under Oracle, however. The terms 'database' and 'instance' are used interchangeably (although technically 'database' refers to the datafiles on disk, and 'instance' the running Oracle process that reads & writes to the database), and a database is a single conceptual entity. This immediately presents problems, as it is infeasible to register an entire database as we do in SQL Compare; in my Oracle install, using the standard recommended options, there are 63975 system objects. If we tried to register all those, not only would it take hours, but the client would probably run out of memory before we finished. As a result, we had to allow people to specify which schemas they wanted to register. This decision had quite a few knock-on effects for the design, which I will cover in a future post.

2. Connecting to Oracle

The next obvious difference is in actually connecting to Oracle – in SQL Server, you can specify a server and database, and off you go. On Oracle, things are slightly more complicated.

SIDs, Service Names, and TNS

A database (the files on disk) must have an identifier, unique among the databases on the system, called the SID. It also has a global database name, which consists of a name (which doesn't have to match the SID) and a domain. Alternatively, you can identify a database using a service name, which normally has a 1-to-1 relationship with instances, but may not if you are using, for example, RAC (Real Application Clusters) for redundancy and failover. You specify the computer and instance you want to connect to using TNS (Transparent Network Substrate). The user-visible part is a config file (tnsnames.ora) on the client machine that specifies how to connect to an instance. For example, the entry for one of my test instances is:

SC_11GDB1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = simonctest)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = 11gR1db1)
    )
  )

This gives the hostname, port, and SID of the instance I want to connect to, and associates it with a name (SC_11GDB1). The tnsnames syntax also allows you to specify failover, multiple descriptions and address lists, and client load balancing. You can then specify this TNS identifier as the data source in a connection string.
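To make that concrete, here is a minimal C# sketch (not from the Schema Compare codebase) of connecting through the tnsnames alias defined above. The OracleConnection class is from ODP.NET, which comes up next, and the scott/tiger credentials are placeholders:

using Oracle.DataAccess.Client; // ODP.NET provider

// The Data Source is the TNS alias from tnsnames.ora, not a hostname.
string connStr = "Data Source=SC_11GDB1;User Id=scott;Password=tiger;";
using (OracleConnection conn = new OracleConnection(connStr))
{
    conn.Open();
    // ... issue commands against the instance here
}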
Although using ODP.NET (the .NET dlls provided by Oracle) was fine for internal prototype builds, once we released the EAP we discovered that this simply wasn't an acceptable solution for installs on other people's machines. Due to .NET assembly strong naming, users had to have installed on their machines the exact same version of the ODP.NET dlls as we had on our build server. We couldn't ship the ODP.NET dlls with our installer as the Oracle license agreement prohibited this, and we didn't want to force users to install another Oracle client just so they could run our program. To be able to list the TNS entries in the connection dialog, we also had to locate and parse the tnsnames.ora file, which was complicated by users with several Oracle client installs and intricate TNS entries. After much swearing at our computers, we eventually decided to use a third-party Oracle connection library from Devart that we could ship with our program; this could use whatever client version was installed, parse the TNS entries for us, and also had the nice feature of being able to connect to an Oracle server without having any client installed at all. Unfortunately, their current license agreement prevents us from shipping an Oracle SDK, but that's a bridge we'll cross when we get to it.

3. Running synchronization scripts

The most important difference is that in Oracle, DDL is non-transactional; you cannot roll back DDL statements like you can on SQL Server. Although we considered various solutions to this, including using the flashback archive or recycle bin, or generating an undo script, no reliable method of completely undoing a half-executed sync script has yet been found, so in this case we simply have to trust that the DBA or developer will check and verify the script before running it. However, before we got to that stage, we had to get the scripts to run in the first place... To run a synchronization script from SQL Compare we essentially pass the script over to the SqlCommand.ExecuteNonQuery method. However, when we tried to do the same for an OracleConnection we got a very strange error – 'ORA-00911: invalid character' – even when running the most basic CREATE TABLE command. After much hair-pulling and Googling, we discovered that Oracle has some very strange behaviour with semicolons at the end of statements. To understand what's going on, we need to take a quick foray into SQL and PL/SQL.

PL/SQL is not T-SQL

In SQL Server, T-SQL is the language used to interface with the database. It has DDL, DML, control flow, and many other nice features (like Turing-completeness) that you can mix and match in the same script. In Oracle, DDL SQL and PL/SQL are two completely separate languages, with different syntax, different datatypes and different execution engines within the instance. Oracle SQL is much more like 'pure' ANSI SQL, with no state, no control flow, and only the basic DML commands. PL/SQL is the Turing-complete language, but can only do DML and transaction control (e.g. BEGIN TRANSACTION commands). Any DDL or SQL commands that aren't recognised by the PL/SQL engine have to be passed back to the SQL engine via an EXECUTE IMMEDIATE command. In PL/SQL, a semicolon is a valid token used to delimit the end of a statement. In SQL, a semicolon is not a valid token (even though the Oracle documentation shows them at the end of the syntax diagrams).
When you execute the command CREATE TABLE table1 (COL1 NUMBER); in SQL*Plus, the semicolon on the end is a command to SQL*Plus to execute the preceding statement on the server; it strips off the semicolon before passing it on. SQL Developer does a similar thing. When executing a PL/SQL block, however, the syntax is like so:

BEGIN
  INSERT INTO table1 VALUES (1);
  INSERT INTO table1 VALUES (2);
END;
/

In this case, the semicolon is accepted by the PL/SQL engine as a statement delimiter, and instead the / is the command to SQL*Plus to execute the current block. This explains the ORA-00911 error we got when trying to run the CREATE TABLE command – the server was complaining about the semicolon on the end. This also means that there is no SQL syntax to execute more than one DDL command in the same OracleCommand. Therefore, we would have to do a round-trip to the server for every command we want to execute. Obviously, this would cause lots of network traffic and be very slow on slow or congested networks. Our first attempt at a solution was to wrap every SQL statement (without its semicolon) inside an EXECUTE IMMEDIATE command in a PL/SQL block and pass that to the server to execute. One downside of this solution is that we get no feedback as to how the script execution is going; we're currently evaluating better solutions to this thorny issue.
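As a rough illustration of that first attempt, here is a minimal C# sketch (again, not the actual Schema Compare code) that wraps a single DDL statement in a PL/SQL block so the semicolon problem never arises; connection is assumed to be an open ADO.NET connection to the instance:

// No trailing semicolon on the SQL statement itself.
string ddl = "CREATE TABLE table1 (COL1 NUMBER)";
// Double up single quotes so the statement survives being embedded in a
// PL/SQL string literal, then hand it back to the SQL engine.
string wrapped = "BEGIN EXECUTE IMMEDIATE '" + ddl.Replace("'", "''") + "'; END;";
using (var command = connection.CreateCommand())
{
    command.CommandText = wrapped;
    command.ExecuteNonQuery(); // one round-trip per DDL statement
}

Next up: Dependencies; how we solved the problem of being unable to register the entire database, and the knock-on effects to the whole product.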

    Read the article

  • Oracle SOA Suite - Highlighted Travel and Transportation Customer References

    - by Bruce Tierney
Next in this series on industry-specific highlights of Oracle SOA Suite customers is the Travel and Transportation industry. If you are in the travel or transportation industry, take a look at how these Oracle SOA Suite integration customers have addressed common business requirements to enable better customer service, lower costs, and deliver new business services. For example, All Nippon Airways (ANA) has significantly lowered management costs associated with their hybrid on-premise/cloud ticketing system deployments for domestic and international flights. Their lead-time for changes or new applications has been greatly reduced compared to their old mainframe-based systems, enabling ANA to rapidly develop new services in response to changing market needs. Another example is Schneider National, a leading provider of truckload logistics, and how they have integrated Oracle E-Business Suite, Siebel CRM, Oracle Transportation Management and customer applications using Oracle SOA Suite. Schneider National has 400 BPEL processes that generate over 60 million composite instances over five SOA clusters. Take a deeper look into any of these case studies, videos, and Oracle Magazine articles that closely align with your industry:

Customers fly and airline succeeds with an IT transformation. Company: All Nippon Airways. Oracle or Profit Magazine Article | Travel and Transportation | Published on January 06, 2014. Any successful business must ensure ongoing customer satisfaction, respond to increased competition, and minimize costs. Running a successful airline in today's economic climate requires all of those things, as well a...

Openmatics Revolutionizes Fleet Management with Standards-Based Vehicle Telematics Platform. Company: Openmatics s.r.o. Customer Snapshot | Automotive | Published on May 20, 2014. Openmatics uses Oracle WebCenter Portal and Oracle Application Development Framework as a foundation for Openmatics, a vehicle telematics service for next-generation fleet management. It integrated its own app shop wi...

Future Proof: To keep pace with mobile, social, and location-based services, smart technologists are using middleware to innovate. Company: SFpark. Oracle or Profit Magazine Article | Professional Services | Published on August 01, 2012. Oracle Fusion Middleware is at the heart of a recently completed and very ambitious project to change how people handle the challenge of finding a parking space in San Francisco, California. "Parking is a universal is...

Globalia Corporación Empresarial Accelerates Hotel Bookings, Boosts Sales by 40% with In-Memory Data Grid Solution. Company: Globalia Corporación Empresarial S.A. Customer Snapshot | Travel and Transportation | Published on April 29, 2013. Globalia Corporación Empresarial S.A. deployed Oracle Coherence to reengineer the group's core system for hotel bookings, now serving booking requests involving 80 hotels within an average response time of 100 millise...

Choice Hotels Uses Oracle SOA Suite and Oracle BPM Suite to Modernize Global IT Architecture. Company: Choice Hotels. Press Release | Travel and Transportation | Published on August 07, 2012. Choice Hotels International, one of the largest and most successful hotel franchises in the world, has implemented Oracle SOA Suite and Oracle BPM Suite.

Sascar Consolidates Fleet Management Infrastructure and Accelerates Customers' Data Access. Company: Sascar. Customer Case Study | Travel and Transportation | Published on February 07, 2014. Sascar used Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud and Oracle WebLogic Suite 11g to consolidate fleet management and perform real-time vehicle tracking 4x faster.

Directorate General of Civil Aviation Streamlines Key Aviation Applications Access, Improves Productivity and Reduces Maintenance Costs. Company: Directorate General of Civil Aviation (DGAC). Customer Snapshot | Travel and Transportation | Published on May 24, 2013. With Oracle Fusion Middleware, the Directorate General of Civil Aviation (DGAC) provided its 12,500 employees a virtual office environment that integrates team workspaces, business applications, and e-mails within a n...

Schneider National Implements Next-Generation IT Infrastructure to Continue Leadership in Transportation and Logistics Industry. Company: Schneider National, Inc. Customer Snapshot | Travel and Transportation | Published on February 26, 2013. Schneider National, Inc. deployed Oracle applications, Oracle Fusion Middleware, and Oracle development tools as the foundation for its next-generation IT environment, which is driving new levels of efficiency, profit...

DGAC Cuts Subscription Costs with Oracle. Company: DGAC. Video | Travel and Transportation | Published on October 31, 2012. Using Oracle WebCenter Portal, Oracle SOA Suite, and Oracle Exalogic, DGAC reduced the cost of subscriptions to newsletters and provides its 12,500 employees with a collaborative workspace portal.

Asiana Airlines Builds PIP System with Oracle Solutions. Company: Asiana Airlines. Video | Travel and Transportation | Published on July 26, 2012. With Oracle Exalogic and the Oracle SOA Suite, Asiana Airlines built a passenger service integrated platform providing various services such as integration between its interface and internal systems and a data wareho...

Choice Hotels Reduces Time to Market with Oracle WebCenter. Company: Choice Hotels. Video | Travel and Transportation | Published on April 11, 2014. Using Oracle WebCenter and Oracle SOA standardization, Choice Hotels consolidated multiple platforms, reduced IT dependency and realized tremendous benefits in total cost of ownership and faster time to market support...

An Interview with Schneider National's Judy Lemke. Company: Schneider National. Video | Travel and Transportation | Published on December 17, 2013. Judy Lemke talks with Mark Sunday about the challenges Schneider National faced and how they overcame them through a companywide transformational change.

For more details on these case studies, you can use this pre-filtered search on "Travel and Transportation" / "Middleware" / "Service Oriented Architecture" or browse on your own at www.oracle.com/customers

    Read the article

  • CodePlex Daily Summary for Monday, May 26, 2014

CodePlex Daily Summary for Monday, May 26, 2014

Popular Releases

ClosedXML - The easy way to OpenXML: ClosedXML 0.71.1: More performance improvements. It's faster and consumes less memory.

Role Based Views in Microsoft Dynamics CRM 2011: Role Based Views in CRM 2011 and 2013 - 1.1.0.0: Issues fixed in this build: 1. Works for CRM 2013 2. Lookup view not getting blocked

SimCityPak: SimCityPak 0.3.1.0: Main New Features: Fixed Importing of Instance Names (get rid of the Dutch translations) Added advanced editor for Decal Dictionaries Added possibility to import .PNG to generate new decals Added advanced editor for Path display entries

Simple Connect To Db: SimpleConnectToDb_v1: SimpleConnectToDb_v1

CRM 2011 / CRM 2013 Form Helper: v2014.05.25: v2014.05.25 Added PhoneFormat & PhoneFormatAreaCode v2014.05.24 Initial Release

Create Word documents without MS Word: Release 3.0: Add support for Sections, Section Headers and Footers, and right-to-left languages.

Corporate News App for SharePoint 2013: CorporateNewsApp v1.6.2.0: Important note: This version contains a major bug fix for the generic error "Request failed. Unexpected response data from server null". This error occurs on SharePoint Online only, following an update of the JavaScript API after May 2014. If you have installed this application manually in your company's application catalog, you can download the CorporateNewsApp.app file in the zip archive and update it manually. If you have installed this application directly from the SharePoint Store, it ...

DevOS: DevOS: Plugin system added. Including: DevOS.exe, DevOS API.dll. Files must be in the same folder.

Tiny Deduplicator: Tiny Deduplicator 1.0.1.0: Increased version number to 1.0.1.0. Moved all options to a separate 'Options' dialog window. Allows the user to specify a selection strategy which will help when dealing with large numbers of duplicate files. Available options are "None," "Keep First," and "Keep Last".

C64 Studio: 3.5: Add: BASIC renumber function. Add: !PET pseudo op. Add: elseif for !if, } else { pseudo op. Add: !TRACE pseudo op. Add: Watches are saved/restored with a solution. Add: Ctrl-A works now in export assembly controls. Add: Preliminary graphic import dialog (not fully functional yet). Add: range and block selection in sprite/charset editor (Shift-Click = range, Alt-Click = block). Fix: Expression evaluator could miscalculate when both division and multiplication were in an expression without parentheses.

SEToolbox: SEToolbox 01.031.009 Release 1: Added mirroring of ConveyorTubeCurved. Updated Ship cube rotation to rotate ship back to original location (cubes are reoriented but ship appears no different to outsider), and to rotate Grouped items. Repair now fixes the loss of Grouped controls due to changes in Space Engineers 01.030. Added export asteroids. Rejoin ships will merge grouping and conveyor systems (even though broken ships currently only maintain the Grouping on one part of the ship). Installation of this version wi...

Player Framework by Microsoft: Player Framework for Windows and WP v2.0: Support for new Universal and Windows Phone 8.1 projects for both Xaml and JavaScript projects. See a detailed list of improvements, breaking changes and a general overview of version 2. ADDITIONAL DOWNLOADS: Smooth Streaming Client SDK for Windows 8 Applications, Smooth Streaming Client SDK for Windows 8.1 Applications, Smooth Streaming Client SDK for Windows Phone 8.1 Applications, Microsoft PlayReady Client SDK for Windows 8 Applications, Microsoft PlayReady Client SDK for Windows 8.1 Applicat...

TerraMap (Terraria World Map Viewer): TerraMap 1.0.6: Added support for the new Terraria v1.2.4 update. New items, walls, and tiles. Added the ability to select multiple highlighted block types. Added a dynamic, interactive highlight opacity slider, making it easier to find highlighted tiles with dark colors (and fixed blurriness from 1.0.5 alpha). Added ability to find Enchanted Swords (in the stone) and Water Bolt books. Fixed Issue 35206: Highlight/Find doesn't work for Demon Altars. Fixed finding Demon Hearts/Shadow Orbs. Fixed inst...

DotNet.Highcharts: DotNet.Highcharts 4.0 with Examples: DotNet.Highcharts 4.0. Tested and adapted to the latest version of Highcharts 4.0.1. Added new chart type: Heatmap. Added new type PointPlacement which represents enumeration or number for the padding of the X axis. Changed target framework from .NET Framework 4 to .NET Framework 4.5. Closed issues: 974: Add 'overflow' property to PlotOptionsColumnDataLabels class; 997: Split container from JS; 1006: Series/Categories with numeric names don't render. DotNet.Highcharts.Samples: Updated s...

ConEmu - Windows console with tabs: ConEmu 140523 [Alpha]: ConEmu - developer build x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information you may find at: http://superuser.com/questions/tagged/conemu http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ http://code.google.com/p/conemu-maximus5/wiki/TableOfContents If you want to use ConEmu in portable mode, just create an empty "ConEmu.xml" file near to "ConEmu.exe"

Aspose for Apache POI: Missing Features of Apache POI SL - v 1.1: Release contains the missing features in Apache POI SL SDK in comparison with Aspose.Slides for dealing with Microsoft PowerPoint. What's New? Following examples: Managing Slide Transitions, Manage Smart Art, Adding Media Player, Adding Audio Frame to Slide. Feedback and Suggestions: Many more examples are yet to come here. Keep visiting us. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.

PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.3: Added CompressLogs option to the config file. Each Install / Uninstall creates a timestamped zip file with all MSI and PSAppDeployToolkit logs contained within. Added variable expansion to all paths in the configuration file. Added documentation for each of the Toolkit internal variables that can be used. Changed Install-MSUpdates to continue if any errors are encountered when installing updates. Implemented /Force parameter on Update-GroupPolicy (ensure that any logoff message is ignored) ...

WordMat: WordMat v. 1.07: A quick fix because scientific notation was broken in v. 1.06. Read more at http://wordmat.blogspot.com

????: 《????》: 《????》(c???)??“????”???????,???????????????C?????????。???????,???????????????????????. ??????????????????????????????????;????????????????????????????。

Mini SQL Query: Mini SQL Query (1.0.72.457): Apologies for the previous update! FK issue fixed and also a template data cache issue.

New Projects

ASP.Net MCV4 Simplified Code Samples: This project is intended to simplify the same. In this project each task is implemented with minimum lines of code to reduce complexity.

Calvin: net???

CodeLatino by Latinosoft: A modified version for codeShow -- probably taking more than a month.

freeasyBackup: A free and easy to use Backup Tool for everyone. Without any cloud restrictions.

freeasyExplorer: A free and easy to use File Explorer for everyone.

openPDFspeedreader: #spritz #pdfreader #speedreader

PDF Editor to Edit PDF Files in your ASP.NET Applications: This sample application allows the users to edit PDF files online using Aspose.Pdf for .NET.

SharePoint World Cup 2013: world cup 2014

SSAS Long Running Query Performance Helper: This utility helps investigate long-running multidimensional or mining queries in discovery, de-parameterization and re-parameterization back to source format.

    Read the article

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I'm going to look at a couple of ways of getting this information and compare the effort required and the results achieved for each. SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped-up version of these, sp_whoisactive, written by Adam Machanic, which does way more than these procedures. I wholly recommend you try it out if you don't already know how it works. When it comes to serious interrogation of your SQL Server activity, it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server. I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I'm relatively new to PowerShell so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me. $Server = 'SERVERNAME' $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server # connection and query stuff $ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True" $Query = "EXEC sp_who2" $Connection = new-object system.Data.SQLClient.SQLConnection $Table = new-object "System.Data.DataTable" $Connection.connectionstring = $ConnectionStr try{ $Connection.open() $Command = $Connection.CreateCommand() $Command.commandtext = $Query $result = $Command.ExecuteReader() $Table.Load($result) } catch{ # Show error $error[0] | format-list -Force } $Title = "Data access processes (" + $Table.Rows.Count + ")" $Table | Out-GridView -Title $Title $Connection.close() So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error free, display these results to the PowerShell grid viewer. The query simply gets the results of 'EXEC sp_who2' for us. How many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this; it seems a pretty simple script and was working well for me, and I have added a few parameters to control the output and give me more specific details. But then I saw a script that uses the $SMOServer object itself to provide the process information, saving me from having to define the connection object and query specifications. $Server = 'SERVERNAME' $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server $Processes = $SMOServer.EnumProcesses() $Title = "SMO processes (" + $Processes.Rows.Count + ")" $Processes | Out-GridView -Title $Title Create the SMO object of our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different though. Some columns are the same and we can see the same basic information, so my first thought was to see which runs faster – so that I can get my results more quickly and also so that I place less stress on my server(s). PowerShell comes with a great way of testing this – the Measure-Command function.
All you have to do is wrap your piece of code in Measure-Command {[your code here]} and it will spit out the time taken to execute the code. So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows this. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-reference the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers and, while the results vary from execution to execution, I have never seen the SMO version slower than the other. The difference has varied, and the time for both has ranged from sub-second, as we see above, to almost 5 seconds on other systems. This difference, I would suggest, is partly due to the cost overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let's take a look at what we get and where the two methods differ:

Query execution method (sp_who2) | SMO EnumProcesses | Description
-------------------------------- | ----------------- | -----------
- | Urn | What looks like an XML or JSON representation of the server name and the process ID
SPID | Spid | The process ID
Status | Status | The status of the process
Login | Login | The login name of the user executing the command
HostName | Host | The name of the computer where the process originated
BlkBy | BlockingSpid | The SPID of a process that is blocking this one
DBName | Database | The database that this process is connected to
Command | Command | The type of command that is executing
CPUTime | Cpu | The CPU activity related to this process
DiskIO | - | The Disk IO activity related to this process
LastBatch | - | The time the last batch was executed from this process
ProgramName | Program | The application that is facilitating the process connection to the SQL Server
SPID1 | - | In my experience this is always the same value as SPID
REQUESTID | - | In my experience this is always 0
- | Name | In my experience this is always the same value as SPID and so could be seen as analogous to SPID1 from sp_who2
- | MemUsage | An indication of the memory used by this process, but I don't know what it is measured in (bytes, Kb, Mb…)
- | IsSystem | True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data
- | ExecutionContextID | In my experience this is always 0, so could be analogous to REQUESTID from sp_who2

Please note, these are my own very brief descriptions of these columns; detail can be found on MSDN for the columns in the sp_who results here: http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I have used that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn't: sp_who2 returns DiskIO and LastBatch information, which is really useful, but the SMO processes method gives you IsSystem and MemUsage, which have their place in fault diagnosis methods too. So which is better?
On reflection, I think I prefer to use the sp_who2 method primarily, but knowing that the SMO EnumProcesses method is there when I need it is really useful and I'm sure I'll use it regularly. I'm OK with the fact that it is the slower method, because Measure-Command has shown me how close it is to the other option and that it really isn't a large enough margin to matter.

    Read the article

  • SDL_BlitSurface segmentation fault (surfaces aren't null)

    - by Trollkemada
My app is crashing on SDL_BlitSurface() and I can't figure out why. I think it has something to do with my static object. If you read the code you'll see why I think so. This happens when the limits of the map are reached, i.e. (i >= width || j >= height). This is the code:

Map.cpp (the render code)
Tile const * Map::getTyle(int i, int j) const { if (i >= 0 && j >= 0 && i < width && j < height) { return data[i][j]; } else { return &Tile::ERROR_TYLE; // This makes SDL_BlitSurface (called later) crash //return new Tile(TileType::ERROR); // This works with no problem (but is a memory leak, of course) } } void Map::render(int x, int y, int width, int height) const { //DEBUG("(Rendering...) x: "<<x<<", y: "<<y<<", width: "<<width<<", height: "<<height); int firstI = x / TileType::PIXEL_PER_TILE; int firstJ = y / TileType::PIXEL_PER_TILE; int lastI = (x+width) / TileType::PIXEL_PER_TILE; int lastJ = (y+height) / TileType::PIXEL_PER_TILE; // The previous integer division rounds down when dealing with positive values, but it rounds up // negative values. This is a fix for that (We need those values always rounded down) if (firstI < 0) { firstI--; } if (firstJ < 0) { firstJ--; } const int firstX = x; const int firstY = y; SDL_Rect srcRect; SDL_Rect dstRect; for (int i=firstI; i <= lastI; i++) { for (int j=firstJ; j <= lastJ; j++) { if (i*TileType::PIXEL_PER_TILE < x) { srcRect.x = x % TileType::PIXEL_PER_TILE; srcRect.w = TileType::PIXEL_PER_TILE - (x % TileType::PIXEL_PER_TILE); dstRect.x = i*TileType::PIXEL_PER_TILE + (x % TileType::PIXEL_PER_TILE) - firstX; } else if (i*TileType::PIXEL_PER_TILE >= x + width) { srcRect.x = 0; srcRect.w = x % TileType::PIXEL_PER_TILE; dstRect.x = i*TileType::PIXEL_PER_TILE - firstX; } else { srcRect.x = 0; srcRect.w = TileType::PIXEL_PER_TILE; dstRect.x = i*TileType::PIXEL_PER_TILE - firstX; } if (j*TileType::PIXEL_PER_TILE < y) { srcRect.y = 0; srcRect.h = TileType::PIXEL_PER_TILE - (y % TileType::PIXEL_PER_TILE); dstRect.y = j*TileType::PIXEL_PER_TILE + (y % TileType::PIXEL_PER_TILE) - firstY; } else if (j*TileType::PIXEL_PER_TILE >= y + height) { srcRect.y = y % TileType::PIXEL_PER_TILE; srcRect.h = y % TileType::PIXEL_PER_TILE; dstRect.y = j*TileType::PIXEL_PER_TILE - firstY; } else { srcRect.y = 0; srcRect.h = TileType::PIXEL_PER_TILE; dstRect.y = j*TileType::PIXEL_PER_TILE - firstY; } SDL::YtoSDL(dstRect.y, srcRect.h); SDL_BlitSurface(getTyle(i,j)->getType()->getSurface(), &srcRect, SDL::getScreen(), &dstRect); // <-- Crash HERE /*DEBUG("i = "<<i<<", j = "<<j); DEBUG("srcRect.x = "<<srcRect.x<<", srcRect.y = "<<srcRect.y<<", srcRect.w = "<<srcRect.w<<", srcRect.h = "<<srcRect.h); DEBUG("dstRect.x = "<<dstRect.x<<", dstRect.y = "<<dstRect.y);*/ } } }

Tile.h
#ifndef TILE_H #define TILE_H #include "TileType.h" class Tile { private: TileType const * type; public: static const Tile ERROR_TYLE; Tile(TileType const * t); ~Tile(); TileType const * getType() const; }; #endif

Tile.cpp
#include "Tile.h" const Tile Tile::ERROR_TYLE(TileType::ERROR); Tile::Tile(TileType const * t) : type(t) {} Tile::~Tile() {} TileType const * Tile::getType() const { return type; }

TileType.h
#ifndef TILETYPE_H #define TILETYPE_H #include "SDL.h" #include "DEBUG.h" class TileType { protected: TileType(); ~TileType(); public: static const int PIXEL_PER_TILE = 30; static const TileType * ERROR; static const TileType * AIR; static const TileType * SOLID; virtual SDL_Surface * getSurface() const = 0; virtual bool isSolid(int x, int y) const = 0; }; #endif

ErrorTile.h
#ifndef ERRORTILE_H #define ERRORTILE_H #include "TileType.h" class ErrorTile : public TileType { friend class TileType; private: ErrorTile(); mutable SDL_Surface * surface; static const char * FILE_PATH; public: SDL_Surface * getSurface() const; bool isSolid(int x, int y) const ; }; #endif

ErrorTile.cpp (The surface can't be loaded when constructing the object, because it is a static object and SDL_Init() needs to be called first)
#include "ErrorTile.h" const char * ErrorTile::FILE_PATH = ("C:\\error.bmp"); ErrorTile::ErrorTile() : TileType(), surface(NULL) {} SDL_Surface * ErrorTile::getSurface() const { if (surface == NULL) { if (SDL::isOn()) { surface = SDL::loadAndOptimice(ErrorTile::FILE_PATH); if (surface->w != TileType::PIXEL_PER_TILE || surface->h != TileType::PIXEL_PER_TILE) { WARNING("Bad tile surface size"); } } else { ERROR("Trying to load a surface, but SDL is not on"); } } if (surface == NULL) { // This if doesn't get called, so surface != NULL ERROR("WTF? Can't load surface :\\"); } return surface; } bool ErrorTile::isSolid(int x, int y) const { return true; }

    Read the article

  • Keypress detection wont work after seemingly unrelated code change

    - by LukeZaz
I'm trying to have the Enter key cause a new 'map' to generate for my game, but for whatever reason, after implementing full-screen, the input check won't work anymore. I tried removing the new code and pressing only one key at a time, but it still won't work. Here's the check code and the method it uses, along with the newMap method: public class Game1 : Microsoft.Xna.Framework.Game { // ... protected override void Update(GameTime gameTime) { // ... // Check if Enter was pressed - if so, generate a new map if (CheckInput(Keys.Enter, 1)) { blocks = newMap(map, blocks, console); } // ... } // Method: Checks if a key is/was pressed public bool CheckInput(Keys key, int checkType) { // Get current keyboard state KeyboardState newState = Keyboard.GetState(); bool retType = false; // Return type if (checkType == 0) { // Check Type: Is key currently down? if (newState.IsKeyDown(key)) { retType = true; } else { retType = false; } } else if (checkType == 1) { // Check Type: Was the key pressed? if (newState.IsKeyDown(key)) { if (!oldState.IsKeyDown(key)) { // Key was just pressed retType = true; } else { // Key was already pressed, return false retType = false; } } } // Save keyboard state oldState = newState; // Return result if (retType == true) { return true; } else { return false; } } // Method: Generate a new map public List<Block> newMap(Map map, List<Block> blockList, Console console) { // Create new map block coordinates List<Vector2> positions = new List<Vector2>(); positions = map.generateMap(console); // Clear list and reallocate memory previously used up by it blockList.Clear(); blockList.TrimExcess(); // Add new blocks to the list using positions created by generateMap() foreach (Vector2 pos in positions) { blockList.Add(new Block() { Position = pos, Texture = dirtTex }); } // Return modified list return blockList; } // ... 
} and the generateMap code: // Generate a list of Vector2 positions for blocks public List<Vector2> generateMap(Console console, int method = 0) { ScreenTileWidth = gDevice.Viewport.Width / 16; ScreenTileHeight = gDevice.Viewport.Height / 16; maxHeight = gDevice.Viewport.Height; List<Vector2> blockLocations = new List<Vector2>(); if (useScreenSize == true) { Width = ScreenTileWidth; Height = ScreenTileHeight; } else { maxHeight = Height; } int startHeight = -500; // For debugging purposes, the startHeight is set to an // hopefully-unreachable value - if it returns this, something is wrong // Methods of land generation /// <summary> /// Third version land generation /// Generates a base land height as the second version does /// but also generates a 'max change' value which determines how much /// the land can raise or lower by which it now does by a random amount /// during generation /// </summary> if (method == 0) { // Get the land height startHeight = rnd.Next(1, maxHeight); int maxChange = rnd.Next(1, 5); // Amount ground will raise/lower by int curHeight = startHeight; for (int w = 0; w < Width; w++) { // Run a chance to lower/raise ground level int changeBy = rnd.Next(1, maxChange); int doChange = rnd.Next(0, 3); if (doChange == 1 && !(curHeight <= (1 + maxChange))) { curHeight = curHeight - changeBy; } else if (doChange == 2 && !(curHeight >= (29 - maxChange))) { curHeight = curHeight + changeBy; } for (int h = curHeight; h < Height; h++) { // Location variables float x = w * 16; float y = h * 16; blockLocations.Add(new Vector2(x, y)); } } console.newMsg("[INFO] Cur, height change maximum: " + maxChange.ToString()); } /// <summary> /// Second version land generator /// Generates a solid mass of land starting at a random height /// derived from either screen height or provided height value /// </summary> else if (method == 1) { // Get the land height startHeight = rnd.Next(0, 30); for (int w = 0; w < Width; w++) { for (int h = startHeight; h < ScreenTileHeight; h++) { // Location variables float x = w * 16; float y = h * 16; // Add a tile at set location blockLocations.Add(new Vector2(x, y)); } } } /// <summary> /// First version land generator /// Generates land completely randomly either across screen or /// in a box set by Width and Height values /// </summary> else { // For each tile in the map... for (int w = 0; w < Width; w++) { for (int h = 0; h < Height; h++) { // Location variables float x = w * 16; float y = h * 16; // ...decide whether or not to place a tile... if (rnd.Next(0, 2) == 1) { // ...and if so, add a tile at that location. blockLocations.Add(new Vector2(x, y)); } } } } console.newMsg("[INFO] Cur, base height: " + startHeight.ToString()); return blockLocations; } I never touched any of the above code for this when it broke - changing keys won't seem to fix it. Despite this, I have camera movement set inside another Game1 method that uses WASD and works perfectly. All I did was add a few lines of code here: private int BackBufferWidth = 1280; // Added these variables private int BackBufferHeight = 800; public Game1() { graphics = new GraphicsDeviceManager(this); graphics.PreferredBackBufferWidth = BackBufferWidth; // and this graphics.PreferredBackBufferHeight = BackBufferHeight; // this Content.RootDirectory = "Content"; this.graphics.IsFullScreen = true; // and this } When I try adding a console line to be printed in the event the key is pressed, it seems that the If is never even triggered despite the correct key being pressed.

    Read the article

  • OData &ndash; The easiest service I can create: now with updates

    - by Jon Dalberg
The other day I created a simple NastyWord service exposed via OData. It was read-only and used an in-memory backing store for the words. Today I'll modify it to use a file instead of a list and I'll accept new nasty words by implementing IUpdatable directly. The first thing to do is enable the service to accept new entries. This is done at configuration time by adding the "WriteAppend" access rule: 1: public class NastyWords : DataService<NastyWordsDataSource> 2: { 3: // This method is called only once to initialize service-wide policies. 4: public static void InitializeService(DataServiceConfiguration config) 5: { 6: config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend); 7: config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; 8: } 9: }   Next I placed a file, NastyWords.txt, in the "App_Data" folder and added a few *choice* words to start. This required one simple change to our NastyWordsDataSource.cs file: 1: public NastyWordsDataSource() 2: { 3: UpdateFromSource(); 4: } 5:   6: private void UpdateFromSource() 7: { 8: var words = File.ReadAllLines(pathToFile); 9: NastyWords = (from w in words 10: select new NastyWord { Word = w }).AsQueryable(); 11: }   Nothing too shocking here, just reading each line from the NastyWords.txt file and exposing them. Next, I implemented IUpdatable, which comes with a boat-load of methods. We don't need all of them for now since we are only concerned with allowing new values. Here are the methods we must implement; all the others throw a NotImplementedException: 1: public object CreateResource(string containerName, string fullTypeName) 2: { 3: var nastyWord = new NastyWord(); 4: pendingUpdates.Add(nastyWord); 5: return nastyWord; 6: } 7:   8: public object ResolveResource(object resource) 9: { 10: return resource; 11: } 12:   13: public void SaveChanges() 14: { 15: var intersect = (from w in pendingUpdates 16: select w.Word).Intersect(from n in NastyWords 17: select n.Word); 18:   19: if (intersect.Count() > 0) 20: throw new DataServiceException(500, "duplicate entry"); 21:   22: var lines = from w in pendingUpdates 23: select w.Word; 24:   25: File.AppendAllLines(pathToFile, 26: lines, 27: Encoding.UTF8); 28:   29: pendingUpdates.Clear(); 30:   31: UpdateFromSource(); 32: } 33:   34: public void SetValue(object targetResource, string propertyName, object propertyValue) 35: { 36: targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null); 37: }   I use a simple list to contain the pending updates and only commit them when the "SaveChanges" method is called. Here's the order these methods are called in our service during an insert:

1. CreateResource – here we just instantiate a new NastyWord and stick a reference to it in our pending updates list.
2. SetValue – this is where the "Word" property of the NastyWord instance is set.
3. SaveChanges – get the list of pending updates, barfing on duplicates, write them to the file and clear our pending list.
4. ResolveResource – the newly created resource will be returned directly here since we aren't dealing with "handles" to objects but the actual objects themselves.

Not too bad, eh? I didn't find this documented anywhere, but a little bit of digging in the OData spec and use of Fiddler made it pretty easy to figure out.
Here is some client code which would add a new nasty word: 1: static void Main(string[] args) 2: { 3: var svc = new ServiceReference1.NastyWordsDataSource(new Uri("http://localhost.:60921/NastyWords.svc")); 4: svc.AddToNastyWords(new ServiceReference1.NastyWord() { Word = "shat" }); 5:   6: svc.SaveChanges(); 7: }   Here's all of the code so far to implement the service: 1: using System; 2: using System.Collections.Generic; 3: using System.Data.Services; 4: using System.Data.Services.Common; 5: using System.Linq; 6: using System.ServiceModel.Web; 7: using System.Web; 8: using System.IO; 9: using System.Text; 10:   11: namespace ONasty 12: { 13: [DataServiceKey("Word")] 14: public class NastyWord 15: { 16: public string Word { get; set; } 17: } 18:   19: public class NastyWordsDataSource : IUpdatable 20: { 21: private List<NastyWord> pendingUpdates = new List<NastyWord>(); 22: private string pathToFile = @"path to your\App_Data\NastyWords.txt"; 23:   24: public NastyWordsDataSource() 25: { 26: UpdateFromSource(); 27: } 28:   29: private void UpdateFromSource() 30: { 31: var words = File.ReadAllLines(pathToFile); 32: NastyWords = (from w in words 33: select new NastyWord { Word = w }).AsQueryable(); 34: } 35:   36: public IQueryable<NastyWord> NastyWords { get; private set; } 37:   38: public void AddReferenceToCollection(object targetResource, string propertyName, object resourceToBeAdded) 39: { 40: throw new NotImplementedException(); 41: } 42:   43: public void ClearChanges() 44: { 45: pendingUpdates.Clear(); 46: } 47:   48: public object CreateResource(string containerName, string fullTypeName) 49: { 50: var nastyWord = new NastyWord(); 51: pendingUpdates.Add(nastyWord); 52: return nastyWord; 53: } 54:   55: public void DeleteResource(object targetResource) 56: { 57: throw new NotImplementedException(); 58: } 59:   60: public object GetResource(IQueryable query, string fullTypeName) 61: { 62: throw new NotImplementedException(); 63: } 64:   65: public object GetValue(object targetResource, string propertyName) 66: { 67: throw new NotImplementedException(); 68: } 69:   70: public void RemoveReferenceFromCollection(object targetResource, string propertyName, object resourceToBeRemoved) 71: { 72: throw new NotImplementedException(); 73: } 74:   75: public object ResetResource(object resource) 76: { 77: throw new NotImplementedException(); 78: } 79:   80: public object ResolveResource(object resource) 81: { 82: return resource; 83: } 84:   85: public void SaveChanges() 86: { 87: var intersect = (from w in pendingUpdates 88: select w.Word).Intersect(from n in NastyWords 89: select n.Word); 90:   91: if (intersect.Count() > 0) 92: throw new DataServiceException(500, "duplicate entry"); 93:   94: var lines = from w in pendingUpdates 95: select w.Word; 96:   97: File.AppendAllLines(pathToFile, 98: lines, 99: Encoding.UTF8); 100:   101: pendingUpdates.Clear(); 102:   103: UpdateFromSource(); 104: } 105:   106: public void SetReference(object targetResource, string propertyName, object propertyValue) 107: { 108: throw new NotImplementedException(); 109: } 110:   111: public void SetValue(object targetResource, string propertyName, object propertyValue) 112: { 113: targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null); 114: } 115: } 116:   117: public class NastyWords : DataService<NastyWordsDataSource> 118: { 119: // This method is called only once to initialize service-wide policies. 
120: public static void InitializeService(DataServiceConfiguration config) 121: { 122: config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend); 123: config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; 124: } 125: } 126: } Next time we’ll allow removing nasty words. Enjoy!

    Read the article

  • Passing a parameter so that it cannot be changed – C#

    - by nmarun
I read this requirement of not allowing a user to change the value of a property passed as a parameter to a method. In C++, as far as I could recall (it's been over 10 yrs, so I had to refresh my memory), you can mark a function parameter as 'const' and this ensures that the parameter cannot be changed inside the scope of the function. There's no such direct way of doing this in C#, but that does not mean it cannot be done!! Ok, so this 'not-so-direct' technique depends on the type of the parameter – a simple property or a collection.

Parameter as a simple property: This is quite easy (and you might have guessed it already). Bulent Ozkir clearly explains how this can be done here.

Parameter as a collection property: Obviously the above does not work if the parameter is a collection of some type. Let's dig in. Suppose I need to create a collection of type KeyTitle as defined below. 1: public class KeyTitle 2: { 3: public int Key { get; set; } 4: public string Title { get; set; } 5: } My class is declared as below: 1: public class Class1 2: { 3: public Class1() 4: { 5: MyKeyTitleList = new List<KeyTitle>(); 6: } 7: 8: public List<KeyTitle> MyKeyTitleList { get; set; } 9: public ReadOnlyCollection<KeyTitle> ReadonlyKeyTitleCollection 10: { 11: // .AsReadOnly creates a ReadOnlyCollection<> type 12: get { return MyKeyTitleList.AsReadOnly(); } 13: } 14: } See the .AsReadOnly() method used in the second property? As MSDN says: "Returns a read-only IList<T> wrapper for the current collection." Knowing this, I can implement my code as: 1: public static void Main() 2: { 3: Class1 class1 = new Class1(); 4: class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" }); 5: class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" }); 6: class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" }); 7: class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" }); 8:  9: TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly()); 10:  11: Console.ReadLine(); 12: } 13:  14: private static void TryToModifyCollection(ReadOnlyCollection<KeyTitle> readOnlyCollection) 15: { 16: // can only read 17: for (int i = 0; i < readOnlyCollection.Count; i++) 18: { 19: Console.WriteLine("{0} - {1}", readOnlyCollection[i].Key, readOnlyCollection[i].Title); 20: } 21: // Add() - not allowed 22: // even the indexer does not have a setter 23: } The output is as expected: The below image shows two things. In the first line, I've tried to access an element in my read-only collection through an indexer. It shows that the ReadOnlyCollection<> does not have a setter on the indexer. The second line shows that there's no 'Add()' method for this type of collection. The capture below shows there's no 'Remove()' method either, thereby eliminating all ways of modifying a collection. Mission accomplished… right? Now, even if you have a collection of a different type, all you need to do is to somehow cast (used loosely) it to a List<> and then do a .AsReadOnly() to get a ReadOnlyCollection of your custom collection type. As an example, if you have an IDictionary<int, string>, you can create a List<T> of this type with a wrapper class (KeyTitle in our case). 1: public IDictionary<int, string> MyDictionary { get; set; } 2:  3: public ReadOnlyCollection<KeyTitle> ReadonlyDictionary 4: { 5: get 6: { 7: return (from item in MyDictionary 8: select new KeyTitle 9: { 10: Key = item.Key, 11: Title = item.Value, 12: }).ToList().AsReadOnly(); 13: } 14: } Cool huh? 
Just one thing you need to know about the .AsReadOnly() method is that the only way to modify your ReadOnlyCollection<> is to modify the original collection. So doing: 1: public static void Main() 2: { 3: Class1 class1 = new Class1(); 4: class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" }); 5: class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" }); 6: class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" }); 7: class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" }); 8: TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly()); 9:  10: Console.WriteLine(); 11:  12: class1.MyKeyTitleList.Add(new KeyTitle { Key = 5, Title = "mno" }); 13: class1.MyKeyTitleList[2] = new KeyTitle{Key = 3, Title = "GHI"}; 14: TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly()); 15:  16: Console.ReadLine(); 17: } Gives me the output of: See that the third element's Title is changed to upper-case and the fifth element also gets displayed, even though we're still looping through the same ReadOnlyCollection<KeyTitle>. Verdict: Now you know of a way to implement 'Method(const param1)' in your code!
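As a footnote to the simple-property case mentioned at the top of the post, here is a minimal sketch of the read-only-interface idea (hypothetical types; this is one common way to do it, not necessarily the exact code from the linked article):

using System;

public interface IReadOnlyPerson
{
    string Name { get; } // getter only – callers cannot assign
}

public class Person : IReadOnlyPerson
{
    public string Name { get; set; } // still settable in your own code
}

public static class Greeter
{
    // The parameter is exposed through the read-only interface,
    // so this method cannot change the Name property.
    public static void Greet(IReadOnlyPerson person)
    {
        Console.WriteLine("Hello, " + person.Name);
        // person.Name = "other"; // would not compile
    }
}

Of course, a caller could still cast back to Person, so – much like the ReadOnlyCollection<> above – this guards against accidents rather than malice.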

    Read the article

< Previous Page | 530 531 532 533 534 535 536 537 538 539 540 541  | Next Page >