Search Results

Search found 12872 results on 515 pages for 'memory alignment'.


  • CodePlex Daily Summary for Tuesday, July 02, 2013

    CodePlex Daily Summary for Tuesday, July 02, 2013Popular ReleasesMastersign.Expressions: Mastersign.Expressions v0.4.2: added support for if(<cond>, <true-part>, <false-part>) fixed multithreading issue with rand() improved demo applicationNB_Store - Free DotNetNuke Ecommerce Catalog Module: NB_Store v2.3.6 Rel0: v2.3.6 Is now DNN6 and DNN7 compatible Important : During update this install with overwrite the menu.xml setting, if you have changed this then make a backup before you upgrade and reapply your changes after the upgrade. Please view the following documentation if you are installing and configuring this module for the first time System Requirements Skill requirements Downloads and documents Step by step guide to a working store Please ask all questions in the Discussions tab. Document.Editor: 2013.26: What's new for Document.Editor 2013.26: New Insert Chart Improved User Interface Minor Bug Fix's, improvements and speed upsWsus Package Publisher: Release V1.2.1307.01: Fix an issue in the UI, approvals are not shown correctly in the 'Report' tabDirectX Tool Kit: July 2013: July 1, 2013 VS 2013 Preview projects added and updates for DirectXMath 3.05 vectorcall Added use of sRGB WIC metadata for JPEG, PNG, and TIFF SaveToWIC functions updated with new optional setCustomProps parameter and error check with optional targetFormatCore Server 2012 Powershell Script Hyper-v Manager: new_root.zip: Verison 1.0JSON Toolkit: JSON Toolkit 4.1.736: Improved strinfigy performance New serializing feature New anonymous type support in constructorsDotNetNuke® IFrame: IFrame 04.05.00: New DNN6/7 Manifest file and Azure Compatibility.VidCoder: 1.5.2 Beta: Fixed crash on presets with an invalid bitrate.Gardens Point LEX: Gardens Point LEX version 1.2.1: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. ChangesVersion 1.2.1 has new facilities for defining and manipulating character classes. These changes make the construction of large Unicode character classes more convenient. The runtime code for performing automaton backup has been re-implemented, and is now faster for scanners that need backup. Source CodeThe distribution contains a complete VS2010 project for the appli...ZXMAK2: Version 2.7.5.7: - fix TZX emulation (Bruce Lee, Zynaps) - fix ATM 16 colors for border - add memory module PROFI 512K; add PROFI V03 rom image; fix PROFI 3.XX configTwitter image Downloader: Twitter Image Downloader 2 with Installer: Application file with Install shield and Dot Net 4.0 redistributableUltimate Music Tagger: Ultimate Music Tagger 1.0.0.0: First release of Ultimate Music TaggerBlackJumboDog: Ver5.9.2: 2013.06.28 Ver5.9.2 (1) ??????????(????SMTP?????)?????????? (2) HTTPS???????????Outlook 2013 Add-In: Configuration Form: This new version includes the following changes: - Refactored code a bit. - Removing configuration from main form to gain more space to display items. - Moved configuration to separate form. You can click the little "gear" icon to access the configuration form (still very simple). - Added option to show past day appointments from the selected day (previous in time, that is). - Added some tooltips. 
You will have to uninstall the previous version (add/remove programs) if you had installed it ...Terminals: Version 3.0 - Release: Changes since version 2.0:Choose 100% portable or installed version Removed connection warning when running RDP 8 (Windows 8) client Fixed Active directory search Extended Active directory search by LDAP filters Fixed single instance mode when running on Windows Terminal server Merged usage of Tags and Groups Added columns sorting option in tables No UAC prompts on Windows 7 Completely new file persistence data layer New MS SQL persistence layer (Store data in SQL database)...NuGet: NuGet 2.6: Released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6Python Tools for Visual Studio: 2.0 Beta: We’re pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, and cross platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...Player Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for Windows Azure Media Services Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. (source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. Some videos would cause currentTime to be infinity which could cause errors in plugins expectin...AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. 
The server pack is ready for both Windows and Linux, but you might need to compi...New ProjectsALM Rangers DevOps Tooling and Guidance: Practical tooling and guidance that will enable teams to realize a faster deployment based on continuous feedback.Core Server 2012 Powershell Script Hyper-v Manager: Free core Server 2012 powershell scripts and batch files that replace the non-existent hyper-v manager, vmconnect and mstsc.Enhanced Deployment Service (EDS): EDS is a web service based utility designed to extend the deployment capabilities of administrators with the Microsoft Deployment Toolkit.ExtendedDialogBox: Libreria DialogBoxJazdy: This project is here only because we wanted to take advantage of a public git server.Mon Examen: This web interface is meant to make examinationsneet: summaryOrchard Multi-Choice Voting: A multiple choice voting Orchard module.Particle Swarm Optimization Solving Quadratic Assignment Problem: This project is submitted for the solving of QAP using PSO algorithms with addition of some modification Porjects: 23123123PPL Power Pack: PPL Power PackProperty Builder: Visual Studio tool for speeding up process of coding class properties getters and setters.RedRuler for Redline: I tried some on-screen rulers, none of them help me measure the UI element quickly based on the Redline. So I decided to created this handy RedRuler tool. Royale Living: Mahindra Royale Community PortalSearch and booking Hotel or Tours: Ð? án nghiên c?u c?a sinh viên tdt theo mô hình mvc 4SystemBuilder.Show: This tool is a helper after you create your project in visual studio to create the respective objects and interface. TalentDesk: new ptojectTcmplex: The Training Center teaches many different kind of course such as English, French, Computer hardware and computer softwareTFS Reporting Guide: Provides guidance and samples to enable TFS users to generate reports based on WIT data.Umbraco AdaptiveImages: Adaptive Images Package for UmbracoVirtualNet - A ILcode interpreter/emulator written in C++/Assembly: VirtualNet is a interpreter/emulator for running .net code in native without having to install the .Net FrameWorkVisual Blocks: Visual Blocks ????IDE ????? ??????? ????? ????/?? Visual Studio and Cloud Based Mobile Device Testing: Practical guidance enabling field to remove blockers to adoption and to use and extend the Perfecto Mobile Cloud Device testing within the context of VS.Windows 8 Time Picker for Windows Phone: A Windows Phone implementation of the Time Picker control found in the Windows 8.1 Alarms app.???? - SmallBasic?: ?????????

    Read the article

  • DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

    - by Argenis
      Does your CHECKDB hurt, Argenis? There is a classic blog series by Paul Randal [blog|twitter] called “CHECKDB From Every Angle” which is pretty much mandatory reading for anybody who’s even remotely considering going for the MCM certification, or its replacement (the Microsoft Certified Solutions Master: Data Platform – makes my fingers hurt just from typing it). Of particular interest is the post “Consistency Options for a VLDB” – on it, Paul provides solid, timeless advice (I use the word “timeless” because it was written in 2007, and it all applies today!) on how to perform checks on very large databases. Well, here I was trying to figure out how to make CHECKDB run faster on a restored copy of one of our databases, which happens to exceed 7TB in size. The whole thing was taking several days on multiple systems, regardless of the storage used – SAS, SATA or even SSD…and I actually didn’t pay much attention to how long it was taking, or even bothered to look at the reasons why - as long as it was finishing okay and found no consistency errors. Yes – I know. That was a huge mistake, as corruption found in a database several days after taking place could only allow for further spread of the corruption – and potentially large data loss. In the last two weeks I increased my attention towards this problem, as we noticed that CHECKDB was taking EVEN LONGER on brand new all-flash storage in the SAN! I couldn’t really explain it, and were almost ready to blame the storage vendor. The vendor told us that they could initially see the server driving decent I/O – around 450Mb/sec, and then it would settle at a very slow rate of 10Mb/sec or so. “Hum”, I thought – “CHECKDB is just not pushing the I/O subsystem hard enough”. Perfmon confirmed the vendor’s observations. Dreaded @BlobEater What was CHECKDB doing all the time while doing so little I/O? Eating Blobs. It turns out that CHECKDB was taking an extremely long time on one of our frankentables, which happens to be have 35 billion rows (yup, with a b) and sucks up several terabytes of space in the database. We do have a project ongoing to purge/split/partition this table, so it’s just a matter of time before we deal with it. But the reality today is that CHECKDB is coming to a screeching halt in performance when dealing with this particular table. Checking sys.dm_os_waiting_tasks and sys.dm_os_latch_stats showed that LATCH_EX (DBCC_OBJECT_METADATA) was by far the top wait type. I remembered hearing recently about that wait from another post that Paul Randal made, but that was related to computed-column indexes, and in fact, Paul himself reminded me of his article via twitter. But alas, our pathologic table had no non-clustered indexes on computed columns. I knew that latches are used by the database engine to do internal synchronization – but how could I help speed this up? After all, this is stuff that doesn’t have a lot of knobs to tweak. (There’s a fantastic level 500 talk by Bob Ward from Microsoft CSS [blog|twitter] called “Inside SQL Server Latches” given at PASS 2010 – and you can check it out here. DISCLAIMER: I assume no responsibility for any brain melting that might ensue from watching Bob’s talk!) 
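If you want to run the same check on your own system while CHECKDB is executing, the two DMVs the author mentions can be queried roughly like this (a minimal sketch, not part of the original post; filter or join further as needed):

    -- Sketch: what is the CHECKDB session currently waiting on?
    SELECT session_id, wait_type, wait_duration_ms, resource_description
    FROM sys.dm_os_waiting_tasks
    WHERE wait_type LIKE 'LATCH%';

    -- Sketch: which latch classes have accumulated the most wait time since startup?
    SELECT latch_class, waiting_requests_count, wait_time_ms
    FROM sys.dm_os_latch_stats
    ORDER BY wait_time_ms DESC;

In the scenario described here, DBCC_OBJECT_METADATA showed up as the dominant latch class in the second result set.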
Failed Hypotheses Earlier on this week I flew down to Palo Alto, CA, to visit our Headquarters – and after having a great time with my Monkey peers, I was relaxing on the plane back to Seattle watching a great talk by SQL Server MVP and fellow MCM Maciej Pilecki [twitter] called “Masterclass: A Day in the Life of a Database Transaction” where he discusses many different topics related to transaction management inside SQL Server. Very good stuff, and when I got home it was a little late – that slow DBCC CHECKDB that I had been dealing with was way in the back of my head. As I was looking at the problem at hand earlier on this week, I thought “How about I set the database to read-only?” I remembered one of the things Maciej had (jokingly) said in his talk: “if you don’t want locking and blocking, set the database to read-only” (or something to that effect, pardon my loose memory). I immediately killed the CHECKDB which had been running painfully for days, and set the database to read-only mode. Then I ran DBCC CHECKDB against it. It started going really fast (even a bit faster than before), and then throttled down again to around 10Mb/sec. All sorts of expletives went through my head at the time. Sure enough, the same latching scenario was present. Oh well. I even spent some time trying to figure out if NUMA was hurting performance. Folks on Twitter made suggestions in this regard (thanks, Lonny! [twitter]) …Eureka? This past Friday I was still scratching my head about the whole thing; I was ready to start profiling with XPERF to see if I could figure out which part of the engine was to blame and then get Microsoft to look at the evidence. After getting a bunch of good news I’ll blog about separately, I sat down for a figurative smack down with CHECKDB before the weekend. And then the light bulb went on. A sparse column. I thought that I couldn’t possibly be experiencing the same scenario that Paul blogged about back in March showing extreme latching with non-clustered indexes on computed columns. Did I even have a non-clustered index on my sparse column? As it turns out, I did. I had one filtered non-clustered index – with the sparse column as the index key (and only column). To prove that this was the problem, I went and setup a test. Yup, that'll do it The repro is very simple for this issue: I tested it on the latest public builds of SQL Server 2008 R2 SP2 (CU6) and SQL Server 2012 SP1 (CU4). First, create a test database and a test table, which only needs to contain a sparse column: CREATE DATABASE SparseColTest; GO USE SparseColTest; GO CREATE TABLE testTable (testCol smalldatetime SPARSE NULL); GO INSERT INTO testTable (testCol) VALUES (NULL); GO 1000000 That’s 1 million rows, and even though you’re inserting NULLs, that’s going to take a while. In my laptop, it took 3 minutes and 31 seconds. Next, we run DBCC CHECKDB against the database: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; This runs extremely fast, as least on my test rig – 198 milliseconds. Now let’s create a filtered non-clustered index on the sparse column: CREATE NONCLUSTERED INDEX [badBadIndex] ON testTable (testCol) WHERE testCol IS NOT NULL; With the index in place now, let’s run DBCC CHECKDB one more time: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; In my test system this statement completed in 11433 milliseconds. 11.43 full seconds. Quite the jump from 198 milliseconds. 
I went ahead and dropped the filtered non-clustered indexes on the restored copy of our production database, and ran CHECKDB against that. We went down from 7+ days to 19 hours and 20 minutes. Cue the “Argenis is not impressed” meme, please, Mr. LaRock. My pain is your gain, folks. Go check to see if you have any such indexes – they’re likely causing your consistency checks to run very, very slowly. Happy CHECKDBing, -Argenis ps: I plan to file a Connect item for this issue – I consider it a pretty serious bug in the engine. After all, filtered indexes were invented BECAUSE of the sparse column feature – and it makes a lot of sense to use them together. Watch this space and my twitter timeline for a link.
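To follow the author's advice and audit your own databases, a catalog query along these lines lists filtered non-clustered indexes keyed on a sparse column (a minimal sketch, not from the original post; run it per database and adapt as needed):

    SELECT OBJECT_SCHEMA_NAME(i.object_id) AS [schema],
           OBJECT_NAME(i.object_id)        AS [table],
           i.name                          AS [index],
           c.name                          AS [sparse_column]
    FROM sys.indexes i
    JOIN sys.index_columns ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
    JOIN sys.columns c        ON c.object_id = ic.object_id AND c.column_id = ic.column_id
    WHERE i.type = 2                 -- nonclustered
      AND i.has_filter = 1           -- filtered index
      AND ic.is_included_column = 0  -- key column, not INCLUDE
      AND c.is_sparse = 1;           -- sparse column

Any rows returned are candidates for the slowdown described above.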

    Read the article

  • T4 Performance Counters explained

    - by user13346607
    Now that T4 has been out for a few months, some people might have wondered what details of the new pipeline you can monitor. A "cpustat -h" lists a lot of events that can be monitored, and only very few are self-explanatory. I will try to give some insight on all of them; some of these "PIC events" require an in-depth knowledge of the T4 pipeline. Over time I will try to explain these; for the time being, those events should simply be ignored. (Side note: some counters changed from tape-out 1.1 (*only* used in the T4 beta program) to tape-out 1.2 (used in the systems shipping today). The table only lists the tape-out 1.2 counters.)

    Below, each entry gives the pic name (as listed by cpustat) followed by a prose comment:

    Sel-pipe-drain-cycles, Sel-0-[wait|ready], Sel-[1,2]: Sel-0-wait counts cycles a strand waits to be selected. Some reasons can be counted in detail; these are: Sel-0-ready: cycles a strand was ready but not selected, which can signal pipeline oversubscription. Sel-1: cycles only one instruction or µop was selected. Sel-2: cycles two instructions or µops were selected. Sel-pipe-drain-cycles: cf. PRM footnote 8 to table 10.2.
    Pick-any, Pick-[0|1|2|3]: Cycles one, two, three, no, or at least one instruction or µop is picked.
    Instr_FGU_crypto: Number of FGU or crypto instructions executed on that vcpu.
    Instr_ld: dto. for loads.
    Instr_st: dto. for stores.
    SPR_ring_ops: dto. for SPR ring ops.
    Instr_other: dto. for all other instructions not listed above; PRM footnote 7 to table 10.2 lists the instructions.
    Instr_all: Total number of instructions executed on that vcpu.
    Sw_count_intr: Nr of S/W count instructions on that vcpu (sethi %hi(fc000),%g0 (whatever that is)).
    Atomics: Nr of atomic ops, which are LDSTUB/a, CASA/XA, and SWAP/A.
    SW_prefetch: Nr of PREFETCH or PREFETCHA instructions.
    Block_ld_st: Block loads or stores on that vcpu.
    IC_miss_nospec, IC_miss_[L2_or_L3|local|remote]_hit_nospec: Various I$ misses, distinguished by where they hit. All of these count per thread, but only primary events: T4 counts only the first occurrence of an I$ miss on a core for a certain instruction. If one strand misses in I$ this miss is counted, but if a second strand on the same core misses while the first miss is being resolved, that second miss is not counted. This flavour of I$ misses counts only misses that are caused by instructions that really commit (note the "_nospec").
    BTC_miss: Branch target cache misses.
    ITLB_miss: ITLB misses (synchronously counted).
    ITLB_miss_asynch: dto. but asynchronously.
    [I|D]TLB_fill_[8KB|64KB|4MB|256MB|2GB|trap]: H/W tablewalk events that fill the ITLB or DTLB with translations for the corresponding page size. The "_trap" event occurs if the HWTW was not able to fill the corresponding TLB.
    IC_mtag_miss, IC_mtag_miss_[ptag_hit|ptag_miss|ptag_hit_way_mismatch]: I$ micro tag misses, with some options for drill down.
    Fetch-0, Fetch-0-all: Fetch-0 counts nr of cycles nothing was fetched for this particular strand; Fetch-0-all counts cycles nothing was fetched for all strands on a core.
    Instr_buffer_full: Cycles the instruction buffer for a strand was full, thereby preventing any fetch.
    BTC_targ_incorrect: Counts all occurrences of wrongly predicted branch targets from the BTC.
    [PQ|ROB|LB|ROB_LB|SB|ROB_SB|LB_SB|RB_LB_SB|DTLB_miss]_tag_wait: ST_q_tag_wait is listed under sl=20. These counters monitor pipeline behaviour, therefore they are not strand specific: PQ_...: cycles the Rename stage waits for a Pick Queue tag (might signal a memory-bound workload for single thread mode, cf. mail from Richard Smith). ROB_...: cycles the Select stage waits for a ROB (ReOrderBuffer) tag. LB_...: cycles the Select stage waits for a Load Buffer tag. SB_...: cycles the Select stage waits for a Store Buffer tag. Combinations of the above are allowed; although some of these events can overlap, the counter will only be incremented once per cycle if any of these occur. DTLB_...: cycles load or store instructions wait at the Pick stage for a DTLB miss tag.
    [ID]TLB_HWTW_[L2_hit|L3_hit|L3_miss|all]: Counters for HWTW accesses caused by either DTLB or ITLB misses. Can be further detailed by where they hit.
    IC_miss_L2_L3_hit, IC_miss_local_remote_remL3_hit, IC_miss: I$ prefetches that were dropped because they either miss in L2$ or L3$. This variant counts misses regardless of whether the causing instruction commits or not.
    DC_miss_nospec, DC_miss_[L2_L3|local|remote_L3]_hit_nospec: D$ misses, either in general or detailed by where they hit; cf. the explanation for the IC_miss in two flavours for an explanation of _nospec and the reasoning for two DC_miss counters.
    DTLB_miss_asynch: Counts all DTLB misses asynchronously; there is no way to count them synchronously.
    DC_pref_drop_DC_hit, SW_pref_drop_[DC_hit|buffer_full]: L1-D$ h/w prefetches that were dropped because of a D$ hit, counted per core. The others count software prefetches per strand.
    [Full|Partial]_RAW_hit_st_[buf|q]: Count events where a load wants to get data that has not yet been stored, i.e. it is still inside the pipeline. The data might be either still in the store buffer or in the store queue. If the load's data matches in the SB and in the store queue, the data in the buffer takes precedence of course, since it is younger.
    [IC|DC]_evict_invalid, [IC|DC|L1]_snoop_invalid, [IC|DC|L1]_invalid_all: Counters for invalidated cache evictions per core.
    St_q_tag_wait: Number of cycles the pipeline waits for a store queue tag, of course counted per core.
    Data_pref_[drop_L2|drop_L3|hit_L2|hit_L3|hit_local|hit_remote]: Data prefetches that can be further detailed by either why they were dropped or where they did hit.
    St_hit_[L2|L3], St_L2_[local|remote]_C2C, St_local, St_remote: Store events distinguished by where they hit or where they cause an L2 cache-to-cache transfer, i.e. either a transfer from another L2$ on the same die or from a different die.
    DC_miss, DC_miss_[L2_L3|local|remote]_hit: D$ misses, either in general or detailed by where they hit; cf. the explanation for the IC_miss in two flavours for an explanation of _nospec and the reasoning for two DC_miss counters.
    L2_[clean|dirty]_evict: Per core clean or dirty L2$ evictions.
    L2_fill_buf_full, L2_wb_buf_full, L2_miss_buf_full: Per core L2$ buffer events; all count the number of cycles that this state was present.
    L2_pipe_stall: Per core cycles the pipeline stalled because of L2$.
    Branches: Counts branches (Tcc, DONE, RETRY, and SIT are not counted as branches).
    Br_taken: Counts taken branches (Tcc, DONE, RETRY, and SIT are not counted as branches).
    Br_mispred, Br_dir_mispred, Br_trg_mispred, Br_trg_mispred_[far_tbl|indir_tbl|ret_stk]: Counters for various branch misprediction events.
    Cycles_user: Counts cycles; the attribute settings hpriv, nouser, sys control the address space to count in.
    Commit-[0|1|2], Commit-0-all, Commit-1-or-2: Number of times either no, one, or two µops commit for a strand. Commit-0-all counts the number of times no µop commits for the whole core; cf. footnote 11 to table 10.2 in the PRM for a more detailed explanation of how this counter interacts with the privilege levels.

    Read the article

  • Persisting Session Between Different Browser Instances

    - by imran_ku07
    Introduction:
    By default, an in-proc session's identifier cookie is saved in browser memory. This cookie is known as a non-persistent cookie identifier, which simply means that if the user closes the browser, the cookie is immediately removed. Cookies that are stored on the user's hard drive and can be reused on later visits, on the other hand, are called persistent cookies. Persistent cookies are used less often than non-persistent cookies because of security: non-persistent cookies make session hijacking attacks more difficult and more limited. If you are using a shared computer, there is a good chance that your persistent session will be reused by other users of that machine. Still, many users want their session to persist even when they open two instances of the same browser, or when they close the browser and open a new one. So in this article I will show a very simple way to persist your session even after the browser is closed.

    Description:
    Let's create a simple ASP.NET Web Application. In this article I will use Web Forms, but the same approach also works in MVC. Open Default.aspx.cs and add the following code to Page_Load:

    protected void Page_Load(object sender, EventArgs e)
    {
        if (Session["Message"] != null)
            Response.Write(Session["Message"].ToString());
        Session["Message"] = "Hello, Imran";
    }

    This page simply shows a message if a session existed previously, and then sets the session value. Now run the application: you will see an empty page on the first try. After refreshing the page you will see the message "Hello, Imran". Now close the browser and reopen it, or open another browser instance, and you will get exactly the same behavior as when you ran the application for the first time. Why is the session not persisted between browser instances? The simple reason is the non-persistent session cookie identifier: the session cookie is not shared between browser instances. Now let's make it persistent. To make your application share the session between different browser instances, add the following code to global.asax:

    protected void Application_PostMapRequestHandler(object sender, EventArgs e)
    {
        if (Request.Cookies["ASP.NET_SessionIdTemp"] != null)
        {
            if (Request.Cookies["ASP.NET_SessionId"] == null)
                Request.Cookies.Add(new HttpCookie("ASP.NET_SessionId", Request.Cookies["ASP.NET_SessionIdTemp"].Value));
            else
                Request.Cookies["ASP.NET_SessionId"].Value = Request.Cookies["ASP.NET_SessionIdTemp"].Value;
        }
    }

    protected void Application_PostRequestHandlerExecute(object sender, EventArgs e)
    {
        HttpCookie cookie = new HttpCookie("ASP.NET_SessionIdTemp", Session.SessionID);
        cookie.Expires = DateTime.Now.AddMinutes(Session.Timeout);
        Response.Cookies.Add(cookie);
    }

    This code states that during Application_PostRequestHandlerExecute (which runs after the HttpHandler), we add a persistent cookie, ASP.NET_SessionIdTemp, which contains the current user's SessionID and expires after the current session timeout. In Application_PostMapRequestHandler (which runs just before the session is restored), we check whether the request contains the ASP.NET_SessionIdTemp cookie. If it does, we add or update the ASP.NET_SessionId cookie with the value of ASP.NET_SessionIdTemp. So whenever a new browser instance is opened, the ASP.NET_SessionId cookie is recreated from ASP.NET_SessionIdTemp. Run your application again and you will get the session of the last closed browser (if it has not expired).

    Summary:
    A persistent session is a great way to improve usability, but always consider the security implications before doing this. Still, there are cases in which you might need a persistent session, and in this article I showed a simple way to achieve it. Hopefully you will enjoy this simple article too.
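    One caveat related to the security note in the summary: if you do persist the session identifier, it is worth hardening the extra cookie. A minimal sketch (an addition to the handler above, not part of the original article; HttpOnly and Secure are standard HttpCookie properties):

    protected void Application_PostRequestHandlerExecute(object sender, EventArgs e)
    {
        HttpCookie cookie = new HttpCookie("ASP.NET_SessionIdTemp", Session.SessionID);
        cookie.Expires = DateTime.Now.AddMinutes(Session.Timeout);
        cookie.HttpOnly = true;                      // not readable by client-side script
        cookie.Secure = Request.IsSecureConnection;  // only sent over HTTPS when the site uses it
        Response.Cookies.Add(cookie);
    }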

    Read the article

  • Azure Diagnostics: The Bad, The Ugly, and a Better Way

    - by jasont
    If you’re a .Net web developer today, no doubt you’ve enjoyed watching Windows Azure grow up over the past couple of years. The platform has scaled, stabilized (mostly), and added on a slew of great (and sometimes overdue) features. What was once just an endpoint to host a solution, developers today have tremendous flexibility and options in the platform. Organizations are building new solutions and offerings on the platform, and others have, or are in the process of, migrating existing applications out of their own data centers into the Azure cloud. Whether new application development or migrating legacy, every development shop and IT organization needs to monitor their applications in the cloud, the same as they do on premises. Azure Diagnostics has some capabilities, but what I constantly hear from users is that it’s either (a) not enough, or (b) too cumbersome to set up. Today, Stackify is happy to announce that we fully support Azure deployments, just the same as your on-premises deployments. Let’s take a look below and compare and contrast the options. Azure Diagnostics Let’s crack open the Windows Azure documentation on Azure Diagnostics and see just how easy it is to use. The high level steps are:   Step 1: Import the Diagnostics Oh, I’ve already deployed my app without the diagnostics module. Guess I can’t do anything until I do this and re-deploy. Step 2: Configure the Diagnostics (and multiple sub-steps) Do I want it all? Or just pieces of it? Whoops, forgot to include a specific performance counter, I guess I’ll have to deploy again. Wait a minute… I have to specifically code these performance counters into my role’s OnStart() method, compile and deploy again? And query and consume it myself? Step 3: (Optional) Permanently store diagnostic data Lucky for me, Azure storage has gotten pretty cheap. But how often should I move the data into storage? I want to see real-time data, so I guess that’s out now as well. Step 4: (Optional) View stored diagnostic data Optional? Of course I want to see it. Conveniently, Microsoft recommends 3 tools to do this with. Un-conveniently, none of these are web based and they all just give you access to raw data, and very little charting or real-time intelligence. Just….. data. Nevermind that one product seems to have gotten stale since a recent acquisition, and doesn’t even have screenshots!   So, let’s summarize: lots of diagnostics data is available, but think realistically. Think Dev Ops. What happens when you are in the middle of a major production performance issue and you don’t have the diagnostics you need? You are redeploying an application (and thankfully you have a great branching strategy, so you feel perfectly safe just willy-nilly launching code into prod, don’t you?) to get data, then shipping it to storage, and then digging through that data to find a needle in a haystack. Would you like to be able to troubleshoot a performance issue in the middle of the night, or on a weekend, from your iPad or home computer’s web browser? Forget it: the best you get is this spark line in the Azure portal. If it’s real pointy, you probably have an issue; but since there is no alert based on a threshold your customers have likely already let you know. And high CPU, Memory, I/O, or Network doesn’t tell you anything about where the problem is. The Better Way – Stackify Stackify supports application and server monitoring in real time, all through a great web interface. 
All of the things that Azure Diagnostics provides, Stackify provides for your on-premises deployments, and you don’t need to know ahead of time that you’ll need it. It’s always there, it’s always on. Azure deployments are essentially no different than on-premises. It’s a Windows Server (or Linux) in the cloud. It’s behind a different firewall than your corporate servers. That’s it. Stackify can provide the same powerful tools to your Azure deployments in two simple steps. Step 1 Add a startup task to your web or worker role and deploy. If you can’t deploy and need it right now, no worries! Remote Desktop to the Azure instance and you can execute a Powershell script to download / install Stackify.   Step 2 Log in to your account at www.stackify.com and begin monitoring as much as you want, as often as you want and see the results instantly. WMI? It’s there Event Viewer? You’ve got it. File System Access? Yes, please! Would love to make sure my web.config is correct.   IIS / App Pool Info? Yep. You can even restart it. Running Services? All of them. Start and Stop them to your heart’s content. SQL Database access? You bet’cha. Alerts and Notification? Of course! You should know before your customers let you know. … and so much more.   Conclusion Microsoft has shown, consistently, that they love developers, developers, developers. What every developer needs to realize from this is that they’ve given you a canvas, which is exactly what Azure is. It’s great infrastructure that is readily available, easy to manage, and fairly cost effective. However, the tooling is your responsibility. What you get, at best, is bare bones. App and server diagnostics should be available when you need them. While we, as developers, try to plan for and think of everything ahead of time, there will come times where we need to get data that just isn’t available. And having to go through a lot of cumbersome steps to get that data, and then have to find a friendlier way to consume it…. well, that just doesn’t make a lot of sense to me. I’d rather spend my time writing and developing features and completing bug fixes for my applications, than to be writing code to monitor and diagnose.
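As a footnote to the Step 2 complaint above, the hand-written OnStart() wiring being described usually looks something like the following (a hedged sketch based on the Azure SDK 1.x diagnostics API of that era; the counter and sample rate are illustrative):

    using System;
    using Microsoft.WindowsAzure.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Every counter has to be listed here ahead of time; adding one later
            // means another code change and redeploy, which is the pain point above.
            var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
            config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
            {
                CounterSpecifier = @"\Processor(_Total)\% Processor Time",
                SampleRate = TimeSpan.FromSeconds(30)
            });
            config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
            DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
            return base.OnStart();
        }
    }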

    Read the article

  • Create and Backup Multiple Profiles in Google Chrome

    - by Asian Angel
    Other browsers such as Firefox and SeaMonkey allow you to have multiple profiles but not Chrome…at least not until now. If you want to use multiple profiles and create backups for them then join us as we look at Google Chrome Backup. Note: There is a paid version of this program available but we used the free version for our article. Google Chrome Backup in Action During the installation process you will run across this particular window. It will have a default user name filled in as shown here…you will not need to do anything except click on Next to continue installing the program. When you start the program for the first time this is what you will see. Your default Chrome Profile will already be visible in the window. A quick look at the Profile Menu… In the Tools Menu you can go ahead and disable the Start program at Windows Startup setting…the only time that you will need the program running is if you are creating or restoring a profile. When you create a new profile the process will start with this window. You can access an Advanced Options mode if desired but most likely you will not need it. Here is a look at the Advanced Options mode. It is mainly focused on adding Switches to the new Chrome Shortcut. The drop-down menu for the Switches available… To create your new profile you will need to choose: A profile location A profile name (as you type/create the profile name it will automatically be added to the Profile Path) Make certain that the Create a new shortcut to access new profile option is checked For our example we decided to try out the Disable plugins switch option… Click OK to create the new profile. Once you have created your new profile, you will find a new shortcut on the Desktop. Notice that the shortcut’s name will be Google Chrome + profile name that you chose. Note: On our system we were able to move the new shortcut to the “Start Menu” without problems. Clicking on our new profile’s shortcut opened up a fresh and clean looking instance of Chrome. Just out of curiosity we did decide to check the shortcut to see if the Switch set up correctly. Unfortunately it did not in this instance…so your mileage with the Switches may vary. This was just a minor quirk and nothing to get excited or upset over…especially considering that you can create multiple profiles so easily. After opening up our default profile of Chrome you can see the individual profile icons (New & Default in order) sitting in the Taskbar side-by-side. And our two profiles open at the same time on our Desktop… Backing Profiles Up For the next part of our tests we decided to create a backup for each of our profiles. Starting the wizard will allow you to choose between creating or restoring a profile. Note: To create or restore a backup click on Run Wizard. When you reach the second part of the process you can go with the Backup default profile option or choose a particular one from a drop-down list using the Select a profile to backup option. We chose to backup the Default Profile first… In the third part of the process you will need to select a location to save the profile to. Once you have selected the location you will see the Target Path as shown here. You can choose your own name for the backup file…we decided to go with the default name instead since it contained the backup’s calendar date. A very nice feature is the ability to have the cache cleared before creating the backup. We clicked on Yes…choose the option that best suits your needs. 
Once you have chosen either Yes or No the backup will then be created. Click Finish to complete the process. The backup file for our Default Profile at 14.0 MB in size. And the backup file for our Chrome Fresh Profile…2.81 MB. Restoring Profiles For the final part of our tests we decided to do a Restore. Select Restore and click Next to get the process started. In the second step you will need to browse for the Profile Backup File (and select the desired profile if you have created multiples). For our example we decided to overwrite the original Default Profile with the Chrome Fresh Profile. The third step lets you choose where to restore the chosen profile to…you can go with the Default Profile or choose one from the drop-down list using the Restore to a selected profile option. The final step will get you on your way to restoring the chosen profile. The program will conduct a check regarding the previous/old profile and ask if you would like to proceed with overwriting it. Definitely nice in case you change your mind at the last moment. Clicking Yes will finish the restoration. The only other odd quirk that we noticed while using the program was that the Next Button did not function after restoring the profile. You can easily get around the problem by clicking to close the window. Which one is which? After the restore process we had identical twins. Conclusion If you have been looking for a way to create multiple profiles in Google Chrome, then you might want to add this program to your system. Links Download Google Chrome Backup Similar Articles Productive Geek Tips Backup and Restore Firefox Profiles EasilyBackup Different Browsers Easily with FavBackupBackup Your Browser with the New FavBackupStupid Geek Tricks: Compare Your Browser’s Memory Usage with Google ChromeHow to Make Google Chrome Your Default Browser TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips Acronis Online Backup DVDFab 6 Revo Uninstaller Pro Registry Mechanic 9 for Windows Tech Fanboys Field Guide Check these Awesome Chrome Add-ons iFixit Offers Gadget Repair Manuals Online Vista style sidebar for Windows 7 Create Nice Charts With These Web Based Tools Track Daily Goals With 42Goals

    Read the article

  • Profiling Startup Of VS2012 &ndash; SpeedTrace Profiler

    - by Alois Kraus
    SpeedTrace is a relatively unknown profiler made a company called Ipcas. A single professional license does cost 449€+VAT. For the test I did use SpeedTrace 4.5 which is currently Beta. Although it is cheaper than dotTrace it has by far the most options to influence how profiling does work. First you need to create a tracing project which does configure tracing for one process type. You can start the application directly from the profiler or (much more interesting) it does attach to a specific process when it is started. For this you need to check “Trace the specified …” radio button and enter the process name in the “Process Name of the Trace” edit box. You can even selectively enable tracing for processes with a specific command line. Then you need to activate the trace project by pressing the Activate Project button and you are ready to start VS as usual. If you want to profile the next 10 VS instances that you start you can set the Number of Processes counter to e.g. 10. This is immensely helpful if you are trying to profile only the next 5 started processes. As you can see there are many more tabs which do allow to influence tracing in a much more sophisticated way. SpeedTrace is the only profiler which does not rely entirely on the profiling Api of .NET. Instead it does modify the IL code (instrumentation on the fly) to write tracing information to disc which can later be analyzed. This approach is not only very fast but it does give you unprecedented analysis capabilities. Once the traces are collected they do show up in your workspace where you can open the trace viewer. I do skip the other windows because this view is by far the most useful one. You can sort the methods not only by Wall Clock time but also by CPU consumption and wait time which none of the other products support in their views at the same time. If you want to optimize for CPU consumption sort by CPU time. If you want to find out where most time is spent you need Clock Total time and Clock Waiting. There you can directly see if the method did take long because it did wait on something or it did really execute stuff that did take so long. Once you have found a method you want to drill deeper you can double click on a method to get to the Caller/Callee view which is similar to the JetBrains Method Grid view. But this time you do see much more. In the middle is the clicked method. Above are the methods that call you and below are the methods that you do directly call. Normally you would then start digging deeper to find the end of the chain where the slow method worth optimizing is located. But there is a shortcut. You can press the magic   button to calculate the aggregation of all called methods. This is displayed in the lower left window where you can see each method call and how long it did take. There you can also sort to see if this call stack does only contain methods (e.g. WCF connect calls which you cannot make faster) not worth optimizing. YourKit has a similar feature where it is called Callees List. In the Functions tab you have in the context menu also many other useful analysis options One really outstanding feature is the View Call History Drilldown. When you select this one you get not a sum of all method invocations but a list with the duration of each method call. This is not surprising since SpeedTrace does use tracing to get its timings. There you can get many useful graphs how this method did behave over time. Did it become slower at some point in time or was only the first call slow? 
The diagrams and the list will tell you that. That is all fine but what should I do when one method call was slow? I want to see from where it was coming from. No problem select the method in the list hit F10 and you get the call stack. This is a life saver if you e.g. search for serialization problems. Today Serializers are used everywhere. You want to find out from where the 5s XmlSerializer.Deserialize call did come from? Hit F10 and you get the call stack which did invoke the 5s Deserialize call. The CPU timeline tab is also useful to find out where long pauses or excessive CPU consumption did happen. Click in the graph to get the Thread Stacks window where you can get a quick overview what all threads were doing at this time. This does look like the Stack Traces feature in YourKit. Only this time you get the last called method first which helps to quickly see what all threads were executing at this moment. YourKit does generate a rather long list which can be hard to go through when you have many threads. The thread list in the middle does not give you call stacks or anything like that but you see which methods were found most often executing code by the profiler which is a good indication for methods consuming most CPU time. This does sound too good to be true? I have not told you the best part yet. The best thing about this profiler is the staff behind it. When I do see a crash or some other odd behavior I send a mail to Ipcas and I do get usually the next day a mail that the problem has been fixed and a download link to the new version. The guys at Ipcas are even so helpful to log in to your machine via a Citrix Client to help you to get started profiling your actual application you want to profile. After a 2h telco I was converted from a hater to a believer of this tool. The fast response time might also have something to do with the fact that they are actively working on 4.5 to get out of the door. But still the support is by far the best I have encountered so far. The only downside is that you should instrument your assemblies including the .NET Framework to get most accurate numbers. You can profile without doing it but then you will see very high JIT times in your process which can severely affect the correctness of the measured timings. If you do not care about exact numbers you can also enable in the main UI in the Data Trace tab logging of method arguments of primitive types. If you need to know what files at which times were opened by your application you can find it out without a debugger. Since SpeedTrace does read huge trace files in its reader you should perhaps use a 64 bit machine to be able to analyze bigger traces as well. The memory consumption of the trace reader is too high for my taste. But they did promise for the next version to come up with something much improved.

    Read the article

  • CodePlex Daily Summary for Monday, June 02, 2014

    CodePlex Daily Summary for Monday, June 02, 2014Popular ReleasesPortable Class Library for SQLite: Portable Class Library for SQLite - 3.8.4.4: This pull request from mattleibow addresses an issue with custom function creation (define functions in C# code and invoke them from SQLite as id they where regular SQL functions). Impact: Xamarin iOSTweetinvi a friendly Twitter C# API: Tweetinvi 0.9.3.x: Timelines- Added all the parameters available from the Timeline Endpoints in Tweetinvi. - This is available for HomeTimeline, UserTimeline, MentionsTimeline // Simple query var tweets = Timeline.GetHomeTimeline(); // Create a parameter for queries with specific parameters var timelineParameter = Timeline.GenerateHomeTimelineRequestParameter(); timelineParameter.ExcludeReplies = true; timelineParameter.TrimUser = true; var tweets = Timeline.GetHomeTimeline(timelineParameter); Tweets- Add mis...Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...Tooltip Web Preview: ToolTip Web Preview: Version 1.0Database Helper: Release 1.0.0.0: First Release of Database HelperCoMaSy: CoMaSy1.0.2: !Contact Management SystemImage View Slider: Image View Slider: This is a .NET component. We create this using VB.NET. Here you can use an Image Viewer with several properties to your application form. We wish somebody to improve freely. Try this out! Author : Steven Renaldo Antony Yustinus Arjuna Purnama Putra Andre Wijaya P Martin Lidau PBK GENAP 2014 - TI UKDWAspose for Apache POI: Missing Features of Apache POI WP - v 1.1: Release contain the Missing Features in Apache POI WP SDK in Comparison with Aspose.Words for dealing with Microsoft Word. What's New ?Following Examples: Insert Picture in Word Document Insert Comments Set Page Borders Mail Merge from XML Data Source Moving the Cursor Feedback and Suggestions Many more examples are yet to come here. Keep visiting us. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.SEToolbox: 01.032.014 Release 1: Added fix when loading game Textures for icons causing 'Unable to read beyond the end of the stream'. Added new Resource Report, that displays all in game resources in a concise report. Added in temp directory cleaner, to keep excess files from building up. Fixed use of colors on the windows, to work better with desktop schemes. Adding base support for multilingual resources. This will allow loading of the Space Engineers resources to show localized names, and display localized date a...ClosedXML - The easy way to OpenXML: ClosedXML 0.71.2: More memory and performance improvements. Fixed an issue with pivot table field order.Vi-AIO SearchBar: Vi – AIO Search Bar: Version 1.0Top Verses ( Ayat Emas ): Binary Top Verses: This one is the bin folder of the component. the .dll component is inside.Traditional Calendar Component: Traditional Calender Converter: Duta Wacana Christian University This file containing Traditional Calendar Component and Demo Aplication that using Traditional Calendar Component. 
This component made with .NET Framework 4 and the programming language is C# .SQLSetupHelper: 1.0.0.0: First Stable Version of SQL SetupComposite Iconote: Composite Iconote: This is a composite has been made by Microsoft Visual Studio 2013. Requirement: To develop this composite or use this component in your application, your computer must have .NET framework 4.5 or newer.Magick.NET: Magick.NET 6.8.9.101: Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes: - Int/short Set methods of WritablePixelCollection are now unsigned. - The Q16 build no longer uses HDRI, switch to the new Q16-HDRI build if you need HDRI.fnr.exe - Find And Replace Tool: 1.7: Bug fixes Refactored logic for encoding text values to command line to handle common edge cases where find/replace operation works in GUI but not in command line Fix for bug where selection in Encoding drop down was different when generating command line in some cases. It was reported in: https://findandreplace.codeplex.com/workitem/34 Fix for "Backslash inserted before dot in replacement text" reported here: https://findandreplace.codeplex.com/discussions/541024 Fix for finding replacing...VG-Ripper & PG-Ripper: VG-Ripper 2.9.59: changes NEW: Added Support for 'GokoImage.com' links NEW: Added Support for 'ViperII.com' links NEW: Added Support for 'PixxxView.com' links NEW: Added Support for 'ImgRex.com' links NEW: Added Support for 'PixLiv.com' links NEW: Added Support for 'imgsee.me' links NEW: Added Support for 'ImgS.it' linksToolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2014.5.28): XrmToolbox improvement XrmToolBox updates (v1.2014.5.28)Fix connecting to a connection with custom authentication without saved password Tools improvement New tool!Solution Components Mover (v1.2014.5.22) Transfer solution components from one solution to another one Import/Export NN relationships (v1.2014.3.7) Allows you to import and export many to many relationships Tools updatesAttribute Bulk Updater (v1.2014.5.28) Audit Center (v1.2014.5.28) View Layout Replicator (v1.2014.5.28) Scrip...Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.10: Fix for Issue #20875 - echo switch doesn't work for CSS CSS should honor the SASS source-file comments JS should allow multi-line comment directivesNew Projects[ISEN] Rendu de projet Naughty3Dogs - Pong3D: Pong3D est un jeu qui reprend le principe classique du Pong en le portant dans un environnement 3D à l'aide du langage c# et du moteur Unity3DBootstrap for MVC: Bootstrap for MVC.F. A. Q. - Najczesciej zadawane pytania: FAQForuMvc: Technifutur short projecthomework456: no.iStoody: Studies organize solution, available through app for Windows and Windows Phone.liaoliao: ???????????Price Tracker: Allows a user to track prices based on parsed emailsRoslynEval: RoslynRx Hub: Rx Hub provides server side computation which initiate by subscriber requestSharepoint Online AppCache Reset: We are an IT resource company providing Virtual IT services and custom and opensource programs for everyday needs. UnitConversionLib : Smart Unit Conversion Library in C#: Conversion of units, arithmetic operation and parsing quantities with their units on run time. Smart unit converter and conversion lib for physical quantities,

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:
    · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)
    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:
    · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion
    · Crash Safe Slaves and Binlog
    · Optimized Row Based Replication
    · Replication Event Checksums
    · Time Delayed Replication
    These and many more are discussed in the “MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services” Developer Zone article. Back to the benchmark - details are as follows.
    Environment
    The test environment consisted of two Linux servers:
    · one running the replication master
    · one running the replication slave
    Only the slave was involved in the actual measurements, and was based on the following configuration:
    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB
    Test Procedure
    Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:
    CREATE TABLE `sbtest` (
       `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
       `k` int(10) unsigned NOT NULL DEFAULT '0',
       `c` char(120) NOT NULL DEFAULT '',
       `pad` char(60) NOT NULL DEFAULT '',
       PRIMARY KEY (`id`),
       KEY `k` (`k`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog was noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master.
Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest set k=k+1 WHERE id = <random row> In total, this generated 100,000 update statements to later replicate during the test itself. Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers> Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). Test Reset: The test-reset cycle was implemented as follows: · the slave was stopped · the slave data directory replaced with the previous backup · the slave restarted with the slave threads replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with identical set of queries but a different number of slave worker threads, enabling a fair comparison. The Test-Reset cycle was repeated 3 times for 0-24 number of workers and the QPS metric calculated and averaged for each worker count. MySQL Configuration The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT relay-log-info-repository=TABLE master-info-repository=TABLE As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads:    - current (i.e. single threaded) sequential mode    - 1 x IO thread and 1 x SQL thread    - SQL thread both reads and executes the events 1 worker thread:    - sequential mode    - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread    - coordinator reads the event and hands it to the worker who executes 2+ worker threads:    - parallel execution    - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads    - coordinator reads events and hands them to the workers who execute them Results Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results As the results above show, the configuration does not scale noticably from 5 to 9 worker threads. When configured with 10 worker threads however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution. · As expected, having more workers than schemas adds no visible benefit. 
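For anyone trying to reproduce the measurement loop described above, one iteration of the test cycle can be sketched on the slave roughly as follows (a hedged sketch, not from the original post; the worker count and binlog coordinates are illustrative and would be driven by the test harness):

    -- One measurement iteration (after the data directory has been restored and
    -- the replication position repointed, as described in the Test Reset step):
    STOP SLAVE;
    SET GLOBAL slave_parallel_workers = 10;  -- value swept between runs (0 to 24 workers)
    START SLAVE IO_THREAD;                   -- copy the 100k pending updates into the relay log
    -- ...wait for the relay log to catch up, then start the benchmark clock:
    START SLAVE SQL_THREAD;
    -- Block until all events up to the noted master binlog position are applied;
    -- 100,000 queries divided by the elapsed time gives the QPS metric.
    SELECT MASTER_POS_WAIT('mysql-bin.000002', 107);  -- illustrative file/position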
    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance:
    relay-log-info-repository=TABLE
    master-info-repository=TABLE
    For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.
    Conclusion
    As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don’t hesitate to comment on this or other replication blogs with feedback and questions.
    Appendix – Detailed Results

    Read the article

  • How to access GNU Xnee

    - by Gaurav Butola
    I have installed GNU Xnee (Gnee an OS X automator alternative) from the Software Centre but now I cant find it anywhere in the menus. Here is the output when I run gnee in the terminal gaurav@gaurav-HCL-ME-Laptop:~$ gnee (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated *** glibc detected *** gnee: free(): invalid next size (fast): 0x08afb638 *** ======= Backtrace: ========= /lib/libc.so.6(+0x6c501)[0x53de501] /lib/libc.so.6(+0x6dd70)[0x53dfd70] /lib/libc.so.6(cfree+0x6d)[0x53e2e5d] gnee[0x804c9f5] /lib/libc.so.6(__libc_start_main+0xe7)[0x5388ce7] gnee[0x804c571] ======= Memory map: ======== 00110000-00112000 r-xp 00000000 08:01 2755679 /usr/lib/libgmodule-2.0.so.0.2600.0 00112000-00113000 r--p 00002000 08:01 2755679 /usr/lib/libgmodule-2.0.so.0.2600.0 00113000-00114000 rw-p 00003000 08:01 2755679 /usr/lib/libgmodule-2.0.so.0.2600.0 00116000-0011a000 r-xp 00000000 08:01 2755370 /usr/lib/libXtst.so.6.1.0 0011a000-0011b000 r--p 00003000 08:01 2755370 /usr/lib/libXtst.so.6.1.0 0011b000-0011c000 rw-p 00004000 08:01 2755370 /usr/lib/libXtst.so.6.1.0 0011c000-00176000 r-xp 00000000 08:01 2755432 /usr/lib/libbonoboui-2.so.0.0.0 00176000-00177000 r--p 00059000 08:01 2755432 /usr/lib/libbonoboui-2.so.0.0.0 00177000-00179000 rw-p 0005a000 08:01 2755432 /usr/lib/libbonoboui-2.so.0.0.0 00179000-001c8000 r-xp 00000000 08:01 2755428 /usr/lib/libbonobo-2.so.0.0.0 001c8000-001c9000 ---p 0004f000 08:01 2755428 /usr/lib/libbonobo-2.so.0.0.0 001c9000-001cc000 r--p 0004f000 08:01 2755428 /usr/lib/libbonobo-2.so.0.0.0 001cc000-001d3000 rw-p 00052000 08:01 2755428 /usr/lib/libbonobo-2.so.0.0.0 001d3000-00200000 r-xp 00000000 08:01 2754521 /usr/lib/libgconf-2.so.4.1.5 00200000-00201000 ---p 0002d000 08:01 2754521 /usr/lib/libgconf-2.so.4.1.5 00201000-00202000 r--p 0002d000 08:01 2754521 /usr/lib/libgconf-2.so.4.1.5 00202000-00204000 rw-p 0002e000 08:01 2754521 /usr/lib/libgconf-2.so.4.1.5 00204000-0021c000 r-xp 00000000 08:01 2755405 /usr/lib/libatk-1.0.so.0.3209.1 0021c000-0021d000 ---p 00018000 08:01 2755405 /usr/lib/libatk-1.0.so.0.3209.1 0021d000-0021e000 r--p 00018000 08:01 2755405 /usr/lib/libatk-1.0.so.0.3209.1 0021e000-0021f000 rw-p 00019000 08:01 2755405 /usr/lib/libatk-1.0.so.0.3209.1 0021f000-00243000 r-xp 00000000 08:01 2756035 /usr/lib/libpangoft2-1.0.so.0.2800.1 00243000-00244000 r--p 00023000 08:01 2756035 /usr/lib/libpangoft2-1.0.so.0.2800.1 00244000-00245000 rw-p 00024000 08:01 2756035 /usr/lib/libpangoft2-1.0.so.0.2800.1 00245000-00248000 r-xp 00000000 08:01 393403 /lib/libuuid.so.1.3.0 00248000-00249000 r--p 00002000 
08:01 393403 /lib/libuuid.so.1.3.0 00249000-0024a000 rw-p 00003000 08:01 393403 /lib/libuuid.so.1.3.0 0024a000-0024c000 r-xp 00000000 08:01 2755415 /usr/lib/libavahi-glib.so.1.0.2 0024c000-0024d000 r--p 00001000 08:01 2755415 /usr/lib/libavahi-glib.so.1.0.2 0024d000-0024e000 rw-p 00002000 08:01 2755415 /usr/lib/libavahi-glib.so.1.0.2 0024e000-00250000 r-xp 00000000 08:01 393661 /lib/libutil-2.12.1.so 00250000-00251000 r--p 00001000 08:01 393661 /lib/libutil-2.12.1.so 00251000-00252000 rw-p 00002000 08:01 393661 /lib/libutil-2.12.1.so 00254000-00255000 r-xp 00000000 00:00 0 [vdso] 00255000-0026c000 r-xp 00000000 08:01 2755647 /usr/lib/libgdk_pixbuf-2.0.so.0.2200.0 0026c000-0026d000 r--p 00017000 08:01 2755647 /usr/lib/libgdk_pixbuf-2.0.so.0.2200.0 0026d000-0026e000 rw-p 00018000 08:01 2755647 /usr/lib/libgdk_pixbuf-2.0.so.0.2200.0 0026e000-002ad000 r-xp 00000000 08:01 2756031 /usr/lib/libpango-1.0.so.0.2800.1 002ad000-002ae000 ---p 0003f000 08:01 2756031 /usr/lib/libpango-1.0.so.0.2800.1 002ae000-002af000 r--p 0003f000 08:01 2756031 /usr/lib/libpango-1.0.so.0.2800.1 002af000-002b0000 rw-p 00040000 08:01 2756031 /usr/lib/libpango-1.0.so.0.2800.1 002b0000-002be000 r-xp 00000000 08:01 2755342 /usr/lib/libXext.so.6.4.0 002be000-002bf000 r--p 0000d000 08:01 2755342 /usr/lib/libXext.so.6.4.0 002bf000-002c0000 rw-p 0000e000 08:01 2755342 /usr/lib/libXext.so.6.4.0 002c0000-002c4000 r-xp 00000000 08:01 2755317 /usr/lib/libORBitCosNaming-2.so.0.1.0 002c4000-002c5000 r--p 00003000 08:01 2755317 /usr/lib/libORBitCosNaming-2.so.0.1.0 002c5000-002c6000 rw-p 00004000 08:01 2755317 /usr/lib/libORBitCosNaming-2.so.0.1.0 002c7000-002d9000 r-xp 00000000 08:01 2755430 /usr/lib/libbonobo-activation.so.4.0.0 002d9000-002da000 r--p 00012000 08:01 2755430 /usr/lib/libbonobo-activation.so.4.0.0 002da000-002db000 rw-p 00013000 08:01 2755430 /usr/lib/libbonobo-activation.so.4.0.0 002db000-002dc000 rw-p 00000000 00:00 0 002dc000-00370000 r-xp 00000000 08:01 2755645 /usr/lib/libgdk-x11-2.0.so.0.2200.0 00370000-00372000 r--p 00094000 08:01 2755645 /usr/lib/libgdk-x11-2.0.so.0.2200.0 00372000-00373000 rw-p 00096000 08:01 2755645 /usr/lib/libgdk-x11-2.0.so.0.2200.0 00373000-0038d000 r-xp 00000000 08:01 2755689 /usr/lib/libgnome-keyring.so.0.1.1 0038d000-0038e000 r--p 00019000 08:01 2755689 /usr/lib/libgnome-keyring.so.0.1.1 0038e000-0038f000 rw-p 0001a000 08:01 2755689 /usr/lib/libgnome-keyring.so.0.1.1 0038f000-00395000 r-xp 00000000 08:01 2755619 /usr/lib/libgailutil.so.18.0.1 00395000-00396000 r--p 00005000 08:01 2755619 /usr/lib/libgailutil.so.18.0.1 00396000-00397000 rw-p 00006000 08:01 2755619 /usr/lib/libgailutil.so.18.0.1 00397000-003ac000 r-xp 00000000 08:01 2755300 /usr/lib/libICE.so.6.3.0 003ac000-003ad000 r--p 00014000 08:01 2755300 /usr/lib/libICE.so.6.3.0 003ad000-003ae000 rw-p 00015000 08:01 2755300 /usr/lib/libICE.so.6.3.0 003ae000-003b0000 rw-p 00000000 00:00 0 003b0000-003f0000 r-xp 00000000 08:01 2755715 /usr/lib/libgobject-2.0.so.0.2600.0 003f0000-003f1000 r--p 00040000 08:01 2755715 /usr/lib/libgobject-2.0.so.0.2600.0 003f1000-003f2000 rw-p 00041000 08:01 2755715 /usr/lib/libgobject-2.0.so.0.2600.0 003f2000-0040f000 r-xp 00000000 08:01 2755524 /usr/lib/libdbus-glib-1.so.2.1.0 0040f000-00410000 r--p 0001c000 08:01 2755524 /usr/lib/libdbus-glib-1.so.2.1.0 00410000-00411000 rw-p 0001d000 08:01 2755524 /usr/lib/libdbus-glib-1.so.2.1.0 00411000-00413000 r-xp 00000000 08:01 2755352 /usr/lib/libXinerama.so.1.0.0 00413000-00414000 r--p 00001000 08:01 2755352 /usr/lib/libXinerama.so.1.0.0 00414000-00415000 
rw-p 00002000 08:01 2755352 /usr/lib/libXinerama.so.1.0.0 00416000-0045f000 r-xp 00000000 08:01 2755313 /usr/lib/libORBit-2.so.0.1.0 0045f000-00467000 r--p 00049000 08:01 2755313 /usr/lib/libORBit-2.so.0.1.0 00467000-00469000 rw-p 00051000 08:01 2755313 /usr/lib/libORBit-2.so.0.1.0 00469000-00551000 r-xp 00000000 08:01 2755661 /usr/lib/libgio-2.0.so.0.2600.0 00551000-00553000 r--p 000e7000 08:01 2755661 /usr/lib/libgio-2.0.so.0.2600.0 00553000-00554000 rw-p 000e9000 08:01 2755661 /usr/lib/libgio-2.0.so.0.2600.0 00554000-00555000 rw-p 00000000 00:00 0 00555000-00578000 r-xp 00000000 08:01 393365 /lib/libpng12.so.0.44.0 00578000-00579000 r--p 00022000 08:01 393365 /lib/libpng12.so.0.44.0 00579000-0057a000 rw-p 00023000 08:01 393365 /lib/libpng12.so.0.44.0 0057d000-0057f000 r-xp 00000000 08:01 393656 /lib/libdl-2.12.1.so 0057f000-00580000 r--p 00001000 08:01 393656 /lib/libdl-2.12.1.soAborted

    Read the article

  • Developing Schema Compare for Oracle (Part 1)

    - by Simon Cooper
    SQL Compare is one of Red Gate's most successful SQL Server tools; it allows developers and DBAs to compare and synchronize the contents of their databases. Although similar tools exist for Oracle, they are quite noticeably lacking in the usability and stability that SQL Compare is known for in the SQL Server world. We could see a real need for a usable schema comparison tool for Oracle, and so the Schema Compare for Oracle project was born. Over the next few weeks, as we come up to the release of v1, I'll be doing a series of posts on the development of Schema Compare for Oracle. For this first post, I thought I would start with the main pitfalls that we stumbled across when developing the product, especially coming from a SQL Server background.
    1. Schemas and Databases
    The most obvious difference is that the concept of a 'database' is quite different between Oracle and SQL Server. On SQL Server, one server instance has multiple databases, each with separate schemas. There is typically little communication between separate databases, and most databases contain no more than about 1000-2000 objects. This means SQL Compare can register an entire database in a reasonable amount of time, and cross-database dependencies probably won't be an issue. It is a quite different scene under Oracle, however. The terms 'database' and 'instance' are used interchangeably (although technically 'database' refers to the datafiles on disk, and 'instance' to the running Oracle process that reads & writes to the database), and a database is a single conceptual entity. This immediately presents problems, as it is infeasible to register an entire database as we do in SQL Compare; in my Oracle install, using the standard recommended options, there are 63975 system objects. If we tried to register all those, not only would it take hours, but the client would probably run out of memory before we finished. As a result, we had to allow people to specify what schemas they wanted to register. This decision had quite a few knock-on effects for the design, which I will cover in a future post.
    2. Connecting to Oracle
    The next obvious difference is in actually connecting to Oracle – in SQL Server, you can specify a server and database, and off you go. On Oracle, things are slightly more complicated.
    SIDs, Service Names, and TNS
    A database (the files on disk) must have a unique identifier among the databases on the system, called the SID. It also has a global database name, which consists of a name (which doesn't have to match the SID) and a domain. Alternatively, you can identify a database using a service name, which normally has a 1-to-1 relationship with instances, but may not if, for example, you are using RAC (Real Application Clusters) for redundancy and failover. You specify the computer and instance you want to connect to using TNS (Transparent Network Substrate). The user-visible part is a config file (tnsnames.ora) on the client machine that specifies how to connect to an instance. For example, the entry for one of my test instances is:
    SC_11GDB1 =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = simonctest)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SID = 11gR1db1)
        )
      )
    This gives the hostname, port, and SID of the instance I want to connect to, and associates it with a name (SC_11GDB1). The tnsnames syntax also allows you to specify failover, multiple descriptions and address lists, and client load balancing. You can then specify this TNS identifier as the data source in a connection string. 
    Although using ODP.NET (the .NET dlls provided by Oracle) was fine for internal prototype builds, once we released the EAP we discovered that this simply wasn't an acceptable solution for installs on other people's machines. Due to .NET assembly strong naming, users had to have installed on their machines the exact same version of the ODP.NET dlls as we had on our build server. We couldn't ship the ODP.NET dlls with our installer, as the Oracle license agreement prohibited this, and we didn't want to force users to install another Oracle client just so they could run our program. To be able to list the TNS entries in the connection dialog, we also had to locate and parse the tnsnames.ora file, which was complicated by users with several Oracle client installs and intricate TNS entries. After much swearing at our computers, we eventually decided to use a third party Oracle connection library from Devart that we could ship with our program; this could use whatever client version was installed, parse the TNS entries for us, and also had the nice feature of being able to connect to an Oracle server without having any client installed at all. Unfortunately, their current license agreement prevents us from shipping an Oracle SDK, but that's a bridge we'll cross when we get to it.
    3. Running synchronization scripts
    The most important difference is that in Oracle, DDL is non-transactional; you cannot roll back DDL statements like you can on SQL Server. Although we considered various solutions to this, including using the flashback archive or recycle bin, or generating an undo script, no reliable method of completely undoing a half-executed sync script has yet been found; so in this case we simply have to trust that the DBA or developer will check and verify the script before running it. However, before we got to that stage, we had to get the scripts to run in the first place... To run a synchronization script from SQL Compare we essentially pass the script over to the SqlCommand.ExecuteNonQuery method. However, when we tried to do the same for an OracleConnection we got a very strange error – 'ORA-00911: invalid character' – even when running the most basic CREATE TABLE command. After much hair-pulling and Googling, we discovered that Oracle has some very strange behaviour with semicolons at the end of statements. To understand what's going on, we need to take a quick foray into SQL and PL/SQL.
    PL/SQL is not T-SQL
    In SQL Server, T-SQL is the language used to interface with the database. It has DDL, DML, control flow, and many other nice features (like Turing-completeness) that you can mix and match in the same script. In Oracle, DDL SQL and PL/SQL are two completely separate languages, with different syntax, different datatypes and different execution engines within the instance. Oracle SQL is much more like 'pure' ANSI SQL, with no state, no control flow, and only the basic DML commands. PL/SQL is the Turing-complete language, but can only do DML and DCL (i.e. BEGIN TRANSACTION commands). Any DDL or SQL commands that aren't recognised by the PL/SQL engine have to be passed back to the SQL engine via an EXECUTE IMMEDIATE command. In PL/SQL, a semicolon is a valid token used to delimit the end of a statement. In SQL, a semicolon is not a valid token (even though the Oracle documentation shows them at the end of the syntax diagrams). 
When you execute the command CREATE TABLE table1 (COL1 NUMBER); in SQL*Plus the semicolon on the end is a command to SQL*Plus to execute the preceding statement on the server; it strips off the semicolon before passing it on. SQL Developer does a similar thing. When executing a PL/SQL block, however, the syntax is like so: BEGIN INSERT INTO table1 VALUES (1); INSERT INTO table1 VALUES (2); END; / In this case, the semicolon is accepted by the PL/SQL engine as a statement delimiter, and instead the / is the command to SQL*Plus to execute the current block. This explains the ORA-00911 error we got when trying to run the CREATE TABLE command – the server is complaining about the semicolon on the end. This also means that there is no SQL syntax to execute more than one DDL command in the same OracleCommand. Therefore, we would have to do a round-trip to the server for every command we want to execute. Obviously, this would cause lots of network traffic and be very slow on slow or congested networks. Our first attempt at a solution was to wrap every SQL statement (without semicolon) inside an EXECUTE IMMEDIATE command in a PL/SQL block and pass that to the server to execute. One downside of this solution is that we get no feedback as to how the script execution is going; we're currently evaluating better solutions to this thorny issue. Next up: Dependencies; how we solved the problem of being unable to register the entire database, and the knock-on effects to the whole product.
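    To make the EXECUTE IMMEDIATE wrapping described above concrete, here is a minimal C# sketch against the generic ADO.NET IDbCommand interface (so it is driver-agnostic). Quote escaping and error handling are simplified; this illustrates the technique rather than the product's actual implementation.
    using System.Data;

    static class OracleScriptRunner
    {
        // Wrap one DDL/SQL statement (with any trailing semicolon removed) in an
        // anonymous PL/SQL block so the server will accept it via EXECUTE IMMEDIATE.
        static string WrapInPlsql(string statement)
        {
            // Inside a PL/SQL string literal a single quote is escaped by doubling it.
            string escaped = statement.TrimEnd().TrimEnd(';').Replace("'", "''");
            return "BEGIN EXECUTE IMMEDIATE '" + escaped + "'; END;";
        }

        public static void RunStatement(IDbConnection connection, string statement)
        {
            using (IDbCommand cmd = connection.CreateCommand())
            {
                cmd.CommandText = WrapInPlsql(statement);
                cmd.ExecuteNonQuery(); // one round-trip per statement, as noted above
            }
        }
    }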

    Read the article

  • Oracle SOA Suite - Highlighted Travel and Transportation Customer References

    - by Bruce Tierney
    Next in this series on industry-specific highlights of Oracle SOA Suite customers is the Travel and Transportation industry.  If you are in the travel or transportation industry, take a look at how these Oracle SOA Suite integration customers have addressed common business requirements to enable better customer service, lower costs, and deliver new business services. For example, All Nippon Airways (ANA) has significantly lowered management costs associated with their hybrid on-premise/cloud ticketing system deployments for domestic and international flights. Their lead-time for changes or new applications has been greatly reduced compared to their old mainframe-based systems, enabling ANA to rapidly develop new services in response to changing market needs. Another example is Schneider National, a leading provider of truckload logistics, which has integrated Oracle E-Business Suite, Siebel CRM, Oracle Transportation Management and customer applications using Oracle SOA Suite. Schneider National has 400 BPEL processes that generate over 60 million composite instances over five SOA clusters.  Take a deeper look into any of these case studies, videos, and Oracle Magazine articles that closely align with your industry:  Customers fly and airline succeeds with an IT transformation. Company:  All Nippon Airways  Customer Oracle or Profit Magazine Article   |   Travel and Transportation   |   Published on January 06, 2014 Any successful business must ensure ongoing customer satisfaction, respond to increased competition, and minimize costs. Running a successful airline in today’s economic climate requires all of those things, as well a... Openmatics Revolutionizes Fleet Management with Standards-Based Vehicle Telematics Platform New Company:  Openmatics s.r.o.  Customer Snapshot   |   Automotive   |   Published on May 20, 2014 Openmatics uses Oracle WebCenter Portal and Oracle Application Development Framework as a foundation for Openmatics, a vehicle telematics service for next-generation fleet management. It integrated its own app shop wi... Future Proof: To keep pace with mobile, social, and location-based services, smart technologists are using middleware to innovate Company:  SFpark  Customer Oracle or Profit Magazine Article   |   Professional Services   |   Published on August 01, 2012 Oracle Fusion Middleware is at the heart of a recently completed and very ambitious project to change how people handle the challenge of finding a parking space in San Francisco, California. “Parking is a universal is... Globalia Corporación Empresarial Accelerates Hotel Bookings, Boosts Sales by 40% with In-Memory Data Grid Solution Company:  Globalia Corporación Empresarial S.A.  Customer Snapshot   |   Travel and Transportation   |   Published on April 29, 2013 Globalia Corporación Empresarial S.A. 
deployed Oracle Coherence to reengineer the group’s core system for hotel bookings, now serving booking requests involving 80 hotels within an average response time of 100 millise... Choice Hotels Uses Oracle SOA Suite and Oracle BPM Suite to Modernize Global IT Architecture Company:  Choice Hotels  Press Release   |   Travel and Transportation   |   Published on August 07, 2012 Choice Hotels International, one of the largest and most successful hotel franchises in the world, has implemented Oracle SOA Suite and Oracle BPM Suite. Sascar Consolidates Fleet Management Infrastructure and Accelerates Customers’ Data Access Company:  Sascar  Customer Case Study   |   Travel and Transportation   |   Published on February 07, 2014 Description – Sascar used Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud and Oracle WebLogic Suite 11g to consolidate fleet management and perform real-time vehicle tracking 4x faster. Directorate General of Civil Aviation Streamlines Key Aviation Applications Access, Improves Productivity and Reduces Maintenance Costs Company:  Directorate General of Civil Aviation (DGAC)  Customer Snapshot   |   Travel and Transportation   |   Published on May 24, 2013 With Oracle Fusion Middleware, the Directorate General of Civil Aviation (DGAC) provided its 12,500 employees a virtual office environment that integrates team workspaces, business applications, and e-mails within a n... Schneider National Implements Next-Generation IT Infrastructure to Continue Leadership in Transportation and Logistics Industry Company:  Schneider National, Inc.  Customer Snapshot   |   Travel and Transportation   |   Published on February 26, 2013 Schneider National, Inc. deployed Oracle applications, Oracle Fusion Middleware, and Oracle development tools as the foundation for its next-generation IT environment, which is driving new levels of efficiency, profit... DGAC Cuts Subscription Costs with Oracle Company:  DGAC  Video   |   Travel and Transportation   |   Published on October 31, 2012 Using Oracle WebCenter Portal, Oracle SOA Suite, and Oracle Exalogic, DGAC reduces the cost of subscriptions to newsletters and provide to its 12,500 employees a collaborative workspace portal. Asiana Airlines Builds PIP System with Oracle Solutions Company:  Asiana Airlines  Video   |   Travel and Transportation   |   Published on July 26, 2012 With Oracle Exalogic and the Oracle SOA Suite, Asiana Airlines builds a passenger service integrated platform providing various services such as integration between its interface and internal systems and a data wareho... Choice Hotels Reduces Time to Market with Oracle WebCenter Company:  Choice Hotels  Video   |   Travel and Transportation   |   Published on April 11, 2014 Using Oracle WebCenter and Oracle SOA standardization, Choice Hotels consolidated multiple platforms, reduced IT dependency and realized tremendous benefits in total cost of ownership and faster time to market support... An Interview with Schneider National's Judy Lemke Company:  Schneider National  Video   |   Travel and Transportation   |   Published on December 17, 2013 Judy Lemke talks with Mark Sunday about the challenges Schneider National faced and how they overcame them through a companywide transformational change. For more details on these case studies, you can use this pre-filtered search on “Travel and Transportation” / “Middleware” / “Service Oriented Architecture” or browse on your own at www.oracle.com/customers

    Read the article

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
    There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I’m going to look at a couple of ways of getting this information and compare the effort required and the results achieved of each. SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped up version of these called sp_whoisactive, written by Adam Machanic which does way more than these procedures. I wholly recommend you try it out if you don’t already know how it works. When it comes to serious interrogation of your SQL Server activity then it is absolutely indispensable. Anyway, back to the point of this blog, we are going to look at getting the information from sp_who2 for a remote server. I wrote this Powershell script a week or so ago and was quietly happy with it for a while. I’m relatively new to Powershell so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me. $Server = 'SERVERNAME' $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server # connection and query stuff         $ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True" $Query = "EXEC sp_who2" $Connection = new-object system.Data.SQLClient.SQLConnection $Table = new-object "System.Data.DataTable" $Connection.connectionstring = $ConnectionStr try{ $Connection.open() $Command = $Connection.CreateCommand() $Command.commandtext = $Query $result = $Command.ExecuteReader() $Table.Load($result) } catch{ # Show error $error[0] | format-list -Force } $Title = "Data access processes (" + $Table.Rows.Count + ")" $Table | Out-GridView -Title $Title $Connection.close() So this is pretty straightforward, create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error free display these results to the PowerShell grid viewer. The query simply gets the results of ‘EXEC sp_who2′ for us. Depending on how many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this, it seems a pretty simple script and was working well for me, I have added a few parameters to control the output and give me more specific details but then I see a script that uses the $SMOServer object itself to provide the process information and saves having to define the connection object and query specifications. $Server = 'SERVERNAME' $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server $Processes = $SMOServer.EnumProcesses() $Title = "SMO processes (" + $Processes.Rows.Count + ")" $Processes | Out-GridView -Title $Title Create the SMO object of our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different though. Some columns are the same and we can see the same basic information so my first thought was to which runs faster – so that I can get my results more quickly and also so that I place less stress on my server(s). PowerShell comes with a great way of testing this – the Measure-Command function. 
    All you have to do is wrap your piece of code in Measure-Command {[your code here]} and it will spit out the time taken to execute the code. So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows this. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-reference the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers and, while the results vary from execution to execution, I have never seen the SMO version slower than the other. The difference has varied, and the time for both has ranged from sub-second as we see above to almost 5 seconds on other systems. This difference, I would suggest, is partly due to the cost overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let’s take a look at what we get and where the two methods differ:
    sp_who2 column | SMO EnumProcesses column | Description
    - | Urn | What looks like an XML or JSON representation of the server name and the process ID
    SPID | Spid | The process ID
    Status | Status | The status of the process
    Login | Login | The login name of the user executing the command
    HostName | Host | The name of the computer where the process originated
    BlkBy | BlockingSpid | The SPID of a process that is blocking this one
    DBName | Database | The database that this process is connected to
    Command | Command | The type of command that is executing
    CPUTime | Cpu | The CPU activity related to this process
    DiskIO | - | The disk IO activity related to this process
    LastBatch | - | The time the last batch was executed from this process
    ProgramName | Program | The application that is facilitating the process connection to the SQL Server
    SPID1 | - | In my experience this is always the same value as SPID
    REQUESTID | - | In my experience this is always 0
    - | Name | In my experience this is always the same value as SPID, and so could be seen as analogous to SPID1 from sp_who2
    - | MemUsage | An indication of the memory used by this process, but I don’t know what it is measured in (bytes, Kb, Mb…)
    - | IsSystem | True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data
    - | ExecutionContextID | In my experience this is always 0, so could be analogous to REQUESTID from sp_who2
    Please note, these are my own very brief descriptions of these columns; detail can be found on MSDN for columns in the sp_who results here: http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I have used that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn’t. sp_who2 returns Disk IO and LastBatch information, which is really useful, but the SMO processes method gives you IsSystem and MemUsage, which have their place in fault diagnosis methods too. So which is better? 
On reflection I think I prefer to use the sp_who2 method primarily but knowing that the SMO Enumprocesses method is there when I need it is really useful and I’m sure I’ll use it regularly. I’m OK with the fact that it is the slower method because Measure-Command has shown me how close it is to the other option and that it really isn’t a large enough margin to matter.
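    Since SMO is just a .NET library, the same EnumProcesses call can also be made from C#. A minimal sketch, assuming the SMO assemblies (Microsoft.SqlServer.Smo and Microsoft.SqlServer.ConnectionInfo) are referenced and using a placeholder server name:
    using System;
    using System.Data;
    using Microsoft.SqlServer.Management.Smo;

    class ListProcesses
    {
        static void Main()
        {
            // Connect to the instance with integrated security (server name is a placeholder).
            var server = new Server("SERVERNAME");

            // EnumProcesses returns a DataTable with one row per server process.
            DataTable processes = server.EnumProcesses();

            foreach (DataRow row in processes.Rows)
            {
                Console.WriteLine("{0}\t{1}\t{2}\t{3}",
                    row["Spid"], row["Login"], row["Database"], row["Command"]);
            }
        }
    }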

    Read the article

  • CodePlex Daily Summary for Monday, May 26, 2014

    CodePlex Daily Summary for Monday, May 26, 2014Popular ReleasesClosedXML - The easy way to OpenXML: ClosedXML 0.71.1: More performance improvements. It's faster and consumes less memory.Role Based Views in Microsoft Dynamics CRM 2011: Role Based Views in CRM 2011 and 2013 - 1.1.0.0: Issues fixed in this build: 1. Works for CRM 2013 2. Lookup view not getting blockedSimCityPak: SimCityPak 0.3.1.0: Main New Features: Fixed Importing of Instance Names (get rid of the Dutch translations) Added advanced editor for Decal Dictionaries Added possibility to import .PNG to generate new decals Added advanced editor for Path display entriesSimple Connect To Db: SimpleConnectToDb_v1: SimpleConnectToDb_v1CRM 2011 / CRM 2013 Form Helper: v2014.05.25: v2014.05.25 Added PhoneFormat & PhoneFormatAreaCode v2014.05.24 Initial ReleaseCreate Word documents without MS Word: Release 3.0: Add support for Sections, Sections Headers and Footers and right to left languages.Corporate News App for SharePoint 2013: CorporateNewsApp v1.6.2.0: Important note This version contains a major bug fix about the generic error "Request failed. Unexpected response data from server null" This error occurs on SharePoint Online only, following an update of the Javascript API after May 2014. If you have installed this application manually in your applications company catalog, you can download the CorporateNewsApp.app file in the zip archive and update it manually. If you have installed this application directly from the SharePoint Store, it ...DevOS: DevOS: Plugin-system added Including:DevOS.exe DevOS API.dll Files must be in the some folderTiny Deduplicator: Tiny Deduplicator 1.0.1.0: Increased version number to 1.0.1.0 Moved all options to a separate 'Options' dialog window. Allows the user to specify a selection strategy which will help when dealing with large numbers of duplicate files. Available options are "None," "Keep First," and "Keep Last"C64 Studio: 3.5: Add: BASIC renumber function Add: !PET pseudo op Add: elseif for !if, } else { pseudo op Add: !TRACE pseudo op Add: Watches are saved/restored with a solution Add: Ctrl-A works now in export assembly controls Add: Preliminary graphic import dialog (not fully functional yet) Add: range and block selection in sprite/charset editor (Shift-Click = range, Alt-Click = block) Fix: Expression evaluator could miscalculate when both division and multiplication were in an expression without parenthesisSEToolbox: SEToolbox 01.031.009 Release 1: Added mirroring of ConveyorTubeCurved. Updated Ship cube rotation to rotate ship back to original location (cubes are reoriented but ship appears no different to outsider), and to rotate Grouped items. Repair now fixes the loss of Grouped controls due to changes in Space Engineers 01.030. Added export asteroids. Rejoin ships will merge grouping and conveyor systems (even though broken ships currently only maintain the Grouping on one part of the ship). Installation of this version wi...Player Framework by Microsoft: Player Framework for Windows and WP v2.0: Support for new Universal and Windows Phone 8.1 projects for both Xaml and JavaScript projects. 
See a detailed list of improvements, breaking changes and a general overview of version 2 ADDITIONAL DOWNLOADSSmooth Streaming Client SDK for Windows 8 Applications Smooth Streaming Client SDK for Windows 8.1 Applications Smooth Streaming Client SDK for Windows Phone 8.1 Applications Microsoft PlayReady Client SDK for Windows 8 Applications Microsoft PlayReady Client SDK for Windows 8.1 Applicat...TerraMap (Terraria World Map Viewer): TerraMap 1.0.6: Added support for the new Terraria v1.2.4 update. New items, walls, and tiles Added the ability to select multiple highlighted block types. Added a dynamic, interactive highlight opacity slider, making it easier to find highlighted tiles with dark colors (and fixed blurriness from 1.0.5 alpha). Added ability to find Enchanted Swords (in the stone) and Water Bolt books Fixed Issue 35206: Hightlight/Find doesn't work for Demon Altars Fixed finding Demon Hearts/Shadow Orbs Fixed inst...DotNet.Highcharts: DotNet.Highcharts 4.0 with Examples: DotNet.Highcharts 4.0 Tested and adapted to the latest version of Highcharts 4.0.1 Added new chart type: Heatmap Added new type PointPlacement which represents enumeration or number for the padding of the X axis. Changed target framework from .NET Framework 4 to .NET Framework 4.5. Closed issues: 974: Add 'overflow' property to PlotOptionsColumnDataLabels class 997: Split container from JS 1006: Series/Categories with numeric names don't render DotNet.Highcharts.Samples Updated s...ConEmu - Windows console with tabs: ConEmu 140523 [Alpha]: ConEmu - developer build x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information you may found: http://superuser.com/questions/tagged/conemu http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ http://code.google.com/p/conemu-maximus5/wiki/TableOfContents If you want to use ConEmu in portable mode, just create empty "ConEmu.xml" file near to "ConEmu.exe" Aspose for Apache POI: Missing Features of Apache POI SL - v 1.1: Release contain the Missing Features in Apache POI SL SDK in Comparison with Aspose.Slides for dealing with Microsoft Power Point. What's New ?Following Examples: Managing Slide Transitions Manage Smart Art Adding Media Player Adding Audio Frame to Slide Feedback and Suggestions Many more examples are yet to come here. Keep visiting us. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.3: Added CompressLogs option to the config file. Each Install / Uninstall creates a timestamped zip file with all MSI and PSAppDeployToolkit logs contained within Added variable expansion to all paths in the configuration file Added documentation for each of the Toolkit internal variables that can be used Changed Install-MSUpdates to continue if any errors are encountered when installing updates Implement /Force parameter on Update-GroupPolicy (ensure that any logoff message is ignored) ...WordMat: WordMat v. 1.07: A quick fix because scientific notation was broken in v. 1.06 read more at http://wordmat.blogspot.com????: 《????》: 《????》(c???)??“????”???????,???????????????C?????????。???????,???????????????????????. ??????????????????????????????????;????????????????????????????。Mini SQL Query: Mini SQL Query (1.0.72.457): Apologies for the previous update! 
FK issue fixed and also a template data cache issue.New ProjectsASP.Net MCV4 Simplified Code Samples: This project intended to simplify the same. In this project each task is implemented with minimum lines of code to reduces complicity.Calvin: net???CodeLatino by Latinosoft: A Modified version for codeShow -- Probably taking more than a month.freeasyBackup: A free and easy to use Backup Tool for everyone. Without any cloud restrictions. freeasyExplorer: A free and easy to use File Explorer for everyone.openPDFspeedreader: #spritz #pdfreader #speedreader PDF Editor to Edit PDF Files in your ASP.NET Applications: This sample application allows the users to edit PDF files online using Aspose.Pdf for .NET.SharePoint World Cup 2013: world cup 2014SSAS Long Running Query Performance Helper: This utility helps investigate long running multidimensional or mining queries in discovery, de-parameterization and re-parameterization back to source format.

    Read the article

  • SDL_BlitSurface segmentation fault (surfaces aren't null)

    - by Trollkemada
    My app is crashing on SDL_BlitSurface() and i can't figure out why. I think it has something to do with my static object. If you read the code you'll why I think so. This happens when the limits of the map are reached, i.e. (iwidth || jheight). This is the code: Map.cpp (this render) Tile const * Map::getTyle(int i, int j) const { if (i >= 0 && j >= 0 && i < width && j < height) { return data[i][j]; } else { return &Tile::ERROR_TYLE; // This makes SDL_BlitSurface (called later) crash //return new Tile(TileType::ERROR); // This works with not problem (but is memory leak, of course) } } void Map::render(int x, int y, int width, int height) const { //DEBUG("(Rendering...) x: "<<x<<", y: "<<y<<", width: "<<width<<", height: "<<height); int firstI = x / TileType::PIXEL_PER_TILE; int firstJ = y / TileType::PIXEL_PER_TILE; int lastI = (x+width) / TileType::PIXEL_PER_TILE; int lastJ = (y+height) / TileType::PIXEL_PER_TILE; // The previous integer division rounds down when dealing with positive values, but it rounds up // negative values. This is a fix for that (We need those values always rounded down) if (firstI < 0) { firstI--; } if (firstJ < 0) { firstJ--; } const int firstX = x; const int firstY = y; SDL_Rect srcRect; SDL_Rect dstRect; for (int i=firstI; i <= lastI; i++) { for (int j=firstJ; j <= lastJ; j++) { if (i*TileType::PIXEL_PER_TILE < x) { srcRect.x = x % TileType::PIXEL_PER_TILE; srcRect.w = TileType::PIXEL_PER_TILE - (x % TileType::PIXEL_PER_TILE); dstRect.x = i*TileType::PIXEL_PER_TILE + (x % TileType::PIXEL_PER_TILE) - firstX; } else if (i*TileType::PIXEL_PER_TILE >= x + width) { srcRect.x = 0; srcRect.w = x % TileType::PIXEL_PER_TILE; dstRect.x = i*TileType::PIXEL_PER_TILE - firstX; } else { srcRect.x = 0; srcRect.w = TileType::PIXEL_PER_TILE; dstRect.x = i*TileType::PIXEL_PER_TILE - firstX; } if (j*TileType::PIXEL_PER_TILE < y) { srcRect.y = 0; srcRect.h = TileType::PIXEL_PER_TILE - (y % TileType::PIXEL_PER_TILE); dstRect.y = j*TileType::PIXEL_PER_TILE + (y % TileType::PIXEL_PER_TILE) - firstY; } else if (j*TileType::PIXEL_PER_TILE >= y + height) { srcRect.y = y % TileType::PIXEL_PER_TILE; srcRect.h = y % TileType::PIXEL_PER_TILE; dstRect.y = j*TileType::PIXEL_PER_TILE - firstY; } else { srcRect.y = 0; srcRect.h = TileType::PIXEL_PER_TILE; dstRect.y = j*TileType::PIXEL_PER_TILE - firstY; } SDL::YtoSDL(dstRect.y, srcRect.h); SDL_BlitSurface(getTyle(i,j)->getType()->getSurface(), &srcRect, SDL::getScreen(), &dstRect); // <-- Crash HERE /*DEBUG("i = "<<i<<", j = "<<j); DEBUG("srcRect.x = "<<srcRect.x<<", srcRect.y = "<<srcRect.y<<", srcRect.w = "<<srcRect.w<<", srcRect.h = "<<srcRect.h); DEBUG("dstRect.x = "<<dstRect.x<<", dstRect.y = "<<dstRect.y);*/ } } } Tile.h #ifndef TILE_H #define TILE_H #include "TileType.h" class Tile { private: TileType const * type; public: static const Tile ERROR_TYLE; Tile(TileType const * t); ~Tile(); TileType const * getType() const; }; #endif Tile.cpp #include "Tile.h" const Tile Tile::ERROR_TYLE(TileType::ERROR); Tile::Tile(TileType const * t) : type(t) {} Tile::~Tile() {} TileType const * Tile::getType() const { return type; } TileType.h #ifndef TILETYPE_H #define TILETYPE_H #include "SDL.h" #include "DEBUG.h" class TileType { protected: TileType(); ~TileType(); public: static const int PIXEL_PER_TILE = 30; static const TileType * ERROR; static const TileType * AIR; static const TileType * SOLID; virtual SDL_Surface * getSurface() const = 0; virtual bool isSolid(int x, int y) const = 0; }; #endif ErrorTyle.h #ifndef ERRORTILE_H #define 
ERRORTILE_H #include "TileType.h" class ErrorTile : public TileType { friend class TileType; private: ErrorTile(); mutable SDL_Surface * surface; static const char * FILE_PATH; public: SDL_Surface * getSurface() const; bool isSolid(int x, int y) const ; }; #endif ErrorTyle.cpp (The surface can't be loaded when building the object, because it is a static object and SDL_Init() needs to be called first) #include "ErrorTile.h" const char * ErrorTile::FILE_PATH = ("C:\\error.bmp"); ErrorTile::ErrorTile() : TileType(), surface(NULL) {} SDL_Surface * ErrorTile::getSurface() const { if (surface == NULL) { if (SDL::isOn()) { surface = SDL::loadAndOptimice(ErrorTile::FILE_PATH); if (surface->w != TileType::PIXEL_PER_TILE || surface->h != TileType::PIXEL_PER_TILE) { WARNING("Bad tile surface size"); } } else { ERROR("Trying to load a surface, but SDL is not on"); } } if (surface == NULL) { // This if doesn't get called, so surface != NULL ERROR("WTF? Can't load surface :\\"); } return surface; } bool ErrorTile::isSolid(int x, int y) const { return true; }

    Read the article

  • Keypress detection won't work after seemingly unrelated code change

    - by LukeZaz
    I'm trying to have the Enter key cause a new 'map' to generate for my game, but for whatever reason after implementing full-screen in it the input check won't work anymore. I tried removing the new code and only pressing one key at a time, but it still won't work. Here's the check code and the method it uses, along with the newMap method: public class Game1 : Microsoft.Xna.Framework.Game { // ... protected override void Update(GameTime gameTime) { // ... // Check if Enter was pressed - if so, generate a new map if (CheckInput(Keys.Enter, 1)) { blocks = newMap(map, blocks, console); } // ... } // Method: Checks if a key is/was pressed public bool CheckInput(Keys key, int checkType) { // Get current keyboard state KeyboardState newState = Keyboard.GetState(); bool retType = false; // Return type if (checkType == 0) { // Check Type: Is key currently down? if (newState.IsKeyDown(key)) { retType = true; } else { retType = false; } } else if (checkType == 1) { // Check Type: Was the key pressed? if (newState.IsKeyDown(key)) { if (!oldState.IsKeyDown(key)) { // Key was just pressed retType = true; } else { // Key was already pressed, return false retType = false; } } } // Save keyboard state oldState = newState; // Return result if (retType == true) { return true; } else { return false; } } // Method: Generate a new map public List<Block> newMap(Map map, List<Block> blockList, Console console) { // Create new map block coordinates List<Vector2> positions = new List<Vector2>(); positions = map.generateMap(console); // Clear list and reallocate memory previously used up by it blockList.Clear(); blockList.TrimExcess(); // Add new blocks to the list using positions created by generateMap() foreach (Vector2 pos in positions) { blockList.Add(new Block() { Position = pos, Texture = dirtTex }); } // Return modified list return blockList; } // ... 
} and the generateMap code: // Generate a list of Vector2 positions for blocks public List<Vector2> generateMap(Console console, int method = 0) { ScreenTileWidth = gDevice.Viewport.Width / 16; ScreenTileHeight = gDevice.Viewport.Height / 16; maxHeight = gDevice.Viewport.Height; List<Vector2> blockLocations = new List<Vector2>(); if (useScreenSize == true) { Width = ScreenTileWidth; Height = ScreenTileHeight; } else { maxHeight = Height; } int startHeight = -500; // For debugging purposes, the startHeight is set to an // hopefully-unreachable value - if it returns this, something is wrong // Methods of land generation /// <summary> /// Third version land generation /// Generates a base land height as the second version does /// but also generates a 'max change' value which determines how much /// the land can raise or lower by which it now does by a random amount /// during generation /// </summary> if (method == 0) { // Get the land height startHeight = rnd.Next(1, maxHeight); int maxChange = rnd.Next(1, 5); // Amount ground will raise/lower by int curHeight = startHeight; for (int w = 0; w < Width; w++) { // Run a chance to lower/raise ground level int changeBy = rnd.Next(1, maxChange); int doChange = rnd.Next(0, 3); if (doChange == 1 && !(curHeight <= (1 + maxChange))) { curHeight = curHeight - changeBy; } else if (doChange == 2 && !(curHeight >= (29 - maxChange))) { curHeight = curHeight + changeBy; } for (int h = curHeight; h < Height; h++) { // Location variables float x = w * 16; float y = h * 16; blockLocations.Add(new Vector2(x, y)); } } console.newMsg("[INFO] Cur, height change maximum: " + maxChange.ToString()); } /// <summary> /// Second version land generator /// Generates a solid mass of land starting at a random height /// derived from either screen height or provided height value /// </summary> else if (method == 1) { // Get the land height startHeight = rnd.Next(0, 30); for (int w = 0; w < Width; w++) { for (int h = startHeight; h < ScreenTileHeight; h++) { // Location variables float x = w * 16; float y = h * 16; // Add a tile at set location blockLocations.Add(new Vector2(x, y)); } } } /// <summary> /// First version land generator /// Generates land completely randomly either across screen or /// in a box set by Width and Height values /// </summary> else { // For each tile in the map... for (int w = 0; w < Width; w++) { for (int h = 0; h < Height; h++) { // Location variables float x = w * 16; float y = h * 16; // ...decide whether or not to place a tile... if (rnd.Next(0, 2) == 1) { // ...and if so, add a tile at that location. blockLocations.Add(new Vector2(x, y)); } } } } console.newMsg("[INFO] Cur, base height: " + startHeight.ToString()); return blockLocations; } I never touched any of the above code for this when it broke - changing keys won't seem to fix it. Despite this, I have camera movement set inside another Game1 method that uses WASD and works perfectly. All I did was add a few lines of code here: private int BackBufferWidth = 1280; // Added these variables private int BackBufferHeight = 800; public Game1() { graphics = new GraphicsDeviceManager(this); graphics.PreferredBackBufferWidth = BackBufferWidth; // and this graphics.PreferredBackBufferHeight = BackBufferHeight; // this Content.RootDirectory = "Content"; this.graphics.IsFullScreen = true; // and this } When I try adding a console line to be printed in the event the key is pressed, it seems that the If is never even triggered despite the correct key being pressed.

    Read the article

  • OData &ndash; The easiest service I can create: now with updates

    - by Jon Dalberg
    The other day I created a simple NastyWord service exposed via OData. It was read-only and used an in-memory backing store for the words. Today I’ll modify it to use a file instead of a list and I’ll accept new nasty words by implementing IUpdatable directly. The first thing to do is enable the service to accept new entries. This is done at configuration time by adding the “WriteAppend” access rule: 1: public class NastyWords : DataService<NastyWordsDataSource> 2: { 3: // This method is called only once to initialize service-wide policies. 4: public static void InitializeService(DataServiceConfiguration config) 5: { 6: config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend); 7: config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; 8: } 9: }   Next I placed a file, NastyWords.txt, in the “App_Data” folder and added a few *choice* words to start. This required one simple change to our NastyWordDataSource.cs file: 1: public NastyWordsDataSource() 2: { 3: UpdateFromSource(); 4: } 5:   6: private void UpdateFromSource() 7: { 8: var words = File.ReadAllLines(pathToFile); 9: NastyWords = (from w in words 10: select new NastyWord { Word = w }).AsQueryable(); 11: }   Nothing too shocking here, just reading each line from the NastyWords.txt file and exposing them. Next, I implemented IUpdatable which comes with a boat-load of methods. We don’t need all of them for now since we are only concerned with allowing new values. Here are the methods we must implement, all the others throw a NotImplementedException: 1: public object CreateResource(string containerName, string fullTypeName) 2: { 3: var nastyWord = new NastyWord(); 4: pendingUpdates.Add(nastyWord); 5: return nastyWord; 6: } 7:   8: public object ResolveResource(object resource) 9: { 10: return resource; 11: } 12:   13: public void SaveChanges() 14: { 15: var intersect = (from w in pendingUpdates 16: select w.Word).Intersect(from n in NastyWords 17: select n.Word); 18:   19: if (intersect.Count() > 0) 20: throw new DataServiceException(500, "duplicate entry"); 21:   22: var lines = from w in pendingUpdates 23: select w.Word; 24:   25: File.AppendAllLines(pathToFile, 26: lines, 27: Encoding.UTF8); 28:   29: pendingUpdates.Clear(); 30:   31: UpdateFromSource(); 32: } 33:   34: public void SetValue(object targetResource, string propertyName, object propertyValue) 35: { 36: targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null); 37: }   I use a simple list to contain the pending updates and only commit them when the “SaveChanges” method is called. Here’s the order these methods are called in our service during an insert: CreateResource – here we just instantiate a new NastyWord and stick a reference to it in our pending updates list. SetValue – this is where the “Word” property of the NastyWord instance is set. SaveChanges – get the list of pending updates, barfing on duplicates, write them to the file and clear our pending list. ResolveResource – the newly created resource will be returned directly here since we aren’t dealing with “handles” to objects but the actual objects themselves. Not too bad, eh? I didn’t find this documented anywhere but a little bit of digging in the OData spec and use of Fiddler made it pretty easy to figure out. 
Here is some client code which would add a new nasty word: 1: static void Main(string[] args) 2: { 3: var svc = new ServiceReference1.NastyWordsDataSource(new Uri("http://localhost.:60921/NastyWords.svc")); 4: svc.AddToNastyWords(new ServiceReference1.NastyWord() { Word = "shat" }); 5:   6: svc.SaveChanges(); 7: }   Here’s all of the code so far for to implement the service: 1: using System; 2: using System.Collections.Generic; 3: using System.Data.Services; 4: using System.Data.Services.Common; 5: using System.Linq; 6: using System.ServiceModel.Web; 7: using System.Web; 8: using System.IO; 9: using System.Text; 10:   11: namespace ONasty 12: { 13: [DataServiceKey("Word")] 14: public class NastyWord 15: { 16: public string Word { get; set; } 17: } 18:   19: public class NastyWordsDataSource : IUpdatable 20: { 21: private List<NastyWord> pendingUpdates = new List<NastyWord>(); 22: private string pathToFile = @"path to your\App_Data\NastyWords.txt"; 23:   24: public NastyWordsDataSource() 25: { 26: UpdateFromSource(); 27: } 28:   29: private void UpdateFromSource() 30: { 31: var words = File.ReadAllLines(pathToFile); 32: NastyWords = (from w in words 33: select new NastyWord { Word = w }).AsQueryable(); 34: } 35:   36: public IQueryable<NastyWord> NastyWords { get; private set; } 37:   38: public void AddReferenceToCollection(object targetResource, string propertyName, object resourceToBeAdded) 39: { 40: throw new NotImplementedException(); 41: } 42:   43: public void ClearChanges() 44: { 45: pendingUpdates.Clear(); 46: } 47:   48: public object CreateResource(string containerName, string fullTypeName) 49: { 50: var nastyWord = new NastyWord(); 51: pendingUpdates.Add(nastyWord); 52: return nastyWord; 53: } 54:   55: public void DeleteResource(object targetResource) 56: { 57: throw new NotImplementedException(); 58: } 59:   60: public object GetResource(IQueryable query, string fullTypeName) 61: { 62: throw new NotImplementedException(); 63: } 64:   65: public object GetValue(object targetResource, string propertyName) 66: { 67: throw new NotImplementedException(); 68: } 69:   70: public void RemoveReferenceFromCollection(object targetResource, string propertyName, object resourceToBeRemoved) 71: { 72: throw new NotImplementedException(); 73: } 74:   75: public object ResetResource(object resource) 76: { 77: throw new NotImplementedException(); 78: } 79:   80: public object ResolveResource(object resource) 81: { 82: return resource; 83: } 84:   85: public void SaveChanges() 86: { 87: var intersect = (from w in pendingUpdates 88: select w.Word).Intersect(from n in NastyWords 89: select n.Word); 90:   91: if (intersect.Count() > 0) 92: throw new DataServiceException(500, "duplicate entry"); 93:   94: var lines = from w in pendingUpdates 95: select w.Word; 96:   97: File.AppendAllLines(pathToFile, 98: lines, 99: Encoding.UTF8); 100:   101: pendingUpdates.Clear(); 102:   103: UpdateFromSource(); 104: } 105:   106: public void SetReference(object targetResource, string propertyName, object propertyValue) 107: { 108: throw new NotImplementedException(); 109: } 110:   111: public void SetValue(object targetResource, string propertyName, object propertyValue) 112: { 113: targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null); 114: } 115: } 116:   117: public class NastyWords : DataService<NastyWordsDataSource> 118: { 119: // This method is called only once to initialize service-wide policies. 
120: public static void InitializeService(DataServiceConfiguration config) 121: { 122: config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend); 123: config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; 124: } 125: } 126: } Next time we’ll allow removing nasty words. Enjoy!
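    As a small aside, the same generated context can also be used to read the words back. A hedged sketch, assuming the service reference shown in the client code above and that the NastyWords entity set is exposed as a queryable property on the generated context (enumerating it issues a GET against the service):
    using System;

    class ReadClient
    {
        static void Main()
        {
            var svc = new ServiceReference1.NastyWordsDataSource(
                new Uri("http://localhost.:60921/NastyWords.svc"));

            // Enumerate the NastyWords entity set exposed by the service.
            foreach (var nastyWord in svc.NastyWords)
            {
                Console.WriteLine(nastyWord.Word);
            }
        }
    }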

    Read the article

  • Passing a parameter so that it cannot be changed – C#

    - by nmarun
    I read this requirement of not allowing a user to change the value of a property passed as a parameter to a method. In C++, as far as I could recall (it’s been over 10 yrs, so I had to refresh memory), you can pass ‘const’ to a function parameter and this ensures that the parameter cannot be changed inside the scope of the function. There’s no such direct way of doing this in C#, but that does not mean it cannot be done!! Ok, so this ‘not-so-direct’ technique depends on the type of the parameter – a simple property or a collection. Parameter as a simple property: This is quite easy (and you might have guessed it already). Bulent Ozkir clearly explains how this can be done here. Parameter as a collection property: Obviously the above does not work if the parameter is a collection of some type. Let’s dig-in. Suppose I need to create a collection of type KeyTitle as defined below. 1: public class KeyTitle 2: { 3: public int Key { get; set; } 4: public string Title { get; set; } 5: } My class is declared as below: 1: public class Class1 2: { 3: public Class1() 4: { 5: MyKeyTitleList = new List<KeyTitle>(); 6: } 7: 8: public List<KeyTitle> MyKeyTitleList { get; set; } 9: public ReadOnlyCollection<KeyTitle> ReadonlyKeyTitleCollection 10: { 11: // .AsReadOnly creates a ReadOnlyCollection<> type 12: get { return MyKeyTitleList.AsReadOnly(); } 13: } 14: } See the .AsReadOnly() method used in the second property? As MSDN says it: “Returns a read-only IList<T> wrapper for the current collection.” Knowing this, I can implement my code as: 1: public static void Main() 2: { 3: Class1 class1 = new Class1(); 4: class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" }); 5: class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" }); 6: class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" }); 7: class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" }); 8:  9: TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly()); 10:  11: Console.ReadLine(); 12: } 13:  14: private static void TryToModifyCollection(ReadOnlyCollection<KeyTitle> readOnlyCollection) 15: { 16: // can only read 17: for (int i = 0; i < readOnlyCollection.Count; i++) 18: { 19: Console.WriteLine("{0} - {1}", readOnlyCollection[i].Key, readOnlyCollection[i].Title); 20: } 21: // Add() - not allowed 22: // even the indexer does not have a setter 23: } The output is as expected: The below image shows two things. In the first line, I’ve tried to access an element in my read-only collection through an indexer. It shows that the ReadOnlyCollection<> does not have a setter on the indexer. The second line tells that there’s no ‘Add()’ method for this type of collection. The capture below shows there’s no ‘Remove()’ method either, there-by eliminating all ways of modifying a collection. Mission accomplished… right? Now, even if you have a collection of different type, all you need to do is to somehow cast (used loosely) it to a List<> and then do a .AsReadOnly() to get a ReadOnlyCollection of your custom collection type. As an example, if you have an IDictionary<int, string>, you can create a List<T> of this type with a wrapper class (KeyTitle in our case). 1: public IDictionary<int, string> MyDictionary { get; set; } 2:  3: public ReadOnlyCollection<KeyTitle> ReadonlyDictionary 4: { 5: get 6: { 7: return (from item in MyDictionary 8: select new KeyTitle 9: { 10: Key = item.Key, 11: Title = item.Value, 12: }).ToList().AsReadOnly(); 13: } 14: } Cool huh? 
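A small caveat on that last property before moving on: because the getter runs the projection and ToList().AsReadOnly() on every access, each read allocates a fresh snapshot, and later changes to MyDictionary will not show through a collection you have already handed out (unlike List<T>.AsReadOnly(), which wraps the live list). If the property is read frequently, one option is to cache the wrapper; the _cachedReadonly field and InvalidateCache() method below are illustrative additions, not part of the original class.

private ReadOnlyCollection<KeyTitle> _cachedReadonly;  // assumed backing field

public ReadOnlyCollection<KeyTitle> ReadonlyDictionary
{
    get
    {
        // Rebuild the snapshot only when it has been invalidated.
        if (_cachedReadonly == null)
        {
            _cachedReadonly = (from item in MyDictionary
                               select new KeyTitle { Key = item.Key, Title = item.Value })
                              .ToList()
                              .AsReadOnly();
        }
        return _cachedReadonly;
    }
}

// Call this after any change to MyDictionary so the next read re-projects.
public void InvalidateCache()
{
    _cachedReadonly = null;
}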
Another thing you need to know about the .AsReadOnly() method: the only way to modify your ReadOnlyCollection<> is to modify the original collection. So doing: 1: public static void Main() 2: { 3: Class1 class1 = new Class1(); 4: class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" }); 5: class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" }); 6: class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" }); 7: class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" }); 8: TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly()); 9:  10: Console.WriteLine(); 11:  12: class1.MyKeyTitleList.Add(new KeyTitle { Key = 5, Title = "mno" }); 13: class1.MyKeyTitleList[2] = new KeyTitle{Key = 3, Title = "GHI"}; 14: TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly()); 15:  16: Console.ReadLine(); 17: } Gives me the output of: See that the third element's Title is changed to upper-case and the fifth element also gets displayed, even though we're still looping through the same ReadOnlyCollection<KeyTitle>. Verdict: Now you know of a way to implement ‘Method(const param1)’ in your code!
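A final caveat before leaving this topic: ReadOnlyCollection<> only blocks structural changes – Add(), Remove() and the indexer setter. Because KeyTitle is a mutable reference type, a callee can still change the objects inside the collection, and those changes show up in the caller's original list. Here is a short sketch of what TryToModifyCollection() could still get away with:

private static void TryToModifyCollection(ReadOnlyCollection<KeyTitle> readOnlyCollection)
{
    // No Add/Remove and no indexer setter on the collection itself, but the
    // elements are ordinary references to the caller's mutable KeyTitle objects...
    readOnlyCollection[0].Title = "mutated anyway";

    // ...so the caller's MyKeyTitleList now contains the changed title as well.
}

If that matters for your 'const parameter' scenario, make KeyTitle immutable (private setters, values supplied through the constructor) or pass deep copies instead.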

    Read the article

  • Clouds Everywhere But not a Drop of Rain – Part 3

    - by sxkumar
I was sharing with you how a broad-based transformation such as cloud will increase the agility and efficiency of an organization if process re-engineering is part of the plan.  I have also stressed key enterprise requirements such as “broad and deep solutions”, “running your mission critical applications” and “an automated and integrated set of capabilities”. Let me walk you through some key cloud attributes such as “elasticity” and “self-service” and what they mean for an enterprise-class cloud. I will also talk about how we at Oracle have taken a very enterprise-centric view of developing cloud solutions and how our products have been specifically engineered to address enterprise cloud needs. Cloud Elasticity and Enterprise Applications Requirements Easy and quick scalability for a short period of time is the signature of cloud-based solutions. It is this elasticity that allows you to dynamically redistribute your resources according to business priorities, helps increase your overall resource utilization, and reduces operational costs by allowing you to get the most out of your existing investment. Most public clouds offer an instant provisioning mechanism for compute power (CPU, RAM, disk): customers pay for the instance-hours (and bandwidth) they use, adding computing resources at peak times and removing them when they are no longer needed. This type of “just-in-time” serving of compute resources is well known for mid-tier “stateless” servers such as web application servers and web servers, which just need another machine to start up and run, but what does it really mean for an enterprise application and its underlying data? Most enterprise applications are not quite as “stateless”, and justifiably so. As such, how do you take advantage of cloud elasticity and make it relevant for your enterprise apps? This is where Cloud meets Grid Computing. At Oracle, we have invested an enormous amount of time, energy and resources in creating enterprise grid solutions. All our technology products offer built-in elasticity via clustering and dynamic scaling. With products like Real Application Clusters (RAC), Automatic Storage Management, WebLogic Clustering, and Coherence In-Memory Grid, we allow all your enterprise applications to benefit from cloud elasticity – both vertically and horizontally – without requiring any application changes. A number of technology vendors take the rather simplistic route of starting up additional VMs or removing unneeded ones as the "Cloud Scale-Out" solution. While this may work for stateless mid-tier servers, where load balancers can handle the addition and removal of instances transparently, following a similar approach for the database tier – often called "database sharding" – requires significant application modification and typically does not work with off-the-shelf packaged applications. Technologies like Oracle Database Real Application Clusters, Automatic Storage Management, etc. on the other hand bring the benefits of incremental scalability and on-demand elasticity to ANY application by providing a simplified abstraction layer where the application does not need to deal with data spread over multiple database instances. Rather, it just talks to a single database and the database software takes care of aggregating resources across multiple hardware components. It is technologies like these that truly make a cloud solution relevant for enterprises.  
For customers who are looking for a next-generation hardware consolidation platform, our engineered systems (e.g. Exadata, Exalogic) not only provide an incredible amount of performance and capacity, they also reduce data center complexity and simplify operations. Assemble, Deploy and Manage Enterprise Applications for Cloud Products like Oracle Virtual Assembly Builder (OVAB) resolve the complex problem of bringing cloud speed to complex multi-tier applications. With assemblies, you can not only provision all components of a multi-tier application and wire them together at the push of a button; other aspects of the application lifecycle, such as real-time application testing, scale-up/scale-down, and performance and availability monitoring, are also automated using Oracle Enterprise Manager.  An essential criterion for an enterprise cloud to succeed is the ability to ensure business service levels, especially when business users have full visibility of usage costs through a “showback” or a “chargeback”. With Oracle Enterprise Manager 12c, we have created the most comprehensive cloud management solution in the industry, capable of managing business service levels “applications-to-disk” in an enterprise private cloud – all from a single console. It is the only cloud management platform in the industry that allows you to deliver infrastructure, platform and application cloud services out of the box. Moreover, it offers integrated and complete lifecycle management of the cloud – including planning and setup, service delivery, operations management, metering and chargeback, etc.  Sounds unbelievable? Well, just watch this space for more details on how Oracle Enterprise Manager 12c is the nerve center of Oracle Cloud! Our cloud solution portfolio is also the broadest and deepest in the industry – covering public, private and hybrid as well as infrastructure, platform and application clouds. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry.  And in large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies.  Summary  But the intent of this blog post isn't to dwell on how great our solutions are (these are just some examples to illustrate how we at Oracle have approached this problem space). Rather, it is to help you ask the right questions before you embark on your cloud journey.  So to summarize, here are the key takeaways.  It is critical that you are clear on why you are building the cloud. Successful organizations keep business benefits as the first and foremost cloud objective. On the other hand, those who approach this purely as a technology project are more likely to fail. Think about where you want to be in 3-5 years before you get started. Your long-term objectives should determine what your first step ought to be. As obvious as it may seem, more people than not make the first move without knowing where they are headed.  Don’t make the mistake of equating cloud to virtualization and Infrastructure-as-a-Service (IaaS). Spinning up a VM on demand will give some short-term relief to your IT staff but is unlikely to solve your larger business problems. As such, even if IaaS is your first step towards a more comprehensive cloud, plan the roadmap around those higher-level services before you begin. And ask your vendors how they are going to be your partners in this journey. 
Capabilities like self-service access and chargeback/showback are absolutely critical if you really expect your cloud to be transformational. Your business won't see the full benefits of the cloud until it empowers them with the same kind of control and transparency that they are used to when using a public cloud service.  Evaluate the benefits of integration, as opposed to blindly following a best-of-breed strategy. Integration is a huge challenge, and even more so in a cloud environment. There are enormous costs associated with stitching a solution out of disparate components, and even more in maintaining it. Hope you found these ideas helpful. Looking forward to hearing your thoughts and experiences.

    Read the article

  • Right-Time Retail Part 1

    - by David Dorf
This is the first in a three-part series. Right-Time Revolution Technology enables some amazing feats in retail. I can order flowers for my wife while flying 30,000 feet in the air. I can order my groceries in the subway and have them delivered later that day. I can even see how clothes look on me without setting foot in a store. Who knew that a TV, diamond necklace, or even a car would someday be as easy to purchase as a candy bar? Can technology make a mattress an impulse item? Wake up and your back is hurting, so you roll over and grab your iPad, then a new mattress is delivered the next day. Behind the scenes, many processes are being choreographed to make the sale happen. This includes moving data between systems with the least amount of friction, which in some cases is near real-time. But real-time isn’t appropriate for all the integrations. Think about what a completely real-time retailer would look like. A consumer grabs toothpaste off the shelf, and all systems are immediately notified so that the backroom clerk comes running out and pushes the consumer aside so he can replace the toothpaste on the shelf. Such a system is not only cost prohibitive, but it’s also very inefficient and ineffectual. Retailers must balance the realities of people, processes, and systems to find the right speed of execution. That’s what “right-time retail” means. Retailers used to sell during the day and count the money and restock at night, but global expansion and the Web have complicated that simplistic viewpoint. Our 24hr society demands not only access but also speed, which constantly pushes the boundaries of our IT systems. In the last twenty years, there have been three major technology advancements that have moved us closer to real-time systems. Networking is the first technology that drove the real-time trend. As systems became connected, it became easier to move data between them. In retail we no longer had to mail the daily business report back to corporate each day, as the dial-up modem could transfer the data. That was soon replaced with trickle-polling, where sales transactions were occasionally sent from stores to corporate throughout the day, often through VSAT. Then we got terrestrial networks like DSL and Ethernet that allowed a constant stream of data between stores and corporate. When corporate could see the sales transactions coming from stores, it could better plan for replenishment and promotions. That drove the need for speed into the supply chain and merchandising, but for many years those systems were stymied by the huge volumes of data. 
Nordstrom has 150 million SKU/Store combinations when planning (RPAS); The Gap generates 110 million price changes during end-of-season (RPM); Argos does 1.78 billion calculations executed each day for replenishment planning (AIP). These areas are now being alleviated by the second technology, storage. The typical laptop disk drive runs at 5,400rpm with PCs stepping up to 7,200rpm and servers hitting 15,000rpm. But the platters can only spin so fast, so to squeeze more performance we’ve had to rely on things like disk striping. Then solid state drives (SSDs) were introduced and prices continue to drop. (Augmenting your harddrive with a SSD is the single best PC upgrade these days.) RAM continues to be expensive, but compressing data in memory has allowed more efficient use. So a few years back, Oracle decided to build a box that incorporated all these advancements to move us closer to real-time. This family of products, often categorized as engineered systems, combines the hardware and software so that they work together to provide better performance. How much better? If Exadata powered a 747, you’d go from New York to Paris in 42 minutes, and it would carry 5,000 passengers. If Exadata powered baseball, games would last only 18 minutes and Boston’s Fenway would hold 370,000 fans. The Exa-family enables processing more data in less time. So with faster networks and storage, that brings us to the third and final ingredient. If we continue to process data in traditional ways, we won’t be able to take advantage of the faster networks and storage. Enter what Harvard calls “The Sexiest Job of the 21st Century” – the data scientist. New technologies like the Hadoop-powered Oracle Big Data Appliance, Oracle Advanced Analytics, and Oracle Endeca Information Discovery change the way in which we organize data. These technologies allow us to extract actionable information from raw data at incredible speeds, often ad-hoc. So the foundation to support the real-time enterprise exists, but how does a retailer begin to take advantage? The most visible way is through real-time marketing, but I’ll save that for part 3 and instead begin with improved integrations for the assets you already have in part 2.

    Read the article

  • LINQ – SequenceEqual() method

    - by nmarun
    I have been looking at LINQ extension methods and have blogged about what I learned from them in my blog space. Next in line is the SequenceEqual() method. Here’s the description about this method: “Determines whether two sequences are equal by comparing the elements by using the default equality comparer for their type.” Let’s play with some code: 1: int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 }; 2: // int[] numbersCopy = numbers; 3: int[] numbersCopy = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 }; 4:  5: Console.WriteLine(numbers.SequenceEqual(numbersCopy)); This gives an output of ‘True’ – basically compares each of the elements in the two arrays and returns true in this case. The result is same even if you uncomment line 2 and comment line 3 (I didn’t need to say that now did I?). So then what happens for custom types? For this, I created a Product class with the following definition: 1: class Product 2: { 3: public int ProductId { get; set; } 4: public string Name { get; set; } 5: public string Category { get; set; } 6: public DateTime MfgDate { get; set; } 7: public Status Status { get; set; } 8: } 9:  10: public enum Status 11: { 12: Active = 1, 13: InActive = 2, 14: OffShelf = 3, 15: } In my calling code, I’m just adding a few product items: 1: private static List<Product> GetProducts() 2: { 3: return new List<Product> 4: { 5: new Product 6: { 7: ProductId = 1, 8: Name = "Laptop", 9: Category = "Computer", 10: MfgDate = new DateTime(2003, 4, 3), 11: Status = Status.Active, 12: }, 13: new Product 14: { 15: ProductId = 2, 16: Name = "Compact Disc", 17: Category = "Water Sport", 18: MfgDate = new DateTime(2009, 12, 3), 19: Status = Status.InActive, 20: }, 21: new Product 22: { 23: ProductId = 3, 24: Name = "Floppy", 25: Category = "Computer", 26: MfgDate = new DateTime(1993, 3, 7), 27: Status = Status.OffShelf, 28: }, 29: }; 30: } Now for the actual check: 1: List<Product> products1 = GetProducts(); 2: List<Product> products2 = GetProducts(); 3:  4: Console.WriteLine(products1.SequenceEqual(products2)); This one returns ‘False’ and the reason is simple – this one checks for reference equality and the products in the both the lists get different ‘memory addresses’ (sounds like I’m talking in ‘C’). In order to modify this behavior and return a ‘True’ result, we need to modify the Product class as follows: 1: class Product : IEquatable<Product> 2: { 3: public int ProductId { get; set; } 4: public string Name { get; set; } 5: public string Category { get; set; } 6: public DateTime MfgDate { get; set; } 7: public Status Status { get; set; } 8:  9: public override bool Equals(object obj) 10: { 11: return Equals(obj as Product); 12: } 13:  14: public bool Equals(Product other) 15: { 16: //Check whether the compared object is null. 17: if (ReferenceEquals(other, null)) return false; 18:  19: //Check whether the compared object references the same data. 20: if (ReferenceEquals(this, other)) return true; 21:  22: //Check whether the products' properties are equal. 23: return ProductId.Equals(other.ProductId) 24: && Name.Equals(other.Name) 25: && Category.Equals(other.Category) 26: && MfgDate.Equals(other.MfgDate) 27: && Status.Equals(other.Status); 28: } 29:  30: // If Equals() returns true for a pair of objects 31: // then GetHashCode() must return the same value for these objects. 
32: // read why in the following articles: 33: // http://geekswithblogs.net/akraus1/archive/2010/02/28/138234.aspx 34: // http://stackoverflow.com/questions/371328/why-is-it-important-to-override-gethashcode-when-equals-method-is-overriden-in-c 35: public override int GetHashCode() 36: { 37: //Get hash code for the ProductId field. 38: int hashProductId = ProductId.GetHashCode(); 39:  40: //Get hash code for the Name field if it is not null. 41: int hashName = Name == null ? 0 : Name.GetHashCode(); 42:  43: //Get hash code for the ProductId field. 44: int hashCategory = Category.GetHashCode(); 45:  46: //Get hash code for the ProductId field. 47: int hashMfgDate = MfgDate.GetHashCode(); 48:  49: //Get hash code for the ProductId field. 50: int hashStatus = Status.GetHashCode(); 51: //Calculate the hash code for the product. 52: return hashProductId ^ hashName ^ hashCategory & hashMfgDate & hashStatus; 53: } 54:  55: public static bool operator ==(Product a, Product b) 56: { 57: // Enable a == b for null references to return the right value 58: if (ReferenceEquals(a, b)) 59: { 60: return true; 61: } 62: // If one is null and the other not. Remember a==null will lead to Stackoverflow! 63: if (ReferenceEquals(a, null)) 64: { 65: return false; 66: } 67: return a.Equals((object)b); 68: } 69:  70: public static bool operator !=(Product a, Product b) 71: { 72: return !(a == b); 73: } 74: } Now THAT kinda looks overwhelming. But lets take one simple step at a time. Ok first thing you’ve noticed is that the class implements IEquatable<Product> interface – the key step towards achieving our goal. This interface provides us with an ‘Equals’ method to perform the test for equality with another Product object, in this case. This method is called in the following situations: when you do a ProductInstance.Equals(AnotherProductInstance) and when you perform actions like Contains<T>, IndexOf() or Remove() on your collection Coming to the Equals method defined line 14 onwards. The two ‘if’ blocks check for null and referential equality using the ReferenceEquals() method defined in the Object class. Line 23 is where I’m doing the actual check on the properties of the Product instances. This is what returns the ‘True’ for us when we run the application. I have also overridden the Object.Equals() method which calls the Equals() method of the interface. One thing to remember is that anytime you override the Equals() method, its’ a good practice to override the GetHashCode() method and overload the ‘==’ and the ‘!=’ operators. For detailed information on this, please read this and this. Since we’ve overloaded the operators as well, we get ‘True’ when we do actions like: 1: Console.WriteLine(products1.Contains(products2[0])); 2: Console.WriteLine(products1[0] == products2[0]); This completes the full circle on the SequenceEqual() method. See the code used in the article here.
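If you don't own the Product class, or don't want to bake equality into it, note that SequenceEqual() also has an overload that accepts an IEqualityComparer<T>. Here's a sketch of that alternative; the ProductComparer class is introduced purely for illustration and isn't part of the article's code:

class ProductComparer : IEqualityComparer<Product>
{
    public bool Equals(Product x, Product y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (ReferenceEquals(x, null) || ReferenceEquals(y, null)) return false;

        // Same property-by-property comparison as the Equals() override above.
        return x.ProductId == y.ProductId
            && x.Name == y.Name
            && x.Category == y.Category
            && x.MfgDate == y.MfgDate
            && x.Status == y.Status;
    }

    public int GetHashCode(Product obj)
    {
        if (ReferenceEquals(obj, null)) return 0;

        // Equal products must produce equal hash codes.
        return obj.ProductId.GetHashCode()
             ^ (obj.Name == null ? 0 : obj.Name.GetHashCode())
             ^ (obj.Category == null ? 0 : obj.Category.GetHashCode())
             ^ obj.MfgDate.GetHashCode()
             ^ obj.Status.GetHashCode();
    }
}

With that in place, products1.SequenceEqual(products2, new ProductComparer()) returns True without Product implementing IEquatable<Product> – handy when the type comes from a library you cannot modify.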

    Read the article

  • ???Flashback Log???????Redo Log?

    - by Liu Maclean(???)
    ????????????????????redo log?   RVWR( Recovery Writer)?3s??flashback generate buffer??block before image?????????? ?????block change???RVWR??block before image ?flashback log? ?????????,Oracle???????????before image????????,????????flashback database logs?????   ???????????,????? ??????????????????,???????????before image?????shared pool??flashback log buffer?,RVWR??????flashback log buffer??????????? ?DBWR???????????????,DBWR?????buffer header??FBA(Flashback Byte Address)?flashback log buffer?????????? ???? ?????? ??? ????????????? , RVWR???????????(flashback markers)?flashback database logs?? ????(flashback markers)?????????????Oracle??flashback ??????????  ??????????, Oracle ??????(flashback markers)????????????flashback database log???????????block image; ??Oracle ???????(forward recovery)?????????????????SCN?????? flashback markers for example: **** Record at fba: (lno 1 thr 1 seq 1 bno 4 bof 8184) **** RECORD HEADER: Type: 3 (Skip) Size: 8132 RECORD DATA (Skip): **** Record at fba: (lno 1 thr 1 seq 1 bno 4 bof 52) **** RECORD HEADER: Type: 7 (Begin Crash Recovery Record) Size: 36 RECORD DATA (Begin Crash Recovery Record): Previous logical record fba: (lno 1 thr 1 seq 1 bno 3 bof 316) Record scn: 0x0000.00000000 [0.0] **** Record at fba: (lno 1 thr 1 seq 1 bno 3 bof 8184) **** RECORD HEADER: Type: 3 (Skip) Size: 7868 RECORD DATA (Skip): **** Record at fba: (lno 1 thr 1 seq 1 bno 3 bof 316) **** RECORD HEADER: Type: 2 (Marker) Size: 300 RECORD DATA (Marker): Previous logical record fba: (lno 0 thr 0 seq 0 bno 0 bof 0) Record scn: 0x0000.00000000 [0.0] Marker scn: 0x0000.0060e024 [0.6348836] 06/13/2012 15:56:35 Flag 0x0 Flashback threads: 1, Enabled redo threads 1 Recovery Start Checkpoint: scn: 0x0000.0060e024 [0.6348836] 06/13/2012 15:56:12 thread:1 rba:(0x80.180.10) Flashback thread Markers: Thread:1 status:0 fba: (lno 1 thr 1 seq 1 bno 2 bof 8184) Redo Thread Checkpoint Info: Thread:1 rba:(0x80.180.10) **** Record at fba: (lno 1 thr 1 seq 1 bno 2 bof 8184) **** RECORD HEADER: Type: 3 (Skip) Size: 8168 RECORD DATA (Skip): End-Of-Thread reached ????????????????block change ????before image????????flashback log?? ?????block change???flashback log record ????????? redo log???!????flashback log ???????before image ? redo log??? change vector ?  Oracle?????????????????????????????????????,??????I/O??????????????: ??hot block??,Oracle???????????block image?????; Oracle ?????????(flashback barriers)???????????????,flashback barriers???????(???15??),??????????(flashback barriers)????(flashback markers)????????? ????, ??????change?????, ???????????????????????????, ?15????????????????????flashback log????????before image?????????????,?????????????????????,?????????????? ????????,??????????????(flashback barriers), flashback barriers???????,?????15????? ?????flashback barriers????????(flashback markers)???????????????,???????????????????(????barriers?????)??????block image ,????????????????????????????????? ??????????flashback log????redo log????! ????,????????????????, ?????????? 
SQL> select * from v$version; BANNER -------------------------------------------------------------------------------- Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production PL/SQL Release 11.2.0.3.0 - Production CORE 11.2.0.3.0 Production TNS for Linux: Version 11.2.0.3.0 - Production NLSRTL Version 11.2.0.3.0 - Production SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com SQL> create table flash_maclean (t1 varchar2(200)) tablespace users; Table created. SQL> insert into flash_maclean values('MACLEAN LOVE HANNA'); 1 row created. SQL> commit; Commit complete. SQL> startup force; ORACLE instance started. Total System Global Area 939495424 bytes Fixed Size 2233960 bytes Variable Size 713034136 bytes Database Buffers 218103808 bytes Redo Buffers 6123520 bytes Database mounted. Database opened. SQL> update flash_maclean set t1='HANNA LOVE MACLEAN'; 1 row updated. commit; Commit complete. SQL> alter system checkpoint; System altered. SQL> select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from flash_maclean; DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) ------------------------------------ ------------------------------------ 140431 4 datafile 4 block 140431 ??RDBA rdba: 0x0102248f (4/140431) SQL> ! ps -ef|grep rvwr|grep -v grep oracle 26695 1 0 15:56 ? 00:00:00 ora_rvwr_G11R23 SQL> oradebug setospid 26695 Oracle pid: 20, Unix process pid: 26695, image: [email protected] (RVWR) SQL> ORADEBUG DUMP FBTAIL 1; Statement processed. To dump the last 2000 flashback records , ??ORADEBUG DUMP FBTAIL 1????????2000?????? SQL> oradebug tracefile_name /s01/orabase/diag/rdbms/g11r23/G11R23/trace/G11R23_rvwr_26695.trc ? TRACE?????????block? before image **** Record at fba: (lno 1 thr 1 seq 1 bno 55 bof 2564) **** RECORD HEADER: Type: 1 (Block Image) Size: 28 RECORD DATA (Block Image): file#: 4 rdba: 0x0102248f Next scn: 0x0000.00000000 [0.0] Flag: 0x0 Block Size: 8192 BLOCK IMAGE: buffer rdba: 0x0102248f scn: 0x0000.00609044 seq: 0x01 flg: 0x06 tail: 0x90440601 frmt: 0x02 chkval: 0xc626 type: 0x06=trans data Hex dump of block: st=0, typ_found=1 Dump of memory from 0x00002B1D94183C00 to 0x00002B1D94185C00 2B1D94183C00 0000A206 0102248F 00609044 06010000 [.....$..D.`.....] 2B1D94183C10 0000C626 00000001 00014AD4 0060903A [&........J..:.`.] 2B1D94183C20 00000000 00320002 01022488 00090006 [......2..$......] 2B1D94183C30 00000CC8 00C00340 000D0542 00008000 [[email protected].......] 2B1D94183C40 006040BC 000F000A 00000920 00C002E4 [.@`..... .......] 2B1D94183C50 0017048F 00002001 00609044 00000000 [..... ..D.`.....] 2B1D94183C60 00000000 00010100 0014FFFF 1F6E1F77 [............w.n.] 2B1D94183C70 00001F6E 1F770001 00000000 00000000 [n.....w.........] 2B1D94183C80 00000000 00000000 00000000 00000000 [................] Repeat 500 times 2B1D94185BD0 00000000 00000000 2C000000 4D120102 [...........,...M] 2B1D94185BE0 454C4341 4C204E41 2045564F 4E4E4148 [ACLEAN LOVE HANN] 2B1D94185BF0 01002C41 43414D07 4E41454C 90440601 [A,...MACLEAN..D.] Block header dump: 0x0102248f Object id on Block? 
Y seg/obj: 0x14ad4 csc: 0x00.60903a itc: 2 flg: E typ: 1 - DATA brn: 0 bdba: 0x1022488 ver: 0x01 opc: 0 inc: 0 exflg: 0 Itl Xid Uba Flag Lck Scn/Fsc 0x01 0x0006.009.00000cc8 0x00c00340.0542.0d C--- 0 scn 0x0000.006040bc 0x02 0x000a.00f.00000920 0x00c002e4.048f.17 --U- 1 fsc 0x0000.00609044 bdba: 0x0102248f data_block_dump,data header at 0x2b1d94183c64 =============== tsiz: 0x1f98 hsiz: 0x14 pbl: 0x2b1d94183c64 76543210 flag=-------- ntab=1 nrow=1 frre=-1 fsbo=0x14 fseo=0x1f77 avsp=0x1f6e tosp=0x1f6e 0xe:pti[0] nrow=1 offs=0 0x12:pri[0] offs=0x1f77 block_row_dump: tab 0, row 0, @0x1f77 tl: 22 fb: --H-FL-- lb: 0x2 cc: 1 col 0: [18] 4d 41 43 4c 45 41 4e 20 4c 4f 56 45 20 48 41 4e 4e 41 end_of_block_dump SQL> select dump('MACLEAN LOVE HANNA',16) from dual; DUMP('MACLEANLOVEHANNA',16) -------------------------------------------------------------------- Typ=96 Len=18: 4d,41,43,4c,45,41,4e,20,4c,4f,56,45,20,48,41,4e,4e,41 ???????????????????????,??flashback log??before image????????? create table flash_maclean1 (t1 int) tablespace users; SQL> select vs.name, ms.value 2 from v$mystat ms, v$sysstat vs 3 where vs.statistic# = ms.statistic# 4 and vs.name in ('redo size','db block changes'); NAME VALUE ---------------------------------------------------------------- ---------- db block changes 0 redo size 0 SQL> select name,value from v$sysstat where name like 'flashback log%'; NAME VALUE ---------------------------------------------------------------- ---------- flashback log writes 49 flashback log write bytes 9306112 SQL> begin 2 for i in 1..5000 loop 3 update flash_maclean1 set t1=t1+1; 4 commit; 5 end loop; 6 end; 7 / PL/SQL procedure successfully completed. SQL> select vs.name, ms.value 2 from v$mystat ms, v$sysstat vs 3 where vs.statistic# = ms.statistic# 4 and vs.name in ('redo size','db block changes'); NAME VALUE ---------------------------------------------------------------- ---------- db block changes 20006 redo size 3071288 SQL> select name,value from v$sysstat where name like 'flashback log%'; NAME VALUE ---------------------------------------------------------------- ---------- flashback log writes 52 flashback log write bytes 10338304 ??????????? ??hot block,???20006 ?block changes???? ??? 3000k ?redo log ? ??1000k? flashback log ?

    Read the article

  • Things you can draw with HTML tables

    - by Coronatus
    So I was watching a talk by Google's Marissa Mayer about speeding up Google's pages. They found that a shopping cart icon increased load time by 2%, and users then searched 2% less. They managed to replace the icon with an HTML table. Here is my attempt at drawing a shopping cart: (live example page) <html> <head> <style> table {border-collapse: collapse;} th, td {width: 8px; height: 8px;} th {background-color: blue;} td {background-color: white;} </style> </head> <body> <table> <!-- this row is just to see alignment --> <tr> <td></td><td></td><td></td><td></td><td></td> <td></td><td></td><td></td><td></td><td></td> <td></td><td></td><td></td><td></td><td></td> <td></td><td></td><td></td><td></td><td></td> </tr> <!-- handle --> <tr> <td colspan="14"></td> <th colspan="3"></th> <td colspan="3"></td> </tr> <tr> <td colspan="13"></td> <th colspan="2"></th> <td colspan="1"></td> <th colspan="2"></th> <td colspan="2"></td> </tr> <tr> <td colspan="13"></td> <th colspan="2"></th> <td colspan="1"></td> <th colspan="2"></th> <td colspan="2"></td> </tr> <tr> <td colspan="14"></td> <th colspan="3"></th> <td colspan="3"></td> </tr> <!-- main body --> <tr> <td colspan="5"></td> <th colspan="13"></th> <td colspan="2"></td> </tr> <tr> <td colspan="5"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="2"></td> </tr> <tr> <td colspan="5"></td> <th colspan="1"></th> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <th colspan="1"></th> <td colspan="3"></td> </tr> <tr> <td colspan="5"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="2"></td> </tr> <tr> <td colspan="5"></td> <th colspan="1"></th> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <th colspan="1"></th> <td colspan="3"></td> </tr> <tr> <td colspan="5"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="2"></td> </tr> <tr> <td colspan="5"></td> <th colspan="1"></th> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <th colspan="1"></th> <td colspan="3"></td> </tr> <tr> <td colspan="5"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> 
<td colspan="1"></td> <th colspan="1"></th> <td colspan="2"></td> </tr> <tr> <td colspan="5"></td> <th colspan="1"></th> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <td colspan="1"></td> <th colspan="1"></th> <th colspan="1"></th> <td colspan="3"></td> </tr> <tr> <td colspan="5"></td> <th colspan="13"></th> <td colspan="2"></td> </tr> <!-- wheels --> <tr> <td colspan="7"></td> <th colspan="2"></th> <td colspan="4"></td> <th colspan="2"></th> <td colspan="5"></td> </tr> <tr> <td colspan="6"></td> <th colspan="4"></th> <td colspan="2"></td> <th colspan="4"></th> <td colspan="4"></td> </tr> <tr> <td colspan="7"></td> <th colspan="2"></th> <td colspan="4"></td> <th colspan="2"></th> <td colspan="5"></td> </tr> </table> </body> </html> What can you draw in tables?! Impress us.

    Read the article

  • User Input That Involves A ' ' Causes A Substring Out Of Range Error

    - by Greenhouse Gases
    Hi Stackoverflow people. You have already helped me quite a bit but near the end of writing this program I have somewhat of a bug. You see in order to read in city names with a space in from a text file I use a '/' that is then replaced by the program for a ' ' (and when the serializer runs the opposite happens for next time the program is run). The problem is when a user inputs a name too add, search for, or delete that contains a space, for instance 'New York' I get a Debug Assertion Error with a substring out of range expression. I have a feeling it's to do with my correctCase function, or setElementsNull that looks at the string until it experiences a null element in the array, however ' ' is not null so I'm not sure how to fix this and I'm going a bit insane. Any help would be much appreciated. Here is my code: // U08221.cpp : main project file. #include "stdafx.h" #include <_iostream> #include <_string> #include <_fstream> #include <_cmath> using namespace std; class locationNode { public: string nodeCityName; double nodeLati; double nodeLongi; locationNode* Next; locationNode(string nameOf, double lat, double lon) { this->nodeCityName = nameOf; this->nodeLati = lat; this->nodeLongi = lon; this->Next = NULL; } locationNode() // NULL constructor { } void swapProps(locationNode *node2) { locationNode place; place.nodeCityName = this->nodeCityName; place.nodeLati = this->nodeLati; place.nodeLongi = this->nodeLongi; this->nodeCityName = node2->nodeCityName; this->nodeLati = node2->nodeLati; this->nodeLongi = node2->nodeLongi; node2->nodeCityName = place.nodeCityName; node2->nodeLati = place.nodeLati; node2->nodeLongi = place.nodeLongi; } void modify(string name) { this->nodeCityName = name; } void modify(double latlon, int mod) { switch(mod) { case 2: this->nodeLati = latlon; break; case 3: this->nodeLongi = latlon; break; } } void correctCase() // Correct upper and lower case letters of input { int MAX_SIZE = 35; int firstLetVal = this->nodeCityName[0], letVal; int n = 1; // variable for name index from second letter onwards if((this->nodeCityName[0] >90) && (this->nodeCityName[0] < 123)) // First letter is lower case { firstLetVal = firstLetVal - 32; // Capitalise first letter this->nodeCityName[0] = firstLetVal; } while(this->nodeCityName[n] != NULL) { if((this->nodeCityName[n] >= 65) && (this->nodeCityName[n] <= 90)) { if(this->nodeCityName[n - 1] != 32) { letVal = this->nodeCityName[n] + 32; this->nodeCityName[n] = letVal; } } n++; } } }; Here is the main part of the program: // U08221.cpp : main project file. 
#include "stdafx.h" #include "Locations2.h" #include <_iostream> #include <_string> #include <_fstream> #include <_cmath> using namespace std; #define pi 3.14159265358979323846264338327950288 #define radius 6371 #define gig 1073741824 //size of a gigabyte in bytes int n = 0,x, locationCount = 0, MAX_SIZE = 35 , g = 0, i = 0, modKey = 0, xx; string cityNameInput, alter; char targetCity[35], skipKey = ' '; double lat1, lon1, lat2, lon2, dist, dummy, modVal, result; bool acceptedInput = false, match = false, nodeExists = false;// note: addLocation(), set to true to enable user input as opposed to txt file locationNode *temp, *temp2, *example, *seek, *bridge, *start_ptr = NULL; class Menu { int junction; public: /* Convert decimal degrees to radians */ public: void setElementsNull(char cityParam[]) { int y=0; while(cityParam[y] != NULL) { y++; } while(y < MAX_SIZE) { cityParam[y] = NULL; y++; } } void correctCase(string name) // Correct upper and lower case letters of input { int MAX_SIZE = 35; int firstLetVal = name[0], letVal; int n = 1; // variable for name index from second letter onwards if((name[0] >90) && (name[0] < 123)) // First letter is lower case { firstLetVal = firstLetVal - 32; // Capitalise first letter name[0] = firstLetVal; } while(name[n] != NULL) { if((name[n] >= 65) && (name[n] <= 90)) { letVal = name[n] + 32; name[n] = letVal; } n++; } for(n = 0; targetCity[n] != NULL; n++) { targetCity[n] = name[n]; } } bool nodeExistTest(char targetCity[]) // see if entry is present in the database { match = false; seek = start_ptr; int letters = 0, letters2 = 0, x = 0, y = 0; while(targetCity[y] != NULL) { letters2++; y++; } while(x <= locationCount) // locationCount is number of entries currently in list { y=0, letters = 0; while(seek->nodeCityName[y] != NULL) // count letters in the current name { letters++; y++; } if(letters == letters2) // same amount of letters in the name { y = 0; while(y <= letters) // compare each letter against one another { if(targetCity[y] == seek->nodeCityName[y]) { match = true; y++; } else { match = false; y = letters + 1; // no match, terminate comparison } } } if(match) { x = locationCount + 1; //found match so terminate loop } else{ if(seek->Next != NULL) { bridge = seek; seek = seek->Next; x++; } else { x = locationCount + 1; // end of list so terminate loop } } } return match; } double deg2rad(double deg) { return (deg * pi / 180); } /* Convert radians to decimal degrees */ double rad2deg(double rad) { return (rad * 180 / pi); } /* Do the calculation */ double distance(double lat1, double lon1, double lat2, double lon2, double dist) { dist = sin(deg2rad(lat1)) * sin(deg2rad(lat2)) + cos(deg2rad(lat1)) * cos(deg2rad(lat2)) * cos(deg2rad(lon1 - lon2)); dist = acos(dist); dist = rad2deg(dist); dist = (radius * pi * dist) / 180; return dist; } void serialise() { // Serialize to format that can be written to text file fstream outfile; outfile.open("locations.txt",ios::out); temp = start_ptr; do { for(xx = 0; temp->nodeCityName[xx] != NULL; xx++) { if(temp->nodeCityName[xx] == 32) { temp->nodeCityName[xx] = 47; } } outfile << endl << temp->nodeCityName<< " "; outfile<<temp->nodeLati<< " "; outfile<<temp->nodeLongi; temp = temp->Next; }while(temp != NULL); outfile.close(); } void sortList() // do this { int changes = 1; locationNode *node1, *node2; while(changes > 0) // while changes are still being made to the list execute { node1 = start_ptr; node2 = node1->Next; changes = 0; do { xx = 1; if(node1->nodeCityName[0] > node2->nodeCityName[0]) //compare first 
letter of name with next in list { node1->swapProps(node2); // should come after the next in the list changes++; } else if(node1->nodeCityName[0] == node2->nodeCityName[0]) // if same first letter { while(node1->nodeCityName[xx] == node2->nodeCityName[xx]) // check next letter of name { if((node1->nodeCityName[xx + 1] != NULL) && (node2->nodeCityName[xx + 1] != NULL)) // check next letter until not the same { xx++; } else break; } if(node1->nodeCityName[xx] > node2->nodeCityName[xx]) { node1->swapProps(node2); // should come after the next in the list changes++; } } node1 = node2; node2 = node2->Next; // move to next pair in list } while(node2 != NULL); } } void initialise() { cout << "Populating List..."; ifstream inputFile; inputFile.open ("locations.txt", ios::in); char inputName[35] = " "; double inputLati = 0, inputLongi = 0; //temp = new locationNode(inputName, inputLati, inputLongi); do { inputFile.get(inputName, 35, ' '); inputFile >> inputLati; inputFile >> inputLongi; if(inputName[0] == 10 || 13) //remove linefeed from input { for(int i = 0; inputName[i] != NULL; i++) { inputName[i] = inputName[i + 1]; } } for(xx = 0; inputName[xx] != NULL; xx++) { if(inputName[xx] == 47) // if it is a '/' { inputName[xx] = 32; // replace it for a space } } temp = new locationNode(inputName, inputLati, inputLongi); if(start_ptr == NULL){ // if list is currently empty, start_ptr will point to this node start_ptr = temp; } else { temp2 = start_ptr; // We know this is not NULL - list not empty! while (temp2->Next != NULL) { temp2 = temp2->Next; // Move to next link in chain until reach end of list } temp2->Next = temp; } ++locationCount; // increment counter for number of records in list } while(!inputFile.eof()); cout << "Successful!" << endl << "List contains: " << locationCount << " entries" << endl; inputFile.close(); cout << endl << "*******************************************************************" << endl << "DISTANCE CALCULATOR v2.0\tAuthors: Darius Hodaei, Joe Clifton" << endl; } void menuInput() { char menuChoice = ' '; while(menuChoice != 'Q') { // Menu if(skipKey != 'X') // This is set by case 'S' below if a searched term does not exist but wants to be added { cout << endl << "*******************************************************************" << endl; cout << "Please enter a choice for the menu..." 
<< endl << endl; cout << "(P) To print out the list" << endl << "(O) To order the list alphabetically" << endl << "(A) To add a location" << endl << "(D) To delete a record" << endl << "(C) To calculate distance between two points" << endl << "(S) To search for a location in the list" << endl << "(M) To check memory usage" << endl << "(U) To update a record" << endl << "(Q) To quit" << endl; cout << endl << "*******************************************************************" << endl; cin >> menuChoice; if(menuChoice >= 97) { menuChoice = menuChoice - 32; // Turn the lower case letter into an upper case letter } } skipKey = ' '; //Reset skipKey so that it does not skip the menu switch(menuChoice) { case 'P': temp = start_ptr; // set temp to the start of the list do { if (temp == NULL) { cout << "You have reached the end of the database" << endl; } else { // Display details for what temp points to at that stage cout << "Location : " << temp->nodeCityName << endl; cout << "Latitude : " << temp->nodeLati << endl; cout << "Longitude : " << temp->nodeLongi << endl; cout << endl; // Move on to next locationNode if one exists temp = temp->Next; } } while (temp != NULL); break; case 'O': { sortList(); // pass by reference??? cout << "List reordered alphabetically" << endl; } break; case 'A': char cityName[35]; double lati, longi; cout << endl << "Enter the name of the location: "; cin >> cityName; for(xx = 0; cityName[xx] != NULL; xx++) { if(cityName[xx] == 47) // if it is a '/' { cityName[xx] = 32; // replace it for a space } } if(!nodeExistTest(cityName)) { cout << endl << "Please enter the latitude value for this location: "; cin >> lati; cout << endl << "Please enter the longitude value for this location: "; cin >> longi; cout << endl; temp = new locationNode(cityName, lati, longi); temp->correctCase(); //start_ptr allignment if(start_ptr == NULL){ // if list is currently empty, start_ptr will point to this node start_ptr = temp; } else { temp2 = start_ptr; // We know this is not NULL - list not empty! while (temp2->Next != NULL) { temp2 = temp2->Next; // Move to next link in chain until reach end of list } temp2->Next = temp; } ++locationCount; // increment counter for number of records in list cout << "Location sucessfully added to the database! There are " << locationCount << " location(s) stored" << endl; } else { cout << "Node is already present in the list and so cannot be added again" << endl; } break; case 'D': { junction = 0; locationNode *place; cout << "Enter the name of the city you wish to remove" << endl; cin >> targetCity; setElementsNull(targetCity); correctCase(targetCity); for(xx = 0; targetCity[xx] != NULL; xx++) { if(targetCity[xx] == 47) { targetCity[xx] = 32; } } if(nodeExistTest(targetCity)) //if this node does exist { if(seek == start_ptr) // if it is the first in the list { junction = 1; } if(seek->Next == NULL) // if it is last in the list { junction = 2; } switch(junction) // will alter list accordingly dependant on where the searched for link is { case 1: start_ptr = start_ptr->Next; delete seek; --locationCount; break; case 2: place = seek; seek = bridge; seek->Next = NULL; delete place; --locationCount; break; default: bridge->Next = seek->Next; delete seek; --locationCount; break; } cout << endl << "Link deleted. There are now " << locationCount << " locations." 
<< endl; } else { cout << "That entry does not currently exist" << endl << endl << endl; } } break; case 'C': { char city1[35], city2[35]; cout << "Enter the first city name" << endl; cin >> city1; setElementsNull(city1); correctCase(targetCity); if(nodeExistTest(city1)) { lat1 = seek->nodeLati; lon1 = seek->nodeLongi; cout << "Lati = " << seek->nodeLati << endl << "Longi = " << seek->nodeLongi << endl << endl; } cout << "Enter the second city name" << endl; cin >> city2; setElementsNull(city2); correctCase(targetCity); if(nodeExistTest(city2)) { lat2 = seek->nodeLati; lon2 = seek->nodeLongi; cout << "Lati = " << seek->nodeLati << endl << "Longi = " << seek->nodeLongi << endl << endl; } result = distance (lat1, lon1, lat2, lon2, dist); cout << "The distance between these two locations is " << result << " kilometres." << endl; } break; case 'S': { char choice; cout << "Enter search term..." << endl; cin >> targetCity; setElementsNull(targetCity); correctCase(targetCity); if(nodeExistTest(targetCity)) { cout << "Latitude: " << seek->nodeLati << endl << "Longitude: " << seek->nodeLongi << endl; } else { cout << "Sorry, that city is not currently present in the list." << endl << "Would you like to add this city now Y/N?" << endl; cin >> choice; /*while(choice != ('Y' || 'N')) { cout << "Please enter a valid choice..." << endl; cin >> choice; }*/ switch(choice) { case 'Y': skipKey = 'X'; menuChoice = 'A'; break; case 'N': break; default : cout << "Invalid choice" << endl; break; } } break; } case 'M': { cout << "Locations currently stored: " << locationCount << endl << "Memory used for this: " << (sizeof(start_ptr) * locationCount) << " bytes" << endl << endl << "You can store " << ((gig - (sizeof(start_ptr) * locationCount)) / sizeof(start_ptr)) << " more locations" << endl ; break; } case 'U': { cout << "Enter the name of the Location you would like to update: "; cin >> targetCity; setElementsNull(targetCity); correctCase(targetCity); if(nodeExistTest(targetCity)) { cout << "Select (1) to alter City Name, (2) to alter Longitude, (3) to alter Latitude" << endl; cin >> modKey; switch(modKey) { case 1: cout << "Enter the new name: "; cin >> alter; cout << endl; seek->modify(alter); break; case 2: cout << "Enter the new latitude: "; cin >> modVal; cout << endl; seek->modify(modVal, modKey); break; case 3: cout << "Enter the new longitude: "; cin >> modVal; cout << endl; seek->modify(modVal, modKey); break; default: break; } } else cout << "Location not found" << endl; break; } } } } }; int main(array<System::String ^> ^args) { Menu mm; //mm.initialise(); mm.menuInput(); mm.serialise(); }

    Read the article

  • Apache load balancer limits with Tomcat over AJP

    - by PAS
    Hi All, I have Apache acting as a load balancer in front of 3 Tomcat servers. Occasionally, Apache returns 503 responses, which I would like to remove completely. All 4 servers are not under significant load in terms of CPU, memory, or disk, so I am a little unsure what is reaching it's limits or why. 503s are returned when all workers are in error state - whatever that means. Here are the details: Apache config: <IfModule mpm_prefork_module> StartServers 30 MinSpareServers 30 MaxSpareServers 60 MaxClients 200 MaxRequestsPerChild 1000 </IfModule> ... <Proxy *> AddDefaultCharset Off Order deny,allow Allow from all </Proxy> # Tomcat HA cluster <Proxy balancer://mycluster> BalancerMember ajp://10.176.201.9:8009 keepalive=On retry=1 timeout=1 ping=1 BalancerMember ajp://10.176.201.10:8009 keepalive=On retry=1 timeout=1 ping=1 BalancerMember ajp://10.176.219.168:8009 keepalive=On retry=1 timeout=1 ping=1 </Proxy> # Passes thru track. or api. ProxyPreserveHost On ProxyStatus On # Original tracker ProxyPass /m balancer://mycluster/m ProxyPassReverse /m balancer://mycluster/m Tomcat config: <Server port="8005" shutdown="SHUTDOWN"> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <Listener className="org.apache.catalina.core.JasperListener" /> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <Service name="Catalina"> <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <Engine name="Catalina" defaultHost="localhost"> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> </Engine> </Service> </Server> Apache error log: [Mon Mar 22 18:39:47 2010] [error] (70007)The timeout specified has expired: proxy: AJP: attempt to connect to 10.176.201.10:8009 (10.176.201.10) failed [Mon Mar 22 18:39:47 2010] [error] ap_proxy_connect_backend disabling worker for (10.176.201.10) [Mon Mar 22 18:39:47 2010] [error] proxy: AJP: failed to make connection to backend: 10.176.201.10 [Mon Mar 22 18:39:47 2010] [error] (70007)The timeout specified has expired: proxy: AJP: attempt to connect to 10.176.201.9:8009 (10.176.201.9) failed [Mon Mar 22 18:39:47 2010] [error] ap_proxy_connect_backend disabling worker for (10.176.201.9) [Mon Mar 22 18:39:47 2010] [error] proxy: AJP: failed to make connection to backend: 10.176.201.9 [Mon Mar 22 18:39:47 2010] [error] (70007)The timeout specified has expired: proxy: AJP: attempt to connect to 10.176.219.168:8009 (10.176.219.168) failed [Mon Mar 22 18:39:47 2010] [error] ap_proxy_connect_backend disabling worker for (10.176.219.168) [Mon Mar 22 18:39:47 2010] [error] proxy: AJP: failed to make connection to backend: 10.176.219.168 [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). 
All workers are in error state Load balancer top info: top - 23:44:11 up 210 days, 4:32, 1 user, load average: 0.10, 0.11, 0.09 Tasks: 135 total, 2 running, 133 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.2%id, 0.1%wa, 0.0%hi, 0.1%si, 0.3%st Mem: 524508k total, 517132k used, 7376k free, 9124k buffers Swap: 1048568k total, 352k used, 1048216k free, 334720k cached Tomcat top info: top - 23:47:12 up 210 days, 3:07, 1 user, load average: 0.02, 0.04, 0.00 Tasks: 63 total, 1 running, 62 sleeping, 0 stopped, 0 zombie Cpu(s): 0.2%us, 0.0%sy, 0.0%ni, 99.8%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 2097372k total, 2080888k used, 16484k free, 21464k buffers Swap: 4194296k total, 380k used, 4193916k free, 1520912k cached Catalina.out does not have any error messages in it. According to Apache's server status, it seems to be maxing out at 143 requests/sec. I believe the servers can handle substantially more load than they are, so any hints about low default limits or other reasons why this setup would be maxing out would be greatly appreciated.

    Read the article

< Previous Page | 491 492 493 494 495 496 497 498 499 500 501 502  | Next Page >