Search Results

Search found 4775 results on 191 pages for 'trace flags'.


  • Scenarios for Throwing Exceptions

    - by Joe Mayo
    I recently came across a situation where someone had an opinion that differed from mine of when an exception should be thrown. This particular case was an issue opened on LINQ to Twitter for an Exception on EndSession.  The premise of the issue was that the poster didn’t feel an exception should be raised, regardless of authentication status.  As first, this sounded like a valid point.  However, I went back to review my code and decided not to make any changes. Here's my rationale: 1. The exception doesn’t occur if the user is authenticated when EndAccountSession is called. 2. The exception does occur if the user is not authenticated when EndAccountSession is called. 3. The exception represents the fact that EndAccountSession is not able to fulfill its intended purpose - to end the session.  If a session never existed, then it would not be possible to perform the requested action.  Therefore, an exception is appropriate. To help illustrate how to handle this situation, I've modified the following code in Program.cs in the LinqToTwitterDemo project to illustrate the situation: static void EndSession(ITwitterAuthorizer auth) { using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/")) { try { //Log twitterCtx.Log = Console.Out; var status = twitterCtx.EndAccountSession(); Console.WriteLine("Request: {0}, Error: {1}" , status.Request , status.Error); } catch (TwitterQueryException tqe) { var webEx = tqe.InnerException as WebException; if (webEx != null) { var webResp = webEx.Response as HttpWebResponse; if (webResp != null && webResp.StatusCode == HttpStatusCode.Unauthorized) Console.WriteLine("Twitter didn't recognize you as having been logged in. Therefore, your request to end session is illogical.\n"); } var status = tqe.Response; Console.WriteLine("Request: {0}, Error: {1}" , status.Request , status.Error); } } } As expected, LINQ to Twitter wraps the exception in a TwitterQueryException as the InnerException.  The TwitterQueryException serves a very useful purpose through it's Response property.  Notice in the example above that the response has Request and Error proprieties.  These properties correspond to the information that Twitter returns as part of it's response payload.  This is often useful while debugging to help you understand why Twitter was unable to perform the  requested action.  Other times, it's cryptic, but that's another story.  At least you have some way of knowing in your code how to anticipate and handle these situations, along with having extra information to debug with. To sum things up, there are two points to make: when and why an exception should be raised and when to wrap and re-throw an exception in a custom exception type. I felt it was necessary to allow the exception to be raised because the called method was unable to perform the task it was designed for.  I also felt that it is inappropriate for a general library to do anything with exceptions because that could potentially hide a problem from the caller.  A related point is that it should be the exclusive decision of the application that uses the library on what to do with an exception.  Another aspect of this situation is that I wrapped the exception in a custom exception and re-threw.  This is a tough call because I don’t want to hide any stack trace information.  
However, the need to make the exception more meaningful by including the vital information returned from Twitter swayed me toward designing an interface that is as helpful as possible to library consumers.  As shown in the code above, you can dig into the exception and pull out a lot of good information, such as the fact that the underlying HTTP response was a 401 Unauthorized.  In all, trade-offs are seldom perfect for every case, but given that the method was unable to perform its intended function, that this is a library, and that the extra information makes debugging easier, this seemed to be the better design. @JoeMayo
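To make the wrapping decision concrete, here is a minimal sketch of the wrap-and-rethrow pattern described above. The type and member names are illustrative only (this is not the actual LINQ to Twitter implementation); the point is that the original exception is preserved as InnerException while the Twitter response details travel with the wrapper:

    using System;
    using System.Net;

    // Illustrative custom exception -- not the real TwitterQueryException.
    public class TwitterResponseException : Exception
    {
        public string Request { get; private set; }
        public string Error { get; private set; }

        public TwitterResponseException(string message, string request, string error, Exception inner)
            : base(message, inner)   // keep the original exception (and its stack trace) as InnerException
        {
            Request = request;
            Error = error;
        }
    }

    public static class SessionEndpoint
    {
        // Hypothetical wrapper around a raw HTTP call, showing where the wrap happens.
        public static void EndSession(Action sendRequest, string requestText, string errorText)
        {
            try
            {
                sendRequest();
            }
            catch (WebException webEx)
            {
                // Add the response details, but never swallow the original exception.
                throw new TwitterResponseException("EndAccountSession failed.", requestText, errorText, webEx);
            }
        }
    }

Because the low-level WebException rides along as the inner exception, callers lose no stack trace information while still getting the request and error details in one place.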

    Read the article

  • Social HCM: Is Your Team Listening?

    - by Mike Stiles
    Does integrating Social HCM into your enterprise make sense? Consider Sam and Christina. Sam is a new hire at a big company. On the job 3 weeks, a question has come up on how to properly file an expense report to get reimbursed. It was covered in the onboarding session, but shockingly enough, Sam didn’t memorize or write down every word of the session. The answer is probably in a handout, in a stack of handouts 2 inches thick. It also might be on the employee web site…somewhere. Christina is a new hire at a different big company. She has the same question. She logs into her company’s social network, goes to the “new hires” group, asks her question and gets an answer in seconds. Christina says, “Cool!” Sam says, “Grrrr.” It’s safe to say the qualified talent your company wants is accustomed to using social platforms to communicate and get quick answers. As such, Christina is comfortable at her new company, whereas Sam is wondering what he’s gotten himself into. Companies that cling to talent communication and management systems that don’t speak to talent’s needs or expectations put themselves at risk. Right from the recruiting stage, prospects can determine if a company has embraced the communications tools of the 21st century. If they don’t see it, alarm bells go off. With great talent more in demand than ever, enterprises should reconsider making “this is the way we do it, you adapt to us” their mantra. Other blogs have clearly outlined that apart from meeting top recruits’ expectations, Social HCM benefits the organization itself in terms of efficiency, talent performance & measurement. Recruiting: Jobvite shows 64% of companies hired using social. 89% of job seekers are using social in their search. Social can give employers access to relevant communities of prospects and advance the brand. Nucleus Research found general hiring software can provide over 1,000% ROI by reducing churn and improving screening. Social talent acquisition should perform at least as well. Learning & Development:Employees, learning from the company or from peers, can be kept on top of the latest needed skillsets and engage in self-paced training so as to advance within the company. Performance Management:Just as gamers are egged on by levels and achievements, talent can reach for workplace kudos, be they shout-outs from peers & managers or formally established milestones. Plus employee reviews become consistent and fair as managers have access to the cumulative feedback social offers. Workflow and Collaboration:With workforces dispersing in terms of physical location, social provides a platform that helps eliminate drawbacks that would have brought just 10 years ago. Finding and connecting with just the right colleague to get the most relevant info at any given time has never been more possible…or expected. While yes, marketing has taken the social lead inside the enterprise, HCM (with the word “human” right there in its name) is the obvious locale for the next big integration of social in business. The technology is there. At Oracle, Fusion HCM apps are deeply embedded with Social HCM…just one example of systems taking social across the enterprise. Christina’s company is communicating with her in ways she’s used to. Sam’s company may as well be trying to talk to him using signal flags. @mikestilesPhoto via stock.xchng

    Read the article

  • Syntax of passing lambda

    - by Astara
    Right now, I'm working on refactoring a program that calls its parts by polling to a more event-driven structure. I've created sched and task classes, with the sched to become a base class of the current main loop. The tasks will be created for each meter so they can be called off of that instead of polling. Each of the events main calls is a type of meter that gathers info and displays it. When the program is coming up, all enabled meters get 'constructed' by a main-sub. In that sub, I want to store off the "this" pointer associated with the meter, as well as the common name for the "action" routine. void MeterMaker::Meter_n_Task (Meter * newmeter) { push(newmeter); // handle non-timed draw events Task t = new Task(now() + 0.5L); t.period={0,1U}; t.work_meter = newmeter; t.work = [&newmeter](){newmeter.checkevent();};<<--attempt at lambda t.flags = T_Repeat; t.enable_task(); _xos->sched_insert(t); } A sample call to it: Meter_n_Task(new CPUMeter(_xos, "CPU ")); I've made the scheduler a base class of the main routine (that handles the loop), and I've tried several variations to get the task class to be a base of the meter class, but keep running into roadblocks. It's a lot like "whack-a-mole" -- pound in something to fix something in one place, and then a new problem pops out elsewhere. Part of the problem is that the sched.h file that is trying to hold the Task Q includes the Task header file. The task file wants to refer to the most "base" Meter class. The meter class pulls in the main class of the parent, as it passes a copy of the parent to the children so they can access the draw routines in the parent. Two references in the task file are for the 'this' pointer of the meter and the meter's update sub (to be called via this). void *this_data= NULL; void (*this_func)() = NULL; Note -- I didn't really want to store these in the class, as I wanted to use a lambda in that meter & task routine above to store a routine+context to be used to call the meter's action routine. Couldn't figure out the syntax. But I am running into other syntax problems trying to store the pointers...such as g++: COMPILE lsched.cc In file included from meter.h:13:0, from ltask.h:17, from lsched.h:13, from lsched.cc:13: xosview.h:30:47: error: expected class-name before ‘{’ token class XOSView : public XWin, public Scheduler { Like above, where it asks for a class name right where the class name "Scheduler" is. !?!? Huh? That IS a class name. I keep going in circles with things that don't make sense... Ideally I'd get the lambda to work right in the Meter_n_Task routine at the top. I wanted to only store 1 pointer in the 'Task' class that was a pointer to my lambda that would have already captured the "this" value ... but couldn't get that syntax to work at all when I tried to store it into a var in the 'Task' class. This project, FWIW, is my teething project on the new C++... (of course it's simple!.. ;-))... I've made quite a bit of progress in other areas in the code, but this lambda syntax has me stumped...it's at times like these that I appreciate the ease of this type of operation in Perl. Sigh. Not sure the best way to ask for help here, as this isn't a simple question. But thought I'd try!... ;-) Too bad I can't attach files to this Q.

    Read the article

  • Extrapolation breaks collision detection

    - by user22241
    Before applying extrapolation to my sprite's movement, my collision worked perfectly. However, after applying extrapolation to my sprite's movement (to smooth things out), the collision no longer works. This is how things worked before extrapolation: However, after I implement my extrapolation, the collision routine breaks. I am assuming this is because it is acting upon the new coordinate that has been produced by the extrapolation routine (which is situated in my render call ). After I apply my extrapolation How to correct this behaviour? I've tried puting an extra collision check just after extrapolation - this does seem to clear up a lot of the problems but I've ruled this out because putting logic into my rendering is out of the question. I've also tried making a copy of the spritesX position, extrapolating that and drawing using that rather than the original, thus leaving the original intact for the logic to pick up on - this seems a better option, but it still produces some weird effects when colliding with walls. I'm pretty sure this also isn't the correct way to deal with this. I've found a couple of similar questions on here but the answers haven't helped me. This is my extrapolation code: public void onDrawFrame(GL10 gl) { //Set/Re-set loop back to 0 to start counting again loops=0; while(System.currentTimeMillis() > nextGameTick && loops < maxFrameskip){ SceneManager.getInstance().getCurrentScene().updateLogic(); nextGameTick+=skipTicks; timeCorrection += (1000d/ticksPerSecond) % 1; nextGameTick+=timeCorrection; timeCorrection %=1; loops++; tics++; } extrapolation = (float)(System.currentTimeMillis() + skipTicks - nextGameTick) / (float)skipTicks; render(extrapolation); } Applying extrapolation render(float extrapolation){ //This example shows extrapolation for X axis only. Y position (spriteScreenY is assumed to be valid) extrapolatedPosX = spriteGridX+(SpriteXVelocity*dt)*extrapolation; spriteScreenPosX = extrapolationPosX * screenWidth; drawSprite(spriteScreenX, spriteScreenY); } Edit As I mentioned above, I have tried making a copy of the sprite's coordinates specifically to draw with.... this has it's own problems. Firstly, regardless of the copying, when the sprite is moving, it's super-smooth, when it stops, it's wobbling slightly left/right - as it's still extrapolating it's position based on the time. Is this normal behavior and can we 'turn it off' when the sprite stops? I've tried having flags for left / right and only extrapolating if either of these is enabled. I've also tried copying the last and current positions to see if there is any difference. However, as far as collision goes, these don't help. If the user is pressing say, the right button and the sprite is moving right, when it hits a wall, if the user continues to hold the right button down, the sprite will keep animating to the right, while being stopped by the wall (therefore not actually moving), however because the right flag is still set and also because the collision routine is constantly moving the sprite out of the wall, it still appear to the code (not the player) that the sprite is still moving, and therefore extrapolation continues. So what the player would see, is the sprite 'static' (yes, it's animating, but it's not actually moving across the screen), and every now and then it shakes violently as the extrapolation attempts to do it's thing....... Hope this help
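One way to structure the "copy used only for drawing" idea from the question is to keep the logic position authoritative, derive a throwaway render position each frame, and skip extrapolation entirely whenever the sprite has no velocity, so it cannot wobble while standing still or while a wall keeps cancelling its movement. The sketch below only illustrates that idea in C#; the field and method names are hypothetical and it is not the original Java code:

    // Minimal sketch: the fixed-timestep update (and collision) owns LogicX;
    // rendering derives a temporary value and never writes it back.
    public class Sprite
    {
        public float LogicX;     // updated and collision-corrected in updateLogic()
        public float VelocityX;  // units per logic tick; zero when blocked or idle

        public float RenderX(float extrapolation)
        {
            // No movement means no extrapolation, so a sprite resting against
            // a wall (or standing still) is drawn exactly at its logic position.
            if (VelocityX == 0f)
                return LogicX;

            return LogicX + VelocityX * extrapolation;  // extrapolation is in [0, 1)
        }
    }

Because RenderX never writes back to LogicX, the collision check in the logic update keeps operating on un-extrapolated coordinates, and setting VelocityX to zero when the wall stops the sprite removes the shaking described above.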

    Read the article

  • What's wrong with my wireless?

    - by dazzle
    I am having issues with my wireless connection. My connection is constantly disconnecting, then attempting to reconnect, reconnecting momentarily, then disconnecting etc. on times scales that range from seconds to minutes. In the meantime, needless to say I'm having significant packet loss. I'm running Ubuntu 14.04 64bit, updated and upgraded to today. Here is my card and driver: delta@sager:~$ lspci -vq | grep -i wireless -B 1 -A 5 04:00.0 Network controller: Intel Corporation Wireless 7260 (rev 73) Subsystem: Intel Corporation Dual Band Wireless-AC 7260 Flags: bus master, fast devsel, latency 0, IRQ 47 Memory at f7d00000 (64-bit, non-prefetchable) [size=8K] Capabilities: Kernel driver in use: iwlwifi Here is my kernel: delta@sager:~$ uname -r 3.13.0-34-generic None of the other machines on my home network are having these issues. Windows Vista is networking without issue for goodness sake ;-) Here is a small clipping from the output of dmesg. As you can see, I am getting a cfg80211 message of some sort over and over again (FYI, I've replaced my MAC address with a series of dashes, so anytime there is a ---------------, that was where the MAC address was: [ 1881.739161] wlan1: authenticate with --------------- [ 1881.741561] wlan1: send auth to --------------- (try 1/3) [ 1881.743440] wlan1: authenticated [ 1881.746027] wlan1: associate with --------------- (try 1/3) [ 1881.749244] wlan1: RX AssocResp from --------------- (capab=0x411 status=0 aid=4) [ 1881.754727] wlan1: associated [ 1881.754827] cfg80211: Calling CRDA for country: US [ 1881.761552] cfg80211: Regulatory domain changed to country: US [ 1881.761559] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 1881.761564] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm) [ 1881.761568] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 1700 mBm) [ 1881.761571] cfg80211: (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1881.761574] cfg80211: (5490000 KHz - 5600000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1881.761577] cfg80211: (5650000 KHz - 5710000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1881.761580] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm) [ 1881.761584] cfg80211: (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 4000 mBm) [ 1882.391038] cfg80211: Calling CRDA to update world regulatory domain [ 1882.396254] cfg80211: World regulatory domain updated: [ 1882.396260] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 1882.396265] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1882.396268] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1882.396271] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 1882.396274] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1882.396277] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1886.148252] wlan1: authenticate with --------------- [ 1886.150005] wlan1: send auth to --------------- (try 1/3) [ 1886.151807] wlan1: authenticated [ 1886.154847] wlan1: associate with --------------- (try 1/3) [ 1886.158147] wlan1: RX AssocResp from --------------- (capab=0x411 status=0 aid=4) [ 1886.163464] wlan1: associated [ 1886.163520] wlan1: Limiting TX power to 30 (30 - 0) dBm as advertised by --------------- [ 1886.163588] cfg80211: Calling CRDA for country: US [ 1886.170500] cfg80211: Regulatory domain changed to country: US [ 1886.170508] cfg80211: 
(start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 1886.170513] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm) [ 1886.170517] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 1700 mBm) [ 1886.170520] cfg80211: (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1886.170523] cfg80211: (5490000 KHz - 5600000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1886.170526] cfg80211: (5650000 KHz - 5710000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1886.170529] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm) [ 1886.170533] cfg80211: (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 4000 mBm) [ 1887.200197] cfg80211: Calling CRDA to update world regulatory domain [ 1887.203655] cfg80211: World regulatory domain updated: [ 1887.203659] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 1887.203662] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1887.203664] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1887.203666] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 1887.203668] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1887.203670] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) I've poked around on AskUbuntu, and have not found any adequate solutions; have also found similar threads that were left unanswered. Any advice/experience/threads I might be able to pull on would be greatly appreciated. In your opinion, is this a kernel issue, hardware issue, etc.? Thanks in advance. EDIT: chili, here's the output of iwconfig: delta@sager:~$ iwconfig wlan1 IEEE 802.11abg ESSID:"LANbeforetime" Mode:Managed Frequency:2.412 GHz Access Point: ----------- Bit Rate=48 Mb/s Tx-Power=16 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=44/70 Signal level=-66 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:80 Missed beacon:0 eth0 no wireless extensions. lo no wireless extensions.

    Read the article

  • Don't Throw Duplicate Exceptions

    In your code, you'll sometimes have to write code that validates input using a variety of checks.  Assuming you haven't embraced AOP and done everything with attributes, it's likely that your defensive coding is going to look something like this: public void Foo(SomeClass someArgument) { if(someArgument == null) { throw new InvalidArgumentException("someArgument"); } if(!someArgument.IsValid()) { throw new InvalidArgumentException("someArgument"); }   // Do Real Work } Do you see a problem here?  Here's the deal: exceptions should be meaningful.  They have value at a number of levels: in the code, throwing an exception lets the developer know that there is an unsupported condition here; in calling code, different types of exceptions may be handled differently; at runtime, logging of exceptions provides a valuable diagnostic tool.  It's this last reason I want to focus on.  If you find yourself literally throwing the exact same exception in more than one location within a given method, stop.  The stack trace for such an exception is likely going to be identical regardless of which path of execution led to the exception being thrown.  When that happens, you or whoever is debugging the problem will have to guess which exception was thrown.  Guessing is a great way to introduce additional problems and/or greatly increase the amount of time required to properly diagnose and correct any bugs related to this behavior. Don't Guess: Be Specific. When throwing an exception from multiple code paths within the code, be specific.  Virtually every exception allows a custom message: use it and ensure each case is unique.  If the exception might be handled differently by the caller, then consider implementing a new custom exception type.  Also, don't assume that you can improve the code by collapsing the if-then logic into a single check with short-circuiting (e.g. if(x == null || !x.IsValid())); that guarantees that you can't put different information into the message the way you can by constructing the exception separately in each case. The code above might be refactored like so:   public void Foo(SomeClass someArgument) { if(someArgument == null) { throw new ArgumentNullException("someArgument"); } if(!someArgument.IsValid()) { throw new InvalidArgumentException("someArgument"); }   // Do Real Work } In this case it's taking advantage of the fact that there is already an ArgumentNullException in the framework, but if you didn't have an IsValid() method and were doing validation on your own, it might look like this: public void Foo(SomeClass someArgument) { if(someArgument.Quantity < 0) { throw new InvalidArgumentException("someArgument", "Quantity cannot be less than 0. Quantity: " + someArgument.Quantity); } if(someArgument.Quantity > 100) { throw new InvalidArgumentException("someArgument", "SomeArgument.Quantity cannot exceed 100. Quantity: " + someArgument.Quantity); }   // Do Real Work }   Note that in this last example, I'm throwing the same exception type in each case, but with different Message values.  I'm also making sure to include the value that resulted in the exception, as this can be extremely useful for debugging.  (How many times have you wished NullReferenceException would tell you the name of the variable it was trying to reference?) Don't add work to those who will follow after you to maintain your application (especially since it's likely to be you).
Be specific with your exception messages and follow DRY when throwing exceptions within a given method by throwing a unique exception for each interesting case of invalid state.
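If a caller genuinely needs to react differently to one of these failures, a small custom exception type carries both a distinct type to catch and the offending value. The following is only a sketch with made-up names (InvalidQuantityException and OrderValidator are not framework types):

    using System;

    // Hypothetical custom exception: callers can catch it specifically,
    // and the message carries the value that caused the failure.
    public class InvalidQuantityException : ArgumentException
    {
        public int Quantity { get; private set; }

        public InvalidQuantityException(string paramName, int quantity, string reason)
            : base(reason + " Quantity: " + quantity, paramName)
        {
            Quantity = quantity;
        }
    }

    public static class OrderValidator
    {
        public static void Validate(int quantity)
        {
            // Each check throws with its own message, as recommended above.
            if (quantity < 0)
            {
                throw new InvalidQuantityException("quantity", quantity, "Quantity cannot be less than 0.");
            }
            if (quantity > 100)
            {
                throw new InvalidQuantityException("quantity", quantity, "Quantity cannot exceed 100.");
            }
        }
    }

A log entry produced from this exception names the parameter, the reason, and the bad value, so nobody has to guess which check failed.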

    Read the article

  • Investigating Strategies For Functional Decomposition

    - by Liam McLennan
    Introducing Functional Decomposition Before I begin I must apologise. I think I am using the term ‘functional decomposition’ loosely, and probably incorrectly. For the purpose of this article I use functional decomposition to mean the recursive splitting of a large problem into increasingly smaller ones, so that the one large problem may be solved by solving a set of smaller problems. The justification for functional decomposition is that the decomposed problem is more easily solved. As software developers we recognise that the smaller pieces are more easily tested, since they do less and are more cohesive. Functional decomposition is important to all scientific pursuits. Once we understand natural selection we can start to look for humanities ancestral species, once we understand the big bang we can trace our expanding universe back to its origin. Isaac Newton acknowledged the compositional nature of his scientific achievements: If I have seen further than others, it is by standing upon the shoulders of giants   The Two Strategies For Functional Decomposition of Computer Programs Private Methods When I was working on my undergraduate degree I was taught to functionally decompose problems by using private methods. Consider the problem of painting a house. The obvious solution is to solve the problem as a single unit: public void PaintAHouse() { // all the things required to paint a house ... } We decompose the problem by breaking it into parts: public void PaintAHouse() { PaintUndercoat(); PaintTopcoat(); } private void PaintUndercoat() { // everything required to paint the undercoat } private void PaintTopcoat() { // everything required to paint the topcoat } The problem can be recursively decomposed until a sufficiently granular level of detail is reached: public void PaintAHouse() { PaintUndercoat(); PaintTopcoat(); } private void PaintUndercoat() { prepareSurface(); fetchUndercoat(); paintUndercoat(); } private void PaintTopcoat() { fetchPaint(); paintTopcoat(); } According to Wikipedia, at least one computer programmer has referred to this process as “the art of subroutining”. The practical issues that I have encountered when using private methods for decomposition are: To preserve the top level API all of the steps must be private. This means that they can’t easily be tested. The private methods often have little cohesion except that they form part of the same solution. Decomposing to Classes The alternative is to decompose large problems into multiple classes, effectively using a class instead of each private method. The API delegates to related classes, so the API is not polluted by the sub-steps of the problem, and the steps can be easily tested because they are each in their own highly cohesive class. Additionally, I think that this technique facilitates better adherence to the Single Responsibility Principle, since each class can be decomposed until it has precisely one responsibility. Revisiting my previous example using class composition: public class HousePainter { private undercoatPainter = new UndercoatPainter(); private topcoatPainter = new TopcoatPainter(); public void PaintAHouse() { undercoatPainter.Paint(); topcoatPainter.Paint(); } } Summary When decomposing a problem there is more than one way to represent the sub-problems. 
Using private methods keeps the logic in one place and prevents a proliferation of classes (thereby following the four rules of simple design) but the class decomposition is more easily testable and more compatible with the Single Responsibility Principle.
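To make the testability claim concrete, the class-per-step decomposition pairs naturally with constructor injection, so each step can be tested on its own and HousePainter can be tested with fakes. This sketch extends the article's example with hypothetical interface names rather than quoting the original code:

    public interface IPainter
    {
        void Paint();
    }

    public class UndercoatPainter : IPainter
    {
        public void Paint() { /* everything required to paint the undercoat */ }
    }

    public class TopcoatPainter : IPainter
    {
        public void Paint() { /* everything required to paint the topcoat */ }
    }

    public class HousePainter
    {
        private readonly IPainter undercoatPainter;
        private readonly IPainter topcoatPainter;

        // Collaborators are injected, so a test can pass recording fakes
        // and assert that the undercoat is painted before the topcoat.
        public HousePainter(IPainter undercoatPainter, IPainter topcoatPainter)
        {
            this.undercoatPainter = undercoatPainter;
            this.topcoatPainter = topcoatPainter;
        }

        public void PaintAHouse()
        {
            undercoatPainter.Paint();
            topcoatPainter.Paint();
        }
    }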

    Read the article

  • lvm disappeared after disc replacement on raid10

    - by user142295
    here my problem: I am running ubuntu 12.04 on a raid10 (4 disks), on top of which I installed an lvm with two volume groups (one for /, one for /home). The layout of the disks are as follows: Disk /dev/sda: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0003f3b6 Device Boot Start End Blocks Id System /dev/sda1 * 63 481949 240943+ 83 Linux /dev/sda2 481950 2910640634 1455079342+ fd Linux raid autodetect /dev/sda3 2910640635 2930272064 9815715 82 Linux swap / Solaris Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00069785 Device Boot Start End Blocks Id System /dev/sdb1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdb2 2910158685 2930272064 10056690 82 Linux swap / Solaris Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdc1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdc2 2910158685 2930272064 10056690 82 Linux swap / Solaris Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000f14de Device Boot Start End Blocks Id System /dev/sdd1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdd2 2910158685 2930272064 10056690 82 Linux swap / Solaris The first disk (/dev/sda) contains the /boot partition on /dev/sda1. I use grub2 to boot the system off this partition. On top of this raid10 I installed two volume groups, one for /, one for /home. This system worked well, I even exchanged two disks during the last two years. It always worked. But not this time. For the first time, /dev/sda broke. I do not know if this is an issue – I know I would have struggled anyways to overcome the problem with /boot installed on that disk and grub2 installed on the mbr of /dev/sda. 
Anyways, I did what I always did: start knoppix fire up the raid sudo mdadm --examine -scan which returns ARRAY /dev/md127 UUID=0dbf4558:1a943464:132783e8:19cdff95 start it up sudo mdadm --assemble /dev/md127 fail the failing disk (smart event) sudo mdadm /dev/md127 --fail /dev/sda2 remove the failing disk sudo mdadm /dev/md127 --remove /dev/sda2 stop the raid sudo mdadm -S /dev/md127 take out the disk replace it with a new one create the same partitions as on the failling one add it to the raid sudo mdadm --assemble /dev/md127 sudo mdadm /dev/md127 --add /dev/sda2 wait 4 hours All looks fine: cat /proc/mdstat returns: Personalities : [raid10] md127 : active raid10 sda2[0] sdd1[3] sdc1[2] sdb1[1] 2910158464 blocks 64K chunks 2 near-copies [4/4] [UUUU] unused devices: <none> and sudo mdadm --detail /dev/md127 returns /dev/md127: Version : 0.90 Creation Time : Wed Jun 10 13:08:46 2009 Raid Level : raid10 Array Size : 2910158464 (2775.34 GiB 2980.00 GB) Used Dev Size : 1455079232 (1387.67 GiB 1490.00 GB) Raid Devices : 4 Total Devices : 4 Preferred Minor : 127 Persistence : Superblock is persistent Update Time : Thu Mar 21 16:27:40 2013 State : clean Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : near=2 Chunk Size : 64K UUID : 0dbf4558:1a943464:132783e8:19cdff95 (local to host Microknoppix) Events : 0.4824680 Number Major Minor RaidDevice State 0 8 2 0 active sync /dev/sda2 1 8 17 1 active sync /dev/sdb1 2 8 33 2 active sync /dev/sdc1 3 8 49 3 active sync /dev/sdd1 However, there is no trace of the volume groups. Rebooting into knoppix does not help Restarting the old system (I actually replugged and re-added the failing disk for that – the system begins to start, but then fails to see the / partition – no wonder if the volume group is gone) does not help. sudo vgscan, sudo vgdisplay, sudo lvs, sudo lvdisplay, sudo vgscan –mknodes all returned No volume groups found. I am completely at a loss. Can anyone tell me if and how I can recover my data? Thanks in advance!

    Read the article

  • What Counts For a DBA – Depth

    - by Louis Davidson
    SQL Server offers very simple interfaces to many of its features. Most people could open up SSMS, connect to a server, write a simple query and see the results. Even several of the core DBA tasks are deceptively straightforward. It doesn’t take a rocket scientist to perform a basic database backup or run a trace (even using the newfangled Extended Events!). However, appearances can be deceptive, and often times it is really important that a DBA understands not just the basics of how to perform a task, but why we do a task, and how that task works. As an analogy, consider a child walking into a darkened room. Most would know that they need to turn on the light, and how to do it, so they flick the switch. But what happens if light fails to shine forth. Most would immediately tell you that you need to consider changing the light bulb. So you hop in the car and take them to the local home store and instruct them to buy a replacement. Confronted with a 40 foot display of light bulbs, how will they decide which of the hundreds of types of bulbs, of different types, fittings, shapes, colors, power and efficiency ratings, is the right choice? Obviously the main lesson the child is going to learn this day is how to use their cell phone as a flashlight so they don’t have to ask for help the next time. Likewise, when the metaphorical toddlers who use your database server have issues, they will instinctively know something is wrong, and may even have some idea what caused it, but will have no depth of knowledge to figure out the right solution. That is where the DBA comes in and attempts to save the day. However, when one looks beneath the shiny UI, SQL Server has its own “40 foot display of light bulbs”, in the form of the tremendous number of tools and the often-bewildering amount of information they can present to the DBA, to help us find issues. Unfortunately, resorting to guesswork, to trying different “bulbs” over and over, hoping to stumble on the answer. This is where the right depth of knowledge goes a long way. If we need to write a SELECT statement, then knowing the syntax and where to find the data is not enough. Knowledge of indexes and query plans is essential. Without it, we might hit on a query that “works”, but we are basically still a user, not a programmer, because we have no real control over our platform. Is that level of knowledge deep enough? Probably not, since knowledge of the underlying metadata and structures would be very useful in helping us make sense of any query plan. Understanding the structure of an index makes the “key lookup” operator not sound like what you do when someone tapes your car key to the ceiling. So is even this level of understanding deep enough? Do we need to understand the memory architecture used to process the query? It might be a comforting level of knowledge, and will doubtless come in handy at some point, but is not strictly necessary in most cases. Beyond that lies (more or less) full knowledge of SQL language and the intricacies of every step the SQL Server engine takes to process our query. My personal theory is that, as a professional, our knowledge of a given task should extend, at a minimum, one level deeper than is strictly necessary to perform the task. Anything deeper can be left to the ridiculously smart, or obsessive, or both. As an example. tasked with storing an integer value between 0 and 99999999, it’s essential that I know that choosing an Integer over Decimal(8,0) will likely offer performance benefits. 
It is then useful that I also understand the value of adding a CHECK constraint, to make sure the values are valid to the desired range; and comforting that I know a little about the underlying processors, registers and computer math. Anything further, I leave to the likes of Joe Chang, whose recent blog post on the topic offers depth by the bucketful!  

    Read the article

  • Login via XDMCP not possible to 12.04 from Squeeze box

    - by Joysn
    Can anybody tell me whats wrong with my 12.04/lightdm/XDMCP setup? i activated X11 remote via tcp and XDMCP on lightdm and restarted lightdm: [SeatDefaults] user-session=ubuntu greeter-session=unity-greeter xserver-allow-tcp=true [XDMCPServer] enabled=true whenever i try to login to the remove lightdm via Xephyr Xephyr -query <remote ip> :10 i get the following trace: SELinux: Disabled on system, not enabling in X server [dix] Could not init font path element /usr/share/fonts/X11/cyrillic, removing from list! Ignoring device from udev. Ignoring device from udev. Ignoring device from udev. Ignoring device from udev. Ignoring device from udev. Ignoring device from udev. *** glibc detected *** Xephyr: free(): corrupted unsorted chunks: 0x08659088 *** ======= Backtrace: ========= /lib/i686/cmov/libc.so.6(+0x6af71)[0xb72b6f71] /lib/i686/cmov/libc.so.6(+0x6c7c8)[0xb72b87c8] /lib/i686/cmov/libc.so.6(cfree+0x6d)[0xb72bb8ad] Xephyr(Xfree+0x21)[0x81ddcd1] Xephyr(SrvXkbResizeKeyType+0x59e)[0x81ca01e] Xephyr[0x81ab8eb] Xephyr[0x81ac8c2] Xephyr[0x8088807] Xephyr[0x807c16a] /lib/i686/cmov/libc.so.6(__libc_start_main+0xe6)[0xb7262ca6] Xephyr[0x8061081] ======= Memory map: ======== 08048000-08204000 r-xp 00000000 fd:00 1148994 /usr/bin/Xephyr 08204000-08210000 rw-p 001bc000 fd:00 1148994 /usr/bin/Xephyr 08210000-08239000 rw-p 00000000 00:00 0 08496000-087ac000 rw-p 00000000 00:00 0 [heap] b5600000-b5621000 rw-p 00000000 00:00 0 b5621000-b5700000 ---p 00000000 00:00 0 b5762000-b588f000 rw-p 00000000 00:00 0 b588f000-b58b3000 r-xp 00000000 fd:00 50040 /usr/lib/libexpat.so.1.5.2 b58b3000-b58b5000 rw-p 00024000 fd:00 50040 /usr/lib/libexpat.so.1.5.2 b58d0000-b5ae4000 r-xp 00000000 fd:00 1212530 /usr/lib/dri/swrast_dri.so b5ae4000-b5ae9000 rw-p 00214000 fd:00 1212530 /usr/lib/dri/swrast_dri.so b5ae9000-b5af8000 rw-p 00000000 00:00 0 b5af8000-b5c24000 rw-s 00000000 00:04 1420132375 /SYSV00000000 (deleted) b5c24000-b5c28000 r-xp 00000000 fd:00 770089 /usr/lib/libXfixes.so.3.1.0 b5c28000-b5c29000 rw-p 00003000 fd:00 770089 /usr/lib/libXfixes.so.3.1.0 b5c29000-b5c31000 r-xp 00000000 fd:00 772923 /usr/lib/libXrender.so.1.3.0 b5c31000-b5c32000 rw-p 00007000 fd:00 772923 /usr/lib/libXrender.so.1.3.0 b5c32000-b5c3a000 r-xp 00000000 fd:00 49913 /usr/lib/libXcursor.so.1.0.2 b5c3a000-b5c3b000 rw-p 00007000 fd:00 49913 /usr/lib/libXcursor.so.1.0.2 b5c3b000-b5c58000 r-xp 00000000 09:02 466930 /lib/libgcc_s.so.1 b5c58000-b5c59000 rw-p 0001c000 09:02 466930 /lib/libgcc_s.so.1 b5c59000-b5c5b000 rw-p 00000000 00:00 0 b5c5b000-b7121000 r-xp 00000000 fd:00 49164 /usr/lib/libnvidia-glcore.so.256.53 b7121000-b7178000 rwxp 014c6000 fd:00 49164 /usr/lib/libnvidia-glcore.so.256.53 b7178000-b7188000 rwxp 00000000 00:00 0 b7188000-b7189000 r-xp 00000000 fd:00 49193 /usr/lib/tls/libnvidia-tls.so.256.53 b7189000-b718a000 rw-p 00000000 fd:00 49193 /usr/lib/tls/libnvidia-tls.so.256.53 b718a000-b718b000 rw-p 00000000 00:00 0 b718b000-b7190000 r-xp 00000000 fd:00 50172 /usr/lib/libfontenc.so.1.0.0 b7190000-b7191000 rw-p 00005000 fd:00 50172 /usr/lib/libfontenc.so.1.0.0 b7191000-b71a1000 r-xp 00000000 09:02 466849 /lib/libbz2.so.1.0.4 b71a1000-b71a2000 rw-p 00010000 09:02 466849 /lib/libbz2.so.1.0.4 b71a2000-b71b5000 r-xp 00000000 fd:00 52002 /usr/lib/libz.so.1.2.3.4 b71b5000-b71b6000 rw-p 00013000 fd:00 52002 /usr/lib/libz.so.1.2.3.4 b71b6000-b722a000 r-xp 00000000 fd:00 51162 /usr/lib/libfreetype.so.6.6.0 b722a000-b722e000 rw-p 00073000 fd:00 51162 /usr/lib/libfreetype.so.6.6.0 b722e000-b7246000 r-xp 00000000 fd:00 49652 
/usr/lib/libxcb.so.1.1.0 b7246000-b7247000 rw-p 00017000 fd:00 49652 /usr/lib/libxcb.so.1.1.0 b7247000-b7248000 rw-p 00000000 00:00 0 b7248000-b724b000 r-xp 00000000 fd:00 54148 /usr/lib/libgpg-error.so.0.4.0 b724b000-b724c000 rw-p 00002000 fd:00 54148 /usr/lib/libgpg-error.so.0.4.0 b724c000-b738c000 r-xp 00000000 09:02 467095 /lib/i686/cmov/libc-2.11.3.so

    Read the article

  • edited and reversed changes on .htaccess - site starts redirecting to .comindex.php/

    - by Aurigae
    Site is a Joomla 2.5 site. I wanted to add a non www to www redirect to the htaccess file, did so, then the redirection went mad, reversed but still the site redirects. When i click view site in admin panel, i get linked to http://domain.comindex.php/ The website is http://www.domain.com Visiting the website URL works without www, but once you click on projects it acts mad too. Projects is managed with joomshopping extension. EDIT: the redirect also happens when rewrite is deactivated in admin panel. ## # @package Joomla # @copyright Copyright (C) 2005 - 2012 Open Source Matters. All rights reserved. # @license GNU General Public License version 2 or later; see LICENSE.txt ## ## # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE! # # The line just below this section: 'Options +FollowSymLinks' may cause problems # with some server configurations. It is required for use of mod_rewrite, but may already # be set by your server administrator in a way that dissallows changing it in # your .htaccess file. If using it causes your server to error out, comment it out (add # to # beginning of line), reload your site in your browser and test your sef url's. If they work, # it has been set by your server administrator and you do not need it set here. ## ## Can be commented out if causes errors, see notes above. Options +FollowSymLinks ## Mod_rewrite in use. RewriteEngine On ## Begin - Rewrite rules to block out some common exploits. # If you experience problems on your site block out the operations listed below # This attempts to block the most common type of exploit `attempts` to Joomla! # # Block out any script trying to base64_encode data within the URL. RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR] # Block out any script that includes a <script> tag in URL. RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR] # Block out any script trying to set a PHP GLOBALS variable via URL. RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR] # Block out any script trying to modify a _REQUEST variable via URL. RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2}) # Return 403 Forbidden header and show the content of the root homepage RewriteRule .* index.php [F] # ## End - Rewrite rules to block out some common exploits. ## Begin - Custom redirects # # If you need to redirect some pages, or set a canonical non-www to # www redirect (or vice versa), place that code here. Ensure those # redirects use the correct RewriteRule syntax and the [R=301,L] flags. # ## End - Custom redirects ## # Uncomment following line if your webserver's URL # is not directly related to physical file paths. # Update Your Joomla! Directory (just / for root). ## # RewriteBase / ## Begin - Joomla! core SEF Section. 
# RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}] # # If the requested path and file is not /index.php and the request # has not already been internally rewritten to the index.php script RewriteCond %{REQUEST_URI} !^/index\.php # and the request is for something within the component folder, # or for the site root, or for an extensionless URL, or the # requested URL ends with one of the listed extensions RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC] # and the requested path and file doesn't directly match a physical file RewriteCond %{REQUEST_FILENAME} !-f # and the requested path and file doesn't directly match a physical folder RewriteCond %{REQUEST_FILENAME} !-d # internally rewrite the request to the index.php script RewriteRule .* index.php [L] # ## End - Joomla! core SEF Section. Redirect 301 /index.html /index.php Redirect 301 /services /project Redirect 301 /projects/projects.html /project Redirect 301 /projects/project1.html /project Redirect 301 /projects/project2.html /project Redirect 301 /projects /project Redirect 301 /keypersonnel.html /about-agrin/keystaff Redirect 301 /cooperation.htm /about-agrin/intcoop Redirect 301 /member.html /about-agrin/memberships Redirect 301 /contact.html /contacts Redirect 301 /hr.htm /jobs Redirect 301 /index.php/404 /index.php

    Read the article

  • "The connection has timed out" - Please help!

    - by gon
    I recently installed a fresh Ubuntu 12.04 LTS on a desktop, and the installation itself was successful (other than 'grub rescue' issue that I encountered but fixed) but this connection problem is really giving me a headache. Symptoms: 1. When I open the FireFox browser and try to connect to a website, it just hangs for a while saying "Connecting..." but eventually loads an error page "The connection has timed out". 2. It's not a browser problem (and I tried setting ipv6 thing to "true" at about:config) because running "sudo apt-get install [some-random-package]" at terminal fails ("E: Unable to locate package [package]") too. All other operations that need internet access are not working. 3. I certainly see a wired network (called "eth1") at the Network Manager, and it says "Connection Established" after disconnecting and then connecting again. I have tried almost everything that could be found from google search results still no luck. Their problems slightly differ from mine or the solutions just don't work. By the way it didn't have internet access when installing Ubuntu 12.04. (I ignored the message that I need internet to install Ubuntu) Could this be a problem? I'm sorry I don't remember if internet worked or not on the previous version of Ubuntu. :( I would really appreciate your help... I don't even know what more to do if this fails too.. Thanks!! Thanks for your comment. Here is the result of ifconfig: eth0 Link encap:Ethernet HWaddr 78:ac:c0:3d:b2:b9 inet addr:10.10.65.185 Bcast:10.10.65.255 Mask:255.255.255.0 inet6 addr: fe80::7aac:c0ff:fe3d:b2b9/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3907 errors:0 dropped:0 overruns:0 frame:0 TX packets:771 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:393118 (393.1 KB) TX bytes:73472 (73.4 KB) Interrupt:16 eth1 Link encap:Ethernet HWaddr 78:ac:c0:3d:b2:b8 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:17 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:4 errors:0 dropped:0 overruns:0 frame:0 TX packets:4 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:204 (204.0 B) TX bytes:204 (204.0 B) route -n: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 10.10.65.1 0.0.0.0 UG 0 0 0 eth0 10.10.65.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0 /etc/resolv.conf: # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN nameserver 8.8.8.8 nameserver 8.8.4.4 nameserver 10.81.1.8 nameserver 10.1.2.10 nameserver 127.0.0.1 search yamatake.local /etc/network/interfaces: auto lo iface lo inet loopback #auto eth0 #iface eth0 inet dhcp #auto eth1 #iface eth1 inet dhcp And I'll also include the result of 'sudo lshw -C network' in case it might help: *-network description: Ethernet interface product: NetXtreme BCM5764M Gigabit Ethernet PCIe vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 10 serial: 78:ac:c0:3d:b2:b9 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 
1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5764m-v3.35 ip=10.10.65.185 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:93 memory:fc000000-fc00ffff *-network description: Ethernet interface product: NetXtreme BCM5764M Gigabit Ethernet PCIe vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:01:00.0 logical name: eth1 version: 10 serial: 78:ac:c0:3d:b2:b8 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5764m-v3.35 latency=0 link=no multicast=yes port=twisted pair speed=100Mbit/s resources: irq:94 memory:fb000000-fb00ffff

    Read the article

  • 11gR2 RAC: how does ASM find its spfile?

    - by Liu Maclean(???)
    In 11gR2 RAC the OCR and the voting disks can be stored inside ASM, which is a change from 10g, where these two clusterware files had to live outside ASM (on raw devices or a cluster file system). On top of that, in 11gR2 the ASM spfile itself can be stored in an ASM diskgroup, which raises an obvious chicken-and-egg question: ASM needs its parameter file in order to start up and mount a diskgroup, yet that parameter file sits inside a diskgroup. So how does ASM get started at all?

A reader asked exactly this on t.askmaclean.com: "hello maclean, my spfile is ASMCMD> spget +CRSDG/rac/asmparameterfile/registry.253.787925627. ASM is an ORACLE instance too, so how can it read this spfile before any diskgroup is mounted? thanks!"

Here is the explanation. In 11.2 the Oracle Clusterware voting disk files are managed differently from 11.1 and 10.2: they are located through the CSS voting file discovery string recorded in the GPnP profile, and since in 11.2 the OCR and voting disks can be placed in ASM, that CSS discovery string simply points at ASM, i.e. it reuses the ASM discovery string.

In this environment the LUNs used by ASM are bound with udev and named /dev/rasm-disk*. Dump the GPnP profile with gpnptool get:

[grid@maclean1 trace]$ gpnptool get
Warning: some command line parameters were defaulted. Resulting command line: /g01/grid/app/11.2.0/grid/bin/gpnptool.bin get -o-
<?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="9" ClusterUId="452185be9cd14ff4ffdc7688ec5439bf" ClusterName="maclean-cluster" PALocation=""><gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" IP="192.168.1.0" Adapter="eth0" Use="public"/><gpnp:Network id="net2" IP="172.168.1.0" Adapter="eth1" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile><orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/rasm*" SPFile="+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933"/><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>L1SLg10AqGEauCQ4ne9quucITZA=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>rTyZm9vfcQCMuian6isnAThUmsV4xPoK2fteMc1l0GIvRvHncMwLQzPM/QrXCGGTCEvgvXzUPEKzmdX2oy5vLcztN60UHr6AJtA2JYYodmrsFwEyVBQ1D6wH+HQiOe2SG9UzdQnNtWSbjD4jfZkeQWyMPfWdKm071Ek0Rfb4nxE=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile>
Success.

Two entries matter here:

<orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/>  ==» the CSS voting disks are discovered through ASM
<orcl:ASM-Profile id="asm" DiscoveryString="/dev/rasm*" SPFile="+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933"/>  ==» the ASM disk discovery string is /dev/rasm*, and SPFile records the ASM alias of the ASM parameter file

So the GPnP profile only records an ASM alias for the parameter file, yet ASM manages to read that spfile before any diskgroup is mounted. Let's see where +SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933 is physically stored on the ASM disks:

[grid@maclean1 wallets]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.3.0 Production on Tue Jul 17 05:45:35 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> set linesize 140 pagesize 1400
col "FILE NAME" format a40
set head on
select NAME "FILE NAME", AU_KFFXP "AU NUMBER", NUMBER_KFFXP "FILE NUMBER", DISK_KFFXP "DISK NUMBER"
from x$kffxp, v$asm_alias
where GROUP_KFFXP = GROUP_NUMBER and NUMBER_KFFXP = FILE_NUMBER
and name in ('REGISTRY.253.788682933') order by DISK_KFFXP,AU_KFFXP;

FILE NAME                                 AU NUMBER FILE NUMBER DISK NUMBER
---------------------------------------- ---------- ----------- -----------
REGISTRY.253.788682933                           39         253           1
REGISTRY.253.788682933                           35         253           3
REGISTRY.253.788682933                           35         253           4

SQL> col path for a50
SQL> select disk_number,path from v$asm_disk where disk_number in (1,3,4) and GROUP_NUMBER=3;

DISK_NUMBER PATH
----------- --------------------------------------------------
          3 /dev/rasm-diske
          4 /dev/rasm-diskf
          1 /dev/rasm-diskc

The ASM spfile is mirrored three ways (the diskgroup uses high redundancy): AU 39 on /dev/rasm-diskc, AU 35 on /dev/rasm-diske and AU 35 on /dev/rasm-diskf. Now read the headers of those disks with kfed:

[grid@maclean1 wallets]$ kfed read /dev/rasm-diske|grep spfile
kfdhdb.spfile:                       35 ; 0x0f4: 0x00000023
[grid@maclean1 wallets]$ kfed read /dev/rasm-diskc|grep spfile
kfdhdb.spfile:                       39 ; 0x0f4: 0x00000027
[grid@maclean1 wallets]$ kfed read /dev/rasm-diskf|grep spfile
kfdhdb.spfile:                       35 ; 0x0f4: 0x00000023

The kfdhdb.spfile field in each ASM disk header records the AU number where that disk's copy of the ASM spfile lives. At startup ASM scans the devices that match the DiscoveryString in the GPnP profile, reads kfdhdb.spfile from the disk headers it finds, and can therefore load its spfile straight from those allocation units without mounting any diskgroup first. That is how the chicken-and-egg problem is resolved.

    Read the article

  • How does the new Adaptive Execution Plans feature work in the 12c database?

    - by Liu Maclean
    Oracle Database 12c R1 introduces a new SQL optimization feature, Adaptive Execution Plans. It gives the optimizer the ability to adjust an execution plan at run time: if the rows actually produced during execution differ significantly from the estimates the plan was based on, part of the plan can be replaced on the fly instead of letting a bad plan run to completion. Traditionally the plan is fixed at parse time from the available statistics, and when those statistics are missing or stale, or the predicates are too complex to evaluate accurately, the optimizer has no way to correct itself during execution. An adaptive plan addresses this by shipping several pre-computed subplans along with the default plan and by inserting statistics collectors into the plan; the collectors buffer a portion of the rows flowing through them, and the observed row counts decide which subplan is finally used.

Adaptive execution plans take two forms. Dynamic plans: the final plan is chosen during the first execution - the optimizer pre-computes the candidate subplans, and the statistics collectors embedded in the plan decide between them while the statement runs. Re-optimization: unlike a dynamic plan, re-optimization does not change the current execution; the statistics gathered while the statement runs are used to re-optimize it for subsequent executions. The initialization parameter OPTIMIZER_ADAPTIVE_REPORTING_ONLY enables report-only mode: when it is set to TRUE, all the information needed for adaptive execution is still collected, but the plan is not changed - the default plan is used and the adaptive plan is only reported.

With a dynamic plan, the plan originally produced by the optimizer is the default plan and the plan settled on at run time is the final plan. If the two are the same there is essentially no extra cost; if they differ, the chosen subplan replaces the corresponding part of the default plan. During the first execution the statistics collector buffers rows until the choice can be made; once the final plan is fixed, the collector stops buffering and later executions of the cursor no longer pay that cost. The new column V$SQL.IS_RESOLVED_DYNAMIC_PLAN shows whether the plan of a child cursor is the resolved final plan rather than the default plan. The example below demonstrates a dynamic plan and also shows the SQL plan directives created along the way. First drop any existing SQL plan directives and look at the default plan:

declare
  cursor PLAN_DIRECTIVE_IDS is select directive_id from DBA_SQL_PLAN_DIRECTIVES;
begin
  for z in PLAN_DIRECTIVE_IDS loop
    DBMS_SPD.DROP_SQL_PLAN_DIRECTIVE(z.directive_id);
  end loop;
end;
/

explain plan for select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p where o.unit_price=15 and quantity>1 and p.product_id=o.product_id;

select * from table(dbms_xplan.display());

Plan hash value: 1255158658
-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 4 | 128 | 7 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 4 | 128 | 7 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | ORDER_ITEMS | 4 | 48 | 3 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN | PRODUCT_INFORMATION_PK | 1 | | 0 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| PRODUCT_INFORMATION | 1 | 20 | 1 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("O"."UNIT_PRICE"=15 AND "QUANTITY">1)
4 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")

Now enable the optimizer trace and run the statement:

alter session set events '10053 trace name context forever,level 1';
OR
alter session set events 'trace[SQL_Plan_Directive] disk highest';

select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p where o.unit_price=15 and quantity>1 and p.product_id=o.product_id;

---------------------------------------------------------------+-----------------------------------+
| Id | Operation |
Name | Rows | Bytes | Cost | Time | ---------------------------------------------------------------+-----------------------------------+ | 0 | SELECT STATEMENT | | | | 7 | | | 1 | HASH JOIN | | 4 | 128 | 7 | 00:00:01 | | 2 | NESTED LOOPS | | | | | | | 3 | NESTED LOOPS | | 4 | 128 | 7 | 00:00:01 | | 4 | STATISTICS COLLECTOR | | | | | | | 5 | TABLE ACCESS FULL | ORDER_ITEMS | 4 | 48 | 3 | 00:00:01 | | 6 | INDEX UNIQUE SCAN | PRODUCT_INFORMATION_PK| 1 | | 0 | | | 7 | TABLE ACCESS BY INDEX ROWID | PRODUCT_INFORMATION | 1 | 20 | 1 | 00:00:01 | | 8 | TABLE ACCESS FULL | PRODUCT_INFORMATION | 1 | 20 | 1 | 00:00:01 | ---------------------------------------------------------------+-----------------------------------+ Predicate Information: ---------------------- 1 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID") 5 - filter(("O"."UNIT_PRICE"=15 AND "QUANTITY">1)) 6 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID") ===================================== SPD: BEGIN context at statement level ===================================== Stmt: ******* UNPARSED QUERY IS ******* SELECT /*+ OPT_ESTIMATE (@"SEL$1" JOIN ("P"@"SEL$1" "O"@"SEL$1") ROWS=13.000000 ) OPT_ESTIMATE (@"SEL$1" TABLE "O"@"SEL$1" ROWS=13.000000 ) */ "P"."PRODUCT_NAME" "PRODUCT_NAME" FROM "OE"."ORDER_ITEMS" "O","OE"."PRODUCT_INFORMATION" "P" WHERE "O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1 AND "P"."PRODUCT_ID"="O"."PRODUCT_ID" Objects referenced in the statement PRODUCT_INFORMATION[P] 92194, type = 1 ORDER_ITEMS[O] 92197, type = 1 Objects in the hash table Hash table Object 92197, type = 1, ownerid = 6573730143572393221: No Dynamic Sampling Directives for the object Hash table Object 92194, type = 1, ownerid = 17822962561575639002: No Dynamic Sampling Directives for the object Return code in qosdInitDirCtx: ENBLD =================================== SPD: END context at statement level =================================== ======================================= SPD: BEGIN context at query block level ======================================= Query Block SEL$1 (#0) Return code in qosdSetupDirCtx4QB: NOCTX ===================================== SPD: END context at query block level ===================================== SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92197, objtyp = 1, vecsize = 6, colvec = [4, 5, ], fid = 2896834833840853267 SPD: Inserted felem, fid=2896834833840853267, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = YES, keep = YES SPD: qosdCreateFindingSingTab retCode = CREATED, fid = 2896834833840853267 SPD: qosdCreateDirCmp retCode = CREATED, fid = 2896834833840853267 SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = JOIN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SKIP_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = JOIN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = 
INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92197, objtyp = 1, vecsize = 6, colvec = [4, 5, ], fid = 2896834833840853267 SPD: Modified felem, fid=2896834833840853267, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = YES, keep = YES SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92194, objtyp = 1, vecsize = 2, colvec = [1, ], fid = 5618517328604016300 SPD: Modified felem, fid=5618517328604016300, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92194, objtyp = 1, vecsize = 2, colvec = [1, ], fid = 1142802697078608149 SPD: Modified felem, fid=1142802697078608149, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO SPD: Generating finding id: type = 1, reason = 2, objcnt = 2, obItr = 0, objid = 92194, objtyp = 1, vecsize = 0, obItr = 1, objid = 92197, objtyp = 1, vecsize = 0, fid = 1437680122701058051 SPD: Modified felem, fid=1437680122701058051, ftype = 1, freason = 2, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO select * from table(dbms_xplan.display_cursor(format=>'report')) ; ????report????adaptive plan Adaptive plan: ------------- This cursor has an adaptive plan, but adaptive plans are enabled for reporting mode only.  The plan that would be executed if adaptive plans were enabled is displayed below. 
------------------------------------------------------------------------------------------
| Id  | Operation          | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                     |       |       |     7 (100)|          |
|*  1 |  HASH JOIN         |                     |     4 |   128 |     7   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| ORDER_ITEMS         |     4 |    48 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| PRODUCT_INFORMATION |     1 |    20 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

SQL> select SQL_ID,IS_RESOLVED_DYNAMIC_PLAN,sql_text from v$SQL WHERE SQL_TEXT like '%MALCEAN%' and sql_text not like '%like%';

SQL_ID IS
-------------------------- --
SQL_TEXT
--------------------------------------------------------------------------------
6ydj1bn1bng17 Y
select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p where o.unit_price=15 and quantity>1 and p.product_id=o.product_id

Note that explain plan for only ever shows the default plan, while at run time the optimizer resolved the final plan; V$SQL.IS_RESOLVED_DYNAMIC_PLAN is Y for this cursor, confirming that the plan actually used is the resolved final plan. The SQL plan directives created automatically along the way are exposed through DBA_SQL_PLAN_DIRECTIVES. New in 12c, the directives recorded in the SGA are periodically flushed to disk by the MMON background process, and they can also be flushed manually with DBMS_SPD.flush_sql_plan_directive:

select directive_id,type,reason from DBA_SQL_PLAN_DIRECTIVES
/

DIRECTIVE_ID TYPE REASON
----------------------------------- -------------------------------- -----------------------------
10321283028317893030 DYNAMIC_SAMPLING JOIN CARDINALITY MISESTIMATE
4757086536465754886 DYNAMIC_SAMPLING JOIN CARDINALITY MISESTIMATE
16085268038103121260 DYNAMIC_SAMPLING JOIN CARDINALITY MISESTIMATE

SQL> set pages 9999
SQL> set lines 300
SQL> col state format a5
SQL> col subobject_name format a11
SQL> col col_name format a11
SQL> col object_name format a13
SQL> select d.directive_id, o.object_type, o.object_name, o.subobject_name col_name, d.type, d.state, d.reason
  2  from dba_sql_plan_directives d, dba_sql_plan_dir_objects o
  3  where d.DIRECTIVE_ID=o.DIRECTIVE_ID
  4  and o.object_name in ('ORDER_ITEMS')
  5  order by d.directive_id;

DIRECTIVE_ID OBJECT_TYPE OBJECT_NAME COL_NAME TYPE STATE REASON
------------ ------------ ------------- ----------- -------------------------------- ----- -------------------------------------
1.8156E+19 COLUMN ORDER_ITEMS UNIT_PRICE DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE
1.8156E+19 TABLE ORDER_ITEMS DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE
1.8156E+19 COLUMN ORDER_ITEMS QUANTITY DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE

DBA_SQL_PLAN_DIRECTIVES is built on the internal _BASE_OPT_DIRECTIVE and _BASE_OPT_FINDING views, which in turn query sys.opt_directive$:

SELECT d.dir_own#, d.dir_id, d.f_id,
       decode(type, 1, 'DYNAMIC_SAMPLING', 'UNKNOWN'),
       decode(state, 1, 'NEW', 2, 'MISSING_STATS', 3, 'HAS_STATS', 4, 'CANDIDATE', 5, 'PERMANENT', 6, 'DISABLED', 'UNKNOWN'),
       decode(bitand(flags, 1), 1, 'YES', 'NO'),
       cast(d.created as timestamp),
       cast(d.last_modified as timestamp),
       -- Please see QOSD_DAYS_TO_UPDATE and QOSD_PLUS_SECONDS for more details -- about 6.5
       cast(d.last_used as timestamp) - NUMTODSINTERVAL(6.5, 'day')
FROM sys.opt_directive$ d

SQL plan directives are managed with the DBMS_SPD package, and their default retention is 53 weeks. From the package documentation:

Package: DBMS_SPD
This package provides subprograms for managing Sql Plan Directives(SPD). SPD are objects generated automatically by Oracle server.
For example, if server detects that the single table cardinality estimated by optimizer is off from the actual number of rows returned when accessing the table, it will automatically create a directive to do dynamic sampling for the table. When any Sql statement referencing the table is compiled, optimizer will perform dynamic sampling for the table to get more accurate estimate.

Notes: DBMS_SPD is an invoker-rights package. The invoker requires ADMINISTER SQL MANAGEMENT OBJECT privilege for executing most of the subprograms of this package. Also the subprograms commit the current transaction (if any), perform the operation and commit it again. DBA view dba_sql_plan_directives shows all the directives created in the system and the view dba_sql_plan_dir_objects displays the objects that are included in the directives.

-- Default value for SPD_RETENTION_WEEKS
SPD_RETENTION_WEEKS_DEFAULT CONSTANT varchar2(4) := '53';

| STATE     : NEW           : Newly created directive.
|           : MISSING_STATS : The directive objects do not have relevant stats.
|           : HAS_STATS     : The objects have stats.
|           : PERMANENT     : A permanent directive. Server evaluated effectiveness and these directives are useful.
|
| AUTO_DROP : YES           : Directive will be dropped automatically if not used for SPD_RETENTION_WEEKS. This is the default behavior.
|           : NO            : Directive will not be dropped automatically.

Procedure: flush_sql_plan_directive
This procedure allows manually flushing the Sql Plan directives that are automatically recorded in SGA memory while executing sql statements. The information recorded in SGA are periodically flushed by oracle background processes. This procedure just provides a way to flush the information manually.

The hidden parameter "_optimizer_dynamic_plans" (enable dynamic plans) controls the feature and defaults to TRUE; setting it to FALSE disables dynamic plans. The typical case a dynamic plan handles today is the choice between a Nested Loop and a Hash Join: the optimizer pre-computes both subplans, and during the first pass the STATISTICS COLLECTOR placed in the plan buffers rows and compares the observed cardinality with an inflection point - a small row count keeps the Nested Loop, a large one switches to the Hash Join - and the chosen join method also determines the access path used by the subplan. For example, when joining to the customers table, the Hash Join subplan full-scans customers, whereas the Nested Loop subplan accesses it through an index range scan on cust_id followed by table access by rowid.

Cardinality feedback. Cardinality feedback was introduced in 11.2 and is a form of re-optimization for statements whose estimates are likely to be poor, for example when a table's statistics are missing or stale, or when a predicate is too complex for the optimizer to compute an accurate selectivity (multiple correlated filter conditions on one table, or complex operators in the predicate). It works in two steps: at the first hard parse the optimizer decides, based on what it knows about the statistics and predicates, whether to monitor the execution; at the end of that execution the actual cardinalities are compared with the estimates, and if they differ significantly the actual values are stored and used to re-optimize the statement on its next execution. The feedback information lives with the cursor, so it is lost once the cursor is aged out of the shared pool.
SELECT /*+ gather_plan_statistics */ product_name FROM order_items o, product_information p WHERE o.unit_price = 15 AND quantity > 1 AND p.product_id = o.product_id

Plan hash value: 1553478007
----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 13 |00:00:00.01 | 24 | 20 | | | |
|* 1 | HASH JOIN | | 1 | 4 | 13 |00:00:00.01 | 24 | 20 | 2061K| 2061K| 429K (0)|
|* 2 | TABLE ACCESS FULL| ORDER_ITEMS | 1 | 4 | 13 |00:00:00.01 | 7 | 6 | | | |
| 3 | TABLE ACCESS FULL| PRODUCT_INFORMATION | 1 | 1 | 288 |00:00:00.01 | 17 | 14 | | | |
----------------------------------------------------------------------------------------------------------------------------------------

SELECT /*+ gather_plan_statistics */ product_name FROM order_items o, product_information p WHERE o.unit_price = 15 AND quantity > 1 AND p.product_id = o.product_id

Plan hash value: 1553478007
-------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 13 |00:00:00.01 | 24 | | | |
|* 1 | HASH JOIN | | 1 | 13 | 13 |00:00:00.01 | 24 | 2061K| 2061K| 413K (0)|
|* 2 | TABLE ACCESS FULL| ORDER_ITEMS | 1 | 13 | 13 |00:00:00.01 | 7 | | | |
| 3 | TABLE ACCESS FULL| PRODUCT_INFORMATION | 1 | 288 | 288 |00:00:00.01 | 17 | | | |
-------------------------------------------------------------------------------------------------------------------------------

Note
-----
- statistics feedback used for this statement

SQL> select count(*) from v$SQL where SQL_ID='cz0hg2zkvd10y';

COUNT(*)
----------
2

SQL> select sql_ID,USE_FEEDBACK_STATS FROM V$SQL_SHARED_CURSOR where USE_FEEDBACK_STATS ='Y';

SQL_ID U
------------- -
cz0hg2zkvd10y Y

This is a typical cardinality feedback case: on the first execution the estimates are badly off (order_items is estimated at 4 rows but actually returns 13, product_information at 1 row but returns 288), so the actual cardinalities are fed back and the statement is re-optimized. The two executions happen to produce the same plan hash value (the plan shape did not change, only the estimates), but there are two child cursors, and thanks to the gather_plan_statistics hint the actual rows (A-Rows) can be compared directly with the estimates (E-Rows); V$SQL_SHARED_CURSOR.USE_FEEDBACK_STATS shows why the second child cursor was created.

Automatic Re-optimization. Unlike a dynamic plan, re-optimization changes the plan only for executions that follow the current one. During the first execution the optimizer monitors the statement and compares the run-time statistics gathered by the statistics collectors with its original estimates; if they differ significantly, it stores the corrected statistics and marks the cursor for re-optimization, so the next parse uses the better information. Re-optimization covers decisions that a dynamic plan cannot adapt in mid-flight - for instance the overall join order - because changing them requires re-optimizing the whole statement, and a statement may be re-optimized several times, each round learning a little more and further improving the plan. In Oracle Database 12c the statistics gathered at run time also include join statistics, which lets re-optimization correct complex join misestimates, and for statements with bind variables it works together with adaptive cursor sharing. The new V$SQL column IS_REOPTIMIZABLE indicates whether the next execution of a child cursor will trigger re-optimization, or whether the cursor only carries reporting-mode information. The documentation describes the column as follows:
IS_REOPTIMIZABLE VARCHAR2(1) This columns shows whether the next execution matching this child cursor will trigger a reoptimization. The values are:   Y: If the next execution will trigger a reoptimization R: If the child cursor contains reoptimization information, but will not trigger reoptimization because the cursor was compiled in reporting mode N: If the child cursor has no reoptimization information ??1: select plan_table_output from table (dbms_xplan.display_cursor('gwf99gfnm0t7g',NULL,'ALLSTATS LAST')); SQL_ID  gwf99gfnm0t7g, child number 0 ------------------------------------- SELECT /*+ SFTEST gather_plan_statistics */ o.order_id, v.product_name FROM  orders o,   ( SELECT order_id, product_name FROM order_items o, product_information p     WHERE  p.product_id = o.product_id AND list_price < 50 AND min_price < 40  ) v WHERE o.order_id = v.order_id Plan hash value: 1906736282 ------------------------------------------------------------------------------------------------------------------------------------------- | Id  | Operation             | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem | ------------------------------------------------------------------------------------------------------------------------------------------- |   0 | SELECT STATEMENT      |                     |      1 |        |    269 |00:00:00.02 |    1336 |     18 |       |       |          | |   1 |  NESTED LOOPS         |                     |      1 |      1 |    269 |00:00:00.02 |    1336 |     18 |       |       |          | |   2 |   MERGE JOIN CARTESIAN|                     |      1 |      4 |   9135 |00:00:00.02 |      34 |     15 |       |       |          | |*  3 |    TABLE ACCESS FULL  | PRODUCT_INFORMATION |      1 |      1 |     87 |00:00:00.01 |      33 |     14 |       |       |          | |   4 |    BUFFER SORT        |                     |     87 |    105 |   9135 |00:00:00.01 |       1 |      1 |  4096 |  4096 | 4096  (0)| |   5 |     INDEX FULL SCAN   | ORDER_PK            |      1 |    105 |    105 |00:00:00.01 |       1 |      1 |       |       |          | |*  6 |   INDEX UNIQUE SCAN   | ORDER_ITEMS_UK      |   9135 |      1 |    269 |00:00:00.01 |    1302 |      3 |       |       |          | ------------------------------------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): ---------------------------------------------------    3 - filter(("MIN_PRICE"<40 AND "LIST_PRICE"<50))    6 - access("O"."ORDER_ID"="ORDER_ID" AND "P"."PRODUCT_ID"="O"."PRODUCT_ID") SQL_ID  gwf99gfnm0t7g, child number 1 ------------------------------------- SELECT /*+ SFTEST gather_plan_statistics */ o.order_id, v.product_name FROM  orders o,   ( SELECT order_id, product_name FROM order_items o, product_information p     WHERE  p.product_id = o.product_id AND list_price < 50 AND min_price < 40  ) v WHERE o.order_id = v.order_id Plan hash value: 35479787 -------------------------------------------------------------------------------------------------------------------------------------------- | Id  | Operation              | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem | -------------------------------------------------------------------------------------------------------------------------------------------- |   0 | SELECT STATEMENT       |                     |      1 |        |    269 
|00:00:00.01 |      63 |      3 |       |       |          | |   1 |  NESTED LOOPS          |                     |      1 |    269 |    269 |00:00:00.01 |      63 |      3 |       |       |          | |*  2 |   HASH JOIN            |                     |      1 |    313 |    269 |00:00:00.01 |      42 |      3 |  1321K|  1321K| 1234K (0)| |*  3 |    TABLE ACCESS FULL   | PRODUCT_INFORMATION |      1 |     87 |     87 |00:00:00.01 |      16 |      0 |       |       |          | |   4 |    INDEX FAST FULL SCAN| ORDER_ITEMS_UK      |      1 |    665 |    665 |00:00:00.01 |      26 |      3 |       |       |          | |*  5 |   INDEX UNIQUE SCAN    | ORDER_PK            |    269 |      1 |    269 |00:00:00.01 |      21 |      0 |       |       |          | -------------------------------------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): ---------------------------------------------------    2 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")    3 - filter(("MIN_PRICE"<40 AND "LIST_PRICE"<50))    5 - access("O"."ORDER_ID"="ORDER_ID") Note -----    - statistics feedback used for this statement    SQL> select IS_REOPTIMIZABLE,child_number FROM V$SQL  A where A.SQL_ID='gwf99gfnm0t7g'; IS CHILD_NUMBER -- ------------ Y             0 N             1    1* select child_number,other_xml From v$SQL_PLAN  where SQL_ID='gwf99gfnm0t7g' and other_xml is not nul SQL> / CHILD_NUMBER OTHER_XML ------------ --------------------------------------------------------------------------------            1 <other_xml><info type="cardinality_feedback">yes</info><info type="db_version">1              2.1.0.1</info><info type="parse_schema"><![CDATA["OE"]]></info><info type="plan_              hash">35479787</info><info type="plan_hash_2">3382491761</info><outline_data><hi              nt><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]></hint><hint><![CDATA[OPTIMIZER_FEATUR              ES_ENABLE('12.1.0.1')]]></hint><hint><![CDATA[DB_VERSION('12.1.0.1')]]></hint><h              int><![CDATA[ALL_ROWS]]></hint><hint><![CDATA[OUTLINE_LEAF(@"SEL$F5BB74E1")]]></              hint><hint><![CDATA[MERGE(@"SEL$2")]]></hint><hint><![CDATA[OUTLINE(@"SEL$1")]]>              </hint><hint><![CDATA[OUTLINE(@"SEL$2")]]></hint><hint><![CDATA[FULL(@"SEL$F5BB7              4E1" "P"@"SEL$2")]]></hint><hint><![CDATA[INDEX_FFS(@"SEL$F5BB74E1" "O"@"SEL$2"              ("ORDER_ITEMS"."ORDER_ID" "ORDER_ITEMS"."PRODUCT_ID"))]]></hint><hint><![CDATA[I              NDEX(@"SEL$F5BB74E1" "O"@"SEL$1" ("ORDERS"."ORDER_ID"))]]></hint><hint><![CDATA[              LEADING(@"SEL$F5BB74E1" "P"@"SEL$2" "O"@"SEL$2" "O"@"SEL$1")]]></hint><hint><![C              DATA[USE_HASH(@"SEL$F5BB74E1" "O"@"SEL$2")]]></hint><hint><![CDATA[USE_NL(@"SEL$              F5BB74E1" "O"@"SEL$1")]]></hint></outline_data></other_xml>            0 <other_xml><info type="db_version">12.1.0.1</info><info type="parse_schema"><![C              DATA["OE"]]></info><info type="plan_hash">1906736282</info><info type="plan_hash              _2">2579473118</info><outline_data><hint><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]>              </hint><hint><![CDATA[OPTIMIZER_FEATURES_ENABLE('12.1.0.1')]]></hint><hint><![CD              ATA[DB_VERSION('12.1.0.1')]]></hint><hint><![CDATA[ALL_ROWS]]></hint><hint><![CD              ATA[OUTLINE_LEAF(@"SEL$F5BB74E1")]]></hint><hint><![CDATA[MERGE(@"SEL$2")]]></hi              nt><hint><![CDATA[OUTLINE(@"SEL$1")]]></hint><hint><![CDATA[OUTLINE(@"SEL$2")]]>     
         </hint><hint><![CDATA[FULL(@"SEL$F5BB74E1" "P"@"SEL$2")]]></hint><hint><![CDATA[              INDEX(@"SEL$F5BB74E1" "O"@"SEL$1" ("ORDERS"."ORDER_ID"))]]></hint><hint><![CDATA              [INDEX(@"SEL$F5BB74E1" "O"@"SEL$2" ("ORDER_ITEMS"."ORDER_ID" "ORDER_ITEMS"."PROD              UCT_ID"))]]></hint><hint><![CDATA[LEADING(@"SEL$F5BB74E1" "P"@"SEL$2" "O"@"SEL$1              " "O"@"SEL$2")]]></hint><hint><![CDATA[USE_MERGE_CARTESIAN(@"SEL$F5BB74E1" "O"@"              SEL$1")]]></hint><hint><![CDATA[USE_NL(@"SEL$F5BB74E1" "O"@"SEL$2")]]></hint></o              utline_data></other_xml> ??2: SELECT /*+gather_plan_statistics*/ * FROM customers WHERE cust_state_province='CA' AND country_id='US'; SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST')); PLAN_TABLE_OUTPUT ------------------------------------- SQL_ID b74nw722wjvy3, child number 0 ------------------------------------- select /*+gather_plan_statistics*/ * from customers where CUST_STATE_PROVINCE='CA' and country_id='US' Plan hash value: 1683234692 -------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | -------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 29 |00:00:00.01 | 17 | 14 | |* 1 | TABLE ACCESS FULL| CUSTOMERS | 1 | 8 | 29 |00:00:00.01 | 17 | 14 | -------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"='US')) SELECT SQL_ID, CHILD_NUMBER, SQL_TEXT, IS_REOPTIMIZABLE FROM V$SQL WHERE SQL_TEXT LIKE 'SELECT /*+gather_plan_statistics*/%'; SQL_ID CHILD_NUMBER SQL_TEXT I ------------- ------------ ----------- - b74nw722wjvy3 0 select /*+g Y ather_plan_ statistics* / * from cu stomers whe re CUST_STA TE_PROVINCE ='CA' and c ountry_id=' US' EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE; SELECT TO_CHAR(d.DIRECTIVE_ID) dir_id, o.OWNER, o.OBJECT_NAME, o.SUBOBJECT_NAME col_name, o.OBJECT_TYPE, d.TYPE, d.STATE, d.REASON FROM DBA_SQL_PLAN_DIRECTIVES d, DBA_SQL_PLAN_DIR_OBJECTS o WHERE d.DIRECTIVE_ID=o.DIRECTIVE_ID AND o.OWNER IN ('SH') ORDER BY 1,2,3,4,5; DIR_ID OWNER OBJECT_NAME COL_NAME OBJECT TYPE STATE REASON ----------------------- ----- ------------- ----------- ------ ---------------- ----- ------------------------ 1484026771529551585 SH CUSTOMERS COUNTRY_ID COLUMN DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE 1484026771529551585 SH CUSTOMERS CUST_STATE_ COLUMN DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY PROVINCE MISESTIMATE 1484026771529551585 SH CUSTOMERS TABLE DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE SELECT /*+gather_plan_statistics*/ * FROM customers WHERE cust_state_province='CA' AND country_id='US'; ELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST')); PLAN_TABLE_OUTPUT ------------------------------------- SQL_ID b74nw722wjvy3, child number 1 ------------------------------------- select /*+gather_plan_statistics*/ * from customers where CUST_STATE_PROVINCE='CA' and country_id='US' Plan hash value: 1683234692 ----------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | ----------------------------------------------------------------------------------------- | 0 | SELECT 
STATEMENT | | 1 | | 29 |00:00:00.01 | 17 | |* 1 | TABLE ACCESS FULL| CUSTOMERS | 1 | 29 | 29 |00:00:00.01 | 17 | ----------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"='US')) Note ----- - cardinality feedback used for this statement SELECT SQL_ID, CHILD_NUMBER, SQL_TEXT, IS_REOPTIMIZABLE FROM V$SQL WHERE SQL_TEXT LIKE 'SELECT /*+gather_plan_statistics*/%'; SQL_ID CHILD_NUMBER SQL_TEXT I ------------- ------------ ----------- - b74nw722wjvy3 0 select /*+g Y ather_plan_ statistics* / * from cu stomers whe re CUST_STA TE_PROVINCE ='CA' and c ountry_id=' US' b74nw722wjvy3 1 select /*+g N ather_plan_ statistics* / * from cu stomers whe re CUST_STA TE_PROVINCE ='CA' and c ountry_id=' US' SELECT /*+gather_plan_statistics*/ CUST_EMAIL FROM CUSTOMERS WHERE CUST_STATE_PROVINCE='MA' AND COUNTRY_ID='US'; SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST')); PLAN_TABLE_OUTPUT ------------------------------------- SQL_ID 3tk6hj3nkcs2u, child number 0 ------------------------------------- Select /*+gather_plan_statistics*/ cust_email From customers Where cust_state_province='MA' And country_id='US' Plan hash value: 1683234692 ------------------------------------------------------------------------------- |Id | Operation | Name | Starts|E-Rows|A-Rows| A-Time |Buffers| ------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 2 |00:00:00.01| 16 | |*1 | TABLE ACCESS FULL| CUSTOMERS | 1 | 2| 2 |00:00:00.01| 16 | ----------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("CUST_STATE_PROVINCE"='MA' AND "COUNTRY_ID"='US')) Note ----- - dynamic sampling used for this statement (level=2) - 1 Sql Plan Directive used for this statement EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE; SELECT TO_CHAR(d.DIRECTIVE_ID) dir_id, o.OWNER, o.OBJECT_NAME, o.SUBOBJECT_NAME col_name, o.OBJECT_TYPE, d.TYPE, d.STATE, d.REASON FROM DBA_SQL_PLAN_DIRECTIVES d, DBA_SQL_PLAN_DIR_OBJECTS o WHERE d.DIRECTIVE_ID=o.DIRECTIVE_ID AND o.OWNER IN ('SH') ORDER BY 1,2,3,4,5; DIR_ID OW OBJECT_NA COL_NAME OBJECT TYPE STATE REASON ------------------- -- --------- ---------- ------- --------------- ------------- ------------------------ 1484026771529551585 SH CUSTOMERS COUNTRY_ID COLUMN DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY MISESTIMATE 1484026771529551585 SH CUSTOMERS CUST_STATE_ COLUMN DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY PROVINCE MISESTIMATE 1484026771529551585 SH CUSTOMERS TABLE DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY MISESTIMATE

    Read the article

  • Lots of first chance Microsoft.CSharp.RuntimeBinderExceptions thrown when dealing with dynamics

    - by Orion Edwards
    I've got a standard 'dynamic dictionary' type class in C# - class Bucket : DynamicObject { readonly Dictionary<string, object> m_dict = new Dictionary<string, object>(); public override bool TrySetMember(SetMemberBinder binder, object value) { m_dict[binder.Name] = value; return true; } public override bool TryGetMember(GetMemberBinder binder, out object result) { return m_dict.TryGetValue(binder.Name, out result); } } Now I call it, as follows: static void Main(string[] args) { dynamic d = new Bucket(); d.Name = "Orion"; // 2 RuntimeBinderExceptions Console.WriteLine(d.Name); // 2 RuntimeBinderExceptions } The app does what you'd expect it to, but the debug output looks like this: A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll 'ScratchConsoleApplication.vshost.exe' (Managed (v4.0.30319)): Loaded 'Anonymously Hosted DynamicMethods Assembly' A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll Any attempt to access a dynamic member seems to output a RuntimeBinderException to the debug logs. While I'm aware that first-chance exceptions are not a problem in and of themselves, this does cause some problems for me: I often have the debugger set to "break on exceptions", as I'm writing WPF apps, and otherwise all exceptions end up getting converted to a DispatcherUnhandledException, and all the actual information you want is lost. WPF sucks like that. As soon as I hit any code that's using dynamic, the debug output log becomes fairly useless. All the useful trace lines that I care about get hidden amongst all the useless RuntimeBinderExceptions Is there any way I can turn this off, or is the RuntimeBinder unfortunately just built like that? Thanks, Orion
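    If the goal is mainly to keep your own diagnostics readable, one option is to filter the noise yourself instead of relying on the debugger output window. Below is a minimal C# sketch - not from the original post, and the class name is mine - using the AppDomain.FirstChanceException event available since .NET 4.0; it logs every first-chance exception except the binder's:

    using System;
    using System.Runtime.ExceptionServices;
    using Microsoft.CSharp.RuntimeBinder;

    static class FirstChanceLogging
    {
        public static void Enable()
        {
            AppDomain.CurrentDomain.FirstChanceException += (sender, e) =>
            {
                // The C# runtime binder throws and swallows these internally while
                // resolving dynamic members; they are expected with DynamicObject.
                if (e.Exception is RuntimeBinderException)
                    return;

                Console.Error.WriteLine("First chance: {0}: {1}",
                    e.Exception.GetType().Name, e.Exception.Message);
            };
        }
    }

    Call FirstChanceLogging.Enable() at startup. In the debugger itself, the closest equivalent I know of is to leave "break on thrown" enabled but untick Microsoft.CSharp.RuntimeBinder.RuntimeBinderException in the Exceptions dialog, so everything else still breaks as before.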

    Read the article

  • Byte array serialization in JSON.NET

    - by Daniel Earwicker
    Given this simple class: class HasBytes { public byte[] Bytes { get; set; } } I can round-trip it through JSON using JSON.NET such that the byte array is base-64 encoded: var bytes = new HasBytes { Bytes = new byte[] { 1, 2, 3, 4 } }; // turn it into a JSON string var json = JsonConvert.SerializeObject(bytes); // get back a new instance of HasBytes var result1 = JsonConvert.DeserializeObject<HasBytes>(json); // all is well Debug.Assert(bytes.Bytes.SequenceEqual(result1.Bytes)); But if I deserialize this-a-wise: var result2 = (HasBytes)new JsonSerializer().Deserialize( new JTokenReader( JToken.ReadFrom(new JsonTextReader( new StringReader(json)))), typeof(HasBytes)); ... it throws an exception, "Expected bytes but got string". What other options/flags/whatever would need to be added to the "complicated" version to make it properly decode the base-64 string to initialize the byte array? Obviously I'd prefer to use the simple version but I'm trying to work with a CouchDB wrapper library called Divan, which sadly uses the complicated version, with the responsibilities for tokenizing/deserializing widely separated, and I want to make the simplest possible patch to how it currently works.
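    One way around this, rather than changing how Divan drives the reader, is to register a converter that accepts either a Bytes token or a base-64 String token for byte[] properties, so both deserialization paths behave the same. This is a hedged sketch (the converter name is mine; it assumes Newtonsoft.Json is referenced), not necessarily the minimal fix:

    using System;
    using Newtonsoft.Json;

    class Base64ByteArrayConverter : JsonConverter
    {
        public override bool CanConvert(Type objectType)
        {
            return objectType == typeof(byte[]);
        }

        public override object ReadJson(JsonReader reader, Type objectType,
                                        object existingValue, JsonSerializer serializer)
        {
            switch (reader.TokenType)
            {
                case JsonToken.Bytes:   // value already materialized as raw bytes
                    return (byte[])reader.Value;
                case JsonToken.String:  // base-64 text, as produced by SerializeObject
                    return Convert.FromBase64String((string)reader.Value);
                case JsonToken.Null:
                    return null;
                default:
                    throw new JsonSerializationException(
                        "Unexpected token for byte[]: " + reader.TokenType);
            }
        }

        public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
        {
            writer.WriteValue(Convert.ToBase64String((byte[])value));
        }
    }

    Adding an instance to JsonSerializer.Converters (or decorating the Bytes property with [JsonConverter(typeof(Base64ByteArrayConverter))]) should let the "complicated" version decode the base-64 string the same way the simple version does.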

    Read the article

  • Android ADT Eclipse plugin, parseSDKContent failed

    - by Sebastian Ganslandt
    I've just set up my first Android development environment, consisting of Eclipse 3.5, Mac OS X 10.5, the Android SDK for x86 Macs and the ADT Eclipse plugin 0.9.6. I've set $PATH to my SDK/tools directory (which shouldn't matter if I only use Eclipse, right?) and started Eclipse, but when I try to set the path to the SDK in Eclipse, I get the error "parseSdkContent failed". The stack trace from the thrown exception is java.lang.IllegalArgumentException: http://www.w3.org/2001/XMLSchema at javax.xml.validation.SchemaFactory.newInstance(SchemaFactory.java:181) at com.android.ide.eclipse.adt.internal.sdk.LayoutDevicesXsd.getValidator(Unknown Source) at com.android.ide.eclipse.adt.internal.sdk.LayoutDeviceManager.parseLayoutDevices(Unknown Source) at com.android.ide.eclipse.adt.internal.sdk.LayoutDeviceManager.loadDefaultLayoutDevices(Unknown Source) at com.android.ide.eclipse.adt.internal.sdk.LayoutDeviceManager.loadDefaultAndUserDevices(Unknown Source) at com.android.ide.eclipse.adt.internal.sdk.Sdk.<init>(Unknown Source) at com.android.ide.eclipse.adt.internal.sdk.Sdk.loadSdk(Unknown Source) at com.android.ide.eclipse.adt.AdtPlugin$13.run(Unknown Source) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55) I can't see that I've missed anything in the setup process; according to the instructions it should basically just work out of the box. Any ideas as to why this might fail?

    Read the article

  • Server cannot set status after HTTP headers have been sent IIS7.5

    - by marcinn
    Hi, sometimes I get an exception in my production environment: Process information Process ID: 3832 Process name: w3wp.exe Account name: NT AUTHORITY\NETWORK SERVICE Exception information Exception type: System.Web.HttpException Exception message: Server cannot set status after HTTP headers have been sent. Request information Request URL: http://www.myulr.pl/logon Request path: /logon User host address: 10.11.9.1 User: user001 Is authenticated: True Authentication Type: Forms Thread account name: NT AUTHORITY\NETWORK SERVICE Thread information Thread ID: 10 Thread account name: NT AUTHORITY\NETWORK SERVICE Is impersonating: False Stack trace: at System.Web.HttpResponse.set_StatusCode(Int32 value) at System.Web.HttpResponseWrapper.set_StatusCode(Int32 value) at System.Web.Mvc.HandleErrorAttribute.OnException(ExceptionContext filterContext) at System.Web.Mvc.ControllerActionInvoker.InvokeExceptionFilters(ControllerContext controllerContext, IList(1) filters, Exception exception) at System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) at System.Web.Mvc.Controller.ExecuteCore() at System.Web.Mvc.MvcHandler.<>c__DisplayClass8.<BeginProcessRequest>b__4() at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass1.<MakeVoidDelegate>b__0() at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass8(1).<BeginSynchronous>b__7(IAsyncResult _) at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResult(1).End() at System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) I haven't noticed this error in my test environment. What should I check? I am using ASP.NET MVC 2 (Release Candidate 2).
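    I can't tell from the trace alone why HandleErrorAttribute only fires after the response has started in production, but a common defensive workaround is to stop the attribute from rewriting the status once headers are out. A rough sketch (the attribute name is mine, and this treats the symptom rather than the root cause):

    using System.Web;
    using System.Web.Mvc;

    public class SafeHandleErrorAttribute : HandleErrorAttribute
    {
        public override void OnException(ExceptionContext filterContext)
        {
            try
            {
                // Normal error-view and status handling first.
                base.OnException(filterContext);
            }
            catch (HttpException)
            {
                // Headers (or part of the body) were already sent, so the status
                // code can no longer be changed; mark it handled and log elsewhere.
                filterContext.ExceptionHandled = true;
            }
        }
    }

    Apply it to your controllers (or a base controller) in place of [HandleError]. It is still worth finding what commits the response early - a Response.Flush, partial output written before a redirect, or an error thrown while the view is streaming - since that is usually the real cause of "cannot set status after HTTP headers have been sent".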

    Read the article

  • Mutual SSL Client Authentication

    - by nordisk
    Hi, I'm trying to achieve mutual SSL client authentication, but without much success so far. Let me explain my scenario first: I have a client certificate issued by an intermediate CA, whose certificate in turn was issued by a root CA (the intermediate and root CAs are within the company's network). This is the certificate I am including as part of my call to the server (using the HttpWebRequest object). The server has imported my client certificate, and it is one of the certificates presented back to me. An important thing to note is that the server does not trust the intermediate CA, or the root for that matter. What we're trying to achieve is authentication against the certificate directly, i.e. mutual authentication using my client certificate. The error I'm getting is: "The request was aborted: Could not create SSL/TLS secure channel." From my trace logs I also get the following: System.Net Information: 0 : [3380] SecureChannel#34868631 - We have user-provided certificates. The server has specified 2 issuer(s). Looking for certificates that match any of the issuers. System.Net Information: 0 : [3380] SecureChannel#34868631 - Left with 0 client certificates to choose from. One of the certificates presented to us by the server is the same as our client certificate, but the matching between them seems to fail - it looks like it's trying to verify the issuer. Now to make things even more interesting: if the server trusts and sends back the intermediate CA, then everything works fine! (This is not an option for the production environment, though, I'm told.) Using jmeter to test the request works fine too; I can only assume that Java's SSL handshake implementation is somewhat different. So it really comes down to this: do you need to implement mutual SSL authentication differently from normal client SSL authentication? Any ideas or comments would be greatly appreciated.
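    The trace line "Left with 0 client certificates to choose from" suggests the client-side certificate selection is discarding your certificate because its issuer chain does not match any of the issuers the server advertises. Before changing trust on either side, it can help to dump the chain of the certificate you are sending and compare it with that issuer list. A small diagnostic sketch (the file path and password are placeholders for wherever your client certificate lives):

    using System;
    using System.Security.Cryptography.X509Certificates;

    class ChainDump
    {
        static void Main()
        {
            // Placeholder: point this at the client certificate you attach to the request.
            var cert = new X509Certificate2(@"C:\certs\client.pfx", "password");

            var chain = new X509Chain();
            chain.ChainPolicy.RevocationMode = X509RevocationMode.NoCheck;
            chain.ChainPolicy.VerificationFlags = X509VerificationFlags.AllowUnknownCertificateAuthority;
            chain.Build(cert);

            foreach (X509ChainElement element in chain.ChainElements)
            {
                Console.WriteLine("Subject: " + element.Certificate.Subject);
                Console.WriteLine("Issuer : " + element.Certificate.Issuer);
            }
        }
    }

    If none of these issuers appear in the list the server sends during the handshake, the selection logic will never offer your certificate, and the fix generally has to happen on the server side (trust, or at least advertise, the intermediate CA, or stop sending a trusted-issuer list) rather than on the client.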

    Read the article

  • DevExpress AspxGridView clientside SelectionChanged problem when using paged ObjectDataSource

    - by Constantin Baciu
    The context is as follows: one DevExpress ASPxGridView with a server-side paging/filtering/sorting mechanism (using ObjectDataSource). I've been having problems with the filter mechanism (see this stack). Now, the problem I'm having is this: the client-side events get mangled between DataSource events. Let me explain what happens: if I change the page (or sort/filter for that matter) and then select one row from the grid, the client-side SelectionChanged event fires fine. If I change the page (or sort/filter) again, the event doesn't fire anymore. Instead, on the server side, I get a "The method or operation is not implemented" exception with the following stack trace: at DevExpress.Web.Data.WebDataProviderBase.GetListSouceRowValue(Int32 listSourceRowIndex, String fieldName) at DevExpress.Web.Data.WebDataProxy.GetListSourceRowValue(Int32 listSourceRowIndex, String fieldName) at DevExpress.Web.Data.WebDataProxy.GetKeyValueCore(Int32 index, GetKeyValueCallback getKeyValue) at DevExpress.Web.Data.WebDataSelectionBase.GetSelectedValues(String[] fieldNames, Int32 visibleStartIndex, Int32 visibleRowCountOnPage) at DevExpress.Web.Data.WebDataProxy.GetSelectedValues(String[] fieldNames) at DevExpress.Web.ASPxGridView.ASPxGridView.FBSelectFieldValues(String[] args) at DevExpress.Web.ASPxGridView.ASPxGridView.GetCallbackResultCore() at DevExpress.Web.ASPxGridView.ASPxGridView.GetCallbackResult() at DevExpress.Web.ASPxClasses.ASPxWebControl.System.Web.UI.ICallbackEventHandler.GetCallbackResult() Am I doing something wrong? Any help will be much appreciated.

    Read the article

  • VS 2012 / 2013 AccessViolationException

    - by Goran
    When I run the project (F5) I receive the following exception in the IDE: An unhandled exception of type 'System.AccessViolationException' occurred in System.Windows.Forms.dll Additional information: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. The stack trace reports: at System.Windows.Forms.UnsafeNativeMethods.SendMessage(HandleRef hWnd, Int32 msg, IntPtr wParam, IntPtr lParam) at System.Windows.Forms.Control.SendMessage(Int32 msg, Int32 wparam, IntPtr lparam) at System.Windows.Forms.Form.UpdateWindowIcon(Boolean redrawFrame) at System.Windows.Forms.Form.CreateHandle() at System.Windows.Forms.Control.get_Handle() at Microsoft.VisualStudio.HostingProcess.HostProc.RunParkingWindowThread() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() I have never seen the same exception when running without the debugger (Ctrl+F5). This is a WPF project, but the exception occurs before the App constructor is executed, so this is external code and my application code has not started to execute. It happens sporadically: sometimes it appears only once, and sometimes I run the project and get this message several times in a row. Then it does not pop up for 5-6 runs, and then it starts again. Does anyone know why this is happening? I have just done a clean install of Windows 8.1 64-bit, VS 2013 and TFS 2013 (although I had the same problem with Windows 8 and VS 2012, just not as often).

    Read the article

  • The HTTP request was forbidden with client authentication scheme 'Anonymous'

    - by dudia
    I am trying to configure a WCF server/client pair to work with SSL, and I get the following exception: The HTTP request was forbidden with client authentication scheme 'Anonymous'. I have a self-hosted WCF server and have run httpcfg; both my client and server certificates are stored under Personal and Trusted People on the Local Machine. Here is the server code: binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate; binding.Security.Mode = WebHttpSecurityMode.Transport; _host.Credentials.ClientCertificate.Authentication.CertificateValidationMode = System.ServiceModel.Security.X509CertificateValidationMode.PeerOrChainTrust; _host.Credentials.ClientCertificate.Authentication.RevocationMode = X509RevocationMode.NoCheck; _host.Credentials.ClientCertificate.Authentication.TrustedStoreLocation = StoreLocation.LocalMachine; _host.Credentials.ServiceCertificate.SetCertificate("cn=ServerSide", StoreLocation.LocalMachine, StoreName.My); Client code: binding.Security.Mode = WebHttpSecurityMode.Transport; binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate; WebChannelFactory<ITestClientForServer> cf = new WebChannelFactory<ITestClientForServer>(binding, url2Bind); cf.Credentials.ClientCertificate.SetCertificate("cn=ClientSide", StoreLocation.LocalMachine, StoreName.My); ServicePointManager.ServerCertificateValidationCallback += RemoteCertificateValidate; Looking at web_tracelog.svclog and trace.log reveals that the server cannot authenticate the client certificate. My certificates are not signed by an authorized CA, but that is why I added them to Trusted People. What am I missing?
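    Before changing the bindings, it is worth ruling out the certificate itself: one common cause of this error is that the client never actually presents a certificate, for example because the private key is missing or inaccessible to the client process, or because the certificate lacks the Client Authentication enhanced key usage. A quick diagnostic sketch (it assumes the same store and subject used in the client code above):

    using System;
    using System.Linq;
    using System.Security.Cryptography.X509Certificates;

    class ClientCertCheck
    {
        static void Main()
        {
            var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
            store.Open(OpenFlags.ReadOnly);

            var matches = store.Certificates.Find(
                X509FindType.FindBySubjectDistinguishedName, "CN=ClientSide", false);

            foreach (X509Certificate2 cert in matches)
            {
                Console.WriteLine("Thumbprint   : " + cert.Thumbprint);
                Console.WriteLine("HasPrivateKey: " + cert.HasPrivateKey);

                // List the enhanced key usages; client auth is OID 1.3.6.1.5.5.7.3.2.
                foreach (var eku in cert.Extensions.OfType<X509EnhancedKeyUsageExtension>())
                    foreach (var oid in eku.EnhancedKeyUsages)
                        Console.WriteLine("EKU          : " + oid.FriendlyName + " (" + oid.Value + ")");
            }

            store.Close();
        }
    }

    If the private key is missing, re-import the PFX or grant the client's account access to the key; if the EKU list lacks Client Authentication, issue a certificate that includes it. Note also that SetCertificate with a subject string matches on the full subject distinguished name, so "cn=ClientSide" generally has to be the certificate's complete DN.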

    Read the article

  • How to troubleshoot errors with TeamCity

    - by Tomas Lycken
    I'm following this guide to set up a small environment for source control and automated builds - mostly for learning what it is and how it works, but also for using in those of my hobby projects that I believe will actually be useful some day. However, at the step where he commits and builds, I fail to get a success status in the TeamCity history log. I keep getting the error described in the stack trace below. I have verified with Windows Explorer that the solution file it can't find is actually there, so I really don't know what to do. How do I fix/troubleshoot this? [15:16:06]: Checking for changes [15:16:08]: Clearing temporary directory: C:\Program Files\JetBrains\BuildAgent\temp\buildTmp [15:16:08]: Checkout directory: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588 [15:16:08]: Updating sources: server side checkout... [15:16:08]: [Updating sources: server side checkout...] Building incremental patch for VCS root: DemoProjects [15:16:09]: [Updating sources: server side checkout...] Repository sources transferred [15:16:09]: [Updating sources: server side checkout...] Updating C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588 [15:16:10]: Start process: "c:\Program Files\JetBrains\BuildAgent\bin\..\plugins\dotnetPlugin\bin\JetBrains.BuildServer.MsBuildBootstrap.exe" "/workdir:C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588" /msbuildPath:C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe [15:16:10]: in: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588 [15:16:11]: TeamCity MSBuild bootstrap v5.1 Copyright (C) JetBrains s.r.o. [15:16:11]: Application failed with internal error: [15:16:11]: Failed to find project file at path: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588\Nehemia\trunk\Nehemiah.sln [15:16:11]: System.Exception: Failed to find project file at path: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588\Nehemia\trunk\Nehemiah.sln [15:16:11]: at JetBrains.BuildServer.MSBuildBootstrap.Impl.MSBuildBootstrapFactory.Create(IClientRunArgs args) in c:\Agent\work\6223f0c8b1d45aaa\src\MSBuildBootstrap.Core\src\Impl\MSBuildBootstrapFactory.cs:line 25 [15:16:11]: at JetBrains.BuildServer.MSBuildBootstrap.Program.Run(String[] _args) in c:\Agent\work\6223f0c8b1d45aaa\src\MSBuildBootstrap\src\Program.cs:line 66 [15:16:11]: Process exited with code -11 [15:16:11]: Build finished

    Read the article

< Previous Page | 128 129 130 131 132 133 134 135 136 137 138 139  | Next Page >