Search Results

Search found 2396 results on 96 pages for 'inner'.


  • Working With Extended Events

    - by Fatherjack
    SQL Server 2012 has made working with Extended Events (XE) pretty simple when it comes to seeing what sessions you have on your servers, what options you have selected and so forth, but if you are like me then you still have some SQL Server instances that are 2008 or 2008 R2. For those servers there is no built-in way to view the Extended Event sessions in SSMS. I keep coming up against the same questions – Where are the xel log files? What events, actions or predicates are set for the events on the server? What sessions are there on the server already? I got tired of this being a perpetual question and wrote some TSQL to save as a snippet in SQL Prompt so that these details are permanently only a couple of clicks away.

    First, some history. If you just came here for the code, skip down a few paragraphs and it's all there. If you want a little time to reminisce about SQL Server then stick with me through the next paragraph or two. We are in a bit of a cross-over period currently: there are many versions of SQL Server, but I would guess that SQL Server 2008, 2008 R2 and 2012 comprise the majority of installations. With each of these comes a set of management tools, of which SQL Server Management Studio (SSMS) is one. In 2008 and 2008 R2, Extended Events made their first appearance and there was no way to work with them in the SSMS interface. At some point the Extended Events guru Jonathan Kehayias (http://www.sqlskills.com/blogs/jonathan/) created the SQL Server 2008 Extended Events SSMS Addin, which is really an excellent tool to ease XE session administration. This addin will install in SSMS 2008 or 2008 R2 but not SSMS 2012. If you use a compatible version of SSMS then I wholly recommend downloading and using it to make your work with XE much easier. If you have SSMS 2012 installed – and there is no reason not to, as it will let you work with all versions of SQL Server – then you cannot install this addin. If you are working with SQL Server 2012 then SSMS 2012 has built-in functionality to manage XE sessions, but this functionality does not apply to 2008 or 2008 R2 instances. This means you are somewhat restricted and have to use TSQL to manage XE sessions on older versions of SQL Server.

    OK, those of you that skipped ahead for the code, you need to start from here: So, you are working with SSMS 2012 but have a SQL Server of an earlier version that needs an XE session created, or you think there is a session created but you aren't sure, or you know it's there but can't remember if it is running and where the output is going. How do you find out? Well, none of the information is hidden as such, but it is a bit of a wrangle to locate, and it isn't the sort of code that is likely to stay in your memory.

    I have created two pieces of code. The first examines the sys.server_event_... catalog views in combination with the sys.dm_xe_... management views to give the name of every session that exists on the server, regardless of whether it is running or not, along with two pieces of TSQL. One piece will alter the state of the session: if the session is running then the code will stop it when executed, and vice versa. The other piece of code will drop the selected session; if the session is running then the code will stop it first. Do not execute the DROP code unless you are sure you have the CREATE code to hand. The session will be dropped from the server without a second chance to change your mind.
    /**************************************************************/
    /***   To locate and describe event sessions on a server   ***/
    /***                                                        ***/
    /***   Generates TSQL to start/stop/drop sessions          ***/
    /***                                                        ***/
    /***   Jonathan Allen - @fatherjack                         ***/
    /***   June 2013                                            ***/
    /**************************************************************/
    SELECT  [EES].[name] AS [Session Name - all sessions] ,
            CASE WHEN [MXS].[name] IS NULL THEN ISNULL([MXS].[name], 'Stopped')
                 ELSE 'Running'
            END AS SessionState ,
            CASE WHEN [MXS].[name] IS NULL
                 THEN ISNULL([MXS].[name],
                             'ALTER EVENT SESSION [' + [EES].[name]
                             + '] ON SERVER STATE = START;')
                 ELSE 'ALTER EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER STATE = STOP;'
            END AS ALTER_SessionState ,
            CASE WHEN [MXS].[name] IS NULL
                 THEN ISNULL([MXS].[name],
                             'DROP EVENT SESSION [' + [EES].[name]
                             + '] ON SERVER; -- This WILL drop the session. It will no longer exist. Don''t do it unless you are certain you can recreate it if you need it.')
                 ELSE 'ALTER EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER STATE = STOP; ' + CHAR(10)
                      + '-- DROP EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER; -- This WILL stop and drop the session. It will no longer exist. Don''t do it unless you are certain you can recreate it if you need it.'
            END AS DROP_Session
    FROM    [sys].[server_event_sessions] AS EES
            LEFT JOIN [sys].[dm_xe_sessions] AS MXS ON [EES].[name] = [MXS].[name]
    WHERE   [EES].[name] NOT IN ( 'system_health', 'AlwaysOn_health' )
    ORDER BY SessionState
    GO

    I have excluded the system_health and AlwaysOn_health sessions as I don't want to accidentally execute the drop script for these sessions that are created as part of the SQL Server installation. It is possible to recreate them, but that is a whole lot of aggravation I'd rather avoid.

    The second piece of code gathers details of running XE sessions only and provides information on the events being collected, any predicates that are set on those events, the actions that are set to be collected, where the collected information is being logged and, if that logging is to a file target, where that file is located.
    /**********************************************/
    /***   Running Session summary              ***/
    /***                                        ***/
    /***   Details key values of XE sessions   ***/
    /***   that are in a running state         ***/
    /***                                        ***/
    /***   Jonathan Allen - @fatherjack        ***/
    /***   June 2013                           ***/
    /**********************************************/
    SELECT  [EES].[name] AS [Session Name - running sessions] ,
            [EESE].[name] AS [Event Name] ,
            COALESCE([EESE].[predicate], 'unfiltered') AS [Event Predicate Filter(s)] ,
            [EESA].[Action] AS [Event Action(s)] ,
            [EEST].[Target] AS [Session Target(s)] ,
            ISNULL([EESF].[value], 'No file target in use') AS [File_Target_UNC] -- select *
    FROM    [sys].[server_event_sessions] AS EES
            INNER JOIN [sys].[dm_xe_sessions] AS MXS ON [EES].[name] = [MXS].[name]
            INNER JOIN [sys].[server_event_session_events] AS [EESE] ON [EES].[event_session_id] = [EESE].[event_session_id]
            LEFT JOIN [sys].[server_event_session_fields] AS EESF ON ( [EES].[event_session_id] = [EESF].[event_session_id]
                                                                       AND [EESF].[name] = 'filename' )
            CROSS APPLY ( SELECT STUFF(( SELECT ', ' + sest.name
                                         FROM   [sys].[server_event_session_targets] AS SEST
                                         WHERE  [EES].[event_session_id] = [SEST].[event_session_id]
                                         FOR XML PATH('')
                                       ), 1, 2, '') AS [Target]
                        ) AS EEST
            CROSS APPLY ( SELECT STUFF(( SELECT ', ' + [sesa].NAME
                                         FROM   [sys].[server_event_session_actions] AS sesa
                                         WHERE  [sesa].[event_session_id] = [EES].[event_session_id]
                                         FOR XML PATH('')
                                       ), 1, 2, '') AS [Action]
                        ) AS EESA
    WHERE   [EES].[name] NOT IN ( 'system_health', 'AlwaysOn_health' ) /*Optional to exclude 'out-of-the-box' traces*/

    I hope that these scripts are useful to you and I would be obliged if you would keep my name in the script comments. I have no problem with you using it in production or personal circumstances, however it has no warranty or guarantee. Don't use it unless you understand it and are happy with what it is going to do. I am not ever responsible for the consequences of executing this script on your servers.
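    If you then need to look inside the .xel files that a file target points to (the "Where are the xel log files?" question above), the files can be read with TSQL as well. This is a minimal sketch only – the path and file name pattern are placeholders you would replace with the File_Target_UNC value returned by the script above, and on 2008/2008 R2 the matching .xem metadata file must be supplied too:

    -- Hypothetical paths: substitute the file target location reported above
    SELECT  CAST(xef.event_data AS XML) AS event_data_xml
    FROM    sys.fn_xe_file_target_read_file(
                'C:\XELogs\MySession*.xel',   -- .xel file pattern (assumed location)
                'C:\XELogs\MySession*.xem',   -- metadata file, required on 2008/2008 R2
                NULL, NULL) AS xef;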

    Read the article

  • C# Performance Pitfall – Interop Scenarios Change the Rules

    - by Reed
    C# and .NET, overall, really do have fantastic performance in my opinion.  That being said, the performance characteristics dramatically differ from native programming, and take some relearning if you're used to doing performance optimization in most other languages, especially C, C++, and similar.  However, there are times when revisiting tricks learned in native code plays a critical role in performance optimization in C#. I recently ran across a nasty scenario that illustrated to me how dangerous following any fixed rules for optimization can be…

    The rules in C# when optimizing code are very different than in C or C++.  Often, they're exactly backwards.  For example, in C and C++, lifting a variable out of loops in order to avoid memory allocations can often have huge advantages.  If some function within a call graph is allocating memory dynamically, and that gets called in a loop, it can dramatically slow down a routine. This can be a tricky bottleneck to track down, even with a profiler.  Looking at the memory allocation graph is usually the key for spotting this routine, as it's often "hidden" deep in the call graph.  For example, while optimizing some of my scientific routines, I ran into a situation where I had a loop similar to:

    for (i=0; i<numberToProcess; ++i)
    {
        // Do some work
        ProcessElement(element[i]);
    }

    This loop was at a fairly high level in the call graph, and often could take many hours to complete, depending on the input data.  As such, any performance optimization we could achieve would be greatly appreciated by our users. After a fair bit of profiling, I noticed that a couple of function calls down the call graph (inside of ProcessElement), there was some code that effectively was doing:

    // Allocate some data required
    DataStructure* data = new DataStructure(num);

    // Call into a subroutine that passed around and manipulated this data heavily
    CallSubroutine(data);

    // Read and use some values from here
    double values = data->Foo;

    // Cleanup
    delete data;

    // ...
    return bar;

    Normally, if "DataStructure" was a simple data type, I could just allocate it on the stack.  However, its constructor internally allocated its own memory using new, so this wouldn't eliminate the problem.  In this case, however, I could change the call signatures to allow the pointer to the data structure to be passed into ProcessElement and through the call graph, allowing the inner routine to reuse the same "data" memory instead of allocating.  At the highest level, my code effectively changed to something like:

    DataStructure* data = new DataStructure(numberToProcess);
    for (i=0; i<numberToProcess; ++i)
    {
        // Do some work
        ProcessElement(element[i], data);
    }
    delete data;

    Granted, this dramatically reduced the maintainability of the code, so it wasn't something I wanted to do unless there was a significant benefit.
    In this case, after profiling the new version, I found that it improved the overall performance dramatically – my main test case went from 35 minutes runtime down to 21 minutes.  This was such a significant improvement, I felt it was worth the reduction in maintainability.

    In C and C++, it's generally a good idea (for performance) to:

    - Reduce the number of memory allocations as much as possible,
    - Use fewer, larger memory allocations instead of many smaller ones, and
    - Allocate as high up the call stack as possible, and reuse memory

    I've seen many people try to make similar optimizations in C# code.  For good or bad, this is typically not a good idea.  The garbage collector in .NET completely changes the rules here. In C#, reallocating memory in a loop is not always a bad idea.  In this scenario, for example, I may have been much better off leaving the original code alone.  The reason for this is the garbage collector.  The GC in .NET is incredibly effective, and leaving the allocation deep inside the call stack has some huge advantages.  First and foremost, it tends to make the code more maintainable – passing around object references tends to couple the methods together more than necessary, and overall increases the complexity of the code.  This is something that should be avoided unless there is a significant reason.  Second, (unlike C and C++) memory allocation of a single object in C# is normally cheap and fast.  Finally, and most critically, there is a large advantage to having short-lived objects.  If you lift a variable out of the loop and reuse the memory, it's much more likely that the object will get promoted to Gen1 (or worse, Gen2).  This can cause expensive compaction operations to be required, and also lead to (at least temporary) memory fragmentation as well as more costly collections later.

    As such, I've found that it's often (though not always) faster to leave memory allocations where you'd naturally place them – deep inside of the call graph, inside of the loops.  This causes the objects to stay very short-lived, which in turn increases the efficiency of the garbage collector, and can dramatically improve the overall performance of the routine as a whole. In C#, I tend to:

    - Keep variable declarations in the tightest scope possible
    - Declare and allocate objects at usage

    While this tends to achieve some of the same goals (reducing unnecessary allocations, etc.), the intent here is a bit different – it's about keeping the objects rooted for as little time as possible in order to (attempt to) keep them completely in Gen0, or worst case, Gen1.  It also has the huge advantage of keeping the code very maintainable – objects are used and "released" as soon as possible, which keeps the code very clean.  It does, however, often have the side effect of causing more allocations to occur, while keeping the objects rooted for a much shorter time.

    Now – nowhere here am I suggesting that these rules are hard, fast rules that are always true.  That being said, my time spent optimizing over the years encourages me to naturally write code that follows the above guidelines, then profile and adjust as necessary.  In my current project, however, I ran across one of those nasty little pitfalls that's something to keep in mind – interop changes the rules. In this case, I was dealing with an API that, internally, used some COM objects.  These COM objects were leading to native allocations (most likely C++) occurring in a loop deep in my call graph.
    Even though I was writing nice, clean managed code, the normal managed code rules for performance no longer applied.  After profiling to find the bottleneck in my code, I realized that my inner loop, an innocuous-looking block of C# code, was effectively causing a set of native memory allocations in every iteration.  This required going back to a "native programming" mindset for optimization.  Lifting these variables out of the loop and reusing them took a 1:10 routine down to 0:20 – again, a very worthwhile improvement. Overall, the lessons here are:

    - Always profile if you suspect a performance problem – don't assume any rule is correct, or any code is efficient, just because it looks like it should be
    - Remember to check memory allocations when profiling, not just CPU cycles
    - Interop scenarios often cause managed code to act very differently than "normal" managed code
    - Native code can be hidden very cleverly inside of managed wrappers

    Read the article

  • Generating radial indicator images using C#

    - by DigiMortal
    In one of my projects I needed to draw radial indicators for processes measured in percent – simple images like the one shown on the right. I solved the problem by creating the images in C# and saving them to the server's hard disk, so once an image has been generated it is returned from disk the next time. I am not a master of graphics or geometry, but here is the code I wrote.

    Drawing the radial indicator

    To get things done the quick and easy way – later it may be one of the younger developers who needs to change things – I divided my indicator drawing process into the four steps shown below.

    1. Fill pie
    2. Draw circles
    3. Fill inner circle
    4. Draw text

    Drawing the image

    Here is the code to draw indicators.

    private static void SaveRadialIndicator(int percent, string filePath)
    {
        using (Bitmap bitmap = new Bitmap(100, 100))
        using (Graphics objGraphics = Graphics.FromImage(bitmap))
        {
            // Initialize graphics
            objGraphics.Clear(Color.White);
            objGraphics.SmoothingMode = SmoothingMode.AntiAlias;
            objGraphics.TextRenderingHint = TextRenderingHint.ClearTypeGridFit;

            // Fill pie
            // Degrees are taken clockwise, 0 is parallel with x
            // For sweep angle we must convert percent to degrees (90/25 = 18/5)
            float startAngle = -90.0F;
            float sweepAngle = (18.0F / 5) * percent;

            Rectangle rectangle = new Rectangle(5, 5, 90, 90);
            objGraphics.FillPie(Brushes.Orange, rectangle, startAngle, sweepAngle);

            // Draw circles
            rectangle = new Rectangle(5, 5, 90, 90);
            objGraphics.DrawEllipse(Pens.LightGray, rectangle);
            rectangle = new Rectangle(20, 20, 60, 60);
            objGraphics.DrawEllipse(Pens.LightGray, rectangle);

            // Fill inner circle with white
            rectangle = new Rectangle(21, 21, 58, 58);
            objGraphics.FillEllipse(Brushes.White, rectangle);

            // Draw text on image
            // Use rectangle for text and align text to center of rectangle
            var font = new Font("Arial", 13, FontStyle.Bold);
            StringFormat stringFormat = new StringFormat();
            stringFormat.Alignment = StringAlignment.Center;
            stringFormat.LineAlignment = StringAlignment.Center;

            rectangle = new Rectangle(20, 40, 62, 20);
            objGraphics.DrawString(percent + "%", font, Brushes.DarkGray, rectangle, stringFormat);

            // Save indicator to file
            objGraphics.Flush();
            if (File.Exists(filePath))
                File.Delete(filePath);

            bitmap.Save(filePath, ImageFormat.Png);
        }
    }

    Using indicators on a web page

    To show indicators on your web page you can use the following code on the page that outputs indicator images:

    protected void Page_Load(object sender, EventArgs e)
    {
        var percentString = Request.QueryString["percent"];
        var percent = 0;
        if (!int.TryParse(percentString, out percent))
            return;
        if (percent < 0 || percent > 100)
            return;

        var file = Server.MapPath("~/images/percent/" + percent + ".png");
        if (!File.Exists(file))
            SaveRadialIndicator(percent, file);

        Response.Clear();
        Response.ContentType = "image/png";
        Response.WriteFile(file);
        Response.End();
    }

    On the pages where you need an indicator you can set the image source to Indicator.aspx (if you named your indicator-handling page like this) and pass the percent as a query string:

    <img src="Indicator.aspx?percent=30" />

    That's it! If somebody knows a simpler way to generate indicators like this I am interested in your feedback.

    Read the article

  • Backup Exec job completed with exceptions: RWS_AttachToDLE

    - by HannesFostie
    2 of this weekend's jobs completed with exceptions, and mention "RWS_AttachToDLE". I get the feeling the job did in fact complete without missing data, but I would like to be 100% sure (and can't verify the backup myself right now – my colleague is out of the office and the backup in question is a bit of a black box for me; it works, but I am not familiar with its inner workings). Also, how can I prevent this from happening? Google didn't prove to be very helpful, and Experts Exchange seems to have changed their system so that you can't simply scroll down to see the answers to a particular question ;-)

    Read the article

  • Radius Authorization against ActiveDirectory and the users file

    - by mohrphium
    I have a problem with my freeradius server configuration. I want to be able to authenticate users against Windows ActiveDirectory (2008 R2) and the users file, because some of my co-workers are not listed in AD. We use the freeradius server to authenticate WLAN users. (PEAP/MSCHAPv2) AD Authentication works great, but I still have problems with the /etc/freeradius/users file When I run freeradius -X -x I get the following: Mon Jul 2 09:15:58 2012 : Info: ++++[chap] returns noop Mon Jul 2 09:15:58 2012 : Info: ++++[mschap] returns noop Mon Jul 2 09:15:58 2012 : Info: [suffix] No '@' in User-Name = "testtest", looking up realm NULL Mon Jul 2 09:15:58 2012 : Info: [suffix] Found realm "NULL" Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Stripped-User-Name = "testtest" Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Realm = "NULL" Mon Jul 2 09:15:58 2012 : Info: [suffix] Authentication realm is LOCAL. Mon Jul 2 09:15:58 2012 : Info: ++++[suffix] returns ok Mon Jul 2 09:15:58 2012 : Info: [eap] EAP packet type response id 1 length 13 Mon Jul 2 09:15:58 2012 : Info: [eap] No EAP Start, assuming it's an on-going EAP conversation Mon Jul 2 09:15:58 2012 : Info: ++++[eap] returns updated Mon Jul 2 09:15:58 2012 : Info: [files] users: Matched entry testtest at line 1 Mon Jul 2 09:15:58 2012 : Info: ++++[files] returns ok Mon Jul 2 09:15:58 2012 : Info: ++++[expiration] returns noop Mon Jul 2 09:15:58 2012 : Info: ++++[logintime] returns noop Mon Jul 2 09:15:58 2012 : Info: [pap] WARNING: Auth-Type already set. Not setting to PAP Mon Jul 2 09:15:58 2012 : Info: ++++[pap] returns noop Mon Jul 2 09:15:58 2012 : Info: +++- else else returns updated Mon Jul 2 09:15:58 2012 : Info: ++- else else returns updated Mon Jul 2 09:15:58 2012 : Info: Found Auth-Type = EAP Mon Jul 2 09:15:58 2012 : Info: # Executing group from file /etc/freeradius/sites-enabled/default Mon Jul 2 09:15:58 2012 : Info: +- entering group authenticate {...} Mon Jul 2 09:15:58 2012 : Info: [eap] EAP Identity Mon Jul 2 09:15:58 2012 : Info: [eap] processing type tls Mon Jul 2 09:15:58 2012 : Info: [tls] Initiate Mon Jul 2 09:15:58 2012 : Info: [tls] Start returned 1 Mon Jul 2 09:15:58 2012 : Info: ++[eap] returns handled Sending Access-Challenge of id 199 to 192.168.61.11 port 3072 EAP-Message = 0x010200061920 Message-Authenticator = 0x00000000000000000000000000000000 State = 0x85469e2a854487589fb1196910cb8ae3 Mon Jul 2 09:15:58 2012 : Info: Finished request 125. Mon Jul 2 09:15:58 2012 : Debug: Going to the next request Mon Jul 2 09:15:58 2012 : Debug: Waking up in 2.4 seconds. After that it repeats the login attempt and at some point tries to authenticate against ActiveDirectory with ntlm, which doesn't work since the user exists only in the users file. Can someone help me out here? Thanks. PS: Hope this helps, freeradius trying to auth against AD: Mon Jul 2 09:15:58 2012 : Info: ++[chap] returns noop Mon Jul 2 09:15:58 2012 : Info: ++[mschap] returns noop Mon Jul 2 09:15:58 2012 : Info: [suffix] No '@' in User-Name = "testtest", looking up realm NULL Mon Jul 2 09:15:58 2012 : Info: [suffix] Found realm "NULL" Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Stripped-User-Name = "testtest" Mon Jul 2 09:15:58 2012 : Info: [suffix] Adding Realm = "NULL" Mon Jul 2 09:15:58 2012 : Info: [suffix] Authentication realm is LOCAL. 
Mon Jul 2 09:15:58 2012 : Info: ++[suffix] returns ok Mon Jul 2 09:15:58 2012 : Info: ++[control] returns ok Mon Jul 2 09:15:58 2012 : Info: [eap] EAP packet type response id 7 length 67 Mon Jul 2 09:15:58 2012 : Info: [eap] No EAP Start, assuming it's an on-going EAP conversation Mon Jul 2 09:15:58 2012 : Info: ++[eap] returns updated Mon Jul 2 09:15:58 2012 : Info: [files] users: Matched entry testtest at line 1 Mon Jul 2 09:15:58 2012 : Info: ++[files] returns ok Mon Jul 2 09:15:58 2012 : Info: ++[smbpasswd] returns notfound Mon Jul 2 09:15:58 2012 : Info: ++[expiration] returns noop Mon Jul 2 09:15:58 2012 : Info: ++[logintime] returns noop Mon Jul 2 09:15:58 2012 : Info: [pap] WARNING: Auth-Type already set. Not setting to PAP Mon Jul 2 09:15:58 2012 : Info: ++[pap] returns noop Mon Jul 2 09:15:58 2012 : Info: Found Auth-Type = EAP Mon Jul 2 09:15:58 2012 : Info: # Executing group from file /etc/freeradius/sites-enabled/inner-tunnel Mon Jul 2 09:15:58 2012 : Info: +- entering group authenticate {...} Mon Jul 2 09:15:58 2012 : Info: [eap] Request found, released from the list Mon Jul 2 09:15:58 2012 : Info: [eap] EAP/mschapv2 Mon Jul 2 09:15:58 2012 : Info: [eap] processing type mschapv2 Mon Jul 2 09:15:58 2012 : Info: [mschapv2] # Executing group from file /etc/freeradius/sites-enabled/inner-tunnel Mon Jul 2 09:15:58 2012 : Info: [mschapv2] +- entering group MS-CHAP {...} Mon Jul 2 09:15:58 2012 : Info: [mschap] Creating challenge hash with username: testtest Mon Jul 2 09:15:58 2012 : Info: [mschap] Told to do MS-CHAPv2 for testtest with NT-Password Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --username=%{mschap:User-Name:-None} -> --username=testtest Mon Jul 2 09:15:58 2012 : Info: [mschap] No NT-Domain was found in the User-Name. Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: %{mschap:NT-Domain} -> Mon Jul 2 09:15:58 2012 : Info: [mschap] ... expanding second conditional Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --domain=%{%{mschap:NT-Domain}:-AD.CXO.NAME} -> --domain=AD.CXO.NAME Mon Jul 2 09:15:58 2012 : Info: [mschap] mschap2: 82 Mon Jul 2 09:15:58 2012 : Info: [mschap] Creating challenge hash with username: testtest Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --challenge=%{mschap:Challenge:-00} -> --challenge=dd441972f987d68b Mon Jul 2 09:15:58 2012 : Info: [mschap] expand: --nt-response=%{mschap:NT-Response:-00} -> --nt-response=7e6c537cd5c26093789cf7831715d378e16ea3e6c5b1f579 Mon Jul 2 09:15:58 2012 : Debug: Exec-Program output: Logon failure (0xc000006d) Mon Jul 2 09:15:58 2012 : Debug: Exec-Program-Wait: plaintext: Logon failure (0xc000006d) Mon Jul 2 09:15:58 2012 : Debug: Exec-Program: returned: 1 Mon Jul 2 09:15:58 2012 : Info: [mschap] External script failed. Mon Jul 2 09:15:58 2012 : Info: [mschap] FAILED: MS-CHAP2-Response is incorrect Mon Jul 2 09:15:58 2012 : Info: ++[mschap] returns reject Mon Jul 2 09:15:58 2012 : Info: [eap] Freeing handler Mon Jul 2 09:15:58 2012 : Info: ++[eap] returns reject Mon Jul 2 09:15:58 2012 : Info: Failed to authenticate the user. Mon Jul 2 09:15:58 2012 : Auth: Login incorrect (mschap: External script says Logon failure (0xc000006d)): [testtest] (from client techap01 port 0 via TLS tunnel) PPS: Maybe the problem is located here: In /etc/freeradius/modules/ntlm_auth I have set ntlm to: program = "/usr/bin/ntlm_auth --request-nt-key --domain=AD.CXO.NAME --username=%{mschap:User-Name} --password=%{User-Password}" I need this, so users can login without adding @ad.cxo.name to their usernames. 
    But how can I tell freeradius to try both logins – testtest against AD (which should fail) and testtest against the users file (which should work)?

    Read the article

  • Error when installing SQL Server 2008 R2 Express

    - by dretzlaff17
    When installing SQL Server 2008 R2 from the command prompt, I get the following error recorded in the Summary file:

    Scenario specific rules:
    Rules report file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20101217_131444\SystemConfigurationCheck_Report.htm
    Exception summary: The following is an exception stack listing the exceptions in outermost to innermost order. Inner exceptions are being indented.
    Exception type: System.ArgumentNullException
    Message: Value cannot be null. Parameter name: path2
    Data: DisableWatson = true
    Stack:
        at System.IO.Path.Combine(String path1, String path2)
        at Microsoft.SqlServer.Configuration.SqlEngine.SqlEngineSetupPublic.RecomputeDirectoryPaths()
        at Microsoft.SqlServer.Configuration.SqlEngine.SqlEngineSetupPublic.Calculate()
        at Microsoft.SqlServer.Configuration.SetupExtension.FinalCalculateSettingsAction.ExecuteAction(String actionId)
        at Microsoft.SqlServer.Chainer.Infrastructure.Action.Execute(String actionId, TextWriter errorStream)
        at Microsoft.SqlServer.Setup.Chainer.Workflow.ActionInvocation.ExecuteActionHelper(TextWriter statusStream, ISequencedAction actionToRun)

    Has anyone seen this? Here is what I am sending for the command-line parameters:

    /q /ACTION=Install /FEATURES=SQLEngine /SECURITYMODE=SQL /SAPWD="myPassword" /BROWSERSVCSTARTUPTYPE=Automatic /SQLSVCSTARTUPTYPE=Automatic /SQLSVCACCOUNT="NT AUTHORITY\Network Service" /SQLSYSADMINACCOUNTS="BUILTIN\ADMINISTRATORS" /AGTSVCACCOUNT="NT AUTHORITY\Network Service" /IACCEPTSQLSERVERLICENSETERMS

    Read the article

  • Cleaning Up Unused Users and Groups (Ubuntu 10.10 Server)

    - by PhpMyCoder
    Hello experts, I'm very much a beginner when it comes to Ubuntu and I've been learning the ropes by diving in and writing a (backend-language independent) web app framework that relies on apache, some clever mod_rewrites, Ubuntu permissions, groups, and users. One thing that really annoys my inner clean-freak is that there are loads of users and groups that are created when Ubuntu is installed that are never used (Or so I think). Since I'm just running a simple web app server, I would like to know: What users/groups can I remove? Since you'll probably ask for it...here's a list of all the users on my box (excluding the ones I know that I need): root daemon bin sys sync man lp mail uucp proxy backup list irc gnats nobody libuuid syslog And a list of all of the groups: root daemon bin sys adm tty disk lp mail uucp man proxy kmem dialout fax voice cdrom floppy tape sudo audio dip backup operator list irc src gnats shadow utmp video sasl plugdev users nogroup libuuid crontab syslog fuse mlocate ssl-cert lpadmin sambashare admin

    Read the article

  • UnicodeEncodeError when uploading files in Django admin

    - by Samuel Linde
    Note: I asked this question on StackOverflow, but I realize this might be a more proper place to ask this kind of question. I'm trying to upload a file called 'Testaråäö.txt' via the Django admin app. I'm running Django 1.3.1 with Gunicorn 0.13.4 and Nginx 0.7.6.7 on a Debian 6 server. Database is PostgreSQL 8.4.9. Other Unicode data is saved to the database with no problem, so I guess the problem must be with the filesystem somehow. I've set http { charset utf-8; } in my nginx.conf. LC_ALL and LANG is set to 'sv_SE.UTF-8'. Running 'locale' verifies this. I even tried setting LC_ALL and LANG in my nginx init script just to make sure locale is set properly. Here's the traceback: Traceback (most recent call last): File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/handlers/base.py", line 111, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/options.py", line 307, in wrapper return self.admin_site.admin_view(view)(*args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 93, in _wrapped_view response = view_func(request, *args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/views/decorators/cache.py", line 79, in _wrapped_view_func response = view_func(request, *args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 197, in inner return view(request, *args, **kwargs) File "/srv/django/letebo/app/cms/admin.py", line 81, in change_view return super(PageAdmin, self).change_view(request, obj_id) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 28, in _wrapper return bound_func(*args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 93, in _wrapped_view response = view_func(request, *args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 24, in bound_func return func(self, *args2, **kwargs2) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/transaction.py", line 217, in inner res = func(*args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/options.py", line 985, in change_view self.save_formset(request, form, formset, change=True) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/options.py", line 677, in save_formset formset.save() File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/forms/models.py", line 482, in save return self.save_existing_objects(commit) + self.save_new_objects(commit) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/forms/models.py", line 613, in save_new_objects self.new_objects.append(self.save_new(form, commit=commit)) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/forms/models.py", line 717, in save_new obj.save() File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/base.py", line 460, in save self.save_base(using=using, force_insert=force_insert, force_update=force_update) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/base.py", line 504, in save_base self.save_base(cls=parent, origin=org, using=using) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/base.py", line 543, in save_base for f in meta.local_fields if not isinstance(f, 
AutoField)] File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/fields/files.py", line 255, in pre_save file.save(file.name, file, save=False) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/fields/files.py", line 92, in save self.name = self.storage.save(name, content) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/files/storage.py", line 48, in save name = self.get_available_name(name) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/files/storage.py", line 74, in get_available_name while self.exists(name): File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/files/storage.py", line 218, in exists return os.path.exists(self.path(name)) File "/srv/.virtualenvs/letebo/lib/python2.6/genericpath.py", line 18, in exists st = os.stat(path) UnicodeEncodeError: 'ascii' codec can't encode characters in position 52-54: ordinal not in range(128) I tried running Gunicorn with debugging turned on, and the file uploads without any problem at all. I suppose this must mean that the issue is with Nginx. Still beats me where to look, though. Here are the raw response headers from Gunicorn and Nginx, if it makes any sense: Gunicorn: HTTP/1.1 302 FOUND Server: gunicorn/0.13.4 Date: Thu, 09 Feb 2012 14:50:27 GMT Connection: close Transfer-Encoding: chunked Expires: Thu, 09 Feb 2012 14:50:27 GMT Vary: Cookie Last-Modified: Thu, 09 Feb 2012 14:50:27 GMT Location: http://my-server.se:8000/admin/cms/page/15/ Cache-Control: max-age=0 Content-Type: text/html; charset=utf-8 Set-Cookie: messages="yada yada yada"; Path=/ Nginx: HTTP/1.1 500 INTERNAL SERVER ERROR Server: nginx/0.7.67 Date: Thu, 09 Feb 2012 14:50:57 GMT Content-Type: text/html; charset=utf-8 Transfer-Encoding: chunked Connection: close Vary: Cookie 500 UPDATE: Both locale.getpreferredencoding() and sys.getfilesystemencoding() outputs 'UTF-8'. locale.getdefaultlocale() outputs ('sv_SE', 'UTF8'). This seem correct to me, so I'm still not sure why I keep getting these errors.

    Read the article

  • Visual Query Builder

    - by johnnyArt
    I've been using "dbForge Query Builder" lately and I've gotten used to the ease of building and testing a query, especially for the complex ones with inner joins, aliases and multiple conditionals. The expiry date of the trial is about to arrive, and while I want to remain on the legal side of things, I'd rather not pay the 50 USD it costs (although I must say it's pretty cheap for what it does). So my question would be: are there any free alternatives to replace this visual query builder? I've failed to find any and fear that my only two options are paying for it, or going to the dark side.

    Read the article

  • error while adding web service to server in website panel

    - by sam
    I got the following error while creating a website for a user in WebsitePanel. I am also not able to create any hosting space in the server's hosting plan – it shows 0 MB of space.

    Stack Trace:

    [SoapException: System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.UriFormatException: Invalid URI: The Authority/Host could not be parsed.
       at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind)
       at System.Uri..ctor(String uriString)
       at Microsoft.Web.Services3.WebServicesClientProtocol.set_Url(String value)
       at WebsitePanel.Server.Client.ServerProxyConfigurator.Configure(WebServicesClientProtocol proxy)
       at WebsitePanel.EnterpriseServer.ServiceProviderProxy.ServerInit(WebServicesClientProtocol proxy, ServerProxyConfigurator cnfg, String serverUrl, String serverPassword)
       at WebsitePanel.EnterpriseServer.ServiceProviderProxy.ServerInit(WebServicesClientProtocol proxy, ServerProxyConfigurator cnfg, Int32 serverId)
       at WebsitePanel.EnterpriseServer.ServiceProviderProxy.Init(WebServicesClientProtocol proxy, Int32 serviceId)
       at WebsitePanel.EnterpriseServer.WebAppGalleryController.InitFeedsByServiceId(Int32 UserId, Int32 serviceId)
       at WebsitePanel.EnterpriseServer.esWebApplicationGallery.GetGalleryApplicationsByServiceId(Int32 serviceId)
       --- End of inner exception stack trace ---]
       System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) +1485877
       System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) +221
       WebsitePanel.EnterpriseServer.esWebApplicationGallery.GetGalleryApplicationsByServiceId(Int32 serviceId) +68
       WebsitePanel.Portal.WebAppGalleryHelpers.GetGalleryApplicationsByServiceId(Int32 serviceId) +31

    Can anybody help me with this?

    Read the article

  • WCF WebService: Client can't connect as soon as "request client certificate" is activated

    - by Hinek
    I have a .NET 3.5 WCF web service hosted in IIS 6 and using an SSL certificate. The communication between client and server works. Then I activate "request client certificate" and the client can't connect anymore. Exception: System.ServiceModel.Security.SecurityNegotiationException: Could not establish secure channel for SSL/TLS with authority 'polizei-bv.stadt.hamburg.de'. Inner Exception: System.Net.WebException: The request was aborted: Could not create SSL/TLS secure channel. The certificate the client uses is in the certificate store (local computer), and the root CA is in the trusted root certification authorities store. Where can I check for an explanation on the server side? How can I check whether the client really supplies its certificate (the client is not on my side)?

    Read the article

  • Speeding up ROW_NUMBER in SQL Server

    - by BlueRaja
    We have a number of machines which record data into a database at sporadic intervals. For each record, I'd like to obtain the time period between this recording and the previous recording. I can do this using ROW_NUMBER as follows:

    WITH TempTable AS
    (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY Machine_ID ORDER BY Date_Time) AS Ordering
        FROM dbo.DataTable
    )
    SELECT  [Current].*,
            Previous.Date_Time AS PreviousDateTime
    FROM    TempTable AS [Current]
            INNER JOIN TempTable AS Previous
                ON  [Current].Machine_ID = Previous.Machine_ID
                AND Previous.Ordering = [Current].Ordering + 1

    The problem is, it goes really slowly (several minutes on a table with about 10k entries). I tried creating separate indices on Machine_ID and Date_Time, and a single combined index, but nothing helps. Is there any way to rewrite this query so it goes faster?
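    One avenue worth trying, assuming the server is SQL Server 2012 or later (the LAG window function is not available before that), is to read the previous row's Date_Time in the same pass instead of self-joining the CTE. A minimal sketch using the same table and column names:

    -- Sketch only: requires SQL Server 2012+ for LAG
    SELECT  d.*,
            LAG(d.Date_Time) OVER (PARTITION BY d.Machine_ID
                                   ORDER BY d.Date_Time) AS PreviousDateTime
    FROM    dbo.DataTable AS d;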

    Read the article

  • How to generate the right password format for Apache2 authentication in use with DBD and MySQL 5.1?

    - by Walkman
    I want to authenticate users for a folder from a MySQL 5.1 database with AuthType Basic. The passwords are stored in plain text (they are not really passwords, so it doesn't matter). The password format for Apache, however, only allows SHA1 and MD5 on Linux systems, as described here. How could I generate the right format with an SQL query? It seems the Apache format is a binary format with a length of 20 bytes, but the MySQL SHA1 function returns a 40-character hex string. My SQL query is something like this:

    SELECT CONCAT('{SHA}', BASE64_ENCODE(SHA1(access_key)))
    FROM user_access_keys
    INNER JOIN users ON user_access_keys.user_id = users.id
    WHERE name = %s

    where base64_encode is a stored function (MySQL 5.1 doesn't have TO_BASE64 yet). This query returns a 61-byte BLOB which is not the same format that Apache uses. How could I generate the same format? You can suggest other methods for this too. The point is that I want to authenticate users from a MySQL 5.1 database using plain text as the password.
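    The question itself points at the likely mismatch: Apache's {SHA} scheme is the Base64 of the raw 20-byte SHA-1 digest, while MySQL's SHA1() returns that digest as 40 hex characters, so Base64-encoding the hex string yields the 61-byte value seen above. A hedged sketch of the usual fix – converting the hex back to binary with UNHEX before encoding, reusing the stored base64_encode function from the question:

    -- Sketch: UNHEX turns SHA1()'s hex output back into the raw 20-byte digest
    SELECT CONCAT('{SHA}', BASE64_ENCODE(UNHEX(SHA1(access_key))))
    FROM user_access_keys
    INNER JOIN users ON user_access_keys.user_id = users.id
    WHERE name = %s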

    Read the article

  • Can a switch consume bandwidth?

    - by aashiq
    I have a network with a router and a switch. My ISP's optical fiber is connected to the media converter's input port. The Ethernet cable from the media converter's output port is connected to the switch, and another Ethernet cable runs from there to our inner MikroTik router. The router is then connected to another LAN switch, and from that switch every other switch gets its connection. That is our total network structure. Our bandwidth is 2 Mbps. Since the 11th of March our MRTG graph has shown high usage all the time, even when every switch except the LAN switch is powered off, and because of that our line is breaking up (voice calls). How is this possible? Every PC's bandwidth is limited, but when a PC is connected directly to the media converter the graph looks normal. That's why I can't blame my ISP.

    Read the article

  • Web server replica not working in other server

    - by user761076
    I have a Drupal installation (PHP + MySQL) on a server, and I'm trying to copy this installation to another server with the same configuration: same physical and virtual paths, same DB configuration, etc. The thing is, on my new server the homepage works, but the inner pages don't, so I guess it has something to do with rewriting (mod_rewrite is installed, and both .htaccess files are the same). When I access http://localhost/myweb/content/mypage I get a 404, or a "Forbidden" if I uncomment this in httpd.conf (the original httpd.conf does not have this entry):

    <Directory "path/to/docs">
        DirectoryIndex index.php index.html
        Options Indexes FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

    Any clue? Thank you

    Read the article

  • How do I join two worksheets in Excel as I would in SQL?

    - by Joel Coehoorn
    I have two worksheets in two different Excel files. They both contain a list of names and addresses. One is a master list that includes other fields, and the other is a list that only includes name, address and an id column, and was pared down by another office. I want to use the 2nd list to filter the first. I know how I could do this very easily with a database inner join, but I'm less clear on how to do this efficiently in Excel. How can I join two worksheets in Excel? Bonus points for showing how to do outer joins as well, and I would greatly prefer knowing how to do this without needing a macro.

    Read the article

  • Query Performance Degrades with High Number of Logical Reads

    - by electricsk8
    I'm using Confio Ignite8 to derive this information and to monitor waits. I have one query that runs frequently, and I notice that on some days there is an extremely high number of logical reads incurred – 300,000,000+ for 91,000 executions. On a good day the logical reads are much lower: 18,000,000 for 94,000 executions. The execution plan for the query uses clustered index seeks, and is below.

    StmtText
    |--Nested Loops(Inner Join, OUTER REFERENCES:([f].[ParentId]))
         |--Clustered Index Seek(OBJECT:([StructuredFN].[dbo].[Folder].[PK_Folders] AS [f]), SEEK:([f].[FolderId]=(8125)), WHERE:([StructuredFN].[dbo].[Folder].[DealId] as [f].[DealId]=(300)) ORDERED FORWARD)
         |--Clustered Index Seek(OBJECT:([StructuredFN].[dbo].[Folder].[PK_Folders] AS [p]), SEEK:([p].[FolderId]=[StructuredFN].[dbo].[Folder].[ParentId] as [f].[ParentId]), WHERE:([StructuredFN].[dbo].[Folder].[DealId] as [p].[DealId]=(300)) ORDERED FORWARD)

    Output from SET STATISTICS IO:

    Table 'Folder'. Scan count 0, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Any ideas on how to troubleshoot where these high logical reads come from on certain days but not others?
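    One way to see which cached statement is actually accumulating the reads on a bad day, assuming you can query the plan cache at the time, is sys.dm_exec_query_stats. A minimal sketch that lists the most read-heavy cached statements:

    -- Sketch: top cached statements by accumulated logical reads
    SELECT TOP (20)
           qs.total_logical_reads,
           qs.execution_count,
           qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
                            ELSE qs.statement_end_offset END
                       - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM   sys.dm_exec_query_stats AS qs
           CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;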

    Read the article

  • Cross join problem query

    - by user66121
    I have the following table structure:

    HUB_DETAILS (master): Branch_ID, Branch_Name
    VTRCheckList (master): CLid, CLName
    VTRCheckListDetails (detail): CLid, Branch_ID, VTRValue, vtrRespDate

    When I run the following query it does come back with all the checklist names along with all the branch names, but it shows the value against every branch, when in fact only one branch has data in the given date range. It should show 0 if there is no data for a checklist in the respective branch.

    SELECT VTRCheckList.CLName,
           Hub_Details.BranchName,
           SUM(CAST(VTRCheckListDetails.VtrValue AS int)) AS 'Total'
    FROM   VTRCheckListDetails
           INNER JOIN VTRCheckList ON VTRCheckListDetails.CLid = VTRCheckList.CLid
           CROSS JOIN Hub_Details
    WHERE  CONVERT(date, VTRCheckListDetails.vtrRespDate, 105) >= CONVERT(date, '01-01-2011', 105)
      AND  CONVERT(date, VTRCheckListDetails.vtrRespDate, 105) <= CONVERT(date, '30-01-2011', 105)
    GROUP BY VTRCheckList.CLName, Hub_Details.BranchName
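    A hedged sketch of one common way to get the expected shape: build every checklist/branch combination from the two master tables first, then LEFT JOIN the detail rows on both keys with the date filter in the join condition, so branches without data still show 0. Column names are taken from the question; the Branch_Name/BranchName spelling is assumed to match the real table.

    SELECT  cl.CLName,
            hd.Branch_Name,
            ISNULL(SUM(CAST(d.VTRValue AS int)), 0) AS Total
    FROM    VTRCheckList AS cl
            CROSS JOIN HUB_DETAILS AS hd
            LEFT JOIN VTRCheckListDetails AS d
                   ON  d.CLid = cl.CLid
                   AND d.Branch_ID = hd.Branch_ID
                   AND CONVERT(date, d.vtrRespDate, 105) >= CONVERT(date, '01-01-2011', 105)
                   AND CONVERT(date, d.vtrRespDate, 105) <= CONVERT(date, '30-01-2011', 105)
    GROUP BY cl.CLName, hd.Branch_Name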

    Read the article

  • How do I put logical operators in an Excel =IF Formula?

    - by Brian Hooper
    I'm trying to enter a formula to display text according to an IF condition. The best I can manage is something like... =IF(myval>=minval & myval <= maxval, "OK", "Not OK") But this appears to work exactly wrongly, displaying OK when myval is out of range and Not OK when it is in range. How do I specify the logical AND correctly? I have tried && as I have seen in questions here, and inner brackets, but these result in errors.
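    One detail worth noting here: in Excel the & operator is text concatenation, not a logical AND, so the condition above is not testing the range it appears to test. The logical AND is usually written with the AND() function, e.g. =IF(AND(myval>=minval, myval<=maxval), "OK", "Not OK"), and OR() plays the same role for a logical OR.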

    Read the article

  • Cannot delete folder - Content seems to be nested recursively

    - by RikuXan
    I cannot delete a folder located on my hard disk by any means. I don't quite know how it was created; all I know is that it is a pretty deep structure of folders (too deep to delete at once, since Windows complains the path name is too long), but the problem in the end is that I can't "pull out" the inner folders, because they don't seem to be folders anymore (the context menu lacks things like "Properties", "Cut", "Copy", "Delete" etc.). Here is a picture of what a right click looks like on one of these "folders": As you can see, the current folder is in very deep, but that is not the problem – rather the one I left-clicked on is. Does anyone have any advice on how to get rid of these? I tried a chkdsk; it said no errors. I also tried deleting those folders via a VMware Ubuntu VM, with no success. I also tried a batch file from a volunteer on the MS boards that should automatically de-nest such folders, but I guess mine is a special case, since the tool only created more such folders.

    Read the article

  • If a SQL Server Replication Distributor and Subscriber are on the same server, should a PUSH or PULL subscription be used?

    - by userx
    Thanks in advance for any help. I'm setting up a new Microsoft SQL Server replication and I have the Distributor and Subscriber running on the same server. The Publisher is on a remote server (as it is a production database and MS recommends that for high volumes, the Distributor should be remote). I don't know much about the inner workings of PUSH vs PULL subscriptions, but my gut tells me that a PUSH subscription would be less resource intensive because (1) the Distributor is already remote, so this shouldn't negatively affect the Publisher, and (2) pushing the transactions from the Distributor to the Subscriber is more efficient than the Subscriber polling the Distribution database. Does anyone have any resources or insight into PUSH vs PULL which would recommend one over the other? Is there really going to be that big of a difference in performance / reliability / security?

    Read the article

  • A unique identifier for a Domain

    - by jchoover
    I asked a question over on StackOverflow and was directed to ask a related one here to see if I could get any additional input. Basically, I am looking to have my application aware of what domain it's running under, if any at all. (I want to expose certain debugging facilities only in house, and due to our deployment model it isn't possible to have a different build.) Since I am over paranoid, I didn't want to just rely on the domain name to ensure we are in house. As such I noted the DOMAIN_CONTROLLER_INFO (http://msdn.microsoft.com/en-us/library/ms675912(v=vs.85).aspx ) returned from DsGetDcName (http://msdn.microsoft.com/en-us/library/ms675983(v=vs.85).aspx) has a GUID associated with it, however I can find little if any information on it. I am assuming this GUID is generated at the time the first DC in a domain is created, and that it would live on for the life of the domain. Does anyone else have any inner knowledge and would be kind enough to confirm or deny my assumptions?

    Read the article

  • WCF client endpoint identity - configuration question

    - by Roel
    Hi all, I'm having a strange situation here. I got it working, but I don't understand why. Situation is as follows: There is a WCF service which my application (a website) has to call. The WCF service exposes a netTcpBinding and requires Transport Security (Windows). Client and server are in the same domain, but on different servers. So generating a client results in the following config (mostly defaults) <system.serviceModel> <bindings> <netTcpBinding> <binding name="MyTcpEndpoint" ...> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Transport"> <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign"/> <message clientCredentialType="Windows" /> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint address="net.tcp://localhost:xxxxx/xxxx/xxx/1.0" binding="netTcpBinding" bindingConfiguration="MyTcpEndpoint" contract="Service.IMyService" name="TcpEndpoint"/> </client> </system.serviceModel> When I run the website and make the call to the service, I get the following error: System.ServiceModel.Security.SecurityNegotiationException: Either the target name is incorrect or the server has rejected the client credentials. ---> System.Security.Authentication.InvalidCredentialException: Either the target name is incorrect or the server has rejected the client credentials. ---> System.ComponentModel.Win32Exception: The logon attempt failed --- End of inner exception stack trace --- at System.Net.Security.NegoState.EndProcessAuthentication(IAsyncResult result) at System.Net.Security.NegotiateStream.EndAuthenticateAsClient(IAsyncResult asyncResult) at System.ServiceModel.Channels.WindowsStreamSecurityUpgradeProvider.WindowsStreamSecurityUpgradeInitiator.InitiateUpgradeAsyncResult.OnCompleteAuthenticateAsClient(IAsyncResult result) at System.ServiceModel.Channels.StreamSecurityUpgradeInitiatorAsyncResult.CompleteAuthenticateAsClient(IAsyncResult result) --- End of inner exception stack trace --- Server stack trace: at System.ServiceModel.AsyncResult.End[TAsyncResult](IAsyncResult result) at System.ServiceModel.Channels.ServiceChannel.SendAsyncResult.End(SendAsyncResult result) at System.ServiceModel.Channels.ServiceChannel.EndCall(String action, Object[] outs, IAsyncResult result) .... Now, if I just alter the configuration of the client like so: <endpoint address="net.tcp://localhost:xxxxx/xxxx/xxx/1.0" binding="netTcpBinding" bindingConfiguration="MyTcpEndpoint" contract="Service.IMyService" name="TcpEndpoint"> <identity> <dns /> </identity> </endpoint> everything works and my server happily reports that it got called by the service account which hosts the AppPool for my website. All good. My question now is: why does this work? What does this do? I got to this solution by mere trial-and-error. To me it seems that all the <dns /> tag does is tell the client to use the default DNS for authentication, but doesn't it do that anyway? Thanks for providing me with some insight.

    Read the article

  • Overriding the save() method of a model that uses django-mptt

    - by saturdayplace
    I've been using django-mptt in my project for a while now, it's fabulous. Recently, I've found a need to override a model's save() method that uses mptt, and I'm getting an error when I try to save a new instance of that model: Exception Type: ValueError at /admin/scrivener/page/add/ Exception Value: Cannot use None as a query value I'm assuming that this is a result of the fact that the instance hasn't been stuck into a tree yet, but I'm not sure how to go about fixing this. I added a comment about it onto a similar issue on the project's tracker, but I was hoping that someone here might be able to put me on the right track faster. Here's the traceback. Environment: Request Method: POST Request URL: http://localhost:8000/admin/scrivener/page/add/ Django Version: 1.2 rc 1 SVN-13117 Python Version: 2.6.4 Installed Applications: ['django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.admin', 'django.contrib.sitemaps', 'mptt', 'filebrowser', 'south', 'haystack', 'django_static', 'etc', 'scrivener', 'gregor', 'annunciator'] Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware') Traceback: File "B:\django-apps\3rd Party Source\django\core\handlers\base.py" in get_response 100. response = callback(request, *callback_args, **callback_kwargs) File "B:\django-apps\3rd Party Source\django\contrib\admin\options.py" in wrapper 239. return self.admin_site.admin_view(view)(*args, **kwargs) File "B:\django-apps\3rd Party Source\django\utils\decorators.py" in _wrapped_view 74. response = view_func(request, *args, **kwargs) File "B:\django-apps\3rd Party Source\django\views\decorators\cache.py" in _wrapped_view_func 69. response = view_func(request, *args, **kwargs) File "B:\django-apps\3rd Party Source\django\contrib\admin\sites.py" in inner 190. return view(request, *args, **kwargs) File "B:\django-apps\3rd Party Source\django\utils\decorators.py" in _wrapper 21. return decorator(bound_func)(*args, **kwargs) File "B:\django-apps\3rd Party Source\django\utils\decorators.py" in _wrapped_view 74. response = view_func(request, *args, **kwargs) File "B:\django-apps\3rd Party Source\django\utils\decorators.py" in bound_func 17. return func(self, *args2, **kwargs2) File "B:\django-apps\3rd Party Source\django\db\transaction.py" in _commit_on_success 299. res = func(*args, **kw) File "B:\django-apps\3rd Party Source\django\contrib\admin\options.py" in add_view 795. self.save_model(request, new_object, form, change=False) File "B:\django-apps\3rd Party Source\django\contrib\admin\options.py" in save_model 597. obj.save() File "B:\django-apps\scrivener\models.py" in save 205. self.url = self.get_absolute_url() File "B:\django-apps\3rd Party Source\django\utils\functional.py" in _curried 55. return _curried_func(*(args+moreargs), **dict(kwargs, **morekwargs)) File "B:\django-apps\3rd Party Source\django\db\models\base.py" in get_absolute_url 940. return settings.ABSOLUTE_URL_OVERRIDES.get('%s.%s' % (opts.app_label, opts.module_name), func)(self, *args, **kwargs) File "B:\django-apps\3rd Party Source\django\db\models\__init__.py" in inner 31. bits = func(*args, **kwargs) File "B:\django-apps\scrivener\models.py" in get_absolute_url 194. for ancestor in self.get_ancestors(): File "B:\django-apps\3rd Party Source\mptt\models.py" in get_ancestors 23. 
opts.tree_id_attr: getattr(self, opts.tree_id_attr), File "B:\django-apps\3rd Party Source\django\db\models\manager.py" in filter 141. return self.get_query_set().filter(*args, **kwargs) File "B:\django-apps\3rd Party Source\django\db\models\query.py" in filter 550. return self._filter_or_exclude(False, *args, **kwargs) File "B:\django-apps\3rd Party Source\django\db\models\query.py" in _filter_or_exclude 568. clone.query.add_q(Q(*args, **kwargs)) File "B:\django-apps\3rd Party Source\django\db\models\sql\query.py" in add_q 1131. can_reuse=used_aliases) File "B:\django-apps\3rd Party Source\django\db\models\sql\query.py" in add_filter 1000. raise ValueError("Cannot use None as a query value") Exception Type: ValueError at /admin/scrivener/page/add/ Exception Value: Cannot use None as a query value

    Read the article

  • Scope quandary with namespaces, function templates, and static data

    - by Adrian McCarthy
    This scoping problem seems like the type of C++ quandary that Scott Meyers would have addressed in one of his Effective C++ books. I have a function, Analyze, that does some analysis on a range of data. The function is called from a few places with different types of iterators, so I have made it a template (and thus implemented it in a header file). The function depends on a static table of data, AnalysisTable, that I don't want to expose to the rest of the code. My first approach was to make the table a static const inside Analysis. namespace MyNamespace { template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { static const int AnalysisTable[] = { /* data */ }; ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace It appears that the compiler creates a copy of AnalysisTable for each instantiation of Analyze, which is wasteful of space (and, to a small degree, time). So I moved the table outside the function like this: namespace MyNamespace { const int AnalysisTable[] = { /* data */ }; template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace There's only one copy of the table now, but it's exposed to the rest of the code. I'd rather keep this implementation detail hidden, so I introduced an unnamed namespace: namespace MyNamespace { namespace { // unnamed to hide AnalysisTable const int AnalysisTable[] = { /* data */ }; } // unnamed namespace template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace But now I again have multiple copies of the table, because each compilation unit that includes this header file gets its own. If Analyze weren't a template, I could move all the implementation detail out of the header file. But it is a template, so I seem stuck. My next attempt was to put the table in the implementation file and to make an extern declaration within Analyze. // foo.h ------ namespace MyNamespace { template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { extern const int AnalysisTable[]; ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace // foo.cpp ------ #include "foo.h" namespace MyNamespace { const int AnalysisTable[] = { /* data */ }; } This looks like it should work, and--indeed--the compiler is satisfied. The linker, however, complains, "unresolved external symbol AnalysisTable." Drat! (Can someone explain what I'm missing here?) The only thing I could think of was to give the inner namespace a name, declare the table in the header, and provide the actual data in an implementation file: // foo.h ----- namespace MyNamespace { namespace PrivateStuff { extern const int AnalysisTable[]; } // unnamed namespace template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { ... // implementation uses PrivateStuff::AnalysisTable return result; } } // namespace MyNamespace // foo.cpp ----- #include "foo.h" namespace MyNamespace { namespace PrivateStuff { const int AnalysisTable[] = { /* data */ }; } } Once again, I have exactly one instance of AnalysisTable (yay!), but other parts of the program can access it (boo!). The inner namespace makes it a little clearer that they shouldn't, but it's still possible. 
Is it possible to have one instance of the table and to move the table beyond the reach of everything but Analyze?

    Read the article
