Search Results

Search found 11924 results on 477 pages for 'symbolic reference'.


  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014, Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads that access memory-resident data. In-Memory OLTP lets us create memory-optimized tables, which in turn offer a significant performance improvement for a typical OLTP workload. The main objective of a memory-optimized table is to ensure that highly transactional tables can live in memory, and remain in memory, without losing a single record. The most significant part is that it still supports the majority of our Transact-SQL statements, and Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. The engine is designed for higher concurrency and minimal blocking: In-Memory OLTP alleviates locking by using a new type of multi-version optimistic concurrency control, and it substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

    Points to remember:
    - Memory-optimized tables are tables that use the new data structures and keywords added as part of In-Memory OLTP.
    - Disk-based tables are the normal tables we have been creating in SQL Server since its inception. These tables use fixed-size 8 KB pages that need to be read from and written to disk as a unit.
    - Natively compiled stored procedures are a new object type, supported by the In-Memory OLTP engine, that is compiled into machine code, which can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they can't be used to reference any disk-based table.
    - Interpreted Transact-SQL stored procedures are what SQL Server has always used.
    - Cross-container transactions are transactions that reference both memory-optimized tables and disk-based tables.
    - Interop is interpreted Transact-SQL that references memory-optimized tables.

    Using In-Memory OLTP
    The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with 32-bit editions.

    Creating Databases
    Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA.
    Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:

        CREATE DATABASE InMemoryDB
        ON PRIMARY (NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', SIZE = 500MB),
        FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
            (NAME = [InMemoryDB_mod_dir], FILENAME = 'S:\data\InMemoryDB_mod_dir'),
            (NAME = [InMemoryDB_mod_dir], FILENAME = 'R:\data\InMemoryDB_mod_dir')
        LOG ON (NAME = [SampleDB_log], FILENAME = 'L:\log\InMemoryDB_log.ldf', SIZE = 500MB)
        COLLATE Latin1_General_100_BIN2;

    The example code above creates files on three different drives (D:, S:, and R:) for the data files and in-memory storage, so if you would like to run this code, change the drive and folder locations to suit your environment. Also notice that a binary Windows (non-SQL) collation was specified; a BIN2 collation is the only collation supported at this point for any indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the commands below to do so:

        ALTER DATABASE AdventureWorks2012
            ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        GO
        ALTER DATABASE AdventureWorks2012
            ADD FILE (NAME = 'hekaton_mod', FILENAME = 'S:\data\hekaton_mod') TO FILEGROUP hekaton_mod;
        GO

    Creating Tables
    There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table must use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE example below.

    DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY)
    A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that data is persisted along with the schema.

    Indexing Memory-Optimized Tables
    A memory-optimized table must always have an index; for all tables created with DURABILITY = SCHEMA_AND_DATA this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified:

        CREATE TABLE Mem_Table
        (
            [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
            [City] VARCHAR(32) NULL,
            [State_Province] VARCHAR(32) NULL,
            [LastModified] DATETIME NOT NULL
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    As you can see in the query example above, we have used the clause MEMORY_OPTIMIZED = ON to make sure the table is treated as a memory-optimized table and not just a normal table, and we have used DURABILITY = SCHEMA_AND_DATA, which means it will persist data along with the metadata. You can also notice this table declares a PRIMARY KEY up front, which is a mandatory clause for memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus more on row and index storage in memory-optimized tables, so stay tuned for that as well. Now that we have covered the basics of memory-optimized tables and understood the key things to remember while using them, let's explore some examples to understand the performance gains of memory-optimized tables.
    I will be using the database I created earlier in this article, InMemoryDB, in the demo exercise below.

        USE InMemoryDB
        GO
        -- Create a disk-based table
        CREATE TABLE dbo.Disktable
        (
            Id INT IDENTITY,
            Name CHAR(40)
        )
        GO
        CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
        GO
        -- Create a memory-optimized table with a similar structure and DURABILITY = SCHEMA_AND_DATA
        CREATE TABLE dbo.Memorytable_durable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
        GO
        -- Create another memory-optimized table with a similar structure but DURABILITY = SCHEMA_ONLY
        CREATE TABLE dbo.Memorytable_nondurable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
        GO
        -- Now insert 100000 records into dbo.Disktable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Disktable(Name) VALUES ('sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO
        -- Do the same inserts for dbo.Memorytable_durable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_durable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO
        -- Finally, do the same inserts for dbo.Memorytable_nondurable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_nondurable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO

    The three inserts above took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100000 records on my machine with 8 GB of RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and a memory-optimized table with DURABILITY = SCHEMA_ONLY is even faster, as it does not bother persisting its data to disk at all. Koenig Solutions is one of the few organizations that offer IT training on SQL Server 2014 and all its updates. Now, I leave the decision on using memory-optimized tables to you; I hope you liked this article and that it helped you understand the fundamentals of In-Memory OLTP. Reference: Pinal Dave (http://blog.sqlauthority.com)
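    The article mentions natively compiled stored procedures without showing one. As a rough sketch only (the procedure name and body below are illustrative assumptions, not from the original post), a natively compiled procedure inserting into the dbo.Memorytable_durable table from the demo might look like this:

        -- Hedged sketch: a natively compiled procedure for dbo.Memorytable_durable.
        -- The procedure name and body are illustrative, not from the original article.
        CREATE PROCEDURE dbo.usp_InsertMemRow
            @Id INT, @Name CHAR(40)
        WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
        AS
        BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
            INSERT INTO dbo.Memorytable_durable (Id, Name) VALUES (@Id, @Name);
        END
        GO

    Note that NATIVE_COMPILATION requires SCHEMABINDING, EXECUTE AS, and an atomic block with an isolation level and language specified.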


  • Wrapping ASP.NET Client Callbacks

    - by Ricardo Peres
    Client Callbacks are probably the least known (and, I dare say, least loved) of all the AJAX options in ASP.NET, which also include the UpdatePanel, Page Methods, and Web Services. The reason for that, I believe, is their relative complexity:
    - get a reference to a JavaScript function;
    - dynamically register a function that calls the above reference;
    - have a JavaScript handler call the registered function.

    However, they have the nice advantage of being self-contained: they don't need additional files, such as web services or JavaScript libraries, static methods declared on a page, or any kind of attributes. So, here's what I want to do:
    - have a DOM element which exposes a method that is executed server-side, passing it a string and returning a string;
    - have a server-side event that handles the client-side call;
    - have two client-side user-supplied callback functions for handling the success and error results.

    I'm going to develop a custom control without a user interface that registers the client JavaScript method as well as a server-side event that can be hooked by some handler on a page. My markup will look like this:

        <script type="text/javascript">
        function onCallbackSuccess(result, context)
        {
        }

        function onCallbackError(error, context)
        {
        }
        </script>
        <my:CallbackControl runat="server" ID="callback" SendAllData="true" OnCallback="OnCallback"/>

    The control itself looks like this:

        public class CallbackControl : Control, ICallbackEventHandler
        {
            #region Public constructor
            public CallbackControl()
            {
                this.SendAllData = false;
                this.Async = true;
            }
            #endregion

            #region Public properties and events
            public event EventHandler<CallbackEventArgs> Callback;

            [DefaultValue(true)]
            public Boolean Async { get; set; }

            [DefaultValue(false)]
            public Boolean SendAllData { get; set; }
            #endregion

            #region Protected override methods
            protected override void Render(HtmlTextWriter writer)
            {
                writer.AddAttribute(HtmlTextWriterAttribute.Id, this.ClientID);
                writer.RenderBeginTag(HtmlTextWriterTag.Span);

                base.Render(writer);

                writer.RenderEndTag();
            }

            protected override void OnInit(EventArgs e)
            {
                String reference = this.Page.ClientScript.GetCallbackEventReference(this, "arg", "onCallbackSuccess", "context", "onCallbackError", this.Async);
                String script = String.Concat("\ndocument.getElementById('", this.ClientID, "').callback = function(arg, context, onCallbackSuccess, onCallbackError){", ((this.SendAllData == true) ? "__theFormPostCollection.length = 0; __theFormPostData = ''; WebForm_InitCallback(); " : String.Empty), reference, ";};\n");

                this.Page.ClientScript.RegisterStartupScript(this.GetType(), String.Concat("callback", this.ClientID), script, true);

                base.OnInit(e);
            }
            #endregion

            #region Protected virtual methods
            protected virtual void OnCallback(CallbackEventArgs args)
            {
                EventHandler<CallbackEventArgs> handler = this.Callback;

                if (handler != null)
                {
                    handler(this, args);
                }
            }
            #endregion

            #region ICallbackEventHandler Members
            String ICallbackEventHandler.GetCallbackResult()
            {
                CallbackEventArgs args = new CallbackEventArgs(this.Context.Items["Data"] as String);

                this.OnCallback(args);

                return (args.Result);
            }

            void ICallbackEventHandler.RaiseCallbackEvent(String eventArgument)
            {
                this.Context.Items["Data"] = eventArgument;
            }
            #endregion
        }

    And the event argument class:

        [Serializable]
        public class CallbackEventArgs : EventArgs
        {
            public CallbackEventArgs(String argument)
            {
                this.Argument = argument;
                this.Result = String.Empty;
            }

            public String Argument { get; private set; }

            public String Result { get; set; }
        }

    You will notice two properties on the CallbackControl:
    - Async: indicates if the call should be made asynchronously (the default) or synchronously;
    - SendAllData: indicates if the callback call will include the view and control state of all of the controls on the page, so that, on the server side, they will have their properties set when the Callback event is fired.

    The CallbackEventArgs class exposes two properties:
    - Argument: the read-only argument passed to the client-side function;
    - Result: the result to return to the client-side callback function, set from the Callback event handler.

    An example of a handler for the Callback event would be:

        protected void OnCallback(Object sender, CallbackEventArgs e)
        {
            e.Result = String.Join(String.Empty, e.Argument.Reverse());
        }

    Finally, in order to fire the Callback event from the client, you only need this:

        <input type="text" id="input"/>
        <input type="button" value="Get Result" onclick="document.getElementById('callback').callback(document.getElementById('input').value, 'context', onCallbackSuccess, onCallbackError)"/>

    The parameters of the callback function are:
    - arg: some string argument;
    - context: some context that will be passed to the callback functions (success or failure);
    - callbackSuccessFunction: some function that will be called when the callback succeeds;
    - callbackFailureFunction: some function that will be called if the callback fails for some reason.

    Give it a try and see if it helps!

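    For completeness, here is a minimal sketch of what the two empty client-side handlers from the markup above might do; the 'output' element id is an assumption for illustration, not part of the original post:

        // Hedged sketch: sample client-side handlers; the 'output' element is hypothetical.
        function onCallbackSuccess(result, context) {
            // display the server-computed result somewhere on the page
            document.getElementById('output').innerHTML = result;
        }

        function onCallbackError(error, context) {
            // surface the failure to the user
            alert('Callback failed: ' + error);
        }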

  • Silverlight Recruiting Application Part 4 - Navigation and Modules

    After our brief intermission (and the craziness of the Q1 2010 release week), we're back on track here, and today we get to dive into how we are going to navigate through our application as well as how to set up our modules. That way, as I start adding the functionality (adding Jobs and Applicants, Interview Scheduling, and finally a handy Dashboard), you'll see how everything is communicating back and forth. This is all leading up to an eventual webinar, in which I'll dive into this process and give an honest look at the current story for MVVM vs. code-behind applications. (For a look at the future with SL4 and a little thing called MEF, check out what Ross is doing over at his blog!)

    Preamble...
    Before getting into really talking about this app, I've done a little bit of work ahead of time to create a ton of files that I'll need. Since the webinar is going to cover the Dashboard, it's not here, but otherwise this is a look at what the project layout looks like (and remember, this is both projects, since they share the .Web). As you can see, from an architecture perspective the code-behind app is much smaller and more streamlined, aka a better fit for the one-man shop that is me. Each module in the MVVM app has the same setup, which is the Module class and corresponding Views and ViewModels. Since the code-behind app doesn't need a go-between project like Infrastructure, each MVVM module is instead replaced by a single Silverlight UserControl which will contain all the logic for each respective bit of functionality.

    My Very First Module
    Navigation is going to be key to my application, so I figured the first thing I would set up is my MenuModule. The first step here is creating a Silverlight class library named MenuModule, creating the View and ViewModel folders, and adding the MenuModule.cs class to handle module loading. The most important thing here is that my MenuModule inherits from IModule, which runs an Initialize on each module as it is created that, in my case, adds the views to the correct regions. Here's the MenuModule.cs code:

        public class MenuModule : IModule
        {
            private readonly IRegionManager regionManager;
            private readonly IUnityContainer container;

            public MenuModule(IUnityContainer container, IRegionManager regionmanager)
            {
                this.container = container;
                this.regionManager = regionmanager;
            }

            public void Initialize()
            {
                var addMenuView = container.Resolve<MenuView>();
                regionManager.Regions["MenuRegion"].Add(addMenuView);
            }
        }

    Pretty straightforward here... We inject a container and region manager from Prism/Unity, then upon initialization we grab the view (out of our Views folder) and add it to the region it needs to live in. Simple, right? When the MenuView is created, the only thing in the code-behind is a reference that sets the MenuViewModel as the DataContext. I'd like to achieve MVVM nirvana and have zero code-behind by placing the viewmodel in the XAML, but for the reasons listed further below I can't.

    Navigation - MVVM
    Since navigation isn't the biggest concern in putting this whole thing together, I'm using the Button control to handle the different options for loading up views/modules. There is another reason for this: out of the box, Prism has command support for buttons, which is one less custom command I had to work up for the functionality I would need. This comes from the Microsoft.Practices.Composite.Presentation assembly and looks as follows when put in code:

        <Button x:Name="xGoToJobs"
                Style="{StaticResource menuStyle}"
                Content="Jobs"
                cal:Click.Command="{Binding GoModule}"
                cal:Click.CommandParameter="JobPostingsView" />

    For quick reference, 'menuStyle' is just taking care of margins and spacing; otherwise it looks, feels, and functions like everyone's favorite Button. What MVVMs this up is that the Click.Command is tying to a DelegateCommand (also coming from Prism) on the back end. This setup allows you to tie user interaction to a command you set up in your viewmodel, which replaces the standard event-based setup you'd see in the code-behind app. Due to databinding magic, it all just works. When we look at the DelegateCommand in code, it ends up like this:

        public class MenuViewModel : ViewModelBase
        {
            private readonly IRegionManager regionManager;

            public DelegateCommand<object> GoModule { get; set; }

            public MenuViewModel(IRegionManager regionmanager)
            {
                this.regionManager = regionmanager;
                this.GoModule = new DelegateCommand<object>(this.goToView);
            }

            public void goToView(object obj)
            {
                MakeMeActive(this.regionManager, "MainRegion", obj.ToString());
            }
        }

    Again for reference, ViewModelBase takes care of INotifyPropertyChanged and MakeMeActive, which switches views in the MainRegion based on the parameters (a sketch of such a base class appears at the end of this post). So our public DelegateCommand GoModule ties to our command on the view, which in turn calls goToView, and the parameter on the button is the name of the view (which we pass with obj.ToString()) to activate. And how do the views get the names I can pass as a string? When I called regionManager.Regions[regionname].Add(view), there is an overload that allows for .Add(view, "viewname"), with viewname being what I use to activate views. You'll see that in action next installment; I just wanted to clarify how that works. With this setup, I create two more buttons in my MenuView and the MenuModule is good to go. The last step is to make sure my MenuModule loads in my Bootstrapper:

        protected override IModuleCatalog GetModuleCatalog()
        {
            ModuleCatalog catalog = new ModuleCatalog();
            // add modules here
            catalog.AddModule(typeof(MenuModule.MenuModule));
            return catalog;
        }

    Clean, simple, MVVM-delicious.

    Navigation - Code-Behind
    Keeping with the history of significantly shorter code-behind sections in this series, navigation will be no different. I promise. As I explained in a prior post, due to the one-project setup I don't have to worry about the same concerns, so my menu is part of MainPage.xaml. I can cheese it a bit, though; since I've already got three buttons all set, I'm just copying that code and adding three click events instead of the command/commandparameter setup:

        <!-- Menu Region -->
        <StackPanel Grid.Row="1" Orientation="Vertical">
            <Button x:Name="xJobsButton" Content="Jobs" Style="{StaticResource menuStyleCB}" Click="xJobsButton_Click" />
            <Button x:Name="xApplicantsButton" Content="Applicants" Style="{StaticResource menuStyleCB}" Click="xApplicantsButton_Click" />
            <Button x:Name="xSchedulingModule" Content="Scheduling" Style="{StaticResource menuStyleCB}" Click="xSchedulingModule_Click" />
        </StackPanel>

    Simple, easy-to-use events, and no extra assemblies required! Since the code for loading each view will be similar, we'll focus on JobsView for now. The code-behind with this setup looks something like...

        private JobsView _jobsView;

        public MainPage()
        {
            InitializeComponent();
        }

        private void xJobsButton_Click(object sender, RoutedEventArgs e)
        {
            if (!(MainRegion.Content is JobsView))
            {
                if (_jobsView == null)
                    _jobsView = new JobsView();
                MainRegion.Content = _jobsView;
            }
        }

    What am I doing here? First, for each 'view' I create a private reference which MainPage will hold on to. This allows for a little bit of state maintenance when switching views. When a button is clicked, first we make sure the 'view' type isn't already active (why load it again if it is already at center stage?), then we check if the view has been created and create it if necessary, then load it up. Three steps to switching views; easy as pie.

    Part 4 Results
    The end result of all this is that I now have a menu module (MVVM) and a menu section (code-behind) that load their respective views. Since I'm using the same exact XAML (except with commands/events depending on the project), the end result for both is again exactly the same, and I'll show a slightly larger image to show it off. Next time, we add the Jobs module and wire up RadGridView and a separate edit page to handle adding and editing new jobs. That's when things get fun. And somewhere down the line, I'll make the menu look slicker. :)
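    The post references a ViewModelBase without showing it. The following is only a hedged sketch of what such a base class might contain, under the assumption that MakeMeActive simply looks a view up by name and activates it in the given region; it is not the actual class from the series:

        // Hedged sketch of a possible ViewModelBase; an assumption based on how
        // the post describes it, not the original implementation.
        using System.ComponentModel;
        using Microsoft.Practices.Composite.Regions;

        public abstract class ViewModelBase : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string propertyName)
            {
                PropertyChangedEventHandler handler = this.PropertyChanged;
                if (handler != null)
                {
                    handler(this, new PropertyChangedEventArgs(propertyName));
                }
            }

            // Activates the named view in the given region, matching the
            // .Add(view, "viewname") registration described in the post.
            protected void MakeMeActive(IRegionManager regionManager, string regionName, string viewName)
            {
                IRegion region = regionManager.Regions[regionName];
                object view = region.GetView(viewName);

                if (view != null)
                {
                    region.Activate(view);
                }
            }
        }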


  • Using Lightbox with _Screen

    Although I have to admit that I discovered Bernard Bout's ideas and concepts about implementing a lightbox in Visual FoxPro quite a while ago, there was no "spare" time in active projects that allowed me to have a closer look into his solution(s). Luckily, these days I received a demand to focus a little bit more on this. This article describes the steps for integrating and making use of Bernard's lightbox class in combination with _Screen in Visual FoxPro. The requirement in this project was to be able to visually lock the whole application (the _Screen area) and guide the user to a notification that should not be easily ignored. Depending on its importance, any current user activity should be interrupted and focus put onto the notification.

    Getting the "meat", eh, source code
    Please check out Bernard's blog on Foxite directly in order to get the latest and greatest version. At the time of writing this article I use version 6.0, as described in this blog entry: The Fastest Lightbox Ever. The Lightbox class is sub-classed from the imgCanvas class from the GdiPlusX project on VFPx, and therefore you need to have the source code of GdiPlusX as well and integrate it into your development environment. The version I use is available here: Release GDIPlusX 1.20. The first time you open the bbGdiLightbox class, VFP might ask you to update the reference to gdiplusx.vcx. As we have the sources, no problem, and you have access to Bernard's code. The class itself is pretty easy to understand: some properties that you do not need to change, and three methods: Setup(), ShowLightbox() and BeforeDraw().

    The challenge - _Screen or not?
    Reading Bernard's article about the fastest lightbox ever, he states the following: "The class will only work on a form. It will not support any other containers." Really? And what about _Screen? Isn't that a form class, too? Yes, of course it is, but nonetheless trying to use _Screen directly will fail. Well, let's have a look at the code to see why:

        WITH This
            .Left = 0
            .Top = 0
            .Height = ThisForm.Height
            .Width = ThisForm.Width
            .ZOrder(0)
            .Visible = .F.
        ENDWITH

    During the setup of the lightbox, as well as while capturing the image as a replacement for your forms and controls, the object reference Thisform is used, which is a little bit restrictive in my opinion. But let's continue. The second issue lies in the method ShowLightbox() and is introduced by the call of .Bitmap.FromScreen():

        Lparameters tlVisibility
        * tlVisibility - show or hide (T/F)
        * grab a screen dump with controls
        IF tlVisibility
            Local loCaptureBmp As xfcBitmap
            Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage
            lnTitleHeight = Iif(ThisForm.TitleBar = 1, Sysmetric(9), 0)
            lnLeftBorder  = Iif(ThisForm.BorderStyle < 2, 0, Sysmetric(3))
            lnTopBorder   = Iif(ThisForm.BorderStyle < 2, 0, Sysmetric(4))
            With _Screen.System.Drawing
                loCaptureBmp = .Bitmap.FromScreen(ThisForm.HWnd, ;
                    lnLeftBorder, ;
                    lnTopBorder + lnTitleHeight, ;
                    ThisForm.Width, ;
                    ThisForm.Height)
            ENDWITH
            * save it to a property
            This.capturebmp = loCaptureBmp
            ThisForm.SetAll("Visible", .F.)
            This.Draw()
            This.Visible = .T.
        ELSE
            ThisForm.SetAll("Visible", .T.)
            This.Visible = .F.
        ENDIF

    My first trials in using the class ended in an exception, GdiPlusError:OutOfMemory, thrown by the Bitmap object. Frankly speaking, this happened mainly because of my lack of knowledge about GdiPlusX. After reading some documentation, especially about the FromScreen() method, I experimented a little bit. Capturing the visible area of _Screen actually was not the real problem; the problem was the dimensions I specified for the bitmap.

    The modifications - step by step
    First of all, get rid of the restrictive object references on Thisform and change them into either This.Parent or, more generically, into This.oForm (even better: This.oControl). The Lightbox.Setup() method now sets the necessary object reference like so:

        *====================================================================
        * Initial setup
        * Default value: This.oControl = "This.Parent"
        * Alternative:   This.oControl = "_Screen"
        *====================================================================
        With This
            .oControl = Evaluate(.oControl)
            If Vartype(.oControl) == T_OBJECT
                .Anchor = 0
                .Left = 0
                .Top = 0
                .Width = .oControl.Width
                .Height = .oControl.Height
                .Anchor = 15
                .ZOrder(0)
                .Visible = .F.
            EndIf
        Endwith

    Also, based on other developers' comments on Bernard's articles about the lightbox concept and its evolution, I found the source code to handle the differences between a form and _Screen; it goes into Lightbox.ShowLightbox() like this:

        *====================================================================
        * tlVisibility - show or hide (T/F)
        * grab a screen dump with controls
        *====================================================================
        Lparameters tlVisibility
        Local loControl
        m.loControl = This.oControl
        If m.tlVisibility
            Local loCaptureBmp As xfcBitmap
            Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage
            lnTitleHeight = Iif(m.loControl.TitleBar = 1, Sysmetric(9), 0)
            lnLeftBorder  = Iif(m.loControl.BorderStyle < 2, 0, Sysmetric(3))
            lnTopBorder   = Iif(m.loControl.BorderStyle < 2, 0, Sysmetric(4))
            With _Screen.System.Drawing
                If Upper(m.loControl.Name) == Upper("Screen")
                    loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd)
                Else
                    loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd, ;
                        lnLeftBorder, ;
                        lnTopBorder + lnTitleHeight, ;
                        m.loControl.Width, ;
                        m.loControl.Height)
                EndIf
            Endwith
            * save it to a property
            This.CaptureBmp = loCaptureBmp
            m.loControl.SetAll("Visible", .F.)
            This.Draw()
            This.Visible = .T.
        Else
            This.CaptureBmp = .Null.
            m.loControl.SetAll("Visible", .T.)
            This.Visible = .F.
        Endif

    Are we done? Almost... Although Bernard says it clearly in his article: "Just drop the class on a form and call it as shown." It did not become clear to me at first with _Screen, but, yeah, he is right. Dropping the class on a form provides a permanent link between those two classes; it creates a valid This.Parent object reference. Bearing in mind that the lightbox class cannot be "dropped" on _Screen, we have to create the same type of binding during runtime execution, like so:

        *====================================================================
        * Create global lightbox component
        *====================================================================
        Local llOk, loException As Exception
        m.llOk = .F.
        m.loException = .Null.
        If Not Vartype(_Screen.Lightbox) == "O"
            Try
                _Screen.AddObject("Lightbox", "bbGdiLightbox")
            Catch To m.loException
                Assert .F. Message m.loException.Message
            EndTry
        EndIf
        m.llOk = (Vartype(_Screen.Lightbox) == "O")
        Return m.llOk

    Through runtime instantiation we create a valid binding to This.Parent in the lightbox object, and the code works as expected with _Screen.

    Ease your life: use properties instead of constants
    Having a closer look at the BeforeDraw() method might whet your appetite to simplify the code a little bit. Looking at the sample screenshots in Bernard's article, you see several forms in different colors. This got me to modify the code like so:

        *====================================================================
        * Apply the actual lightbox effect on the captured bitmap.
        *====================================================================
        If Vartype(This.CaptureBmp) == T_OBJECT
            Local loGfx As xfcGraphics
            loGfx = This.oGfx
            With _Screen.System.Drawing
                loGfx.DrawImage(This.CaptureBmp, This.Rectangle, This.Rectangle, .GraphicsUnit.Pixel)
                * change the colours as needed here
                * possible colours are (220,128,0,0), (220,0,0,128) etc.
                loBrush = .SolidBrush.New(.Color.FromArgb( ;
                    This.Opacity, .Color.FromRGB(This.BorderColor)))
                loGfx.FillRectangle(loBrush, This.Rectangle)
            Endwith
        Endif

    Create an additional property Opacity to specify the degree of translucency you would like to have, without the need to change the code in each instance of the class. This way you only need to change the values of Opacity and BorderColor to tweak the appearance of your lightbox. This can be quite helpful to signal different levels of importance (i.e. green, yellow, orange, red, etc.) of notifications to the users of the application.

    Final thoughts
    Using the lightbox concept in combination with _Screen instead of forms is possible. Jim Wiggins already comments on Bernard's article suggesting to loop through the _Screen.Forms collection in order to cascade the lightbox visibility to all active forms. Good idea. But honestly, I believe that instead of looping all forms one could use _Screen.SetAll("ShowLightbox", .T./.F., "Form") with a Form.ShowLightbox_Access method to gain more speed. The modifications described above might provide even more features to your applications while consuming fewer resources. Additionally, the restriction of capturing only forms no longer exists: using _Screen you are able to capture and cover anything. The captured area of _Screen does not include any toolbars, docked windows, or menus. Therefore, it is advisable to take this concept to a higher level and combine it with additional classes that handle the state of toolbars, docked windows, and menus. Which I did for the customer's project.


  • SortedDictionary and SortedList

    - by Simon Cooper
    Apart from Dictionary<TKey, TValue>, there are two other dictionaries in the BCL: SortedDictionary<TKey, TValue> and SortedList<TKey, TValue>. On the face of it, these two classes do the same thing: provide an IDictionary<TKey, TValue> interface where the iterator returns the items sorted by the key. So what's the difference between them, and when should you use one rather than the other? (As in my previous post, I'll assume you have some basic algorithm & data structure knowledge.)

    SortedDictionary
    We'll cover SortedDictionary first. This is implemented as a special sort of binary tree called a red-black tree. Essentially, it's a binary tree that uses various constraints on how the nodes of the tree can be arranged to ensure the tree is always roughly balanced (for more gory algorithmic details, see the wikipedia link above). What I'm concerned about in this post is how the .NET SortedDictionary is actually implemented. In .NET 4, behind the scenes, the actual implementation of the tree is delegated to a SortedSet<KeyValuePair<TKey, TValue>>. Each node in the tree is stored as a separate SortedSet<T>.Node object (remember, in a SortedDictionary, T is instantiated to KeyValuePair<TKey, TValue>):

        class Node
        {
            public bool IsRed;
            public T Item;
            public SortedSet<T>.Node Left;
            public SortedSet<T>.Node Right;
        }

    The SortedSet only stores a reference to the root node; all the data in the tree is accessed by traversing the Left and Right node references until you reach the node you're looking for. Each individual node can be physically stored anywhere in memory; what's important is the relationship between the nodes. This is also why there is no constructor to SortedDictionary or SortedSet that takes an integer representing the capacity; there are no internal arrays that need to be created and resized. This may seem trivial, but it's an important distinction between SortedDictionary and SortedList that I'll cover later on. And that's pretty much it; it's a standard red-black tree. Plenty of webpages and data structure books cover the algorithms behind the tree itself far better than I could. What's interesting is the comparison between SortedDictionary and SortedList, which I'll cover at the end. As a side point, SortedDictionary has existed in the BCL ever since .NET 2. That means that, all through .NET 2, 3, and 3.5, there has been a bona fide sorted set class in the BCL (called TreeSet). However, it was internal, so it couldn't be used outside System.dll. Only in .NET 4 was this class exposed as SortedSet.

    SortedList
    Whereas SortedDictionary doesn't use any backing arrays, SortedList does. It is implemented just as the name suggests: two arrays, one containing the keys, and one the values (the original post just uses random letters for the values). The items in the keys array are always guaranteed to be stored in sorted order, and the value corresponding to each key is stored at the same index as the key in the values array. In this example, the value for key item 5 is 'z', and for key item 8 it is 'm'. Whenever an item is inserted into or removed from the SortedList, a binary search is run on the keys array to find the correct index, then all the items in the arrays are shifted to accommodate the new or removed item. For example, if the key 3 was removed, a binary search would be run to find the array index the item was at, then everything above that index would be moved down by one; and if the key/value pair {7, 'f'} was then added, a binary search would be run on the keys to find the index to insert the new item, and everything above that index would be moved up to accommodate it. If another item was then added, both arrays would be resized (to a length of 10) before the new item was added to the arrays. As you can see, any insertion or removal in the middle of the list requires a proportion of the array contents to be moved: an O(n) operation. However, if the insertion or removal is at the end of the array (i.e. the largest key), then it's only O(log n): the cost of the binary search to determine that it does actually need to be added at the end (excluding the occasional O(n) cost of resizing the arrays to fit more items). As a side effect of using backing arrays, SortedList offers IList Keys and Values views that simply use the backing keys or values arrays, as well as various methods utilising the array index of stored items, which SortedDictionary does not (and cannot) offer.

    The Comparison
    So, when should you use one and not the other? Well, here are the important differences:

    Memory usage
    SortedDictionary and SortedList have very different memory profiles. SortedDictionary:
    - has a memory overhead of one object instance, a bool, and two references per item. On 64-bit systems, this adds up to ~40 bytes, not including the stored item and the reference to it from the Node object.
    - stores the items in separate objects that can be spread all over the heap. This helps to keep memory fragmentation low, as the individual node objects can be allocated wherever there's a spare 60 bytes.

    In contrast, SortedList:
    - has no additional overhead per item (only the reference to it in the array entries); however, the backing arrays can be significantly larger than you need, because every time the arrays are resized they double in size. That means that if you add 513 items to a SortedList, the backing arrays will each have a length of 1024. To counteract this, the TrimExcess method resizes the arrays back down to the actual size needed, or you can simply assign list.Capacity = list.Count.
    - stores its items in a continuous block in memory. If the list stores thousands of items, this can cause significant problems with Large Object Heap memory fragmentation as the array resizes, which SortedDictionary doesn't have.

    Performance
    Operations on a SortedDictionary always have O(log n) performance, regardless of where in the collection you're adding or removing items. In contrast, SortedList has O(n) performance when you're altering the middle of the collection. If you're adding or removing from the end (i.e. the largest item), then performance is O(log n), the same as SortedDictionary (in practice, it will likely be slightly faster, due to the array items all being in the same area in memory, also called locality of reference).

    So, when should you use one and not the other? As always with these sorts of things, there are no hard-and-fast rules. But generally, if you:
    - need to access items using their index within the collection,
    - are populating the dictionary all at once from sorted data,
    - aren't adding or removing keys once it's populated,
    then use a SortedList. But if you:
    - don't know how many items are going to be in the dictionary,
    - are populating the dictionary from random, unsorted data,
    - are adding & removing items randomly,
    then use a SortedDictionary. The default (again, there are no definite rules on these sorts of things!) should be to use SortedDictionary, unless there's a good reason to use SortedList, due to the bad performance of SortedList when altering the middle of the collection.
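    To illustrate the capacity behaviour described above, here is a minimal sketch (assuming .NET 4, where both TrimExcess and Capacity are available on SortedList<TKey, TValue>):

        // Minimal sketch of the backing-array doubling and trimming described above.
        using System;
        using System.Collections.Generic;

        class CapacityDemo
        {
            static void Main()
            {
                var list = new SortedList<int, string>();
                for (int i = 0; i < 513; i++)
                {
                    list.Add(i, "value" + i);
                }

                Console.WriteLine(list.Capacity); // 1024: the arrays double each time they fill up
                list.TrimExcess();                // resize the backing arrays down to Count
                Console.WriteLine(list.Capacity); // 513
            }
        }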


  • How to use ULS in SharePoint 2010 for Custom Code Exception Logging?

    - by venkatx5
    What is ULS in SharePoint 2010?
    ULS stands for Unified Logging Service, which captures and writes exceptions/logs to a log file (a plain text file with a .log extension). SharePoint logs each and every exception with ULS. SharePoint administrators should know ULS; it's very useful when anything goes wrong. But ask any SharePoint 2007 administrator to check the log file and most of them will kill you, because reading and understanding the log file is not easy. Imagine opening a plain text file of 20 MB in Notepad and going through it line by line. Microsoft has since developed a tool, "ULS Viewer", to view those log files in an easily readable format. This tool also helps filter events based on exception priority. You can read this blog post to learn about ULS Viewer in detail.

    Where to get ULS Viewer?
    ULS Viewer is developed by Microsoft and is available to download for free. URL: http://code.msdn.microsoft.com/ULSViewer/Release/ProjectReleases.aspx?ReleaseId=3308. Note: even though this tool was developed by Microsoft, it is not supported by Microsoft. That means you can't get support for this tool from Microsoft; use it at your own risk. By the way, what's the risk in viewing log files?!

    How to use ULS in SharePoint 2010 custom code?
    ULS can be extended for use in user solutions to log exceptions. In detail, a developer can use ULS to log his own application errors and exceptions to the SharePoint log files. So now it's all in a single place (that's why it's called "Unified Logging"). In this article I am going to use Waldek's code (reference link); his article covers the core, and I am writing the container for it (basically, how to implement the code in detail). Let's see the steps.

    - Open Visual Studio 2010 -> File -> New Project -> Visual C# -> Windows -> Class Library -> Name: ULSLogger (make sure you've selected .NET Framework 3.5).
    - In the Solution Explorer panel, rename Class1.cs to LoggingService.cs.
    - Right-click References -> Add Reference -> under the .NET tab select "Microsoft.SharePoint".
    - Right-click the project -> Properties -> select the "Signing" tab -> check "Sign the assembly".
    - In the drop-down below, select <New>, enter "ULSLogger", and uncheck the "Protect my key with a password" option.
    - Now copy and paste the code below (or just refer to it... :-) ):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.SharePoint;
        using Microsoft.SharePoint.Administration;
        using System.Runtime.InteropServices;

        namespace ULSLogger
        {
            public class LoggingService : SPDiagnosticsServiceBase
            {
                public static string vsDiagnosticAreaName = "Venkats SharePoint Logging Service";
                public static string CategoryName = "vsProject";
                public static uint uintEventID = 700; // Event ID

                private static LoggingService _Current;
                public static LoggingService Current
                {
                    get
                    {
                        if (_Current == null)
                        {
                            _Current = new LoggingService();
                        }
                        return _Current;
                    }
                }

                private LoggingService()
                    : base("Venkats SharePoint Logging Service", SPFarm.Local)
                {
                }

                protected override IEnumerable<SPDiagnosticsArea> ProvideAreas()
                {
                    List<SPDiagnosticsArea> areas = new List<SPDiagnosticsArea>
                    {
                        new SPDiagnosticsArea(vsDiagnosticAreaName, new List<SPDiagnosticsCategory>
                        {
                            new SPDiagnosticsCategory(CategoryName, TraceSeverity.Medium, EventSeverity.Error)
                        })
                    };
                    return areas;
                }

                public static string LogErrorInULS(string errorMessage)
                {
                    string strExecutionResult = "Message Not Logged in ULS. ";
                    try
                    {
                        SPDiagnosticsCategory category = LoggingService.Current.Areas[vsDiagnosticAreaName].Categories[CategoryName];
                        LoggingService.Current.WriteTrace(uintEventID, category, TraceSeverity.Unexpected, errorMessage);
                        strExecutionResult = "Message Logged";
                    }
                    catch (Exception ex)
                    {
                        strExecutionResult += ex.Message;
                    }
                    return strExecutionResult;
                }

                public static string LogErrorInULS(string errorMessage, TraceSeverity tsSeverity)
                {
                    string strExecutionResult = "Message Not Logged in ULS. ";
                    try
                    {
                        SPDiagnosticsCategory category = LoggingService.Current.Areas[vsDiagnosticAreaName].Categories[CategoryName];
                        LoggingService.Current.WriteTrace(uintEventID, category, tsSeverity, errorMessage);
                        strExecutionResult = "Message Logged";
                    }
                    catch (Exception ex)
                    {
                        strExecutionResult += ex.Message;
                    }
                    return strExecutionResult;
                }
            }
        }

    Just build the solution and it's ready to use. This ULS solution can be used in SharePoint web parts or a console application. Let's see how to use it in a console application. SharePoint Server 2010 must be installed on the same server, or the application must be hosted in a SharePoint Server 2010 environment. The console application must be set to the "x64" platform target.

    - Create a new console application (Visual Studio -> File -> New Project -> C# -> Windows -> Console Application).
    - Right-click References -> Add Reference -> under the .NET tab select "Microsoft.SharePoint".
    - Open Program.cs and add "using Microsoft.SharePoint.Administration;".
    - Right-click References -> Add Reference -> under the "Browse" tab select the "ULSLogger.dll" we created first (path: ULSLogger\ULSLogger\bin\Debug\).
    - Right-click the project -> Properties -> select the "Build" tab -> under "Platform Target" select "x64".
    - Open Program.cs and paste the code below:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.SharePoint.Administration;
        using ULSLogger;

        namespace ULSLoggerClient
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Console.WriteLine("ULS Logging Started.");

                    string strResult = LoggingService.LogErrorInULS("My Application is Working Fine.");
                    Console.WriteLine("ULS Logging Info. Result : " + strResult);

                    strResult = LoggingService.LogErrorInULS("My Application got an Exception.", TraceSeverity.High);
                    Console.WriteLine("ULS Logging Warning Result : " + strResult);

                    Console.WriteLine("ULS Logging Completed.");
                    Console.ReadLine();
                }
            }
        }

    Just build the solution and execute. It will log the messages to the log file. Make sure you are using a farm administrator user ID. You can play with Message and TraceSeverity as required. Now open ULS Viewer -> File -> Open From -> ULS -> select the first option to open the default ULS log. It is ULS real time and will show all log entries in a readable table format. Right-click a row and select "Filter By This Item". Select "Event ID" and enter the value "700" that we used in the application. Click OK, and you'll now see the exceptions/logs written by our application. If you want to see high-priority messages only, click the icons other than the red cross icon on the toolbar. The tooltips will tell you what the icons are used for.


  • How to hide jQuery Sub-Menus(ddsmoothmenu)?

    - by Tim
    I'm new to jQuery and I must admit that I haven't understood anything yet; the syntax appears to me as an unknown language, although I thought I had my experience with JavaScript. Nevertheless I managed to implement this menu in my ASP.NET master page's header. I even got it to work so that the content page is loaded with AJAX, with help from here. But finally I'm failing at making the menu disappear when the new page has been loaded asynchronously. I don't know how to hide this accursed jQuery menu. Below is the part of the js file where the events for showing/hiding are registered. I don't know which part is responsible for hiding, and even then I don't know how to call it from my anchor's onclick function, where I don't have a reference to the jQuery object.

        buildmenu: function($, setting){
            var smoothmenu = ddsmoothmenu
            var $mainmenu = $("#"+setting.mainmenuid+">ul") //reference main menu UL
            $mainmenu.parent().get(0).className = setting.classname || "ddsmoothmenu"
            var $headers = $mainmenu.find("ul").parent()
            $headers.hover(
                function(e){
                    $(this).children('a:eq(0)').addClass('selected')
                },
                function(e){
                    $(this).children('a:eq(0)').removeClass('selected')
                }
            )
            $headers.each(function(i){ //loop through each LI header
                var $curobj = $(this).css({zIndex: 100-i}) //reference current LI header
                var $subul = $(this).find('ul:eq(0)').css({display:'block'})
                $subul.data('timers', {})
                this._dimensions = {w:this.offsetWidth, h:this.offsetHeight, subulw:$subul.outerWidth(), subulh:$subul.outerHeight()}
                this.istopheader = $curobj.parents("ul").length==1? true : false //is top level header?
                $subul.css({top:this.istopheader && setting.orientation!='v'? this._dimensions.h+"px" : 0})
                $curobj.children("a:eq(0)").css(this.istopheader? {paddingRight: smoothmenu.arrowimages.down[2]} : {}).append( //add arrow images
                    '<img src="'+ (this.istopheader && setting.orientation!='v'? smoothmenu.arrowimages.down[1] : smoothmenu.arrowimages.right[1])
                    +'" class="' + (this.istopheader && setting.orientation!='v'? smoothmenu.arrowimages.down[0] : smoothmenu.arrowimages.right[0])
                    + '" style="border:0;" />'
                )
                if (smoothmenu.shadow.enable){
                    this._shadowoffset = {x:(this.istopheader? $subul.offset().left+smoothmenu.shadow.offsetx : this._dimensions.w), y:(this.istopheader? $subul.offset().top+smoothmenu.shadow.offsety : $curobj.position().top)} //store this shadow's offsets
                    if (this.istopheader)
                        $parentshadow = $(document.body)
                    else{
                        var $parentLi = $curobj.parents("li:eq(0)")
                        $parentshadow = $parentLi.get(0).$shadow
                    }
                    this.$shadow = $('<div class="ddshadow'+(this.istopheader? ' toplevelshadow' : '')+'"></div>').prependTo($parentshadow).css({left:this._shadowoffset.x+'px', top:this._shadowoffset.y+'px'}) //insert shadow DIV and set it to parent node for the next shadow div
                }
                $curobj.hover(
                    function(e){
                        var $targetul = $subul //reference UL to reveal
                        var header = $curobj.get(0) //reference header LI as DOM object
                        clearTimeout($targetul.data('timers').hidetimer)
                        $targetul.data('timers').showtimer = setTimeout(function(){
                            header._offsets = {left:$curobj.offset().left, top:$curobj.offset().top}
                            var menuleft = header.istopheader && setting.orientation!='v'? 0 : header._dimensions.w
                            menuleft = (header._offsets.left+menuleft+header._dimensions.subulw>$(window).width())? (header.istopheader && setting.orientation!='v'? -header._dimensions.subulw+header._dimensions.w : -header._dimensions.w) : menuleft //calculate this sub menu's offsets from its parent
                            if ($targetul.queue().length<=1){ //if 1 or less queued animations
                                $targetul.css({left:menuleft+"px", width:header._dimensions.subulw+'px'}).animate({height:'show',opacity:'show'}, ddsmoothmenu.transition.overtime)
                                if (smoothmenu.shadow.enable){
                                    var shadowleft = header.istopheader? $targetul.offset().left+ddsmoothmenu.shadow.offsetx : menuleft
                                    var shadowtop = header.istopheader? $targetul.offset().top+smoothmenu.shadow.offsety : header._shadowoffset.y
                                    if (!header.istopheader && ddsmoothmenu.detectwebkit){ //in WebKit browsers, restore shadow's opacity to full
                                        header.$shadow.css({opacity:1})
                                    }
                                    header.$shadow.css({overflow:'', width:header._dimensions.subulw+'px', left:shadowleft+'px', top:shadowtop+'px'}).animate({height:header._dimensions.subulh+'px'}, ddsmoothmenu.transition.overtime)
                                }
                            }
                        }, ddsmoothmenu.showhidedelay.showdelay)
                    },
                    function(e){
                        var $targetul = $subul
                        var header = $curobj.get(0)
                        clearTimeout($targetul.data('timers').showtimer)
                        $targetul.data('timers').hidetimer = setTimeout(function(){
                            $targetul.animate({height:'hide', opacity:'hide'}, ddsmoothmenu.transition.outtime)
                            if (smoothmenu.shadow.enable){
                                if (ddsmoothmenu.detectwebkit){ //in WebKit browsers, set first child shadow's opacity to 0, as "overflow:hidden" doesn't work in them
                                    header.$shadow.children('div:eq(0)').css({opacity:0})
                                }
                                header.$shadow.css({overflow:'hidden'}).animate({height:0}, ddsmoothmenu.transition.outtime)
                            }
                        }, ddsmoothmenu.showhidedelay.hidedelay)
                    }
                ) //end hover
            }) //end $headers.each()
            $mainmenu.find("ul").css({display:'none', visibility:'visible'})
        }

    Here is one link of my menu that I want to hide when the content is redirected to another page (i.e. I need a "closeMenu" function):

        <li><a href="DeliveryControl.aspx" onclick="AjaxContent.getContent(this.href);closeMenu();return false;">Delivery Control</a></li>

    In short: I want to fade out the submenus the same way they do automatically on blur, so that only the header menu stays visible, but I don't know how. Thanks, Tim

    EDIT: thanks to Starx' private lesson in jQuery for beginners, I solved it: I forgot the # in $("#smoothmenu1"). After that it was not difficult to find and call the hover function from the menu's headers to let them fade out smoothly: $("#smoothmenu1").find("ul").hover(); Regards, Tim
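    Building on Tim's EDIT, a hedged sketch of what the closeMenu helper might look like. The menu id smoothmenu1 is taken from the question; firing mouseleave on the LI headers is an assumption about how to invoke the hide handlers the plugin bound via .hover():

        // Hedged sketch of a closeMenu() helper. Assumes the menu container id is
        // "smoothmenu1" (as in the question); triggering "mouseleave" on the LI
        // headers is one way to run the hide handlers registered with .hover().
        function closeMenu() {
            jQuery("#smoothmenu1").find("li").trigger("mouseleave");
        }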


  • How do I convert the below PHP code to VB.NET?

    - by Greg
    How do I convert the PHP code below to VB.NET?

        <?php
        $X_HOST = "foo.com";
        $X_URL = "/index.php";
        $X_PORT = "8080";
        $X_USERNAME = "foo";
        $X_PASSWORD = "bar";

        $s_POST_DATA  = "Channel=UK.VODAFONE";      // Channel
        $s_POST_DATA .= "&Shortcode=12345";         // Shortcode
        $s_POST_DATA .= "&SourceReference=3456";    // Source Reference
        $s_POST_DATA .= "&MSISDN=447811111111";     // Phone
        $s_POST_DATA .= "&Content=test";            // Content
        $s_POST_DATA .= "&DataType=0";              // Data Type
        $s_POST_DATA .= "&Premium=1";               // Premium
        $s_POST_DATA .= "&CampaignID=4321";         // CampaignID

        $s_Request  = "POST ".$X_URL." HTTP/1.0\r\n";
        $s_Request .= "Host: ".$X_HOST.":".$X_PORT."\r\n";
        $s_Request .= "Authorization: Basic ".base64_encode($X_USERNAME.":".$X_PASSWORD)."\r\n";
        $s_Request .= "Content-Type: application/x-www-form-urlencoded\r\n";
        $s_Request .= "Content-Length: ".strlen($s_POST_DATA)."\r\n";
        $s_Request .= "\r\n".$s_POST_DATA;

        // Sends out the request to the server.
        $fp = fsockopen($X_HOST, $X_PORT, $errno, $errstr, 30) or die("Error!!!");
        fputs($fp, $s_Request);
        while (!feof($fp)) {
            $s_GatewayResponse .= fgets($fp, 128);
        }
        fclose($fp);

        // Array of official response codes.
        $a_Responses = array(
            "100" => "Server has returned an unspecified error.",
            "101" => "Server successfully received the request.",
            "102" => "Server has returned an database error",
            "103" => "Server has returned an syntax error."
        );

        echo "<HTML>\n<BODY>\n\n";

        // Checks for an official response code.
        foreach ($a_Responses as $s_ResponseCode => $s_ResponseDescription) {
            if (stristr($s_GatewayResponse, "\n$s_ResponseCode\n")) {
                echo "A response code of $s_ResponseCode was returned – ";
                echo $s_ResponseDescription;
                $b_CodeReturned = true;
            }
        }

        // Checks for an authorization failure where an official response code has
        // not been recognized.
        if (!$b_CodeReturned) {
            if (stristr($s_GatewayResponse, "HTTP/1.1 401")) {
                echo "The server rejected your username/password (HTTP 401).";
            } else {
                echo "No recognised response code was returned by the server.";
            }
        }

        echo "\n\n</BODY>\n</HTML>";
        ?>

    and

        <?php
        $s_ref = $HTTP_POST_VARS["Reference"];     // Reference
        $s_trg = $HTTP_POST_VARS["Trigger"];       // Trigger
        $s_shc = $HTTP_POST_VARS["Shortcode"];     // Shortcode
        $s_pho = $HTTP_POST_VARS["MSISDN"];        // MSISDN
        $s_con = $HTTP_POST_VARS["Content"];       // Content
        $s_chn = $HTTP_POST_VARS["Channel"];       // Channel
        $s_pay = $HTTP_POST_VARS["DataType"];      // Data Type
        $s_dat = $HTTP_POST_VARS["DateReceived"];  // Date Received
        $s_cam = $HTTP_POST_VARS["CampaignID"];    // CampaignID

        $b_IsValid = getValidateRequest($s_ref, $s_trg, $s_shc, $s_pho, $s_con, $s_cam, $s_chn, $s_pay, $s_dat);

        if ($b_IsValid) {
            $s_ResponseCode = "success";
        } else {
            $s_ResponseCode = "fail";
        }
        exit($s_ResponseCode);

        /*******************************************************************************/

        function getValidateRequest($s_req_ref, $s_req_trg, $s_req_shc, $s_req_pho, $s_req_con, $s_req_cam, $s_req_chn, $s_req_pay, $s_req_dat)
        {
            /*
             * Stub function to be replaced with whatever process is needed to
             * process/validate requests from the server by specific client requirements.
             */
            return(true);
        }
        ?>

    and lastly

        <?php
        $s_ref = $HTTP_POST_VARS["Reference"];      // Reference
        $s_sta = $HTTP_POST_VARS["Status"];         // Status
        $s_dat = $HTTP_POST_VARS["DateDelivered"];  // Date Delivered

        $b_IsValid = getValidateReceipt($s_ref, $s_sta, $s_dat);

        if ($b_IsValid) {
            $s_ResponseCode = "success";
        } else {
            $s_ResponseCode = "fail";
        }
        exit($s_ResponseCode);

        /*******************************************************************************/

        function getValidateReceipt($s_req_ref, $s_req_sta, $s_req_dat)
        {
            /*
             * Stub function to be replaced with whatever process is needed to
             * process/validate receipts from the server by specific client requirements.
             */
            return(true);
        }
        ?>

    Thank you very much in advance. Regards, Greg
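    No answer appears in this excerpt, but the first script is essentially an HTTP POST with basic authentication, which maps naturally onto System.Net.HttpWebRequest in VB.NET. The following is only a hedged sketch of that part, not a full conversion; the module name and overall structure are assumptions:

        ' Hedged sketch only: a possible VB.NET equivalent of the first PHP script
        ' (an HTTP POST with basic authentication). Not a complete conversion.
        Imports System.IO
        Imports System.Net
        Imports System.Text

        Module SmsGatewayClient
            Sub Main()
                Dim postData As String =
                    "Channel=UK.VODAFONE&Shortcode=12345&SourceReference=3456" &
                    "&MSISDN=447811111111&Content=test&DataType=0&Premium=1&CampaignID=4321"
                Dim bytes As Byte() = Encoding.ASCII.GetBytes(postData)

                Dim request As HttpWebRequest =
                    CType(WebRequest.Create("http://foo.com:8080/index.php"), HttpWebRequest)
                request.Method = "POST"
                request.ContentType = "application/x-www-form-urlencoded"
                request.ContentLength = bytes.Length
                ' Equivalent of the PHP base64_encode(user:password) Authorization header.
                request.Headers("Authorization") =
                    "Basic " & Convert.ToBase64String(Encoding.ASCII.GetBytes("foo:bar"))

                Using stream As Stream = request.GetRequestStream()
                    stream.Write(bytes, 0, bytes.Length)
                End Using

                Using response As HttpWebResponse = CType(request.GetResponse(), HttpWebResponse)
                    Using reader As New StreamReader(response.GetResponseStream())
                        Dim gatewayResponse As String = reader.ReadToEnd()
                        ' Check the body for the gateway's response codes (100-103) here.
                        Console.WriteLine(gatewayResponse)
                    End Using
                End Using
            End Sub
        End Module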


  • How to handle failure to release a resource which is contained in a smart pointer?

    - by cj
    How should an error during resource deallocation be handled, when the object representing the resource is contained in a shared pointer? Smart pointers are a useful tool to manage resources safely. Examples of such resources are memory, disk files, database connections, or network connections. // open a connection to the local HTTP port boost::shared_ptr<Socket> socket = Socket::connect("localhost:80"); In a typical scenario, the class encapsulating the resource should be noncopyable and polymorphic. A good way to support this is to provide a factory method returning a shared pointer, and declare all constructors non-public. The shared pointers can now be copied from and assigned to freely. The object is automatically destroyed when no reference to it remains, and the destructor then releases the resource. /** A TCP/IP connection. */ class Socket { public: static boost::shared_ptr<Socket> connect(const std::string& address); virtual ~Socket(); protected: Socket(const std::string& address); private: // not implemented Socket(const Socket&); Socket& operator=(const Socket&); }; But there is a problem with this approach. The destructor must not throw, so a failure to release the resource will remain undetected. A common way out of this problem is to add a public method to release the resource. class Socket { public: virtual void close(); // may throw // ... }; Unfortunately, this approach introduces another problem: Our objects may now contain resources which have already been released. This complicates the implementation of the resource class. Even worse, it makes it possible for clients of the class to use it incorrectly. The following example may seem far-fetched, but it is a common pitfall in multi-threaded code. socket->close(); // ... size_t nread = socket->read(&buffer[0], buffer.size()); // wrong use! Either we ensure that the resource is not released before the object is destroyed, thereby losing any way to deal with a failed resource deallocation. Or we provide a way to release the resource explicitly during the object's lifetime, thereby making it possible to use the resource class incorrectly. There is a way out of this dilemma. But the solution involves using a modified shared pointer class. These modifications are likely to be controversial. Typical shared pointer implementations, such as boost::shared_ptr, require that no exception be thrown when their object's destructor is called. Generally, no destructor should ever throw, so this is a reasonable requirement. These implementations also allow a custom deleter function to be specified, which is called in lieu of the destructor when no reference to the object remains. The no-throw requirement is extended to this custom deleter function. The rationale for this requirement is clear: The shared pointer's destructor must not throw. If the deleter function does not throw, nor will the shared pointer's destructor. However, the same holds for other member functions of the shared pointer which lead to resource deallocation, e.g. reset(): If resource deallocation fails, no exception can be thrown. The solution proposed here is to allow custom deleter functions to throw. This means that the modified shared pointer's destructor must catch exceptions thrown by the deleter function. On the other hand, member functions other than the destructor, e.g. reset(), shall not catch exceptions of the deleter function (and their implementation becomes somewhat more complicated). 
Here is the original example, using a throwing deleter function:

/** A TCP/IP connection. */
class Socket {
public:
    static SharedPtr<Socket> connect(const std::string& address);
protected:
    Socket(const std::string& address);
    virtual ~Socket() { }
private:
    struct Deleter;
    // not implemented
    Socket(const Socket&);
    Socket& operator=(const Socket&);
};

struct Socket::Deleter {
    void operator()(Socket* socket) {
        // Close the connection. If an error occurs, delete the socket
        // and throw an exception.
        delete socket;
    }
};

SharedPtr<Socket> Socket::connect(const std::string& address) {
    return SharedPtr<Socket>(new Socket(address), Deleter());
}

We can now use reset() to free the resource explicitly. If there is still a reference to the resource in another thread or another part of the program, calling reset() will only decrement the reference count. If this is the last reference to the resource, the resource is released. If resource deallocation fails, an exception is thrown.

SharedPtr<Socket> socket = Socket::connect("localhost:80");
// ...
socket.reset();

    Read the article

  • ant ftp doesn't download files in subdirectories

    - by Kristof Neirynck
Hello, I'm trying to download files in subdirectories from an FTP server with Ant. The exact set of files is known; some of them are in subdirectories. Ant only seems to download the ones in the root directory. It does work if I download all files without listing them. The first ftp action should do exactly the same thing as the second. Instead it complains about "Hidden files" and seems to prefix the paths with "\\". Does anyone know what's wrong here? Is this a bug in commons-net?

build.xml

<?xml version="1.0" encoding="utf-8"?>
<project name="example" default="example" basedir=".">
  <taskdef name="ftp" classname="org.apache.tools.ant.taskdefs.optional.net.FTP" />
  <target name="example">
    <!-- 2 files retrieved -->
    <ftp action="get" verbose="true" server="localhost" userid="example" password="example">
      <fileset dir="downloads" casesensitive="false" includes="root1.txt,root2.txt,a/a.txt,a/b/ab.txt,c/c.txt" />
    </ftp>
    <!-- 5 files retrieved -->
    <ftp action="get" verbose="true" server="localhost" userid="example" password="example">
      <fileset dir="downloads" casesensitive="false" includes="**/*" />
    </ftp>
  </target>
</project>

run_ant.bat

@ECHO OFF
PUSHD %~dp0
SET CLASSPATH=
SET ANT_HOME=C:\apache-ant-1.8.0
SET ant=%ANT_HOME%\bin\ant.bat
SET antoptions=-nouserlib -noclasspath -d
SET ftpjars=^
 -lib lib\jakarta-oro-2.0.8.jar ^
 -lib lib\commons-net-2.0.jar
CALL %ant% %antoptions% %ftpjars% > output.txt
POPD

output.txt

Unable to locate tools.jar. Expected to find it in C:\PROGRA~1\Java\jre6\lib\tools.jar Apache Ant version 1.8.0 compiled on February 1 2010 Trying the default build file: build.xml Buildfile: G:\ftp\build.xml Adding reference: ant.PropertyHelper Detected Java version: 1.6 in: C:\PROGRA~1\Java\jre6 Detected OS: Windows 7 Adding reference: ant.ComponentHelper Setting ro project property: ant.file -> G:\ftp\build.xml Setting ro project property: ant.file.type -> file Adding reference: ant.projectHelper Adding reference: ant.parsing.context Adding reference: ant.targets parsing buildfile G:\ftp\build.xml with URI = file:/G:/ftp/build.xml Setting ro project property: ant.project.name -> example Adding reference: example Setting ro project property: ant.project.default-target -> example Setting ro project property: ant.file.example -> G:\ftp\build.xml Setting ro project property: ant.file.type.example -> file Project base dir set to: G:\ftp +Target: +Target: example Adding reference: ant.LocalProperties parsing buildfile jar:file:/C:/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml with URI = jar:file:/C:/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml from a zip file Class org.apache.tools.ant.taskdefs.optional.net.FTP loaded from parent loader (parentFirst) Setting ro project property: ant.project.invoked-targets -> example Attempting to create object of type org.apache.tools.ant.helper.DefaultExecutor Adding reference: ant.executor Build sequence for target(s) `example' is [example] Complete build sequence is [example, ] example: [ftp] Opening FTP connection to localhost [ftp] connected [ftp] logging in to FTP server [ftp] login succeeded [ftp] getting files fileset: Setup scanner in dir G:\ftp\downloads with patternSet{ includes: [root1.txt, root2.txt, a/a.txt, a/b/ab.txt] excludes: [] } will try to cd to A where a directory called a exists testing case sensitivity, attempting to cd to A remote system is case sensitive : false [ftp] Hidden file \\a\b\ assumed to not be a symlink. 
filelist map used in listing files filelist map used in listing files [ftp] Hidden file \\a\b\ assumed to not be a symlink. filelist map used in listing files filelist map used in listing files filelist map used in listing files filelist map used in listing files filelist map used in listing files [ftp] Hidden file \\a\a.txt assumed to not be a symlink. filelist map used in listing files [ftp] Hidden file \\a\a.txt assumed to not be a symlink. filelist map used in listing files filelist map used in listing files filelist map used in listing files [ftp] transferring root1.txt to G:\ftp\downloads\root1.txt [ftp] File G:\ftp\downloads\root1.txt copied from localhost [ftp] transferring root2.txt to G:\ftp\downloads\root2.txt [ftp] File G:\ftp\downloads\root2.txt copied from localhost [ftp] 2 files retrieved [ftp] disconnecting [ftp] Opening FTP connection to localhost [ftp] connected [ftp] logging in to FTP server [ftp] login succeeded [ftp] getting files fileset: Setup scanner in dir G:\ftp\downloads with patternSet{ includes: [**/*] excludes: [] } will try to cd to A where a directory called a exists testing case sensitivity, attempting to cd to A remote system is case sensitive : false [ftp] transferring a\a.txt to G:\ftp\downloads\a\a.txt [ftp] File G:\ftp\downloads\a\a.txt copied from localhost [ftp] transferring a\b\ab.txt to G:\ftp\downloads\a\b\ab.txt [ftp] File G:\ftp\downloads\a\b\ab.txt copied from localhost [ftp] transferring c\c.txt to G:\ftp\downloads\c\c.txt [ftp] File G:\ftp\downloads\c\c.txt copied from localhost [ftp] transferring root1.txt to G:\ftp\downloads\root1.txt [ftp] File G:\ftp\downloads\root1.txt copied from localhost [ftp] transferring root2.txt to G:\ftp\downloads\root2.txt [ftp] File G:\ftp\downloads\root2.txt copied from localhost [ftp] 5 files retrieved [ftp] disconnecting BUILD SUCCESSFUL Total time: 0 seconds server log (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> Connected, sending welcome message... (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-FileZilla Server version 0.9.34 beta (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-written by Tim Kosse ([email protected]) (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220 Please visit http://sourceforge.net/projects/filezilla/ (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> USER example (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 331 Password required for example (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> PASS ******* (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 230 Logged on (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> TYPE I (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Type set to I (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> SYST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 215 UNIX emulated by FileZilla (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,232 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. 
(000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD A (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/A" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD a (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD b (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //a/b (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,233 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,234 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //\\a\b\ (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //\\a\b\ (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a/b" is current directory. 
(000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD a (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //a (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,235 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. 
(000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,236 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> RETR root1.txt (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,237 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> RETR root2.txt (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> QUIT (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 221 Goodbye (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> disconnected. (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> Connected, sending welcome message... (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-FileZilla Server version 0.9.34 beta (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-written by Tim Kosse ([email protected]) (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220 Please visit http://sourceforge.net/projects/filezilla/ (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> USER example (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 331 Password required for example (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> PASS ******* (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 230 Logged on (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> TYPE I (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Type set to I (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> SYST (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 215 UNIX emulated by FileZilla (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,239 (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD A (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/A" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. 
(000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,240 (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD a (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,241 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD //a/ (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD b (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,242 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/a" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD c (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/c" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,243 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. 
(000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,244 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR a/a.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,245 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR a/b/ab.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,246 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR c/c.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,247 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR root1.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,248 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR root2.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> QUIT (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 221 Goodbye (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> disconnected.

    Read the article

  • CreationName for SSIS 2008 and adding components programmatically

If you are building SSIS 2008 packages programmatically and adding data flow components, you will probably need to know the creation name of the component to add. I can never find a handy reference when I need one, hence this rather mundane post. See also CreationName for SSIS 2005. We start with a very simple snippet for adding a component:

// Add the Data Flow Task
package.Executables.Add("STOCK:PipelineTask");

// Get the task host wrapper, and the Data Flow task
TaskHost taskHost = package.Executables[0] as TaskHost;
MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;

// Add OLE-DB source component - ** This is where we need the creation name **
IDTSComponentMetaData100 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
componentSource.Name = "OLEDBSource";
componentSource.ComponentClassID = "DTSAdapter.OLEDBSource.2";

So as you can see, the creation name for an OLE-DB Source is DTSAdapter.OLEDBSource.2.

CreationName Reference

ADO NET Destination : Microsoft.SqlServer.Dts.Pipeline.ADONETDestination, Microsoft.SqlServer.ADONETDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
ADO NET Source : Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter, Microsoft.SqlServer.ADONETSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
Aggregate : DTSTransform.Aggregate.2
Audit : DTSTransform.Lineage.2
Cache Transform : DTSTransform.Cache.1
Character Map : DTSTransform.CharacterMap.2
Checksum : Konesans.Dts.Pipeline.ChecksumTransform.ChecksumTransform, Konesans.Dts.Pipeline.ChecksumTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
Conditional Split : DTSTransform.ConditionalSplit.2
Copy Column : DTSTransform.CopyMap.2
Data Conversion : DTSTransform.DataConvert.2
Data Mining Model Training : MSMDPP.PXPipelineProcessDM.2
Data Mining Query : MSMDPP.PXPipelineDMQuery.2
DataReader Destination : Microsoft.SqlServer.Dts.Pipeline.DataReaderDestinationAdapter, Microsoft.SqlServer.DataReaderDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
Derived Column : DTSTransform.DerivedColumn.2
Dimension Processing : MSMDPP.PXPipelineProcessDimension.2
Excel Destination : DTSAdapter.ExcelDestination.2
Excel Source : DTSAdapter.ExcelSource.2
Export Column : TxFileExtractor.Extractor.2
Flat File Destination : DTSAdapter.FlatFileDestination.2
Flat File Source : DTSAdapter.FlatFileSource.2
Fuzzy Grouping : DTSTransform.GroupDups.2
Fuzzy Lookup : DTSTransform.BestMatch.2
Import Column : TxFileInserter.Inserter.2
Lookup : DTSTransform.Lookup.2
Merge : DTSTransform.Merge.2
Merge Join : DTSTransform.MergeJoin.2
Multicast : DTSTransform.Multicast.2
OLE DB Command : DTSTransform.OLEDBCommand.2
OLE DB Destination : DTSAdapter.OLEDBDestination.2
OLE DB Source : DTSAdapter.OLEDBSource.2
Partition Processing : MSMDPP.PXPipelineProcessPartition.2
Percentage Sampling : DTSTransform.PctSampling.2
Performance Counters Source : DataCollectorTransform.TxPerfCounters.1
Pivot : DTSTransform.Pivot.2
Raw File Destination : DTSAdapter.RawDestination.2
Raw File Source : DTSAdapter.RawSource.2
Recordset Destination : DTSAdapter.RecordsetDestination.2
RegexClean : Konesans.Dts.Pipeline.RegexClean.RegexClean, Konesans.Dts.Pipeline.RegexClean, Version=2.0.0.0, Culture=neutral, PublicKeyToken=d1abe77e8a21353e
Row Count : DTSTransform.RowCount.2
Row Count Plus : Konesans.Dts.Pipeline.RowCountPlusTransform.RowCountPlusTransform, Konesans.Dts.Pipeline.RowCountPlusTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
Row Number : Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
Row Sampling : DTSTransform.RowSampling.2
Script Component : Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost, Microsoft.SqlServer.TxScript, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
Slowly Changing Dimension : DTSTransform.SCD.2
Sort : DTSTransform.Sort.2
SQL Server Compact Destination : Microsoft.SqlServer.Dts.Pipeline.SqlCEDestinationAdapter, Microsoft.SqlServer.SqlCEDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
SQL Server Destination : DTSAdapter.SQLServerDestination.2
Term Extraction : DTSTransform.TermExtraction.2
Term Lookup : DTSTransform.TermLookup.2
Trash Destination : Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc
TxTopQueries : DataCollectorTransform.TxTopQueries.1
Union All : DTSTransform.UnionAll.2
Unpivot : DTSTransform.UnPivot.2
XML Source : Microsoft.SqlServer.Dts.Pipeline.XmlSourceAdapter, Microsoft.SqlServer.XmlSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91

Here is a simple console program that can be used to enumerate the pipeline components installed on your machine, and dump out a list of all components like the one above. You will need to add a reference to the Microsoft.SQLServer.ManagedDTS assembly.

using System;
using System.Diagnostics;
using Microsoft.SqlServer.Dts.Runtime;

public class Program
{
    static void Main(string[] args)
    {
        Application application = new Application();
        PipelineComponentInfos componentInfos = application.PipelineComponentInfos;
        foreach (PipelineComponentInfo componentInfo in componentInfos)
        {
            Debug.WriteLine(componentInfo.Name + "\t" + componentInfo.CreationName);
        }
        Console.Read();
    }
}
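Putting the reference list to use, the sketch below wires an OLE DB source to an OLE DB destination purely by creation name. It is a starting point only: connection managers, metadata acquisition, and column mappings are deliberately omitted, and the names are illustrative assumptions.

using Microsoft.SqlServer.Dts.Runtime;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;

public class PipelineBuilder
{
    public static Package BuildSample()
    {
        Package package = new Package();

        // Add the Data Flow Task and get the pipeline it hosts
        package.Executables.Add("STOCK:PipelineTask");
        TaskHost taskHost = package.Executables[0] as TaskHost;
        MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;

        // OLE DB source, added by creation name from the table above
        IDTSComponentMetaData100 source = dataFlowTask.ComponentMetaDataCollection.New();
        source.ComponentClassID = "DTSAdapter.OLEDBSource.2";
        source.Name = "OLEDBSource";

        // OLE DB destination, added by creation name from the table above
        IDTSComponentMetaData100 destination = dataFlowTask.ComponentMetaDataCollection.New();
        destination.ComponentClassID = "DTSAdapter.OLEDBDestination.2";
        destination.Name = "OLEDBDestination";

        // Connect the source's default output to the destination's input
        IDTSPath100 path = dataFlowTask.PathCollection.New();
        path.AttachPathAndPropagateNotifications(
            source.OutputCollection[0], destination.InputCollection[0]);

        return package;
    }
}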

    Read the article

  • Visual Studio 2010 Productivity Power Tool Extensions

    - by ScottGu
Last month I blogged about the Extension Manager that is built into VS 2010, as well as about a cool VS 2010 PowerCommands extension that provides some extra features for Visual Studio. The Visual Studio 2010 Extension Manager provides an easy way for developers to quickly find and install extensions and plugins that enhance the built-in functionality of VS 2010.

New VS 2010 Productivity Power Tools Release

Earlier this week Jason Zander announced the availability of a new VS 2010 Productivity Power Tools release that includes a bunch of great new VS 2010 extensions with cool new functionality for you to take advantage of. You can download and install the release for free here. Some of the code editor improvements it provides include:

Entire Line Highlighting: Makes it easier to track cursor location within the editor
Entire Line Selection: Triple-clicking a line in the code editor now selects the entire line (like with MS Word)
Code Block Movement: Alt+Up/Down Arrow now moves selected code blocks up/down in the editor
Consistent Tabs vs. Spaces: Ensure consistent tab vs. space usage across your projects
Colorized Parameters: It is now easier to see/identify method parameters
Column Guide: You can now add vertical column guidelines to help with text alignment and sizes
Align Assignments: Makes it easier to line up multiple variable assignments within your code
HTML Clipboard Support: Copy/paste code from VS into an HTML buffer (useful for blogging!)
Ctrl + Click Go to Definition: You can now hold down the Ctrl key and click a type to go to its definition

It also includes several tab management improvements for managing document tabs within the IDE:

Show Close Button in Tab Well: Shows a close button in the document well for the active tab (like VS 2008 did)
Colored Tabs: You can now select the color of each document tab by project or by regex
Pinned Tabs: Enables you to pin tabs to keep them always visible and available
Vertical Tabs: You can now show document tabs vertically to fit more tabs than normal
Remove Tabs by Usage Order: Better behavior when adding new tabs and one needs to be hidden for space reasons
Sort Tabs by Project: Tabs can be sorted by the project they belong to, keeping them grouped together
Sort Tabs Alphabetically: Tabs can be sorted alphabetically

And last, but not least, it includes a new and improved "Add Reference" dialog. This new Add Reference dialog caches assembly information, which means it loads within a second or two (note: the very first time it still loads assembly data, but it then caches it and makes it fast afterwards). The new Add Reference dialog also now includes searching support, making it easier to find the assembly you are looking for. You can read more about all of the above improvements in Jason's blog post about the release.

New Visualization and Modeling Feature Pack Release

Earlier this week we also shipped a new feature pack that adds additional modeling and code visualization features to VS 2010 Ultimate. You can download it here. 
The Visualization and Modeling Feature Pack includes a bunch of great new capabilities, including:

Web Site Visualization: New support for generating a DGML visualization for ASP.NET projects
C/C++ Native Code Visualization: New support for generating DGML diagrams for C/C++ projects
Generate Code from UML Class Diagrams: You can now generate code from your UML diagrams
Create UML Class Diagrams from Code: Create UML diagrams from existing code bases
Import UML from XML: Import UML class, sequence, and use case elements from XMI 2.1 files
Custom Validation Layer Rules: Write custom code to create, modify, and validate layer diagrams

Jason's blog post covers more about these features as well.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • MongoDB usage best practices

    - by andresv
The project I'm working on uses MongoDB for some things, so I'm creating some documents to help developers speed up the learning curve, avoid mistakes, and write clean and reliable code. This is my first version of it, so I'm pretty sure I will be adding more to it. Stay tuned!

C# Official Driver Notes

The 10gen official MongoDB driver should always be referenced in projects by using NuGet. Do not manually download and reference assemblies in any project.
C# driver quickstart guide: http://www.mongodb.org/display/DOCS/CSharp+Driver+Quickstart

Reference Links

C# Language Center: http://www.mongodb.org/display/DOCS/CSharp+Language+Center
MongoDB Server Documentation: http://www.mongodb.org/display/DOCS/Home
MongoDB Server Downloads: http://www.mongodb.org/downloads
MongoDB client drivers download: http://www.mongodb.org/display/DOCS/Drivers
MongoDB Community content: http://www.mongodb.org/display/DOCS/CSharp+Community+Projects

Tutorials

Tutorial MongoDB con ASP.NET MVC - Ejemplo Práctico (Spanish): http://geeks.ms/blogs/gperez/archive/2011/12/02/tutorial-mongodb-con-asp-net-mvc-ejemplo-pr-225-ctico.aspx
MongoDB and C#: http://www.codeproject.com/Articles/87757/MongoDB-and-C
C# driver LINQ tutorial: http://www.mongodb.org/display/DOCS/CSharp+Driver+LINQ+Tutorial
C# driver reference: http://www.mongodb.org/display/DOCS/CSharp+Driver+Tutorial

Safe Mode Connection

The C# driver supports two connection modes: safe and unsafe. Safe mode only applies to methods that modify data in a database (inserts, deletes, and updates). While the current driver defaults to unsafe mode (safeMode == false), it is recommended to always enable safe mode and to force unsafe mode only for specific things we know are not critical. When safe mode is enabled, the driver's internal code calls the MongoDB "getLastError" command to ensure the last operation is completed before returning control to the caller. For more information on using safe mode and its implications for performance and data reliability, see: http://www.mongodb.org/display/DOCS/getLastError+Command

If safe mode is not enabled, all data modification calls to the database are executed asynchronously (fire and forget), without waiting for the result of the operation. This mode can be useful for creating or updating non-critical data like performance counters, usage logging, and so on. It is important to know that not using safe mode implies that data loss can occur without any notification to the caller. As with any wait operation, enabling safe mode also implies dealing with timeouts. For more information about C# driver safe mode configuration, see: http://www.mongodb.org/display/DOCS/CSharp+getLastError+and+SafeMode

The safe mode configuration can be specified at different levels:

Connection string: mongodb://hostname/?safe=true
Database: when obtaining a database instance using the server.GetDatabase(name, safeMode) method
Collection: when obtaining a collection instance using the database.GetCollection(name, safeMode) method
Operation: for example, when executing the collection.Insert(document, safeMode) method

A useful SafeMode article: http://stackoverflow.com/questions/4604868/mongodb-c-sharp-safemode-official-driver

Exception Handling

The driver ensures that an exception will be thrown if something goes wrong, provided safe mode is in use (as said above, when not using safe mode no exception will be thrown no matter what the outcome of the operation is). 
As explained here https://groups.google.com/forum/?fromgroups#!topic/mongodb-user/mS6jIq5FUiM there is no need to check the return value of a driver method that inserts data. With updates the situation is similar to any other relational database: if an update command doesn't affect any records, the call will succeed anyway (no exception is thrown) and you have to check manually for something like "records affected". For MongoDB, an update operation returns an instance of the SafeModeResult class, and you can verify the DocumentsAffected property to ensure the intended document was indeed updated. Note: please remember that an update method might return a null instance instead of a SafeModeResult instance when safe mode is not enabled. A sketch of these checks follows the list below.

Useful Community Articles

Comments about how MongoDB works and how that might affect your application: http://ethangunderson.com/blog/two-reasons-to-not-use-mongodb/
FourSquare using MongoDB had serious scalability problems: http://mashable.com/2010/10/07/mongodb-foursquare/
Is MongoDB a replacement for Memcached? http://www.quora.com/Is-MongoDB-a-good-replacement-for-Memcached/answer/Rick-Branson
MongoDB introduction, shell, when not to use, maintenance, upgrades, backups, memory, sharding, etc.: http://www.markus-gattol.name/ws/mongodb.html
MongoDB collection-level locking support: https://jira.mongodb.org/browse/SERVER-1240
MongoDB performance tips: http://www.quora.com/MongoDB/What-are-some-best-practices-for-optimal-performance-of-MongoDB-particularly-for-queries-that-involve-multiple-documents
Lessons learned migrating from SQL Server to MongoDB: http://www.wireclub.com/development/TqnkQwQ8CxUYTVT90/read
MongoDB replication performance: http://benshepheard.blogspot.com.ar/2011/01/mongodb-replication-performance.html
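To make the safe mode options and the DocumentsAffected check concrete, here is a minimal sketch against the legacy 1.x 10gen C# driver. The server address, database name, collection name, and document values are assumptions for illustration only.

using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Builders;

public static class SafeModeExamples
{
    public static void Run()
    {
        // Safe mode via the connection string (assumed local server)...
        MongoServer server = MongoServer.Create("mongodb://localhost/?safe=true");

        // ...or per database / per collection
        MongoDatabase database = server.GetDatabase("test", SafeMode.True);
        MongoCollection<BsonDocument> items =
            database.GetCollection<BsonDocument>("items", SafeMode.True);

        // Safe mode on a single operation; Insert returns a SafeModeResult
        var doc = new BsonDocument { { "name", "example" } };
        SafeModeResult insertResult = items.Insert(doc, SafeMode.True);

        // Updates succeed even when nothing matches, so check DocumentsAffected
        SafeModeResult updateResult = items.Update(
            Query.EQ("name", "example"),
            Update.Set("name", "example2"),
            SafeMode.True);

        if (updateResult != null && updateResult.DocumentsAffected == 0)
        {
            // The intended document was not updated; handle accordingly.
        }
    }
}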

    Read the article

  • SQL SERVER – Expanding Views – Contest Win Joes 2 Pros Combo (USD 198) – Day 4 of 5

    - by pinaldave
In August 2011 we ran a contest where every day we gave away one book, for an entire month. The contest was extremely successful: lots of people participated and lots of books were given away. I have received lots of questions asking if we are doing something similar this month. Absolutely; instead of running a month-long contest, we are doing something more interesting. We are giving away a gift worth USD 198 every day this week: the Joes 2 Pros 5-volume (book) SQL 2008 Development Certification Training Kit, one copy in India and one in the USA, for a total of 2 giveaways per day (worth USD 198). All the gifts are sponsored by Koenig Training Solution and Joes 2 Pros. The books are available here: Amazon | Flipkart | Indiaplaza

How to Win:
Read the question
Read the hints
Answer the quiz in the contact form in the following format: Question Answer, Name of the country (the contest is open to USA and India residents only)
2 winners will be randomly selected and announced on August 20th.

Question of the Day: Which of the following keywords will force the query to use indexes created on views?
a) ENCRYPTION
b) SCHEMABINDING
c) NOEXPAND
d) CHECK OPTION

Query Hints: BIG HINT POST

Usually, the assumption is that a query on the table will use the index created on the table, and a query on the view will use the index created on the view. However, that is a misconception; it does not happen this way. In fact, if you notice the image, you will find that both of them (table and view) use the index created on the table. The index created on the view is not used. The reason for this is listed in BOL: the cost of using the indexed view may exceed the cost of getting the data from the base tables, or the query may be so simple that a query plan against the base tables is fast and easy to find. This often happens when the indexed view is defined on small tables. You can use the NOEXPAND hint if you want to force the query processor to use the indexed view. This may require you to rewrite your query if you don't initially reference the view explicitly. You can get the actual cost of the query with NOEXPAND and compare it to the actual cost of the query plan that doesn't reference the view. If they are close, this may give you the confidence that the decision of whether or not to use the indexed view doesn't matter.

Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 4.
SQL Joes 2 Pros Development Series – Structured Error Handling
SQL Joes 2 Pros Development Series – SQL Server Error Messages
SQL Joes 2 Pros Development Series – Table-Valued Functions
SQL Joes 2 Pros Development Series – Table-Valued Store Procedure Parameters
SQL Joes 2 Pros Development Series – Easy Introduction to CHECK Options
SQL Joes 2 Pros Development Series – Introduction to Views
SQL Joes 2 Pros Development Series – All about SQL Constraints

Next Step: Answer the quiz in the contact form in the following format: Question Answer, Name of the country (the contest is open for USA and India).

Bonus Winner: Leave a comment with your favorite article from the "additional hints" section and you may be eligible for a surprise gift. There is no country restriction for this bonus contest. Do mention why you liked that particular blog post, and I will announce the winner of the bonus contest along with the main contest.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • MVC Portable Areas – Deploying Static Files

    - by Steve Michelotti
This is the second post in a series related to build and deployment considerations as I've been exploring MVC Portable Areas:
#1 – Using Web Application Project to build portable areas
#2 – Conventions for deploying portable area static files
#3 – Portable area static files as embedded resources

As I've been digging more into portable areas, one of the things I've liked best is the deployment story, which enables my *.aspx and *.ascx pages to be compiled into the assembly as embedded resources rather than having to maintain all those files separately. In traditional web forms, that was the thing that always prevented developers from utilizing *.ascx user controls across projects (see this post for using portable areas in web forms). However, though the aspx pages are embedded, the supporting static files (e.g., images, css, javascript) are *not*. Most of the demos available online today tend to brush over this issue and focus solely on the aspx side of things. But to create truly robust portable areas, it's important to have a good story for these supporting files as well. I've been working with two different approaches so far (of course I'd really like to hear if other people are using alternatives).

Scenario

For the approaches below, the scenario really isn't that important. It could be something as trivial as this partial view:

<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
<img src="<%: Url.Content("~/images/arrow.gif") %>" /> Hello World!

The point is that there needs to be careful consideration for *any* scenario that links to an external file such as an image, *.css, *.js, etc. The example shown above uses the Url.Content() method to convert to a relative path, but this method won't necessarily work depending on how you deploy your portable area. One approach to address this issue is to build your portable area project with MSDeploy/WebDeploy so that it is packaged properly before incorporating it into the host application. All of the *.cs files are removed and the project is ready for xcopy deployment; however, I do *not* need the "Views" folder, since all of the markup has been compiled into the assembly as embedded resources. Now in the host application we create a folder called "Modules" and deploy any portable areas as sub-folders under that. At this point we can add a simple assembly reference to the Widget1.dll sitting in the Modules\Widget1\bin folder, and I can now render the portable area in my view like any other portable area. However, the problem with that is that the view renders a broken image: it couldn't find arrow.gif because it looked for /images/arrow.gif when it was *actually* located at /Modules/Widget1/images/arrow.gif. One solution is to make the physical location of the portable area configurable from the perspective of the host, like this:

<appSettings>
  <add key="Widget1" value="Modules\Widget1"/>
</appSettings>

Using the <appSettings> section is a little cheesy, but it could be better formalized into its own section. In fact, if you were willing to rely on conventions (e.g., "Modules\{areaName}") then the config could be eliminated completely. 
With this config in place, we could create our own HTML helper method called Url.AreaContent() that "wraps" the OOTB Url.Content() method while simply pre-pending the area location path:

public static string AreaContent(this UrlHelper urlHelper, string contentPath)
{
    var areaName = (string)urlHelper.RequestContext.RouteData.DataTokens["area"];
    var areaPath = (string)ConfigurationManager.AppSettings[areaName];

    return urlHelper.Content("~/" + areaPath + "/" + contentPath);
}

With these two items in place, we just change our Url.Content() call to Url.AreaContent() like this:

<img src="<%: Url.AreaContent("/images/arrow.gif") %>" /> Hello World!

and the arrow.gif now renders correctly. Since we're just using our own Url.AreaContent() rather than the built-in Url.Content(), this solution works for images, *.css, *.js, or any externally referenced files. Additionally, any images referenced inside a css file will work, provided the reference is relative rather than absolute. An alternative to this approach is to build the static files into the assembly as embedded resources themselves. I'll explore this in another post (linked at the top).
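On the point above about formalizing the <appSettings> approach into its own section, a custom configuration section is one option. This is a minimal sketch under that assumption; the section, element, and class names are all hypothetical:

using System.Configuration;

// Hypothetical section shape:
// <portableAreas>
//   <areas>
//     <add name="Widget1" path="Modules\Widget1" />
//   </areas>
// </portableAreas>
public class PortableAreaElement : ConfigurationElement
{
    [ConfigurationProperty("name", IsRequired = true, IsKey = true)]
    public string Name
    {
        get { return (string)this["name"]; }
    }

    [ConfigurationProperty("path", IsRequired = true)]
    public string Path
    {
        get { return (string)this["path"]; }
    }
}

public class PortableAreaCollection : ConfigurationElementCollection
{
    protected override ConfigurationElement CreateNewElement()
    {
        return new PortableAreaElement();
    }

    protected override object GetElementKey(ConfigurationElement element)
    {
        return ((PortableAreaElement)element).Name;
    }
}

public class PortableAreasSection : ConfigurationSection
{
    [ConfigurationProperty("areas")]
    public PortableAreaCollection Areas
    {
        get { return (PortableAreaCollection)this["areas"]; }
    }
}

The host's web.config would then declare the section once, and the AreaContent() helper could read area paths from the typed section instead of AppSettings.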

    Read the article

  • SharePoint 2010 Hosting :: Setting Default Column Values on a Folder Programmatically

    - by mbridge
The reason I write this post today is that my initial searches on the Internet turned up nothing on the topic. I was hoping to find a reference in the SDK, but I didn't have any luck. What I want to do is set a default column value on an existing folder so that new items in that folder automatically inherit that value. It's actually pretty easy to do once you know what the class is called in the API. I did some digging and discovered that the class is MetadataDefaults. It can be found in Microsoft.Office.DocumentManagement.dll. Note: if you can't find it in the GAC, this DLL is in the 14/CONFIG/BIN folder and not the 14/ISAPI folder. Add a reference to this DLL in your project. In my case, I am building a console application, but you might put this in an event receiver or workflow.

In my example today, I have simple custom folder and document content types. I have one shared site column called DocumentType. I have a document library with each of these content types registered. In my document library, I have a folder named Test, and I want to set its default column values using code. Here is what it looks like. Start by getting a reference to the list in question. This assumes you already have an SPWeb object; in my case I have created it and it is called site.

SPList customDocumentLibrary = site.Lists["CustomDocuments"];

You then pass the SPList object to the MetadataDefaults constructor.

MetadataDefaults columnDefaults = new MetadataDefaults(customDocumentLibrary);

Now I just need to get the SPFolder object in question and pass it to the method SetFieldDefault. This takes an SPFolder object, a string with the name of the SPField to set the default on, and finally the value of the default (in my case "Memo").

SPFolder testFolder = customDocumentLibrary.RootFolder.SubFolders["Test"];
columnDefaults.SetFieldDefault(testFolder, "DocumentType", "Memo");

You can set multiple defaults here. When you're done, you will need to call .Update().

columnDefaults.Update();

Here is what it all looks like together.

using (SPSite siteCollection = new SPSite("http://sp2010/sites/ECMSource"))
{
    using (SPWeb site = siteCollection.OpenWeb())
    {
        SPList customDocumentLibrary = site.Lists["CustomDocuments"];
        MetadataDefaults columnDefaults = new MetadataDefaults(customDocumentLibrary);

        SPFolder testFolder = customDocumentLibrary.RootFolder.SubFolders["Test"];
        columnDefaults.SetFieldDefault(testFolder, "DocumentType", "Memo");
        columnDefaults.Update();
    }
}

You can verify that your property was set correctly on the Change Default Column Values page in your list. This is something that I could see used a lot in an ItemEventReceiver attached to a folder to do metadata inheritance. Whenever the user changed the value of the folder's property, you could have it update the default. Your code might look something like columnDefaults.SetFieldDefault(properties.ListItem.Folder, "MyField", properties.ListItem[" (a fuller sketch follows below). This is a great way to keep the child items updated any time the value of a folder's property changes. I'm also wondering if this can be done via CAML. I tried saving a site template, but after importing I got an error on the default values page. I'll keep looking and let you know what I find out.
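Here is a minimal sketch of that event receiver idea, reusing the DocumentType column from this example; the receiver class name is hypothetical and error handling is omitted:

using System;
using Microsoft.SharePoint;
using Microsoft.Office.DocumentManagement;

// Hypothetical receiver: when a folder's DocumentType value changes,
// push the new value down as the default for items created in that folder.
public class FolderDefaultsReceiver : SPItemEventReceiver
{
    public override void ItemUpdated(SPItemEventProperties properties)
    {
        SPFolder folder = properties.ListItem.Folder;
        if (folder == null)
            return; // only react to folder items

        MetadataDefaults columnDefaults = new MetadataDefaults(properties.List);
        columnDefaults.SetFieldDefault(folder, "DocumentType",
            Convert.ToString(properties.ListItem["DocumentType"]));
        columnDefaults.Update();
    }
}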

    Read the article

  • ArchBeat Link-o-Rama for October 14-20, 2012

    - by Bob Rhubart
The Top 10 items shared on the OTN ArchBeat Facebook page for the week of October 14-21, 2012.

Panel: On the Impact of Software | InfoQ
Les Hatton (Oakwood Computing Associates), Clive King (Oracle), Paul Good (Shell), Mike Andrews (Microsoft) and Michiel van Genuchten (moderator) discuss the impact of software engineering on our lives in this panel discussion recorded at the Computer Society Software Experts Summit 2012.

ResCare Solves Content Lifecycle Challenges with Oracle WebCenter
Learn how ResCare solves content lifecycle challenges with Oracle WebCenter. Speakers: Joe Lichtefeld, VP of Application Services & PMO, ResCare; Wayne Boerger, Product Manager, TEAM Informatics; Doug Thompson, EVP Global Development, TEAM Informatics. Date: Tuesday, October 30, 2012. Time: 10:00 a.m. PT / 1:00 p.m. ET

WebLogic Server 11gR1 Interactive Quick Reference
"The WebLogic Server 11gR1 Administration interactive quick reference," explains Juergen Kress, "is a multimedia tool for various terms and concepts used in WebLogic Server architecture. This tool is available for administrators for online or offline use. This is built as a multimedia web page which provides descriptions of WebLogic Server Architectural components, and references to relevant documentation. This tool offers valuable reference information for any complex concept or product in an intuitive and useful manner."

Oracle ACE Directors Nordic Tour 2012: Venues and BI Presentations | Mark Rittman
Oracle ACE Director Mark Rittman shares information on the Oracle ACE Director Tour, as the community leaders make their way through the land of the midnight sun, with events in Copenhagen, Stockholm, Oslo and Helsinki.

Mobile Apps for EBS | Capgemini Oracle Blog
Capgemini solution architect Satish Iyer briefly describes how Oracle ADF and Oracle SOA Suite can be used to fill the gap in mobile applications for Oracle EBS.

Introducing the New Face of Fusion Applications | Misha Vaughan
Oracle ACE Directors Debra Lilly and Floyd Teter have already blogged about the new face of Oracle Fusion Applications. Now Applications User Experience Architect Misha Vaughan shares a brief overview of how the Oracle Applications User Experience (UX) team developed the new look.

BPM 11g - Dynamic Task Assignment with Multi-level Organization Units | Mark Foster
"I've seen several requirements to have a more granular level of task assignment in BPM 11g based on some value in the data passed to the process," says Fusion Middleware A-Team architect Mark Foster. "Parametric Roles is normally the first port of call to try to satisfy this requirement, but in this blog we will show how a lot of use-cases can be satisfied by the easier to implement and flexible Organization Unit."

OTN Architect Day Los Angeles - Oct 25
Oracle Technology Network Architect Day in Los Angeles happens in one week. Register now to make sure you don't miss out on a rich schedule of expert technical sessions and peer interaction covering the use of Oracle technologies in cloud computing, SOA, and more. Even better: it's all free. When: October 25, 2012, 8:30am - 5:00pm. Where: Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048.

Oracle VM VirtualBox 4.2.2 released | Oracle's Virtualization Blog
The Fat Bloke weighs in with a short post on where you can find information and the download for the latest VirtualBox release. 
Advanced Oracle SOA Suite #OOW 2012 SOA Presentations
The Oracle SOA Product Management team has compiled a complete list of all twelve of their Oracle SOA Suite presentations from Oracle OpenWorld 2012, with links to the slide decks.

Thought for the Day
"Software: do you write it like a book, grow it like a plant, accrete it like a pearl, or construct it like a building?" — Jeff Atwood
Source: softwarequotes.com

    Read the article

  • Information on upgrading Kinect Applications to MS SDK Beta 2.

    - by mbcrump
Introduction

Microsoft recently released the Kinect for Windows SDK Beta 2. It contains many enhancements and fixes, which can be found here. The only problem with it is that a lot of current demo applications no longer function properly. Today, I'm going to walk you through a typical scenario of upgrading a Kinect application built with Beta 1 to Beta 2. Note: this tutorial covers WPF, but you can use the same techniques for WinForms.

1) Fix the references

Let's start with a fairly popular Kinect demo called Kinect User Interface Demo. This project uses the Beta 1 version of Microsoft.Research.Kinect.dll and version 1.0.0.0 of Coding4Fun's Kinect library. After you download the source code and extract the zip, you will see the following references in Visual Studio 2010. Pay attention to the following references, as these are the .dlls that you will have to update:

Coding4Fun.Kinect.Wpf
Microsoft.Research.Kinect

If you click on the Coding4Fun.Kinect.Wpf file, you will see the following version information (v1.0.0.0). This needs to be upgraded to the Coding4Fun Kinect library built against Beta 2. So head over to http://c4fkinect.codeplex.com/ and hit download, and you will have the needed files. Go ahead and hit the Delete key on your keyboard to remove the Coding4Fun.Kinect.Wpf.dll file from your project. Select "Add Reference", navigate to the folder where you extracted the files, and select Coding4Fun.Kinect.Wpf.dll. If you click on the Coding4Fun.Kinect.Wpf.dll file and check its properties, it should be listed as 1.1.0.0.

Fix Microsoft.Research.Kinect.dll

The official SDK Beta 2 released a new .dll that you will need to reference in your application. Go ahead and select Microsoft.Research.Kinect.dll in your application and hit the Delete key on your keyboard. Select Add Reference again and select Microsoft.Research.Kinect.dll from the .NET tab. Double-check that the version number is 1.0.0.45, as shown below.

References fixed; the runtime needs to be updated

So we have fixed the references in a typical Kinect application that uses Microsoft's SDK and C4F Kinect libraries. Now we need to update the runtime. All Beta 1 Kinect applications instantiate the Runtime with the following code:

readonly Runtime _runtime = new Runtime();

Can you see that it is now marked [Deprecated]? That means we need to update it before Microsoft decides to remove it from future versions of the SDK. We can fix this very easily by replacing that code with:

Microsoft.Research.Kinect.Nui.Runtime _nui;

and adding similar code to our Loaded event, as shown below:

public MainWindow()
{
    InitializeComponent();
    Loaded += new RoutedEventHandler(MainWindow_Loaded);
}

void MainWindow_Loaded(object sender, RoutedEventArgs e)
{
    if (Runtime.Kinects.Count == 0)
    {
        txtInfo.Text = "Missing Kinect";
    }
    else
    {
        _nui = Runtime.Kinects[0];
        _nui.Initialize(RuntimeOptions.UseColor);

        // Video Frame Ready Event can happen now!!!
        //_nui.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(_nui_VideoFrameReady);

        _nui.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
    }
}

In this sample, I am testing to see if a Kinect is detected, and if it is, then I initialize the runtime with my first Kinect by using Runtime.Kinects[0]. You can also specify other Kinect devices here. The rest of the code is standard code that you simply modify however you wish (i.e. skeletal, depth, etc.) depending on what type of video feed you want. 
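If you uncomment the VideoFrameReady subscription above, the handler might look like the following sketch; the image1 control name is an assumption, and the ToBitmapSource() call relies on the Coding4Fun extension methods:

// Sketch only: assumes an <Image x:Name="image1" /> element in the XAML
// and a reference to Coding4Fun.Kinect.Wpf for the ToBitmapSource() extension.
void _nui_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    // Convert the color frame to a WPF BitmapSource and display it
    image1.Source = e.ImageFrame.ToBitmapSource();
}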
    Conclusion
    As you can see, it really wasn't that painful to upgrade your project to Beta 2. I would recommend that you go ahead and upgrade to Beta 2, as future versions of the SDK will use these methods. Thanks for reading.

    Read the article

  • ArchBeat Link-o-Rama Top 10 for October 21-27, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of October 21-27, 2012.

    OTN Architect Day: Los Angeles
    This is your brain on IT architecture. Stuff your cranium with architecture by attending Oracle Technology Network Architect Day in Los Angeles, October 25, 2012, at the Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Technical sessions, panel Q&A, and peer roundtables—plus a free lunch. [NOTE: The event was last week, of course. Big thanks to the session presenters and especially to those Angelinos who came out for the event.]

    WebLogic Server 11gR1 Interactive Quick Reference
    "The WebLogic Server 11gR1 Administration interactive quick reference," explains Juergen Kress, "is a multimedia tool for various terms and concepts used in WebLogic Server architecture. This tool is available for administrators for online or offline use. This is built as a multimedia web page which provides descriptions of WebLogic Server Architectural components, and references to relevant documentation. This tool offers valuable reference information for any complex concept or product in an intuitive and useful manner."

    Podcast: Are You Future Proof?
    The latest OTN ArchBeat Podcast series features Oracle ACE Directors Ron Batra, Basheer Khan, and Ronald van Luttikhuizen, three practicing architects in an open discussion about how changes in enterprise IT are raising the bar for success for software architects and developers.

    Play Oracle Vanquisher
    Here's a little respite from whatever it is you normally spend your time on. Oracle Vanquisher is an online diversion that makes a game of data center optimization. According to the description: "Armed with a cool Oracle vacuum pack suit and a strategic IT roadmap, you will thwart threats and optimize your data center to increase your company's stock price and boost your company's position." Mainly you avoid electric shock and killer birds. The current high score belongs to someone identified as "TEN." My score? Never mind.

    Advanced Oracle SOA Suite OOW 2012 Presentations
    The Oracle SOA Product Management team has compiled a complete list of all twelve of their Oracle SOA Suite presentations from Oracle OpenWorld 2012, with links to the slide decks.

    OAM and OIM 11g Academies
    Looking for technical how-to content covering Oracle Access Manager and Oracle Identity Manager? The people behind the Oracle Middleware Security blog have indexed relevant blog posts into what they call "Academies." "These indexes contain the articles we've written that we believe provide long lasting guidance on OAM and OIM. Posts covered in these series include articles on key aspects of OAM and OIM 11g, best practice architectural guidance, integrations, and customizations."

    Oracle's Analytics, Engineered Systems, and Big Data Strategy | Mark Rittman
    Part 1 of 3 in Oracle ACE Director Mark Rittman's series on Oracle Exalytics, Oracle R Enterprise and Endeca.

    Oracle ACE Directors Nordic Tour 2012: Venues and BI Presentations | Mark Rittman
    Oracle ACE Director Mark Rittman shares information on the Oracle ACE Director Tour, as the community leaders make their way through the land of the midnight sun, with events in Copenhagen, Stockholm, Oslo and Helsinki.

    Following the Thread in OSB | Antony Reynolds
    Antony Reynolds recently led an Oracle Service Bus POC in which his team needed to get high throughput from an OSB pipeline. "Imagine our surprise when, on stressing the system, we saw it lock up, with large numbers of blocked threads." He shares the details of the problem and the solution in this extensive technical post.

    OW12: Oracle Business Process Management/Oracle ADF Integration Best Practices | Andrejus Baranovskis
    The Oracle OpenWorld presentations keep coming! Oracle ACE Director Andrejus Baranovskis shares the slides from "Oracle Business Process Management/Oracle ADF Integration Best Practices," co-presented with Danilo Schmiedel from Opitz Consulting.

    Thought for the Day
    "Not everything that can be counted counts, and not everything that counts can be counted." — Albert Einstein (March 14, 1879 – April 18, 1955)
    Source: Quotes For Software Engineers

    Read the article

  • An Actionable Common Approach to Federal Enterprise Architecture

    - by TedMcLaughlan
    The recent “Common Approach to Federal Enterprise Architecture” (US Executive Office of the President, May 2, 2012) is extremely timely and well-organized guidance for the Federal IT investment and deployment community, as useful for Federal Departments and Agencies as it is for their stakeholders and integration partners. The guidance not only helps IT Program Planners and Managers, but also informs and prepares constituents who may be the beneficiaries of, or otherwise impacted by, the investment. The FEA Common Approach extends from and builds on the rapidly-maturing Federal Enterprise Architecture Framework (FEAF) and its associated artifacts and standards, already included to a large degree in the annual Federal Portfolio and Investment Management processes – for example the OMB's Exhibit 300 (i.e. the Business Case justification for IT investments).

    A very interesting element of this Approach is the much-needed guidance for actually using an Enterprise Architecture (EA) and/or its collateral – good guidance for any organization charged with maintaining a broad portfolio of IT investments. The associated FEA Reference Models (i.e. the BRM, DRM, TRM, etc.) are very helpful frameworks for organizing, understanding, communicating and standardizing across agencies with respect to vocabularies, architecture patterns and technology standards. Determining when, how and to what level of detail to include these reference models in the typically long-running Federal IT acquisition cycles wasn't always clear, however, particularly during the first interactions of a Program's technical and functional leadership with the Mission owners and investment planners. This typically occurs as an agency begins the process of describing its strategy and business case for allocation of new Federal funding, reacting to things like new legislation or policy, real or anticipated mission challenges, or straightforward ROI opportunities (for example the introduction of new technologies that deliver significant cost-savings).

    The early artifacts (i.e. Resource Allocation Plans, Acquisition Plans, Exhibit 300s or other Business Case materials, etc.) of the intersection between Mission owners, IT and Program Managers are far easier to understand and discuss when the overlay of an evolved, actionable Enterprise Architecture (such as the FEA) is applied. “Actionable” is the key word – too many Public Service entity EAs (including the FEA) have for too long been used simply as a very highly-abstracted standards reference, duly maintained and nominally enforced by an Enterprise or System Architect's office. Refreshing elements of this recent FEA Common Approach include one of the first Federally-documented acknowledgements of the “Solution Architect” (the “Problem-Solving” role). This role collaborates with the Enterprise, System and Business Architecture communities, primarily on completing actual “EA Roadmap” documents. These are roadmaps grounded in real cost, technical and functional details, fully aligned both with contextual expectations (for example the new “Digital Government Strategy” and its required roadmap deliverables) and with the rapidly increasing complexities of today's more portable and transparent IT solutions.

    We also expect some very critical synergies to develop in early IT investment cycles between this new breed of “Federal Enterprise Solution Architect” and the first waves of the newly-formal “Federal IT Program Manager” roles operating under more standardized “critical competency” expectations (including EA), which are likely already seriously influencing the quality of annual CPIC (Capital Planning and Investment Control) processes. Our Oracle Enterprise Strategy Team (EST) and associated Oracle Enterprise Architecture (OEA) practices are already engaged in promoting and leveraging the visibility of Enterprise Architecture as a key contributor to early IT investment validation, and we look forward in particular to seeing the real, citizen-centric benefits of this FEA Common Approach surface across the entire Public Service CPIC domain: Federal, State, Local, Tribal and otherwise. Read more Enterprise Architecture blog posts for additional EA insight!

    Read the article

  • How can I install Sun Java on Fedora 8?

    - by Tushar Ahirrao
    Hi, I want to install Java on my Fedora 8 server. I got to step 9 of the instructions at http://fedorasolved.org/browser-solutions/java-i386, but at step 10, when I enter the command

        ln -s /opt/jre1.6.0_18/plugin/i386/ns7/libjavaplugin_oji.so /usr/lib/mozilla/plugins/libjavaplugin_oji.so

    it gives the following error:

        ln: creating symbolic link `/usr/lib/mozilla/plugins/libjavaplugin_oji.so': No such file or directory

    Can you help me please?
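    One plausible first check: ln reports "No such file or directory" for the link name when the destination directory itself is missing. A minimal sketch, with the paths taken from the question and the missing-directory diagnosis being an assumption:

        # Create the Mozilla plugins directory if it does not already exist.
        sudo mkdir -p /usr/lib/mozilla/plugins
        # Then retry the symlink from step 10.
        sudo ln -s /opt/jre1.6.0_18/plugin/i386/ns7/libjavaplugin_oji.so \
                   /usr/lib/mozilla/plugins/libjavaplugin_oji.so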

    Read the article

  • Migrate clearcase to perforce

    - by stimms
    I have a large quantity of ClearCase data which needs to be migrated into Perforce. The revisions span the better part of a decade, and I need to preserve as much branch and tag information as possible. Additionally, we make extensive use of symbolic links, which are supported in ClearCase but not in Perforce. What advice or tools can you suggest to make this easier?

    Read the article

  • How can I specify a different configuration for Vim based on the executable name?

    - by Jacobo de Vera
    I am trying to open Vim with different configuration options depending on the name of the executable file. I intend to create a number of symbolic links to vim, and I'd like to do something like this in my .vimrc:

        if execname == "vim2"
            " configuration here
        endif

    Is there a variable in Vim that holds the name of the executable file being run? Alternatively, is there another way I can have different configurations without having to keep more than one .vimrc file?
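    Vim does expose the invocation name: the built-in v:progname variable holds the name of the executable (or symbolic link) Vim was started as. A minimal sketch of the .vimrc branch, with vim2 being the hypothetical link name from the question:

        " v:progname is the name Vim was invoked under, so a symlink
        " named vim2 pointing at vim makes this branch fire.
        if v:progname ==# 'vim2'
            " vim2-specific configuration here
        endif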

    Read the article

  • Can I automate finding the -l parameters used when linking, based on header files (gcc)?

    - by kavic
    Normally, when linking against a library, I have to specify the name of a libX.so (or its symbolic link) with an -lX flag, and its directory with an -L flag. Can I automate this based only on my header files (in C/C++)? Or is it perhaps not a good idea? Is there software for locating the -L and -l parameters automatically? Is a table of this information stored somewhere on popular Linux systems, or even on Cygwin?
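    One tool that comes close is pkg-config, which records the -L and -l flags for libraries that install metadata (.pc files); it works from library names rather than from your headers, so you still map header to package yourself. A minimal sketch, with libpng chosen purely as an example of a library that commonly ships a .pc file:

        # Print the link flags recorded for libpng in its .pc file.
        pkg-config --libs libpng
        # Print the corresponding include flags.
        pkg-config --cflags libpng

        # Typical use inside a gcc invocation:
        gcc main.c $(pkg-config --cflags --libs libpng) -o main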

    Read the article

  • libXcodeDebuggerSupport.dylib is missing in iOS 4.2.1 development SDK

    - by Kalle
    Note: creating a symbolic link to use the 4.2 lib seems to work fine. Something like:

        cd /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1\ \(8C148\)/Symbols/
        sudo ln -s ../../4.2\ \(8C134\)/Symbols/Developer

    Request: see the end of this question! After upgrading from 4.2.0 (beta, I believe) to 4.2.1, the libXcodeDebuggerSupport.dylib file is missing, which results in:

        warning: Unable to read symbols for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1 (8C148)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib (file not found).

    which I guess isn't good. Looking at the directory in question, I note:

        .../DeviceSupport/4.2 (8C134)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib

    but

        .../DeviceSupport/4.2.1 (8C148)/Symbols/System/
        .../DeviceSupport/4.2.1 (8C148)/Symbols/usr/

    The above two dirs make up all the content in the 4.2.1 folder; there is no "Developer" folder. Checking the usr/ dir there, I find no libXcodeDebuggerSupport.dylib in its lib dir either, so ln -s'ing that isn't an option.

    Worth mentioning: after the upgrade, I plugged the iPad in and had to click "Use for development" in Xcode Organizer. Doing so, I got a message about symbols missing for that version, and Xcode proceeded to generate them, then failed. I restored the iPad and clicked "Use for development" again, and nothing about missing symbols appeared.

    Update: deleting /Developer and reinstalling Xcode from scratch does not fix this issue.

    Update 2: I just realized that after the reinstall of Xcode, .../DeviceSupport/4.2 (8C134)/Symbols is now a symbolic link:

        lrwxr-xr-x 1 root admin 36 Dec 3 17:17 Symbols -> ../../Developer/SDKs/iPhoneOS4.2.sdk

    and the directory in question has the appropriate files. Maybe this is simply a matter of linking the 4.2.1 dir in the same fashion? I'll try that and see if Xcode freaks out. If someone who has this file could provide an md5 sum, that would be splendid. This is what it says for me:

        $ md5 /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2\ \(8C134\)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib
        MD5 (/Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2 (8C134)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib) = 08f93a0a2e3b03feaae732691f112688

    If the MD5 sum is identical to the output of

        $ md5 /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1\ \(8C148\)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib

    then we're all set.
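    Pulling the note at the top of the question together into one place, a consolidated sketch of the symlink workaround (it assumes the 4.2 (8C134) symbols are intact, which the md5 comparison above is meant to confirm):

        # Optional: verify the 4.2 copy against a known-good checksum first.
        md5 "/Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2 (8C134)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib"
        # Work from the 4.2.1 symbols directory.
        cd "/Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1 (8C148)/Symbols/"
        # Point the missing Developer tree at the 4.2 (8C134) copy.
        sudo ln -s "../../4.2 (8C134)/Symbols/Developer" Developer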

    Read the article

< Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >