Search Results

Search found 30246 results on 1210 pages for 'object persistence'.


  • Does XML ignore (,) or (-)? (XML of a Flash object)

    - by nectar
    <SCENE> <xml_image>telcos.jpg</xml_image> <xml_bigtext>TELCOS</xml_bigtext> <xml_smalltext>New-Age Properties, Wifi Everywhere</xml_smalltext> <xml_align>right</xml_align> <xml_bigtextcolor>#ffffff</xml_bigtextcolor> <xml_bigtextshadow>#0000000</xml_bigtextshadow> This is the XML file for my Flash object, but it ignores the hyphen (-) and the comma (,): "New-Age" comes out as "NewAge", and "Properties, Wifi" comes out as "PropertiesWifi". Why?

    Read the article

  • AS3 - Can I have access to the object (or function) that calls me?

    - by lk
    I've asked this same question about Python. Now I'd like to know if this can be done in AS3. If I have something like this: package { public class SomeClass { private function A():void { C() } private function B():void { C() } private function C():void { // who is the caller, A or B ??? } public function SomeClass() { A() B() } } } Design or other issues aside, this is only a question from an inquiring mind. Note: This has to be done without changing C's signature. Note 2: I'd like to have access to an instance of the caller function so I can call that caller function (if I want to).

    Read the article

  • How to check the type name of an object in derived classes?

    - by Vincenzo
    This is my code: class Base { /* something */ }; class Derived : public Base { /* something */ }; vector<Base*> v; // somebody else initializes it, somewhere int counter = 0; for (vector<Base*>::iterator i=v.begin(); i!=v.end(); ++i) { if (typeof(*i) == "Derived") { // this line is NOT correct counter++; } } cout << "Found " << counter << " derived classes"; One line in the code is NOT correct. How should I write it properly? Many thanks in advance!

    Read the article

  • Announcing Entity Framework Code-First (CTP5 release)

    - by ScottGu
    This week the data team released the CTP5 build of the new Entity Framework Code-First library.  EF Code-First enables a pretty sweet code-centric development workflow for working with data.  It enables you to: Develop without ever having to open a designer or define an XML mapping file Define model objects by simply writing “plain old classes” with no base classes required Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping I’m a big fan of the EF Code-First approach, and wrote several blog posts about it this summer: Code-First Development with Entity Framework 4 (July 16th) EF Code-First: Custom Database Schema Mapping (July 23rd) Using EF Code-First with an Existing Database (August 3rd) Today’s new CTP5 release delivers several nice improvements over the CTP4 build, and will be the last preview build of Code First before the final release of it.  We will ship the final EF Code First release in the first quarter of next year (Q1 of 2011).  It works with all .NET application types (including both ASP.NET Web Forms and ASP.NET MVC projects). Installing EF Code First You can install and use EF Code First CTP5 using one of two ways: Approach 1) By downloading and running a setup program.  Once installed you can reference the EntityFramework.dll assembly it provides within your projects.      or: Approach 2) By using the NuGet Package Manager within Visual Studio to download and install EF Code First within a project.  To do this, simply bring up the NuGet Package Manager Console within Visual Studio (View->Other Windows->Package Manager Console) and type “Install-Package EFCodeFirst”: Typing “Install-Package EFCodeFirst” within the Package Manager Console will cause NuGet to download the EF Code First package, and add it to your current project: Doing this will automatically add a reference to the EntityFramework.dll assembly to your project:   NuGet enables you to have EF Code First setup and ready to use within seconds.  When the final release of EF Code First ships you’ll also be able to just type “Update-Package EFCodeFirst” to update your existing projects to use the final release. EF Code First Assembly and Namespace The CTP5 release of EF Code First has an updated assembly name, and new .NET namespace: Assembly Name: EntityFramework.dll Namespace: System.Data.Entity These names match what we plan to use for the final release of the library. Nice New CTP5 Improvements The new CTP5 release of EF Code First contains a bunch of nice improvements and refinements. Some of the highlights include: Better support for Existing Databases Built-in Model-Level Validation and DataAnnotation Support Fluent API Improvements Pluggable Conventions Support New Change Tracking API Improved Concurrency Conflict Resolution Raw SQL Query/Command Support The rest of this blog post contains some more details about a few of the above changes. Better Support for Existing Databases EF Code First makes it really easy to create model layers that work against existing databases.  CTP5 includes some refinements that further streamline the developer workflow for this scenario. 
Below are the steps to use EF Code First to create a model layer for the Northwind sample database: Step 1: Create Model Classes and a DbContext class Below is all of the code necessary to implement a simple model layer using EF Code First that goes against the Northwind database: EF Code First enables you to use “POCO” – Plain Old CLR Objects – to represent entities within a database.  This means that you do not need to derive model classes from a base class, nor implement any interfaces or data persistence attributes on them.  This enables the model classes to be kept clean, easily testable, and “persistence ignorant”.  The Product and Category classes above are examples of POCO model classes. EF Code First enables you to easily connect your POCO model classes to a database by creating a “DbContext” class that exposes public properties that map to the tables within a database.  The Northwind class above illustrates how this can be done.  It is mapping our Product and Category classes to the “Products” and “Categories” tables within the database.  The properties within the Product and Category classes in turn map to the columns within the Products and Categories tables – and each instance of a Product/Category object maps to a row within the tables. The above code is all of the code required to create our model and data access layer!  Previous CTPs of EF Code First required an additional step to work against existing databases (a call to Database.Initializer<Northwind>(null) to tell EF Code First to not create the database) – this step is no longer required with the CTP5 release.  Step 2: Configure the Database Connection String We’ve written all of the code we need to write to define our model layer.  Our last step before we use it will be to setup a connection-string that connects it with our database.  To do this we’ll add a “Northwind” connection-string to our web.config file (or App.Config for client apps) like so:   <connectionStrings>          <add name="Northwind"          connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\northwind.mdf;User Instance=true"          providerName="System.Data.SqlClient" />   </connectionStrings> EF “code first” uses a convention where DbContext classes by default look for a connection-string that has the same name as the context class.  Because our DbContext class is called “Northwind” it by default looks for a “Northwind” connection-string to use.  Above our Northwind connection-string is configured to use a local SQL Express database (stored within the \App_Data directory of our project).  You can alternatively point it at a remote SQL Server. Step 3: Using our Northwind Model Layer We can now easily query and update our database using the strongly-typed model layer we just built with EF Code First. The code example below demonstrates how to use LINQ to query for products within a specific product category.  This query returns back a sequence of strongly-typed Product objects that match the search criteria: The code example below demonstrates how we can retrieve a specific Product object, update two of its properties, and then save the changes back to the database: EF Code First handles all of the change-tracking and data persistence work for us, and allows us to focus on our application and business logic as opposed to having to worry about data access plumbing. 
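The code listings for Steps 1 and 3 were screenshots in the original post and are missing from this excerpt. Below is a rough, hedged sketch of what the described Northwind model layer might look like. The Product, Category and Northwind names come from the post itself; the property names (ProductID, ProductName, UnitPrice, CategoryID, CategoryName) and the query/update values are assumptions based on the standard Northwind schema, not code from the post.

    using System;
    using System.Collections.Generic;
    using System.Data.Entity;   // EntityFramework.dll, System.Data.Entity namespace (per the post)
    using System.Linq;

    // POCO model classes -- no base class or mapping attributes required.
    // Property names below are assumptions based on the standard Northwind schema.
    public class Product
    {
        public int ProductID { get; set; }
        public string ProductName { get; set; }
        public decimal? UnitPrice { get; set; }
        public int? CategoryID { get; set; }
        public virtual Category Category { get; set; }
    }

    public class Category
    {
        public int CategoryID { get; set; }
        public string CategoryName { get; set; }
        public virtual ICollection<Product> Products { get; set; }
    }

    // DbContext whose properties map the classes to the Products and Categories tables.
    public class Northwind : DbContext
    {
        public DbSet<Product> Products { get; set; }
        public DbSet<Category> Categories { get; set; }
    }

    class NorthwindDemo
    {
        static void Main()
        {
            using (var db = new Northwind())
            {
                // LINQ query for products within a specific category (the category name is assumed).
                var beverages = from p in db.Products
                                where p.Category.CategoryName == "Beverages"
                                select p;
                foreach (var item in beverages)
                    Console.WriteLine(item.ProductName);

                // Retrieve a specific product, update two of its properties,
                // and save the changes back to the database.
                var product = db.Products.Single(p => p.ProductID == 1);
                product.ProductName = "Chai Tea";
                product.UnitPrice = 19.95m;
                db.SaveChanges();
            }
        }
    }

Because the context class is named Northwind, it picks up the "Northwind" connection string from Step 2 by convention, so the sketch needs no further configuration to run against an existing Northwind database.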
Built-in Model Validation EF Code First allows you to use any validation approach you want when implementing business rules with your model layer.  This enables a great deal of flexibility and power. Starting with this week’s CTP5 release, EF Code First also now includes built-in support for both the DataAnnotation and IValidatorObject validation support built-into .NET 4.  This enables you to easily implement validation rules on your models, and have these rules automatically be enforced by EF Code First whenever you save your model layer.  It provides a very convenient “out of the box” way to enable validation within your applications. Applying DataAnnotations to our Northwind Model The code example below demonstrates how we could add some declarative validation rules to two of the properties of our “Product” model: We are using the [Required] and [Range] attributes above.  These validation attributes live within the System.ComponentModel.DataAnnotations namespace that is built-into .NET 4, and can be used independently of EF.  The error messages specified on them can either be explicitly defined (like above) – or retrieved from resource files (which makes localizing applications easy). Validation Enforcement on SaveChanges() EF Code-First (starting with CTP5) now automatically applies and enforces DataAnnotation rules when a model object is updated or saved.  You do not need to write any code to enforce this – this support is now enabled by default.  This new support means that the below code – which violates our above rules – will automatically throw an exception when we call the “SaveChanges()” method on our Northwind DbContext: The DbEntityValidationException that is raised when the SaveChanges() method is invoked contains a “EntityValidationErrors” property that you can use to retrieve the list of all validation errors that occurred when the model was trying to save.  This enables you to easily guide the user on how to fix them.  Note that EF Code-First will abort the entire transaction of changes if a validation rule is violated – ensuring that our database is always kept in a valid, consistent state. EF Code First’s validation enforcement works both for the built-in .NET DataAnnotation attributes (like Required, Range, RegularExpression, StringLength, etc), as well as for any custom validation rule you create by sub-classing the System.ComponentModel.DataAnnotations.ValidationAttribute base class. UI Validation Support A lot of our UI frameworks in .NET also provide support for DataAnnotation-based validation rules. For example, ASP.NET MVC, ASP.NET Dynamic Data, and Silverlight (via WCF RIA Services) all provide support for displaying client-side validation UI that honor the DataAnnotation rules applied to model objects. The screen-shot below demonstrates how using the default “Add-View” scaffold template within an ASP.NET MVC 3 application will cause appropriate validation error messages to be displayed if appropriate values are not provided: ASP.NET MVC 3 supports both client-side and server-side enforcement of these validation rules.  The error messages displayed are automatically picked up from the declarative validation attributes – eliminating the need for you to write any custom code to display them. Keeping things DRY The “DRY Principle” stands for “Do Not Repeat Yourself”, and is a best practice that recommends that you avoid duplicating logic/configuration/code in multiple places across your application, and instead specify it only once and have it apply everywhere. 
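Before continuing with the DRY discussion below: the DataAnnotations and SaveChanges() listings in this section were also screenshots in the original post, so here is a hedged sketch of the validation behavior described above, reusing the Northwind context sketched earlier. The [Required] and [Range] attributes and the EntityValidationErrors property are named in the post; the error messages, the range bounds, the System.Data.Entity.Validation namespace and the DbEntityValidationResult/DbValidationError shapes follow the EF API as it later shipped and may differ slightly in CTP5.

    using System;
    using System.ComponentModel.DataAnnotations;
    using System.Data.Entity.Validation;   // DbEntityValidationException (namespace as shipped; may differ in CTP5)
    using System.Linq;

    // Declarative validation rules on two properties of the Product model.
    // The error messages and range bounds are illustrative assumptions.
    public class Product
    {
        public int ProductID { get; set; }

        [Required(ErrorMessage = "The product name is required.")]
        public string ProductName { get; set; }

        [Range(0, 10000, ErrorMessage = "The unit price must be between 0 and 10000.")]
        public decimal? UnitPrice { get; set; }
    }

    class ValidationDemo
    {
        static void Main()
        {
            using (var db = new Northwind())
            {
                var product = db.Products.Single(p => p.ProductID == 1);
                product.ProductName = null;   // violates the [Required] rule

                try
                {
                    // Validation is enforced automatically when SaveChanges() is called.
                    db.SaveChanges();
                }
                catch (DbEntityValidationException ex)
                {
                    // EntityValidationErrors lists every rule that failed; the whole
                    // transaction is aborted, so the database stays consistent.
                    foreach (var entityResult in ex.EntityValidationErrors)
                        foreach (var error in entityResult.ValidationErrors)
                            Console.WriteLine("{0}: {1}", error.PropertyName, error.ErrorMessage);
                }
            }
        }
    }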
EF Code First CTP5 now enables you to apply declarative DataAnnotation validations on your model classes (and specify them only once) and then have the validation logic be enforced (and corresponding error messages displayed) across all applications scenarios – including within controllers, views, client-side scripts, and for any custom code that updates and manipulates model classes. This makes it much easier to build good applications with clean code, and to build applications that can rapidly iterate and evolve. Other EF Code First Improvements New to CTP5 EF Code First CTP5 includes a bunch of other improvements as well.  Below are a few short descriptions of some of them: Fluent API Improvements EF Code First allows you to override an “OnModelCreating()” method on the DbContext class to further refine/override the schema mapping rules used to map model classes to underlying database schema.  CTP5 includes some refinements to the ModelBuilder class that is passed to this method which can make defining mapping rules cleaner and more concise.  The ADO.NET Team blogged some samples of how to do this here. Pluggable Conventions Support EF Code First CTP5 provides new support that allows you to override the “default conventions” that EF Code First honors, and optionally replace them with your own set of conventions. New Change Tracking API EF Code First CTP5 exposes a new set of change tracking information that enables you to access Original, Current & Stored values, and State (e.g. Added, Unchanged, Modified, Deleted).  This support is useful in a variety of scenarios. Improved Concurrency Conflict Resolution EF Code First CTP5 provides better exception messages that allow access to the affected object instance and the ability to resolve conflicts using current, original and database values.  Raw SQL Query/Command Support EF Code First CTP5 now allows raw SQL queries and commands (including SPROCs) to be executed via the SqlQuery and SqlCommand methods exposed off of the DbContext.Database property.  The results of these method calls can be materialized into object instances that can be optionally change-tracked by the DbContext.  This is useful for a variety of advanced scenarios. Full Data Annotations Support EF Code First CTP5 now supports all standard DataAnnotations within .NET, and can use them both to perform validation as well as to automatically create the appropriate database schema when EF Code First is used in a database creation scenario.  Summary EF Code First provides an elegant and powerful way to work with data.  I really like it because it is extremely clean and supports best practices, while also enabling solutions to be implemented very, very rapidly.  The code-only approach of the library means that model layers end up being flexible and easy to customize. This week’s CTP5 release further refines EF Code First and helps ensure that it will be really sweet when it ships early next year.  I recommend using NuGet to install and give it a try today.  I think you’ll be pleasantly surprised by how awesome it is. Hope this helps, Scott

    Read the article

  • The Art of Productivity

    - by dwahlin
    Getting things done has always been a challenge regardless of gender, age, race, skill, or job position. No matter how hard some people try, they end up procrastinating tasks until the last minute. Some people simply focus better when they know they’re out of time and can’t procrastinate any longer. How many times have you put off working on a term paper in school until the very last minute? With only a few hours left your mental energy and focus seem to kick in to high gear especially as you realize that you either get the paper done now or risk failing. It’s amazing how a little pressure can turn into a motivator and allow our minds to focus on a given task. Some people seem to specialize in procrastinating just about everything they do while others tend to be the “doers” who get a lot done and ultimately rise up the ladder at work. What’s the difference between these types of people? Is it pure laziness or are other factors at play? I think that some people are certainly more motivated than others, but I also think a lot of it is based on the process that “doers” tend to follow - whether knowingly or unknowingly. While I’ve certainly fought battles with procrastination, I’ve always had a knack for being able to get a lot done in a relatively short amount of time. I think a lot of my “get it done” attitude goes back to the the strong work ethic my parents instilled in me at a young age. I remember my dad saying, “You need to learn to work hard!” when I was around 5 years old. I remember that moment specifically because I was on a tractor with him the first time I heard it while he was trying to move some large rocks into a pile. The tractor was big but so were the rocks and my dad had to balance the tractor perfectly so that it didn’t tip forward too far. It was challenging work and somewhat tedious but my dad finished the task and taught me a few important lessons along the way including persistence, the importance of having a skill, and getting the job done right without skimping along the way. In this post I’m going to list a few of the techniques and processes I follow that I hope may be beneficial to others. I blogged about the general concept back in 2009 but thought I’d share some updated information and lessons learned since then. Most of the ideas that follow came from learning and refining my daily work process over the years. However, since most of the ideas are common sense (at least in my opinion), I suspect they can be found in other productivity processes that are out there. Let’s start off with one of the most important yet simple tips: Start Each Day with a List. Start Each Day with a List What are you planning to get done today? Do you keep track of everything in your head or rely on your calendar? While most of us think that we’re pretty good at managing “to do” lists strictly in our head you might be surprised at how affective writing out lists can be. By writing out tasks you’re forced to focus on the most important tasks to accomplish that day, commit yourself to those tasks, and have an easy way to track what was supposed to get done and what actually got done. Start every morning by making a list of specific tasks that you want to accomplish throughout the day. I’ll even go so far as to fill in times when I’d like to work on tasks if I have a lot of meetings or other events tying up my calendar on a given day. 
I’m not a big fan of using paper since I type a lot faster than I write (plus I write like a 3rd grader according to my wife), so I use the Sticky Notes feature available in Windows. Here’s an example of yesterday’s sticky note: What do you add to your list? That’s the subject of the next tip. Focus on Small Tasks It’s no secret that focusing on small, manageable tasks is more effective than trying to focus on large and more vague tasks. When you make your list each morning only add tasks that you can accomplish within a given time period. For example, if I only have 30 minutes blocked out to work on an article I don’t list “Write Article”. If I do that I’ll end up wasting 30 minutes stressing about how I’m going to get the article done in 30 minutes and ultimately get nothing done. Instead, I’ll list something like “Write Introductory Paragraphs for Article”. The next day I may add, “Write first section of article” or something that’s small and manageable – something I’m confident that I can get done. You’ll find that once you’ve knocked out several smaller tasks it’s easy to continue completing others since you want to keep the momentum going. In addition to keeping my tasks focused and small, I also make a conscious effort to limit my list to 4 or 5 tasks initially. I’ve found that if I list more than 5 tasks I feel a bit overwhelmed which hurts my productivity. It’s easy to add additional tasks as you complete others and you get the added benefit of that confidence boost of knowing that you’re being productive and getting things done as you remove tasks and add others. Getting Started is the Hardest (Yet Easiest) Part I’ve always found that getting started is the hardest part and one of the biggest contributors to procrastination. Getting started working on tasks is a lot like getting a large rock pushed to the bottom of a hill. It’s difficult to get the rock rolling at first, but once you manage to get it rocking some it’s really easy to get it rolling on its way to the bottom. As an example, I’ve written 100s of articles for technical magazines over the years and have really struggled with the initial introductory paragraphs. Keep in mind that these are the paragraphs that don’t really add that much value (in my opinion anyway). They introduce the reader to the subject matter and nothing more. What a waste of time for me to sit there stressing about how to start the article. On more than one occasion I’ve spent more than an hour trying to come up with 2-3 paragraphs of text.  Talk about a productivity killer! Whether you’re struggling with a writing task, some code for a project, an email, or other tasks, jumping in without thinking too much is the best way to get started I’ve found. I’m not saying that you shouldn’t have an overall plan when jumping into a task, but on some occasions you’ll find that if you simply jump into the task and stop worrying about doing everything perfectly that things will flow more smoothly. For my introductory paragraph problem I give myself 5 minutes to write out some general concepts about what I know the article will cover and then spend another 10-15 minutes going back and refining that information. That way I actually have some ideas to work with rather than a blank sheet of paper. If I still find myself struggling I’ll write the rest of the article first and then circle back to the introductory paragraphs once I’m done. To sum this tip up: Jump into a task without thinking too hard about it. 
It’s better to to get the rock at the top of the hill rocking some than doing nothing at all. You can always go back and refine your work.   Learn a Productivity Technique and Stick to It There are a lot of different productivity programs and seminars out there being sold by companies. I’ve always laughed at how much money people spend on some of these motivational programs/seminars because I think that being productive isn’t that hard if you create a re-useable set of steps and processes to follow. That’s not to say that some of these programs/seminars aren’t worth the money of course because I know they’ve definitely benefited some people that have a hard time getting things done and staying focused. One of the best productivity techniques I’ve ever learned is called the “Pomodoro Technique” and it’s completely free. This technique is an extremely simple way to manage your time without having to remember a bunch of steps, color coding mechanisms, or other processes. The technique was originally developed by Francesco Cirillo in the 80s and can be implemented with a simple timer. In a nutshell here’s how the technique works: Pick a task to work on Set the timer to 25 minutes and work on the task Once the timer rings record your time Take a 5 minute break Repeat the process Here’s why the technique works well for me: It forces me to focus on a single task for 25 minutes. In the past I had no time goal in mind and just worked aimlessly on a task until I got interrupted or bored. 25 minutes is a small enough chunk of time for me to stay focused. Any distractions that may come up have to wait until after the timer goes off. If the distraction is really important then I stop the timer and record my time up to that point. When the timer is running I act as if I only have 25 minutes total for the task (like you’re down to the last 25 minutes before turning in your term paper….frantically working to get it done) which helps me stay focused and turns into a “beat the clock” type of game. It’s actually kind of fun if you treat it that way and really helps me focus on a the task at hand. I automatically know how much time I’m spending on a given task (more on this later) by using this technique. I know that I have 5 minutes after each pomodoro (the 25 minute sprint) to waste on anything I’d like including visiting a website, stepping away from the computer, etc. which also helps me stay focused when the 25 minute timer is counting down. I use this technique so much that I decided to build a program for Windows 8 called Pomodoro Focus (I plan to blog about how it was built in a later post). It’s a Windows Store application that allows people to track tasks, productive time spent on tasks, interruption time experienced while working on a given task, and the number of pomodoros completed. If a time estimate is given when the task is initially created, Pomodoro Focus will also show the task completion percentage. I like it because it allows me to track my tasks, time spent on tasks (very useful in the consulting world), and even how much time I wasted on tasks (pressing the pause button while working on a task starts the interruption timer). I recently added a new feature that charts productive and interruption time for tasks since I wanted to see how productive I was from week to week and month to month. A few screenshots from the Pomodoro Focus app are shown next, I had a lot of fun building it and use it myself to as I work on tasks.   
There are certainly many other productivity techniques and processes out there (and a slew of books describing them), but the Pomodoro Technique has been the simplest and most effective technique I’ve ever come across for staying focused and getting things done.   Persistence is Key Getting things done is great but one of the biggest lessons I’ve learned in life is that persistence is key especially when you’re trying to get something done that at times seems insurmountable. Small tasks ultimately lead to larger tasks getting accomplished, however, it’s not all roses along the way as some of the smaller tasks may come with their own share of bumps and bruises that lead to discouragement about the end goal and whether or not it is worth achieving at all. I’ve been on several long-term projects over my career as a software developer (I have one personal project going right now that fits well here) and found that repeating, “Persistence is the key!” over and over to myself really helps. Not every project turns out to be successful, but if you don’t show persistence through the hard times you’ll never know if you succeeded or not. Likewise, if you don’t persistently stick to the process of creating a daily list, follow a productivity process, etc. then the odds of consistently staying productive aren’t good.   Track Your Time How much time do you actually spend working on various tasks? If you don’t currently track time spent answering emails, on phone calls, and working on various tasks then you might be surprised to find out that a task that you thought was going to take you 30 minutes ultimately ended up taking 2 hours. If you don’t track the time you spend working on tasks how can you expect to learn from your mistakes, optimize your time better, and become more productive? That’s another reason why I like the Pomodoro Technique – it makes it easy to stay focused on tasks while also tracking how much time I’m working on a given task.   Eliminate Distractions I blogged about this final tip several years ago but wanted to bring it up again. If you want to be productive (and ultimately successful at whatever you’re doing) then you can’t waste a lot of time playing games or on Twitter, Facebook, or other time sucking websites. If you see an article you’re interested in that has no relation at all to the tasks you’re trying to accomplish then bookmark it and read it when you have some spare time (such as during a pomodoro break). Fighting the temptation to check your friends’ status updates on Facebook? Resist the urge and realize how much those types of activities are hurting your productivity and taking away from your focus. I’ll admit that eliminating distractions is still tough for me personally and something I have to constantly battle. But, I’ve made a conscious decision to cut back on my visits and updates to Facebook, Twitter, Google+ and other sites. Sure, my Klout score has suffered as a result lately, but does anyone actually care about those types of scores aside from your online “friends” (few of whom you’ve actually met in person)? :-) Ultimately it comes down to self-discipline and how badly you want to be productive and successful in your career, life goals, hobbies, or whatever you’re working on. Rather than having your homepage take you to a time wasting news site, game site, social site, picture site, or others, how about adding something like the following as your homepage? Every time your browser opens you’ll see a personal message which helps keep you on the right track. 
You can download my ubber-sophisticated homepage here if interested. Summary Is there a single set of steps that if followed can ultimately lead to productivity? I don’t think so since one size has never fit all. Every person is different, works in their own unique way, and has their own set of motivators, distractions, and more. While I certainly don’t consider myself to be an expert on the subject of productivity, I do think that if you learn what steps work best for you and gradually refine them over time that you can come up with a personal productivity process that can serve you well. Productivity is definitely an “art” that anyone can learn with a little practice and persistence. You’ve seen some of the steps that I personally like to follow and I hope you find some of them useful in boosting your productivity. If you have others you use please leave a comment. I’m always looking for ways to improve.

    Read the article

  • Enabling Service Availability in WCF Services

    - by cibrax
    It is very important for the enterprise to know which services are operational at any given point. There are many factors that can affect the availability of the services; some of them are external, like a database not responding or a dependent service not working. However, in some cases you only want to know whether a service is up or down, so a simple heart-beat mechanism with “Ping” messages would do the trick. Unfortunately, WCF does not provide a built-in mechanism to support this functionality, and you probably don’t want to implement a “Ping” operation in every service that you have out there. For solving this in a generic way, there is a WCF extensibility point that comes to help us, the “Operation Invokers”. In a nutshell, an operation invoker is the class responsible for invoking the service method with a set of parameters and generating the output parameters with the return value. What I am going to do here is to implement a custom operation invoker that intercepts any call to the service and detects whether a “Ping” header was attached to the message. If the “Ping” header is detected, the operation invoker returns a new header to tell the client that the service is alive, and the real operation execution is omitted. In that way, we have a simple heart-beat mechanism based on the messages that include a “Ping” header, so the client application can determine at any point whether the service is up or down. My operation invoker wraps the default implementation that WCF attaches to every operation. internal class PingOperationInvoker : IOperationInvoker { IOperationInvoker innerInvoker; object[] outputs = null; object returnValue = null; public const string PingHeaderName = "Ping"; public const string PingHeaderNamespace = "http://tellago.serviceModel"; public PingOperationInvoker(IOperationInvoker innerInvoker, OperationDescription description) { this.innerInvoker = innerInvoker; outputs = description.SyncMethod.GetParameters() .Where(p => p.IsOut) .Select(p => DefaultForType(p.ParameterType)).ToArray(); this.returnValue = DefaultForType(description.SyncMethod.ReturnType); } private static object DefaultForType(Type targetType) { return targetType.IsValueType ?
Activator.CreateInstance(targetType) : null; } public object Invoke(object instance, object[] inputs, out object[] outputs) { object returnValue; if (Invoke(out returnValue, out outputs)) { return returnValue; } else { return this.innerInvoker.Invoke(instance, inputs, out outputs); } } private bool Invoke(out object returnValue, out object[] outputs) { object untypedProperty = null; if (OperationContext.Current .IncomingMessageProperties.TryGetValue(HttpRequestMessageProperty.Name, out untypedProperty)) { var httpRequestProperty = untypedProperty as HttpRequestMessageProperty; if (httpRequestProperty != null) { if (httpRequestProperty.Headers[PingHeaderName] != null) { outputs = this.outputs; if (OperationContext.Current .IncomingMessageProperties.TryGetValue(HttpRequestMessageProperty.Name, out untypedProperty)) { var httpResponseProperty = untypedProperty as HttpResponseMessageProperty; httpResponseProperty.Headers.Add(PingHeaderName, "Ok"); } returnValue = this.returnValue; return true; } } } var headers = OperationContext.Current.IncomingMessageHeaders; if (headers.FindHeader(PingHeaderName, PingHeaderNamespace) > -1) { outputs = this.outputs; MessageHeader<string> header = new MessageHeader<string>("Ok"); var untyped = header.GetUntypedHeader(PingHeaderName, PingHeaderNamespace); OperationContext.Current.OutgoingMessageHeaders.Add(untyped); returnValue = this.returnValue; return true; } returnValue = null; outputs = null; return false; } } The implementation above looks for the “Ping” header either in the Http Request or the Soap message. The next step is to implement a behavior for attaching this operation invoker to the services we want to monitor. [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, AllowMultiple = false, Inherited = true)] public class PingBehavior : Attribute, IServiceBehavior, IOperationBehavior { public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters) { } public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase) { } public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase) { foreach (var endpoint in serviceDescription.Endpoints) { foreach (var operation in endpoint.Contract.Operations) { if (operation.Behaviors.Find<PingBehavior>() == null) operation.Behaviors.Add(this); } } } public void AddBindingParameters(OperationDescription operationDescription, BindingParameterCollection bindingParameters) { } public void ApplyClientBehavior(OperationDescription operationDescription, ClientOperation clientOperation) { } public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation) { dispatchOperation.Invoker = new PingOperationInvoker(dispatchOperation.Invoker, operationDescription); } public void Validate(OperationDescription operationDescription) { } } As an operation invoker can only be added in an “operation behavior”, a trick I learned in the past is that you can implement a service behavior as well and use the “Validate” method to inject it in all the operations, so the final configuration is much easier and cleaner. You only need to decorate the service with a simple attribute to enable the “Ping” functionality. 
[PingBehavior] public class HelloWorldService : IHelloWorld { public string Hello(string name) { return "Hello " + name; } } On the other hand, the client application needs to send a dummy message with a “Ping” header to detect whether the service is available or not. In order to simplify this task, I created a extension method in the WCF client channel to do this work. public static class ClientChannelExtensions { const string PingNamespace = "http://tellago.serviceModel"; const string PingName = "Ping"; public static bool IsAvailable<TChannel>(this IClientChannel channel, Action<TChannel> operation) { try { using (OperationContextScope scope = new OperationContextScope(channel)) { MessageHeader<string> header = new MessageHeader<string>(PingName); var untyped = header.GetUntypedHeader(PingName, PingNamespace); OperationContext.Current.OutgoingMessageHeaders.Add(untyped); try { operation((TChannel)channel); var headers = OperationContext.Current.IncomingMessageHeaders; if (headers.Any(h => h.Name == PingName && h.Namespace == PingNamespace)) { return true; } else { return false; } } catch (CommunicationException) { return false; } } } catch (Exception) { return false; } } } This extension method basically adds a “Ping” header to the request message, executes the operation passed as argument (Action<TChannel> operation), and looks for the corresponding “Ping” header in the response to see the results. The client application can use this extension with a single line of code, var client = new ServiceReference.HelloWorldClient(); var isAvailable = client.InnerChannel.IsAvailable<IHelloWorld>((c) => c.Hello(null)); The “isAvailable” variable will tell the client application whether the service is available or not. You can download the complete implementation from this location.    

    Read the article

  • SharePoint 2010 Search Error 0x800703fa

    - by Ben
    We have migrated from SharePoint 2007 to 2010. Everything appears to be working correctly except for an intermittent error with search. Occasionally search results will crash for all of our sites, and when we look up the correlation ID we get the following error: Exception when fetching results: System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]: Illegal operation attempted on a registry key that has been marked for deletion. (Exception from HRESULT: 0x800703FA) (Fault Detail is equal to An ExceptionDetail, likely created by IncludeExceptionDetailInFaults=true, whose value is: System.Runtime.InteropServices.COMException: Illegal operation attempted on a registry key that has been marked for deletion. (Exception from HRESULT: 0x800703FA) at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo) at Microsoft.Office.Server.Search.Query.KeywordQueryInternal.Execute() at Microsoft.Office.Server.Search.Query.QueryInternal.Execute(QueryProperties properties) at Microsoft.Office.Server.Search.Administration.SearchServiceApplication.Execute(QueryProperties properties) at SyncInvokeExecute(Object , Object[] , Object[] ) at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs) at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc) We reset IIS and the problem resolves itself for a while. Has anyone come across a permanent fix for this?

    Read the article

  • Prevent ListBox from focusing but leave ListBoxItem(s) focusable (wpf)

    - by modosansreves
    Here is what happens: I have a listbox with items. Listbox has focus. Some item (say, 5th) is selected (has a blue background), but has no 'border'. When I press 'Down' key, the focus moves from ListBox to the first ListBoxItem. (What I want is to make 6th item selected, regardless of the 'border') When I navigate using 'Tab', the Listbox never receives the focus again. But when the collection is emptied and filled again, ListBox itself gets focus, pressing 'Down' moves the focus to the item. How to prevent ListBox from gaining focus? P.S. listBox1.SelectedItem is my own class, I don't know how to make ListBoxItem out of it to .Focus() it. EDIT: the code Xaml: <UserControl.Resources> <me:BooleanToVisibilityConverter x:Key="visibilityConverter"/> <me:BooleanToItalicsConverter x:Key="italicsConverter"/> </UserControl.Resources> <ListBox x:Name="lbItems"> <ListBox.ItemTemplate> <DataTemplate> <Grid> <ProgressBar HorizontalAlignment="Stretch" VerticalAlignment="Bottom" Visibility="{Binding Path=ShowProgress, Converter={StaticResource visibilityConverter}}" Maximum="1" Margin="4,0,0,0" Value="{Binding Progress}" /> <TextBlock Text="{Binding Path=VisualName}" FontStyle="{Binding Path=IsFinished, Converter={StaticResource italicsConverter}}" Margin="4" /> </Grid> </DataTemplate> </ListBox.ItemTemplate> <me:OuterItem Name="Regular Folder" IsFinished="True" Exists="True" IsFolder="True"/> <me:OuterItem Name="Regular Item" IsFinished="True" Exists="True"/> <me:OuterItem Name="Yet to be created" IsFinished="False" Exists="False"/> <me:OuterItem Name="Just created" IsFinished="False" Exists="True"/> <me:OuterItem Name="In progress" IsFinished="False" Exists="True" Progress="0.7"/> </ListBox> where OuterItem is: public class OuterItem : IOuterItem { public Guid Id { get; set; } public string Name { get; set; } public bool IsFolder { get; set; } public bool IsFinished { get; set; } public bool Exists { get; set; } public double Progress { get; set; } /// Code below is of lesser importance, but anyway /// #region Visualization helper properties public bool ShowProgress { get { return !IsFinished && Exists; } } public string VisualName { get { return IsFolder ? "[ " + Name + " ]" : Name; } } #endregion public override string ToString() { if (IsFinished) return Name; if (!Exists) return " ??? " + Name; return Progress.ToString("0.000 ") + Name; } public static OuterItem Get(IOuterItem item) { return new OuterItem() { Id = item.Id, Name = item.Name, IsFolder = item.IsFolder, IsFinished = item.IsFinished, Exists = item.Exists, Progress = item.Progress }; } } ?onverters are: /// Are of lesser importance too (for understanding), but will be useful if you copy-paste to get it working public class BooleanToItalicsConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { bool normal = (bool)value; return normal ? FontStyles.Normal : FontStyles.Italic; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } } public class BooleanToVisibilityConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { bool exists = (bool)value; return exists ? 
Visibility.Visible : Visibility.Collapsed; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } } But most important, is that UserControl.Loaded() has: lbItems.Items.Clear(); lbItems.ItemsSource = fsItems; where fsItems is ObservableCollection<OuterItem>. The usability problem I describe takes place when I Clear() that collection (fsItems) and fill with new items.

    Read the article

  • Marshal a C# struct to C++ VARIANT

    - by jortan
    To start with, I'm not very familiar with the COM-technology, this is the first time I'm working with it so bear with me. I'm trying to call a COM-object function from C#. This is the interface in the idl-file: [id(6), helpstring("vConnectInfo=ConnectInfoType")] HRESULT ConnectTarget([in,out] VARIANT* vConnectInfo); This is the interop interface I got after running tlbimp: void ConnectTarget(ref object vConnectInfo); The c++ code in COM object for the target function: STDMETHODIMP PCommunication::ConnectTarget(VARIANT* vConnectInfo) { if (!((vConnectInfo->vt & VT_ARRAY) && (vConnectInfo->vt & VT_BYREF))) { return E_INVALIDARG; } ConnectInfoType *pConnectInfo = (ConnectInfoType *)((*vConnectInfo->pparray)->pvData); ... } This COM-object is running in another process, it is not in a dll. I can add that the COM object is also used from another program written in C++. In that case there is no problem because in C++ a VARIANT is created and pparray-pvData is set to the connInfo data-structure and then the COM-object is called with the VARIANT as parameter. In C#, as I understand, my struct should be marshalled as a VARIANT automatically. These are two methods I've been using (or actually I've tried a lot more...) to call this method from C#: private void method1_Click(object sender, EventArgs e) { pcom.PCom PCom = new pcom.PCom(); pcom.IGeneralManagementServices mgmt = (pcom.IGeneralManagementServices)PCom; m_ci = new ConnectInfoType(); fillConnInfo(ref m_ci); mgmt.ConnectTarget(m_ci); } In the above case the struct gets marshalled as VT_UNKNOWN. This is a simple case and works if the parameter is not a struct (eg. works for int). private void method4_Click(object sender, EventArgs e) { ConnectInfoType ci = new ConnectInfoType(); fillConnInfo(ref ci); pcom PCom = new pcom.PCom(); pcom.IGeneralManagementServices mgmt = (pcom.IGeneralManagementServices)PCom; ParameterModifier[] pms = new ParameterModifier[1]; ParameterModifier pm = new ParameterModifier(1); pm[0] = true; pms[0] = pm; object[] param = new object[1]; param[0] = ci; object[] args = new object[1]; args[0] = param; mgmt.GetType().InvokeMember("ConnectTarget", BindingFlags.InvokeMethod, null, mgmt, args, pms, null, null); } In this case it gets marshalled as VT_ARRAY | VT_BYREF | VT_VARIANT. The problem is that when debugging the "target-function" ConnectTarget I cannot find the data I send in the SAFEARRAY-struct (or in any other place in memory either) What do I do with a VT_VARIANT? Any ideas on how to get my struct-data? Update: The ConnectInfoType struct: [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)] public class ConnectInfoType { public short network; public short nodeNumber; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 51)] public string connTargPassWord; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)] public string sConnectId; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 16)] public string sConnectPassword; public EnuConnectType eConnectType; public int hConnectHandle; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)] public string sAccessPassword; }; And the corresponding struct in c++: typedef struct ConnectInfoType { short network; short nodeNumber; char connTargPassWord[51]; char sConnectId[8]; char sConnectPassword[16]; EnuConnectType eConnectType; int hConnectHandle; char sAccessPassword[8]; } ConnectInfoType;

    Read the article

  • How do I get jqGrid to work using ASP.NET + JSON on the backend?

    - by briandus
    Hi friends, ok, I'm back. I totally simplified my problem to just three simple fields and I'm still stuck on the same line using the addJSONData method. I've been stuck on this for days and no matter how I rework the ajax call, the json string, blah blah blah...I can NOT get this to work! I can't even get it to work as a function when adding one row of data manually. Can anyone PLEASE post a working sample of jqGrid that works with ASP.NET and JSON? Would you please include 2-3 fields (string, integer and date preferably?) I would be happy to see a working sample of jqGrid and just the manual addition of a JSON object using the addJSONData method. Thanks SO MUCH!! If I ever get this working, I will post a full code sample for all the other posting for help from ASP.NET, JSON users stuck on this as well. Again. THANKS!! tbl.addJSONData(objGridData); //err: tbl.addJSONData is not a function!! Here is what Firebug is showing when I receive this message: • objGridData Object total=1 page=1 records=5 rows=[5] ? Page "1" Records "5" Total "1" Rows [Object ID=1 PartnerID=BCN, Object ID=2 PartnerID=BCN, Object ID=3 PartnerID=BCN, 2 more... 0=Object 1=Object 2=Object 3=Object 4=Object] (index) 0 (prop) ID (value) 1 (prop) PartnerID (value) "BCN" (prop) DateTimeInserted (value) Thu May 29 2008 12:08:45 GMT-0700 (Pacific Daylight Time) * There are three more rows Here is the value of the variable tbl (value) 'Table.scroll' <TABLE cellspacing="0" cellpadding="0" border="0" style="width: 245px;" class="scroll grid_htable"><THEAD><TR><TH class="grid_sort grid_resize" style="width: 55px;"><SPAN> </SPAN><DIV id="jqgh_ID" style="cursor: pointer;">ID <IMG src="http://localhost/DNN5/js/jQuery/jqGrid-3.4.3/themes/sand/images/sort_desc.gif"/></DIV></TH><TH class="grid_resize" style="width: 90px;"><SPAN> </SPAN><DIV id="jqgh_PartnerID" style="cursor: pointer;">PartnerID </DIV></TH><TH class="grid_resize" style="width: 100px;"><SPAN> </SPAN><DIV id="jqgh_DateTimeInserted" style="cursor: pointer;">DateTimeInserted </DIV></TH></TR></THEAD></TABLE> Here is the complete function: $('table.scroll').jqGrid({ datatype: function(postdata) { mtype: "POST", $.ajax({ url: 'EDI.asmx/GetTestJSONString', type: "POST", contentType: "application/json; charset=utf-8", data: "{}", dataType: "text", //not json . 
let me try to parse success: function(msg, st) { if (st == "success") { var gridData; //strip of "d:" notation var result = JSON.parse(msg); for (var property in result) { gridData = result[property]; break; } var objGridData = eval("(" + gridData + ")"); //creates an object with visible data and structure var tbl = jQuery('table.scroll')[0]; alert(objGridData.rows[0].PartnerID); //displays the correct data //tbl.addJSONData(objGridData); //error received: addJSONData not a function //error received: addJSONData not a function (This uses eval as shown in the documentation) //tbl.addJSONData(eval("(" + objGridData + ")")); //the line below evaluates fine, creating an object and visible data and structure //var objGridData = eval("(" + gridData + ")"); //BUT, the same thing will not work here //tbl.addJSONData(eval("(" + gridData + ")")); //FIREBUG SHOWS THIS AS THE VALUE OF gridData: // "{"total":"1","page":"1","records":"5","rows":[{"ID":1,"PartnerID":"BCN","DateTimeInserted":new Date(1214412777787)},{"ID":2,"PartnerID":"BCN","DateTimeInserted":new Date(1212088125000)},{"ID":3,"PartnerID":"BCN","DateTimeInserted":new Date(1212088125547)},{"ID":4,"PartnerID":"EHG","DateTimeInserted":new Date(1235603192033)},{"ID":5,"PartnerID":"EMDEON","DateTimeInserted":new Date(1235603192000)}]}" } } }); }, jsonReader: { root: "rows", //arry containing actual data page: "page", //current page total: "total", //total pages for the query records: "records", //total number of records repeatitems: false, id: "ID" //index of the column with the PK in it }, colNames: [ 'ID', 'PartnerID', 'DateTimeInserted' ], colModel: [ { name: 'ID', index: 'ID', width: 55 }, { name: 'PartnerID', index: 'PartnerID', width: 90 }, { name: 'DateTimeInserted', index: 'DateTimeInserted', width: 100}], rowNum: 10, rowList: [10, 20, 30], imgpath: 'http://localhost/DNN5/js/jQuery/jqGrid-3.4.3/themes/sand/images', pager: jQuery('#pager'), sortname: 'ID', viewrecords: true, sortorder: "desc", caption: "TEST Example")};

    Read the article

  • Merge DataGrid ColumnHeaders

    - by Vishal
    I would like to merge two column-Headers. Before you go and mark this question as duplicate please read further. I don't want a super-Header. I just want to merge two column-headers. Take a look at image below: Can you see two columns with headers Mobile Number 1 and Mobile Number 2? I want to show there only 1 column header as Mobile Numbers. Here is the XAML used for creating above mentioned dataGrid: <DataGrid Grid.Row="1" Margin="0,10,0,0" ItemsSource="{Binding Ledgers}" IsReadOnly="True" AutoGenerateColumns="False"> <DataGrid.Columns> <DataGridTextColumn Header="Customer Name" Binding="{Binding LedgerName}" /> <DataGridTextColumn Header="City" Binding="{Binding City}" /> <DataGridTextColumn Header="Mobile Number 1" Binding="{Binding MobileNo1}" /> <DataGridTextColumn Header="Mobile Number 2" Binding="{Binding MobileNo2}" /> <DataGridTextColumn Header="Opening Balance" Binding="{Binding OpeningBalance}" /> </DataGrid.Columns> </DataGrid> Update1: Update2 I have created a converter as follows: public class MobileNumberFormatConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { if (value != null && value != DependencyProperty.UnsetValue) { if (value.ToString().Length <= 15) { int spacesToAdd = 15 - value.ToString().Length; string s = value.ToString().PadRight(value.ToString().Length + spacesToAdd); return s; } return value.ToString().Substring(0, value.ToString().Length - 3) + "..."; } return ""; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } } I have used it in XAML as follows: <DataGridTextColumn Header="Mobile Numbers"> <DataGridTextColumn.Binding> <MultiBinding StringFormat=" {0} {1}"> <Binding Path="MobileNo1" Converter="{StaticResource mobileNumberFormatConverter}"/> <Binding Path="MobileNo2" Converter="{StaticResource mobileNumberFormatConverter}"/> </MultiBinding> </DataGridTextColumn.Binding> </DataGridTextColumn> The output I got: Update3: At last I got the desired output. 
Here is the code for Converter: public class MobileNumberFormatConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { if (value != null && value != DependencyProperty.UnsetValue) { if (parameter.ToString().ToUpper() == "N") { if (value.ToString().Length <= 15) { return value.ToString(); } else { return value.ToString().Substring(0, 12); } } else if (parameter.ToString().ToUpper() == "S") { if (value.ToString().Length <= 15) { int spacesToAdd = 15 - value.ToString().Length; string spaces = ""; return spaces.PadRight(spacesToAdd); } else { return "..."; } } } return ""; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } } Here is my XAML: <DataGridTemplateColumn Header="Mobile Numbers"> <DataGridTemplateColumn.CellTemplate> <DataTemplate> <TextBlock> <Run Text="{Binding MobileNo1, Converter={StaticResource mobileNumberFormatConverter}, ConverterParameter=N}" /> <Run Text="{Binding MobileNo1, Converter={StaticResource mobileNumberFormatConverter}, ConverterParameter=S}" FontFamily="Consolas"/> <Run Text=" " FontFamily="Consolas"/> <Run Text="{Binding MobileNo2, Converter={StaticResource mobileNumberFormatConverter}, ConverterParameter=N}" /> <Run Text="{Binding MobileNo2, Converter={StaticResource mobileNumberFormatConverter}, ConverterParameter=S}" FontFamily="Consolas"/> </TextBlock> </DataTemplate> </DataGridTemplateColumn.CellTemplate> </DataGridTemplateColumn> Output:

    Read the article

  • Getting Query Parameters in Javascript

    - by PhubarBaz
    I find myself needing to get query parameters that are passed into a web app on the URL quite often. At first I wrote a function that creates an associative array (aka object) with all of the parameters as keys and returns it. But then I was looking at the revealing module pattern, a nice javascript design pattern designed to hide private functions, and came up with a way to do this without even calling a function. What I came up with was this nice little object that automatically initializes itself into the same associative array that the function call did previously. // Creates associative array (object) of query params var QueryParameters = (function() {     var result = {};     if (window.location.search)     {         // split up the query string and store in an associative array         var params = window.location.search.slice(1).split("&");         for (var i = 0; i < params.length; i++)         {             var tmp = params[i].split("=");             result[tmp[0]] = unescape(tmp[1]);         }     }     return result; }()); Now all you have to do to get the query parameters is just reference them from the QueryParameters object. There is no need to create a new object or call any function to initialize it. var debug = (QueryParameters.debug === "true"); or if (QueryParameters["debug"]) doSomeDebugging(); or loop through all of the parameters. for (var param in QueryParameters) var value = QueryParameters[param]; Hope you find this object useful.

    Read the article

  • Android SDK having trouble with ADB

    - by MowDownJoe
    So, I installed the Android SDK, Eclipse, and the ADT. Upon firing up Eclipse the first time after setting up the ADT, this error popped up:

[2012-05-29 12:11:06 - adb] /home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
[2012-05-29 12:11:06 - adb] 'adb version' failed! /home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
[2012-05-29 12:11:06 - adb] Failed to parse the output of 'adb version': Standard Output was: Error Output was: /home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
[2012-05-29 12:11:06 - adb] /home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
[2012-05-29 12:11:06 - adb] 'adb version' failed! /home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
[2012-05-29 12:11:06 - adb] Failed to parse the output of 'adb version': Standard Output was: Error Output was: /home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory

I'm not quite sure how this happened. It feels weird that a library is missing there. I'm using Ubuntu 12.04. Having no adb is a pretty big blow for an Android developer. How do I fix this?

    Read the article

  • Duck checker in Python: does one exist?

    - by elliot42
    Python uses duck-typing, rather than static type checking. But many of the same concerns ultimately apply: does an object have the desired methods and attributes? Do those attributes have valid, in-range values? Whether you're writing constraints in code, or writing test cases, or validating user input, or just debugging, inevitably somewhere you'll need to verify that an object is still in a proper state--that it still "looks like a duck" and "quacks like a duck." In statically typed languages you can simply declare "int x", and anytime you create or mutate x, it will always be a valid int. It seems feasible to decorate a Python object to ensure that it is valid under certain constraints, and that every time that object is mutated it is still valid under those constraints. Ideally there would be a simple declarative syntax to express "hasattr length and length is non-negative" (not in those words. Not unlike Rails validators, but less human-language and more programming-language). You could think of this as an ad-hoc interface/type system, or you could think of it as an ever-present object-level unit test. Does such a library exist to declare and validate constraint/duck-checking on Python objects? Is this an unreasonable tool to want? :) (Thanks!)

Contrived example:

rectangle = {'length': 5, 'width': 10}

# We live in a fictional universe where multiplication is super expensive.
# Therefore any time we multiply, we need to cache the results.
def area(rect):
    if 'area' in rect:
        return rect['area']
    rect['area'] = rect['length'] * rect['width']
    return rect['area']

print area(rectangle)
rectangle['length'] = 15
print area(rectangle)  # compare expected vs. actual output!

# imagine the same thing with object attributes rather than dictionary keys.

    Read the article

  • Is there a pattern or best practice for passing a reference type to multiple classes vs a static class?

    - by Dave
    My .NET application creates HTML files, and as such the structure looks like this:

variable myData
BuildHomePage()
    variable graph = new BuildGraphPage(myData)
    variable table = BuildTablePage(myData)

BuildGraphPage and BuildTablePage both require access to data, the myData object. In the above example I've passed the myData object to two constructors. This is what I'm doing now, in my current project. The myData object and its properties are all read-only. The problem is that the number of pages which require this object has grown. In the real project there are currently 4, but the new spec calls for about 20. Passing this object to the constructor of each new object and assigning it to a field is a little time consuming, but not a hardship! This poses the question of whether it's better practice to continue as I have, or to refactor and create a new static class for myData which can be referenced from anywhere in my project. I guess my ability to use Google is poor, because I did try to find an appropriate pattern, as I am sure this type of design must be commonplace, but my search returned nothing. Is there a pattern which is suited to this, or do best practices lean towards one implementation over another?
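    For illustration only (none of these type names come from the question), here is a minimal C# sketch of the two approaches being weighed: passing a shared, read-only data object into each page builder's constructor versus exposing it through a static holder class. ReportData, GraphPage, TablePage and StaticReportData are hypothetical names.

using System;

// Option 1: constructor injection. The shared, read-only data is an explicit dependency.
public class ReportData
{
    public ReportData(string title) { Title = title; }
    public string Title { get; private set; }   // read-only after construction
}

public class GraphPage
{
    private readonly ReportData _data;
    public GraphPage(ReportData data) { _data = data; }
    public string Build() { return "<h1>" + _data.Title + " (graph)</h1>"; }
}

// Option 2: a static holder. Less typing per page, but the dependency is hidden
// and every page is coupled to a global that must be set before use.
public static class StaticReportData
{
    public static ReportData Current { get; set; }
}

public class TablePage
{
    public string Build() { return "<h1>" + StaticReportData.Current.Title + " (table)</h1>"; }
}

public static class PageDemo
{
    public static void Main()
    {
        var data = new ReportData("Monthly Report");

        // Explicit dependency: easy to substitute a fake ReportData in a unit test.
        Console.WriteLine(new GraphPage(data).Build());

        // Hidden dependency: forgetting this assignment fails at run time, not compile time.
        StaticReportData.Current = data;
        Console.WriteLine(new TablePage().Build());
    }
}

    Constructor injection keeps each page builder testable in isolation and makes the data flow visible, which is why it is usually preferred even when passing the same object twenty times feels repetitive; a static holder trades that visibility for convenience.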

    Read the article

  • BizTalk HL7 Receive Pipeline Exception

    - by Paul Petrov
    If you experience the sequence of errors below with BizTalk HL7 MLLP receive ports you may need to request a hotfix from Microsoft. The knowledge base article number is 2454887 but it's still not available on the KB site. The hotfix is recently released and you may need to open a support ticket to get to it. It requires three other hotfixes installed:

·         970492 (DASM 3.7.502.2)
·         973909 (additional ACK codes)
·         981442 (Microsoft.solutions.btahl7.mllp.dll 3.7.509.2)

If the exceptions below repeatedly appear in the event log you most likely would be helped by the hotfix:

Fatal error encountered in 2XDasm. Exception information is Cannot access a disposed object. Object name: 'CEventingReadStream'.

There was a failure executing the receive pipeline: "BTAHL72XPipelines.BTAHL72XReceivePipeline, BTAHL72XPipelines, Version=1.3.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Source: "BTAHL7 2.X Disassembler" Receive Port: "ReceivePortName" URI: "IPAddress:portNumber" Reason: Cannot access a disposed object. Object name: 'CEventingReadStream'.

The Messaging Engine received an error from transport adapter "MLLP" when notifying the adapter with the BatchComplete event. Reason "Object reference not set to an instance of an object."

We've been through a lot of troubleshooting with Microsoft Product Support and they did a great job finding an issue and releasing a fix.

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ are __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number where the value is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expandable macro, it is an attribute, but it serves exactly the same purpose. Now we do get

CallerLineNumberAttribute  == __LINE__
CallerFilePathAttribute    == __FILE__
CallerMemberNameAttribute  == __FUNCTION__ (MSVC extension)

The most important one is CallerMemberNameAttribute, which is very useful to implement the INotifyPropertyChanged interface without the need to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and you get the property name as a string inserted directly by the C# compiler at compile time.

public string UserName
{
    get { return _userName; }
    set
    {
        _userName = value;
        RaisePropertyChanged(); // no more RaisePropertyChanged("UserName")!
    }
}

protected void RaisePropertyChanged([CallerMemberName] string member = "")
{
    var copy = PropertyChanged;
    if (copy != null)
    {
        copy(this, new PropertyChangedEventArgs(member));
    }
}

Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose this feature for tracing to get your hands on the method name of your caller, along with other stuff, very fast now. All infos are added during compile time, which is much faster than other approaches like walking the stack. The example on MSDN shows the usage of this attribute:

public static void TraceMessage(string message,
                                [CallerMemberName] string memberName = "",
                                [CallerFilePath] string sourceFilePath = "",
                                [CallerLineNumber] int sourceLineNumber = 0)
{
    Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber);
}

When I think of tracing I usually want to have an API which allows me to

Trace method enter and leave
Trace messages with a severity like Info, Warning, Error

When I print a trace message it is very useful to print out the method and type name as well. So your API must either be able to pass the method and type name as strings or extract them automatically by walking back one stack frame and fetching the infos from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.
A usable trace API might therefore look like this:

enum TraceTypes
{
    None       = 0,
    EnterLeave = 1 << 0,
    Info       = 1 << 1,
    Warn       = 1 << 2,
    Error      = 1 << 3
}

class Tracer : IDisposable
{
    string Type;
    string Method;

    public Tracer(string type, string method)
    {
        Type = type;
        Method = method;
        if (IsEnabled(TraceTypes.EnterLeave, Type, Method))
        {
        }
    }

    private bool IsEnabled(TraceTypes traceTypes, string Type, string Method)
    {
        // Do checking here if tracing is enabled
        return false;
    }

    public void Info(string fmt, params object[] args) { }
    public void Warn(string fmt, params object[] args) { }
    public void Error(string fmt, params object[] args) { }

    public static void Info(string type, string method, string fmt, params object[] args) { }
    public static void Warn(string type, string method, string fmt, params object[] args) { }
    public static void Error(string type, string method, string fmt, params object[] args) { }

    public void Dispose()
    {
        // trace method leave
    }
}

This minimal trace API is very fast but hard to maintain, since you need to pass in the type and method name as hard-coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptable trace API should have a method signature like Tracexxx(… string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we put them before the format string we would need to make them optional as well, which would mean the compiler would need to figure out what our trace message and arguments are (not likely), or we would need to specify everything explicitly just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method, but that is ugly. I am not sure whether anybody inside MS agrees that the above API is reasonable to have, or (more likely) the whole talk about using this feature for diagnostic purposes was not about a core feature at all but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params keyword, another set of optional arguments which are always filled by the compiler, but I do not know if this is an easy one.

The thing I am missing much more is the not yet provided CallerType attribute. But not in the way you would think of. In the API above I did add some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check if tracing for this type is enabled at all. The data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which allows me basically to attach run-time data to any existing type object in a super fast way. The key to success is the usage of generics.

class Tracer<T> : IDisposable
{
    string Method;

    public Tracer(string method)
    {
        if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
        {
        }
    }

    public void Dispose()
    {
        if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
        {
        }
    }

    public static void Info(string fmt, params object[] args) { }

    /// <summary>
    /// Every type gets its own instance with a fresh set of variables to describe the
    /// current filter status.
    /// </summary>
    /// <typeparam name="UsingType"></typeparam>
    internal class TraceData<UsingType>
    {
        internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

        public bool IsInitialized = false; // flag if we need to reinit the trace data in case of reconfigured trace settings at runtime

        public TraceTypes Enabled = TraceTypes.None; // Enabled trace levels for this type
    }
}

We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a TraceData static instance which, due to the nature of generics, is a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter performance for the types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list you can reconfigure tracing at run time safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type,TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always need to pay for the lookup, which involves hash code generation, an equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal perf overhead. But it is cumbersome to write the generic type argument explicitly every time, and worse, if you refactor code and move parts of it to other classes it might be that you can no longer configure tracing correctly. I would therefore like to decorate my type with an attribute

[CallerType]
class Tracer<T> : IDisposable

to tell the compiler to fill in the generic type argument automatically.

class Program
{
    static void Main(string[] args)
    {
        using (var t = new Tracer()) // equivalent to new Tracer<Program>()
        {

That would be really useful and super fast, since you do not need to pass any type object around but you do have full type infos at hand. This change would be breaking if another non-generic type exists in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since you can already get conflicts today if two generic types of the same name are defined in different namespaces. This would be only a variation of that issue. When you think about this further you can add more features, like tracing the exception in your Dispose method if the method is left with an exception, with that little trick I wrote about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can e.g.

Reconfigure tracing at run time.
Take a memory dump when a specific method is left with a specific exception.
Throw an exception when a specific trace statement is hit (useful for testing error conditions).
Execute a passed delegate which e.g. dumps additional state when enabled.
Write data to an in-memory ring buffer and dump it when specific events occur (e.g. a method is left with an exception, triggered from outside).
Write data to an output device.
….
This stuff is really useful to have when your code is in production on a mission-critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (ok, with .NET 4 not anymore, except if you enable a compatibility flag) where you would like to have a minidump, or you have reached, after two weeks of operation, a state where you need a full memory dump at a specific point in time in the middle of a transaction. On my older machine I get, with this super fast approach, 50 million traces/s when tracing is disabled. When I know that tracing is enabled for this type I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further whether a specific action or output device is configured for this method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore since I do want to do something now. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred to get direct access to the MethodHandle and not to the stringified version of it. And I really would like to see a CallerType attribute implemented to fill in the generic type argument of the call site, to augment the static CLR type data with run-time data.
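The post mentions "that little trick" for noticing inside Dispose whether the method is being left because of an exception, but the trick itself is not reproduced in this excerpt. One commonly used way to do it on the .NET Framework, shown here only as a hedged sketch and not as the author's exact code, is to ask the Marshal class whether an exception is currently propagating on the calling thread:

using System;
using System.Runtime.InteropServices;

class ExceptionAwareTracer : IDisposable
{
    public void Dispose()
    {
        // Non-zero exception pointers mean Dispose is running while an exception
        // is propagating, i.e. the using block is being left abnormally.
        // (Works on the .NET Framework; the API is marked obsolete in newer runtimes.)
        if (Marshal.GetExceptionPointers() != IntPtr.Zero)
        {
            Console.WriteLine("Method was left with an exception");
        }
        else
        {
            Console.WriteLine("Method was left normally");
        }
    }
}

class TracerDemo
{
    static void Main()
    {
        try
        {
            using (new ExceptionAwareTracer())
            {
                throw new InvalidOperationException("boom");
            }
        }
        catch (InvalidOperationException)
        {
            // swallowed so the demo can print both code paths on a second, normal run
        }
    }
}

Whether this matches the trick the author had in mind is an assumption; it is simply one way to make a Dispose-based tracer exception-aware without changing the call sites.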

    Read the article

  • can anyone help me through the preparation of Eclipse IDE for android developer in ubuntu 12.04?

    - by csbl
    I'm new to Linux, in this particular case to Ubuntu. I have a small Android project I have to finish by this Friday and I'm still stuck with installing and preparing the development environment. The only thing I did was install the Eclipse IDE. I'm still missing the SDK, Java and anything else that might be needed. Can someone help me through this? It's only because I'm running out of time to develop, or else I would embark on a deeper investigation of this OS. I tried the step to install Android platforms through Eclipse - Help - Install New Software, and I got the following error messages at the end of the process:

[2012-06-06 17:35:56 - adb] /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
[2012-06-06 17:35:56 - adb] 'adb version' failed! /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
[2012-06-06 17:35:56 - adb] Failed to parse the output of 'adb version': Standard Output was: Error Output was: /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
[2012-06-06 17:35:56 - adb] /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
[2012-06-06 17:35:56 - adb] 'adb version' failed! /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
[2012-06-06 17:35:56 - adb] Failed to parse the output of 'adb version': Standard Output was: Error Output was: /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory

Can anyone help, please??

    Read the article

  • EBS Seed Data Comparison Reports Now Available

    - by Steven Chan (Oracle Development)
    Earlier this year we released a reporting tool that reports on the differences in E-Business Suite database objects between one release and another.  That's a very useful reference, but EBS defaults are delivered as seed data within the database objects themselves. What about the differences in this seed data between one release and another? I'm pleased to announce the availability of a new tool that provides comparison reports of E-Business Suite seed data between EBS 11.5.10.2, 12.0.4, 12.0.6, 12.1.1, and 12.1.3.  This new tool complements the information in the data model comparison tool.  You can download the new seed data comparison tool here: EBS ATG Seed Data Comparison Report (Note 1327399.1)

The EBS ATG Seed Data Comparison Report provides reports on the changes between different EBS releases, based upon the seed data changes delivered by the product data loader files (.ldt extension) and the EBS ATG loader control (.lct extension) files.  You can use this new tool to report on the differences in the following types of seed data:

Concurrent Program definitions
Descriptive Flexfield entity definitions
Application Object Library profile option definitions
Application Object Library (AOL) key flexfield, function, lookups, value set definitions
Application Object Library (AOL) menu and responsibility definitions
Application Object Library messages
Application Object Library request set definitions
Application Object Library printer styles definitions
Report Manager / WebADI component and integrator entity definitions
Business Intelligence Publisher (BI Publisher) entity definitions
BIS Request Set Generator entity definitions
... and more

Your feedback is welcome

This new tool was produced by our hard-working EBS Release Management team, and they're actively seeking your feedback.  Please feel free to share your experiences with it by posting a comment here.  You can also request enhancements to this tool via the distribution list address included in Note 1327399.1.

Related Articles

Oracle E-Business Suite Release 12.1.3 Now Available
New Whitepaper: Upgrading EBS 11i Forms + OA Framework Personalizations to EBS 12
EBS 12.0 Minimum Requirements for Extended Support Finalized
Five Key Resources for Upgrading to E-Business Suite Release 12
E-Business Suite Release 12.1.1 Consolidated Upgrade Patch 1 Now Available
New Whitepaper: Planning Your E-Business Suite Upgrade from Release 11i to 12.1

    Read the article

  • When creating an library published on CodePlex, how "bad" would it be for the unit-test projects to rely on commercial products?

    - by Lasse V. Karlsen
    I have started a project on CodePlex for a WebDAV server implementation for .NET, so that I can host a WebDAV server in my own programs. This is both a learning/research project (WebDAV + the server portion) and a project I think I can have much fun with, both in terms of making it and using it. However, I see a need to mock types here in order to unit-test properly. For instance, I will be relying on HttpListener for the web server portion of the WebDAV server, and since this type has no interface and is sealed, I cannot easily make mocks or stubs out of it, unless I use something like TypeMock. So if I used TypeMock in the unit-test projects of this library, how bad would this be for potential users? The projects are made in C# 3.5 for .NET 3.5 and 4.0, and the project files were created with Visual Studio 2010 Professional. The actual class libraries you would end up referencing in your software would of course not be encumbered with anything remotely like this, only the unit-test libraries. What are your thoughts on this? As an example, I have in my old code-base, which is private, the ability to initiate a WebDAV server with just this:

var server = new WebDAVServer();

This constructs, and owns, an HttpListener instance internally, and I would like to verify through unit tests that if I dispose of this server object, the internal listener is disposed of. If, on the other hand, I use the overload where I hand it a listener object, this object should not be disposed of. Short of exposing the internal listener object to the outside world, something I'm a bit loath to do, how can I ensure in a good way that the object was disposed of? With TypeMock I can mock away parts of this object even though it isn't accessed through interfaces. The alternative would be for me to wrap everything in wrapper classes, where I have complete control.
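    For comparison, here is a minimal C# sketch of the wrapper-class alternative mentioned at the end: hiding HttpListener behind a small interface that the server consumes, so the tests can hand in a fake listener and assert on disposal without any TypeMock dependency. The IHttpListener, HttpListenerAdapter and FakeListener names and the WebDAVServer members shown here are hypothetical illustrations, not the actual CodePlex project's API.

using System;
using System.Net;

// A thin seam over HttpListener so tests do not need to mock a sealed BCL type.
public interface IHttpListener : IDisposable
{
    void Start();
    void Stop();
}

// Production adapter that forwards to the real HttpListener.
public class HttpListenerAdapter : IHttpListener
{
    private readonly HttpListener _listener = new HttpListener();
    public void Start() { _listener.Start(); }
    public void Stop() { _listener.Stop(); }
    public void Dispose() { _listener.Close(); }
}

public class WebDAVServer : IDisposable
{
    private readonly IHttpListener _listener;
    private readonly bool _ownsListener;

    // Parameterless overload owns its listener and must dispose it.
    public WebDAVServer() : this(new HttpListenerAdapter(), true) { }

    // A caller-supplied listener is not owned and must not be disposed here.
    public WebDAVServer(IHttpListener listener) : this(listener, false) { }

    private WebDAVServer(IHttpListener listener, bool ownsListener)
    {
        _listener = listener;
        _ownsListener = ownsListener;
    }

    public void Dispose()
    {
        if (_ownsListener)
            _listener.Dispose();
    }
}

// In a unit test, a hand-rolled fake records whether Dispose was called:
//   var fake = new FakeListener();
//   using (new WebDAVServer(fake)) { }
//   // assert that fake.Disposed is still false
public class FakeListener : IHttpListener
{
    public bool Disposed;
    public void Start() { }
    public void Stop() { }
    public void Dispose() { Disposed = true; }
}

    With this shape the ownership rule ("dispose only what you created") becomes testable with plain fakes, at the cost of one extra adapter type; whether that trade-off is preferable to taking a TypeMock dependency in the test projects is exactly the judgment call the question is asking about.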

    Read the article
