Search Results

Search found 5554 results on 223 pages for 'for attribute'.


  • Subterranean IL: Custom modifiers

    - by Simon Cooper
    In IL, volatile is an instruction prefix used to set a memory barrier at that instruction. In C#, however, volatile is applied to a field to indicate that all accesses to that field should be prefixed with volatile. As I mentioned in my previous post, this means that the field definition needs to store this information somehow, as such a field could be accessed from another assembly. However, IL has no concept of a 'volatile field'. How is this information stored?

    Attributes

    The standard way of solving this is to apply a VolatileAttribute or similar to the field; this extra metadata notifies the C# compiler that all loads and stores to that field should use the volatile prefix. However, there is a problem with this approach, namely, the .NET C++ compiler. C++ allows methods to be overloaded using qualifiers, like volatile or const, on the parameters; this is perfectly legal C++:

      public ref class VolatileMethods {
          void Method(int *i) {}
          void Method(volatile int *i) {}
      }

    If volatile were specified using a custom attribute, the VolatileMethods class wouldn't be compilable to IL, as there would be nothing to differentiate the two methods from each other. This is where custom modifiers come in.

    Custom modifiers

    Custom modifiers are similar to custom attributes, but instead of being applied to an IL element separately from its declaration, they are embedded within the field or parameter's type signature itself. The VolatileMethods class would be compiled to the following IL:

      .class public VolatileMethods {
          .method public instance void Method(int32* i) {}
          .method public instance void Method(
              int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)* i) {}
      }

    The modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile) is the custom modifier. This adds a TypeDef or TypeRef token to the signature of the field or parameter, and even though modifiers are mostly ignored by the CLR when it executes the program, they allow methods and fields to be overloaded in ways that wouldn't be possible using attributes. Because the modifiers are part of the signature, they need to be fully specified when calling such a method in IL:

      call instance void Method(
          int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)*)

    There are two ways of applying modifiers: modreq specifies required modifiers (like IsVolatile), and modopt specifies optional modifiers that can be ignored by compilers (like IsLong or IsConst). The types specified as modifier arguments are simple placeholders; if you have a look at the definitions of IsVolatile and IsLong, they are completely empty. They exist solely to be referenced by a modifier. Custom modifiers are used extensively by the C++ compiler to specify concepts that aren't expressible in IL but still need to be taken into account when calling method overloads.

    C++ and C#

    That's all very well and good, but how does this affect C#? Well, the C++ compiler uses modreq(IsVolatile) to specify volatility on both method parameters and fields, as it would be slightly odd to have the same concept represented using a modifier or an attribute depending on what it was applied to. Once you've compiled your C++ project, it can be referenced and used from C#, so the C# compiler has to recognise the modreq(IsVolatile) custom modifier applied to fields, and vice versa.
    So, even though you can't overload fields or parameters with volatile in C#, volatile needs to be expressed using a custom modifier rather than an attribute to guarantee correct interoperability and behaviour with any C++ DLLs that happen to come along. Next up: a closer look at attributes, and how certain attributes compile in unexpected ways.
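
    You can observe this modifier from C# itself via reflection. Below is a minimal sketch (the VolatileHolder type is invented for illustration); FieldInfo.GetRequiredCustomModifiers() returns the modreq types embedded in the field's signature:

      using System;
      using System.Reflection;

      class VolatileHolder
      {
          public volatile int Counter;
      }

      static class ModifierDump
      {
          static void Main()
          {
              // The modifier lives in the field's signature, not in its attributes.
              FieldInfo field = typeof(VolatileHolder).GetField("Counter");
              foreach (Type modifier in field.GetRequiredCustomModifiers())
              {
                  // Expected to print System.Runtime.CompilerServices.IsVolatile
                  // for a C# volatile field.
                  Console.WriteLine(modifier);
              }
          }
      }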

    Read the article

  • SQL SERVER – What are Actions in SSAS and How to Make a Reporting Action

    - by Pinal Dave
    Actions are used for customized browsing and drilling of data for the end user. An action is an event that a user can raise while accessing the cube data. Actions are used in cube browsers such as Excel and are triggered when a user in a client tool clicks on a particular member, level, dimension, cell, or maybe the cube itself. For example, a user might be able to see a Reporting Services report, open a web page or drill through to detailed information related to the cube data. Analysis Services supports three types of actions: Report, Drill-through and Standard actions. In this blog post, I will explain the Reporting action. The objective of this action is to return a report with details of the products where the sales amount is greater than 10,000 in the cube browser analysis. You need to create a basic cube first with the facts and dimensions you want in the analysis. Following are the steps to create a reporting action:
    1.) Go to SQL Server Data Tools and open the Analysis Services project. Navigate to Actions and click on New Reporting Action.
    2.) Specify the name of the action and choose the target type as attribute members, since we have to create the action on the members of an attribute.
    3.) Specify the target object of your report action. The target object is the dimension or attribute on which you want the report to appear. In our case it is the product name.
    4.) Next you have to define the condition on which you want the report link to appear. This is an optional feature. In this example we are specifying a condition which checks whether the sales amount is greater than 10,000, so that the link appears only for those products where the defined condition is met.
    5.) Next you have to specify the server name on which the report is present, the report path, and the report format in which you want the report to appear.
    6.) Additionally, you can specify the parameters. As with the conditional expression, the parameters should be a valid MDX expression. The parameter name should be the same as the one defined in the report.
    7.) Deploy your solution after you are done specifying the parameters and go to the cube browser.
    8.) Click on the "Analyze in Excel" button; this will open your cube in Excel.
    9.) Make an analysis which shows product names and their sales amount.
    10.) Right-click on a product where the sales amount is greater than 10,000 and you will see the reporting action link. Click on that and you will be taken to your Reporting Services report.
    11.) Clicking on the link will take you to the URL of the report. I created this report using the report project wizard in SQL Server Data Tools.
    So, this is how we can launch reports from a cube browser. Similarly, you can open web pages, run applications and a number of other tasks. Koenig Solutions offers SSAS training which covers all of Analysis Services, including Reporting, in great detail. In my next blog post I will talk about drill-through actions. Author: Namita Sharma, Senior Corporate Trainer at Koenig Solutions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSAS

    Read the article

  • forEach and Facelets - a bugfarm just waiting for harvest

    - by Duncan Mills
    An issue that I've encountered before and saw again today seems worthy of a little write-up. It's all to do with a subtle yet highly important difference in behaviour between JSF 2 running with JSP and running on Facelets (.jsf pages). The incident I saw today can be seen as a report on the ADF EMG bugzilla (Issue 53) and in a blog posting by Ulrich Gerkmann-Bartels, who reported the issue to the EMG. Ulrich's issue nicely shows how tricky this particular gotcha can be. On the surface, the problem is squarely the fault of MDS, but underneath, MDS is in fact innocent. To summarize the problem in a simpler testcase than Ulrich's example, here's a simple fragment of code:

      <af:forEach var="item" items="#{itemList.items}">
        <af:commandLink id="cl1" text="#{item.label}" action="#{item.doAction}" partialSubmit="true"/>
      </af:forEach>

    Looks innocent enough, right? We see a bunch of links printed out; great. The issue here, though, is the id attribute. Logically you can kind of see the problem. The forEach loop is creating (presumably) multiple instances of the commandLink, but only one id is specified - cl1. We know that IDs have to be unique within a JSF component tree, so that must be a bad thing? The problem is that JSF under JSP implements some hacks when the component tree is generated to transparently fix this problem for you. Behind the scenes it ensures that each instance really does have a unique id. Really nice of it to do so, thank you very much. However (you could see this coming), the same is not true when running with Facelets (this is under 11.1.2.n): in that case, what you put for the id is what you get, and JSF does not mess around in the background for you. So you end up with a component tree that contains duplicate ids which are only created at runtime, and subtle chaos can ensue. The symptoms are wide and varied, from something pretty obscure such as the combination Ulrich uncovered, to something as frustrating as your ActionListener just not being triggered. And yes, I've wasted hours on just such an issue.

    The Solution

    Once you're aware of this one it's really simple to fix; there are two options:
    1.) Remove the id attribute altogether on components that will cause some kind of submission within the forEach loop, and let JSF do the right thing in generating them. Then you'll be assured of uniqueness.
    2.) Use the var attribute of the loop to generate a unique id for each child instance, for example in the above case: <af:commandLink id="cl1_#{item.index}" ... />.

    So, one to watch out for in your upgrades to JSF 2 and one, perhaps, for your coding standards today to prepare for. For completeness, here's the reference to the underlying JSF issue that's at the heart of this: JAVASERVERFACES-1527

    Read the article

  • Computing a normal matrix in conjunction with gluLookAt

    - by Chris Smith
    I have a hand-rolled camera class that converts yaw, pitch, and roll angles into forward, side, and up vectors suitable for calling gluLookAt. Using this camera class I can modify the model-view matrix to move about the 3D world just fine. However, I am having trouble when using this camera class (and the associated model-view matrix) when trying to perform directional lighting in my vertex shader. The problem is that the light direction, (0, 1, 0) for example, is relative to where the camera is looking and not to the actual world coordinates. (Or is this eye coordinates vs. model coordinates?) I would like the light direction to be unaffected by the camera's viewing direction. For example, when the camera is looking down the Z axis the ground is lit correctly. However, if I point the camera straight at the ground, then it goes dark. This is (I think) because the light direction is parallel with the camera's 'up' vector, which is perpendicular to the ground's normal vector. I tried computing the normal matrix without taking the camera's model-view into account, but then none of my objects were rotated correctly. Sorry if this sounds vague. I suspect there is a straightforward answer, but I'm not 100% clear on how the normal matrix should be used for transforming vertex normals in my vertex shader. For reference, here is pseudocode for my rendering loop:

      pMatrix = new Matrix();
      pMatrix = makePerspective(...)

      mvMatrix = new Matrix()
      camera.apply(mvMatrix); // Calls gluLookAt

      // Move the object into position.
      mvMatrix.translatev(position);
      mvMatrix.rotatef(rotation.x, 1, 0, 0);
      mvMatrix.rotatef(rotation.y, 0, 1, 0);
      mvMatrix.rotatef(rotation.z, 0, 0, 1);

      var nMatrix = new Matrix();
      nMatrix.set(mvMatrix.get().getInverse().getTranspose());

      // Set vertex shader uniforms.
      gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, new Float32Array(pMatrix.getFlattened()));
      gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, new Float32Array(mvMatrix.getFlattened()));
      gl.uniformMatrix4fv(shaderProgram.nMatrixUniform, false, new Float32Array(nMatrix.getFlattened()));

      // ...
      gl.drawElements(gl.TRIANGLES, this.vertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);

    And the corresponding vertex shader:

      // Attributes
      attribute vec3 aVertexPosition;
      attribute vec4 aVertexColor;
      attribute vec3 aVertexNormal;

      // Uniforms
      uniform mat4 uMVMatrix;
      uniform mat4 uNMatrix;
      uniform mat4 uPMatrix;

      // Varyings
      varying vec4 vColor;

      // Constants
      const vec3 LIGHT_DIRECTION = vec3(0, 1, 0); // Opposite direction of photons.
      const vec4 AMBIENT_COLOR = vec4(0.2, 0.2, 0.2, 1.0);

      float ComputeLighting() {
          vec4 transformedNormal = vec4(aVertexNormal.xyz, 1.0);
          transformedNormal = uNMatrix * transformedNormal;
          float base = dot(normalize(transformedNormal.xyz), normalize(LIGHT_DIRECTION));
          return max(base, 0.0);
      }

      void main(void) {
          gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
          float lightWeight = ComputeLighting();
          vColor = vec4(aVertexColor.xyz * lightWeight, 1.0) + AMBIENT_COLOR;
      }

    Note that I am using WebGL, so if the answer is "use glFixThisProblem(...)", any pointers on how to re-implement that on WebGL if it's missing would be appreciated.

    Read the article

  • DataContractSerializer: type is not serializable because it is not public?

    - by Michael B. McLaughlin
    I recently ran into an odd and annoying error when working with the DataContractSerializer class for a WP7 project. I thought I’d share it to save others who might encounter it the same annoyance I had. So I had an instance of ObservableCollection<T> that I was trying to serialize (with T being a class I wrote for the project) and whenever it would hit the code to save it, it would give me: The data contract type 'ProjectName.MyMagicItemsClass' is not serializable because it is not public. Making the type public will fix this error. Alternatively, you can make it internal, and use the InternalsVisibleToAttribute attribute on your assembly in order to enable serialization of internal members - see documentation for more details. Be aware that doing so has certain security implications. This, of course, was malarkey. I was trying to write an instance of MyAwesomeClass that looked like this:

      [DataContract]
      public class MyAwesomeClass
      {
          [DataMember]
          public ObservableCollection<MyMagicItemsClass> GreatItems { get; set; }

          [DataMember]
          public ObservableCollection<MyMagicItemsClass> SuperbItems { get; set; }

          public MyAwesomeClass()
          {
              GreatItems = new ObservableCollection<MyMagicItemsClass>();
              SuperbItems = new ObservableCollection<MyMagicItemsClass>();
          }
      }

    That’s all well and fine. And MyMagicItemsClass was also public with a parameterless public constructor. It too had DataContractAttribute applied to it, and it had DataMemberAttribute applied to all the properties and fields I wanted to serialize. Everything should be cool, but it’s not, because I kept getting that “not public” exception. I could tell you about all the things I tried (generating a List<T> on the fly to make sure it wasn’t ObservableCollection<T>, trying to serialize the Collections directly, moving it all to a separate library project, etc.), but I want to keep this short. In the end, I remembered the “Debug->Exceptions…” VS menu option that brings up the list of exception-related circumstances under which the Visual Studio debugger will break. I checked the “Thrown” checkbox for “Common Language Runtime Exceptions”, started the project under the debugger, and voilà: the true problem revealed itself. Some of my properties had fairly elaborate setters whose logic I wanted to ignore. So for some of them, I applied an IgnoreDataMember attribute to them and applied the DataMember attribute to the underlying fields instead. All of which, in line with good programming practices, were private. Well, it just so happens that WP7 apps run in a “partial trust” environment, and outside of “full trust”-land, DataContractSerializer refuses to serialize or deserialize non-public members. Of course that exception was swallowed up internally by .NET, so all I ever saw was that bizarre message about things that I knew for certain were public being “not public”. I changed all the private fields I was serializing to public and everything worked just fine. In hindsight it all makes perfect sense. The serializer uses reflection to build up its graph of the object in order to write it out. In partial trust, you don’t want people using reflection to get at non-public members of an object, since there are potential security problems with allowing that (you could break out of the sandbox pretty quickly by reflecting and calling the appropriate methods, and cause some havoc by reflecting and setting the appropriate fields, in certain circumstances).
The fact that you cannot reflect your own assembly seems a bit heavy-handed, but then again I’m not a compiler writer or a framework designer and I have no idea what sorts of difficulties would go into allowing that from a compilation standpoint or what sorts of security problems allowing that could present (if any). So, lesson learned. If you get an incomprehensible exception message, turn on break on all thrown exceptions and try running it again (it might take a couple of tries, depending) and see what pops out. Chances are you’ll find the buried exception that actually explains what was going on. And if you’re getting a weird exception when trying to use DataContractSerializer complaining about public types not being public, chances are you’re trying to serialize a private or protected field/property.
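
    To make the trap concrete, here is a minimal sketch (the type and member names are invented) of the pattern that fails under WP7's partial trust and the change that fixes it:

      using System.Runtime.Serialization;

      [DataContract]
      public class MyMagicItemsClass
      {
          // Fails in partial trust: DataContractSerializer cannot reflect over
          // a non-public member, and the real exception gets swallowed,
          // surfacing instead as the misleading "not public" message.
          // [DataMember]
          // private string _name;

          // Works: the serialized member itself is public.
          [DataMember]
          public string Name;

          // The elaborate setter stays out of serialization entirely.
          [IgnoreDataMember]
          public string DisplayName
          {
              get { return Name; }
              set { Name = value; /* elaborate setter logic skipped here */ }
          }
      }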

    Read the article

  • In hindsight, is basing XAML on XML a mistake or a good approach?

    - by romkyns
    XAML is essentially a subset of XML. One of the main benefits of basing XAML on XML is said to be that it can be parsed with existing tools. And it can, to a large degree, although the (syntactically non-trivial) attribute values will stay in text form and require further parsing. There are two major alternatives to describing a GUI in an XML-derived language. One is to do what WinForms did, and describe it in real code. There are numerous problems with this, though it’s not completely advantage-free (a question to compare XAML to this approach). The other major alternative is to design a completely new syntax specifically tailored for the task at hand. This is generally known as a domain-specific language.

    So, in hindsight, and as a lesson for future generations, was it a good idea to base XAML on XML, or would it have been better as a custom-designed domain-specific language? If we were designing an even better UI framework, should we pick XML or a custom DSL? Since it’s much easier to think positively about the status quo, especially one that is quite liked by the community, I’ll give some example reasons why building on top of XML might be considered a mistake. Basing a language off XML has a few things going for it: it’s much easier to parse (the core parser is already available), it requires much, much less design work, and alternative parsers are also much easier to write for 3rd-party developers. But the resulting language can be unsatisfying in various ways. It is rather verbose. If you change the type of something, you need to change it in the closing tag. It has very poor support for comments; it’s impossible to comment out an attribute. There are limitations placed on the content of attributes by XML. The markup extensions have to be built "on top" of the XML syntax, not integrated deeply and nicely into it. And, my personal favourite: if you set something via an attribute, you use completely different syntax than if you set the exact same thing as a content property. It’s also said that since everyone knows XML, XAML requires less learning. Strictly speaking this is true, but learning the syntax is a tiny fraction of the time spent learning a new UI framework; it’s the framework’s concepts that make the curve steep. Besides, the idiosyncrasies of an XML-based language might actually add to the "needs learning" basket. Are these disadvantages outweighed by the ease of parsing? Should the next cool framework continue the tradition, or invest the time to design an awesome DSL that can’t be parsed by existing tools and whose syntax needs to be learned by everyone?

    P.S. Not everyone confuses XAML and WPF, but some do. XAML is the XML-like thing. WPF is the framework with support for bindings, theming, hardware acceleration and a whole lot of other cool stuff.

    Read the article

  • Philosophy of [WebInvoke(ResponseFormat = WebMessageFormat.Json)]

    - by Mikey Cee
    Hi everyone, I'm writing what I'm referring to as a POJ (Plain Old JSON) WCF web service - one that takes and emits standard JSON with none of the crap that ASP.NET Ajax likes to add to it. It seems that there are three steps to accomplish this:
    1.) Change the binding in the endpoint's tag in configuration
    2.) Decorate the method with [WebInvoke(ResponseFormat = WebMessageFormat.Json)]
    3.) Add an incantation of [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] to the service contract
    This is all working OK for me - I can pass in and am being returned nice plain JSON. If I remove the WebInvoke attribute, then I get XML returned instead, so it is certainly doing what it is supposed to do. But it strikes me as odd that the option to specify JSON output appears here and not in the configuration file. Say I wanted to expose my method as an XML endpoint too - how would I do this? Currently the only way I can see would be to have a second method that does exactly the same thing but does not have WebMessageFormat.Json specified. Then rinse and repeat for every method in my service? Yuck. Specifying that the output should be serialized to JSON in the attribute seems to be completely contrary to the philosophy of WCF, where the service is implemented in a transport- and encoding-agnostic manner, leaving the nasty details of how the data will be moved around to the configuration file. Is there a better way of doing what I want to do? Or are we stuck with this awkward attribute? Or do I not understand WCF deeply enough?
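
    For reference, a minimal sketch of the three steps as described above (the service and type names here are invented):

      using System.Runtime.Serialization;
      using System.ServiceModel;
      using System.ServiceModel.Activation;
      using System.ServiceModel.Web;

      [DataContract]
      public class OrderSummary
      {
          [DataMember] public int Id { get; set; }
      }

      [ServiceContract]
      [AspNetCompatibilityRequirements(
          RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
      public class PlainJsonService
      {
          // Without ResponseFormat = WebMessageFormat.Json, the response
          // comes back as XML instead.
          [OperationContract]
          [WebInvoke(Method = "POST", ResponseFormat = WebMessageFormat.Json)]
          public OrderSummary GetOrder(int id)
          {
              return new OrderSummary { Id = id };
          }
      }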

    Read the article

  • ASP.NET MVC 2.0 Validation and ErrorMessages

    - by Raj Aththanayake
    I need to set the ErrorMessage property of a DataAnnotations validation attribute in MVC 2.0. For example, I should be able to pass an ID instead of the actual error message for the Model property, like this:

      [StringLength(2, ErrorMessage = "EmailContentID")]
      [DataType(DataType.EmailAddress)]
      public string Email { get; set; }

    Then I use this ID ("EmailContentID") to retrieve some content (the error message) from another service, e.g. a database. The error message is then displayed to the user instead of the ID. In order to do this I need to set the DataAnnotations validation attribute's ErrorMessage property. It seems like a straightforward task, just overriding DataAnnotationsModelValidatorProvider's

      protected override IEnumerable<ModelValidator> GetValidators(ModelMetadata metadata, ControllerContext context, IEnumerable<Attribute> attributes)

    However, it is more complicated than that:
    A. MVC's DataAnnotationsModelValidator ErrorMessage property is read-only, so I cannot set anything here.
    B. The System.ComponentModel.DataAnnotations ErrorMessage property (get and set) is already set in MVC's DataAnnotationsModelValidator, so I cannot set it again. If I try to set it I get the "The property cannot be set more than once..." error message.

      public class CustomDataAnnotationProvider : DataAnnotationsModelValidatorProvider
      {
          protected override IEnumerable<ModelValidator> GetValidators(ModelMetadata metadata, ControllerContext context, IEnumerable<Attribute> attributes)
          {
              IEnumerable<ModelValidator> validators = base.GetValidators(metadata, context, attributes);
              foreach (ValidationAttribute validator in attributes.OfType<ValidationAttribute>())
              {
                  string messageId = validator.ErrorMessage;
                  validator.ErrorMessage = "Error string from DB And" + messageId;
              }
              // ......
          }
      }

    Can anyone please give me the right direction on this? Thanks in advance.
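
    One possible direction (a sketch only; MessageRepository is an invented stand-in for the database lookup) is to leave ErrorMessage holding the ID and resolve the text when the message is formatted, since ValidationAttribute.FormatErrorMessage is virtual:

      using System.ComponentModel.DataAnnotations;

      public class DbStringLengthAttribute : StringLengthAttribute
      {
          public DbStringLengthAttribute(int maximumLength)
              : base(maximumLength) { }

          // ErrorMessage carries the lookup key (e.g. "EmailContentID");
          // the real text is fetched at display time, so nothing ever needs
          // to be assigned back to the read-only validator.
          public override string FormatErrorMessage(string name)
          {
              string template = MessageRepository.Find(ErrorMessage);
              return string.Format(template, name);
          }
      }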

    Read the article

  • JAX-WS client with Axis service

    - by Jon
    I'm relatively new to web services, but I need to integrate a call to an existing service into my application. Ideally, I'd like to use JAX-WS because I'm looking for the simplest, quickest-to-develop solution on my end, and MyEclipse is able to generate a JAX-WS client from a WSDL. Unfortunately, the WSDL I've inherited was built from what appears to be Axis using RPC. Will this still work? When trying to generate the code, I get these errors, and the web searches I've done seem to say that it's the service end that needs to upgrade:

      <restriction base="soapenc:Array">
        <attribute ref="soapenc:arrayType" wsdl:arrayType="impl:MyTypeList[]" />
      </restriction>

      WS-I: (BP2108) An Array declaration uses - restricts or extends - the soapEnc:Array type, or the wsdl:arrayType attribute is used in the type declaration
      WS-I: (BP2122) A wsdl:types element contained a data type definition that is not an XML schema definition

      <wsdlsoap:body encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" namespace="http://ws.host.com" use="encoded" />

      WS-I: (BP2406) The use attribute of a soapbind:body, soapbind:fault, soapbind:header and soapbind:headerfault does not have value of "literal".

    Read the article

  • RotatingFileHandler throws an exception when delay parameter is set

    - by Eli Courtwright
    When I run the following code under Python 2.6:

      import logging
      from logging.handlers import RotatingFileHandler

      rfh = RotatingFileHandler("testing.log", delay=True)
      logging.getLogger().addHandler(rfh)
      logging.warning("Boo!")

    the last line throws AttributeError: RotatingFileHandler instance has no attribute 'level'. So I add the line rfh.setLevel(logging.DEBUG) before the call to addHandler, and then the last line throws AttributeError: RotatingFileHandler instance has no attribute 'filters'. So if I manually set filters to be an empty list, then it complains about not having the attribute lock, etc. When I remove the delay=True to leave it as the default value of False, as documented here, the problem completely goes away. Am I missing something? How do I properly use the delay parameter of the RotatingFileHandler class? EDIT: Upon further analysis (presented in my own answer below), this looks like a bug, but I can't find a bug report on this in the Python bug tracker, even trying different search terms, so I guess I'll report it. However, if someone can locate the actual bug report, then I can avoid submitting a duplicate report and wasting the time of the Python developers. I'll hold off on reporting the bug for a few hours, and if someone posts an answer that has the current bug report, then I'll accept that answer for this question.

    Read the article

  • Usercontrol losing Viewstate across Postback

    - by Robert W
    I have a user control which uses objects as inner properties (some code is below). I am having trouble with setting an attribute of the Step class programmatically: when set programmatically, it is being lost across postback, which would indicate something to do with ViewState(?). When setting the property of the Step class declaratively, it works fine. Does anybody have any ideas of what this could be / what's causing it to lose the state across postback?

      public partial class StepControl : System.Web.UI.UserControl
      {
          [PersistenceMode(PersistenceMode.InnerProperty)]
          [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
          [NotifyParentProperty(true)]
          public Step Step1 { get; set; }

          [PersistenceMode(PersistenceMode.InnerProperty)]
          [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
          [NotifyParentProperty(true)]
          public Step Step2 { get; set; }

          protected void Page_Init(object sender, EventArgs e)
          {
              AddSteps();
          }

          private void AddSteps()
          {
          }
      }

      [Serializable()]
      [ParseChildren(true)]
      [PersistChildren(false)]
      public class Step
      {
          [PersistenceMode(PersistenceMode.Attribute)]
          public string Title { get; set; }

          [PersistenceMode(PersistenceMode.Attribute)]
          public string Status { get; set; }

          [PersistenceMode(PersistenceMode.InnerProperty)]
          [TemplateInstance(TemplateInstance.Single)]
          [TemplateContainer(typeof(StepContentContainer))]
          public ITemplate Content { get; set; }

          public class StepContentContainer : Control, INamingContainer
          {
          }
      }
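
    One common explanation is that the auto-properties above live in ordinary fields, which do not survive the postback round trip; declarative values appear to work only because the markup re-applies them on every request. A minimal sketch of the usual fix, inside StepControl, is to back the property with the control's ViewState instead (this assumes only the serializable parts of Step are stored - the ITemplate content would need to be excluded or re-supplied declaratively):

      // Values written here are round-tripped in the control's ViewState,
      // so a programmatic assignment survives the next postback.
      public Step Step1
      {
          get { return (Step)ViewState["Step1"]; }
          set { ViewState["Step1"] = value; }
      }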

    Read the article

  • Generating %pc relative address of constant data

    - by Hudson
    Is there a way to have gcc generate %pc-relative addresses of constants? Even when the string appears in the text segment, arm-elf-gcc will generate a constant pointer to the data, load the address of the pointer via a %pc-relative address and then dereference it. For a variety of reasons, I need to skip the middle step. As an example, this simple function:

      const char * filename(void)
      {
          static const char _filename[]
              __attribute__((section(".text"))) = "logfile";
          return _filename;
      }

    generates (when compiled with arm-elf-gcc-4.3.2 -nostdlib -c -O3 -W -Wall logfile.c):

      00000000 <filename>:
         0:   e59f0000   ldr r0, [pc, #0]   ; 8 <filename+0x8>
         4:   e12fff1e   bx lr
         8:   0000000c   .word 0x0000000c

      0000000c <_filename.1175>:
         c:   66676f6c   .word 0x66676f6c
        10:   00656c69   .word 0x00656c69

    I would have expected it to generate something more like:

      filename:
          add r0, pc, #0
          bx lr
      _filename.1175:
          .ascii "logfile\000"

    The code in question needs to be partially position-independent, since it will be relocated in memory at load time, but also integrate with code that was not compiled -fPIC, so there is no global offset table. My current workaround is to call a non-inline function (which will be done via a %pc-relative address) to find the offset from the compiled location, in a technique similar to how -fPIC code works:

      static intptr_t __attribute__((noinline)) find_offset( void )
      {
          uintptr_t pc;
          asm __volatile__ ( "mov %0, %%pc" : "=&r"(pc) );
          return pc - 8 - (uintptr_t) find_offset;
      }

    But this technique requires that all data references be fixed up manually, so the filename() function in the above example would become:

      const char * filename(void)
      {
          static const char _filename[]
              __attribute__((section(".text"))) = "logfile";
          return _filename + find_offset();
      }

    Read the article

  • Multilingual spellcheck on WPF richtextbox

    - by sub-jp
    I need to turn spellcheck on for a RichTextBox, and set the language to one the user has picked from a drop-down. For now, I'm just testing it by building the RichTextBox in XAML and providing a language to the XAML language attribute. I've read two different resources: one says I need to set the Language attribute, and the other says I need to set the xml:lang attribute. Neither seems to work. I've tried setting either one to "es" for Spanish, and I've also tried setting both to "es". I've also tried French by setting them to "fr-FR", without success. The only thing that happens is that English words aren't marked, but the other language's words are marked as misspelled. I also read that I need to change the keyboard language. This would be a problem for my application, as the language within the application needs to be switched on the fly, so having the end user go to their keyboard settings just so spellcheck will work is a problem. However, I've changed my keyboard settings, and spellcheck still does not work properly. This time it doesn't mark anything as misspelled, even misspelled English words. What am I missing? Edit: some links to my references above:
    http://msdn.microsoft.com/en-us/library/system.windows.controls.spellcheck(v=VS.100).aspx
    http://www.dev102.com/2008/03/25/customize-spellcheck-on-wpf-text-controls/
    http://books.google.com/books?id=clLc5BBHqRMC&pg=PA121&lpg=PA121&dq=C%23+wpf+enable+spellcheck&source=bl&ots=_r59pZRDjP&sig=yHMBc39EHKK5gaRMzxlBaEsY890&hl=en&ei=oXnIS8zWH4G88gaq48yGBw&sa=X&oi=book_result&ct=result&resnum=6&ved=0CBMQ6AEwBQ#v=onepage&q&f=false
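
    For what it's worth, a minimal sketch of switching the language from code when the user picks from the drop-down (the handler name is invented, and whether the spell checker honours the new language still depends on the dictionaries installed for that culture):

      using System.Globalization;
      using System.Windows.Controls;
      using System.Windows.Input;
      using System.Windows.Markup;

      void OnLanguagePicked(RichTextBox box, string ietfTag) // e.g. "es-ES"
      {
          // Point the control's Language at the chosen culture and toggle
          // spell checking so it re-evaluates the existing text.
          box.Language = XmlLanguage.GetLanguage(ietfTag);
          SpellCheck.SetIsEnabled(box, false);
          SpellCheck.SetIsEnabled(box, true);

          // The spell checker also consults the active input language,
          // which can be switched in code instead of keyboard settings.
          InputLanguageManager.Current.CurrentInputLanguage =
              new CultureInfo(ietfTag);
      }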

    Read the article

  • g++ C++0x enum class Compiler Warnings

    - by Travis G
    I've been refactoring my horrible mess of C++ type-safe pseudo-enums to the new C++0x type-safe enums because they're way more readable. Anyway, I use them in exported classes, so I explicitly mark them to be exported:

      enum class __attribute__((visibility("default"))) MyEnum : unsigned int
      {
          One = 1,
          Two = 2
      };

    Compiling this with g++ yields the following warning:

      type attributes ignored after type is already defined

    This seems very strange, since, as far as I know, that warning is meant to prevent actual mistakes like:

      class __attribute__((visibility("default"))) MyClass { };
      class __attribute__((visibility("hidden"))) MyClass;

    Of course, I'm clearly not doing that, since I have only marked the visibility attributes at the definition of the enum class and I'm not re-defining or declaring it anywhere else (I can duplicate this error with a single file). Ultimately, I can't make this bit of code actually cause a problem, save for the fact that, if I change a value and re-compile the consumer without re-compiling the shared library, the consumer passes the new values and the shared library has no idea what to do with them (although I wouldn't expect that to work in the first place). Am I being way too pedantic? Can this be safely ignored? I suspect so, but at the same time, having this error prevents me from compiling with -Werror, which makes me uncomfortable. I would really like to see this problem go away.

    Read the article

  • PHP XML Validation

    - by efritz
    What's the best way to validate an XML file (or a portion of it) against multiple XSD files? For example, I have the following schema for a configuration loader:

      <xsd:schema xmlns="http://www.kauriproject.org/schema/configuration"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                  targetNamespace="http://www.kauriproject.org/schema/configuration"
                  elementFormDefault="qualified">
        <xsd:element name="configuration" type="configuration" />
        <xsd:complexType name="configuration">
          <xsd:choice maxOccurs="unbounded">
            <xsd:element name="import" type="import" minOccurs="0" maxOccurs="unbounded" />
            <xsd:element name="section" type="section" />
          </xsd:choice>
        </xsd:complexType>
        <xsd:complexType name="section">
          <xsd:sequence>
            <xsd:any minOccurs="0" maxOccurs="unbounded" processContents="lax" />
          </xsd:sequence>
          <xsd:attribute name="name" type="xsd:string" use="required" />
          <xsd:attribute name="type" type="xsd:string" use="required" />
        </xsd:complexType>
        <xsd:complexType name="import" mixed="true">
          <xsd:attribute name="resource" type="xsd:string" />
        </xsd:complexType>
      </xsd:schema>

    As the Configuration class exists now, it lets one add a <section> tag with a defined concrete parser class (much like custom configuration sections in ASP.NET). However, I'm unsure of how to validate the section being parsed. Is it possible to validate just this section of code with an XSD file/string without writing it back to a file?

    Read the article

  • Spring-hibernate mapping problem

    - by James
    I have a Spring-Hibernate application which is failing to map an object properly. Basically, I have two domain objects, a Post and a User. The semantics are that every Post has one corresponding User. The Post domain object looks roughly as follows:

      class Post {
          private int pId;
          private String attribute;
          ...
          private User user;
          // getters and setters here
      }

    As you can see, Post contains a reference to User. When I load a Post object, I want the corresponding User object to be loaded (lazily - only when it's needed). My mapping looks as follows:

      <class name="com...Post" table="post">
        <id name="pId" column="PostId" />
        <property name="attribute" column="Attribute" type="java.lang.String" />
        <one-to-one name="User" fetch="join" class="com...User"></one-to-one>
      </class>

    And of course I have a basic mapping for User set up. As far as my table schema is concerned, I have a table called post with a foreign key UserId which links to the user table. I thought this setup would work, BUT when I load a page that forces the lazy loading of the User object, I notice the following Hibernate query being generated:

      Select ... from post this_ left outer join user user2_ on this_.PostId=user2_.UserId ...

    Obviously this is wrong: it should be joining UserId from post with UserId from user, but instead it's incorrectly joining PostId from post (its primary key) with UserId from user. Any ideas? Thanks!

    Read the article

  • How to escape the character entities in XML?

    - by Chetan Vaity
    I want to pass XML as a string in an XML attribute:

      <activity evt="&lt;FHS&gt; &lt;act&gt; &lt;polyline penWidth=&quot;2&quot; points=&quot;256,435 257,432 &quot;/&gt; &lt;/act&gt; &lt;/FHS&gt;" />

    Here the "evt" attribute is the XML string, so escaping all the less-than, greater-than, etc. characters with the appropriate character entities works fine. The problem is that I want a fragment to be interpreted as-is - the character entities themselves should be treated as simple strings. When the "evt" attribute is read and XML is generated from it, it should look like:

      <FHS>
        <act>
          &lt;polyline penWidth=&quot;2&quot; points=&quot;256,435 257,432 &quot;/&gt;
        </act>
      </FHS>

    Essentially, I want to escape the character entities. How is this possible?
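
    One way to get the effect described is double escaping: the fragment that must stay literal is escaped twice, so its entities survive the first round of decoding. A minimal sketch in C# (SecurityElement.Escape is just one convenient escaper; any XML-encoding routine behaves the same way):

      using System;
      using System.Security;

      class EscapeDemo
      {
          static void Main()
          {
              string inner = "<polyline penWidth=\"2\" points=\"256,435 257,432\"/>";

              // First pass: makes the fragment safe to embed in an attribute.
              string once = SecurityElement.Escape(inner);   // &lt;polyline ...

              // Second pass: makes the entities themselves literal, so after
              // the attribute is decoded once, &lt; is still &lt;.
              string twice = SecurityElement.Escape(once);   // &amp;lt;polyline ...

              Console.WriteLine(twice);
          }
      }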

    Read the article

  • Custom model validation of dependent properties using Data Annotations

    - by Darin Dimitrov
    Until now I've used the excellent FluentValidation library to validate my model classes. In web applications I use it in conjunction with the jquery.validate plugin to perform client-side validation as well. One drawback is that much of the validation logic is repeated on the client side and is no longer centralized in a single place. For this reason I'm looking for an alternative. There are many examples out there showing the usage of data annotations to perform model validation. It looks very promising. One thing I couldn't find out is how to validate a property that depends on another property's value. Let's take for example the following model:

      public class Event
      {
          [Required]
          public DateTime? StartDate { get; set; }

          [Required]
          public DateTime? EndDate { get; set; }
      }

    I would like to ensure that EndDate is greater than StartDate. I could write a custom validation attribute extending ValidationAttribute in order to perform custom validation logic. Unfortunately, I couldn't find a way to obtain the model instance:

      public class CustomValidationAttribute : ValidationAttribute
      {
          public override bool IsValid(object value)
          {
              // value represents the property value on which this attribute is applied,
              // but how do I obtain the object instance to which this property belongs?
              return true;
          }
      }

    I found that the CustomValidationAttribute seems to do the job because it has a ValidationContext property that contains the object instance being validated. Unfortunately, this attribute was added only in .NET 4.0. So my question is: can I achieve the same functionality in .NET 3.5 SP1? UPDATE: It seems that FluentValidation already supports client-side validation and metadata in ASP.NET MVC 2. Still, it would be good to know whether data annotations could be used to validate dependent properties.
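
    One approach that works pre-4.0 is a class-level attribute: when a ValidationAttribute is applied to the class rather than to a property, IsValid receives the whole model instance, so dependent properties can be compared. A sketch under that assumption (how the error is surfaced still depends on the host framework invoking class-level validators):

      using System;
      using System.ComponentModel.DataAnnotations;

      [AttributeUsage(AttributeTargets.Class)]
      public class DatesInOrderAttribute : ValidationAttribute
      {
          public override bool IsValid(object value)
          {
              var e = value as Event;
              if (e == null || !e.StartDate.HasValue || !e.EndDate.HasValue)
                  return true; // let [Required] report missing values
              return e.EndDate > e.StartDate;
          }
      }

      [DatesInOrder(ErrorMessage = "EndDate must be after StartDate.")]
      public class Event
      {
          [Required] public DateTime? StartDate { get; set; }
          [Required] public DateTime? EndDate { get; set; }
      }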

    Read the article

  • Android ignores scrollbarsize

    - by Maragues
    Hi, I'm trying to modify a ListView scrollbar's width, without success:

      <ListView android:id="@+id/android:list"
          android:layout_width="fill_parent"
          android:layout_height="wrap_content"
          android:choiceMode="singleChoice"
          android:scrollbars="vertical"
          android:scrollbarTrackVertical="@drawable/scrollbar_vertical_track"
          android:scrollbarThumbVertical="@drawable/scrollbar_vertical_thumb"
          android:scrollbarSize="4px"
          android:clickable="true"/>

    First I tried using a drawable image 4px wide, but the .png was resized. Then I tried using a shape extracted from the Samples API, without success:

      <shape xmlns:android="http://schemas.android.com/apk/res/android" android:width="40px">
        <gradient android:startColor="#505050" android:endColor="#C0C0C0" android:angle="0"/>
        <corners android:radius="0dp" />
      </shape>

    I've tried with and without the android:width attribute. There's a question on the same topic (http://stackoverflow.com/questions/2565083/width-of-a-scroll-bar-in-android), but it doesn't try anything different from what I'm already trying. As far as I know, creating my own theme shouldn't change the output. There's an example in the Samples API (Views/ScrollBars); I tried modifying the scrollbarSize attribute without result. I know about nine-patch images, but there's an attribute which should do what I want. Any hint? Thanks in advance.

    Read the article

  • Product Catalog Schema design

    - by FlySwat
    I'm building a proof-of-concept schema for a product catalog to possibly replace a very aging and crufty one we use. In our business, we sell both physical materials and services (one-time and recurring charges). The current catalog schema has each distinct category broken out into individual tables; while this is nicely normalized and performs well, it is fairly difficult to extend. Adding a new attribute to a particular product involves changing the table schema and backpopulating old data. An idea I've been toying with has been something along the lines of a base set of entity tables in third normal form; these will contain the facts that are common among ALL products. Then, I'd like to build an Attribute-Entity-Value schema that allows each entity type to be extended in a flexible way, using just data and no schema changes. Finally, I'd like to denormalize this data model into materialized views for each individual entity type. These views are what the application would access. We also have many tables that contain business rules and compatibility rules. These would join against the base entity tables instead of the views. My big concerns here are:
    1. Performance - Attribute-Entity-Value schemas are flexible, but typically perform poorly; should I be concerned?
    2. More performance - Denormalizing using materialized views may have some risks; I'm not positive on this yet.
    3. Complexity - While this schema is flexible and maintainable using just data, I worry that the complexity of the design might make future schema changes difficult.
    For those who have designed product catalogs for large-scale enterprises: am I going down the totally wrong path? Is there any good best-practice schema design reading available for product catalogs?

    Read the article

  • Params order in Foo.new(params[:foo]), need one before the other (Rails)

    - by Jeena
    I have a problem which I don't know how to fix. It has to do with the unordered params hash. I have an object Reservation, which has a virtual time= attribute and a virtual eating_session= attribute. When I set time= I also want to validate it via an external server request. I do that with the help of the method times(), which does a lookup on another server and saves all possible times in the @times variable. The problem is that the method times() needs the eating_session attribute to find out which times are valid, but Rails sometimes calls the time= method before there is any eating_session in the Reservation object when I just do @reservation = Reservation.new(params[:reservation]).

      class ReservationsController < ApplicationController
        def new
          @reservation = Reservation.new(params[:reservation])
          # ...
        end
      end

      class Reservation < ActiveRecord::Base
        include SoapClient

        attr_accessor :date, :time
        belongs_to :eating_session

        def time=(time)
          @time = times.find { |t| t[:time] == time }
        end

        def times
          return @times if defined? @times
          @times = []
          response = call_soap :search_availability, {
            # eating_session is sometimes nil
            :session_id => eating_session.code, # <- HERE IS THE PROBLEM
            :dining_date => date
          }
          response[:result].each do |result|
            @times << {
              :time => "#{DateTime.parse(result[:time]).strftime("%H:%M")}",
              :correlation_data => result[:correlation_data]
            }
          end
          @times
        end
      end

    I have no idea how to fix this; any help is appreciated.

    Read the article

  • What is the PIXELFORMATDESCRIPTOR parameter in SetPixelFormat() used for?

    - by Mads Elvheim
    Usually when setting up OpenGL contexts, I've simply filled out a PIXELFORMATDESCRIPTOR structure with the necessary information and called ChoosePixelFormat(), followed by a call to SetPixelFormat() with the returned matching pixel format from ChoosePixelFormat(). Then I've simply passed in the initial descriptor without giving much thought to why. But now I use wglChoosePixelFormatARB() instead of ChoosePixelFormat() because I need some extended traits like sRGB and multisampling. It takes an attribute list of integers, just like XLib/GLX on Linux, not a PIXELFORMATDESCRIPTOR structure. So, do I really have to fill in a descriptor for SetPixelFormat() to use? What does SetPixelFormat() use the descriptor for when it already has the pixel format index? Why do I have to specify the same pixel format attributes in two different places? And which one takes precedence: the attribute list to wglChoosePixelFormatARB(), or the PIXELFORMATDESCRIPTOR attributes passed to SetPixelFormat()? Here are the function prototypes, to make the question clearer:

      /* Finds a best match based on a PIXELFORMATDESCRIPTOR, and returns the
         pixel format index. */
      int ChoosePixelFormat(HDC hdc, const PIXELFORMATDESCRIPTOR *ppfd);

      /* Finds best matches based on an attribute list of integers and floats,
         and returns a list of indices of matches, with the best matches at the
         head. Also supports extended pixel format traits like sRGB color space,
         floating-point framebuffers and multisampling. */
      BOOL wglChoosePixelFormatARB(HDC hdc, const int *piAttribIList,
                                   const FLOAT *pfAttribFList, UINT nMaxFormats,
                                   int *piFormats, UINT *nNumFormats);

      /* Sets the pixel format based on the pixel format index. */
      BOOL SetPixelFormat(HDC hdc, int iPixelFormat, const PIXELFORMATDESCRIPTOR *ppfd);

    Read the article

  • SSIS Lookup with Lookup Component Vs Script Component.

    - by Nev_Rahd
    Hello, I need to load dimensions from EDW tables (which maintain historical records) that are of key-value-pair type. My scenario is fine if I get a record in the EDW as below:

      Key1  Key2  Code  Value  EffectiveDate        EndDate     CurrentFlag
      100   555   01    AAA    2010-01-01 11.00.00  9999-12-31  Y
      100   555   02    BBB    2010-01-01 11.00.00  9999-12-31  Y

    This needs to be loaded into the DM by pivoting it, as the key1 and key2 combination makes the natural key for the DM:

      SK  NK       01   02   EffectiveDate        EndDate     CurrentFlag
      1   100-555  AAA  BBB  2010-01-01 11.00.00  9999-12-31  Y

    My SSIS package does all this pivoting well. It looks up the incoming NK in the DIM: if it is new, it inserts; otherwise, a further lookup on effective date determines whether the incoming row for the same natural key has any change in an attribute value. If so, it updates the current record by setting its end date and inserts a new one with the new attribute value, pulling the most recent values of the other attributes. My problem is: if the same natural key comes twice with the same attribute in a single extract, my first lookup (on the natural key) will let both records pass and try to insert, which is where it fails. If I take distinct records on the NK, the second one is not picked up and I need to run the package again. So my question is: how can I configure the lookup, or what alternative way is there, to handle the scenario where the same NK comes twice in a single extract? I should be able to insert the first record if it does not exist in the Dim table, and the second one should then update, with its changes, the record inserted above. Not sure this makes sense; I will attach a screenshot once I'm back at my desk (on Monday). Thanks

    Read the article

  • Class/Model Level Validation (as opposed to Property Level)? (ASP.NET MVC 2.0)

    - by Erx_VB.NExT.Coder
    Basically, what the title says. I have several properties that combine to give one logical answer, and I would like to run server-side validation code (that I write) which takes these multiple fields into account, hooked up to only one validation output/error message that users see on the web page. I looked at Scott Guthrie's method of extending an attribute and using it in your DataAnnotations declarations, but, as far as I can see, there is no way to declare a DataAnnotations-style attribute on multiple properties; you can only place the declarations (such as [Email], [Range], [Required]) over one property :(. I have looked at the PropertiesMustMatchAttribute in the default MVC 2.0 project that appears when you start a new project; this example is as useful as using a pair of pins to check your motor oil - useless! I have tried this method, however, creating a class-level attribute, and have no idea how to display the error from this in my aspx page. I have tried Html.ValidationMessage("ClassNameWhereAttributeIsAdded") and a variety of other things, and it has not worked. And I should mention: there is NOT ONE blog post on doing validation at this level - despite this being a common need in any project or business-logic scenario! Can anyone help me in having my message displayed in my aspx page, and also, if possible, point me to a proper document or reference explaining validation at this level?
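
    A sketch of where class-level errors usually surface (assuming MVC 2's default providers place them in ModelState under the empty key): the validation summary, rather than a per-field message helper.

      // In the view: model-level errors appear in the summary, not in
      // Html.ValidationMessage(...) for any one property.
      // <%= Html.ValidationSummary("Please fix the following:") %>

      // Hypothetical manual equivalent from a controller action, for
      // comparison - an empty key marks the error as model-level:
      if (!fieldsAreConsistent)
      {
          ModelState.AddModelError(string.Empty,
              "The combined fields do not form a valid answer.");
      }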

    Read the article

  • How to parse xml in sql server to process NULL value in DateTime DataType.

    - by Shantanu Gupta
    I have created a sample query in sql server to parse data from xml and to display it right now. Although I will be inserting this data in my table but before that I am facing a simple problem. I want to insert NULL in datetime field ADDED_DATE="NULL" as shown in xml given below. But when I executes this query. It gives me error Conversion failed when converting datetime from character string. What mistake am i doing. Please highlight my mistake. declare @xml varchar(1000) set @xml= ' <ROOT> <TX_MAP FK_GUEST_ID="1" FK_CATEGORY_ID="2" ATTRIBUTE="Test" DESCRIPTION="TestDesc" IS_ACTIVE="1" ADDED_BY="NULL" ADDED_DATE="NULL" MODIFIED_BY="NULL" MODIFIED_DATE="NULL"></TX_MAP> <TX_MAP FK_GUEST_ID="2" FK_CATEGORY_ID="1" ATTRIBUTE="Test2" DESCRIPTION="TestDesc2" IS_ACTIVE="1" ADDED_BY="NULL" ADDED_DATE="NULL" MODIFIED_BY="NULL" MODIFIED_DATE="NULL"></TX_MAP> </ROOT> ' declare @handle int exec sp_xml_preparedocument @handle output, @xml select * from OPENXML(@handle,'/ROOT/TX_MAP',1) with ( FK_GUEST_ID INT ,FK_CATEGORY_ID VARCHAR(10) ,ATTRIBUTE VARCHAR(100) ,[DESCRIPTION] VARCHAR(100) ,IS_ACTIVE VARCHAR(10) ,ADDED_BY VARCHAR(100) ,ADDED_DATE DATETIME NULL ,MODIFIED_BY VARCHAR(100) ,MODIFIED_DATE DATETIME NULL ) I am using Sql Server 2005.

    Read the article
