Search Results

Search found 11975 results on 479 pages for 'vs templates'.

  • INotifyPropertyChanged Setter Style

    - by Ivovic
    In order to reflect changes in your data in the UI you have to implement INotifyPropertyChanged, okay. If I look at examples, articles, tutorials etc., most of the time the setters look something like this:

        public string MyProperty
        {
            //get [...]
            set
            {
                if (_correspondingField == value) { return; }
                _correspondingField = value;
                OnPropertyChanged("MyProperty");
            }
        }

    No problem so far: only raise the event if you have to, cool. But you could rewrite this code as:

        public string MyProperty
        {
            //get [...]
            set
            {
                if (_correspondingField != value)
                {
                    _correspondingField = value;
                    OnPropertyChanged("MyProperty");
                }
            }
        }

    It should do the same thing (?), you still have only one point of return, it is less code, it is less boring code, and it is more to the point ("only act if necessary" vs. "if not necessary do nothing, otherwise act"). So if the second version has its pros compared to the first one, why do I see this style so rarely? I don't consider myself smarter than the people who write frameworks, publish articles etc., so the second version has to have drawbacks. Or is it wrong? Or do I think too much? Thanks in advance.
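
    A common way to sidestep the early-return vs. nested-if question entirely is to move the equality check into a shared SetProperty helper, so it lives in exactly one place. Below is a minimal sketch, not taken from the original post: it assumes a hypothetical ViewModelBase base class and the [CallerMemberName] attribute from .NET 4.5, which postdates the question.

        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Runtime.CompilerServices;

        public abstract class ViewModelBase : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            // Returns true only when the value actually changed and the event was raised.
            protected bool SetProperty<T>(ref T field, T value, [CallerMemberName] string name = null)
            {
                if (EqualityComparer<T>.Default.Equals(field, value))
                    return false;

                field = value;
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(name));
                return true;
            }
        }

        public class Person : ViewModelBase
        {
            private string _myProperty;

            public string MyProperty
            {
                get { return _myProperty; }
                set { SetProperty(ref _myProperty, value); }   // no "MyProperty" string literal needed
            }
        }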

  • Is it possible to wrap an asynchronous event and its callback in a function that returns a boolean?

    - by Rob Flaherty
    I'm trying to write a simple test that creates an image element, checks the image attributes, and then returns true/false. The problem is that using the onload event makes the test asynchronous. On its own this isn't a problem (using a callback as I've done in the code below is easy), but what I can't figure out is how to encapsulate this into a single function that returns a boolean. I've tried various combinations of closures, recursion, and self-executing functions but have had no luck. So my question: am I being dense and overlooking something simple, or is this in fact not possible, because, no matter what, I'm still trying to wrap an asynchronous function in synchronous expectations? Here's the code:

        var supportsImage = function(callback) {
            var img = new Image();
            img.onload = function() {
                //Check attributes and pass true or false to callback
                callback(true);
            };
            img.src = 'data:image/gif;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=';
        };

        supportsImage(function(status){
            console.log(status);
        });

    To be clear, what I want is to be able to wrap this in something such that it can be used like:

        if (supportsImage) {
            //Do some crazy stuff
        }

    Thanks! (Btw, I know there are a ton of SO questions regarding confusion about synchronous vs. asynchronous. Apologies if this can be reduced to something previously answered.)

  • Managed WMI Event class is not an event class???

    - by galets
    I am using the directions from here: http://msdn.microsoft.com/en-us/library/ms257351(VS.80).aspx to create a managed event class. Here's the code that I wrote:

        [ManagementEntity]
        [InstrumentationClass(InstrumentationType.Event)]
        public class MyEvent
        {
            [ManagementKey]
            public string ID { get; set; }

            [ManagementEnumerator]
            static public IEnumerable<MyEvent> EnumerateInstances()
            {
                var e = new MyEvent() { ID = "9A3C1B7E-8F3E-4C54-8030-B0169DE922C6" };
                return new MyEvent[] { e };
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                var thisAssembly = typeof(Program).Assembly;
                var wmi_installer = new AssemblyInstaller(thisAssembly, null);
                wmi_installer.Install(null);
                wmi_installer.Commit(null);
                InstrumentationManager.RegisterAssembly(thisAssembly);
                Console.Write("Press Enter...");
                Console.ReadLine();
                var e = new MyEvent() { ID = "A6144A9E-0667-415B-9903-220652AB7334" };
                Instrumentation.Fire(e);
                Console.Write("Press Enter...");
                Console.ReadLine();
                wmi_installer.Uninstall(null);
            }
        }

    I can run the program, and it installs properly. Using wbemtest.exe I can browse to the event and "show mof":

        [dynamic: ToInstance, provider("WmiTest, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null")]
        class MyEvent
        {
            [read, key] string ID;
        };

    Notice that the class does not inherit from __ExtrinsicEvent, which is weird... I can also run select * from MyEvent and get the result. Instrumentation.Fire() also returns no error. However, when I try to subscribe to the event using the "Notification Query" option, I get 0x80041059: Number: 0x80041059, Facility: WMI, Description: Class is not an event class. What am I doing wrong, and what is the correct way to create a managed WMI event?
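
    For comparison only, and as an assumption rather than a verified fix: the older System.Management.Instrumentation model derives the event class from BaseEvent, which does end up under __ExtrinsicEvent and can be targeted by a notification query. A minimal C# sketch (the WMI namespace in the assembly attribute is a placeholder):

        using System.ComponentModel;
        using System.Management.Instrumentation;

        // Placeholder namespace; adjust to wherever the schema should be published.
        [assembly: Instrumented(@"root\default")]

        // Lets installutil.exe / AssemblyInstaller publish the instrumentation schema.
        [RunInstaller(true)]
        public class WmiSchemaInstaller : DefaultManagementProjectInstaller { }

        // Deriving from BaseEvent marks the generated WMI class as an extrinsic event class.
        public class MyEvent : BaseEvent
        {
            public string ID;
        }

        public static class EventDemo
        {
            public static void Raise()
            {
                var e = new MyEvent { ID = "9A3C1B7E-8F3E-4C54-8030-B0169DE922C6" };
                e.Fire();   // delivered to any subscribed notification query
            }
        }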

  • In the meantime, to be or not to be ... productive!

    - by Jan Kuboschek
    I just moved back to Europe from the US after living there for 7 years. Apart from major adjustment issues, I'm currently looking for a job over here. I'm mainly interested in (IT) consulting and, since these jobs typically require programming knowledge, such as Java, I'm trying to think of something productive to write (perhaps to demo my skills) while I'm waiting for my interviews (starting in two weeks; folks here are a bit slower than in the US, apparently...). I graduated from college about a year and a half ago with a four-year degree in international management/economics and about three quarters of a two-year degree in computer science. I've written my fair share of web software over the years, but nothing concrete that I could show, especially not in Java. Now, I've never had the problem of not having any idea what to write. Basic games I could write, but I'm not sure how well that will come across when I walk into my interview and say "hey, I was bored. Take a look at my multiplayer Space Invaders game! Wanna try beating me??". Any thoughts? I browsed SourceForge the other day to find a nice little project to contribute to, but decided that I don't want to commit to someone else's project at this time. Any ideas, perhaps from someone who has been, or currently is, in a similar position, would be much appreciated. Oh, and lastly: instead of developing a program or two to demonstrate my skills, I could spend my time brushing up on UML and Perl. Any suggestions regarding that? Writing a demo vs. learning something new?

  • Conditionally colour data points outside of confidence bands in R

    - by D W
    I need to colour data points that are outside of the confidence bands on the plot below differently from those within the bands. Should I add a separate column to my dataset to record whether the data points are within the confidence bands? Can you provide an example please? Example dataset:

        ## Dataset from http://www.apsnet.org/education/advancedplantpath/topics/RModules/doc1/04_Linear_regression.html
        ## Disease severity as a function of temperature

        # Response variable, disease severity
        diseasesev<-c(1.9,3.1,3.3,4.8,5.3,6.1,6.4,7.6,9.8,12.4)

        # Predictor variable, (Centigrade)
        temperature<-c(2,1,5,5,20,20,23,10,30,25)

        ## For convenience, the data may be formatted into a dataframe
        severity <- as.data.frame(cbind(diseasesev,temperature))

        ## Fit a linear model for the data and summarize the output from function lm()
        severity.lm <- lm(diseasesev~temperature,data=severity)

        jpeg('~/Desktop/test1.jpg')

        # Take a look at the data
        plot(
          diseasesev~temperature,
          data=severity,
          xlab="Temperature",
          ylab="% Disease Severity",
          pch=16,
          pty="s",
          xlim=c(0,30),
          ylim=c(0,30)
        )
        title(main="Graph of % Disease Severity vs Temperature")
        par(new=TRUE) # don't start a new plot

        ## Get datapoints predicted by best fit line and confidence bands
        ## at every 0.01 interval
        xRange=data.frame(temperature=seq(min(temperature),max(temperature),0.01))
        pred4plot <- predict(
          lm(diseasesev~temperature),
          xRange,
          level=0.95,
          interval="confidence"
        )

        ## Plot lines derived from best fit line and confidence band datapoints
        matplot(
          xRange,
          pred4plot,
          lty=c(1,2,2), #vector of line types and widths
          type="l",     #type of plot for each column of y
          xlim=c(0,30),
          ylim=c(0,30),
          xlab="",
          ylab=""
        )

  • How to query a .NET assembly's required framework (not CLR) version?

    - by Bonfire Burns
    Hi, we are using some kind of plug-in architecture in one of our products (based on .NET). We have to expect our customers, or even third-party devs, to write plug-ins for the product. The plug-ins will be .NET assemblies that are loaded by our product at run time. We have no control over the quality or capabilities of the external plug-ins (apart from checking whether they implement the correct interfaces). So we need to implement some kind of safety check while loading the plug-ins, to make sure that our product (and the hosting environment) can actually host the plug-in, or else deliver a meaningful error message ("The plug-in you are loading needs .NET version 42.42 - the hosting system is only on version 33.33."). Ideally the plug-ins would do this check internally, but our experience regarding their competence is so-so, and in any case our product will get the blame, so we want to make sure that this "just works". Requiring the plug-in developers to provide the info in the metadata, or to explicitly provide the information in the interface, is considered "too complicated". I know about the Assembly.ImageRuntimeVersion property, but to my knowledge this tells me only the required CLR version, not the framework version. And I don't want to check all of the assembly's dependencies and match them against a table of "framework version vs. available assemblies". Do you have any ideas on how to solve this in a simple and maintainable fashion? Thanks & regards, Bon
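
    One partial answer, offered as an assumption rather than a complete solution: assemblies compiled against .NET 4.0 or later carry a TargetFrameworkAttribute naming the framework they were built for, so for newer plug-ins the check is declarative; for 2.0/3.5 plug-ins the attribute is simply absent and only ImageRuntimeVersion remains. A hedged C# sketch:

        using System;
        using System.Linq;
        using System.Reflection;
        using System.Runtime.Versioning;

        public static class PluginInspector
        {
            // Returns e.g. ".NETFramework,Version=v4.0", or null when the plug-in
            // predates .NET 4.0 (the attribute did not exist before then).
            public static string GetDeclaredTargetFramework(Assembly pluginAssembly)
            {
                return pluginAssembly
                    .GetCustomAttributes(typeof(TargetFrameworkAttribute), false)
                    .OfType<TargetFrameworkAttribute>()
                    .Select(a => a.FrameworkName)
                    .FirstOrDefault();
            }

            // Note: LoadFrom really loads the plug-in; a reflection-only load plus
            // CustomAttributeData would avoid that, at the cost of more code.
            public static void Report(string path)
            {
                Assembly asm = Assembly.LoadFrom(path);
                Console.WriteLine("CLR (ImageRuntimeVersion): " + asm.ImageRuntimeVersion);
                Console.WriteLine("Declared target framework: " +
                    (GetDeclaredTargetFramework(asm) ?? "not recorded (pre-4.0 assembly)"));
            }
        }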

  • Should .NET comments start with a capital letter and end with a period?

    - by Hamish Grubijan
    Depending on the feedback I get, I might raise this "standard" with my colleagues. This might become a custom StyleCop rule - is there one written already? StyleCop already dictates this for summary, param, and return documentation tags. Do you think it makes sense to demand the same from comments? On a related note: if a comment is already long, then should it be written as a proper sentence? For example (perhaps I tried too hard to illustrate a bad comment): "//if exception quit" vs. "// If an exception occurred, then quit." I figured that, most of the time, if one bothers to write a comment, then it might as well be informative. Consider these two samples:

        //if exception quit
        if (exc != null)
        {
            Application.Exit(-1);
        }

    and

        // If an exception occurred, then quit.
        if (exc != null)
        {
            Application.Exit(-1);
        }

    Arguably, one does not need a comment at all, but since one is provided, I would think that the second one is better. Please back up your opinion. Do you have a good reference for the art of commenting, particularly as it relates to .NET? Thanks.

  • Windows Workflow Foundation 4.0 Designer Rehosting with Custom Activities

    - by Robert
    I have several WF 4.0 workflows that I have created for an application my company is developing. Some of these workflows are simple, and some are very complex (i.e. many steps, several different types of activities, custom activities). For many of these workflows, I have created several custom code activities to support some internal process types. The workflows work great and we have had very few problems when it comes to maintaining them within VS 2010. We now want to move that responsibility off to our business users, so I have created a WPF application to rehost the WF designer (following the MS samples). My problem is that when I open one of the workflows that contains custom code activities, those activities are represented as red boxes with the error message "Activity could not be loaded because of errors in XAML." I have done some research and found several posts that mention that this is usually a problem with namespacing and referencing. The rehosted designer is in a namespace similar to this: Company.Application.Workflow.Designer. The custom code activities are contained within a separate custom workflow library, which I have included as a reference in the designer project. The library's namespace is similar to this: Company.Application.Workflow.Data.Activities. As I have mentioned, the library is set as a reference in the designer's project, and I see it being copied to the output when I build the project. I have also included the reference in the XAML of the main designer application. What am I missing?
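
    One frequent cause, offered here as an assumption rather than a confirmed diagnosis: the rehosting application never loads the custom-activity assembly into its AppDomain before the XAML is parsed, so the clr-namespace reference cannot be resolved and the designer falls back to the red placeholder. A sketch of the usual mitigation (the DLL path is a placeholder):

        using System.Activities.Core.Presentation;
        using System.Activities.Presentation;
        using System.Reflection;

        public static class DesignerHost
        {
            public static WorkflowDesigner LoadWorkflow(string xamlPath)
            {
                // Ensure the assembly that defines the custom activities is loaded into
                // this AppDomain before the XAML is deserialized; otherwise the designer
                // shows "Activity could not be loaded because of errors in XAML."
                Assembly.LoadFrom(@"Company.Application.Workflow.Data.Activities.dll");

                // Register designer metadata for the built-in activity designers.
                new DesignerMetadata().Register();

                var designer = new WorkflowDesigner();
                designer.Load(xamlPath);   // WorkflowDesigner.Load(string) reads the .xaml file
                return designer;
            }
        }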

  • .NET hash codes no longer persistent?

    - by RobV
    I have an API where various types have custom hash codes. These hash codes are based on getting the hash of a string representation of the object in question. Various salting techniques are used so that, as far as possible, hash codes do not collide and objects of different types with equivalent string representations have different hash codes. Obviously, since the hash codes are based on strings, there are some collisions (infinite strings vs. the limited range of 32-bit integers). I use hashes based on string representations since I need the hashes to persist over sessions, particularly for use in database storage of objects. Suddenly today my code has started generating different hash codes for objects, which is breaking all kinds of things. It was working earlier today and I haven't touched any of the code involved in hash code generation. I'm aware that the .NET documentation allows the implementation of hash codes to change between .NET framework versions (and between 32- and 64-bit versions), but I haven't changed the framework version and there have been no framework updates recently as far as I can remember. Any ideas? This seems really weird. Edit: hash codes are generated as follows:

        //Compute Hash Code
        this._hashcode = (this._nodetype + this.ToString() + PlainLiteralHashCodeSalt).GetHashCode();
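
    String.GetHashCode() is documented as an implementation detail that may differ between CLR versions, between 32- and 64-bit processes, and (in later runtimes) even between process runs, so any hash persisted to a database needs to be computed by the application itself. A minimal sketch of one stable alternative, 32-bit FNV-1a over the UTF-8 bytes (an assumption, not part of the API described above):

        using System.Text;

        public static class StableHash
        {
            // 32-bit FNV-1a over the UTF-8 bytes: deterministic across framework
            // versions, bitness and machines, unlike String.GetHashCode().
            public static int Fnv1a32(string text)
            {
                const uint offsetBasis = 2166136261;
                const uint prime = 16777619;

                uint hash = offsetBasis;
                foreach (byte b in Encoding.UTF8.GetBytes(text))
                {
                    hash ^= b;
                    hash = unchecked(hash * prime);
                }
                return unchecked((int)hash);
            }
        }

        // Usage, mirroring the original computation:
        // this._hashcode = StableHash.Fnv1a32(this._nodetype + this.ToString() + PlainLiteralHashCodeSalt);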

  • Can g++ fill uninitialized POD variables with known values?

    - by Bob Lied
    I know that Visual Studio, under debugging options, will fill memory with a known value. Does g++ (any version, but gcc 4.1.2 is most interesting) have any options that would fill an uninitialized local POD structure with recognizable values?

        struct something { int a; int b; };

        void foo()
        {
            something uninitialized;
            bar(uninitialized.b);
        }

    I expect uninitialized.b to be unpredictable randomness; clearly a bug, and easily found if optimization and warnings are turned on. But compiled with -g only, there is no warning. A colleague had a case where code similar to this worked because it coincidentally had a valid value; when the compiler was upgraded, it started failing. He thought it was because the new compiler was inserting known values into the structure (much the way that VS fills 0xCC). In my own experience, it was just different random values that didn't happen to be valid. But now I'm curious -- is there any setting of g++ that would make it fill memory that the standard would otherwise say should be uninitialized?

  • When to use reinterpret_cast?

    - by HeretoLearn
    I am a little confused about the applicability of reinterpret_cast vs. static_cast. From what I have read, the general rule is to use static_cast when the types can be interpreted at compile time, hence the word "static". This is also the cast the C++ compiler uses internally for implicit casts. reinterpret_cast is applicable in two scenarios: converting integer types to pointer types and vice versa, or converting one pointer type to another. The general idea I get is that this is unportable and should be avoided. Where I am a little confused is one usage I need: I am calling C++ from C, and the C code needs to hold on to the C++ object, so basically it holds a void*. What cast should be used to convert between the void* and the class type? I have seen usage of both static_cast and reinterpret_cast. Though from what I have been reading, it appears static_cast is better, as the cast can happen at compile time? Though it says to use reinterpret_cast to convert from one pointer type to another?

  • How to make it easy for users to register at my site?

    - by rob
    I want to make it dirt simple for users coming to my site to register so they can post comments, vote on things, etc. I would like for them to be able to use their Facebook ID, Twitter ID, Yahoo Mail ID, Gmail ID, AIM ID, MSN ID, or whatever else people are likely to have (not necessarily all of those, but the more the better). I want my mom to be able to do it in 30 seconds or less (that is, no "enter your OpenID URL here" type thing that would confuse her). I prefer that they not have to pick a unique name, as that gets annoying as the site gets more users and it gets hard to find one that is unique. What is the best option here? I'm not quite sure about OpenID vs. OAuth, and whether there are other options. And I'd like it to be as simple for me, the developer, as possible (of course!). I don't want to spend forever learning some protocol, nor have to structure my whole app around this. It would be great if there was a site with sample code that is pretty easy to drop in. BTW, StackOverflow is a good example of a site that was easy for me to register for.

  • Design-time XAML serialization problem in VS2010 Designer

    - by Reporting Avatar
    The weird problem is that in VS 2008 everything works fine. In VS2010, while serializing, it is missing the "ReportDimensionElements" element, so I'm unable to get the values back from the serialized XAML. It says "'ReportDimensionElements' is null". Am I missing something silly? Note: I have marked the ReportDimensionElements class with [DefaultValue(null)] to avoid {x:Null} being serialized. Could that be causing this in any way? Serialized XAML, .NET 3.5:

        <Report>
          <Report.CategoricalAxis>
            <CategoricalAxis>
              <CategoricalAxis.ReportDimensionElements>
                <ReportDimensionElements Capacity="4">
                  <ReportDimensionElement DimensionName="Customer" HierarchyName="Customer Geography" LevelName="Country" />
                </ReportDimensionElements>
              </CategoricalAxis.ReportDimensionElements>
            </CategoricalAxis>
          </Report.CategoricalAxis>
        </Report>

    .NET 4.0:

        <Report>
          <Report.CategoricalAxis>
            <CategoricalAxis>
              <CategoricalAxis.ReportDimensionElements>
                <ReportDimensionElement DimensionName="Customer" HierarchyName="Customer Geography" LevelName="Country" />
              </CategoricalAxis.ReportDimensionElements>
            </CategoricalAxis>
          </Report.CategoricalAxis>
        </Report>

    Thanks!

  • Compiling my Boost/NTL program with C++ on Linux

    - by Martin Lauridsen
    Hi SO, I wrote a client program and a server program that use the NTL library and Boost::Asio to do client/server communication for an integer factorization application, in C++. Both sides consist of several headers and .cpp files. Both projects compile fine individually on Windows in Visual Studio. All I did was add the include paths of NTL and Boost to both projects - Additional include paths: "D:\Downloads\WinNTL-5_5_2\include";D:\boost_1_42_0. Furthermore, for both projects, I added the two library paths - Additional library directories: D:\boost_1_42_0\stage\lib;"D:\Documents\Visual Studio 2008\Projects\ntl\Debug" - and added ntl.lib under Additional dependencies. As said, it compiles fine on Windows. But when I put the code on a Linux machine provided by the university and try to compile with the following statement:

        c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include -I/appl/htopopt/Linux_x86_64/boost_1_43_0/include client_protocol.cpp mpqs_client.cpp mpqs_sieve.cpp mpqs_helper.cpp -o mpqs_helper -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm -L/appl/htopopt/Linux_x86_64/boost_1_43_0/lib -lboost_system -static

    I get a huge error, which I posted here. Any idea how to fix this, please?

  • Listen on UDP or switch to TCP in an MFC application

    - by Alexander.S
    I'm editing a legacy MFC application, and I have to add some basic network functionality. The operating side has to receive a simple instruction (numbers 1, 2, 3, 4...) and do something based on that. The client wants the latency to be as low as possible, so naturally I decided to use datagrams (UDP). But reading all sorts of resources left me bugged. I cannot listen on UDP sockets (CAsyncSocket) in MFC; it's only possible to call Receive, which blocks and waits. Blocking the UI isn't really smart. So I guess I could use some threading technique, but since I'm not all that experienced with MFC, how should that be implemented? The other part of the question is: should I do this, or revert to TCP, considering reliability and implementation issues? I know that UDP is unreliable, but just how unreliable is it really? I read that it is up to 50% faster, which is a lot for me. References I used: http://msdn.microsoft.com/en-us/library/09dd1ycd(v=vs.80).aspx

  • Unexpected behaviour of Process.MainWindowHandle

    - by Ed Guiness
    I've been trying to understand Process.MainWindowHandle. According to MSDN: "The main window is the window that is created when the process is started. After initialization, other windows may be opened, including the Modal and TopLevel windows, but the first window associated with the process remains the main window." (Emphasis added) But while debugging I noticed that MainWindowHandle seemed to change value... which I wasn't expecting, especially after consulting the documentation above. To confirm the behaviour I created a standalone WinForms app with a timer to check the MainWindowHandle of the "DEVENV" (Visual Studio) process every 100ms. Here's the interesting part of this test app...

        IntPtr oldHWnd = IntPtr.Zero;

        void GetMainwindowHandle()
        {
            Process[] processes = Process.GetProcessesByName("DEVENV");
            if (processes.Length != 1)
                return;

            IntPtr newHWnd = processes[0].MainWindowHandle;
            if (newHWnd != oldHWnd)
            {
                oldHWnd = newHWnd;
                textBox1.AppendText(processes[0].MainWindowHandle.ToString("X") + "\r\n");
            }
        }

        private void timer1Tick(object sender, EventArgs e)
        {
            GetMainwindowHandle();
        }

    You can see the value of MainWindowHandle changing when you (for example) click on a drop-down menu inside VS. Perhaps I've misunderstood the documentation. Can anyone shed light?
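
    A related caveat when reproducing this: the test above creates fresh Process objects on every tick, but if a single Process instance is kept instead, Refresh() has to be called before each read, because the component caches its data and MainWindowHandle would otherwise keep returning the first value it saw. A hedged variation of the polling code:

        using System;
        using System.Diagnostics;
        using System.Linq;

        static class MainWindowWatcher
        {
            static Process _devenv;
            static IntPtr _lastHandle = IntPtr.Zero;

            // Call this from a timer tick. Unlike the original test, a single Process
            // instance is kept, so Refresh() is required before re-reading the handle.
            public static void Poll(Action<string> log)
            {
                if (_devenv == null || _devenv.HasExited)
                    _devenv = Process.GetProcessesByName("DEVENV").FirstOrDefault();
                if (_devenv == null)
                    return;

                _devenv.Refresh();                         // discard cached process information
                IntPtr handle = _devenv.MainWindowHandle;  // re-read; may differ from the last tick

                if (handle != _lastHandle)
                {
                    _lastHandle = handle;
                    log(handle.ToString("X"));
                }
            }
        }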

  • Is there a Designer for MFC in Visual Studio like for Windows Forms in .NET?

    - by claws
    I'm a .NET programmer. I've never developed anything in MFC. Recently I had to write a C++ console application for some image-processing task. I have finished writing it. But now I also need to design a GUI for it. Well, there won't be anything complex: just a window with a few buttons, radio buttons, check boxes, a picture box and a few sliders; that's it. I'm using VS 2008 and was expecting a .NET-style form designer. Just to test, I created an MFC project (with all default configuration) and these files were created by default: ChildFrm.cpp, MainFrm.cpp, mfc.cpp, mfcDoc.cpp, mfcView.cpp, stdafx.cpp. Now, I'm unable to find a designer. There is no view designer. I've opened all of the above .cpp files and right-clicked in the code editor to look for a "Designer View". The ToolBox is just empty because I'm in code editor mode. When I built the project, this is the window I got. How do I open a designer?

  • Visual Studio 2010 is not allowing me to debug my code

    - by Tejs
    So, this interesting issue has been plaguing me for the past couple of hours. Visual Studio 2010 Ultimate no longer attaches the debugger and lets me debug my code. If I use the built-in development server, then everything works fine. If I switch to "Use Local IIS Web Server" (http://localhost/), then all it does is attach to w3wp.exe, but no DLLs or PDBs are loaded for anything. I can go to Debug > Windows > Modules, and literally nothing is loaded in this window. Conversely, when using the built-in development server, the Modules window displays all the DLLs and shows that the symbols for my DLLs have been loaded. Something is obviously amiss. The VS installation is completely bone stock. In IIS, my website is configured with ASP.NET 2.0 (because no 3.5 exists to select from the drop-down), along with the read / log visits / index this resource options checked on the "Home Directory" tab. Some of my failed ideas: 1) If I attach to the iexplore.exe instance where the website is displayed, it loads Internet Explorer's DLLs, but not mine. 2) I've restarted the computer multiple times. 3) I've invoked devenv.exe /resetuserdata once. 4) I've confirmed that every project is indeed set to Debug and not Release. 5) Deleted all \bin contents and rebuilt the solution. 6) Deleted the entire solution and re-pulled it from source control. Can someone tell me what is wrong with this thing? I'm going to have an aneurysm from the headache this is causing me.

  • Should constant constructor arguments be passed by reference or value?

    - by Mike
    When const values are passed to an object constructor, should they be passed by reference or by value? If you pass by value and the arguments are immediately fed to initializers, are two copies being made? Is this something that the compiler will automatically take care of? I have noticed that all textbook examples of constructors and initializers pass by value, but this seems inefficient to me.

        class Point {
        public:
            int x;
            int y;
            Point(const int _x, const int _y) : x(_x), y(_y) {}
        };

        int main() {
            const int a = 1, b = 2;
            Point p(a,b);
            Point q(3,5);
            cout << p.x << "," << p.y << endl;
            cout << q.x << "," << q.y << endl;
        }

    vs.

        class Point {
        public:
            int x;
            int y;
            Point(const int& _x, const int& _y) : x(_x), y(_y) {}
        };

    Both compile and do the same thing, but which is correct?

  • OpenSceneGraph C++ Access Violation reading location 0x00421000

    - by Kobojunkie
    I am working with OpenSceneGraph and I keep running into an access violation that I would appreciate some help with. The problem is with the particular line below, which happens to be the first in my main function:

        osg::ref_ptr<osg::Node> bench = osgDB::readNodeFile("Models/test.IVE");

    I have my Models folder right in my directory. The error is as below:

        Unhandled exception at 0x68630A6C (msvcr100.dll) in OSG3D.exe: 0xC0000005: Access violation reading location 0x00421000.

    And this is where the problem seems to be coming up:

        /** Read an osg::Node from file.
         * Return valid osg::Node on success,
         * return NULL on failure.
         * The osgDB::Registry is used to load the appropriate ReaderWriter plugin
         * for the filename extension, and this plugin then handles the request
         * to read the specified file.*/
        inline osg::Node* readNodeFile(const std::string& filename)
        {
            return readNodeFile(filename,Registry::instance()->getOptions());
        }

    I would appreciate details on how best to tackle this kind of exception in the future. Are there tools that make this easy to debug, or are there ways to capture the exact issue and fix it? I would appreciate any help with this. My ultimate goal is to learn how to better debug C++-related issues. With this, I mean that reading through the compiler error list http://msdn.microsoft.com/en-us/library/850cstw1(v=vs.71).aspx is not enough.

  • Could I return a FileStream as a generic interface to a file?

    - by Eric
    I'm writing a class interface that needs to return references to binary files. Typically I would provide a reference to a file as a file path. However, I'm considering storing some of the files (such as a small thumbnail) in a database directly rather than on a file system. In this case I don't want to add the extra step of reading the thumbnail out of the database onto the disc and then returning a path to the file for my program to read. I'd want to stream the image directly out of the database into my program and avoid writing anything to the disc unless the user explicitly wants to save something. Would having my interface return a FileStream or even an Image make sense? Then it would be up to the implementing class to determine whether the source of the FileStream or Image is a file on a disc or binary data in a database.

        public interface MyInterface
        {
            string Thumbnail { get; }
            string Attachment { get; }
        }

    vs.

        public interface MyInterface
        {
            Image Thumbnail { get; }
            FileStream Attachment { get; }
        }
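
    Returning the abstract Stream type rather than FileStream keeps the storage decision inside the implementation: the disk-backed class hands out a FileStream, the database-backed one a MemoryStream, and callers cannot tell the difference. A minimal sketch under that assumption, with placeholder type and member names:

        using System.Drawing;   // Image, in classic .NET Framework (System.Drawing.dll)
        using System.IO;

        public interface IAttachmentSource
        {
            Image Thumbnail { get; }    // small enough to materialise eagerly
            Stream OpenAttachment();    // caller disposes; backing store stays hidden
        }

        // Disk-backed: streams straight from the file system.
        public class FileAttachmentSource : IAttachmentSource
        {
            private readonly string _path;
            public FileAttachmentSource(string path) { _path = path; }

            public Image Thumbnail
            {
                get { return Image.FromFile(_path + ".thumb.png"); }   // placeholder naming scheme
            }

            public Stream OpenAttachment()
            {
                return new FileStream(_path, FileMode.Open, FileAccess.Read);
            }
        }

        // Database-backed: nothing ever touches the disc.
        public class DbAttachmentSource : IAttachmentSource
        {
            private readonly byte[] _thumbnailBytes;
            private readonly byte[] _attachmentBytes;

            public DbAttachmentSource(byte[] thumbnailBytes, byte[] attachmentBytes)
            {
                _thumbnailBytes = thumbnailBytes;
                _attachmentBytes = attachmentBytes;
            }

            public Image Thumbnail
            {
                get { return Image.FromStream(new MemoryStream(_thumbnailBytes)); }
            }

            public Stream OpenAttachment()
            {
                return new MemoryStream(_attachmentBytes, false);   // read-only view of the blob
            }
        }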

  • Is there a quality, file-size, or other benefit to JPEG sizes being multiples of 8px or 16px?

    - by davebug
    The JPEG compression encoding process splits a given image into blocks of 8x8 pixels, working with these blocks in future lossy and lossless compressions. [source] It is also mentioned that if the image dimensions are a multiple of 1 MCU block (defined as a Minimum Coded Unit, 'usually 16 pixels in both directions'), lossless alterations to a JPEG can be performed. [source] I am working with product images and would like to know both whether, and how much, benefit can be derived from using multiples of 16 in my final image size (say, an image sized 480px by 360px) vs. a non-multiple of 16 (such as 484x362). In this example I am not interested in further alterations, editing, or recompression of the final image. To try to get closer to a specific answer where I know there must be broad generalities: given a 480x360 image that is 64k and saved at maximum quality in Photoshop [example], can I expect any quality loss from an image that is 484x362? What amount of file-size increase can I expect (for this example, the additional space would be white pixels)? Are there any other disadvantages to growing larger than the 8px grid? I know it's arbitrary to use that specific example, but it would still be helpful (for me and potentially any others pondering an image size) to understand what level of compromise I'd be dealing with in breaking the 8px grid. The key issue here is a debate I've had about whether images whose dimensions are divisible by 8 pixels are higher quality than images that are not.

  • Manual LINQ to SQL entity framework mapping

    - by kprobst
    I've been playing with the O/R designer in VS and I was wondering if someone could shed some light on this. I'm used to O/R mappers that are largely manual (homegrown and, e.g., NHibernate). I don't mind coding the entity classes myself, since they don't change all that often to begin with, and I have this irrational fear of designers and auto-generated code as it is. I have noticed that the generated entity classes contain a lot of boilerplate extensibility methods, e.g. On[Property]Changed() and so on, where [Property] is a mapped member of the class. These are placed in the setters of the property accessors. I assume it's OK if I don't include these when I hand-code the classes, correct? They would be nice if I needed some sort of interception pattern, but that's certainly not the case. I guess I just need to know whether any of those methods are required by the entity framework to keep track of changes to the mapped types in order for things to work when updating the database. Thanks!
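
    For what it's worth, the On[Property]Changing()/On[Property]Changed() partials are extensibility hooks for the generated code, not something LINQ to SQL itself requires: without them (and without INotifyPropertyChanging/INotifyPropertyChanged) the DataContext falls back to snapshotting original values and comparing them at SubmitChanges, which is slower for large graphs but functionally equivalent. A minimal hedged sketch of a hand-coded, attribute-mapped entity (table and column names are assumptions):

        using System.Data.Linq;
        using System.Data.Linq.Mapping;

        // Hand-written LINQ to SQL entity: no designer file, no On...Changed() hooks.
        [Table(Name = "dbo.Customers")]
        public class Customer
        {
            [Column(IsPrimaryKey = true, IsDbGenerated = true)]
            public int Id { get; set; }

            [Column(CanBeNull = false)]
            public string Name { get; set; }

            [Column]
            public string City { get; set; }
        }

        // Usage sketch:
        // using (var db = new DataContext(connectionString))
        // {
        //     var customers = db.GetTable<Customer>().Where(c => c.City == "Berlin").ToList();
        // }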

  • Using an external SOAP service in a Workflow service

    - by whirlwin
    I am using the .NET 4 framework and have made a WCF Workflow Service Application. I want to use a SOAP web service (.NET 3.5) that I have running in another instance of VS. The only method that is exposed is the following:

        [WebMethod]
        public string Reverse(string input)
        {
            char[] chars = input.ToCharArray();
            Array.Reverse(chars);
            return new string(chars);
        }

    I have used the following steps to add the service in my workflow: Add Service Reference, provide the WSDL (the operation shows in the Operations box as expected), click OK, build the solution to ensure that the service shows in my toolbox, and drag the service from the toolbox into the workflow. However, when I look at the properties of the service in the workflow, there is no way to specify the input argument or where to store the result of the invocation of the service. I only have the option of specifying some obscure parameters such as Body: InArgument<ReverseRequestBody> and outBody: OutArgument<ReverseResponseBody> (none of which are strings). Here is a screenshot depicting the properties of the service in the workflow. My question is therefore: is it possible at all to use the SOAP service by specifying a string as the input argument (like it is meant to be used), and also assign the result to a workflow variable?

  • C preprocessor problem in Microsoft Visual Studio 2010

    - by Remo.D
    I've encountered a problem with the new Visual C++ in VS 2010. I've got a header with the following defines:

        #define STC(y) #y
        #define STR(y) STC(\y)
        #define NUM(y) 0##y

    The intent is that you can have some constant around like

        #define TOKEN x5A

    and then you can have the token as a number or as a string:

        NUM(TOKEN) -> 0x5A
        STR(TOKEN) -> "\x5A"

    This is the expected behavior under the substitution rules of macro arguments, and so far it has worked well with gcc, Open Watcom, Pelles C (lcc), Digital Mars C and Visual C++ in VS2008 Express. Today I recompiled the library with VS2010 Express, only to discover that it doesn't work anymore! Using the new version I get:

        NUM(TOKEN) -> 0x5A
        STR(TOKEN) -> "\y"

    It seems that the new preprocessor treats \y as an escape sequence even within a macro body, which is nonsense, as escape sequences only have a meaning in literal strings. I suspect this is a gray area of the ANSI standard, but even if the original behavior was mandated by the standard, MS VC++ is not exactly famous for being 100% ANSI C compliant, so I guess I'll have to live with the new behavior of the MS compiler. Given that, does anybody have a suggestion on how to re-implement the original macro behavior with VS2010?
