Search Results

Search found 106 results on 5 pages for 'cheeso'.

Page 3/5 | < Previous Page | 1 2 3 4 5  | Next Page >

  • What's the compelling reason to upgrade to Visual Studio 2010 from VS2008?

    - by Cheeso
    Are there new features in Visual Studio 2010 that are must-haves? If so, which ones? For me, the big draws for VS2008 as compared to VS2005 were LINQ, .NET Framework multitargeting, WCF (REST + Syndication), and general devenv.exe reliability. Granted, some of these features are framework things, and not tool things. For the purposes of this discussion, I'm willing to combine them into one bucket. What is the list of must-have features for VS2010 versus VS2008? Are there any? I am particularly interested in C#.

    Update: I know how to google, so I can get the official list from Microsoft. I guess what I really wanted was the assessment from people using it, as to which things are really notable. Microsoft went on for 3 pages about 2008/3.5 features, and many people sort of boiled it down to LINQ, and a few other things. What is that short list for VS2010?

    Summary so far, what people think is cool or compelling:

      Visual Studio engine
        - multi-monitor support
        - new extensibility model based on WPF, prettier and more usable
        - new TFS stuff, incl automated test tools
        - parallel debugging
      .NET Framework
        - parallel extensions for .NET
      C# 4.0
        - generic variance
        - optional and named params
        - easier interop with non-managed environments, like COM or Javascript
      VB 10.0
        - collection and array literals / initializers
        - automatic properties
        - anonymous methods / statement lambdas

    I read up on these at Zander's blog. He described these and other features. Nobody on this list said anything about:

      Visual Studio engine
        - F# support
        - Javascript code-completion
        - JQuery is now included
        - UML
        - better Sharepoint capabilities
        - C++ moves to msbuild project files

    Read the article

  • Advantage of using Thread.Start vs QueueUserWorkItem

    - by Cheeso
    In multithreaded .NET programming, what are the decision criteria for using ThreadPool.QueueUserWorkItem versus starting my own thread via new Thread() and Thread.Start()? In a server app (let's say, an ASP.NET app or a WCF service) I think the ThreadPool is always there and available. What about in a client app, like a WinForms or WPF app? Is there a cost to spin up the thread pool? If I just want 3 or 4 threads to work for a short period on some computation, is it better to QUWI or to Thread.Start()?
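    As a point of reference, here is a minimal sketch contrasting the two approaches; ComputeChunk is a hypothetical stand-in for the short computation described in the question:

      using System;
      using System.Threading;

      class PoolVsThreadDemo
      {
          // hypothetical short-lived computation
          static void ComputeChunk(object state)
          {
              Console.WriteLine("working on {0}", state);
          }

          static void Main()
          {
              // Option 1: borrow a pool thread; cheap for short, bursty work,
              // but no direct handle to wait on or to tune priority.
              ThreadPool.QueueUserWorkItem(ComputeChunk, "chunk-1");

              // Option 2: a dedicated thread; you control IsBackground, priority,
              // and lifetime, at the cost of creating a new OS thread.
              var t = new Thread(ComputeChunk) { IsBackground = true };
              t.Start("chunk-2");
              t.Join();   // easy to wait for a dedicated thread to finish
          }
      }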

    Read the article

  • jQuery: recommendations on the jQuery Ribbon plugins out there?

    - by Cheeso
    I see there are several jQuery plugins out there that attempt to reproduce the Ribbon (Fluent) UI that Microsoft introduced with Word 2007. The ones I found include:

      http://code.google.com/p/jquery-ui-ribbon/
      http://dev.mikaelsoderstrom.se/scripts/jquery/ribbon/

    Any experiences with either of these? Recommendations for or against?

    Read the article

  • .NET: Is it possible to get HttpWebRequest to automatically decompress gzip'd responses?

    - by Cheeso
    In this answer, I described how I resorted to wrapping a GZipStream around the response stream in an HttpWebResponse, in order to decompress it. The relevant code looks like this:

      HttpWebRequest hwr = (HttpWebRequest) WebRequest.Create(url);
      hwr.CookieContainer = PersistentCookies.GetCookieContainerForUrl(url);
      hwr.Accept = "text/xml, */*";
      hwr.Headers.Add(HttpRequestHeader.AcceptEncoding, "gzip, deflate");
      hwr.Headers.Add(HttpRequestHeader.AcceptLanguage, "en-us");
      hwr.UserAgent = "My special app";
      hwr.KeepAlive = true;
      var resp = (HttpWebResponse) hwr.GetResponse();
      using (Stream s = resp.GetResponseStream())
      {
          Stream s2 = s;
          if (resp.ContentEncoding.ToLower().Contains("gzip"))
              s2 = new GZipStream(s2, CompressionMode.Decompress);
          else if (resp.ContentEncoding.ToLower().Contains("deflate"))
              s2 = new DeflateStream(s2, CompressionMode.Decompress);
          ... use s2 ...
      }

    Is there a way to get HttpWebResponse to provide a de-compressing stream, automatically? In other words, a way for me to eliminate the following from the above code:

      Stream s2 = s;
      if (resp.ContentEncoding.ToLower().Contains("gzip"))
          s2 = new GZipStream(s2, CompressionMode.Decompress);
      else if (resp.ContentEncoding.ToLower().Contains("deflate"))
          s2 = new DeflateStream(s2, CompressionMode.Decompress);

    Thanks.
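    One possibility worth checking (a sketch, not a tested drop-in): HttpWebRequest has an AutomaticDecompression property, and my understanding is that setting it makes the framework send the Accept-Encoding header and hand back an already-decompressed response stream:

      HttpWebRequest hwr = (HttpWebRequest) WebRequest.Create(url);
      hwr.CookieContainer = PersistentCookies.GetCookieContainerForUrl(url);
      hwr.Accept = "text/xml, */*";
      hwr.UserAgent = "My special app";
      hwr.KeepAlive = true;

      // System.Net.DecompressionMethods; no explicit Accept-Encoding header needed,
      // and no GZipStream/DeflateStream wrapping on the way back out.
      hwr.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

      var resp = (HttpWebResponse) hwr.GetResponse();
      using (Stream s = resp.GetResponseStream())
      {
          // ... use s directly; it should already be decompressed ...
      }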

    Read the article

  • Is there a Base64Stream for .NET? Where?

    - by Cheeso
    If I want to produce a Base64-encoded output, how would I do that in .NET? I know that since .NET 2.0, there is the ICryptoTransform interface, and the ToBase64Transform() and FromBase64Transform() implementations of that interface. But those classes are embedded into the System.Security namespace, and require the use of a TransformBlock, TransformFinalBlock, and so on. Is there an easier way to base64 encode a stream of data in .NET?
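    For what it's worth, the Base64 transforms can be wrapped in a CryptoStream, which gives something close to a "Base64Stream" without calling TransformBlock/TransformFinalBlock yourself. A rough sketch (file names are placeholders):

      using System.IO;
      using System.Security.Cryptography;

      // Bytes written through 'b64' come out Base64-encoded in the underlying stream.
      using (Stream output = File.Create("encoded.b64"))
      using (Stream b64 = new CryptoStream(output, new ToBase64Transform(), CryptoStreamMode.Write))
      using (Stream input = File.OpenRead("data.bin"))
      {
          byte[] buffer = new byte[8192];
          int n;
          while ((n = input.Read(buffer, 0, buffer.Length)) > 0)
              b64.Write(buffer, 0, n);
      }   // disposing the CryptoStream flushes the final partial block

    Decoding would be the mirror image, using FromBase64Transform with CryptoStreamMode.Read.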

    Read the article

  • Java XHTML Doclet: fatal exception

    - by Cheeso
    Has anyone used XHTML Doclet, and can you provide some hints as to how to get it to work successfully? I run it like this:

      \sunjdk\bin\javadoc -doclet net.sourceforge.xhtmldoclet.Doclet
          -docletpath c:\sw\java\XHTML_Doclet_0.4.jar
          -d <output> [class files here]

    (all on one line)

    When I run it I get this:

      javadoc: error - In doclet class net.sourceforge.xhtmldoclet.Doclet, method validOptions has thrown an exception java.lang.reflect.InvocationTargetException
      java.lang.Error: Fatal: Resource (net.sourceforge.xhtmldoclet.resources.doclet) for javadoc doclets is missing.
              at com.sun.tools.doclets.internal.toolkit.util.MessageRetriever.getText(MessageRetriever.java:110)
              at com.sun.tools.doclets.internal.toolkit.util.MessageRetriever.getText(MessageRetriever.java:92)
              at com.sun.tools.doclets.internal.toolkit.util.MessageRetriever.getText(MessageRetriever.java:81)
              at com.sun.tools.doclets.internal.toolkit.Configuration.getText(Configuration.java:634)
              at com.sun.tools.doclets.internal.toolkit.Configuration.generalValidOptions(Configuration.java:515)
              at net.sourceforge.xhtmldoclet.Config.validOptions(Unknown Source)
              at net.sourceforge.xhtmldoclet.Doclet.validOptions(Unknown Source)
              at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
              at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
              at java.lang.reflect.Method.invoke(Method.java:597)
              at com.sun.tools.javadoc.DocletInvoker.invoke(DocletInvoker.java:269)
              at com.sun.tools.javadoc.DocletInvoker.validOptions(DocletInvoker.java:198)
              at com.sun.tools.javadoc.Start.parseAndExecute(Start.java:317)
              at com.sun.tools.javadoc.Start.begin(Start.java:128)
              at com.sun.tools.javadoc.Main.execute(Main.java:41)
              at com.sun.tools.javadoc.Main.main(Main.java:31)
      1 error

    It seems like it ought to just work. What am I doing wrong?

    Read the article

  • What would it take to get auto-revert-mode to actually work in my dired buffer?

    - by Cheeso
    Apparently auto-revert-mode is supposed to work in dired buffers. I had never heard of this, but the doc says it works. Then I read a little more and found some fine print:

      Auto-reverting Dired buffers currently works on GNU or Unix style operating systems. It may not work satisfactorily on some other systems.

    ...and...

      [dired buffers] do not auto-revert when information about a particular file changes (e.g. when the size changes) or when inserted subdirectories change. To be sure that all listed information is up to date, you have to manually revert using g, even if auto-reverting is enabled in the Dired buffer.

    (source)

    Well, uh, gee.... That doesn't sound like autorevert to me. What would it take to get auto-revert for dired to actually work? Even on (gasp) non-Unix operating systems. Could I just modify auto-revert-handler to call revert-buffer on dired buffers?

    Read the article

  • Why would I use Assembly.LoadFile in lieu of Assembly.LoadFrom?

    - by Cheeso
    It's my impression that Assembly.LoadFrom uses the ApplicationBase and PrivateBinPath. It is also my impression that Assembly.LoadFile does not. Why would anyone want to use LoadFile? In other words, if my understanding is correct, why would anyone want to NOT use the ApplicationBase and PrivateBinPath? I'm working with some existing code, which uses LoadFile, and I don't understand why it would do so. LoadFile apparently does not load dependencies from the same directory. The LoadFrom method does load dependencies (From the doc: The load-from context...allows dependencies on that path to be found and loaded because the path information is maintained by the context.) I'd like to convert it from using LoadFile, to use LoadFrom. What is likely to break, if anything, if I replace LoadFile with LoadFrom? Even if it is benign, it may be that I cannot do the replacement, just based on project schedules. If I cannot replace LoadFile with LoadFrom, is there a way to convince assemblies loaded with LoadFile to load dependencies? Is there a packaging trick I can use (embedded assembly, ILMerge, an AssemblyResolve event, something like that) that can allow an assembly loaded with LoadFile to also load its dependencies?
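    On that last point, one avenue to explore (a sketch only; the plugin path and dependency layout are hypothetical): hook AppDomain.AssemblyResolve and probe the LoadFile'd assembly's own directory for its dependencies.

      using System;
      using System.IO;
      using System.Reflection;

      string pluginPath = @"c:\plugins\MyPlugin.dll";          // hypothetical
      string pluginDir = Path.GetDirectoryName(pluginPath);

      AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
      {
          // args.Name is a full assembly name, e.g. "MyDep, Version=1.0.0.0, Culture=..., PublicKeyToken=..."
          string candidate = Path.Combine(pluginDir, new AssemblyName(args.Name).Name + ".dll");
          return File.Exists(candidate) ? Assembly.LoadFile(candidate) : null;
      };

      Assembly plugin = Assembly.LoadFile(pluginPath);   // dependencies now resolve via the handler above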

    Read the article

  • How can I implement a site with ASP.NET MVC without using Visual Studio?

    - by Cheeso
    I have seen ASP.NET MVC Without Visual Studio, which asks, Is it possible to produce a website based on ASP.NET MVC, without using Visual Studio? And the accepted answer is, yes. Ok, next question: how? Here's an analogy. If I want to create an ASP.NET Webforms page, I load up my favorite text editor, create a file named Something.aspx. Then I insert into that file, some boilerplate:

      <%@ Page Language="C#" Debug="true" Trace="false" Src="Sourcefile.cs" Inherits="My.Namespace.ContentsPage" %>
      <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
        <head>
          <title>Title goes here </title>
          <link rel="stylesheet" type="text/css" href="css/style.css"></link>
          <style type="text/css">
            #elementid {
              font-size: 9pt;
              color: Navy;
              ... more css ...
            }
          </style>
          <script type="text/javascript" language='javascript'>
            // insert javascript here.
          </script>
        </head>
        <body>
          <asp:Literal Id='Holder' runat='server'/>
          <br/>
          <div id='msgs'></div>
        </body>
      </html>

    Then I also create the Sourcefile.cs file:

      namespace My.Namespace
      {
          using System;
          using System.Web;
          using System.Xml;
          // etc...

          public class ContentsPage : System.Web.UI.Page
          {
              protected System.Web.UI.WebControls.Literal Holder;

              void Page_Load(Object sender, EventArgs e)
              {
                  // page load logic here
              }
          }
      }

    And that is a working ASPNET page, created in a text editor. Drop it into an IIS virtual directory, and it's working. What do I have to do, to make a basic hello-World ASPNET MVC app, in a text editor? (without Visual Studio) Suppose I want a basic MVC app with a controller, one view, and a simple model. What files would I need to create, and what would go into them?
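    Not an authoritative recipe, but a sketch of the minimal hand-written files this might take with ASP.NET MVC 1.0 (names such as MyMvcSite and HomeController are just the conventional defaults; the web.config entries that reference System.Web.Mvc and the routing module are omitted here):

      // Controllers\HomeController.cs
      using System.Web.Mvc;

      namespace MyMvcSite.Controllers
      {
          public class HomeController : Controller
          {
              public ActionResult Index()
              {
                  ViewData["Message"] = "Hello, World";
                  return View();   // by convention renders Views\Home\Index.aspx
              }
          }
      }

      // Global.asax.cs -- register the default route at startup
      using System.Web.Mvc;
      using System.Web.Routing;

      namespace MyMvcSite
      {
          public class MvcApplication : System.Web.HttpApplication
          {
              protected void Application_Start()
              {
                  RouteTable.Routes.MapRoute(
                      "Default",
                      "{controller}/{action}/{id}",
                      new { controller = "Home", action = "Index", id = "" });
              }
          }
      }

    Global.asax itself is a one-liner (<%@ Application Inherits="MyMvcSite.MvcApplication" %>), and Views\Home\Index.aspx can be a plain page that inherits System.Web.Mvc.ViewPage and writes out <%= Html.Encode(ViewData["Message"]) %>.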

    Read the article

  • .NET: Is there a way to finagle a default namespace in an XPath 1.0 query?

    - by Cheeso
    I'm building a tool that performs xpath 1.0 queries on XHTML documents. The requirement to use a namespace prefix in the query is killing me. The query looks like this:

      html/body/div[@class='contents']/div[@class='body']/
      div[@class='pgdbbyauthor']/h2[a[@name][starts-with(.,'Quick')]]/
      following-sibling::ul[1]/li/a

    (all on one line)

    ...which is bad enough, except because it's xpath 1.0, I need to use an explicit namespace prefix on each QName, so it looks like this:

      ns1:html/ns1:body/ns1:div[@class='contents']/ns1:div[@class='body']/
      ns1:div[@class='pgdbbyauthor']/ns1:h2[ns1:a[@name][starts-with(.,'Quick')]]/
      following-sibling::ns1:ul[1]/ns1:li/ns1:a

    To set up the query, I do something like this:

      var xpathDoc = new XPathDocument(new StringReader(theText));
      var nav = xpathDoc.CreateNavigator();
      var xmlns = new XmlNamespaceManager(nav.NameTable);
      foreach (string prefix in xmlNamespaces.Keys)
          xmlns.AddNamespace(prefix, xmlNamespaces[prefix]);
      XPathNodeIterator selection = nav.Select(xpathExpression, xmlns);

    But what I want is for the xpathExpression to use the implicit default namespace. Is there a way for me to transform the unadorned xpath expression, after it's been written, to inject a namespace prefix for each element name in the query? I'm thinking, anything between two slashes, I could inject a prefix there. Excepting of course axis names like "parent::" and "preceding-sibling::". And wildcards. That's what I mean by "finagle a default namespace". Is this hack gonna work?

    Addendum

    Here's what I mean. Suppose I have an xpath expression, and before passing it to nav.Select(), I transform it. Something like this:

      string FixupWithDefaultNamespace(string expr)
      {
          string s = expr;
          s = Regex.Replace(s, "^(?!::)([^/:]+)(?=/)", "ns1:$1");                        // beginning
          s = Regex.Replace(s, "/([^/:]+)(?=/)", "/ns1:$1");                             // stanza
          s = Regex.Replace(s, "::([A-Za-z][^/:*]*)(?=/)", "::ns1:$1");                  // axis specifier
          s = Regex.Replace(s, "\\[([A-Za-z][^/:*\\(]*)(?=[\\[\\]])", "[ns1:$1");        // predicate
          s = Regex.Replace(s, "/([A-Za-z][^/:]*)(?!<::)$", "/ns1:$1");                  // end
          s = Regex.Replace(s, "^([A-Za-z][^/:]*)$", "ns1:$1");                          // edge case
          s = Regex.Replace(s, "([-A-Za-z]+)\\(([^/:\\.,\\)]+)(?=[,\\)])", "$1(ns1:$2"); // xpath functions
          return s;
      }

    This actually works for simple cases I tried. To use the example from above - if the input is the first xpath expression, the output I get is the 2nd one, with all the ns1 prefixes. The real question is, is it hopeless to expect this Regex.Replace approach to work, as the xpath expressions get more complicated?

    Read the article

  • What exactly is the GNU tar ././@LongLink "trick"?

    - by Cheeso
    I read that a tar entry type of 'L' (76) is used by gnu tar and gnu-compliant tar utilities to indicate that the next entry in the archive has a "long" name. In this case the header block with the entry type of 'L' usually encodes the name ././@LongLink . My question is: where is the format of the next block described? The format of a tar archive is very simple: it is just a series of 512-byte blocks. In the normal case, each file in a tar archive is represented as a series of blocks. The first block is a header block, containing the file name, entry type, modified time, and other metadata. Then the raw file data follows, using as many 512-byte blocks as required. Then the next entry. If the filename is longer than will fit in the space allocated in the header block, gnu tar apparently uses what's known as "the ././@LongLink trick". I can't find a precise description for it. When the entry type is 'L', how do I know how long the "long" filename is? Is the long name limited to 512 bytes, in other words, whatever fits in one block? Most importantly: where is this documented?
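    Based on my reading of the GNU tar format (so treat this as a hedged sketch rather than a spec): the 'L' header's own size field (offset 124, octal) gives the byte count of the long name, and the name occupies the following data block(s), padded to a 512-byte boundary, so it is not limited to 512 bytes. The header after those data blocks describes the real entry, whose truncated name field should be replaced by the long one. In code, roughly:

      using System;
      using System.IO;
      using System.Text;

      static class GnuTarLongName
      {
          // 'header' is the 512-byte block whose typeflag (offset 156) is 'L'.
          // A robust reader would loop until all 'padded' bytes are consumed.
          public static string Read(Stream tar, byte[] header)
          {
              string octal = Encoding.ASCII.GetString(header, 124, 12).Trim('\0', ' ');
              int size = Convert.ToInt32(octal, 8);            // length of the long name, in bytes

              int padded = ((size + 511) / 512) * 512;         // data is padded to a 512-byte boundary
              byte[] data = new byte[padded];
              tar.Read(data, 0, padded);

              return Encoding.UTF8.GetString(data, 0, size).TrimEnd('\0');
          }
      }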

    Read the article

  • No-overflow cast on x64

    - by Cheeso
    I have an existing C codebase that works on x86. I'm now compiling it for x64. What I'd like to do is cast a size_t to a DWORD, and throw an exception if there's a loss of data. Q: Is there an idiom for this? Here's why I'm doing this: A bunch of Windows APIs accept DWORDs as arguments, and the code currently assumes sizeof(DWORD)==sizeof(size_t). That assumption holds for x86, but not for x64. So when compiling for x64, passing size_t in place of a DWORD argument, generates a compile-time warning. In virtually all of these cases the actual size is not going to exceed 2^32. But I want to code it defensively and explicitly. This is my first x64 project, so... be gentle.

    Read the article

  • How would you code an efficient Circular Buffer in Java or C#?

    - by Cheeso
    I want a simple class that implements a fixed-size circular buffer. It should be efficient, easy on the eyes, generically typed. EDIT: It need not be MT-capable, for now. I can always add a lock later, it won't be high-concurrency in any case. Methods should be: .Add and I guess .List, where I retrieve all the entries. On second thought, Retrieval I think should be done via an indexer. At any moment I will want to be able to retrieve any element in the buffer by index. But keep in mind that from one moment to the next Element[n] may be different, as the Circular buffer fills up and rolls over. This isn't a stack, it's a circular buffer. Regarding "overflow": I would expect internally there would be an array holding the items, and over time the head and tail of the buffer will rotate around that fixed array. But that should be invisible from the user. There should be no externally-detectable "overflow" event or behavior. This is not a school assignment - it is most commonly going to be used for an MRU cache or a fixed-size transaction or event log.
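    As a starting point, here is one possible shape for such a class in C# -- a minimal, non-thread-safe sketch with Add, an indexer, and a List snapshot, matching the requirements above:

      using System;
      using System.Collections.Generic;

      public class CircularBuffer<T>
      {
          private readonly T[] _items;
          private int _head;      // index of the oldest element
          private int _count;

          public CircularBuffer(int capacity)
          {
              if (capacity <= 0) throw new ArgumentOutOfRangeException("capacity");
              _items = new T[capacity];
          }

          public int Count { get { return _count; } }

          // Adds an item; when the buffer is full, silently overwrites the oldest element.
          public void Add(T item)
          {
              int tail = (_head + _count) % _items.Length;
              _items[tail] = item;
              if (_count == _items.Length)
                  _head = (_head + 1) % _items.Length;   // rolled over: the oldest slot was overwritten
              else
                  _count++;
          }

          // Indexer: 0 is the oldest element currently in the buffer.
          public T this[int index]
          {
              get
              {
                  if (index < 0 || index >= _count) throw new ArgumentOutOfRangeException("index");
                  return _items[(_head + index) % _items.Length];
              }
          }

          // Snapshot of the contents, oldest first.
          public List<T> List()
          {
              var result = new List<T>(_count);
              for (int i = 0; i < _count; i++)
                  result.Add(this[i]);
              return result;
          }
      }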

    Read the article

  • What's the correct type to use for pointer subtraction on x64?

    - by Cheeso
    I'm just starting out with x64 compilation. I have a couple of char*'s, and I'm subtracting them. With a 32-bit compile, this works: char * p1 = .... char * p3 = ... int delta = p3 - p1; But if I compile for x64 I get a warning: warning C4244: 'initializing' : conversion from '__int64' to 'int', possible loss of data What is the correct type to use, to represent a difference between two pointers, that works in both x86 and x64 compiles?

    Read the article

  • Is there a good reason Uni courses still use "academic" languages like modula2?

    - by Cheeso
    This question prompts me to ask - why do universities still teach in languages like Modula2, when improved modern languages are available for free? Are there uni's that still teach Pascal, for example? I mean, it was good 30 years ago, but... now? Why? Why not Java, C#, Haskell? Related: Is it backwards to still teach LISP? Is this a duplicate question? If not, I think it ought to be a community wiki topic.

    Read the article

< Previous Page | 1 2 3 4 5  | Next Page >