Search Results

Search found 43110 results on 1725 pages for 'noob question'.

  • Relative path from an ASP.NET user control NavigateUrl

    - by Daniel Ballinger
    I have a user control that contains a GridView. The GridView has both a HyperLinkField column and a template column that contains a HyperLink control. The ASP.NET project is structured as follows, with the Default.aspx page in each folder using the user control:

        Application Root
            Controls
                UserControl with GridView
            SystemAdminFolder
                Default.aspx
                Edit.aspx
            OrganisationAdminFolder
                Default.aspx
                Edit.aspx
            StandardUserFolder
                Default.aspx
                Edit.aspx

    Note: The folders are being used to ensure the user has the correct role. I need to be able to set the DataNavigateUrlFormatString for the HyperLinkField and the NavigateUrl for the HyperLink so that they resolve to the Edit.aspx page in the corresponding folder. If I set the navigate URL to "Edit.aspx", the URL in the browser appears as 'http://Application Root/Controls/Edit.aspx' regardless of the originating directory. I can't use the web application root operator (~/) because the path needs to be relative to the current page, not the application root.

    How can I use the same user control in multiple folders and resolve the URL to another page in the same folder?

    Note: The question is based closely on a similar question by azhar2000s on the ASP.NET forums that matches my problem.
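
    A minimal sketch of one workaround (the grid field name and column position are hypothetical): inside a user control, Page refers to the hosting page, so the hosting folder can be derived at run time and used to build the link targets.

        // C# sketch: resolve Edit.aspx against the *hosting page's* folder.
        // Requires System.Web and System.Web.UI.WebControls.
        protected void Page_Load(object sender, EventArgs e)
        {
            // e.g. "~/SystemAdminFolder/" when hosted by that folder's Default.aspx
            string folder = VirtualPathUtility.GetDirectory(Page.AppRelativeVirtualPath);

            var linkField = (HyperLinkField)grid.Columns[0];
            linkField.DataNavigateUrlFormatString = folder + "Edit.aspx?id={0}";
        }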

  • Decisions in teaching someone else to program: language selection

    - by Dinah
    My friend would like me to guide her into learning programming. She's already shown enormous aptitude for thinking like a programmer, but is scared of the idea of programming since in her mind it's relegated to some magical realm accessible only to smart people and trained computer scientists (ironically, I am neither, but that's beside the point). My main question is the age-old and irritating one: which language should I choose? I've narrowed it down to these:

    PHP: dead simple to start with, and I remember enough of the language to answer all novice questions. However, I can think of a million reasons why I wouldn't recommend this as a first language. The most diplomatic of which is that there's no desktop app option to which I would feel comfortable subjecting a novice.

    Python: supposed to be wonderful for beginners, and generally everything I've heard about it screams that this is the correct choice. That's the problem: everything I've heard about it. I don't know it yet, and I have a lot of projects going on right now, so I don't feel like learning it yet -- but I'm going to be the tech support when any little thing goes wrong. I know there are tons of online resources, but in the frustration of the moment, it's always going to be just me.

    C#: this is the language I'm most comfortable with, so I know I can be good tech support. I also love this language and its versatility and community. The big drawback here is that I remember when I first learned it, after doing mainly PHP, Perl, and JavaScript, and I found the experience overwhelming. You are simultaneously learning: programming concepts, C# syntax, strong typing, OOP, and a complex, powerful IDE with a bazillion options and buttons all over it.

  • Compiling a DLL which includes Ogre3D gives an assertion error when used

    - by samaursa
    Hi, I have a framework that I am building which is compiled into a static library to be used by other projects. The library works perfectly without issues. The problem is that the link time is very long for the projects that use the library, so I thought I would make a DLL project of the same framework. I started with baby steps and created an MFC DLL project through Visual Studio. The project has the following header:

        /// --------------------------------------------
        #ifndef OGRECORE_H
        #define OGRECORE_H

        #ifdef OGREFW_EXPORT
        #define OGREFW_DLL __declspec(dllexport)
        #else
        #define OGREFW_DLL __declspec(dllimport)
        #endif

        class OgreRoot;

        namespace OgreFW
        {
            class OGREFW_DLL OgreCore // : public OIS::KeyListener, public OIS::MouseListener
            {
            public:
                OgreCore();
                ~OgreCore();
            };
        }

        #endif // OGRECORE_H

    and this is the source:

        #include "stdafx.h"
        #include "OgreCore.h"
        //#include "Ogre.h"
        //#include "OgreRoot.h"
        //#include "OgreRenderWindow.h"
        //#include "OgreLog.h"
        //#include "OgreLogManager.h"
        //#include "OgreOverlay.h"
        //#include "OgreViewport.h"
        //#include "OgreRenderWindow.h"
        //#include "OgreFrameListener.h"
        //#include "OgreWindowEventUtilities.h"
        //#include "OgreSceneNode.h"
        //#include "OgreEntity.h"
        //#include "OgreManualObject.h"
        //#include "OgreMeshManager.h"
        //#include "OgreConfigFile.h"
        //#include "OgreOverlayContainer.h"
        //#include "OgreOverlayManager.h"

        namespace OgreFW
        {
            OGREFW_DLL OgreCore::OgreCore()
            {
            }

            // ------------------------
            OGREFW_DLL OgreCore::~OgreCore()
            {
            }
        }

    As you can see, I have commented out the Ogre includes. When a project uses the compiled DLL and constructs this (OgreCore) class, it works perfectly fine. As soon as I uncomment one of the Ogre includes and compile the DLL again, the project that uses the DLL gives an assertion error. The full details can be found in the Ogre forum post; I posted the question there first, but since it's not really an Ogre-specific question I thought I would try here as well. The link to the Ogre post is: http://www.ogre3d.org/forums/viewtopic.php?f=2&t=58403

    Thank you in advance

  • Average function without overflow exception

    - by Ron Klein
    .NET Framework 3.5. I'm trying to calculate the average of some pretty large numbers. For instance:

        using System;
        using System.Linq;

        class Program
        {
            static void Main(string[] args)
            {
                var items = new long[] { long.MinValue + 100, long.MinValue + 200, long.MinValue + 300 };
                try
                {
                    var avg = items.Average();
                    Console.WriteLine(avg);
                }
                catch (OverflowException ex)
                {
                    Console.WriteLine("can't calculate that!");
                }
                Console.ReadLine();
            }
        }

    Obviously, the mathematical result is -9223372036854775608 (long.MinValue + 200), but I get an exception there. This is because the implementation (on my machine) of the Average extension method, as inspected by .NET Reflector, is:

        public static double Average(this IEnumerable<long> source)
        {
            if (source == null)
            {
                throw Error.ArgumentNull("source");
            }
            long num = 0L;
            long num2 = 0L;
            foreach (long num3 in source)
            {
                num += num3;
                num2 += 1L;
            }
            if (num2 <= 0L)
            {
                throw Error.NoElements();
            }
            return (((double) num) / ((double) num2));
        }

    I know I can use a BigInt library (yes, I know that it is included in .NET Framework 4.0, but I'm tied to 3.5). But I still wonder if there's a pretty straightforward implementation of calculating the average of integers without an external library. Do you happen to know of such an implementation? Thanks!!

    UPDATE: The previous example, of three large integers, was just an example to illustrate the overflow issue. The question is about calculating the average of any set of numbers whose sum might exceed the type's max value. Sorry about the confusion. I also changed the question's title to avoid additional confusion. Thanks all!!
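
    A minimal sketch of one library-free approach (this is not the BCL code): maintain a running mean instead of a running sum, so no intermediate value can overflow.

        // C# sketch: incremental mean; trades the OverflowException for
        // ordinary floating-point rounding error.
        static double StreamingAverage(IEnumerable<long> source)
        {
            double mean = 0;
            long count = 0;
            foreach (long value in source)
            {
                count++;
                mean += (value - mean) / count;   // stays well inside double's range
            }
            if (count == 0)
                throw new InvalidOperationException("Sequence contains no elements");
            return mean;
        }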

  • Using UpdatePanels inside of a ListView

    - by Jim B
    Hey everyone, I'm wondering if anybody has run across something similar to this before. Some quick pseudo-code to get started:

        <UpdatePanel>
            <ContentTemplate>
                <ListView>
                    <LayoutTemplate>
                        <UpdatePanel>
                            <ContentTemplate>
                                <ItemPlaceholder />
                            </ContentTemplate>
                        </UpdatePanel>
                    </LayoutTemplate>
                    <ItemTemplate>
                        Some stuff goes here
                    </ItemTemplate>
                </ListView>
            </ContentTemplate>
        </UpdatePanel>

    The main thing to take away from the above is that I have an update panel which contains a ListView, and then each of the ListView items is contained in its own update panel. What I'm trying to do is: when one of the ListView update panels triggers a postback, I'd want to also update one of the other ListView item update panels.

    A practical implementation would be a quick survey that has 3 questions. We'd only ask Question #3 if the user answered "Yes" to Question #1. When the page loads, it hides Q3 because it doesn't see "Yes" for Q1. When the user clicks "Yes" to Q1, I want to refresh the Q3 update panel so it now displays.

    I've got it working now by refreshing the outer UpdatePanel on postback, but this seems inefficient because I don't need to re-evaluate every item; just the ones that would be affected by the prerequisite I detailed above. I've been grappling with setting up triggers, but I keep coming up empty, mainly because I can't figure out a way to set up a trigger for the UpdatePanel for Q3 based off of the postback triggered by Q1. Any suggestions out there? Am I barking up the wrong tree?
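
    A hedged sketch of one alternative to declarative triggers (all control IDs here are hypothetical): give each item's panel UpdateMode="Conditional" and, in Q1's postback handler, call Update() on just the panel that hosts Q3.

        // C# sketch: refresh only the dependent item's panel.
        protected void Q1_SelectedIndexChanged(object sender, EventArgs e)
        {
            bool showQ3 = ((RadioButtonList)sender).SelectedValue == "Yes";

            var q3Item = surveyListView.Items[2];                 // item hosting Q3
            q3Item.FindControl("Q3Placeholder").Visible = showQ3;
            ((UpdatePanel)q3Item.FindControl("Q3UpdatePanel")).Update();
        }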

  • Problem in IE8 with GET parameters when opening a new window with JavaScript

    - by amfa95
    Hi, I have a problem with IE8 and the opening of a new window with JavaScript, submitting parameters that contain special characters.

        <a href="javascript:oWin('/html/de/4664286/printregistrationcontent.html?12-security question&#61;Wie hei&#223;t Ihr Lieblingsrestaurant','PRINT',800,600);" class="print">Seite drucken</a>

    The problem is the letter 'ß' (sharp S). As you can see, the string above is encoded due to anti-XSS measures. This link works in FF and IE6, but IE8 transmits the URL parameter as a character with code 65*** (I don't know the exact value). In the opened window you will only see a square (because a character with a code above 65000 is not printable). I also tried to use URL encoding instead of HTML encoding:

        <a href="javascript:oWin('/html/de/4664286/printregistrationcontent.html?12-security question%3DWie hei%C3%9Ft Ihr Lieblingsrestaurant','PRINT',800,600);" class="print">Seite drucken</a>

    If I click on this link in FF or IE6 it works as expected, but IE8 fails to transmit the "ß" to the server and therefore also gets it back in the wrong way. If I paste this URL into IE8 directly it works too, but not if the window is opened by JavaScript. The JavaScript function oWin is defined as follows:

        function oWin(url, title, sizeH, sizeV)
        {
            winHandle = top.open(url, title, 'toolbar=no,directories=no,status=yes,scrollbars=yes,menubar=no,resizable=no,width=' + sizeH + ',height=' + sizeV);
            if (navigator.appVersion.indexOf("MSIE 3", 0) == -1)
                id = setTimeout('winHandle.focus()', 1000);
        }

    If someone has an idea where to look for the reason, please answer this. Thank you, amfa
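
    One hedged thing to try (my suggestion, not from the original post): build the query string in script with encodeURIComponent, so the markup never has to carry a raw or entity-encoded 'ß' and every browser receives percent-encoded UTF-8.

        // JavaScript sketch: percent-encode the value at call time.
        var query = '12-security question=' +
            encodeURIComponent('Wie hei\u00DFt Ihr Lieblingsrestaurant');
        oWin('/html/de/4664286/printregistrationcontent.html?' + query,
             'PRINT', 800, 600);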

  • Why does TFS 2010 build my WiX project before anything else?

    - by bwerks
    Hi all, a similar question was asked and answered about a year ago, but it was either a different issue (everything was in beta) or misdiagnosed. It's located here: http://stackoverflow.com/questions/688162/msbuild-task-fails-because-any-cpu-solution-is-built-out-of-order

    My issue is that I have a WiX installer project, and after upgrading to TFS 2010 on Monday, the build fails on linking because it can't find the build product of the WPF application in the project. After some digging, it's because it hasn't been built yet. Building in VS 2010 works as normal. The WiX project is set to depend on the WPF project, and when viewing Project Build Order in the IDE, everything looks as normal.

    The problem was originally encountered with only two platform definitions in the solution: x86 and x64. There are also two flavors, Debug and Release, and TFSBuild.proj is set to build all four combinations. There was no occurrence of AnyCPU anywhere. Per the referenced question above, I tried changing the WPF project to use AnyCPU so that it would be built first. At this point, the WiX project used the exact configuration and the WPF project used the flavor with AnyCPU. However, doing so didn't seem to change anything.

    I'm using the TFS 2010 RTM, VS 2010 RTM, and the most recent version of WiX, which at the time of this writing is 3.5.1602.0, from 2010-04-02. Anyone else running into this?

  • branch prediction

    - by Alexander
    Consider the following sequence of actual outcomes for a single static branch. T means the branch is taken; N means the branch is not taken. For this question, assume that this is the only branch in the program.

        T T T N T N T T T N T N T T T N T N

    Assume a two-level branch predictor that uses one bit of branch history (i.e., a one-bit BHR). Since there is only one branch in the program, it does not matter how the BHR is concatenated with the branch PC to index the BHT. Assume that the BHT uses one-bit counters and that, again, all entries are initialized to N. Which of the branches in this sequence would be mis-predicted? Use the table below.

    Now, I am not asking for answers to this question, but rather for guides and pointers. What does a two-level branch predictor mean, and how does it work? What do the BHR and BHT stand for?
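
    For orientation, the standard expansions are: BHR is the branch history register (the recent outcome bits) and BHT is the branch history table (the counters the history indexes). A hedged toy simulation of the mechanism, not the homework table itself:

        # Python sketch: two-level predictor, 1-bit BHR, 1-bit BHT counters,
        # everything initialized to N.
        outcomes = "TTTNTNTTTNTNTTTNTN"
        bht = {0: "N", 1: "N"}    # one 1-bit counter per history value
        bhr = 0                   # 0 = last outcome was N, 1 = it was T
        for actual in outcomes:
            predicted = bht[bhr]
            print(predicted, actual, "mispredict" if predicted != actual else "ok")
            bht[bhr] = actual                  # 1-bit counter: remember last outcome
            bhr = 1 if actual == "T" else 0    # shift the new outcome into the BHR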

  • VS2010 / Target Framework = 3.5 / Building on Continuous Integration Server

    - by granadaCoder
    I'm checking into upgrading to VS2010. Our production servers only have the 3.5 Framework, and it will be 6-9 months before they are updated. We also have a Continuous Integration server running CruiseControl.NET (CC.NET). It has the 3.5 Framework on it as well. Our implementation of CC.NET mainly calls msbuild.exe MySolution.msbuild. (We encapsulate most of the build logic into .msbuild files, FYI.) Inside the .msbuild file, the following is the "Build" syntax:

        <Target Name="Build" DependsOnTargets="Checkout">
            <MSBuild Projects="$(WorkingCheckout)\MySolution.sln"
                     Targets="Build"
                     Properties="Configuration=$(Configuration)">
                <Output TaskParameter="TargetOutputs" ItemName="TargetOutputsItemName" />
            </MSBuild>
        </Target>

    I know VS2010 can "target" the 3.5 Framework. My question is: what happens when I have a VS2010 dev machine and I check the VS2010 .sln and .csproj files into source control (SVN, by the way)? Will the CC.NET machine, which only has the 3.5 Framework installed on it, be able to build the .sln?

    I guess I could test it, but the catch-22 is that I don't have VS2010 (yet), so I'm asking before I try the trial or a real install. Any ideas what will happen? I guess the crux question is what will happen with:

        c:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe "MyVS2010SolutionFile.sln"

    My hopeful goal would be to allow the developers to have VS2010 (now!), and for it still to be "ok" for the CC.NET machine and the production servers, which will only have the 3.5 Framework on them for the foreseeable future. Just to be clear, developers NEVER create deployable builds. Only the CC.NET machine produces builds that will be pushed as production builds. Any help?

  • Can YAML have inheritance?

    - by Jason
    This question involves a lot of symfony, but it should be easy enough to follow for someone who only knows YAML and not symfony.

    My symfony models come from a three-step process: First, I create the tables in MySQL. Second, I run a symfony command (symfony doctrine:build-schema) to convert my table structure into a YAML file. Third, I run another symfony command (symfony doctrine:build-model) to convert the YAML file into PHP code.

    Here's the problem: there are some tables in the database that I don't want to end up in my symfony code. For example, let's say I have two tables: one called my_table and another called wordpress. The YAML file I end up with might look like this:

        MyTable:
          connection: doctrine
          tableName: my_table
        Wordpress:
          connection: doctrine
          tableName: wordpress

    That's great, except the wordpress table has nothing to do with my symfony models. The result is that every single time I make a change to my database and generate this YAML file, I have to manually remove wordpress. It's annoying! I'd like to be able to create a file called baseConfig.php or something that looks like this:

        $config = array(
            'MyTable' => array(
                'connection' => 'doctrine',
                'tableName' => 'my_table',
            ),
            'Wordpress' => array(
                'connection' => 'doctrine',
                'tableName' => 'wordpress',
            ),
        );

    And then I could have a separate file called config.php or something where I could make modifications to the base config:

        unset($config['Wordpress']);

    So my question is: is there any way to convert YAML into executable PHP code (as opposed to loading YAML INTO PHP code like sfYaml::load() does) to achieve this sort of thing? Or is there maybe some other way to achieve YAML inheritance? Thanks, Jason
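
    On the narrow question in the title: YAML itself has a limited inheritance mechanism, merge keys (<<) combined with anchors, though whether sfYaml honors them would need checking. A hedged sketch:

        defaults: &defaults
          connection: doctrine

        MyTable:
          <<: *defaults          # inherits connection: doctrine
          tableName: my_table

    Note that merge keys only share values between entries; they can't remove an entry the way unset() does.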

  • DDD principles and ASP.NET MVC project design

    - by kaivalya
    A two-part question. I have a product aggregate that has:

        Prices
        PackagingOptions
        ProductDescriptions
        ProductImages
        etc.

    I have modeled one product repository and did not create individual repositories for any of the child classes. All DB operations are handled through the product repository. Am I understanding the DDD concept correctly so far? Sometimes the thought comes to my mind that having a repository for, let's say, packaging options could make my life easier by directly fetching a packaging option from the DB by its ID, instead of asking the product repository to find it in its PackagingOptions collection and give it to me.

    The second part is managing the edit/create operations using the ASP.NET MVC framework. I am currently trying to manage all add/edit/remove operations on these child collections of product through the product controller (sound right?). One challenge I am now facing is: if I edit a specific packaging option of a product through mydomain/product/editpackagingoption/10, I have access to the ID of the packaging option, but I don't have the ID of the product itself, and this forces me to write a query to first find the product that has this specific packaging option, then edit that product and the relevant packaging option. I can do this, as all packaging options have their unique IDs, but this would fail for collections whose items don't have unique IDs. That feels very wrong.

    The next option I thought of is sending both the product and packaging option IDs in the URL, like mydomain/product/editpackagingoption/3/10, but I am not sure if that is a good design either. So I am at a point where I am a bit confused and might have fundamental misunderstandings around all of this... I would appreciate it if you bear with the long question and help me put this together. Thanks!
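
    A hedged sketch of the second option (the route pattern and repository names are hypothetical): carrying both IDs keeps the aggregate root in charge, since the child is always reached through its product.

        // C# sketch. Route pattern: product/editpackagingoption/{productId}/{optionId}
        public ActionResult EditPackagingOption(int productId, int optionId)
        {
            Product product = _productRepository.GetById(productId);
            PackagingOption option = product.PackagingOptions
                                            .Single(o => o.Id == optionId);
            return View(option);
        }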

  • Can a call to WaitHandle.SignalAndWait be ignored for performance profiling purposes?

    - by Dan Tao
    I just downloaded the trial version of ANTS Performance Profiler from Red Gate and am investigating some of my team's code. Immediately I notice that there's a particular section of code that ANTS is reporting as eating up to 99% CPU time. I am completely unfamiliar with ANTS or performance profiling in general (that is, aside from self-profiling using what I'm sure are extremely crude and frowned-upon methods such as double timeToComplete = (endTime - startTime).TotalSeconds), so I'm still fiddling around with the application and figuring out how it's used. But I did call the developer responsible for the code in question and his immediate reaction was "Yeah, that doesn't surprise me that it says that; but that code calls SignalAndWait [which I could see for myself, thanks to ANTS], which doesn't use any CPU, it just sits there waiting for something to do." He advised me to simply ignore that code and look for anything ELSE I could find. My question: is it true that SignalAndWait requires NO CPU overhead (and if so, how is this possible?), and is it reasonable that a performance profiler would view it as taking up 99% CPU time? I find this particularly curious because, if it's at 99%, that would suggest that our application is often idle, wouldn't it? And yet its performance has become rather sluggish lately. Like I said, I really am just a beginner when it comes to this tool, and I don't know anything about the WaitHandle class. So ANY information to help me to understand what's going on here would be appreciated.

  • Should UTF-16 be considered harmful?

    - by Artyom
    I'm going to ask what is probably quite a controversial question: "Should one of the most popular encodings, UTF-16, be considered harmful?"

    Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable-length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element. I know; lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, the Win32 APIs, the Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside the BMP (characters that must be encoded using two UTF-16 elements).

    For example, try to edit one of these characters: 𝄞 𝕥 𝟶 𠂊 (You may miss some, depending on what fonts you have installed.) These characters are all outside of the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode Character reference.

    For example, try to create file names in Windows that include these characters; try to delete these characters with a backspace to see how they behave in different applications that use UTF-16. I did some tests and the results are quite bad:

        Opera has problems with editing them
        Notepad can't deal with them correctly (delete, for example)
        File name editing in Windows dialogs is broken
        All Qt3 applications can't deal with them
        StackOverflow seems to remove these characters if edited directly as Unicode characters, and only seems to allow them as HTML Unicode escapes

    So... this was a very simple test. Do you think that UTF-16 should be considered harmful?
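
    To make the variable-length point concrete, a hedged C# illustration using the first character above:

        // One code point, two UTF-16 code units: naive length and indexing lie.
        string clef = "\U0001D11E";                       // 𝄞 MUSICAL SYMBOL G CLEF
        Console.WriteLine(clef.Length);                   // prints 2, not 1
        Console.WriteLine(char.IsSurrogatePair(clef, 0)); // True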

  • conditionals for C++ using MSBuild/vsbuild?

    - by redtuna
    I have a C++ project in Visual Studio 2008, and I'd like to be able to compile several versions from the command line, defining conditional variables (aka #define). If it were just a single file to compile, I'd use something like cl /D, but this is complex enough that I would like to be able to use VS's other features, like the build order etc.

    I've seen a similar question asked on Stack Overflow, and the answer was to use /p:DefineConstants="var1;var2". This doesn't seem to work with C++, though. The other problem with that answer is that it replaces the conditional variables instead of adding to them. The .vcproj files for C++ look quite different; if msbuild (or vsbuild) had a way to change Configurations/Tool[name="VCCLCompilerTool"], we'd be golden. But I haven't found such an option.

    The .vcproj files are under source control, so I'd rather not have a script mess with them. I've considered doubling the number of configurations (one with the #define, one without). That'd be annoying, and I'm especially unhappy with having to modify these configurations in tandem every time I need to modify anything there. A previous similar question found no solution; I'm hoping that has changed since?

    How would you go about building those variants (with and without the define) from the command line? Thanks!

  • Mutability design patterns in Objective C and C++

    - by Mac
    Having recently done some development for iPhone, I've come to notice an interesting design pattern used a lot in the iPhone SDK, regarding object mutability. It seems the typical approach there is to define an immutable class NSFoo, and then derive from it a mutable descendant NSMutableFoo. Generally, the NSFoo class defines data members, getters and read-only operations, and the derived NSMutableFoo adds on setters and mutating operations.

    Being more familiar with C++, I couldn't help but notice that this seems to be the complete opposite of what I'd do when writing the same code in C++. While you certainly could take that approach, it seems to me that a more concise approach is to create a single Foo class, mark getters and read-only operations as const functions, and also implement the mutable operations and setters in the same class. You would then end up with a mutable class, but the types Foo const*, Foo const& etc. are all effectively the immutable equivalent.

    I guess my question is: does my take on the situation make sense? I understand why Objective-C does things differently, but are there any advantages to the two-class approach in C++ that I've missed? Or am I missing the point entirely? Not an overly serious question - more for my own curiosity than anything else.
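
    For concreteness, a minimal sketch of the single-class C++ idiom described (the class name is hypothetical):

        // Single class; const-correctness supplies the "immutable view".
        class Foo {
        public:
            Foo() : value_(0) {}
            int value() const { return value_; }   // read-only: callable on const Foo
            void setValue(int v) { value_ = v; }   // mutating: rejected through Foo const&
        private:
            int value_;
        };

        void inspect(Foo const& f) {
            int v = f.value();   // fine: const member function
            // f.setValue(42);   // would not compile: f is a const reference
        }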

  • What is the difference between AF_INET and PF_INET constants?

    - by Denilson Sá
    Looking at examples of socket programming, we can see that some people use AF_INET while others use PF_INET. In addition, sometimes both of them are used in the same example. The question is: is there any difference between them? Which one should we use? If you can answer that, another question would be: why are there these two similar (but equal) constants?

    What I've discovered, so far:

    The socket manpage. In (Unix) socket programming, we have the socket() function that receives the following parameters:

        int socket(int domain, int type, int protocol);

    The manpage says: "The domain argument specifies a communication domain; this selects the protocol family which will be used for communication. These families are defined in <sys/socket.h>." And the manpage cites AF_INET as well as some other AF_ constants for the domain parameter. Also, in the NOTES section of the same manpage, we can read: "The manifest constants used under 4.x BSD for protocol families are PF_UNIX, PF_INET, etc., while AF_UNIX etc. are used for address families. However, already the BSD man page promises: 'The protocol family generally is the same as the address family', and subsequent standards use AF_* everywhere."

    The C headers. The sys/socket.h header does not actually define those constants, but instead includes bits/socket.h. This file defines around 38 AF_ constants and 38 PF_ constants like this:

        #define PF_INET 2   /* IP protocol family. */
        #define AF_INET PF_INET

    Python. The Python socket module is very similar to the C API. However, there are many AF_ constants but only one PF_ constant (PF_PACKET). Thus, in Python we have no choice but to use AF_INET. I think this decision to include only the AF_ constants follows one of the guiding principles: "There should be one-- and preferably only one --obvious way to do it." (The Zen of Python)
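
    A hedged sketch of the old BSD convention the manpage describes, PF_* for the socket() domain and AF_* inside the address structure; since the constants are numerically identical, mixing them also works, which is why examples disagree:

        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>

        int main(void)
        {
            int fd = socket(PF_INET, SOCK_STREAM, 0);   /* protocol family */

            struct sockaddr_in addr = {0};
            addr.sin_family = AF_INET;                  /* address family */
            addr.sin_port = htons(8080);
            addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
            /* bind(fd, (struct sockaddr *)&addr, sizeof addr); etc. */
            return 0;
        }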

  • Where should I declare my CDI resources?

    - by Laird Nelson
    JSR-299 (CDI) introduces the (unfortunately named) concept of a resource: http://docs.jboss.org/weld/reference/1.0.0/en-US/html/resources.html#d0e4373

    You can think of a resource in this nomenclature as a bridge between the Java EE 6 brand of dependency injection (@EJB, @Resource, @PersistenceContext and the like) and CDI's brand of dependency injection. The general gist seems to be that somewhere (and this will be the root of my question) you declare what amounts to a bridge class: it contains fields annotated both with Java EE's @EJB or @PersistenceContext or @Resource annotations and with CDI's @Produces annotations. The net effect is that Java EE 6 injects a persistence context, say, where it's called for, and CDI recognizes that injected PersistenceContext as a source for future injections down the line (handled by @Inject).

    My question is: what is the community's consensus--or is there one--on:

        what this bridge class should be named
        where this bridge class should live
        whether it's best to localize all this stuff into one class or make several of them

    ...? Left to my own devices, I was thinking of declaring a single class called CDIResources and using that as the One True Place to link Java EE's DI with CDI's DI. Many examples do something similar, but I'm not clear on whether they're "just" examples or whether that's a good way to do it. Thanks.
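
    For reference, a hedged sketch of such a bridge class along the lines of the Weld documentation (the unit name and JNDI name are hypothetical):

        import javax.annotation.Resource;
        import javax.enterprise.inject.Produces;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;
        import javax.sql.DataSource;

        public class CDIResources {
            @Produces @PersistenceContext(unitName = "mainPU")
            private EntityManager entityManager;   // Java EE injects, CDI re-exposes

            @Produces @Resource(name = "jdbc/mainDS")
            private DataSource dataSource;
        }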

  • Can I use a plaintext diff algorithm for tracking XML changes?

    - by rinogo
    Hi all! Interesting question for you here.

    I'm working in Flex/AS3 on (for simplicity) an XML editor. I need to provide undo/redo functionality. Of course, one solution is to store the entire source text with each edit. However, to conserve memory, I'd like to store the diffs instead (these diffs will also be used to transmit updates to the server for auto-saving).

    My question is: can I use a plaintext diff algorithm for tracking these XML changes? My research on the internet indicates that I cannot do so. However, I'm obviously missing something. Plaintext diff provides functionality that is purportedly:

        diff(text, text') -> diffs
        patch(text, diffs) -> text'

    XML is simply text, so why can't I just use diff() and patch() to transform the text reliably?

    For example: let's say that I'm a poet. When I write poetry, I use lots of funky punctuation... you know, like <, /, and >. (You might see where I'm going with this...) If I'm writing my poetry in an application that uses diffs to provide undo/redo functionality, does my poetry become garbled when I undo/redo my edits? It's just text! Why does it make a difference to the algorithm?

    I obviously don't get something here... Thanks for explaining! :) -Rich

  • jQuery: How do you get an image to fade in on load?

    - by Cool Hand Luke UK
    All I want to do is fade my logo in when the page loads. I am new to jQuery as of today, and I can't manage to get the fadeIn to happen on load; please help. Sorry if this question has already been answered; I have had a look and tried to adapt other answers for different questions, but nothing seems to work, and it's starting to frustrate me. Thanks. Code (script corrected to the working form of what I was attempting):

        <script type="text/javascript">
            $(function () {
                // set the image hidden by default, then fade it in
                $('#logo').hide().fadeIn(3000);
            });
        </script>
        <link rel="stylesheet" href="challenge.css"/>
        <title>Acme Widgets</title>
        </head>
        <body>
            <div id="wrapper">
                <div id="header">
                    <img id="logo" src="logo-smaller.jpg" />
                </div>
                <div id="nav">navigation</div>
                <div id="leftCol">left col</div>
                <div id="rightCol">
                    <div id="header2">header 2</div>
                    <div id="centreCol">body text</div>
                    <div id="rightCol2">right col</div>
                </div>
                <div id="footer">footer</div>
            </div>
        </body>
        </html>

  • Primitive type 'short' - casting in Java

    - by gemm
    Hello, I have a question about the primitive type short in Java. I am using JDK 1.6. If I have the following:

        short a = 2;
        short b = 3;
        short c = a + b;

    the compiler does not want to compile it - it says that it "cannot convert from int to short" and suggests that I make a cast to short, so this:

        short c = (short) (a + b);

    really works. But my question is: why do I need to cast? The values of a and b are in the range of short - the range of short values is -32768 to 32767. I also need to cast when I want to perform the operations -, *, and / (I haven't checked others).

    If I do the same for the primitive type int, I do not need to cast aa + bb to int. The following works fine:

        int aa = 2;
        int bb = 3;
        int cc = aa + bb;

    I discovered this while designing a class where I needed to add two variables of type short, and the compiler wanted me to make a cast. If I do this with two variables of type int, I don't need to cast. Thank you very much in advance.

    A small remark: the same thing also happens with the primitive type byte. So, this works:

        byte a = 2;
        byte b = 3;
        byte c = (byte) (a + b);

    but this does not:

        byte a = 2;
        byte b = 3;
        byte c = a + b;

    For long, float, double, and int, there is no need to cast. Only for short and byte values.
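
    A hedged note on the mechanism (standard Java behavior, not from the original post): arithmetic on byte, short and char operands goes through binary numeric promotion, which first widens both operands to int, so a + b is an int expression even when its value would fit in a short.

        short a = 2, b = 3;
        int sum = a + b;            // compiles: the sum really is an int
        short c = (short) (a + b);  // narrowing back to short needs the cast
        a += b;                     // compiles: compound assignment casts implicitly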

  • Using MSADO15.DLL and C++ with MinGW/GCC on Windows Vista

    - by Eugen Mihailescu
    INTRODUCTION

    Hi, I am very new to C++; that is my first disclaimer. I started out with VC++ 2008 Express, but I've noticed that GCC has become something of a standard, so I am trying to take the right steps even from the beginning. I have written a piece of code that connects to MS SQL Server via ADO; in VC++ it works like a charm by importing msado15.dll:

        #import "msado15.dll" no_namespace rename("EOF", "EndOfFile")

    Because I am going to move away from VC++, I was looking for an alternative (eventually multi-platform) IDE, so I am sticking (for now) with Code::Blocks (I'm using the last nightly build, SVN 6181). As compiler I chose GCC 3.4.5 (ported via MinGW 5.1.6), under Vista.

    I was trying to compile a simple "hello world" application with GCC that uses/imports the same msado15.dll:

        #import "c:\Program Files\Common Files\System\ADO\msado15.dll" no_namespace rename("EOF", "EndOfFile")

    and I was surprised to see a lot of compile-time errors. I was expecting that the #import compiler directive would generate a library from msado15.dll so it could be linked against later (at link-edit time or whatever). Instead it was trying to read the DLL as a normal file (like a header file, if you like), interpreting each line of it (and it begins with an MZ signature):

        Compiling: main.cpp
        E:\MyPath\main.cpp:2:64: warning: extra tokens at end of #import directive
        In file included from E:\MyPath\main.cpp:2:
        c:\Program Files\Common Files\System\ADO\msado15.dll:1: error: stray '\144' in program
        In file included from E:\MyPath\main.cpp:2:
        c:\Program Files\Common Files\System\ADO\msado15.dll:1:4: warning: null character(s) ignored
        c:\Program Files\Common Files\System\ADO\msado15.dll:1: error: stray '\3' in program
        c:\Program Files\Common Files\System\ADO\msado15.dll:1:6: warning: null character(s) ignored
        c:\Program Files\Common Files\System\ADO\msado15.dll:1: error: stray '\4' in program

    ... and so on.

    MY QUESTION

    Well, it is obvious that under this version of GCC the #import directive does not do the expected job (perhaps #import is no longer supported by GCC), so finally my question: how do I use ADO to access an MSSQL database in a C++ program compiled with GCC (v3.4.5)?

  • Pythonic mapping of an array (Beginner)

    - by scott_karana
    Hey StackOverflow, I've got a question related to a beginner Python snippet I've written to introduce myself to the language. It's an admittedly trivial early effort, but I'm still wondering how I could have written it more elegantly. The program outputs NATO phonetic readable versions of an argument, such as "H2O" -> "Hotel 2 Oscar", or (lacking an argument) just outputs the whole alphabet. I mainly use it for calling in MAC addresses and IQNs, but it's useful for other phone support too. Here's the body of the relevant portion of the program:

        #!/usr/bin/env python
        import sys

        nato = {
            "a": 'Alfa',    "b": 'Bravo',    "c": 'Charlie', "d": 'Delta',
            "e": 'Echo',    "f": 'Foxtrot',  "g": 'Golf',    "h": 'Hotel',
            "i": 'India',   "j": 'Juliet',   "k": 'Kilo',    "l": 'Lima',
            "m": 'Mike',    "n": 'November', "o": 'Oscar',   "p": 'Papa',
            "q": 'Quebec',  "r": 'Romeo',    "s": 'Sierra',  "t": 'Tango',
            "u": 'Uniform', "v": 'Victor',   "w": 'Whiskey', "x": 'Xray',
            "y": 'Yankee',  "z": 'Zulu',
        }

        if len(sys.argv) < 2:
            for n in nato.keys():
                print nato[n]
        else:
            # if sys.argv[1] == "-i"  # TODO
            for char in sys.argv[1].lower():
                if char in nato:
                    print nato[char],
                else:
                    print char,

    As I mentioned, I just want to see suggestions for a more elegant way to code this. My first guess was to use a list comprehension along the lines of [nato[x] for x in sys.argv[1].lower() if x in nato], but that doesn't allow me to output any non-alphabetic characters. My next guess was to use map, but I couldn't format any lambdas that didn't suffer from the same corner case. Any suggestions? Maybe something with first-class functions? Messing with Array's guts? This seems like it could almost be a Code Golf question, but I feel like I'm just overthinking :)
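
    A hedged sketch of the missing piece: dict.get(key, default) keeps non-alphabetic characters instead of dropping them, so the comprehension works without any if/else.

        # Python 2, matching the original's print statements.
        print " ".join(nato.get(ch, ch) for ch in sys.argv[1].lower())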

  • ACL architechture for a Software As a service in Spring 3.0

    - by geoaxis
    I am making a software as a service using Spring 3.0 (Spring MVC, Spring Security, Spring Roo, Hibernate), and I have to come up with a flexible access control list mechanism. I have three different kinds of users:

        System (who can do anything to the system; includes admin and internal daemons)
        Operations (who can add and delete users and organizations, and do maintenance work on behalf of users and organizations)
        End Users (they belong to one or more organizations; for each organization, the user can have one or more roles, like being organization admin or organization read-only member; a role like orgadmin can also add users for that organization)

    Now my question is: how should I model the User entity? If I just take the End User, it can belong to one or more organizations, so each user can contain a set of references to its organizations. But how do we model the user's role for each organization? For example, user UX belongs to organizations og1, og2 and og3; for og1 he is both orgadmin and org-read-only-user, whereas for og2 he is only orgadmin and for og3 he is only org-read-only-user.

    I have the possibility of making each user belong to one organization alone, but that makes the system bounded and I don't like that idea (although it would still satisfy the requirement). If you have a better, extensible ACL architecture, please suggest it. Since it's software as a service, one would expect a lot of different organizations to be part of the same system.

    I had one concern that it is not a good idea to keep og1 and og2 data in the same DB (if og1 decides to spawn 100 reports on the system, og2 should not suffer), but that is something advanced for now and is not directly related to ACLs, but rather to the physical distribution of data and the setup of services based on those ACLs.

    This is a community wiki question; please correct anything you wish to. Thanks
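
    A hedged sketch of one common way to model the triple (the entity and enum names are hypothetical): make the user-organization-role link its own entity, so one user can hold a different role set per organization.

        import javax.persistence.*;

        @Entity
        public class Membership {
            @Id @GeneratedValue
            private Long id;

            @ManyToOne private User user;
            @ManyToOne private Organization organization;

            @Enumerated(EnumType.STRING)
            private OrgRole role;   // e.g. ORG_ADMIN, ORG_READ_ONLY
        }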

  • indentation control while developing a small Python-like language

    - by sap
    Hello, I'm developing a small Python-like language using flex and byacc (for lexing and parsing) and C++, but I have a few questions regarding scope control.

    Just like Python, it uses white space (or tabs) for indentation. Not only that, but I want to implement depth-aware breaking: for instance, if you type "break 2" inside a while loop that's inside another while loop, it would break not only out of the inner loop but out of the outer loop as well (hence the number 2 after break), and so on. Example:

        while 1
            while 1
                break 2
            end
        end
        # after break 2 it would jump right here

    But since I don't have an "anti" tab character to check when a scope ends (unlike C, for example, where I would just use the '}' character), I was wondering if this method would be best: I would define a global variable, like int tabIndex, in my yacc file, which I would access in my lex file using extern. Then every time I find a tab character in my lex file, I would increment that variable by 1. When parsing in my yacc file, if I find a "break" keyword I would decrement the amount typed after it from the tabIndex variable, and if I reach EOF after compiling with tabIndex != 0, I would output a compilation error.

    Now the problem is: what's the best way to see that the indentation got reduced? Should I read \b (backspace) characters from lex and then decrement the tabIndex variable (when the user doesn't use break)? Or is there another method to achieve this?

    Also, just another small question: I want every executable to have its starting point in a function called start(). Should I hardcode this into my yacc file?

    Sorry for the long question; any help is greatly appreciated. Also, if someone could provide a yacc file for Python, it would be nice as a guideline (I tried looking on Google and had no luck). Thanks in advance.
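
    For comparison, a hedged sketch (in C; emit_token and the token names are hypothetical) of the approach Python's own tokenizer takes: keep a stack of indentation widths and emit explicit INDENT/DEDENT tokens, which gives the parser the missing "anti-tab" delimiter directly.

        enum { INDENT, DEDENT };
        void emit_token(int kind);       /* hypothetical: hands the token to yacc */

        static int indents[100] = {0};   /* stack of open indentation widths */
        static int top = 0;              /* indents[top] = current width */

        void track_indent(int width)     /* call at the start of each line */
        {
            if (width > indents[top]) {
                indents[++top] = width;
                emit_token(INDENT);
            } else {
                while (top > 0 && width < indents[top]) {
                    top--;
                    emit_token(DEDENT);  /* one DEDENT per closed scope */
                }
            }
        }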

  • FOR BOUNTY: "QFontEngine(Win) GetTextMetrics failed ()" error on 64-bit Windows

    - by David Murdoch
    I'll add a large bounty to this when Stack Overflow lets me.

    I'm using wkhtmltopdf to convert HTML web pages to PDFs. This works perfectly on my 32-bit dev server [unfortunately, I can't ship my machine :-p ]. However, when I deploy to the web application's 64-bit server, the following errors are displayed:

        C:\>wkhtmltopdf http://www.google.com google.pdf
        Loading pages (1/5)
        QFontEngine::loadEngine: GetTextMetrics failed ()      ] 10%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()      ] 36%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        // ...etc....

    and the PDF is created and saved... just WITHOUT text. All form fields, images, borders, tables, divs, spans, ps, etc. are rendered accurately... just void of any text at all.

    Server information:

        Windows edition: Windows Server Standard, Service Pack 2
        Processor: Intel Xeon E5410 @ 2.33GHz
        Memory: 8.00 GB
        System type: 64-bit operating system

    Can anyone give me a clue as to what is happening and how I can fix it? Also, I wasn't sure what to tag/title this question with... so if you can think of better tags or a better title, comment on or edit the question. :-)
