Search Results

Search found 4771 results on 191 pages for 'aspnet compiler'.

Page 8/191 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Why is it up to the compiler to decide what value to assign when assigning an out-of-range value to a signed type?

    - by Allopen
    In C++ Primer 4th edition 2.1.1, it says "when assigning an out-of-range value to a signed type, it is up to the compiler to decide what value to assign". I can't understand it. I mean, if you have code like "char sc = 299", certainly the compiler will generate asm code like "mov BYTE PTR _sc$[ebp], 43" (VC) or "movb $43, -2(%ebp)" (gcc+mingw); it IS decided by the compiler. But what if we assign a value that is given by user input, like via the command line? Then the asm code generated will be "movb %al, -1(%ebp)" (gcc+mingw) and "mov cl, BYTE PTR _i$[ebp] mov BYTE PTR _sc$[ebp], cl" (VC), so now how can the compiler decide what will happen? I think now it is decided by the CPU. Can you give me a clear explanation?
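
    A minimal sketch of the two cases being contrasted, a compile-time constant versus a runtime value. The variable names and the wrap-around to 43 shown in the comments are assumptions about a typical two's-complement implementation; the language leaves the result implementation-defined, and the compiler "decides" only by choosing which truncating instructions to emit, which the CPU then applies to whatever value arrives at runtime.

        #include <iostream>

        int main() {
            signed char sc = 299;   // out of range: result is implementation-defined;
                                    // here the compiler can fold 299 -> 43 at compile time
            int i;
            std::cin >> i;          // runtime value, e.g. 299 typed by the user
            signed char sc2 = i;    // same language rule; the compiler emits a plain truncating
                                    // byte move, so typical hardware keeps the low byte (43)
            std::cout << static_cast<int>(sc) << ' ' << static_cast<int>(sc2) << '\n';
            return 0;
        }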

    Read the article

  • Need help regarding an LALR(1) parsing issue

    - by AppleGrew
    I am trying to parse a context-free language, called Context Free Art. I have created its parser in Javascript using JSCC, a YACC-like JS LALR(1) parser generator. Take the example of the following CFA (Context Free Art) code. This code is valid CFA. startshape A rule A { CIRCLE { s 1} } Notice the A and s above. s is a command to scale the CIRCLE, but A is just the name of this rule. In the language's grammar I have set s as the token SCALE, and A comes under the token STRING (I have a regular expression to match strings and it is at the bottom of all tokens). This works fine, but in the case below it breaks. startshape s rule s { CIRCLE { s 1} } This too is perfectly valid code, but since my parser marks the s after rule as a SCALE token, it errors out saying that it was expecting STRING. Now my question is, is there any way to re-write the production rules of the parser to account for this? The related production rule is: rule: RULE STRING '{' buncha_replacements '}' [* rule(%2, 1) *] | RULE STRING RATIONAL '{' buncha_replacements '}' [* rule(%2, 1*%3) *] ; One simple solution I can think of is to create a copy of the above rule with STRING replaced by SCALE, but this is just one of many similar rules which would need such fixing. Furthermore there are many other terminals which can get matched to STRING. So that means way too many rules!

    Read the article

  • Benefits of 'Optimize code' option in Visual Studio build

    - by gt
    Much of our C# release code is built with the 'Optimize code' option turned off. I believe this is to allow code built in Release mode to be debugged more easily. Given that we are creating fairly simple desktop software which connects to backend Web Services (i.e. not a particularly processor-intensive application), what performance hit, if any, might be expected? And is any particular platform likely to be worse affected, e.g. multi-processor / 64 bit?

    Read the article

  • When to use certain optimizations such as -fwhole-program and -fprofile-generate with several shared libraries

    - by James
    Probably a simple answer; I get quite confused with the language used in the GCC documentation for some of these flags! Anyway, I have three libraries and a programme which uses all three. I compile each of my libraries separately with individual (potentially) different sets of warning flags. However, I compile all three libraries with the same set of optimisation flags. I then compile my main programme, linking in these three libraries, with its own set of warning flags and the same optimisation flags used during the libraries' compilation. 1) Do I have to compile the libraries with optimisation flags present, or can I just use these flags when compiling the final programme and linking to the libraries? If the latter, will it then optimise all or just some (presumably that which is called) of the code in these libraries? 2) I would like to use -fwhole-program -flto -fuse-linker-plugin and the linker plugin gold. At which stage do I compile with these on ... just the final compilation, or do these flags need to be present during the compilation of the libraries? 3) Pretty much the same as 2), however with -fprofile-generate, -fprofile-arcs and -fprofile-use. I understand one first runs a programme with generate, and then with use. However, do I have to compile each of the libraries with generate/use etc. or just the final programme? And if it is just the last programme, when I then compile with -fprofile-use will it also optimise the libraries' functionality? Many thanks, James

    Read the article

  • Delphi: All constants are constant, but some are more constant than others?

    - by Ian Boyd
    Consider: clHotlight: TColor = $00FF9933; clLink = clHotLight; //alias of clHotlight [Error] file.pas: Constant expression expected and the alternate wording that works: clHotlight = TColor($00FF9933); clLink = clHotLight; //alias of clHotlight Explain. Then consider: AdministratorGUID: TGUID = '{DE44EEA0-6712-11D4-ADD4-0006295717DA}'; SuperuserGUID = AdministratorGUID; //alias of AdministratorGUID [Error] file.pas: Constant expression expected And fix.

    Read the article

  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid, or instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective, I am asking for specific examples of languages or features, and ideas for optimization of these features, or important features that I haven't considered. Also, any references to implementations that prove my theories right or wrong. Top on my list of hard to optimize features and my theories (some of my theories are untested and are based on thought experiments): 1) Runtime method overloading (aka multi-method dispatch or signature based dispatch). Is it hard to optimize when combined with features that allow runtime recompilation or method addition? Or is it just hard, anyway? Call site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods. 2) Type morphing / variants (aka value based typing as opposed to variable based). Traditional optimizations simply cannot be applied when you don't know if the type of something can change in a basic block. Combined with multi-methods, inlining must be done carefully if at all, and probably only for a given threshold of size of the callee. ie. it is easy to consider inlining simple property fetches (getters / setters) but inlining complex methods may result in code bloat. The other issue is I cannot just assign a variant to a register and JIT it to the native instructions because I have to carry around the type info, or every variable needs 2 registers instead of 1. On IA-32 this is inconvenient, even if improved with x64's extra registers. This is probably my favorite feature of dynamic languages, as it simplifies so many things from the programmer's perspective. 3) First class continuations - There are multiple ways to implement them, and I have done so in both of the most common approaches, one being stack copying and the other as implementing the runtime to use continuation passing style, cactus stacks, copy-on-write stack frames, and garbage collection. First class continuations have resource management issues, ie. we must save everything, in case the continuation is resumed, and I'm not aware of any languages that support leaving a continuation with "intent" (ie. "I am not coming back here, so you may discard this copy of the world"). Having programmed in the threading model and the continuation model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and also may affect cache efficiency (locality of stack changes more with use of continuations and co-routines). The other issue is they just don't map to hardware. Optimizing continuations is optimizing for the less-common case, and as we know, the common case should be fast, and the less-common cases should be correct. 4) Pointer arithmetic and ability to mask pointers (storing in integers, etc.) Had to throw this in, but I could actually live without this quite easily. My feelings are that many of the high-level features, particularly in dynamic languages, just don't map to hardware.
Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of these features (features like caching, aliasing top of stack to register, instruction parallelism, return address buffers, loop buffers and branch prediction). Macro-applications of micro-features don't necessarily pan out like some developers like to think, and implementing many languages in a VM ends up mapping native ops into function calls (ie. the more dynamic a language is the more we must lookup/cache at runtime, nothing can be assumed, so our instruction mix is made up of a higher percentage of non-local branching than traditional, statically compiled code) and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. It is my gut feeling that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.

    Read the article

  • 'Lexical' scoping of type parameters in C#

    - by leppie
    I have 2 scenarios. This fails: class F<X> { public X X { get; set; } } error CS0102: The type 'F' already contains a definition for 'X' This works: class F<X> { class G { public X X { get; set; } } } The only logical explanation is that in the second snippet the type parameter X is out of scope, which is not true... Why should a type parameter affect my definitions in a type? IMO, for consistency, either both should work or neither should work. Any other ideas? PS: I call it 'lexical', but it probably is not the correct term.

    Read the article

  • Printing the address of a struct object

    - by bdhar
    I have a struct like this typedef struct _somestruct { int a; int b; }SOMESTRUCT,*LPSOMESTRUCT; I am creating an object for the struct and trying to print its address like this int main() { LPSOMESTRUCT val = (LPSOMESTRUCT)malloc(sizeof(SOMESTRUCT)); printf("0%x\n", val); return 0; } ...and I get this warning warning C4313: 'printf' : '%x' in format string conflicts with argument 1 of type 'LPSOMESTRUCT' So, I tried to cast the address to int like this printf("0%x\n", static_cast<int>(val)); But I get this error: error C2440: 'static_cast' : cannot convert from 'LPSOMESTRUCT' to 'int' What am I missing here? How to avoid this warning? Thanks.
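
    A sketch of the usual fix, assuming the goal is simply to print the pointer value: use the %p conversion (which expects a void*) instead of %x, or widen through uintptr_t rather than int. static_cast cannot convert a pointer to int at all, and int is too narrow to hold a pointer on 64-bit targets anyway.

        #include <cstdio>
        #include <cstdlib>
        #include <cstdint>

        typedef struct _somestruct {
            int a;
            int b;
        } SOMESTRUCT, *LPSOMESTRUCT;

        int main() {
            LPSOMESTRUCT val = (LPSOMESTRUCT)malloc(sizeof(SOMESTRUCT));
            printf("%p\n", (void*)val);                     // %p is the portable way to print a pointer
            printf("0x%llx\n",
                   (unsigned long long)(uintptr_t)val);     // or go through uintptr_t, never plain int
            free(val);
            return 0;
        }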

    Read the article

  • The problem only happens when I try to create a release...

    - by ace
    I'm sorry if I'm not presenting this right, but I truly cannot understand what the problem is. I have a project to hand in, about 600 lines of code split across a main file, a .cpp file, and a header file. If I compile the project with just a debugger and no release, it's fine. When I create it with the release, the following error occurs, for every function!!! 1st error: |36|multiple definition of `countLines(int&, std::vector const&)'| 2nd error: |36|first defined here| If someone will allow me, I can send them the entire code, that would be awesome. I have to have this done within 3 hours.
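
    The usual cause of this pattern is a function body living in the header, so both the main file and the other .cpp compile their own copy and the release build, which links both translation units, sees two definitions. A sketch of the conventional layout, assuming that is what is happening here; the file names and the vector element type are guesses, since the error message elides them.

        // lines.h (hypothetical name) -- declaration only, guarded against double inclusion
        #ifndef LINES_H
        #define LINES_H
        #include <string>
        #include <vector>

        int countLines(int& total, const std::vector<std::string>& lines);

        #endif

        // lines.cpp -- the one and only definition that every other file links against
        #include "lines.h"

        int countLines(int& total, const std::vector<std::string>& lines) {
            total = static_cast<int>(lines.size());
            return total;
        }

    (Alternatively, a definition that must stay in the header can be marked inline.)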

    Read the article

  • Break in Class Module vs. Break on Unhandled Errors (VB6 Error Trapping, Options Setting in IDE)

    - by Erx_VB.NExT.Coder
    Basically, I'm trying to understand the difference between the "Break in Class Module" and "Break on Unhandled Errors" options that appear in the Visual Basic 6.0 IDE under the following path: Tools --> Options --> General --> Error Trapping The three options appear to be: Break on All Errors Break in Class Module Break on Unhandled Errors Now, apparently, according to MSDN, the second option (Break in Class Module) really just means "Break on Unhandled Errors in Class Modules". Also, this option appears to be set by default (ie: I think it's set to this out of the box). What I am trying to figure out is, if I have the second option selected, do I get the third option (Break on Unhandled Errors) for free? In that, does it come included by default for all scenarios outside of the Class Module spectrum? To advise, I don't have any Class Modules in my currently active project. I have .bas modules though. Also, is it possible that by Class Modules they may be referring to normal .bas Modules as well? (this is my second sub-question). Basically, I just want the setting to ensure there won't be any surprises once the exe is released. I want as many errors to display as possible while I am developing, and none to be displayed when in release mode. Normally, I have two types of On Error Resume Next on my forms where there isn't explicit error handling; they are as follows: On Error Resume Next ' REQUIRED On Error Resume Next ' NOT REQUIRED The required ones are things like checking to see if an array has any length: if a call to its UBound errors out, that means it has no length; if it returns a value 0 or more, then it does have length (and therefore, exists). These types of Error Statements need to remain active even while I am developing. However, the NOT REQUIRED ones shouldn't remain active while I am developing, so I have them all commented out to ensure that I catch all the errors that exist. Once I am ready to release the exe, I do a CTRL+H to find all occurrences of: 'On Error Resume Next ' NOT REQUIRED (You may have noticed they are commented out)... And replace them with: On Error Resume Next ' NOT REQUIRED ... The uncommented version, so that in release mode, if there are any leftover errors, they do not show to users. For more on the description by MSDN of the three options (which I've read twice and still don't find adequate) you can visit the following link: http://webcache.googleusercontent.com/search?q=cache:yUQZZK2n2IYJ:support.microsoft.com/kb/129876&hl=en&lr=lang_en%7Clang_tr&gl=au&tbs=lr:lang_1en%7Clang_1tr&prmd=imvns&strip=1 I'm also interested in hearing your thoughts if you feel like volunteering them (and this would be my tentative/totally optional third sub-question, that being, your thoughts on fall-back error handling techniques). Just to summarize, the first two questions were: do we get option 3 included in all non-class scenarios if we choose option 2? And, is it possible that when they use the term "Class Module" they may be referring to .bas Modules as well? (Since a .bas Module is really just a class module that is pre-instantiated in the background during start-up). Thank you.

    Read the article

  • Which compiler option should I choose?

    - by Surjya Narayana Padhi
    Hi Geeks, I have to use a third-party static library for my qt application to run on windows. The third party provides me a .lib and .h file for use. These libraries are compiled with the MSVC compiler. My qt Creator is using the MinGW compiler to compile my application. I copied the .h and .lib file to my qt project directory and then added those in the .pro file as follows QT += core gui TARGET = MyTest TEMPLATE = app LIBS += C:\Qt\2010.05\qt\MyTest\newApi.lib SOURCES += main.cpp\ mainwindow.cpp HEADERS += mainwindow.h \ newApi.h FORMS += mainwindow.ui Now I am getting a runtime error like this - Starting C:\Qt\2010.05\qt\MyTest-build-desktop\debug\MyTest.exe... C:\Qt\2010.05\qt\MyTest-build-desktop\debug\MyTest.exe exited with code -1073741515 Can anybody suggest whether this runtime error is due to a compiler mismatch? (because the .lib file I added is compiled with the MSVC compiler and my qt app is compiled using the MinGW compiler) If not, what may be the reason? Am I missing anything in adding the .h and .lib file to my qt project? If my MinGW compiler will not support the .lib file generated by the MSVC compiler, what may be the work-around? Can I create the .lib files with the MinGW compiler, or is this format supported only by the MSVC compiler? Please suggest...

    Read the article

  • why won't Eclipse use the compiler I specify for my project?

    - by codeman73
    I'm using Eclipse 3.3. In my project, I've set the compiler compliance level to 5.0 in the build path for the project. I've added the Java 1.5 JDK in the Installed JREs section and am referencing that System Library in my project build path. However, I'm getting compile errors for a class that implements PreparedStatement for not implementing abstract methods that only exist in the Java 1.6 PreparedStatement. Specifically, the methods setAsciiStream(int, InputStream, long) and setAsciiStream(int, InputStream). Strangely enough, it worked when we were compiling it against Java 1.4, which it was originally written for. We added the JREs for Java 1.4 and referenced that system library in the project, and set the project's compiler level to 1.4, and it works fine. But when I make the same changes to try to point to Java 5.0, it instead uses Java 6. Any ideas why? I wrote a similar question earlier, here: http://stackoverflow.com/questions/2540548/how-do-i-get-eclipse-to-use-a-different-compiler-version-for-java I know how you're supposed to choose a different compiler, but it seems Eclipse isn't taking it. It seems to be defaulting to Java 6, even though I have deleted all Java 6 JDKs and JREs that I could find. I've also updated the -vm option in my eclipse.ini to point to the Java5 JDK.

    Read the article

  • I want to tell the VC++ compiler to compile all code. Can it be done?

    - by KGB
    I am using VS2005 VC++ for unmanaged C++. I have VSTS and am trying to use the code coverage tool to accomplish two things with regard to unit tests: see how much of my referenced code under test is getting executed, and see how many methods of my code under test (if any) are not unit tested at all. Setting up the VSTS code coverage tool (see the link text) and accomplishing task #1 was straightforward. However #2 has been a surprising challenge for me. Here is my test code. class CodeCoverageTarget { public: std::string ThisMethodRuns() { return "Running"; } std::string ThisMethodDoesNotRun() { return "Not Running"; } }; #include <iostream> #include "CodeCoverageTarget.h" using namespace std; int main() { CodeCoverageTarget cct; cout<<cct.ThisMethodRuns()<<endl; } When both methods are defined within the class as above, the compiler automatically eliminates ThisMethodDoesNotRun() from the obj file. If I move its definition outside the class then it is included in the obj file and the code coverage tool shows it has not been exercised at all. Under most circumstances I want the compiler to do this elimination for me, but for the code coverage tool it defeats a significant portion of the value (e.g. finding untested methods). I have tried a number of things to tell the compiler to stop being smart for me and compile everything, but I am stumped. It would be nice if the code coverage tool compensated for this (I suppose by scanning the source and matching it up with the linker output) but I didn't find anything to suggest it has a special mode to be turned on. Am I totally missing something simple here or is this not possible with the VC++ compiler + VSTS code coverage tool? Thanks in advance, KGB
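
    One possible workaround, sketched here as an assumption rather than a documented VSTS feature: methods defined inside the class body are implicitly inline, and VC++ only emits inline functions that are actually referenced, so forcing a reference (for example by taking the member function's address in a coverage-only translation unit) should make the definition appear in the .obj without ever calling it. Whether the linker keeps it can still depend on /OPT:REF, so treat this as something to experiment with; the file name and the COVERAGE_BUILD macro are made up.

        // coverage_keep.cpp (hypothetical extra file compiled only for coverage runs)
        #include <string>
        #include "CodeCoverageTarget.h"

        #ifdef COVERAGE_BUILD
        namespace {
            // Taking the address odr-uses the function, so the compiler must emit
            // a definition for it even though nothing ever calls it.
            std::string (CodeCoverageTarget::*forceEmit)() =
                &CodeCoverageTarget::ThisMethodDoesNotRun;
        }
        #endif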

    Read the article

  • Port Compiler Options (8 replies)

    I currently own a license for Crossworks for ARM and would like to compile the port using it. With the code for the CLR now available, is it possible to compile the port with any ARM compiler or are we still restricted to the Keil ARM gcc compilers?

    Read the article

  • GLSL compiler messages from different vendors [on hold]

    - by revers
    I'm writing a GLSL shader editor and I want to parse GLSL compiler messages to make hyperlinks to invalid lines in a shader code. I know that these messages are vendor specific but currently I have access only to AMD's video cards. I want to handle at least NVidia's and Intel's hardware, apart from AMD's. If you have video card from different vendor than AMD, could you please give me the output of following C++ program: #include <GL/glew.h> #include <GL/freeglut.h> #include <iostream> using namespace std; #define STRINGIFY(X) #X static const char* fs = STRINGIFY( out vec4 out_Color; mat4 m; void main() { vec3 v3 = vec3(1.0); vec2 v2 = v3; out_Color = vec4(5.0 * v2.x, 1.0); vec3 k = 3.0; float = 5; } ); static const char* vs = STRINGIFY( in vec3 in_Position; void main() { vec3 v(5); gl_Position = vec4(in_Position, 1.0); } ); void printShaderInfoLog(GLint shader) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetShaderInfoLog(shader, infoLogLen, &charsWritten, infoLog); cout << "Log:\n" << infoLog << endl; delete [] infoLog; } } void printProgramInfoLog(GLint program) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetProgramInfoLog(program, infoLogLen, &charsWritten, infoLog); cout << "Program log:\n" << infoLog << endl; delete [] infoLog; } } void initShaders() { GLuint v = glCreateShader(GL_VERTEX_SHADER); GLuint f = glCreateShader(GL_FRAGMENT_SHADER); GLint vlen = strlen(vs); GLint flen = strlen(fs); glShaderSource(v, 1, &vs, &vlen); glShaderSource(f, 1, &fs, &flen); GLint compiled; glCompileShader(v); bool succ = true; glGetShaderiv(v, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Vertex shader not compiled." << endl; succ = false; } printShaderInfoLog(v); glCompileShader(f); glGetShaderiv(f, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Fragment shader not compiled." << endl; succ = false; } printShaderInfoLog(f); GLuint p = glCreateProgram(); glAttachShader(p, v); glAttachShader(p, f); glLinkProgram(p); glUseProgram(p); printProgramInfoLog(p); if (!succ) { exit(-1); } delete [] vs; delete [] fs; } int main(int argc, char* argv[]) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA); glutInitWindowSize(600, 600); glutCreateWindow("Triangle Test"); glewInit(); GLenum err = glewInit(); if (GLEW_OK != err) { cout << "glewInit failed, aborting." << endl; exit(1); } cout << "Using GLEW " << glewGetString(GLEW_VERSION) << endl; const GLubyte* renderer = glGetString(GL_RENDERER); const GLubyte* vendor = glGetString(GL_VENDOR); const GLubyte* version = glGetString(GL_VERSION); const GLubyte* glslVersion = glGetString(GL_SHADING_LANGUAGE_VERSION); GLint major, minor; glGetIntegerv(GL_MAJOR_VERSION, &major); glGetIntegerv(GL_MINOR_VERSION, &minor); cout << "GL Vendor : " << vendor << endl; cout << "GL Renderer : " << renderer << endl; cout << "GL Version : " << version << endl; cout << "GL Version : " << major << "." << minor << endl; cout << "GLSL Version : " << glslVersion << endl; initShaders(); return 0; } On my video card it gives: Status: Using GLEW 1.7.0 GL Vendor : ATI Technologies Inc. GL Renderer : ATI Radeon HD 4250 GL Version : 3.3.11631 Compatibility Profile Context GL Version : 3.3 GLSL Version : 3.30 Vertex shader not compiled. 
Log: Vertex shader failed to compile with the following errors: ERROR: 0:1: error(#132) Syntax error: '5' parse error ERROR: error(#273) 1 compilation errors. No code generated Fragment shader not compiled. Log: Fragment shader failed to compile with the following errors: WARNING: 0:1: warning(#402) Implicit truncation of vector from size 3 to size 2. ERROR: 0:1: error(#174) Not enough data provided for construction constructor WARNING: 0:1: warning(#402) Implicit truncation of vector from size 1 to size 3. ERROR: 0:1: error(#132) Syntax error: '=' parse error ERROR: error(#273) 2 compilation errors. No code generated Program log: Vertex and Fragment shader(s) were not successfully compiled before glLinkProgram() was called. Link failed. Or if you like, you could give me other compiler messages than proposed by me. To summarize, the question is: What are GLSL compiler messages formats (INFOs, WARNINGs, ERRORs) for different vendors? Please give me examples or pattern explanation. EDIT: Ok, it seems that this question is too broad, then shortly: How does NVidia's and Intel's GLSL compilers present ERROR and WARNING messages? AMD/ATI uses patterns like this: ERROR: <position>:<line_number>: <message> WARNING: <position>:<line_number>: <message> (examples are above).

    Read the article

  • Compiling Mono on Fedora, by Romain Puyfoulhoux

    Quote: Mono is a free implementation of the .NET framework, available for Linux, Windows and Mac OS X. This article explains how to build Mono and the MonoDevelop IDE from source. This approach is in fact often necessary if you want to install the latest version of the framework or of the IDE. It's this way; feel free to leave your remarks and comments in this thread...

    Read the article

  • How do I determine which C/C++ compiler to use?

    - by Adam Siddhi
    Greetings, I am trying to figure out which C/C++ compiler to use. I found this list of C/C++ compilers at Wikipedia: http://en.wikipedia.org/wiki/List_of_compilers#C.2FC.2B.2B_compilers I am fairly certain that I want to go with an open source compiler. I feel that if it is open source then it will be a more complete compiler, since many programmer perspectives are used to make it better. Please tell me if you disagree. I should mention that I plan on learning C/C++ mainly to program 2D/3D game applications that will be compatible with the Windows, Linux, Mac and iPhone operating systems. I am currently using the Windows Vista x64 OS. Thanks, Adam

    Read the article

  • is jQuery 1.4.2 compatible with Closure Compiler?

    - by Mohammad
    According to the official release statement, version 1.4 has been re-written to be compressed with Closure Compiler, yet when I use the online version of Closure Compiler I get 130 warnings. This is the code I use. // ==ClosureCompiler== // @compilation_level ADVANCED_OPTIMIZATIONS // @output_file_name default.js // @code_url http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.js // ==/ClosureCompiler== And as far as I know you get the real benefit of Closure Compiler if you include the library with your code also, so it removes the unused functions. Yet my testing shows that I can't get any further than compressing the library itself... What am I doing wrong? Any kind of insight will be much appreciated.

    Read the article

  • Implicit declaration when using a function before it is defined in C, why can't the compiler figure this out?

    - by rolls
    As the title says, I know what causes this error, but I want to know why the compiler gives it in this circumstance. E.g. main.c: void test(){ test1(); } void test1(){ ... } This would give an implicit declaration warning, as the compiler reaches the call to test1() before it has read its declaration. I can see the obvious problems with this (not knowing the return type etc.), but why can't the compiler do a simple pass to get all function declarations, then compile the code, removing these errors? It just seems so simple to do, and I don't believe I've seen similar warnings in other languages. Does anyone know if there is a specific purpose for this warning in this situation that I am overlooking?
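
    A minimal sketch of the conventional fix: a forward declaration (prototype) above the first call gives the compiler the signature before it is used, which is what C's declare-before-use, single-pass compilation model expects.

        /* main.c */
        #include <stdio.h>

        void test1(void);      /* prototype: return type and parameters are now known */

        void test(void) {
            test1();           /* no implicit-declaration warning here */
        }

        void test1(void) {
            printf("in test1\n");
        }

        int main(void) {
            test();
            return 0;
        }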

    Read the article

  • ASP.NET accesses a folder as ASPNET even though impersonation is set

    - by Ron Harlev
    I have my ASP.NET web.config set with impersonation <identity impersonate="true" userName="domainName\userName" password="userPassword" /> I'm running a method like IO.Directory.GetFiles(somePath) and, monitoring the file system access with Process Monitor, I keep seeing all the access requests to the folder coming from the aspnet_wp.exe process as the ASPNET user. Why am I not seeing the access as the impersonated user?

    Read the article

  • Where to use the route name in ASP.NET MVC routing

    - by FosterZ
    Hi, I'm new to routing in ASP.NET MVC. I have the following code: Action Controller public ActionResult SchoolIndex() { return View(SchoolRepository.GetAllSchools()); } Here is the routing: routes.MapRoute( "School", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "School", action = "SchoolIndex", id = "" } ); // Parameter defaults When I enter "localhost/school" in the address bar, it gives a 404 error; instead it should route to my "SchoolIndex" action. I have given the route name as "School"; where is it used?

    Read the article
