Search Results

Search found 17860 results on 715 pages for 'virtual pc'.

  • echo -e acts differently when run in a script by root on ubuntu

    - by ekrub
    When running a bash script on Ubuntu 9.10, I get different behavior from bash echo's "-e" option depending on whether or not I'm running as root. Consider this script:

        $ cat echo-test
        if [ "`whoami`" = "root" ]; then
          echo "Running as root"
        fi
        echo Testing /bin/echo -e
        /bin/echo -e "foo\nbar"
        echo Testing bash echo -e
        echo -e "foo\nbar"

    When run as a non-root user, I see this output:

        $ ./echo-test
        Testing /bin/echo -e
        foo
        bar
        Testing bash echo -e
        foo
        bar

    When run as root, I see this output:

        $ sudo ./echo-test
        Running as root
        Testing /bin/echo -e
        foo
        bar
        Testing bash echo -e
        -e foo
        bar

    Notice the "-e" being echoed in the last case ("-e foo" instead of "foo" on the second-to-last line). When the script runs as root, echo interprets escapes as though "-e" had been given, and an explicit "-e" is echoed literally. I can understand some subtle differences in behavior between /bin/echo and bash's builtin echo, but I would expect bash's echo to behave the same no matter which user invokes it. Does anyone know why this is the case? Is this a bug in bash's echo? FYI -- I'm running GNU bash, version 4.0.33(1)-release (x86_64-pc-linux-gnu).

  • Unhandled Exception error message

    - by Joshua Green
    Does anyone know why including a term such as:

        t = PL_new_term_ref();

    would cause an unhandled exception error message:

        0xC0000005: Access violation reading location 0x0000000c.

    (Visual Studio 2008.) I have a header file:

        class UserTaskProlog : public ArAction
        {
        public:
            UserTaskProlog( const char* name = " sth " );
            ~UserTaskProlog( );
            AREXPORT virtual ArActionDesired *fire( ArActionDesired currentDesired );
        private:
            term_t t;
        };

    and a cpp file:

        UserTaskProlog::UserTaskProlog( const char* name ) : ArAction( name, " sth " )
        {
            char** argv;
            argv[ 0 ] = "libpl.dll";
            PL_initialise( 1, argv );
            PlCall( "consult( 'myProg.pl' )" );
        }

        UserTaskProlog::~UserTaskProlog( )
        {
        }

        ArActionDesired *UserTaskProlog::fire( ArActionDesired currentDesired )
        {
            cout << " something " << endl;
            t = PL_new_term_ref( );
        }

    Without t = PL_new_term_ref() everything works fine, but when I start adding my Prolog code (declarations first, such as t = PL_new_term_ref), I get this access violation error message. I'd appreciate any help. Thanks.
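
    One thing worth noting about the quoted constructor, independent of where the crash surfaces: argv is written through before it points to any storage, which is undefined behavior, and if PL_initialise never succeeds, a later PL_new_term_ref() on an uninitialized engine could plausibly fault at a near-null address like 0x0000000c. Below is a minimal sketch of a safer initialization, not a confirmed fix; it keeps the original's "libpl.dll" as argv[0] and relies only on SWI-Prolog's documented PL_initialise/PL_halt behavior (nonzero return on success):

        UserTaskProlog::UserTaskProlog( const char* name ) : ArAction( name, " sth " )
        {
            // Real storage for argv; static so it outlives the call.
            static char* argv[] = { (char*) "libpl.dll" };
            if ( !PL_initialise( 1, argv ) )
                PL_halt( 1 );  // stop instead of running with a half-initialized engine
            PlCall( "consult( 'myProg.pl' )" );
        }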

  • What happens to existing workspaces after upgrading to TFS 2010

    - by e-mre
    Hi, I was looking for some insight into what happens to existing workspaces, and files that are already checked out, after an upgrade to TFS 2010. Surprisingly, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: a fresh TFS instance with upgraded databases.) I've checked the TFS installation guide and searched the web, and all I could find are upgrade scenarios for the server side. Nobody even mentions what happens to source control clients. I created a virtual machine to test the upgrade process. The upgrade was successful, and all my files and workspaces exist on the new server too. The problem is: the new TFS installation has a new instance ID. When I redirected the clients to the new server, they seemed unable to match files and file states in the workspace with the ones on the new server. This makes me wonder whether it will be possible to keep working after the production upgrade. As I mentioned above, I cannot find anything on this; it would be great if anyone could point me to a paper or blog post about it. Thanks in advance.

  • Setup SVN/LAMP/test server on Linux, where to start?

    - by John Isaacks
    I have an Ubuntu machine set up with apache2 and php5 installed. I can access the web server from other machines on the network via http://linux-server. I have Subversion installed on it, and also vsftpd, so I can FTP to it from another computer on the network. Other users and I currently use Dreamweaver to check files in and out directly from our live site to make changes. I want to connect to the Linux server from a PC, make the changes on the test server until they are ready, and then push them to the live site. I also want to work Subversion into this workflow, but I'm not sure what the best workflow is or how to set it up. I have no experience with Linux, SVN, or even using a test server; the check-in/check-out we are currently doing is the way I have always done it. I have already hit many snags just getting this far because of my lack of knowledge in the area. Dreamweaver 5 has integration with Subversion, but I can't figure out how to get it to work. I want to set up and create the best workflow possible. I don't expect anyone to give me an answer that will enlighten me enough to know everything I need to know to do what I want to do (although if possible that would be great); instead I am looking for a "knowledge path" kind of answer: a general outline of what I need to do, accompanied by links to learn how to do it. For example: read this book to learn Linux, then read this article to learn SVN, etc., and then you should know what to do. I would be happy just getting it all set up, but I would also like to know what I am actually doing while setting it up.

  • Examples using Active Directory/LDAP groups for permissions/roles in a Rails app

    - by Nick Gorbikoff
    Hello. I was wondering how other people have implemented this scenario. I have an internal Rails app (inventory management, label printing, shipping, etc.). I'm rewriting security in the system because the old way (users table, passwords, roles) became too cumbersome to maintain; I used restful_authentication and roles, implemented about 3 years ago. I have already implemented AuthLogic with ruby-ldap-net to authenticate users (that was surprisingly easy, actually, compared to how I struggled with other frameworks/languages before). The next step is roles. I already have groups defined in Active Directory, so I don't want to run a separate roles system in my Rails app; I just want to reuse the Active Directory groups, since that part of the system is already maintained for other purposes (shared drives, backups, PC access, etc.). So I was wondering if others have experience implementing permissions/roles in a Rails app based on groups in Active Directory or LDAP. The role requirements are also pretty complex. Here is an example: I have users who belong to the supervisors group in AD and to the inventory department, and I want such a user to be able to run "advanced" tasks in inventory (adjust quantities, run reports); however, "supervisors" from other departments shouldn't be able to do this. Top management should be able to use those reports (regardless of whether they belong to inventory or not), but not middle management, unless they are in the inventory group. Admins of the system (Domain Admins) should have unrestricted access to the system, except for the HR and finance parts unless they are in HR (you don't want all system admins, except for one authorized one, to see other employees' personal information). I looked at acl9, cancan, and aegis. I was wondering if there are any advantages or disadvantages to using one versus the other for this particular kind of system access based on AD. Suggest other systems if you have had good experience with them. Thank you!!!

  • Inheritance inside a template - public members become invisible?

    - by Juliano
    I'm trying to use inheritance among classes defined inside a class template (inner classes). However, the compiler (GCC) is refusing to give me access to public members of the base class. Example code:

        template <int D>
        struct Space {
            struct Plane {
                Plane(Space& b);
                virtual int& at(int y, int z) = 0;
                Space& space; /* <= this member is public */
            };
            struct PlaneX: public Plane {
                /* using Plane::space; */
                PlaneX(Space& b, int x);
                int& at(int y, int z);
                const int cx;
            };
            int& at(int x, int y, int z);
        };

        template <int D>
        int& Space<D>::PlaneX::at(int y, int z)
        {
            return space.at(cx, y, z); /* <= but it fails here */
        }

        Space<4> sp4;

    The compiler says:

        file.cpp: In member function ‘int& Space::PlaneX::at(int, int)’:
        file.cpp:21: error: ‘space’ was not declared in this scope

    If using Plane::space; is added to the definition of class PlaneX, or if the base class member is accessed through the this pointer, or if class Space is changed to a non-template class, then the compiler is fine with it. I don't know if this is some obscure restriction of C++ or a bug in GCC (GCC versions 4.4.1 and 4.4.3 tested). Does anyone have an idea?
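
    For what it's worth, this is standard C++ rather than a GCC bug: Plane is a dependent base class (it depends on the template parameter D), and under two-phase name lookup, unqualified names are not looked up in dependent base classes. The workarounds the question already found are the idiomatic fixes; a minimal sketch of the this-> variant:

        // this-> makes 'space' a dependent name, so its lookup is deferred to
        // instantiation time, when the base class Space<D>::Plane is known.
        template <int D>
        int& Space<D>::PlaneX::at(int y, int z)
        {
            return this->space.at(cx, y, z);
        }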

  • How do I get into a career as a programmer/development DBA?

    - by markle976
    About 8-9 years ago I started getting into programming as a hobby. I started with my TI-86 calculator and then moved into using Visual Basic. After about a year I started playing around with HTML and JavaScript. Then I discovered Flash; I programmed with ActionScript 2.0 for about 2 years, which led me to start using ColdFusion. After a while I realized that (a) I am not a designer, and (b) with the way things were going with AJAX, .NET, and PHP, there wasn't much future in ColdFusion/ActionScript. I had been working mostly as an administrative assistant, but about 3-4 years ago I got a position where I would be doing some web development and assisting the system admin with supporting Windows desktop PCs. I have gotten some decent experience over the past few years, but it has been spread out over somewhat disparate areas: I spend about 40% of my time writing PHP/MySQL and HTML/CSS, etc.; about 20% helping users with PC questions; about 20% doing administrative things (mail merges, Excel, etc.); and about 20% managing and creating reports from our Access database. I have also taught myself many things on my own, and now have a beginner's-level understanding of things like Windows Server, Java, Linux, Objective-C, SQL Server, C#, C++, Ruby, Mac OS X, VBA, VBScript, and basic IP networks. I feel like I am in a bit of a rut: I want to get my career moving, but I am not sure what I need to do. If I practice with C# and SQL Server Express for a year, will that be enough to get me in the door somewhere? Would it be easier to get a position if I teach myself Linux/Apache, since I have more experience with PHP/MySQL?

  • AssemblyResolve event is not firing during compilation of a dynamic assembly for an aspx page.

    - by John
    This one is really pissing me off. Here goes: my goal is to load assemblies at run time that contain embedded aspx, ascx, etc. I would also like to not lock the assembly file on disk, so I can update it at run time without having to restart the application (I know this will leave the previous version(s) loaded). To that end I have written a virtual path provider that does the trick. I have subscribed to the CurrentDomain.AssemblyResolve event so as to redirect the framework to my assemblies. The problem is that when the framework tries to compile the dynamic assembly for the aspx page, I get the following:

        Compiler Error Message: CS0400: The type or namespace name 'Pages' could not be found in the global namespace (are you missing an assembly reference?)

        Source Error:
        public class app_resource_pages__version_1_0_0_0__culture_neutral__publickeytoken_null_default_aspx : global::Pages._Default, System.Web.SessionState.IRequiresSessionState, System.Web.IHttpHandle

    I noticed that if I load the assembly with Assembly.Load(AssemblyName) or Assembly.LoadFrom(filename), I don't get the above error. If I load it with Assembly.Load(byte[]) (so as to not lock it), the exception is thrown, but my AssemblyResolve handler, when called, returns the assembly correctly (it is called once). So I am guessing that it is called once when the framework parses the asp markup, but not when it tries to create the dynamic assembly for the aspx page.

  • 32/64 bit problems with Eclipse CDT on Ubuntu

    - by waffleShirt
    I have just recently started running Linux on my PC and am trying to start learning OpenGL. I am using the latest version of Eclipse CDT as my IDE, and my system is Ubuntu 10.10, 64-bit. The problem I am having is that whenever I try to run a build from within the IDE, I get the error message "Launch Failed. Binary Not Found." I've done a lot of looking around on the internet, but I still can't solve the problem. I know for a fact that the binary is built; it can be run from a terminal window. According to posts I have seen, the problem is that Eclipse tries to run a 32-bit binary, but GCC 4.4.5 defaults to 64-bit binaries on a 64-bit system. I've seen a lot of information about using the -m32 flag in makefiles, but then I still get the following output in Eclipse:

        make all
        g++ -o HelloWorld2 main.o
        /usr/bin/ld: i386 architecture of input file `main.o' is incompatible with i386:x86-64 output
        /usr/bin/ld: final link failed: Invalid operation
        collect2: ld returned 1 exit status
        make: *** [HelloWorld2] Error 1

    What I would like to know is how to either get Eclipse to launch the 64-bit binaries, or have Eclipse correctly compile 32-bit binaries.

  • Problems with LWUIT in J2ME on Nokia E72

    - by Andre Mariano
    Well, I'm developing an app on my cellphone that is going to connect to my PC. The problem is that every time a URLRequest returns to the cellphone, the screen shows the previous Form and not the actual one. For example, this is what goes in my actionListener:

        public void actionPerformed(ActionEvent ae) {
            if (ae.getCommand() == guiaUtil.cSelecionar()) {
                LoginRemote loginRemote = new LoginRemote();
                try {
                    // This is the request; returns true or false, does not affect the form
                    loginRemote.login(tLogin.getText(), tPassword.getText());
                } catch (Exception e) {
                    GuiaUtil.error(e);
                    return;
                }
                guiaUtil.mainApp().startMenu();
            }
        }

    Then in guiaUtil.mainApp().startMenu() I have this:

        public void startMenu() {
            if (itemsMenu == null) {
                itemsMenu = new List();
                itemsMenu.setWidth(320);
                itemsMenu.addItem("Sincronize Spots");
                itemsMenu.addItem("Find Spots");
                itemsMenu.addItem("Work");
                itemsMenu.setFocus(true);
                this.addComponent(itemsMenu);
                this.addCommandListener(this);
                this.addCommand(guiaUtil.cSelect());
                Form form = new Form();
                form.addComponent(itemsMenu);
            }
            form.show();
        }

    Anyway, after the request returns, it shows my login form again instead of showing the menu List.

  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called "Name":

        public class Foo : BusinessObjectBase
        {
            ...
            public virtual string Name { get; set; }
        }

    For some reason, when I fetch Foo objects, NHibernate seems to apply lazy property loading (for simple properties, not associations): the following code generates n+1 SQL statements, of which the first fetches only the ids and the remaining n fetch the Name for each record.

        ISession session = ...
        IQuery query = session.CreateQuery(queryString);
        ITransaction tx = session.BeginTransaction();
        List<Foo> result = new List<Foo>();
        foreach (Foo foo in query.Enumerable())
        {
            result.Add(foo);
        }
        tx.Commit();
        session.Close();

    produces:

        NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473

    Similarly, the following code leads to a LazyLoadingException after the session is closed:

        ISession session = ...
        ITransaction tx = session.BeginTransaction();
        Foo result = session.Load<Foo>(id);
        tx.Commit();
        session.Close();
        Console.WriteLine(result.Name);

    Following this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the LazyLoadingException by doing a NHibernateUtil.Initialize(foo), but the even worse part is the n+1 SQL statements, which bring my application to its knees. This is how the mapping looks:

        <class name="Foo" table="V1_FOO">
            ...
            <property name="Name" column="NAME"/>
        </class>

    BTW: the abstract BusinessObjectBase base class encapsulates the ID property, which serves as the internal identifier.

  • Hide a single content block from search engines?

    - by jonas
    A header is automatically added on top of each content URL, but it's not relevant for search, and it messes up all the results by being the first line of every page (in the code it's the last line, but visually it's the first, which Google is able to notice).

    Solution 1: You could put the header (the content to exclude from Google searches) in an iframe with a static URL, domain.com/header.html, and a <meta name="robots" content="noindex" />. Are there trade-offs to this solution?

    Solution 2: You could deliver it conditionally via Apache mod_rewrite, PHP, or JavaScript. One drawback(?): Google does not like it; will Google ever try pages with a standard user's user agent and compare? Another drawback: the hidden content will be missing from the Google cache version as well.

    Example, add-header.php:

        <?php
        $path = $_GET['path'];
        echo file_get_contents($_SERVER["DOCUMENT_ROOT"].$path);
        ?>

    Apache virtual host config:

        RewriteCond %{HTTP_USER_AGENT} !.*spider.* [NC]
        RewriteCond %{HTTP_USER_AGENT} !Yahoo.* [NC]
        RewriteCond %{HTTP_USER_AGENT} !Bing.* [NC]
        RewriteCond %{HTTP_USER_AGENT} !Yandex.* [NC]
        RewriteCond %{HTTP_USER_AGENT} !Baidu.* [NC]
        RewriteCond %{HTTP_USER_AGENT} !.*bot.* [NC]
        RewriteCond %{SCRIPT_FILENAME} \.htm$ [NC,OR]
        RewriteCond %{SCRIPT_FILENAME} \.html$ [NC,OR]
        RewriteCond %{SCRIPT_FILENAME} \.php$ [NC]
        RewriteRule ^(.*)$ /var/www/add-header.php?path=%1 [L]

  • Android - Two different programs at the same time in an emulator

    - by Léa Massiot
    I'm new to Android development. My OS is WinXP. I'm trying to install two different applications on an Android device emulator from the command line. I have two Android projects, "ap1" and "ap2". In the "ap1" project directory I ran "ant debug" and got an "ap1.apk" executable; in the "ap2" project directory I ran "ant debug" and got an "ap2.apk" executable. I created an Android Virtual Device:

        android create avd -n avd1 -t 1 --abi x86

    I launched the emulator:

        emulator -avd avd1 -verbose

    The "adb devices" command returns:

        List of devices attached
        emulator-5554   device

    I installed the first program on the emulator and ran it; it worked:

        adb -s emulator-5554 install "ap1.apk"
        adb shell am start -a android.intent.action.MAIN -n my.pkg.android/.Activity1

    I installed the second program on the emulator and ran it; it worked too:

        adb -s emulator-5554 install "ap2.apk"
        adb shell am start -a android.intent.action.MAIN -n my.pkg2.android/.AnotherActivity1

    All this works, except that the second executable "replaced" the first one. If I try to run the first executable, I get an error:

        adb shell am start -a android.intent.action.MAIN -n my.pkg.android/.Activity1
        Starting: Intent { act=android.intent.action.MAIN cmp=my.pkg.android/.Activity1 }
        Error type 3
        Error: Activity class {my.pkg.android/my.pkg.android.Activity1} does not exist.

    It looks like I can't have the two apps at the same time in the emulator. What do you think? What do I have to do to have the two apps available at the same time in the emulator? Thank you for helping. Best regards.

  • Different behaviour of method overloading in C#

    - by Wondering
    Hi All, I was going through C# Brainteasers (http://www.yoda.arachsys.com/csharp/teasers.html) and came across one question: what should be the output of the code below?

        class Base
        {
            public virtual void Foo(int x)
            {
                Console.WriteLine("Base.Foo(int)");
            }
        }

        class Derived : Base
        {
            public override void Foo(int x)
            {
                Console.WriteLine("Derived.Foo(int)");
            }

            public void Foo(object o)
            {
                Console.WriteLine("Derived.Foo(object)");
            }
        }

        class Test
        {
            static void Main()
            {
                Derived d = new Derived();
                int i = 10;
                d.Foo(i); // prints "Derived.Foo(object)"
            }
        }

    But if I change the code to:

        class Derived
        {
            public void Foo(int x)
            {
                Console.WriteLine("Derived.Foo(int)");
            }

            public void Foo(object o)
            {
                Console.WriteLine("Derived.Foo(object)");
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Derived d = new Derived();
                int i = 10;
                d.Foo(i); // prints "Derived.Foo(int)"
                Console.ReadKey();
            }
        }

    I want to know why the output changes when we are inheriting versus not inheriting, i.e. why method overloading behaves differently in the two cases.

  • What constitutes a development environment, and how do you document it?

    - by Joel Coehoorn
    What items go into a software shop's development environment, how do you document it, and what processes do you follow to make changes? I'm thinking about this from the standpoint of wanting to make it easier to bring new hires up to speed quickly by having all of this on a checklist we follow when setting them up, and then, while I'm at it, making it easier for new hires or existing team members to bring new powerful toolkits and ideas into the environment without disrupting things. I want to keep this platform-agnostic, so even though I'm currently at a Microsoft shop where Visual Studio would be assumed, I'll go ahead and list compiler/IDE as one of the items. Here are some ideas for part 1 ([edit]: I'm keeping this updated based on the better suggestions):

    - Source control access
    - Issue/bug/project tracker
    - System documentation, or references to find the system documentation in source control or in a wiki, including: the build document/environment covered by this question; design documents / technical notes; coding style guidelines; deploy procedures for review/testing/QA/staging/production
    - Licensing details for your tools and your product
    - Team calendar, including the project schedule(s), deadlines, vacation time, and support/on-call schedule (if required)
    - Compiler/IDE
    - Compiler/IDE extensions (things like source control plugins or Visual Studio add-ins)
    - 3rd-party SDKs/toolkits
    - Database connection and tools
    - Testing frameworks
    - Internal libraries
    - Communication tools (chat, wiki, etc.)
    - Static analysis tools (FxCop, FlawFinder, etc.)
    - Virtual machines (holding the dev environment or for testing)
    - Specialized editors (modeling, XML, etc.)
    - Other tools

    What else goes in this list, and how do you document it and vet changes?

  • Tablesorter Pager not working in Safari or Chrome

    - by Zendog74
    Hi all. I am building an app using the tablesorter plugin and its pager plugin. Things work perfectly fine in Firefox and IE, but in Safari (4.0.4 on a PC) and Chrome I get errors when it hits the following code that binds the tablesorter pager. I took the pager binding out and it worked, so something is going wrong somewhere in these three lines of code:

        var tableSel = calendarportlet.ut.createIdSelector(calendarportlet.addNamespace("eventListTable"));
        var pagerSel = calendarportlet.ut.createIdSelector(calendarportlet.addNamespace("pager"));
        jQuery(tableSel).tablesorter({
            widthFixed: true,
            headers: { 0: { sorter: false } },
            sortList: [[2,1],[1,0]],
            widgets: ['zebra']
        }).tablesorterPager({            // <-- error happens in here
            container: jQuery(pagerSel),
            positionFixed: false
        });

    Also, the errors only happen in Safari and Chrome when prototype.js is loaded AFTER jQuery. If it is loaded before jQuery, it works fine. However, this is a portlet and it has to play nicely with other portlets, so we don't want to modify the header and the loading order of the JS libs. Anyone have any ideas on how to fix this?

  • URL equals and checking Internet access

    - by James P.
    On http://java.sun.com/j2se/1.5.0/docs/api/java/net/URL.html it states that:

        Compares this URL for equality with another object. If the given object is not a URL then this method immediately returns false. Two URL objects are equal if they have the same protocol, reference equivalent hosts, have the same port number on the host, and the same file and fragment of the file. Two hosts are considered equivalent if both host names can be resolved into the same IP addresses; else if either host name can't be resolved, the host names must be equal without regard to case; or both host names equal to null. Since hosts comparison requires name resolution, this operation is a blocking operation. Note: The defined behavior for equals is known to be inconsistent with virtual hosting in HTTP.

    According to this, equals will only work if name resolution is possible. Since I can't be sure that a computer has internet access at a given time, should I just use Strings to store addresses instead? Also, how do I go about testing whether access is available when requested?

  • What Color is the Windows' System.Control? (Visual Studio Design View)

    - by jp2code
    In Visual Studio's Design View, a form's colors can be selected in the Properties pane from the "Custom", "Web", and "System" tabs. Of course, the color number can be used too. When the "System" tab is selected, the colors in the list depend on what theme the computer user has set on the PC. I'd like to stick with this, but I need to know how to "read in" the colors. I have controls that I create on the fly, and I often need to change a color back after getting the person's attention using a blink/flicker technique. How do I get the list of system theme colors? Most forms have a BackColor that defaults to "Control", which looks like a very light gray under Windows 7 running the default Windows 7 theme. I've managed to grab a color by physically reading the ARGB value in code, but I'd rather have a way to access the colors by their theme name, if that can be done:

        public Form1()
        {
            Color cControl = this.BackColor;
            Console.WriteLine(cControl.Name); // there is not always a name!
        }

    Does anyone know what I'm talking about?

  • Cannot connect Nexus One Phone to Android adb

    - by appbee
    I am running Android SDK 2.2 and am trying to get adb to connect to a Google Nexus One phone. It's a new phone, shipped straight from Google; I haven't installed any apps on it yet. (I have Windows XP.) Here is what I have done so far:

    - Followed the instructions on setting up the device for development as given on the Android Developers site: http://developer.android.com/guide/developing/device.html
    - Added android:debuggable="true" to my application manifest
    - Checked "USB debugging" on the phone
    - Downloaded the Device Drivers for Windows, Revision 3 (this supports Nexus One phones)
    - Went through the hardware installation wizard to install the device; the device shows up as "Android Composite ADB Interface"

    When I run adb devices in the shell, the device appears for a moment, then disappears. On the Eclipse console, I get the following message:

        [2010-11-13 11:54:42 - DeviceMonitor] Failed to start monitoring

    I have rebooted the PC several times and uninstalled and reinstalled the drivers several times, but I get the same error each time. As I was researching this problem, someone recommended rebooting the phone. I am a bit confused by that: is that a soft or hard reboot? Do I just power the phone off and on, or is there something more complex involved? Do I have to hard-reboot it to reset to the factory version, even though it's brand new? Has anyone run into a similar problem? Any help on this would be great. I can't test my application on the device if adb cannot see the device. Thanks so much in advance.

  • Server.TransferRequest returns blank page on specific server

    - by jishi
    I'm facing an issue that seems to be related to configuration. I have a web application based on MonoRail, where we utilize the routing feature from MonoRail. On the first request after the application has started, the routing isn't initialized. To circumvent this, I have the following code in Application_OnError():

        public virtual void Application_OnError()
        {
            if ( /* identified as routing error */ )
                Server.TransferRequest( Context.Request.RawUrl, false );
            return;
        }

    The problem is that our development server (which runs Server 2008 R2, with IIS 7.5 and .NET 3.5) returns a blank page without headers, but my workstation (which runs Win7, IIS 7.5 and .NET 3.5) works fine. What could be the cause of this? If the code in Application_OnError() throws an exception, what would be the expected output? I have verified the following:

    - The access log shows one entry; I'm not sure if a TransferRequest would show up as a second entry when invoked successfully.
    - The application actually does some work according to my internal logs, and it doesn't die, since subsequent requests work flawlessly (because routing will be active).

    Any hints on what to look for would be greatly appreciated!

  • .NET MissingMethodException occurring on one of thousands of end-user machines -- any insight?

    - by Yoooder
    This issue has me baffled; it's affecting a single user (to my knowledge) and hasn't been reproduced by us. The user is receiving a MissingMethodException. Our trace file indicates it occurs after we create a new instance of a component, when we call an Initialize/Setup method in preparation to have it do work (InitializeWorkerByArgument in the example). The method specified by the error is an interface method, which a base class implements and classes derived from the base class can override as needed. The user has the latest release of our application, and all the provided code ships within a single assembly. Here's a very distilled version of the component:

        class Widget : UserControl
        {
            public void DoSomething(string argument)
            {
                InitializeWorkerByArgument(argument);
                this.Worker.DoWork();
            }

            private void InitializeWorkerByArgument(string argument)
            {
                switch (argument)
                {
                    case "SomeArgument":
                        this.Worker = new SomeWidgetWorker();
                        break;
                }

                // The issue I'm tracking down would have occurred during
                // "new SomeWidgetWorker()" and would have resulted in a missing
                // method exception stating that method "DoWork" could not be found.
                this.Worker.DoWorkComplete += new EventHandler(Worker_DoWorkComplete);
            }

            private IWidgetWorker Worker { get; set; }

            void Worker_DoWorkComplete(object sender, EventArgs e)
            {
                MessageBox.Show("All done");
            }
        }

        interface IWidgetWorker
        {
            void DoWork();
            event EventHandler DoWorkComplete;
        }

        abstract class BaseWorker : IWidgetWorker
        {
            virtual public void DoWork()
            {
                System.Threading.Thread.Sleep(1000);
                RaiseDoWorkComplete(this, null);
            }

            internal void RaiseDoWorkComplete(object sender, EventArgs e)
            {
                if (DoWorkComplete != null)
                {
                    DoWorkComplete(this, null);
                }
            }

            public event EventHandler DoWorkComplete;
        }

        class SomeWidgetWorker : BaseWorker
        {
            public override void DoWork()
            {
                System.Threading.Thread.Sleep(2000);
                RaiseDoWorkComplete(this, null);
            }
        }

  • VS2010 error: Unable to start debugging on the web server

    - by GarDavis
    I get error message "Unable to start debugging on the web server" in Visual Studio 2010. I clicked the Help button and followed the related suggestions without success. This happens with a newly created local ASP.Net project when modified to use IIS instead of Cassini (which works for debugging). It prompts to set debug="true" in the web.config and then immediately pops up the error. Nothing shows up in the Event Viewer. I am able to attach to w3wp to debug. It works but is not as convenient as F5. I also have a similar problem with VS2008 on the same PC. Debugging used to work for both. I have re-registered Framework 4 (aspnet_regiis -i). I ran the VS2010 repair (this is the RTM version). I am running on a Windows Server 2008 R2 x64 box. I do have Resharper V5 installed. There must be some configuration setting or registry value that survives the repair causing the problem. I'd appreciate any ideas. Thanks, Gary Davis ([email protected])

  • Using Kerberos authentication for SQL Server 2008

    - by vivek m
    I am trying to configure my SQL Server to use Kerberos authentication. My setup is like this: I have 2 virtual PCs on a Windows XP Pro SP3 host. Both VPCs are Windows Server 2003 R2. One VPC acts as the DC, DNS server, and DHCP server, has Active Directory installed, and also runs the SQL Server default instance. The second VPC is a domain member and acts as the SQL Server client machine. I configured the SPN on the SQL Server service account to get Kerberos working. On the client VPC, it seems to be using Kerberos authentication (as desired):

        C:\Documents and Settings\administrator.SHAREPOINTSVC>sqlcmd -S vm-winsrvr2003
        1> select auth_scheme from sys.dm_exec_connections where session_id=@@spid
        2> go
        auth_scheme
        ----------------------------------------
        KERBEROS

        (1 rows affected)
        1>

    But on the server computer (where the SQL Server instance is actually running) it looks like it is still using NTLM authentication. This is not a remote instance; the SQL Server is local to this machine:

        C:\Documents and Settings\Administrator>sqlcmd
        1> select auth_scheme from sys.dm_exec_connections where session_id=@@spid
        2> go
        auth_scheme
        ----------------------------------------
        NTLM

        (1 rows affected)
        1>

    What can I do so that it uses Kerberos on the server computer as well? (Or is this something that I should not expect?)

  • Trying to make a plugin system in C++/Qt

    - by Pirate for Profit
    I'm making a task-based program that needs to have plugins. Tasks need to have properties which can be easily edited; I think this can be done with Qt's meta-object reflection capabilities (I could be wrong, but I should be able to stick this in a QtPropertyBrowser?). So here's the base:

        class Task : public QObject
        {
            Q_OBJECT
        public:
            explicit Task(QObject *parent = 0) : QObject(parent) {}
            virtual void run() = 0;
        signals:
            void taskFinished(bool success = true);
        };

    Then a plugin might have this task:

        class PrinterTask : public Task
        {
            Q_OBJECT
        public:
            explicit PrinterTask(QObject *parent = 0) : Task(parent) {}

            void run()
            {
                Printer::getInstance()->Print(this->getData()); // fictional
                emit taskFinished(true);
            }

            inline const QString &getData() const;
            inline void setData(QString data);

            Q_PROPERTY(QString data READ getData WRITE setData) // for reflection
        };

    In a nutshell, here's what I want to do:

        // load plugin
        // find all the Task interface implementations in it
        // have the user able to choose a Task and edit its specific Q_PROPERTYs
        // run the Task

    It's important that one .dll has multiple tasks, because I want them to be associated by their module. For instance, "FileTasks.dll" could have tasks for deleting files, making files, etc. The only problem with Qt's plugin setup is that I want to store X amount of Tasks in one .dll module. As far as I can tell, you can only load one interface per plugin (I could be wrong). If so, the only possible way to accomplish what I want is to create a FactoryInterface with string-based keys which returns the objects (as in Qt's Plug-and-Paint example), which is a terrible boilerplate that I would like to avoid; see the sketch below. Anyone know a cleaner C++ plugin architecture than Qt's to do what I want? Also, am I safely assuming Qt's reflection capabilities will do what I want (i.e. being able to edit an unknown dynamically loaded task's properties with the QtPropertyBrowser before dispatching)?
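
    For comparison, the factory route the question mentions can stay fairly small; here is a minimal sketch of one plugin interface exposing many tasks. The interface name, IID string, and method names are illustrative rather than from an existing library, and <QtPlugin> is assumed for Q_DECLARE_INTERFACE:

        // One exported factory per .dll; the factory enumerates and creates any
        // number of Task subclasses compiled into the same module.
        class TaskFactoryInterface
        {
        public:
            virtual ~TaskFactoryInterface() {}
            virtual QStringList taskKeys() const = 0;  // e.g. "DeleteFile", "MakeFile"
            virtual Task *createTask(const QString &key, QObject *parent = 0) = 0;
        };

        Q_DECLARE_INTERFACE(TaskFactoryInterface, "com.example.TaskFactoryInterface/1.0")

    With this, QPluginLoader::instance() plus qobject_cast<TaskFactoryInterface*> yields one object per .dll that can hand out every task in the module, and each created Task is still a QObject whose Q_PROPERTYs can be enumerated through its QMetaObject for the property browser.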

  • How good is the memory mapped Circular Buffer on Wikipedia?

    - by abroun
    I'm trying to implement a circular buffer in C, and have come across this example on Wikipedia. It looks as if it would provide a really nice interface for anyone reading from the buffer, as reads which wrap around from the end to the beginning of the buffer are handled automatically. So all reads are contiguous. However, I'm a bit unsure about using it straight away as I don't really have much experience with memory mapping or virtual memory and I'm not sure that I fully understand what it's doing. What I think I understand is that it's mapping a shared memory file the size of the buffer into memory twice. Then, whenever data is written into the buffer it appears in memory in 2 places at once. This allows all reads to be contiguous. What would be really great is if someone with more experience of POSIX memory mapping could have a quick look at the code and tell me if the underlying mechanism used is really that efficient. Am I right in thinking for example that the file in /dev/shm used for the shared memory always stays in RAM or could it get written to the hard drive (performance hit) at some point? Are there any gotchas I should be aware of? As it stands, I'm probably going to use a simpler method for my current project, but it'd be good to understand this to have it in my toolbox for the future. Thanks in advance for your time.
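
    For reference, the core of the double-mapping idea the Wikipedia example uses is small enough to sketch. This is a stripped-down POSIX illustration, not a drop-in replacement for the article's code; error handling is mostly omitted and the shared-memory name is arbitrary:

        #include <sys/mman.h>
        #include <fcntl.h>
        #include <unistd.h>

        // Map one size-byte shared-memory object twice, back to back, so that
        // buf[i] and buf[i + size] alias the same physical byte. size must be
        // a multiple of the page size.
        static unsigned char *mirror_map(size_t size)
        {
            int fd = shm_open("/cb-demo", O_RDWR | O_CREAT | O_EXCL, 0600);
            if (fd < 0) return 0;
            shm_unlink("/cb-demo");              // the mappings keep the object alive
            if (ftruncate(fd, size) < 0) { close(fd); return 0; }

            // Reserve 2*size of contiguous address space, then overlay both
            // halves with the same file contents at offset 0.
            unsigned char *base = static_cast<unsigned char *>(
                mmap(0, 2 * size, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
            mmap(base,        size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
            mmap(base + size, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
            close(fd);
            return base;                         // a wrapping read is now one plain memcpy
        }

    As for /dev/shm: on Linux it is a tmpfs mount, so its pages live in the page cache and can be swapped out under memory pressure; "always in RAM" is therefore not guaranteed, although an actively used buffer will in practice stay resident.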
