Search Results

Search found 3168 results on 127 pages for 'directories'.

Page 81/127 | < Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >

  • ExternalInterface in Flex calling a JavaScript function works in Mozilla/Chrome but NOT IE

    - by Rees
    Hello, I have a Flex application that does a simple ExternalInterface.call("shareOptions"), which calls a shareOptions() JavaScript function and works absolutely fine in Mozilla and Chrome. However, when I test with IE I get the following error: Error: [object Error] at flash.external::ExternalInterface$/_toAS() at flash.external::ExternalInterface$/call() I looked at the Adobe LiveDocs documentation but can't determine what the issue is with IE. Is there something I'm missing? If anyone knows, please let me know ASAP! Thanks in advance. private function shareOptions(event:MouseEvent):void{ ExternalInterface.marshallExceptions = true; if (ExternalInterface.available){ ExternalInterface.call("shareOptions"); } } The JavaScript: <script language="JavaScript" type="text/javascript"> function shareOptions() { myWin = window.open('http://www.mysite.shareOptions.php','yeee!','width=640,height=690,toolbar=no,location=0,directories=no,status=no,menubar=no,scrollbars=no,copyhistory=no,resizable=yes,x=500,y=500'); myWin.moveTo(300,300); } </script>
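
    The error itself is opaque, but one thing worth checking (an assumption here, not a confirmed diagnosis) is the second argument to window.open(): IE is far stricter than Mozilla/Chrome and rejects window names containing spaces or punctuation such as '!'. A minimal sketch of the JavaScript side with a sanitized name:

        // Sketch only: IE rejects window.open() names with spaces/punctuation,
        // which Mozilla and Chrome tolerate. The URL below assumes the original
        // 'www.mysite.shareOptions.php' was meant to have a '/' before the script.
        function shareOptions() {
            var myWin = window.open(
                'http://www.mysite.com/shareOptions.php',
                'shareOptionsWin',   // letters/digits/underscores only
                'width=640,height=690,toolbar=no,location=0,status=no,menubar=no,scrollbars=no,resizable=yes'
            );
            if (myWin) {
                myWin.moveTo(300, 300);
            }
        }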

    Read the article

  • Installing Flex on Mac Parallels

    - by Ali Syed
    Hello folks, I am trying to install Flex 3 on my Windows 7 virtual machine (Parallels Desktop) on my Mac Pro. The problem seems to be a conflict with the copy of Flex Builder 3 installed on Mac OS X. The installer tries to install Flex in x:/Program Files/Adobe/Flex Builder 3/, but since Parallels Desktop shares directories with the host, that location already holds the Mac Flex Builder 3 installation. I get this error log: !SESSION 2010-04-22 16:09:23.031 ----------------------------------------------- eclipse.buildId=unknown java.version=1.5.0_11 java.vendor=Sun Microsystems Inc. BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=de_DE Framework arguments: -application org.eclipse.update.core.standaloneUpdate -command install -from file:\C:\Program Files\Adobe\Flex Builder 3 Windose\com.adobe.flexbuilder.update.site/ -featureId com.adobe.flexbuilder.feature.standalone -version 3.0.214193 Command-line arguments: -application org.eclipse.update.core.standaloneUpdate -command install -from file:\C:\Program Files\Adobe\Flex Builder 3 Windose\com.adobe.flexbuilder.update.site/ -featureId com.adobe.flexbuilder.feature.standalone -version 3.0.214193 !ENTRY org.eclipse.update.core 4 0 2010-04-22 16:09:29.187 !MESSAGE Cannot install featurecom.adobe.flexbuilder.feature.standalone 3.0.214193

    Read the article

  • Silverlight 3 Offline Mode

    - by GWLlosa
    Has anyone found that the userbase is more reluctant to install Silverlight 3 Apps in offline mode because they have no control over "where" the app is installed to? I've had a few issues of a similar nature in the past, with 'power users' getting upset that they can't specify install directories. Are there any workarounds? Any way to get the Silverlight offline installs to prompt for specific install locations and the like? The specific reason given is that users want to install apps to remote storage, like a USB stick or something.

    Read the article

  • How to access uploaded files in Ruby

    - by Jeff King
    I am trying to use a Java uploader in a RoR app (for its ease of uploading entire directories). The selected uploader comes with some PHP code that saves the files to the server. I am trying to translate this code to Ruby, but am stumped on this point: PHP has a very convenient superglobal – $_FILES – that contains a hash of all files uploaded to the current script via the HTTP POST method. It appears Ruby does not have a similar resource. Lacking that, what is the best way to access and save the uploaded files? I am using the JavaPowUpload uploader ( http://www.element-it.com/OnlineHelpJavaPowUpload/index.html ).
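
    In Rails the rough equivalent of PHP's $_FILES is the params hash: a multipart POST gives you uploaded-file objects that respond to original_filename and read. A minimal controller sketch (the field name, route, and target path are assumptions, since they depend on how JavaPowUpload posts the files):

        # Sketch of a Rails controller action; parameter and path names are placeholders.
        class UploadsController < ApplicationController
          def create
            uploaded = params[:file]  # an uploaded-file object, not a bare string
            if uploaded.respond_to?(:original_filename)
              path = File.join(Rails.root.to_s, 'uploads', File.basename(uploaded.original_filename))
              File.open(path, 'wb') { |f| f.write(uploaded.read) }
              render :text => 'OK'
            else
              render :text => 'No file received', :status => 400
            end
          end
        end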

    Read the article

  • F# PowerPack 2.0.0.0 issue: The task ..."…\fslex.exe" is invalid

    - by Roman Kuzmin
    I have upgraded F# PowerPack today to the latest 2.0.0.0 and tried to rebuild the MiniCalc sample from here: http://achrissmith.blogspot.com/2010/04/fslex-and-fsyacc-examples-updated.html If I build it in VS 2010 it fails with the message: C:\Program Files\MSBuild\FSharp\1.0\FSharp.PowerPack.targets(32,3): error MSB6004: The specified task executable location "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\fslex.exe" is invalid. If I build it from the command line with MSBuild, it complains about a missing C:\Windows\Microsoft.NET\Framework\v4.0.30319\fslex.exe The problem is kind of “fixed” if I copy fslex and fsyacc to both of those directories; after that I can build from the command line and from VS 2010. But that does not look like the right way to solve the problem. What is the right way? EDIT: The same issue is true for the PowerPack sample from sources: May2010\workyard\tests\LexAndYaccMiniProject. Now (after the trick above) it builds fine, too.

    Read the article

  • Receiving "Path 'OPTIONS' is forbidden." Exception in ASP.NET website

    - by Greg
    I am getting the error "Path 'OPTIONS' is forbidden." since we moved our website over to a new server setup. I am unable to recreate the error, but I am receiving emails for this exception at least a few times a day. Any ideas what could be causing this and how I can fix it? EDIT: Stack Trace: at System.Web.HttpMethodNotAllowedHandler.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) There are no directories or files named OPTIONS and I believe all permissions are correct. I am finding some information about a possible link to Excel getting data from the web server, but nothing that fully explains how or what is happening yet. EDIT AGAIN: It seems this has to do with Excel files opening in Internet Explorer.
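
    For context: the exception comes from ASP.NET's HttpMethodNotAllowedHandler, i.e. a request arrived with an HTTP verb the handler mappings don't allow, and the Excel/IE behaviour mentioned above typically sends OPTIONS (WebDAV discovery) probes. A hedged sketch of one option on IIS 7, rejecting the verb in request filtering so it never reaches ASP.NET and never throws (only appropriate if the site doesn't actually need OPTIONS):

        <!-- Sketch only: reject OPTIONS at the IIS 7 request-filtering level. -->
        <system.webServer>
          <security>
            <requestFiltering>
              <verbs allowUnlisted="true">
                <add verb="OPTIONS" allowed="false" />
              </verbs>
            </requestFiltering>
          </security>
        </system.webServer>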

    Read the article

  • Multiple skins in ASP.NET MVC using StringTemplate

    - by Jamie
    I am considering the StringTemplate view engine for my ASP.NET MVC application. This application will be built with one skin, but I then expect many more, often very similar, skins to be developed for it. This is primarily the reason for my choice of StringTemplate as the view engine, as there will be zero logic in the views. In my head, I envisaged having each skin in a directory under the 'Views' directory in the folder structure, then maintaining a data structure which maps hostnames to skin directories. An obvious disadvantage of this approach is that I will have to explicitly specify my view files - an alternative might be to hack the implementation of the View() method to change the /views/controller/action standard path and insert an extra layer. Is anyone aware of any examples of implementations along these lines using ASP.NET MVC and StringTemplate at present? Can anyone foresee any potential problems with my approach? Thanks in advance.
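
    A low-tech sketch of the "explicitly specify view files" approach described above: map host names to skin directories and build the view path yourself. The class and dictionary names are hypothetical, the .st extension is a placeholder, and it is assumed (not verified) that the StringTemplate engine resolves explicit ~/ view paths the way the stock engines do:

        using System;
        using System.Collections.Generic;
        using System.Web;

        public static class SkinResolver
        {
            // Hypothetical host-to-skin map; in practice this could live in config or a database.
            private static readonly IDictionary<string, string> HostToSkin =
                new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
                {
                    { "www.clienta.example.com", "ClientA" },
                    { "www.clientb.example.com", "ClientB" }
                };

            public static string ViewPath(HttpRequestBase request, string controller, string action)
            {
                string skin;
                if (!HostToSkin.TryGetValue(request.Url.Host, out skin))
                    skin = "Default";
                return string.Format("~/Views/{0}/{1}/{2}.st", skin, controller, action);
            }
        }

        // Usage inside a controller action:
        //   return View(SkinResolver.ViewPath(Request, "Home", "Index"), model);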

    Read the article

  • Two Questions on Rsync - rsync by date and by file name

    - by paulj3000
    Hi, I have two questions with respect to rsync: 1: I have a bunch of files which are incremented by day of the year, e.g. file.txt.81, file.txt.82, etc. These files are in different directories: data1/file.txt.81 data1/file.txt.82 data2/file2.txt.81 data2/file2.txt.82 How can I have rsync get only the *.82 files and not even touch the other files? 2: Now I have a similar data directory structure as above. How can I rsync all files that have been modified on or after a specific day? Thanks
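
    A hedged sketch of both (source/destination paths and the cutoff date are placeholders): rsync's include/exclude filters cover the per-extension case, and since rsync has no built-in "modified since" filter, the usual trick for the by-date case is to let find build the file list.

        # 1) Copy only the *.82 files, recursing through data1/, data2/, ...
        #    (include the directory tree, include *.82, exclude everything else;
        #     -m prunes directories that end up empty)
        rsync -avm --include='*/' --include='*.82' --exclude='*' /path/to/src/ /path/to/dest/

        # 2) Let GNU find select files modified on or after a given date and
        #    feed the list to rsync (-newermt needs a reasonably recent findutils):
        cd /path/to/src
        find . -type f -newermt '2010-03-23' -print0 |
            rsync -av --files-from=- --from0 . /path/to/dest/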

    Read the article

  • Silverlight Project - Setting Reference to Copy Local false not working.

    - by cmaduro
    Why is it that when my Silverlight project is built, the output directory contains a bunch of culture-specific directories: ar\System.Windows.Controls.resources.dll bg\System.Windows.Controls.resources.dll ca\System.Windows.Controls.resources.dll etc. The root of the build output also contains: System.Xml.Linq.dll System.Windows.Controls.dll I have gone through the projects in my solution and made sure that "Copy Local" is set to false for all the references to the mentioned dll files. Those two files were set to true, but I did switch them to false. Despite my efforts to google an answer, I remain stuck.

    Read the article

  • Python Image Library: How to combine 4 images into a 2 x 2 grid?

    - by Casey
    I have 4 directories with images for an animation. I would like to take the set of images and generate a single image with the 4 images arranged into a 2x2 grid for each frame of the animation. My code so far is: import Image fluid64 = "Fluid64_half_size/00" fluid128 = "Fluid128_half_size/00" fluid512 = "Fluid512_half_size/00" fluid1024 = "Fluid1024_half_size/00" out_image = "Fluid_all/00" for pic in range(1, 26): blank_image = Image.open("blank.jpg") if pic < 10: image_num = "0"+str(pic) else: image_num = str(pic) image64 = Image.open(fluid64+image_num+".jpg") image128 = Image.open(fluid128+image_num+".jpg") image512 = Image.open(fluid512+image_num+".jpg") image1024 = Image.open(fluid1024+image_num+".jpg") out = out_image + image_num + ".jpg" blank_image.paste(image64, (0,0)).paste(fluid128, (400,0)).paste(fluid512, (0,300)).paste(fluid1024, (400,300)).save(out) Not sure why it's not working. I'm getting the error: Traceback (most recent call last): File "C:\Users\Casey\Desktop\Image_composite.py", line 24, in <module> blank_image.paste(image64, (0,0)).paste(fluid128, (400,0)).paste(fluid512, ( ste(fluid1024, (400,300)).save(out) AttributeError: 'NoneType' object has no attribute 'paste' shell returned 1 Any help would be awesome. Thanks!
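
    The traceback is the giveaway: PIL's Image.paste() modifies the image in place and returns None, so the chained .paste(...).paste(...) calls fail on the second call; the snippet also pastes the path-prefix strings (fluid128, fluid512, fluid1024) where the opened Image objects are meant. A sketch of the corrected loop, reusing the directory variables defined above and assuming each tile is 400x300 as in the original offsets:

        from PIL import Image  # or "import Image" on older PIL installs

        for pic in range(1, 26):
            image_num = "%02d" % pic
            blank_image = Image.open("blank.jpg")
            image64 = Image.open(fluid64 + image_num + ".jpg")
            image128 = Image.open(fluid128 + image_num + ".jpg")
            image512 = Image.open(fluid512 + image_num + ".jpg")
            image1024 = Image.open(fluid1024 + image_num + ".jpg")

            # paste() works in place and returns None, so call it once per tile
            blank_image.paste(image64, (0, 0))
            blank_image.paste(image128, (400, 0))
            blank_image.paste(image512, (0, 300))
            blank_image.paste(image1024, (400, 300))
            blank_image.save(out_image + image_num + ".jpg")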

    Read the article

  • How do I use waf to build a shared library?

    - by James Morris
    I want to build a shared library using waf as it looks much easier and less cluttered than GNU autotools. I actually have several questions so far related to the wscript I've started to write: VERSION='0.0.1' APPNAME='libmylib' srcdir = '.' blddir = 'build' def set_options(opt): opt.tool_options('compiler_cc') pass def configure(conf): conf.check_tool('compiler_cc') conf.env.append_value('CCFLAGS', '-std=gnu99 -Wall -pedantic -ggdb') def build(bld): bld.new_task_gen( features = 'cc cshlib', source = '*.c', target='libmylib') The line containing source = '*.c' does not work. Must I specify each and every .c file instead of using a wildcard? How can I enable a debug build for example (currently the wscript is using the debug builds CFLAGS, but I want to make this optional for the end user). It is planned for the library sources to be within a sub directory, and programs that use the lib each in their own sub directories.

    Read the article

  • NSFileManager - Copying Files at Startup

    - by David Schiefer
    Hi, I need to copy a few sample files from my app's resource folder and place them in my app's document folder. I came up with the attached code, it compiles fine but it doesn't work. All the directories I refer to do exist. I'm not quite sure what I am doing wrong, could someone point me in the right direction please? NSFileManager*manager = [NSFileManager defaultManager]; NSString*dirToCopyTo = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"]; NSString*path = [[NSBundle mainBundle] resourcePath]; NSString*dirToCopyFrom = [path stringByAppendingPathComponent:@"Samples"]; NSError*error; NSArray*files = [manager contentsOfDirectoryAtPath:dirToCopyTo error:nil]; for (NSString *file in files) { [manager copyItemAtPath:[dirToCopyFrom stringByAppendingPathComponent:file] toPath:dirToCopyTo error:&error]; if (error) { NSLog(@"%@",[error localizedDescription]); } }
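
    Two things stand out (this is a sketch of the likely fix, not a verified diagnosis): the loop enumerates the destination directory rather than the Samples source directory, and copyItemAtPath:toPath:error: expects the destination to be the full target path, with success checked via the BOOL return value rather than an uninitialized NSError pointer:

        NSFileManager *manager = [NSFileManager defaultManager];
        NSString *dirToCopyTo = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"];
        NSString *dirToCopyFrom = [[[NSBundle mainBundle] resourcePath]
                                      stringByAppendingPathComponent:@"Samples"];

        // Enumerate the *source* directory, not the (empty) destination.
        NSArray *files = [manager contentsOfDirectoryAtPath:dirToCopyFrom error:NULL];
        for (NSString *file in files) {
            NSError *error = nil;
            BOOL ok = [manager copyItemAtPath:[dirToCopyFrom stringByAppendingPathComponent:file]
                                       toPath:[dirToCopyTo stringByAppendingPathComponent:file]
                                        error:&error];
            if (!ok) {
                NSLog(@"%@", [error localizedDescription]);
            }
        }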

    Read the article

  • JPA 2.0, Hibernate 3.5, jars & persistence.xml location

    - by phmr
    I'm building a desktop application using Hibernate 3.5 & JPA 2.0. I have two jars. The first is the lib, which defines every entity and DAO; its packages look like this: org.my.package.models org.my.package.models.dao org.my.package.models.utils In org.my.package.utils I defined my Hibernate utility class for getting EM & EMF instances, which means the lib is bound to a persistence unit name, but that's not a problem for now (though feel free to recommend a better way to manage that). The second jar is built as follows: org.my.package.app META-INF is defined at the root of the project, which means that in my jar I can find these directories directly in the root: META-INF/ META-INF/persistence.xml org/ org/my/ ... org/my/package/app/Main.class When I run the app, Hibernate doesn't manage to find persistence.xml; it throws an exception, something like "package or class for PersistenceUnitName not found". I googled a bit about the problem but I can't get the source code organisation right. Any help?
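
    For reference, the JPA rule is that persistence.xml must sit in META-INF/ of the persistence-unit root, i.e. the archive the provider scans for entities. When the entities live in a different jar than the one carrying META-INF/persistence.xml, the usual options are to move persistence.xml into the entity jar, list the entity classes explicitly, or reference the other jar with <jar-file>. A hedged sketch (unit name, jar path, and dialect are placeholders):

        <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
          <persistence-unit name="myDesktopPU" transaction-type="RESOURCE_LOCAL">
            <provider>org.hibernate.ejb.HibernatePersistence</provider>
            <!-- option A: point at the jar that contains org.my.package.models -->
            <jar-file>../lib/my-models.jar</jar-file>
            <!-- option B: list the entities explicitly -->
            <!-- <class>org.my.package.models.SomeEntity</class> -->
            <properties>
              <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
            </properties>
          </persistence-unit>
        </persistence>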

    Read the article

  • C++ / error LNK2019 - XpokerEval library

    - by user1068115
    I have been struggling all afternoon with the XPokerEval library I downloaded here: http://www.codingthewheel.com/archives/poker-hand-evaluator-roundup#xpokereval I am trying to make XPokerEval.PokerSim work, using Visual C++ 2010 Express, but I get the following errors: 1>psim_test.obj : error LNK2019: unresolved external symbol "void __cdecl SimulateHand(char const *,struct SimResults *,float,float,unsigned int)" (?SimulateHand@@YAXPBDPAUSimResults@@MMI@Z) referenced in function _wmain 1>psim_test.obj : error LNK2019: unresolved external symbol "unsigned int __cdecl RankHand(int const *)" (?RankHand@@YAIPBH@Z) referenced in function _wmain Do you think this could be due to a missing link to a library or an incompatibility with my IDE? The project uses the Pokersource Poker-Eval library, which is also included in the zip file; I added it to the project directories, but it still does not work! I cannot figure out why and I am getting mad! Thanks in advance for any tips on this!

    Read the article

  • Git ignore sub folders

    - by Marcel
    I have a lot of projects in my .NET solution. I would like to exclude all "bin/Debug" and "bin/Release" folders (and their contents), but still include the "bin" folder itself and any DLLs contained therein. A .gitignore with "bin/" ignores the "Debug" and "Release" folders, but also any DLLs contained in the "bin" folder. "bin/Debug" or "bin/Release" in the .gitignore file does not exclude the directories unless I fully qualify the ignore pattern as "Solution/Project/bin/Debug" - which I don't want to do, as I would need to include this full pattern for each project in my solution, as well as add it for any new projects. Any suggestions?
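
    A sketch of one way to express this in a single top-level .gitignore. Patterns with a slash are anchored to the .gitignore's own directory, which is why "bin/Debug" on its own doesn't match nested projects; the ** form matches at any depth but needs a git version that supports it (otherwise a short per-project .gitignore with the same two lines works the same way):

        # Ignore every project's Debug/Release output, but keep bin/ itself
        # and any DLLs checked in directly under bin/.
        **/bin/Debug/
        **/bin/Release/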

    Read the article

  • rpmbuild generates RPM in which subdirectory

    - by Adil
    Which directory does rpmbuild generate RPMs under? I checked the RPMS directory: [root@tom adil]# ls /usr/src/redhat/ BUILD RPMS SOURCES SPECS SRPMS [root@tom adil]# ls /usr/src/redhat/RPMS/ athlon i386 i486 i586 i686 noarch [root@tom adil]# How does rpmbuild decide which of these subdirectories it outputs to? Is it controlled by the spec file? What is the default? I thought it was uname -p, but that's not the case; probably uname -i is used. Linked to my last question: http://stackoverflow.com/questions/2565282/difference-between-machine-hardware-and-hardware-platform
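
    A quick sketch of how the subdirectory is chosen (the spec file name below is a placeholder): it is simply the package architecture, i.e. BuildArch from the spec if set (noarch packages land in RPMS/noarch/), otherwise rpmbuild's detected target CPU, which --target can override:

        rpmbuild -bb --target=i686 mypackage.spec   # lands in RPMS/i686/
        rpm --eval '%{_target_cpu}'                 # show the default target cpu
        rpm --eval '%{_rpmfilename}'                # template that builds RPMS/<arch>/<name>-...rpm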

    Read the article

  • _CopyWebApplication with web.config transformations

    - by Jeremy
    I am trying to have my web application automatically Publish when a Release build is performed. I'm doing this using the _CopyWebApplication target. I added the following to my .csproj file: <!-- Automatically Publish in Release build. --> <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" /> <Target Name="AfterBuild"> <RemoveDir Directories="$(ProjectDir)..\Output\MyWeb" ContinueOnError="true" /> <MSBuild Projects="MyWeb.csproj" Properties="Configuration=Release;WebProjectOutputDir=$(ProjectDir)..\Output\MyWeb;OutDir=$(ProjectDir)bin\" Targets="ResolveReferences;_CopyWebApplication" /> </Target> This works but with one issue. The difference between this output, and the output generated when using the Publish menu item in Visual Studio, is that the Web.Release.config transformation is not applied to the Web.config file when using the MSBuild method. Instead, Web.config, Web.Release.config, and Web.Debug.config are all copied. Any ideas are appreciated.
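
    One hedged sketch of closing the gap: _CopyWebApplication copies Web.config and both transform files verbatim, so the Release transform has to be invoked explicitly via the TransformXml task that ships with the VS 2010 web publishing targets. The assembly path below is its usual install location and may need adjusting for a given machine:

        <UsingTask TaskName="TransformXml"
                   AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll" />

        <!-- ...then, inside the existing AfterBuild target, after the MSBuild call: -->
        <TransformXml Source="Web.config"
                      Transform="Web.Release.config"
                      Destination="$(ProjectDir)..\Output\MyWeb\Web.config" />
        <Delete Files="$(ProjectDir)..\Output\MyWeb\Web.Release.config;$(ProjectDir)..\Output\MyWeb\Web.Debug.config" />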

    Read the article

  • Change present working directory of a calling shell from a ruby script

    - by Erik Kastman
    I'm writing a simple Ruby sandbox command-line utility that copies directories from a remote filesystem to a local scratch directory, unzips them, and lets users edit the files. I'm using Dir.mktmpdir as the default scratch directory, which gives a really ugly path (for example: /var/folders/zz/zzzivhrRnAmviuee+++1vE+++yo/-Tmp-/d20100311-70034-abz5zj). I'd like the last action of the copy-and-unzip script to cd the calling shell into the new scratch directory so people can access it easily, but I can't figure out how to change the PWD of the calling shell. One possibility is to have the utility print out the new path to stdout and then run the script as part of a subshell (i.e. cd $(sandbox my_dir) ), but I want to print out progress on the copy and unzip, since it can take up to 10 minutes, so this won't work. Should I just have it go to a pre-determined, easy-to-find scratch directory? Does anyone have a better suggestion? Thanks in advance for your help. -Erik
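
    A child process can't change its parent shell's working directory, so some variant of the cd $(sandbox my_dir) idea is usually the way out; the progress-output objection goes away if the Ruby script writes progress to STDERR and prints only the final path to STDOUT. A sketch of the shell wrapper (function and script names are placeholders):

        # In the user's shell profile:
        sandbox() {
            local dest
            dest=$(sandbox.rb "$@") || return $?   # progress on stderr still reaches the terminal
            cd "$dest"
        }

        # In the Ruby script: $stderr.puts "copying ..." for progress, and a
        # final `puts scratch_dir` as the only thing written to stdout.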

    Read the article

  • How to share cookies across multiple Apache Virtual Hosts

    - by puk
    This question is generally phrased as "How to share cookies across multiple subdomains", and the answer is generally to use the leading dot, like so: setcookie( 'id', 0, time()+30*3600, '/', '.example.com' ); which means that the cookie is available to all subdomains of example.com. However, I suspect the / path adds the constraint that all subdomains must be physically under the same tree. The PHP documentation states: path The path on the server in which the cookie will be available on. If set to '/', the cookie will be available within the entire domain. If set to '/foo/', the cookie will only be available within the /foo/ directory and all sub-directories such as /foo/bar/ of domain. The default value is the current directory that the cookie is being set in. Is it possible to share cookies if one has two (Apache) virtual hosts set up with document roots at, for example: www.one.example.com → /var/www/example1 www.two.example.com → /var/www/example2

    Read the article

  • URL on Apache server does not default to the .php file after / has been added

    - by jeffkee
    Generally, a URL that looks like this: http://www.domain.com/product.php/12/ will open product.php and pass the /12/ along as path info, which my PHP script can then process to pull out the right product info. However, when I migrated this whole site, after developing it, to a new server, I get a 404 error, because that server does not fall back to the parent script when the requested path segments don't exist as directories. I vaguely remember learning that this is a fairly common Apache feature, but I can't seem to recall how to set it up or how to manipulate it. If there's an .htaccess method to achieve this, that would be great.
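
    This trailing-path behaviour (PATH_INFO) is governed by Apache's core AcceptPathInfo directive, which the new server is presumably configured to reject; a minimal sketch for httpd.conf or an .htaccess file (the exact outcome also depends on how PHP is hooked into Apache on that server):

        # Allow /product.php/12/ to reach product.php with the trailing path info
        AcceptPathInfo On

        # PHP then sees the trailing segment as path info, e.g.:
        #   $_SERVER['PATH_INFO']  =>  "/12/"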

    Read the article

  • Strange thing about .NET 4.0 filesystem enumeration functionality

    - by codymanix
    I just read a page of "What's New in .NET Framework 4.0". I have trouble understanding the last paragraph: To remove open handles on enumerated directories or files Create a custom method (or function in Visual Basic) to contain your enumeration code. Apply the MethodImplAttribute attribute with the NoInlining option to the new method. For example: [MethodImplAttribute(MethodImplOptions.NoInlining)] Private void Enumerate() Include the following method calls, to run after your enumeration code: * The GC.Collect() method (no parameters). * The GC.WaitForPendingFinalizers() method. Why the NoInlining attribute? What harm would inlining do here? Why call the garbage collector manually - why not make the enumerator implement IDisposable in the first place? I suspect they use FindFirstFile()/FindNextFile() API calls for the implementation, so FindClose() has to be called in any case when the enumeration is done.
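
    For reference, a sketch of the pattern the quoted documentation seems to describe (a reconstruction, not a recommendation): the enumeration is confined to a non-inlined method so the enumerator becomes unreachable when it returns, and a forced collection then lets its finalizer close the underlying find handle:

        using System;
        using System.IO;
        using System.Runtime.CompilerServices;

        static class EnumerationExample
        {
            [MethodImpl(MethodImplOptions.NoInlining)]
            private static void Enumerate(string root)
            {
                // Directory.EnumerateFiles is lazy; abandoning the iterator early
                // can leave its native find handle open until finalization.
                foreach (string file in Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories))
                {
                    Console.WriteLine(file);
                }
            }

            static void Main()
            {
                Enumerate(@"C:\temp");
                GC.Collect();
                GC.WaitForPendingFinalizers();
                // Any handle still held by an abandoned enumerator should now be released.
            }
        }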

    Read the article

  • IIS Strategies for Accessing Secured Network Resources

    - by ErikE
    Problem: A user connects to a service on a machine, such as an IIS web site or a SQL Server database. The site or the database need to gain access to network resources such as file shares (the most common) or a database on a different server. Permission is denied. This is because the user the service is running under doesn't have network permissions in the first place, or if it does, it doesn't have rights to access the remote resource. I keep running into this problem over and over again and am tired of not having a really solid way of handling it. Here are some workarounds I'm aware of: Run IIS as a custom-created domain user who is granted high permissions If permissions are granted one file share at a time, then every time I want to read from a new share, I would have to ask a network admin to add it for me. Eventually, with many web sites reading from many shares, it is going to get really complicated. If permissions are just opened up wide for the user to access any file shares in our domain, then this seems like an unnecessary security surface area to present. This also applies to all the sites running on IIS, rather than just the selected site or virtual directory that needs the access, a further surface area problem. Still use the IUSR account but give it network permissions and set up the same user name on the remote resource (not a domain user, a local user) This also has its problems. For example, there's a file share I am using that I have full rights to for sharing, but I can't log in to the machine. So I have to find the right admin and ask him to do it for me. Any time something has to change, it's another request to an admin. Allow IIS users to connect as anonymous, but set the account used for anonymous access to a high-privilege one This is even worse than giving the IIS IUSR full privileges, because it means my web site can't use any kind of security in the first place. Connect using Kerberos, then delegate This sounds good in principle but has all sorts of problems. First of all, if you're using virtual web sites where the domain name you connect to the site with is not the base machine name (as we do frequently), then you have to set up a Service Principal Name on the webserver using Microsoft's SetSPN utility. It's complicated and apparently prone to errors. Also, you have to ask your network/domain admin to change security policy for both the web server and the domain account so they are "trusted for delegation." If you don't get everything perfectly right, suddenly your intended Kerberos authentication is NTLM instead, and you can only impersonate rather than delegate, and thus no reaching out over the network as the user. Also, this method can be problematic because sometimes you need the web site or database to have permissions that the connecting user doesn't have. Create a service or COM+ application that fetches the resource for the web site Services and COM+ packages are run with their own set of credentials. Running as a high-privilege user is okay since they can do their own security and deny requests that are not legitimate, putting control in the hands of the application developer instead of the network admin. Problems: I am using a COM+ package that does exactly this on Windows Server 2000 to deliver highly sensitive images to a secured web application. I tried moving the web site to Windows Server 2003 and was suddenly denied permission to instantiate the COM+ object, very likely registry permissions. 
I trolled around quite a bit and did not solve the problem, partly because I was reluctant to give the IUSR account full registry permissions. That seems like the same bad practice as just running IIS as a high-privilege user. Note: This is actually really simple. In a programming language of your choice, you create a class with a function that returns an instance of the object you want (an ADODB.Connection, for example), and build a dll, which you register as a COM+ object. In your web server-side code, you create an instance of the class and use the function, and since it is running under a different security context, calls to network resources work. Map drive letters to shares This could theoretically work, but in my mind it's not really a good long-term strategy. Even though mappings can be created with specific credentials, and this can be done by others than a network admin, this also is going to mean that there are either way too many shared drives (small granularity) or too much permission is granted to entire file servers (large granularity). Also, I haven't figured out how to map a drive so that the IUSR gets the drives. Mapping a drive is for the current user, I don't know the IUSR account password to log in as it and create the mappings. Move the resources local to the web server/database There are times when I've done this, especially with Access databases. Does the database have to live out on the file share? Sometimes, it was just easiest to move the database to the web server or to the SQL database server (so the linked server to it would work). But I don't think this is a great all-around solution, either. And it won't work when the resource is a service rather than a file. Move the service to the final web server/database I suppose I could run a web server on my SQL Server database, so the web site can connect to it using impersonation and make me happy. But do we really want random extra web servers on our database servers just so this is possible? No. Virtual directories in IIS I know that virtual directories can help make remote resources look as though they are local, and this supports using custom credentials for each virtual directory. I haven't been able to come up with, yet, how this would solve the problem for system calls. Users could reach file shares directly, but this won't help, say, classic ASP code access resources. I could use a URL instead of a file path to read remote data files in a web page, but this isn't going to help me make a connection to an Access database, a SQL server database, or any other resource that uses a connection library rather than being able to just read all the bytes and work with them. I wish there was some kind of "service tunnel" that I could create. Think about how a VPN makes remote resources look like they are local. With a richer aliasing mechanism, perhaps code-based, why couldn't even database connections occur under a defined security context? Why not a special Windows component that lets you specify, per user, what resources are available and what alternate credentials are used for the connection? File shares, databases, web sites, you name it. I guess I'm almost talking about a specialized local proxy server. Anyway, so there's my list. I may update it if I think of more. Does anyone have any ideas for me? My current problem today is, yet again, I need a web site to connect to an Access database on a file share. Here we go again...

    Read the article

  • LINK : fatal error LNK1104: cannot open file "Iphlpapi.lib"

    - by Rob
    So I'm using Visual C++ 6.0 and trying to compile some source code, but upon compilation I get this: Linking... LINK : fatal error LNK1104: cannot open file "Iphlpapi.lib" Error executing link.exe. I'm using the correct SDK, and the directories are correct. I've checked, double-checked, and triple-checked. The file is in the specified directory. I can't figure out what the problem is. Any ideas? Service Pack 6, SDK for Windows Server 2003 SP1 // Sounds odd, since I'm running XP SP3, but this has worked for me in the past. Like I said, it worked flawlessly for me in the past. I don't understand why it won't work now.

    Read the article

  • Unlocking SVN working copy with unversioned resources

    - by Vijay Dev
    I have a repository which contains some unversioned directories and files. The server running svn was recently changed, and since the checkout was done using the url svn://OLD-IP, I relocated my svn working copy, this time to the url svn://NEW-DOMAIN-NAME. Since there are some unversioned resources, the switch did not complete properly and the working copy got locked. A cleanup operation did not work either, because of these unversioned resources. I looked around on the net, found out about svn:ignore, and tried that, but to no avail. I am unable to release all the locks. Any ideas on solving the problem? Once I release the locks, I believe I can use svn:ignore and carry on with the relocate operation.

    Read the article

  • Does OpenRasta support Mono?

    - by Earlz
    Although Mono support is not a big deal for us, I figured OpenRasta supported it because it has some commit messages about it. Well, I tried to build it on Mono and got ambiguous type references (after manually creating about 10 obj directories). So I tried just using the prebuilt assemblies I had on hand, and that gave me an "Object reference not set to an instance of an object" (the usual error I have with Mono...) at OpenRasta.Diagnostics.DebuggerLoggingTraceListener.WriteAll (using xsp2). Is there official support for Mono, or am I missing some sort of extra step for deployment?

    Read the article
