Search Results

Search found 29574 results on 1183 pages for 'directory services'.

Page 490 of 1183

  • Using thrift with PHP and Java

    - by Christopher McCann
    I'm getting a bit confused about how to go about this. My plan is to use PHP to perform the final page construction; the PHP web app will contact multiple services, which I will also develop, for its data. Let's say one of those services is done in Java: I would define a Java interface, which is implemented by a concrete class. This is where I get confused - how does Thrift link the PHP web app with the Java service, or am I getting totally mixed up? Thanks
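    A minimal sketch of how Thrift typically links the two sides, assuming a shared IDL that defines a hypothetical UserService: the Thrift compiler generates a PHP client stub and a Java service skeleton from the same file, so the PHP app calls the running Java implementation over a socket. The host, port, and all UserService names below are illustrative assumptions, not details from the question.

        <?php
        // Sketch only: assumes the Thrift PHP runtime is installed and that
        // `thrift --gen php` was run on the shared IDL (hypothetical names).
        use Thrift\Transport\TSocket;
        use Thrift\Transport\TBufferedTransport;
        use Thrift\Protocol\TBinaryProtocol;

        $socket    = new TSocket('java-service-host', 9090); // illustrative host/port
        $transport = new TBufferedTransport($socket);
        $protocol  = new TBinaryProtocol($transport);

        // UserServiceClient is generated by the Thrift compiler; the Java side
        // implements the matching interface generated for Java.
        $client = new UserServiceClient($protocol);

        $transport->open();
        $user = $client->getUser(42); // RPC executed inside the Java service
        $transport->close();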

    Read the article

  • Ruby New dnssd (bonjour zeroconf) service not appearing while browsing

    - by Poul
    Here is my simple zeroconf (aka Bonjour / dnssd) browser. If other services are already running when I start the browser, I can see them (the 'resolved to' line prints to the screen). However, if I start another service while the browser is running, it never appears: the code just waits at the top of the block, where I would expect it to enter the block once a new service is registered. Any ideas?

        require 'rubygems'
        require 'dnssd'

        browser = DNSSD::Service.new
        browser.browse '_http._tcp.' do |reply| # <-- code seems to wait here for more services
          DNSSD.resolve reply do |r|
            puts "resolved to: http://#{r.target}:#{r.port}"
          end
        end

        # example service
        register_service = DNSSD::register("My Service", "_http._tcp", nil, my_port) do
          puts "* Registering the service *"
        end

    Read the article

  • Flex Client application - HTTPRequest problem in the initialize function.

    - by Elad
    Hello all. I have a serious problem in my Flex client applications. I have an Apache server with PHP web services, and the Flex client makes HTTPService requests against it. I noticed that an HTTPService request fired from the application's creationComplete event does not always get data from the server, while HTTPService requests triggered by user actions always work. I also noticed that when I run the Flex client directly from Flex Builder 3, without uploading it to the server, the problem occurs less frequently. In the application:

        <mx:Application creationComplete="Init()" verticalScrollPolicy="off"
            horizontalScrollPolicy="off" xmlns:mx="http://www.adobe.com/2006/mxml"
            layout="absolute" backgroundColor="#5d8eb1">

        private function Init():void {
            var http_request:HTTPService = new HTTPService();
            http_request.url = "http://" + this.server_name + ":" + this.server_port
                               + "/services/client/client_result.php";
            http_request.resultFormat = "e4x";
            http_request.addEventListener("result", resultFunc);
            http_request.send();
            http_request.disconnect();
        }

    Read the article

  • File-level filesystem change notification in Mac OS X

    - by Paul J. Lucas
    I want my code to be notified when any file under a given directory (directly or indirectly) is modified. By "modified", I mean my code should be notified whenever a file's contents are altered, it's renamed or deleted, or a new file is added. For my application, there can be thousands of files. I looked at FSEvents, but its Technology Overview says, in part:

        The important point to take away is that the granularity of notifications is at a
        directory level. It tells you only that something in the directory has changed, but
        does not tell you what changed.

    It also says:

        The file system events API is also not designed for finding out when a particular file
        changes. For such purposes, the kqueues mechanism is more appropriate.

    However, in order to use kqueue on a given file, one has to open the file to obtain a file descriptor. It's impractical to manage thousands of file descriptors (and doing so would probably exceed the maximum allowable number of open file descriptors anyway). Curiously, under Windows I can use the ReadDirectoryChangesW() function, and it does precisely what I want. So how can one do this under Mac OS X? Or, asked another way: how would one go about writing the equivalent of ReadDirectoryChangesW() for Mac OS X in user space (and do so very efficiently)?

    Read the article

  • MLS Integration (RETSBond RI or?)

    - by prodigitalson
    I'm looking specifically for detailed information on RETSBond Integrator (RI) or something similar. Has anyone used it? Drawbacks? Benefits? What I need is something that provides a PHP API out of the box, or some kind of RPC exposure. This product seems to provide an API for batch-querying the MLS server and putting the data in my own DB, which is acceptable, although ideally I'd prefer something totally external. Do any MLS services you are aware of provide that? I realize this is somewhat subjective, but I'm looking for a starting point on different services/vendors to research.

    Read the article

  • How to install DBD::mysql on OS X Server 10.6?

    - by Zoran Simic
    Trying to install DBD::mysql on OS X Server 10.6 (Mac mini server), but I'm apparently missing the MySQL headers. Since MySQL is already part of OS X Server 10.6, I would like NOT to install anything else (no Fink or DarwinPorts installs), just whatever's needed to get DBD::mysql installed and working. Do you know how I could do that? Do I have to install the headers somewhere, and if so, where? (Again: I don't want to install another version of MySQL on the box; I want to use the version it came with.) Is there a way to install DBD::mysql without compiling any C files? This is the error I get (the actual output is much longer, but these are the most meaningful bits; this is the first error reported):

        Checking if your kit is complete...
        Looks good
        Unrecognized argument in LIBS ignored: '-pipe'
        Note (probably harmless): No library found for -lmysqlclient
        Multiple copies of Driver.xst found in:
          /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
          /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level/auto/DBI/
        at Makefile.PL line 907
        Using DBI 1.611 (for perl 5.010000 on darwin-thread-multi-2level)
          installed in /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
        Writing Makefile for DBD::mysql
        cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm
        cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm
        cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod
        cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm
        gcc-4.2 -c -I/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI
          -I/usr/include -fno-omit-frame-pointer -pipe -D_P1003_1B_VISIBLE
          -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT
          -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch x86_64 -arch i386 -arch ppc -g -pipe
          -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -Os
          -DVERSION=\"4.014\" -DXS_VERSION=\"4.014\"
          "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" dbdimp.c
        In file included from dbdimp.c:20:
        dbdimp.h:22:49: error: mysql.h: No such file or directory
        dbdimp.h:23:45: error: mysqld_error.h: No such file or directory
        dbdimp.h:25:49: error: errmsg.h: No such file or directory

    Read the article

  • Problem Linking Boost Filesystem Library in Microsoft Visual C++

    - by Scott
    Hello. I am having trouble getting my project to link against the Boost (version 1.37.0) Filesystem lib file in Microsoft Visual C++ 2008 Express Edition. The Filesystem library is not a header-only library. I have been following the Getting Started on Windows guide posted on the official Boost web page. Here are the steps I have taken:

    1. I used bjam to build the complete set of lib files:

        bjam --build-dir="C:\Program Files\boost\build-boost" --toolset=msvc --build-type=complete

    2. I copied the /libs directory (located in C:\Program Files\boost\build-boost\boost\bin.v2) to C:\Program Files\boost\boost_1_37_0\libs.

    3. In Visual C++, under Project Properties -> Additional Library Directories, I added these paths (I added the second one out of desperation; it is the exact directory where libboost_system-vc90-mt-gd-1_37.lib resides):

        C:\Program Files\boost\boost_1_37_0\libs
        C:\Program Files\boost\boost_1_37_0\libs\filesystem\build\msvc-9.0express\debug\link-static\threading-multi

    4. In Configuration Properties -> C/C++ -> General -> Additional Include Directories, I added the following path:

        C:\Program Files\boost\boost_1_37_0

    5. Then, to put the icing on the cake, under Tools -> Options -> VC++ Directories -> Library files, I added the same directories mentioned in step 3.

    Despite all this, when I build my project I get the following error:

        fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-gd-1_37.lib'

    Additionally, here is the code that I am attempting to compile, as well as a screen shot of the aforementioned directory where the (assumedly correct) lib file resides:

        #include "boost/filesystem.hpp" // includes all needed Boost.Filesystem declarations
        #include <iostream>             // for std::cout

        using namespace boost::filesystem; // for ease of tutorial presentation;
                                           // a namespace alias is preferred practice in real code
        using namespace std;

        int main()
        {
            cout << "Hello, world!" << endl;
            return 0;
        }

    Can anyone help me out? Let me know if you need to know anything else. As always, thanks in advance.

    Read the article

  • Node.js/v8: How to make my own snapshot to accelerate startup

    - by Anand
    I have a node.js (v0.6.12) application that starts by evaluating a JavaScript file, startup.js. Evaluating startup.js takes a long time, and I'd like to 'bake it in' to a custom build of Node if possible. The v8 source directory distributed with Node, node/deps/v8/src, contains a SConscript that can almost be used to do this. On line 302 we have:

        LIBRARY_FILES = '''
        runtime.js
        v8natives.js
        array.js
        string.js
        uri.js
        math.js
        messages.js
        apinatives.js
        date.js
        regexp.js
        json.js
        liveedit-debugger.js
        mirror-debugger.js
        debug-debugger.js
        '''.split()

    Those JavaScript files are present in the same directory. Something in the build process apparently evaluates them, takes a snapshot of the resulting state, and saves it as a byte string in node/out/Release/obj/release/snapshot.cc (on Mac OS). Some customization of the startup snapshot is possible by altering the SConscript. For example, I can change the definition of the builtin Date.toString by altering date.js. I can even add new global variables by adding startup.js to the list of library files, with contents global.test = 1. However, I can't put just any JavaScript code in startup.js. If it contains Date.toString = 1;, an error results even though the code is valid at the Node REPL:

        Build failed: -> task failed (err #2):
        {task: libv8.a SConstruct -> libv8.a}
        make: *** [program] Error 1

    And it obviously can't make use of code that depends on libraries Node adds to v8; global.underscore = require('underscore'); causes the same error. I'd ideally like a tool, customSnapshot, where customSnapshot startup.js evaluates startup.js with node and then dumps a snapshot to a file, snapshot.cc, which I can put into the node source directory. I can then build node and tell it not to rebuild the snapshot.

    Read the article

  • casting void* to float* creates only zeros

    - by Paperflyer
    I am reading an audio file using Core Audio (Extended Audio File Read Services). The audio data gets converted to 4-byte floats and handed to me as a void* buffer. It can be played with Audio Queue Services, so its content is correct. Next, I want to draw a waveform and thus need access to the actual samples. So, I cast void* audioData to float*:

        Float32 *floatData = (Float32 *)audioData;

    When accessing this data, however, I only get 0.0 regardless of the index:

        Float32 value = floatData[index]; // Is always zero for any index

    Am I doing something wrong with the cast?

    Read the article

  • Azure : The process cannot access the file "" because it is being used by another process.

    - by Shantanu
    Hi all, I am trying to get a MATLAB-compiled exe running on the Azure cloud. For that I need to get a v78.zip onto the local storage of the cloud instance and unzip it before I can try to run the exe. The program works fine when executed locally, but on deployment it gives an error at the line marked below in the code. The error is:

        The process cannot access the file
        'C:\Resources\directory\cc0a20f5c1314f299ade4973ff1f4cad.WebRole.LocalStorage1\v78.zip'
        because it is being used by another process.

        Exception Details: System.IO.IOException: The process cannot access the file
        'C:\Resources\directory\cc0a20f5c1314f299ade4973ff1f4cad.WebRole.LocalStorage1\v78.zip'
        because it is being used by another process.

    The code is given below:

        string localPath = RoleEnvironment.GetLocalResource("LocalStorage1").RootPath;
        Response.Write(localPath + " \n");
        Directory.SetCurrentDirectory(localPath);

        CloudBlob mblob = GetProgramContainer().GetBlobReference("v78.zip");
        CloudBlockBlob mbblob = mblob.ToBlockBlob;
        CloudBlob zipblob = GetProgramContainer().GetBlobReference("7z.exe");

        string zipPath = Path.Combine(localPath, "7z.exe");
        string matlabPath = Path.Combine(localPath, "v78.zip");

        IEnumerable<ListBlockItem> blocklist = mbblob.DownloadBlockList();
        BlobStream stream = mbblob.OpenRead();
        FileStream fs = File.Create(matlabPath); // <-- Exception occurs here

    It'll be a great help if someone could tell me where I'm going wrong. Thanks! Shan

    Read the article

  • WCF (REST) multiple host headers with one endpoint

    - by Maan
    I have an issue with a WCF REST service (.NET 4) that has multiple host headers but one endpoint. The host headers are, for example:

        xxx.yyy.net
        xxx.yyy.com

    Both host headers are configured in IIS over HTTPS and redirect to the same WCF service endpoint. I have an error-handling behavior which logs some extra information in case of an error. The problem is that the logging behavior only works for one of the two URLs. When I first call the .net URL, logging only works for requests on the .net URL. When I first call the .com URL (after a worker process recycle), it only works for requests on the .com URL. The configuration looks like this:

        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
          <services>
            <service name="XXX.RemoteHostService">
              <endpoint address="" behaviorConfiguration="RemoteHostEndPointBehavior"
                        binding="webHttpBinding" bindingConfiguration="HTTPSTransport"
                        contract="XXX.IRemoteHostService" />
            </service>
          </services>
          <extensions>
            <behaviorExtensions>
              <add name="errorHandling"
                   type="XXX.ErrorHandling.ErrorHandlerBehavior, XXX.Services, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
            </behaviorExtensions>
          </extensions>
          <bindings>
            <webHttpBinding>
              <binding name="HTTPSTransport">
                <security mode="Transport">
                  <transport clientCredentialType="None"/>
                </security>
              </binding>
            </webHttpBinding>
          </bindings>
          <behaviors>
            <endpointBehaviors>
              <behavior name="RemoteHostEndPointBehavior">
                <webHttp />
                <errorHandling />
              </behavior>
            </endpointBehaviors>
          </behaviors>
          ...

    Should I configure multiple endpoints? Or how should I configure the WCF service so that the logging behavior works for both URLs? I have tried several things, including solutions mentioned earlier on StackOverflow, but no luck until now...

    Read the article

  • [Rails] OAuth with Digg API

    - by Karl
    I'm attempting to get Rails to play nicely with the Digg API's OAuth. I'm using the oauth gem (the Ruby one, not the Rails one). My code looks approximately like this:

        @consumer = OAuth::Consumer.new(API_KEY, API_SECRET,
          :scheme            => :header,
          :http_method       => :post,
          :oauth_callback    => "http://locahost:3000", # sic - possibly a typo for "localhost"
          :request_token_url => 'http://services.digg.com/1.0/endpoint?method=oauth.getRequestToken',
          :access_token_url  => 'http://services.digg.com/1.0/endpoint?method=oauth.getAccessToken',
          :authorize_url     => 'http://digg.com/oauth/authorize')

        @request_token = @consumer.get_request_token
        session[:request_token] = @request_token.token
        session[:request_token_secret] = @request_token.secret
        redirect_to @request_token.authorize_url

    This is by-the-book in terms of what the gem documentation gave me. However, Digg spits a "400 Bad Request" error back at me when @consumer.get_request_token is called. I can't figure out what I'm doing wrong. Any ideas?

    Read the article

  • How to generalise the endpoints in my ChannelFactory

    - by Meher
    Hi, I have a requirement to generalise the endpoints of different WCF services, create a proxy, and invoke an action. We have 4 pages and 4 services to serve those pages; for each page we have to call the specific service endpoint and invoke the action. Example:

        private IList<FunctionCodes> i_oFunctionList;
        ChannelFactory<IFunctionService> m_oFunctionFactory;
        IFunctionService m_oFunctionProxy;

        m_oFunctionFactory = new ChannelFactory<IFunctionService>("FunctionServiceEndPoint");
        i_oFunctionList = m_oFunctionProxy.GetAllFunction(iFirstResult, iMaxPageSize, "", "", "", out iRows);

        BindGrid bindGrid = new BindGrid(DisplayGrid1);
        bindGrid.DataTable = BuildDataTable(i_oFunctionList);

    The requirement is to move this section (binding the grid) into a user control, generalising the endpoints, creating the proxy and invoking the action. Is there any way to achieve this? Quick responses are really appreciated.

    Read the article

  • Maven String Replace of Text Web Resources

    - by Jaco van Niekerk
    I have a Maven web application with text files in src/main/webapp/textfilesdir. As I understand it, during the package phase this textfilesdir directory is copied into the target/project-1.0-SNAPSHOT directory, which is then zipped up into target/project-1.0-SNAPSHOT.war.

    Problem: I need to do a string replacement on the contents of the text files in target/project-1.0-SNAPSHOT/textfilesdir. This must happen after textfilesdir is copied into target/project-1.0-SNAPSHOT, but prior to the target/project-1.0-SNAPSHOT.war file being created. I believe this is all done during the package phase. How can a plugin (potentially maven-antrun-plugin) hook into the package phase to do this? The text files don't contain properties, like ${property-name}, to filter on; string replacement is likely the only option.

    Options:
    1. Modify the text files after the copy into the target/project-1.0-SNAPSHOT directory, yet prior to the WAR creation.
    2. After packaging, extract the text files from the WAR, modify them, and add them back into the WAR.

    I'm thinking there is another option here I'm missing. Thoughts, anyone?

    Read the article

  • Clarification needed: How does .NET runtime resolve assembly references from parent folder?

    - by aoven
    I have the following output structure of executables in my solution:

        %ProgramFiles%
        |
        +-[MyAppName]
           |
           +-[Client]
           |  |
           |  +-(EXE & several DLL assemblies)
           |
           +-[Common]
           |  |
           |  +-[Schema Assemblies]
           |  |  |
           |  |  +-(several DLL assemblies)
           |  |
           |  +-(several DLL assemblies)
           |
           +-[Server]
              |
              +-(EXE & several DLL assemblies)

    Each project in the solution references different DLL assemblies, some of which are outputs of other projects in the solution, and others are plain third-party assemblies. For example, the [Client] EXE might reference an assembly in [Common], which is in a different directory branch. All references have "Copy Local" set to false, to mirror the layout of the files in the final installed application. Now, if I take a look at the reference properties in the Visual Studio IDE, I see that the "Path" of every reference is absolute and that it corresponds to the actual output location of the assembly. That's understandable and correct. As expected, the solution compiles and runs just fine. What I don't understand is why everything seems to work even when I close the IDE, rename the [MyAppName] directory and run the [Client] EXE manually. How does the runtime find the assemblies if the reference paths aren't the same as they were at the time of linking? To be clear - this is actually exactly what I'm after: a semi-dispersed set of application files that run fine regardless of where the [MyAppName] directory is located or even what it's named. I'd just like to know how and why this works without any specific path resolution on my part. I've read the answers to this similar question, but I still don't get it. Help much appreciated!

    Read the article

  • Security implications of writing files using PHP

    - by susmits
    I'm currently trying to create a CMS using PHP, purely in the interest of education. I want administrators to be able to create content, which will be parsed and saved on server storage in pure HTML form, to avoid the overhead that executing PHP scripts would incur. Unfortunately, I could only think of a few ways of doing so:

    1. Setting write permission on every directory where the CMS should want to write a file. This sounds like quite a bad idea.
    2. Setting write permissions on a single cache directory. A PHP script could then include or fopen/fread/echo the content from a file in the cache directory at request time. This could perhaps be carried out in a MediaWiki-esque fashion: something like index.php?page=xyz could read and echo content from cached/xyz.html at runtime. However, I'll need to ensure the sanity of $_GET['page'] to prevent nasty variations like index.php?page=http://www.bad-site.org/malicious-script.js (see the sketch after this list).

    I'm personally not too thrilled by the second idea, but the first one sounds very insecure. Could someone please suggest a good way of getting this done?
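    A minimal sketch of hardening the second approach, assuming the cached pages live in a cached/ directory next to index.php and page names are plain slugs; the regex, directory name, and 404 handling are illustrative choices, not from the question (http_response_code needs PHP 5.4+).

        <?php
        // index.php - serve a pre-rendered page from the cache.
        $page = isset($_GET['page']) ? $_GET['page'] : 'home';

        // Allow only simple slugs; this single check rules out full URLs,
        // directory traversal ("../") and null bytes.
        if (!preg_match('/^[A-Za-z0-9_-]+$/', $page)) {
            http_response_code(404);
            exit('Page not found');
        }

        $file = __DIR__ . '/cached/' . $page . '.html';
        if (!is_file($file)) {
            http_response_code(404);
            exit('Page not found');
        }

        readfile($file); // the static HTML is echoed, never executed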

    Read the article

  • Accessing protected REST endpoint with JQuery

    - by Andy
    I have a site where members log in to their account (FormsAuth). I would like to set up a RESTful service that I can access using jQuery, and I would like to protect these services using the same FormsAuth. How would a third-party site be able to access these services? They would need to pass the Principal/Identity to the service, right? I've only seen examples of Basic Authentication (which Twitter uses and jQuery supports). I'm very new to WCF/REST, so I'm not sure how this should be done.

    Read the article

  • nested include in php

    - by aeonsleo
    The directory structure:

        C:/wamp/www/application/model/data_access/data_object.php
        C:/wamp/www/application/model/users/user.class.php
        C:/wamp/www/application/controller/projects.php
        C:/wamp/www/application/controller/links/links.php

    I have two PHP files, data_object.php and user.class.php. user.class.php has an include statement for data_object.php which is relative to user.class.php; the two files are under different directory hierarchies. Now I have to include user.class.php in various files (like projects.php and links.php, which are themselves under different hierarchies) whenever I want to create a User() object. The problem is that the relative path for the inclusion of data_object.php works for, say, projects.php, but if I open links.php the error message says it could not open data_object.php in user.class.php. What I think is happening: for the relative inclusion of data_object.php, PHP is considering the path of the file in which user.class.php is included. I am facing such problems in more than one scenario, and I have to keep my directory structure the way it is, so I need a way to make nested includes work. I also tried the document root, which gives the root path as C:/wamp/www/ (note: the forward slash is present after www), and prepended it to the data_object.php include, but this is not working either. I am currently running on WAMP's localhost, but after completion I have to host the solution on a domain. Please help.
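    A common fix, sketched under the assumption that the include in user.class.php is a plain relative require_once: anchor the include to the including file's own directory rather than to the calling script's working directory. dirname(__FILE__) is the pre-PHP-5.3 spelling of __DIR__.

        <?php
        // In user.class.php: resolve data_object.php relative to this file,
        // no matter which controller included user.class.php first.
        require_once dirname(__FILE__) . '/../data_access/data_object.php';

    With that anchor in place, projects.php and links.php can include user.class.php from anywhere in the tree and the nested include still resolves.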

    Read the article

  • HTTP Error 403.1 - Permissions are fixed, what else is wrong?

    - by baron
    I have developed an HTTP handler web service and have had it successfully deployed. Through testing in other environments I've run into another problem. This is the error message I receive:

        You have attempted to execute a CGI, ISAPI, or other executable program from a directory
        that does not allow programs to be executed.
        HTTP Error 403.1 - Forbidden: Execute access is denied.
        Internet Information Services (IIS)

    So it seemed obvious what needed to be fixed:

    1) Start Internet Information Services (IIS) Manager.
    2) Right-click the Web site that contains the SharePoint Web site that you created, and then click Properties.
    3) Click the Home Directory tab.
    4) Under Application settings, click either Scripts only or Scripts and Executables in the Execute permissions list (as appropriate to your situation). Click OK.
    5) Quit IIS Manager.

    But I'm still getting the same error. So what else could be wrong?

    Read the article

  • Trying to Use LoadMoreElement in Monotouch.Dialog

    - by user1487581
    I am using MonoTouch to write an iPad app. The app uses tables to browse down through a directory tree and then select a file. I have used MonoTouch.Dialog to browse the directories, and I set up the directory tables as the app starts. However, there are too many files to set up in a table at startup, so I want to build the 'file table' when a file is selected from the lowest-level directory table. I am trying to use LoadMoreElement to do this, but I cannot make it work or find any examples online. I have coded the 'Elements API Walkthrough' in the Xamarin tutorial at:

        http://docs.xamarin.com/ios/tutorials/MonoTouch.Dialog

    I then add a new section to the code:

        _addButton.Clicked += (sender, e) => {
            ++n;
            var task = new Task { Name = "task " + n, DueDate = DateTime.Now };
            var taskElement = new RootElement (task.Name) {
                new Section () {
                    new EntryElement (task.Name, "Enter task description", task.Description)
                },
                new Section () {
                    new DateElement ("Due Date", task.DueDate)
                },
                new Section () {
                    new LoadMoreElement ("Passive", "Active", delegate { MyAction(); })
                }
            };
            _rootElement [0].Add (taskElement);
        };

    where MyAction is:

        public void MyAction ()
        {
            Console.WriteLine ("we have been actioned");
        }

    The problem is that MyAction is triggered and Console.WriteLine writes the message, but the table stays in the active state and never returns to passive. The documentation says:

        Once your code in the NSAction is finished, the UIActivity indicator stops animating
        and the normal caption is displayed again.

    What am I missing? Ian

    Read the article

  • How can I display an ASP.NET MVC html part from one application in another

    - by Frank Sessions
    We have several ASP.NET MVC apps in the following setup:

        SecurityApp (root application - handles forms auth for SSO and has a profile edit page)
            Application1 (virtual directory)
            Application2 (virtual directory)
            Application3 (virtual directory)

    so that domain.com points to SecurityApp, and domain.com/Application1 etc. point to their associated virtual directories. All of our single sign-on (SSO) is working properly using forms authentication. Based on the user's permissions when logging in, a menu that lists their available applications and a logout link is generated and saved in the cache. This menu displays fine whenever the user is in the SecurityApp (editing their profile), but we cannot figure out how to get the applications in the virtual directories to display the same application menu. We have tried:

    1. Using JSONP to make a request that returns the HTML for the menu. The ajax call returns the HTML; however, because User.IsAuthenticated is false, the menu comes back empty.
    2. Creating a user control and including it, along with the DLLs for the SecurityApp project, in each application. This works; however, we don't want to have to include all the DLLs for the SecurityApp project (along with all the app settings in the web.config) in every application that we create.

    We would like this to be as simple as possible to implement, so that anyone creating a new app can add the menu to their application in as few steps as possible... Any ideas? To clarify: we are using ASP.NET MVC 1.0, since these apps are in production and we do not have the okay to go to ASP.NET MVC 2.0 (unfortunately).

    Read the article

  • What is the purpose of the garbage (files) that Qt Creator auto-generates and how can I tame them?

    - by Venemo
    Hello Everyone, I'm fairly new to Qt. I'm using the new Nokia Qt SDK beta, and I'm working in my free time to develop a small application for my Nokia N900. Fortunately, I was able to set up everything correctly and to run my app on the device. I learned C++ in school, so I thought it wouldn't be so difficult. I use Qt Creator as my IDE, because the SDK doesn't work with Visual Studio. I also wish to port my app to Symbian, so I have run the emulator a few times, and I also compile for Windows to debug the most evil bugs. (The debugger doesn't work correctly on the device.) I come from a .NET background, so there are some things that I don't understand. When I hit the build button, Qt Creator generates a bunch of files in my project directory:

    - moc_*.cpp files - I don't know their purpose. Could someone tell me?
    - *.o files - I assume these are the object code.
    - *.rss files - I don't know their purpose, but they definitely don't have anything to do with RSS.
    - Makefile and Makefile.Debug - I have no idea.
    - AppName (without extension) - the executable for Maemo, and AppName.sis - the executable for Symbian, I guess?
    - AppName.loc - I have no idea.
    - AppName_installer.pkg and AppName_template.pkg - I have no idea.
    - qrc_Resources.cpp - I guess this is for my Qt resources.

    (where AppName is the name of the application in question)

    I noticed that these files can be safely deleted; Qt Creator simply regenerates them. The problem is that they pollute my source directory, especially because I use version control, and if they can be regenerated there is no point in uploading them to SVN. So, could someone please tell me what the exact purpose of these files is, and how I can ask Qt Creator to place them in another directory? Thank you in advance!

    Read the article
