Search Results

Search found 13713 results on 549 pages for 'production environment'.


  • Mixed Mode C++ DLL function call failure when app launched from network share, called from unmanaged code

    - by Steve
    Mixed-mode DLL called from a native C application fails to load:

        An unhandled exception of type 'System.IO.FileLoadException' occurred in Unknown Module.
        Additional information: Could not load file or assembly 'XXSharePoint, Version=0.0.0.0,
        Culture=neutral, PublicKeyToken=e0fbc95fd73fff47' or one of its dependencies. Failed to
        grant minimum permission requests. (Exception from HRESULT: 0x80131417)

    My environment is a native C application calling a mixed-mode C++ DLL, which then loads a C# DLL. This works correctly when loaded from a local drive, but when launched from a network drive it fails with the above message. The call to LoadLibrary succeeds, as does GetProcAddress; the load error happens when I call the function.

    I have digitally signed the C application, and I have strong-name signed the two DLLs. The PublicKeyToken in the message above does match the named DLL. I have also issued CASPOL commands on my client to grant FullTrust to that strong-name key token. When that failed to work, I tried the CASPOL command to grant FullTrust to the URL of the network drive (including the path to my application's directory); no change in results.

    I tried removing all dependencies, so that there was just the initial mixed-mode DLL, and replaced the bodies of all the functions with just a return of a "success" integer value. Results unchanged. Only when I changed the project from Mixed Mode to Win32, and changed Configuration Properties > General > Common Language Runtime Support from "Common Language Runtime Support" to "No Common Language Runtime Support", did calling the DLL produce the expected result (it just returned the "success" integer value).
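    For reference, a hedged sketch of the CAS policy grants described above (the share path and code-group label are placeholders; caspol syntax is for the .NET 2.0/3.5-era runtimes):

        rem grant FullTrust to everything under the application's share (hypothetical path)
        caspol -machine -addgroup 1.2 -url "file://\\server\share\app\*" FullTrust -name "AppShare"
        rem verify the new code group was added under the LocalIntranet group (label 1.2)
        caspol -machine -listgroups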


  • Rails 2.3.2 trying to render ERB instead of HAML

    - by c00lryguy
    Rails is suddenly trying to render ERB instead of Haml and I can't figure out why. I've created new Rails projects, reinstalled Haml, and reinstalled Rails. Here are exactly the steps I take when creating the application (Rails 2.3.2):

        rails> rails test
        rails> cd test
        rails\test> haml --rails .
        rails\test> ruby script\generate model user email:string password:string
        rails\test> ruby script\generate controller users index
        rails\test> rake db:migrate

    Here's what the UsersController looks like:

        class UsersController < ApplicationController
          def index
            @users = User.all
          end
        end

    My routes:

        ActionController::Routing::Routes.draw do |map|
          map.resources :users
        end

    I now create views\users\index.html.haml:

        %table
          %th(style="text-align: left;")
            %h1 Users
          - for user in @users
            %tr
              %td= user.email
              %td= user.password

    And run the server. I navigate to localhost:3000/users and get this error message:

        Template is missing
        Missing template users/index.erb in view path app/views

    For some reason Rails is trying to find and render .erb files instead of .haml files. vendor\plugins\haml\init.rb exists, untouched. I've reinstalled Haml (Pretty Penny) multiple times and still get the same results. I've also tried adding config.gem 'haml' to my environment.rb, but that doesn't work either. I can't figure out why Rails suddenly will not render Haml for me.
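    For comparison, a hedged sketch of the gem-based setup that replaces the vendored plugin (assuming the Haml gem is installed; the version constraint is illustrative):

        # config/environment.rb
        Rails::Initializer.run do |config|
          # load Haml as a gem instead of vendor/plugins/haml
          config.gem 'haml', :version => '>= 2.0.9'
        end

    followed by `rake gems:install`. If the plugin route is kept instead, vendor\plugins\haml\init.rb must actually run so Haml registers itself as a template handler; a plugin load-path restriction in environment.rb can silently skip it.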


  • When *not* to use prepared statements?

    - by Ben Blank
    I'm re-engineering a PHP-driven web site which uses a minimal database. The original version used "pseudo-prepared-statements" (PHP functions which did quoting and parameter replacement) to prevent injection attacks and to separate database logic from page logic. It seemed natural to replace these ad-hoc functions with an object which uses PDO and real prepared statements, but after doing my reading on them, I'm not so sure. PDO still seems like a great idea, but one of the primary selling points of prepared statements is being able to reuse them… which I never will. Here's my setup:

    - The statements are all trivially simple. Most are in the form SELECT foo,bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1. The most complex statement in the lot is simply three such selects joined together with UNION ALLs.
    - Each page hit executes at most one statement and executes it only once.
    - I'm in a hosted environment and therefore leery of slamming their servers by doing any "stress tests" personally.

    Given that using prepared statements will, at minimum, double the number of database round-trips I'm making, am I better off avoiding them? Can I use PDO::MYSQL_ATTR_DIRECT_QUERY to avoid the overhead of multiple database trips while retaining the benefit of parametrization and injection defense? Or do the binary calls used by the prepared statement API perform well enough compared to executing non-prepared queries that I shouldn't worry about it?

    EDIT: Thanks for all the good advice, folks. This is one where I wish I could mark more than one answer as "accepted"; lots of different perspectives. Ultimately, though, I have to give rick his due… without his answer I would have blissfully gone off and done the completely Wrong Thing even after following everyone's advice. :-) Emulated prepared statements it is!
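    Since the conclusion was emulated prepared statements, a minimal sketch of that setup (DSN and credentials are placeholders):

        <?php
        // Emulated prepares: parameters are quoted client-side and the query is sent
        // in a single round-trip, so there is no separate prepare/execute trip.
        $pdo = new PDO('mysql:host=localhost;dbname=baz_db', 'user', 'pass', array(
            PDO::ATTR_EMULATE_PREPARES => true,
        ));
        $stmt = $pdo->prepare('SELECT foo, bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1');
        $stmt->execute(array($quux));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);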


  • Why does output of fltk-config truncate arguments to gcc?

    - by James Morris
    I'm trying to build an application I've downloaded which uses the SCons "make replacement" and the Fast Light Toolkit GUI. The SConstruct code to detect the presence of FLTK is:

        guienv = Environment(CPPFLAGS = '')
        guiconf = Configure(guienv)
        if not guiconf.CheckLibWithHeader('lo', 'lo/lo.h', 'c'):
            print 'Did not find liblo for OSC, exiting!'
            Exit(1)
        if not guiconf.CheckLibWithHeader('fltk', 'FL/Fl.H', 'c++'):
            print 'Did not find FLTK for the gui, exiting!'
            Exit(1)

    Unfortunately, on my (Gentoo Linux) system, and many other Linux distributions, this can be quite troublesome if the package manager allows the simultaneous installation of FLTK-1 and FLTK-2. I have attempted to modify the SConstruct file to use fltk-config --cflags and fltk-config --ldflags (or fltk-config --libs, which might be better than --ldflags) by adding them like so:

        guienv.Append(CPPPATH = os.popen('fltk-config --cflags').read())
        guienv.Append(LIBPATH = os.popen('fltk-config --ldflags').read())

    But this causes the test for liblo to fail! Looking in config.log shows how it failed:

        scons: Configure: Checking for C library lo...
        gcc -o .sconf_temp/conftest_4.o -c "-I/usr/include/fltk-1.1 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_THREAD_SAFE -D_REENTRANT"
        gcc: no input files
        scons: Configure: no

    How should this really be done? And to complete my answer: how do I remove the quotes from the result of os.popen('command').read()?

    EDIT: The real question here is: why does appending the output of fltk-config cause gcc to not receive the filename argument it is supposed to compile?
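    The config.log output gives the clue: the whole fltk-config string (trailing newline included) is appended as a single element, so SCons hands it to gcc as one quoted argument. A hedged sketch of the usual fix, letting SCons tokenize and distribute the flags itself:

        # ParseConfig runs the command and splits -I/-L/-l style flags into
        # CPPPATH/LIBPATH/LIBS instead of appending one opaque string
        guienv.ParseConfig('fltk-config --cxxflags --ldflags')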


  • Is there a disassembler + debugger for Java (a la OllyDbg / SoftICE for assembler)?

    - by Ran Biron
    Is there a utility similar to OllyDbg / SoftICE for Java? That is: execute a class (from a jar / with a class path) and, without source code, show the disassembly of the intermediate code, with the ability to step through / step over / search for references / edit specific intermediate code in memory / apply edits to the file... If not, is it even possible to write something like this (assuming we're willing to live without HotSpot for the duration of the debugging)?

    Edit: I'm not talking about JAD or JD or Cavaj. These are fine decompilers, but I don't want a decompiler for several reasons, most notably that their output is incorrect (at best; sometimes it is just plain wrong). I'm not looking for a magical "compiled bytes to Java code" tool - I want to see the actual bytes that are about to be executed. Also, I'd like the ability to change those bytes (just like in an assembly debugger) and, hopefully, write the changed part back to the class file.

    Edit 2: I know javap exists - but it only goes one way (and without any sort of analysis). Example (code taken from the VM spec documentation): from this Java code, we use javac to compile:

        void setIt(int value) {
            i = value;
        }
        int getIt() {
            return i;
        }

    Using javap -c on the result, I can get this output:

        Method void setIt(int)
           0 aload_0
           1 iload_1
           2 putfield #4
           5 return

        Method int getIt()
           0 aload_0
           1 getfield #4
           4 ireturn

    This is OK for the disassembly part (though not really good without analysis - "field #4 is Example.i"), but I can't find the two other "tools":

    - A debugger that steps over the instructions themselves (with stack, memory dumps, etc.), allowing me to examine the actual code and environment.
    - A way to reverse the process - edit the disassembled code and recreate the .class file (with the edited code).
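    On the read-and-write-back half, a hedged sketch using the ASM bytecode library (org.objectweb.asm; the class name here is illustrative). It prints a javap-like disassembly; the same library's ClassWriter can reassemble a .class file after a visitor rewrites instructions:

        import java.io.FileInputStream;
        import java.io.PrintWriter;
        import org.objectweb.asm.ClassReader;
        import org.objectweb.asm.util.TraceClassVisitor;

        public class BytecodeDump {
            public static void main(String[] args) throws Exception {
                // parse the class file and stream its bytecode through a textual printer
                ClassReader reader = new ClassReader(new FileInputStream(args[0]));
                reader.accept(new TraceClassVisitor(new PrintWriter(System.out)), 0);
            }
        }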


  • What should a developer know before building a public web site?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web site address before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well?

    I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports).

    Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web.

    Also: I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that, which standards? In what circumstances, and why? Provide a link to the standard's specification.

    This question is community wiki, so please feel free to edit the answer to add links to good articles that will help explain or teach each particular point.


  • C# text creation issue

    - by Mike
    Here's what's going on. I have a huge text file that is supposed to be one line per entry. The issue is that sometimes a line is broken by a stray newline. I process the entire file, and wherever a line doesn't begin with "\"A" I need to append the current line to the previous line (replacing the \n with " "). Everything I come up with keeps appending the line on a new line instead. Any help is appreciated.

    Code:

        public void step1a()
        {
            string begins = ("\"A");
            string betaFilePath = @"C:\ext.txt";
            string[] lines = File.ReadAllLines(betaFilePath);
            foreach (string line in lines)
            {
                if (line.StartsWith(begins))
                {
                    File.AppendAllText(@"C:\xt2.txt", line);
                    File.AppendAllText(@"C:\xt2.txt", "\n");
                }
                else
                {
                    string line2 = line.Replace(Environment.NewLine, " ");
                    File.AppendAllText(@"C:\xt2.txt", line2);
                }
            }
        }

    Example input:

        "\"A"Hero|apple|orange|for the fun of this
        "\"A"Hero|apple|mango|lots of fun always
        "\"A"Her|apple|fruit|no pain is the way
        "\"A"Hero|love|stackoverflowpeople|more fun

    Desired result: the same entries, each on a single line:

        "\"A"Hero|apple|orange|for the fun of this
        "\"A"Hero|apple|mango|lots of fun always
        "\"A"Her|apple|fruit|no pain is the way
        "\"A"Hero|love|stackoverflowpeople|more fun
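    A hedged sketch of one way around this: the newline after an entry is written before we know whether the next physical line is a continuation, so buffer instead and only start a new output line when a line begins with the prefix. (Note that File.ReadAllLines already strips line terminators, which is why Replace(Environment.NewLine, " ") never finds anything to replace.)

        // assumes: using System.IO; using System.Text;
        var sb = new StringBuilder();
        foreach (string line in File.ReadAllLines(@"C:\ext.txt"))
        {
            if (line.StartsWith("\"A"))
            {
                if (sb.Length > 0) sb.Append('\n');   // close the previous entry
                sb.Append(line);
            }
            else
            {
                sb.Append(' ').Append(line);          // rejoin a broken entry with a space
            }
        }
        File.WriteAllText(@"C:\xt2.txt", sb.ToString());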


  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into two parts: the Front End, which holds all the objects except tables, and the Back End, which holds the tables. The MSDN page links to the article "Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability", which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on the server/shared folder while the Front End is distributed to each user. That implies that each time any changes are made to the Front End, they need to be deployed to every user's machine.

    My question is: assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers, like possible corruption etc.? Thank you.

    EDIT: Just to clarify, the scenario specified in the question assumes one Front End stored on the server and shared by the users. I understand that the recommendation is to have the FE deployed to each user's machine, but my question is more about what the dangers are if that is not done. E.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming that the performance is acceptable and the customer is reluctant to change the approach, would you still push for the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case? This is a follow-up to my previous question, "From SQL Server to MS Access 2007".


  • Makefile trickery using VPATH and include.

    - by roe
    Hi, I'm playing around with makefiles and the VPATH variable. Basically, I'm grabbing source files from a few different places (specified by VPATH) and compiling them into the current directory, using simply a list of the .o files that I want. So far so good; now I'm generating dependency information into a file called '.depend' and including that. GNU make will attempt to use the rules defined so far to create the included file if it doesn't exist, so that's OK. Basically, my makefile looks like this:

        VPATH=A/source:B/source:C/source
        objects=first.o second.o third.o

        executable: $(objects)

        .depend: $(objects:.o=.c)
        	$(CC) -MM $^ > $@

        include .depend

    Now for the real question: can I suppress the generation of the .depend file in any way? I'm currently working in a ClearCase environment - sloooow - so I'd prefer to have more control over when the dependency information is updated. It's more or less an academic exercise, as I could just wrap the thing in a script which touches the .depend file before executing make (thus making it more recent than any source file), but it'd be interesting to know if I can somehow suppress it using 'pure' make. I cannot remove the dependency on the source files (i.e. using simply .depend:), as I'm depending on the $^ variable to do the VPATH resolution for me. If there were any way to only update dependencies as a result of updated #include directives, that'd be even better of course... but I'm not holding my breath for that one. :)
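    A hedged sketch of one pure-make approach: gate both the rule and the include on an explicit goal, so .depend is only regenerated when asked for (the goal name is illustrative; recipe lines need hard tabs):

        # only expose the .depend rule when 'make depend' is requested
        ifneq (,$(filter depend,$(MAKECMDGOALS)))
        depend: .depend
        .depend: $(objects:.o=.c)
        	$(CC) -MM $^ > $@
        endif

        -include .depend   # the '-' keeps make quiet when .depend is absent

    With the rule hidden during normal builds, make includes whatever stale .depend exists instead of remaking it.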


  • How to checkout from SVN with an ANT task?

    - by Josh
    I'm interested in any way that I can create an Ant task to check out files from Subversion. I "just" want to do the checkout from the command line. I've been using Eclipse with Ant and Subversion for a while now, but my Ant and Subversion knowledge is somewhat lacking, as I relied on Eclipse to wire it all together. I've been looking at SvnAnt as one solution, which is part of Subclipse from Tigris at http://subclipse.tigris.org/svnant/svn.html. It may work fine, but all I get are NoClassDefFoundErrors. To the more experienced this probably looks like a simple Ant configuration problem, but I don't know about that. I copied svnant.jar and svnclientadapter.jar into my Ant lib directory. Then I tried to run the following:

        <?xml version="1.0"?>
        <project name="blah">
            <property environment="env"/>
            <path id="svnant.classpath">
                <pathelement location="${env.ANT_HOME}/lib"/>
                <fileset dir="${env.ANT_HOME}/lib/">
                    <include name="svnant.jar"/>
                </fileset>
            </path>
            <typedef resource="org/tigris/subversion/svnant/svnantlib.xml" classpathref="svnant.classpath"/>
            <target name="checkout">
                <svn username="abc" password="123">
                    <checkout url="svn://blah/blah/trunk" destPath="workingcopy"/>
                </svn>
            </target>
        </project>

    To which I get the following response:

        build.xml:17: java.lang.NoClassDefFoundError: org/tigris/subversion/javahl/SVNClientInterface

    I am running SVN 1.7 and SvnAnt 1.3 on Windows XP 32-bit. Thanks for any pointers!
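    The missing class lives in the JavaHL native bindings, which svnant tries by default. A hedged sketch of one workaround (attribute names are taken from svnant's documented <svn> task; verify against the docs bundled with your svnant version):

        <!-- force the pure-Java SVNKit client instead of the native JavaHL bindings -->
        <svn username="abc" password="123" javahl="false" svnkit="true">
            <checkout url="svn://blah/blah/trunk" destPath="workingcopy"/>
        </svn>

    The SVNKit route needs svnkit.jar on the svnant.classpath path; the alternative is to supply svnjavahl.jar plus its native library alongside svnant.jar and svnclientadapter.jar.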


  • Login Website, curious Cookie Problem

    - by Collin Peters
    Hello. Language: C#. Development environment: Visual Studio 2008. Sorry if the English is not perfect.

    I want to log in to a website and get some data from there. My problem is that the cookies do not work: every time, the website says that I should activate cookies, even though I enabled them through a CookieContainer. I sniffed the traffic several times during the login process and see no problem there. I have tried different methods to log in, and I have searched for others with this problem, but no results... The login page is "www.uploaded.to". Here is my code to log in, in short form:

        private void login()
        {
            // Global CookieContainer for all the cookies
            CookieContainer _cookieContainer = new CookieContainer();

            // First: log in to the website
            HttpWebRequest _request1 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/login");
            _request1.Method = "POST";
            _request1.CookieContainer = _cookieContainer;

            string _postData = "email=XXXXX&password=XXXXX";
            byte[] _byteArray = Encoding.UTF8.GetBytes(_postData);
            Stream _reqStream = _request1.GetRequestStream();
            _reqStream.Write(_byteArray, 0, _byteArray.Length);
            _reqStream.Close();

            HttpWebResponse _response1 = (HttpWebResponse)_request1.GetResponse();
            _response1.Close();

            // Follow the link from request 1
            HttpWebRequest _request2 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/login?coo=1");
            _request2.Method = "GET";
            _request2.CookieContainer = _cookieContainer;
            HttpWebResponse _response2 = (HttpWebResponse)_request2.GetResponse();
            _response2.Close();

            // Get the data from the page after login
            HttpWebRequest _request3 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/home");
            _request3.Method = "GET";
            _request3.CookieContainer = _cookieContainer;
            HttpWebResponse _response3 = (HttpWebResponse)_request3.GetResponse();
            _response3.Close();
        }

    I've been stuck on this problem for many weeks and have found no solution that works. Please help...
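    One detail worth checking, as a hedged guess rather than a confirmed fix: the POST never declares its content type, so the server may not parse the form body at all and treat the visit as a fresh, cookieless one. A minimal addition before writing the request stream:

        // declare the body as a URL-encoded form so the server parses email/password
        _request1.ContentType = "application/x-www-form-urlencoded";

    HttpWebRequest follows redirects automatically and the shared CookieContainer collects cookies along the way, so the missing header is the first thing to rule out.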


  • Some help required while working on Java and Cygwin together

    - by Hippo
    Hello. I am new to Java and also to Cygwin, and I do not have in-depth knowledge of either. I need some help, and I will try to explain my problem in simple steps:

    1) I am working on TinyOS. It is an open-source OS used for wireless sensor networks. It provides Java libraries for communication (PC to sensor).
    2) I am working in a Windows XP environment through Cygwin.
    3) I am developing an application. This application requires a Java interface called "SerialForwarder", which is readily available in the provided libraries. Previously I used to start this interface manually (by entering the command java net.tinyos.sf.SerialForwarder) and then start my application, which uses the interface. But now I want to make my application self-contained; the user should not need to know about these background Cygwin commands.
    4) So in my Java application I used Runtime.getRuntime().exec("java net.tinyos.sf.SerialForwarder").
    5) This neither gives any error nor starts the interface.

    Am I going about this the right way? When I use the runtime exec command, how can I make sure the command is executed through the Cygwin interface? Also, if I want to write a .bat file containing commands to be executed, how can I make sure those commands run through Cygwin and not through cmd.exe? Please help me.
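    A hedged sketch of how to surface what the child process is actually doing: exec does not use a shell, so any CLASSPATH set up by Cygwin scripts is not inherited, and error output vanishes unless the streams are read. The jar path below is hypothetical:

        // assumes: import java.io.BufferedReader; import java.io.InputStreamReader;
        // start the SerialForwarder with an explicit CLASSPATH and capture its output
        ProcessBuilder pb = new ProcessBuilder("java", "net.tinyos.sf.SerialForwarder");
        pb.environment().put("CLASSPATH", "C:\\tinyos\\support\\sdk\\java\\tinyos.jar"); // hypothetical path
        pb.redirectErrorStream(true); // fold stderr into stdout so errors are not lost
        Process p = pb.start();
        BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line;
        while ((line = out.readLine()) != null) {
            System.out.println("[sf] " + line); // SerialForwarder's own messages and stack traces
        }

    To route a command through Cygwin instead of cmd.exe, the usual pattern is to invoke the Cygwin shell directly, e.g. new ProcessBuilder("C:\\cygwin\\bin\\bash.exe", "-lc", "java net.tinyos.sf.SerialForwarder").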


  • Optimize Use of Ramdisk for Eclipse Development

    - by Eric J.
    We're developing Java/SpringSource applications with Eclipse on 32-bit Vista machines with 4 GB of RAM. The OS exposes roughly 3.3 GB of RAM due to reservations for hardware etc. in the virtual address space. I came across several ramdisk drivers that can create a virtual disk from the OS-hidden RAM, and am looking for suggestions on how best to use the 740 MB virtual disk to speed development in our environment. The slowest parts of development for us are compiling and launching the SpringSource dm Server.

    One option is to configure Vista to swap to the ramdisk. That works, and noticeably speeds up development in low-memory situations. However, the 3.3 GB available to the OS is often sufficient, and there are many situations where we do not use the swap file(s) much.

    Another option is to use the ramdisk as a location for temporary files. Using the Vista mklink command, I created a link from where the SpringSource dm Server's work area normally resides to the ramdisk. That significantly improves server startup times but does nothing for compile times. There are roughly 500 MB still free on the ramdisk when the work directory is fully utilized, so there is room for plenty more.

    What other files/directories might be candidates to place on the ramdisk? Eclipse-related files? (Parts of) the JDK? Is there a free/open-source tool for Vista that will show me which files are used most frequently during a period of time, to reduce the guesswork?
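    For reference, a hedged sketch of the mklink trick described above (paths are illustrative; /J creates a directory junction, the directory-level counterpart of a hard link, with R: standing in for the ramdisk drive):

        rem point the dm Server work directory at the ramdisk
        mklink /J "C:\springsource\dm-server\work" "R:\work"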


  • Merging changes to a workspace with uncommitted changes

    - by Kim L
    We've just recently switched over from SVN to Mercurial, but now we are running into problems with our workflow.

    Example: I have my local clone of the repository which I work on, and I'm making some highly experimental changes to our code base - something that I don't want to commit before I'm sure it works the way it is supposed to; I don't want to commit it even locally. Now, simultaneously, my co-worker has made some significant improvements/bug fixes which I need, and he pushes his commits to our main repository. The question is: how can I merge his changes into my workspace without being required to commit all my changes, given that I need his changes to test my own code?

    A more day-to-day problem we have with the exact same workflow is that we have a couple of configuration files in the repository. Each developer makes a couple of small, environment-specific changes to the configuration files but does not commit the changes. These few uncommitted files hinder us from making any merges to our workspace, just like the example above. Ideally, the configuration files probably shouldn't be in the repository; unfortunately, that's just how it has to be, for here-unnamed reasons.
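    A hedged sketch of the usual sequence (Mercurial folds pulled changesets into an uncommitted working copy on update, as long as you stay on the same branch):

        hg pull      # bring in the co-worker's changesets
        hg update    # merge them into the working directory, uncommitted edits included

    If the update refuses because the same files conflict, extensions such as shelve or mq can park the uncommitted changes temporarily. For the config-file case, a common pattern is to commit a config.sample and keep the real, locally edited config file untracked.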


  • android/rails multipart upload problem

    - by trioglobal
    My problem is that I try to upload an image and some text values to a Rails server, and the text values end up as files instead of plain param values. How the POST looks on the server:

        Parameters: {"action"=>"create", "controller"=>"problems",
          "problem"=>{"lon"=>#<File:/tmp/RackMultipart20100404-598-8pi1vj-0>,
          "photos_attributes"=>{"0"=>{"image"=>#<File:/tmp/RackMultipart20100404-598-pak6jk-0>}},
          "subject"=>#<File:/tmp/RackMultipart20100404-598-nje11p-0>,
          "category_id"=>#<File:/tmp/RackMultipart20100404-598-ijy1oo-0>,
          "lat"=>#<File:/tmp/RackMultipart20100404-598-1a7140w-0>,
          "email"=>#<File:/tmp/RackMultipart20100404-598-1b7w6jp-0>}}

    Part of the Android code:

        try {
            File file = new File(Environment.getExternalStorageDirectory(), "FMS_photo.jpg");
            HttpClient client = new DefaultHttpClient();
            HttpPost post = new HttpPost("http://homepage.com/path");
            FileBody bin = new FileBody(file);
            Charset chars = Charset.forName("UTF-8");

            MultipartEntity reqEntity = new MultipartEntity();
            //reqEntity.addPart("problem[subject]", subject);
            reqEntity.addPart("problem[photos_attributes][0][image]", bin);
            reqEntity.addPart("problem[category_id]", new StringBody("17", chars));
            //....
            post.setEntity(reqEntity);

            HttpResponse response = client.execute(post);
            HttpEntity resEntity = response.getEntity();
            if (resEntity != null) {
                resEntity.consumeContent();
            }
            return true;
        } catch (Exception ex) {
            //Log.v(LOG_TAG, "Exception", ex);
            globalStatus = UPLOAD_ERROR;
            serverResponse = "";
            return false;
        } finally {
        }
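    A hedged guess at the cause: HttpMime's default STRICT multipart mode adds extra headers (e.g. Content-Transfer-Encoding) to every part, and some Rack/Rails versions then treat each part as a file upload. Constructing the entity in browser-compatible mode mimics an ordinary HTML form:

        // send parts the way a browser form would, so Rack parses StringBody parts as plain params
        MultipartEntity reqEntity = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE);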


  • Is it possible to read data that has been separately copied to the Android SD card without having root access?

    - by icecream
    I am developing an application that needs to access data on the SD card. When I run on my development device (an Odroid with Android 2.1) I have root access and can construct the path using:

        File sdcard = Environment.getExternalStorageDirectory();
        String path = sdcard.getAbsolutePath() + File.separator + "mydata";
        File data = new File(path);
        File[] files = data.listFiles(new FilenameFilter() {
            @Override
            public boolean accept(File dir, String filename) {
                return filename.toLowerCase().endsWith(".xyz");
            }
        });

    However, when I install this on a phone (2.1) where I do not have root access, I get files == null. I assume this is because I do not have the right permissions to read the data from the SD card. I also get files == null when just trying to list files on /sdcard, so the same applies without my constructed path.

    Also, this app is not intended to be distributed through the app store and needs to use data copied separately to the SD card, so this is a real use-case. It is too much data to put in res/raw (I have tried; it did not work). I have also tried adding

        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

    to the manifest, even though I only want to read the SD card, but it did not help. I have not found a permission type for reading the storage. There is probably a correct way to do this, but I haven't been able to find it. Any hints would be useful.
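    A hedged first diagnostic: listFiles also returns null when the directory simply is not there or the media is not mounted (on many 2.1 phones the card is unmounted while the phone is attached as USB storage), so it is worth ruling that out before suspecting permissions:

        // assumes: import android.util.Log;
        // check the card is actually mounted and the directory exists before listing
        String state = Environment.getExternalStorageState();
        if (Environment.MEDIA_MOUNTED.equals(state)
                || Environment.MEDIA_MOUNTED_READ_ONLY.equals(state)) {
            File data = new File(Environment.getExternalStorageDirectory(), "mydata");
            Log.d("sdcheck", "exists=" + data.exists() + " isDir=" + data.isDirectory());
        } else {
            Log.d("sdcheck", "external storage state: " + state);
        }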


  • Using Excel as front end to Access database (with VBA)

    - by Alex
    I am building a small application for a friend, and they'd like to be able to use Excel as the front end (the UI will basically be userforms in Excel). They have a bunch of data in Excel that they would like to be able to query, but I do not want to use Excel as a database, as I don't think it is fit for that purpose, and I am considering using Access. [BTW, I know Access has its shortcomings, but there is zero budget available and Access is already on my friend's PC.]

    To summarise, I am considering dumping a bunch of data into Access and then using Excel as a front end to query the database and display results in a userform-style environment. Questions:

    - How easy is it to link to Access from Excel using ADO / DAO? Is it quite limited in terms of functionality, or can I get creative?
    - Do I pay a performance penalty (vs. using forms in Access as the UI)?
    - Assuming that the database will always be updated using ADO / DAO commands from within Excel VBA, does that mean I can have multiple Excel users using that one single Access database and not run into any concurrency issues etc.?
    - Any other things I should be aware of?

    I have strong Excel VBA skills and think I can pick up Access VBA quite quickly, but I have never really done the Excel / Access link before. I could shoehorn the data into Excel and use it as a quasi-database, but that just seems more pain than it is worth (and not a robust long-term solution). Any advice appreciated. Alex
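    For the first question, a hedged minimal sketch of querying Access from Excel VBA over ADO (the database path, sheet, and field names are placeholders):

        Sub QueryAccess()
            ' late-bound ADO: no reference to the ADO type library required
            Dim cn As Object, rs As Object
            Set cn = CreateObject("ADODB.Connection")
            cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
                    "Data Source=C:\data\friend.mdb"   ' hypothetical database path
            Set rs = cn.Execute("SELECT foo, bar FROM baz ORDER BY bar")
            ' dump the results onto a worksheet starting at A1
            Worksheets("Sheet1").Range("A1").CopyFromRecordset rs
            rs.Close
            cn.Close
        End Sub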


  • auto m3u creation

    - by newbie69
    Hi, I am looking for a solution that automatically creates .m3u playlists for each music folder on my SD card, so that the music player can play music by folders. I had written a simple VB.Net app in the past that does exactly this, but obviously it has to be run from Windows. Since I have no Java or Android development experience, I have found it quite hard to write a similar app that can be run directly from the phone. In a few words, the app does the following:

    1) Searches the SD card and lists all folders that contain 2 or more .mp3 files (just for user verification).
    2) Creates in every listed folder a .m3u file that simply lists, line by line, all the .mp3 files that exist in that specific folder.

    Is there such an app, or could someone spare some time and give me some rough instructions on how to create it in an Eclipse 3.5.2 environment? (Device used: Motorola Droid/Milestone, Android 2.1.) I don't care about any graphics or complex UI, just a script to execute the above procedure. That would give every playlist-supporting music player on Android the precious ability to play music by folders. I know it is too much to ask, but just in case! Thanks in advance.
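    A hedged sketch of the core of step 2 in plain Java (no UI; recursion and error handling kept minimal; the starting folder is hypothetical):

        import java.io.File;
        import java.io.FileWriter;
        import java.io.IOException;

        public class M3uMaker {
            // walk the tree; write a playlist in any folder holding 2+ mp3 files
            static void scan(File dir) throws IOException {
                File[] children = dir.listFiles();
                if (children == null) return;
                StringBuilder m3u = new StringBuilder("#EXTM3U\n");
                int mp3s = 0;
                for (File f : children) {
                    if (f.isDirectory()) {
                        scan(f);                       // recurse into subfolders
                    } else if (f.getName().toLowerCase().endsWith(".mp3")) {
                        m3u.append(f.getName()).append('\n');
                        mp3s++;
                    }
                }
                if (mp3s >= 2) {
                    FileWriter w = new FileWriter(new File(dir, dir.getName() + ".m3u"));
                    w.write(m3u.toString());
                    w.close();
                }
            }

            public static void main(String[] args) throws IOException {
                scan(new File("/sdcard/music"));       // hypothetical starting folder
            }
        }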


  • WCF Discovery finds endpoint but host is "localhost"

    - by Flo
    I am trying to use the Discovery feature in WCF, using http://msdn.microsoft.com/en-us/library/dd456783(v=VS.100).aspx as a starting point. It works fine on my machine, but then I wanted to run the service on a different machine. The service is discovered properly, but the hostname of the found service is always "localhost", which is of course not much use.

    Service endpoint:

        var endpointAddress = new EndpointAddress(new UriBuilder {
            Scheme = Uri.UriSchemeNetTcp,
            Port = port
        }.Uri);
        var endpoint = new ServiceEndpoint(
            ContractDescription.GetContract(typeof(IServiceInterface)),
            new NetTcpBinding(),
            endpointAddress);

    Client:

        static EndpointAddress FindServiceAddress<T>()
        {
            Stopwatch stopwatch = new Stopwatch();
            stopwatch.Start();

            DiscoveryClient discoveryClient = new DiscoveryClient(new UdpDiscoveryEndpoint());

            // Find endpoints
            FindResponse findResponse = discoveryClient.Find(new FindCriteria(typeof(T)));

            Console.WriteLine(string.Format("Searched for {0} seconds. Found {1} Endpoint(s).",
                stopwatch.ElapsedMilliseconds / 1000, findResponse.Endpoints.Count));

            if (findResponse.Endpoints.Count > 0)
            {
                return findResponse.Endpoints[0].Address;
            }
            return null;
        }

    Should I simply set the Host to System.Environment.MachineName?
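    Probably yes, as a hedged guess: UriBuilder defaults its Host to "localhost" when none is set, and that literal string is what gets published to discovery. A minimal sketch of the change:

        // advertise a reachable host name instead of UriBuilder's default "localhost"
        var endpointAddress = new EndpointAddress(new UriBuilder {
            Scheme = Uri.UriSchemeNetTcp,
            Host = Environment.MachineName,   // or an FQDN/IP if clients sit in another domain
            Port = port
        }.Uri);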


  • Sending email through a Google Apps account is working locally, but not on my web server...

    - by Janis Baldwin
    Related: Send Email via C# through Google Apps account. My question is the same, so I implemented Koistya's solution as follows. The heartbreak is that it works beautifully on my dev laptop, but when ported to the web server it times out with no explanation. My SMTP config is in my web.config file; I made these modifications based on Koistya's answer:

        <mailSettings>
          <!-- changed -->
          <smtp from="[email protected]">
            <network host="smtp.gmail.com" password="[password]" port="587" userName="[email protected]"/>
          </smtp>
          <!-- original -->
          <!--
          <smtp from="[email protected]">
            <network host="mail.domain.com" password="[password]" port="25" userName="[email protected]"/>
          </smtp>
          -->
        </mailSettings>

    My .NET C# code (surrounding code removed):

        SmtpClient mSmtpClient = new SmtpClient();
        mSmtpClient.EnableSsl = true;
        mSmtpClient.Send(message);

    As I said, this works great in my dev environment but not on the web server. Can anyone help? Thanks.
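    A timeout only in production usually means the host cannot reach smtp.gmail.com:587 at all (many hosting environments block outbound SMTP ports). A hedged way to tell a network block from a slow handshake, run from the web server before touching any code:

        telnet smtp.gmail.com 587

    No banner within a few seconds suggests a firewall rule. On the code side, lowering SmtpClient.Timeout (milliseconds) makes the failure surface faster while testing:

        mSmtpClient.Timeout = 10000;  // fail after 10s instead of the default 100s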


  • TCP: Address already in use exception - possible causes for client port? NO PORT EXHAUSTION

    - by TomTom
    Hello, stupid problem. I get "Address already in use: connect" exceptions from a client connecting to a server. Sadly, the setup is complicated, making debugging complex, and we have run out of options.

    The environment: a client/server system, both running on the same machine. The client is actually a service doing some database manipulation at specific times. The connection comes from C#, going through OleDb to an EasySoft JDBC driver, to a custom-written JDBC server that hosts logic in C++. Yes, complex, but the third-party supplier decided to expose the extension mechanisms for their server through a JDBC interface. Not a lot can be done here ;)

    The symptom: at (ir)regular intervals we get an "Address already in use: connect" reported from the JDBC driver. They seem to come from one particular service we run. Now, I did read all the stuff about port exhaustion, which is why we have a little tool running that counts ports and their states every minute. Last time this happened, we had an astonishing 370 ports in use, with the count rising to about 900 AFTER the error. We already patched the registry (it is a Windows machine) to allow more than the standard 5000 client ports, but even then, we are far, far from that limit to start with.

    Which is why I am asking here. Anyone have an idea what ELSE could cause this? It is a Windows 2003 Server machine, 64-bit. The only other thing I can see that may cause it (but this functionality is supposedly disabled) is Symantec Endpoint Protection, which is installed on the server; being capable of acting as a firewall, it could possibly intercept network traffic. I don't want to open a can of worms by pointing to Symantec prematurely (if pointing to Symantec can ever be seen as such). So, anyone have an idea what else may be the cause? Thanks
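    For anyone building a similar port-counting tool, a hedged one-liner that tallies sockets stuck in TIME_WAIT (the usual suspect between "370 in use" and "address already in use") on Windows:

        netstat -an | find /c "TIME_WAIT"

    The wait interval itself is governed by the TcpTimedWaitDelay registry value under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters, alongside the MaxUserPort value raised above.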


  • Unpacking gems [Rails 2.3.5]

    - by yuval
    I have the following gems defined in my environment.rb file:

        config.gem "authlogic"
        config.gem "paperclip"
        config.gem "pauldix-feedzirra", :lib => "feedzirra", :source => "http://gems.github.com"
        config.gem 'whenever', :lib => false, :source => 'http://gemcutter.org/'

    I have them installed on my local computer and everything is working well. Since I am working on a shared server (DreamHost), I need to unpack those gems to get them to work (I can't install them the way I did on my own computer). Before uploading, I ran the following on my local machine:

        rake gems:unpack

    This created the following folders in /vendor/gems: authlogic-2.1.3, paperclip-2.3.1.1, pauldix-feedzirra-0.0.18, whenever-0.4.1. So it looks like they're all there. When I run rake db:migrate on the server, though, I get the following error:

        Missing these required gems:
          pauldix-feedzirra

    For some reason, the unpacked feedzirra gem is not detected. Could anybody give me a clue as to why this is happening and how to solve it? Thanks!
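    A hedged first step: feedzirra pulls in its own gem dependencies (curb and friends, some with C extensions) which gems:unpack does not copy by itself, so unpacking the dependency tree and rebuilding native extensions is the usual routine:

        rake gems:unpack:dependencies   # unpack each gem plus everything it depends on
        rake gems:build                 # recompile any native extensions in vendor/gems

    Both tasks exist in Rails 2.3. Gems with C extensions may still refuse to build on a shared host, in which case installing them into a user-level gem path is the fallback.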


  • Code Own Socket Server or Use Red5/ElectroServer on Amazon EC2?

    - by Travis
    I've been thinking for a long time about working on a multiplayer game in Flash. I need updates frequently enough that Ajax requests won't work, so I need to use a socket server. The system will eventually have enough objects/players that I would consider it an MMO. I would like to set up a scalable system on Amazon's EC2 (which probably affects my choice of server). This architecture would hopefully allow the game to grow without many changes over time (using a domain decomposition technique or something similar). Here's my internal debate. Should I:

    a. Code my own socket server in C++ or Java?
    b. Use the free and open source Red5 socket server for Flash?
    c. Pay the licensing fees and go for ElectroServer?

    I consider myself a decent developer, but am at an impasse as to which road to go down. I'm not sure whether I could develop, or would even need, the features of one of the prepackaged socket servers. I'm also not sure whether the prepackaged servers would work well in an Amazon EC2 environment and take full advantage of its features. Any help or guidance would be greatly appreciated.


  • django inner redirects

    - by Zayatzz
    Hello. I have one project that on my own development computer (which uses mod_wsgi to serve it) caused no problems. On the live server (which uses mod_fastcgi) it generates a 500, though. My URLconf is like this:

        # -*- coding: utf-8 -*-
        from django.conf.urls.defaults import *

        # Uncomment the next two lines to enable the admin:
        from django.contrib import admin
        admin.autodiscover()

        urlpatterns = patterns('',
            url(r'^admin/', include(admin.site.urls)),
            url(r'^', include('jalka.game.urls')),
        )

    and:

        # -*- coding: utf-8 -*-
        from django.conf.urls.defaults import *
        from django.contrib.auth import views as auth_views

        urlpatterns = patterns('jalka.game.views',
            url(r'^$', view = 'front', name = 'front',),
            url(r'^ennusta/(?P<game_id>\d+)/$', view = 'ennusta', name = 'ennusta',),
            url(r'^login/$', auth_views.login, {'template_name': 'game/login.html'}, name='auth_login'),
            url(r'^logout/$', auth_views.logout, {'template_name': 'game/logout.html'}, name='auth_logout'),
            url(r'^arvuta/$', view = 'arvuta', name = 'arvuta',),
        )

    My .htaccess is like this:

        Options +FollowSymLinks
        RewriteEngine on
        RewriteOptions MaxRedirects=10
        # RewriteCond %{HTTP_HOST} .
        RewriteCond %{HTTP_HOST} ^www\.domain\.com
        RewriteRule (.*) http://domain.com/$1 [R=301,L]

        AddHandler fastcgi-script .fcgi
        RewriteCond %{HTTP_HOST} ^jalka\.domain\.com$ [NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*) cgi-bin/fifa2010.fcgi/$1 [QSA,L]

        RewriteCond %{HTTP_HOST} ^subdomain\.otherdomain\.eu$ [NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*) cgi-bin/django.fcgi/$1 [QSA,L]

    Notice that I also have another project set up with the same .htaccess, and that one is running just fine with more complex URLs and views. fifa2010.fcgi:

        #!/usr/local/bin/python
        # -*- coding: utf-8 -*-
        import sys, os

        DOMAIN = "domain.com"
        APPNAME = "jalka"
        PREFIX = "/www/apache/domains/www.%s" % (DOMAIN,)

        # Add a custom Python path.
        sys.path.insert(0, os.path.join(PREFIX, "htdocs/django/Django-1.2.1"))
        sys.path.insert(0, os.path.join(PREFIX, "htdocs"))
        sys.path.insert(0, os.path.join(PREFIX, "htdocs/jalka"))

        # Switch to the directory of your project. (Optional.)
        os.chdir(os.path.join(PREFIX, "htdocs", APPNAME))

        # Set the DJANGO_SETTINGS_MODULE environment variable.
        os.environ['DJANGO_SETTINGS_MODULE'] = "%s.settings" % (APPNAME,)

        from django.core.servers.fastcgi import runfastcgi
        runfastcgi(method="threaded", daemonize="false")

    Alan
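    A hedged debugging step, since mod_fastcgi 500s often hide the real traceback (both settings named here are standard Django settings):

        # in settings.py, temporarily, to see the traceback instead of a bare 500
        DEBUG = True

    If the pages then render but URLs resolve with the subdomain prefix stuck in front of them, setting FORCE_SCRIPT_NAME = '' in settings.py is the usual fix for the prefix that mod_fastcgi injects into SCRIPT_NAME.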


  • Continuous Integration with 64-bit Sharepoint and TFS 2008?

    - by Hirvox
    I've set up a 64-bit TFS 2008 build server with SharePoint, continuous integration, and out-of-the-box MSTest. Unit tests for plain business-logic classes run just fine, and test results are published into TFS. However, any test that uses SharePoint's API fails horribly, with SPFarm.Local returning null and so on. Is there a way to fix this?

    The tests run fine in an otherwise identical 32-bit development environment (Windows Server 2008 under Hyper-V, SharePoint patched up to the June 2009 cumulative update), from both Visual Studio and the command line, so the problem is not improper use of SPContext.Current or any other part of the API that needs to run in a web-server context. I've ruled out permissions issues, because the build agent account can deploy the solution and create site collections just fine with stsadm.

    The next culprit could be that the unit tests are being run in a 32-bit process, which couldn't access the 64-bit SharePoint API properly. I tried a workaround, but it has the side effect of disabling TFS support in MSTest. Do I have to wait for the 2010 versions of the MS tools (and hope for the best), or is there a third-party test framework available that runs natively in 64-bit and can publish test results into TFS 2008?

