Search Results

Search found 6928 results on 278 pages for 'calling'.

Page 220/278

  • Are there Windows API binaries for Subversion, or do I have to build SVN to call the API from Windows?

    - by JeffH
    I want to call a Subversion API from a Visual Studio 2003 C++ project. I know there are threads here, here, here, and here that tell how to get started with C#.NET on Windows (the consensus seems to be SharpSvn, which I've used easily and successfully on another project), but that's not what I want. I've read the chapter on using APIs in the red-bean book, which says:
        Subversion is primarily a set of C libraries, with header (.h) files that live in the subversion/include directory of the source tree. These headers are copied into your system locations (e.g., /usr/local/include) when you build and install Subversion itself from source. These headers represent the entirety of the functions and types meant to be accessible by users of the Subversion libraries.
    I'd like to use CollabNet Subversion, but there don't seem to be any API binary downloads, and I'd just as soon not build the whole thing if I can avoid it. Considering another approach, I found RapidSVN's C++ API, but it doesn't appear to offer Windows binaries either and seems to require building SVN (which I would be willing to do as a last resort if RapidSVN's API is higher-level than the stock SVN offering). Does calling the API from C++ on Windows have to be this much more work than using SharpSvn under .NET, or is there something I haven't found that would help me achieve my goal?

    Read the article

  • VC++ 6 and MS Speech SDK 5.1 fatal error C1083: Cannot open source file: 'files\microsoft': No such

    - by eg123
    I'm trying to compile an application (flite synthesis SAPI) in VC++ 6, which requires the Microsoft Speech SDK 5.1. I have added
        C:\Program Files\Microsoft Speech SDK 5.1\IDL
        C:\Program Files\Microsoft Speech SDK 5.1\include
    via Tools > Options > Directories, and on another attempt via Project > Settings. I repeatedly get these errors (the file being compiled is FliteCMUKalDiphone.idl):
        microsoft fatal error C1083: Cannot open source file: 'files\microsoft': No such file or directory
        speech fatal error C1083: Cannot open source file: 'speech': No such file or directory
        sdk fatal error C1083: Cannot open source file: 'sdk': No such file or directory
        idl fatal error C1083: Cannot open source file: '5.1\idl': No such file or directory
    I thought it might be related to the spaces in the path, so I put the full path in quotes in the relevant .h files. No joy. I installed the Microsoft Speech SDK 5.1 on another machine, in the same folder as flite, and renamed it to mssdk51 (so no spaces in the pathname), but the same error came up. I also tried pasting in the contents of each .idl referenced in the file where the glitch seems to originate; still the same message. I am new to C++ and programming in general. My only guess is that something in the Speech SDK is calling the .idl file and I can't find where from. Of course this is probably way wrong!

    Read the article

  • Am I mocking this helper function right in my Django test?

    - by CppLearner
    lib.py:
        from django.core.urlresolvers import reverse

        def render_reverse(f, kwargs):
            """
            kwargs is a dictionary, usually of the form {'args': [cbid]}
            """
            return reverse(f, **kwargs)
    tests.py:
        from lib import render_reverse, print_ls

        class LibTest(unittest.TestCase):
            def test_render_reverse_is_correct(self):
                #with patch('webclient.apps.codebundles.lib.reverse') as mock_reverse:
                with patch('django.core.urlresolvers.reverse') as mock_reverse:
                    from lib import render_reverse
                    mock_f = MagicMock(name='f', return_value='dummy_views')
                    mock_kwargs = MagicMock(name='kwargs', return_value={'args': ['123']})
                    mock_reverse.return_value = '/natrium/cb/details/123'
                    response = render_reverse(mock_f(), mock_kwargs())
                    self.assertTrue('/natrium/cb/details/' in response)
    But instead, I get:
        File "/var/lib/graphyte-webclient/graphyte-webenv/lib/python2.6/site-packages/django/core/urlresolvers.py", line 296, in reverse
          "arguments '%s' not found." % (lookup_view_s, args, kwargs))
        NoReverseMatch: Reverse for 'dummy_readfile' with arguments '('123',)' and keyword arguments '{}' not found.
    Why is it calling reverse instead of my mock_reverse (it is looking up my urls.py!!)? The author of the Mock library, Michael Foord, did a video cast here (around 9:17), and in the example he passed the mock object request to the view function index. Furthermore, he patched Poll and assigned an expected return value. Isn't that what I am doing here? I patched reverse? Thanks.
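
    A side-by-side sketch of the usual mock idiom may make the difference visible: patch the name where it is looked up, not where it is defined. Because lib.py does "from django.core.urlresolvers import reverse", render_reverse calls the copy of reverse bound inside lib, so that is the reference a test has to replace. The module path 'lib.reverse' below assumes lib.py is importable as a top-level module, as in the snippets above; everything else is illustrative.

        # Hypothetical test illustrating "patch where it is used".
        import unittest
        from mock import patch  # unittest.mock on Python 3.3+

        import lib  # the module that did "from django.core.urlresolvers import reverse"

        class RenderReverseTest(unittest.TestCase):
            def test_render_reverse_uses_the_patched_reverse(self):
                # Replace the 'reverse' that lib.py imported, not the original
                # definition inside django.core.urlresolvers.
                with patch('lib.reverse') as mock_reverse:
                    mock_reverse.return_value = '/natrium/cb/details/123'
                    response = lib.render_reverse('dummy_views', {'args': ['123']})
                    self.assertTrue('/natrium/cb/details/' in response)
                    mock_reverse.assert_called_once_with('dummy_views', args=['123'])

        if __name__ == '__main__':
            unittest.main()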

    Read the article

  • cancelPreviousPerformRequestsWithTarget is not canceling my previously delayed thread started with pe

    - by jmurphy
    Hello, I've scheduled a delayed call using performSelector, but the user still has the ability to hit the back button on the current view, causing dealloc to be called. When this happens my selector still seems to be invoked, which causes my app to crash because the properties it is trying to write to have been released. To solve this I am trying to call cancelPreviousPerformRequestsWithTarget to cancel the previous request, but it doesn't seem to be working. Below are some code snippets.
        - (void)viewDidLoad {
            [self performSelector:@selector(myStopUpdatingLocation) withObject:nil afterDelay:6];
        }

        - (void)viewWillDisappear:(BOOL)animated {
            [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(myStopUpdatingLocation) object:nil];
        }
    Am I doing something incorrect here? The method myStopUpdatingLocation is defined in the same class from which I'm calling the perform requests. A little more background: the feature I'm trying to implement finds the user's location, searches Google for some locations around it, and displays several annotations on the map. In viewDidLoad I start updating the location with CLLocationManager. I've built in a timeout after 6 seconds in case I don't get my desired accuracy in time, and I'm using performSelector to do this. What can happen is that the user clicks the back button in the view and this delayed call still executes even though all my properties have been released, causing a crash. Thanks in advance! James

    Read the article

  • Have I discovered a bug in the WPF engine?

    - by bitbonk
    We have an MFC 8 application compiled with /CLR that contains a large number of Windows Forms UserControls, which in turn contain WPF user controls via ElementHost. Due to the architecture of our software we cannot use HwndHost directly. We observed an extremely strange behavior here that we cannot make any sense of: when the CPU load is very high during startup of the application and there are a lot of live ElementHost instances, the whole property engine completely stops working. For example, animations that usually just work fine never update the values of the bound properties; they just stay at some random value after startup. When I set a property that is not bound to anything, the value is correctly stored in the dependency property (calling the getter returns the new value) but the visual representation never reflects that. I set the background to red, but the background color does not change. We tested this on a lot of different machines, all running Windows XP SP2, and it is pretty reproducible. The funny thing is that there is in fact one situation where the bound properties actually pick up a new value from the animation and the visual gets updated based on the property values: when I resize the ElementHost, or when I hide and reshow the parent native control. As soon as I do this, properties that are bound to an animation pick up a new value and the visuals re-render based on the new property values - but just once; if I want to see another update I have to resize the ElementHost again. Do you have any explanation of what could be happening here, or how I could approach this problem to find out? What can I do to debug this? Is there a way I can get more information about what WPF actually does, or where WPF might have crashed? To me it currently seems like a bug in WPF itself, since it only happens under high CPU load at startup.

    Read the article

  • Shared Memory and Process Semaphores (IPC)

    - by fsdfa
    This is an extract from Advanced Linux Programming: "Semaphores continue to exist even after all processes using them have terminated. The last process to use a semaphore set must explicitly remove it to ensure that the operating system does not run out of semaphores. To do so, invoke semctl with the semaphore identifier, the number of semaphores in the set, IPC_RMID as the third argument, and any union semun value as the fourth argument (which is ignored). The effective user ID of the calling process must match that of the semaphore's allocator (or the caller must be root). Unlike shared memory segments, removing a semaphore set causes Linux to deallocate immediately." If a process allocates a shared memory segment, and many processes use it but none of them ever marks it for deletion (with shmctl), then when they all terminate the shared page continues to be available. (We can see this with ipcs.) If some process did call shmctl, then when the last process detaches, the system deallocates the shared memory. So far so good (I guess; if not, correct me). What I don't understand from that quote is that first it says: "Semaphores continue to exist even after all processes using them have terminated." and then: "Unlike shared memory segments, removing a semaphore set causes Linux to deallocate immediately."
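
    For reference, the explicit removal the book describes looks roughly like this; a minimal sketch, with a made-up key and a one-semaphore set:

        /* Create a System V semaphore set and remove it with semctl(IPC_RMID). */
        #include <stdio.h>
        #include <sys/ipc.h>
        #include <sys/sem.h>
        #include <sys/types.h>

        /* Many systems require the caller to define union semun itself. */
        union semun {
            int val;
            struct semid_ds *buf;
            unsigned short *array;
        };

        int main(void) {
            union semun ignored = { 0 };
            /* Allocate (or open) a set containing 1 semaphore; key 0x1234 is arbitrary. */
            int semid = semget(0x1234, 1, IPC_CREAT | 0666);
            if (semid == -1) {
                perror("semget");
                return 1;
            }

            /* ... use the semaphore set ... */

            /* The last user removes the set; the second argument (here the number of
             * semaphores in the set) and the fourth (union semun) are ignored for IPC_RMID. */
            if (semctl(semid, 1, IPC_RMID, ignored) == -1) {
                perror("semctl(IPC_RMID)");
                return 1;
            }
            return 0;
        }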

    Read the article

  • Is it possible to definitively identify whether a DML command was issued from a stored procedure?

    - by Ed Harper
    I have inherited a SQL Server 2008 database to which calling applications have access through stored procedures. Each table in the database has a shadow audit table into which Insert/Update/Delete operations are logged. Performance testing on populating the audit tables showed that inserting the audit records using OUTPUT clauses was 20% or so faster than using triggers, so this has been implemented in the stored procedures. However, because this design cannot track changes made through DML statements issued directly against the tables, triggers have also been implemented which use the value of @@NESTLEVEL to determine whether or not to run the trigger body (the assumption being that all DML run through stored procedures will have @@NESTLEVEL > 1), i.e. the body of the trigger code looks something like:
        IF @@NESTLEVEL = 1 -- implies the call is direct SQL, so generate history from here
        BEGIN
            ... insert into audit table
        END
    This design is flawed because it won't track updates where DML statements are executed in dynamic SQL, or in any other context where @@NESTLEVEL is raised above 1. Can anyone suggest a completely reliable method we can use in the triggers to execute them only if not triggered by a stored procedure? Or is this (as I suspect) not possible?

    Read the article

  • How to debug problems in Linux kernel module `init()`?

    - by Kimvais
    I am using remote (k)gdb to debug a problem in a module that causes a panic when loaded, e.g. when init() is called. The stack trace just shows that do_one_initcall(mod->init) causes the crash. In order to get the symbol file loaded in gdb, I need the address of the module's text section, and to get that I need the module loaded. Because the insmod in busybox (1.16.1) doesn't support -m, I'm stuck with grepping /proc/modules for the module name and adding the offset from nm to figure out the address. So I'm facing a sort of chicken-and-egg problem here: to be able to debug the module loading, I need to get the module loaded, but in order to get the module loaded, I need to debug the problem... So I am currently thinking about two options. Is there a way to get the address information either by printk() in the module init code, or by printk() somewhere in the kernel code prior to calling mod->init(), so I could place a breakpoint there, load the symbol file, hit c, and see it crash and burn...
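
    A minimal sketch of the first option (module and symbol names are made up): have the very first statement of init print the load address of a symbol that also appears in the .ko's nm output, so the section base can be computed before anything dangerous runs. Note that on recent kernels %p hashes pointer values, so %px (or relaxing kptr_restrict) would be needed to see the raw address.

        /* hello_dbg.c - print a known symbol's load address first thing in init,
         * so its offset in `nm hello_dbg.ko` gives the .text base for gdb's
         * add-symbol-file. */
        #include <linux/init.h>
        #include <linux/kernel.h>
        #include <linux/module.h>

        static int __init hello_dbg_init(void)
        {
            /* Runs before the code suspected of crashing. */
            printk(KERN_INFO "hello_dbg: hello_dbg_init is at %p\n",
                   (void *)hello_dbg_init);

            /* ... the code that actually panics would follow here ... */
            return 0;
        }

        static void __exit hello_dbg_exit(void)
        {
        }

        module_init(hello_dbg_init);
        module_exit(hello_dbg_exit);
        MODULE_LICENSE("GPL");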

    Read the article

  • Why is JavaMail Transport.send() a static method?

    - by skiphoppy
    I'm revising code I did not write that uses JavaMail, and having a little trouble understanding why the JavaMail API is designed the way it is. I have the feeling that if I understood, I could be doing a better job. We call:
        transport = session.getTransport("smtp");
        transport.connect(hostName, port, user, password);
    So why is Eclipse warning me that this:
        transport.send(message, message.getAllRecipients());
    is a call to a static method? Why am I getting a Transport object and providing settings that are specific to it if I can't use that object to send the message? How does the Transport class even know what server and other settings to use to send the message? It's working fine, which is hard to believe. What if I had instantiated Transport objects for two different servers; how would it know which one to use? In the course of writing this question, I've discovered that I should really be calling:
        transport.sendMessage(message, message.getAllRecipients());
    So what is the purpose of the static Transport.send() method? Is this just poor design, or is there a reason it is this way?
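
    For contrast, a sketch of the two styles side by side (host, port and credentials are placeholders): the static convenience method opens and closes its own connection based on the message's Session, while the instance method sends over the connection you opened yourself.

        import java.util.Properties;
        import javax.mail.*;
        import javax.mail.internet.*;

        public class SendDemo {
            public static void main(String[] args) throws MessagingException {
                Properties props = new Properties();
                props.put("mail.smtp.host", "smtp.example.com");
                Session session = Session.getInstance(props);

                Message message = new MimeMessage(session);
                message.setFrom(new InternetAddress("me@example.com"));
                message.setRecipients(Message.RecipientType.TO,
                        InternetAddress.parse("you@example.com"));
                message.setSubject("test");
                message.setText("hello");

                // Style 1: static convenience method. It ignores any Transport you
                // created; it opens and closes its own connection using the
                // message's Session settings.
                Transport.send(message);

                // Style 2: explicit Transport instance. The connection opened here
                // is the one actually used by sendMessage().
                Transport transport = session.getTransport("smtp");
                try {
                    transport.connect("smtp.example.com", 25, "user", "password");
                    transport.sendMessage(message, message.getAllRecipients());
                } finally {
                    transport.close();
                }
            }
        }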

    Read the article

  • c++ use of winmain()

    - by Jack
    Hi, I just started learning programming for Windows in C++. I had this crazy image that Win32 programming is based on calling Windows functions and sending parameters to and from them. Like, when you want to create a window, you call some Win32 function that handles the Windows GUI and say "Hi, please create me a new window, 100 x 100 px, with two buttons", and that GUI function says "Hi, no problem; when something happens, like the user clicks one button, I will change this variable xy located in this location". So, I thought that it would be very similar to console programming. But the very first instruction surprised me. I always thought that every program executes the main() function first. So, when I launch an app, Windows stores some parameters on top of the stack and runs that application. So I assumed that defining main() is just a C++ way to tell the compiler where the first instruction should be. But in Win32 programming, there is a function called WinMain() which starts first. So I am a little confused. I thought it's a rule that the compiler must have main() to start with, that main just defines where to start, like some start-point identifier. So, please, why is there a WinMain() function instead of main()? Just when I thought that C++ programming was as logical as assembler, it confuses me once again.
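
    For what it's worth, neither main() nor WinMain() is the true first instruction: the C runtime's startup code runs first and then calls one or the other, depending on how the executable was linked. A minimal sketch (MSVC-style, Windows subsystem assumed):

        // Minimal Win32 entry point. The CRT startup routine, not the OS itself,
        // calls WinMain when the program is linked with /SUBSYSTEM:WINDOWS;
        // a /SUBSYSTEM:CONSOLE program gets plain main() instead.
        #include <windows.h>

        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                           LPSTR lpCmdLine, int nCmdShow)
        {
            // hPrevInstance is always NULL on Win32; it remains for 16-bit compatibility.
            MessageBoxA(NULL, "Hello from WinMain", "Demo", MB_OK);
            return 0;
        }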

    Read the article

  • Problem using SQLDataReader with Sybase ASE

    - by John K.
    We're developing a reporting application that uses asp.net-mvc (.NET 4). We connect through DDTEK.Sybase middleware to a Sybase ASE 12.5 database. We're having a problem pulling data into a datareader (from a stored procedure). The stored procedure computes values (approximately 50 columns) by doing sums, counts, and calling other stored procedures. The problem we're experiencing is that certain columns (maybe 5% of them) come back with NULL or 0. If we debug, copy the SQL statement being used for the datareader, and run it inside another SQL tool, we get valid values for all columns.
        conn = new SybaseConnection
        {
            ConnectionString = ConfigurationManager.ConnectionStrings[ConnectStringName].ToString()
        };
        conn.Open();
        cmd = new SybaseCommand
        {
            CommandTimeout = cmdTimeout,
            Connection = conn,
            CommandText = mainSql
        };
        reader = cmd.ExecuteReader();
        // AT THIS POINT, IMMEDIATELY AFTER THE EXECUTEREADER COMMAND,
        // THE READER CONTAINS THE BAD (NULL OR 0) DATA FOR THESE COLUMNS.
        DataTable schemaTable = reader.GetSchemaTable();
        // AT THIS POINT WE CAN VIEW THE DATATABLE FOR THE SCHEMA AND IT APPEARS CORRECT.
        // THE COLUMNS THAT DON'T WORK HAVE SPECIFICATIONS IDENTICAL TO THE COLUMNS THAT DO WORK.
    Has anyone had problems like this using Sybase and ADO? Thanks, John K.

    Read the article

  • Automatically update ActiveRecord object

    - by Aleksandr Koss
    I have some models:
        class Father < ActiveRecord::Base
          has_many :children
        end

        class Child < ActiveRecord::Base
          belongs_to :father
        end
    Then I do something like this:
        $ script/console test
        Loading test environment (Rails 2.3.5)
        >> @f1 = Father.create :test => "Father"
        => #<Father id: 1, test: "Father", created_at: "2010-03-30 08:01:41", updated_at: "2010-03-30 08:01:41">
        >> @f2 = Father.find :first
        => #<Father id: 1, test: "Father", created_at: "2010-03-30 08:01:41", updated_at: "2010-03-30 08:01:41">
        >> @f1 == @f2
        => true
        >> @f1.children
        => []
        >> @f2.children
        => []
        >> @f1.children.create :test => "Child1"
        => #<Child id: 1, test: "Child1", father_id: 1, created_at: "2010-03-30 08:02:15", updated_at: "2010-03-30 08:02:15">
        >> @f1.children
        => [#<Child id: 1, test: "Child1", father_id: 1, created_at: "2010-03-30 08:02:15", updated_at: "2010-03-30 08:02:15">]
        >> @f2.children
        => []
        >> @f2.reload
        => #<Father id: 1, test: "Father", created_at: "2010-03-30 08:01:41", updated_at: "2010-03-30 08:01:41">
        >> @f2.children
        => [#<Child id: 1, test: "Child1", father_id: 1, created_at: "2010-03-30 08:02:15", updated_at: "2010-03-30 08:02:15">]
    As you see, Rails caches the @f2 object. To get actual data we have to call reload. Is there a way to automatically reload @f2 after a children update, without calling the "reload" method?

    Read the article

  • Route WCF ServiceHost to another computer

    - by I2nfo
    Good day. I'm not a guru when it comes to WCF, but I do know the basics. My question is: how do I create a ServiceHost on machine X while the code is on machine Y? Say I build and run this code on my dev machine (localhost):
        servicehost = new ServiceHost(typeof(MyService1));
        servicehost.AddServiceEndpoint(typeof(IMyService1), new NetTcpBinding(),
            "net.tcp://my.datacenter.com/MyApp/MyService1"); // This is normally set to localhost.
    What implementation must be done on the datacenter server so that, if I had to point to http://my.datacenter.com/MyApp/MyService1, it would route the service operation to my dev machine (localhost)? However, the datacenter should not be accessible via the internet. It is a possible infrastructure that we are researching to see if we can create a service-bus-type architecture, so that all our customers can invoke other customers' services running on their respective machines just by calling our datacenter URL. We have looked at Windows Azure, but we have our own datacenter infrastructure that we wish to leverage. Come to think of it, we are kind of building our own Azure, on a very, very basic scale. How does one go about creating this? Thanks in advance.

    Read the article

  • [Ruby] Object assignment and pointers

    - by Jergason
    I am a little confused about object assignment and pointers in Ruby, and coded up this snippet to test my assumptions.
        class Foo
          attr_accessor :one, :two

          def initialize(one, two)
            @one = one
            @two = two
          end
        end

        bar = Foo.new(1, 2)
        beans = bar
        puts bar
        puts beans
        beans.one = 2
        puts bar
        puts beans
        puts beans.one
        puts bar.one
    I had assumed that when I assigned bar to beans, it would create a copy of the object, and modifying one would not affect the other. Alas, the output shows otherwise.
        ^_^[jergason:~]$ ruby test.rb
        #<Foo:0x100155c60>
        #<Foo:0x100155c60>
        #<Foo:0x100155c60>
        #<Foo:0x100155c60>
        2
        2
    I believe that the numbers have something to do with the address of the object, and they are the same for both beans and bar; when I modify beans, bar gets changed as well, which is not what I had expected. It appears that I am only creating a pointer to the object, not a copy of it. What do I need to do to copy the object on assignment, instead of creating a pointer? Tests with the Array class show some strange behavior as well.
        foo = [0, 1, 2, 3, 4, 5]
        baz = foo
        puts "foo is #{foo}"
        puts "baz is #{baz}"
        foo.pop
        puts "foo is #{foo}"
        puts "baz is #{baz}"
        foo += ["a hill of beans is a wonderful thing"]
        puts "foo is #{foo}"
        puts "baz is #{baz}"
    This produces the following wonky output:
        foo is 012345
        baz is 012345
        foo is 01234
        baz is 01234
        foo is 01234a hill of beans is a wonderful thing
        baz is 01234
    This blows my mind. Calling pop on foo affects baz as well, so it isn't a copy, but concatenating something onto foo only affects foo, and not baz. So when am I dealing with the original object, and when am I dealing with a copy? In my own classes, how can I make sure that assignment copies, and doesn't make pointers? Help this confused guy out.

    Read the article

  • How can I bind a javascript dialog using Knockout?

    - by Brian
    I've got a list of data in an observableArray and I want to show it in a JavaScript dialog window (I'm using jQuery.blockUI, if it matters). Unfortunately, the dialog seems to come unbound after the page is loaded. The dialog initializes correctly (the data is displayed), but it isn't updating with changes. There are no JavaScript errors, and I've moved the binding to after the dialog is generated and added to the document (no effect). I've also tried calling ko.applyBindings on the main div that makes up the dialog, but that, for some reason, causes part of the main page to hide (the DOM elements are there, but they are hidden). EDIT: I've created a project on jsfiddle that reproduces the problem. The main culprit seems to be wrapping the content of the dialog in a div. If I show the content directly it seems to work (of course I can't do that; the wrappers provide a common style for our dialogs). I'm recovering from the flu and could easily be missing something obvious, but I've been trying all day and nothing is coming to me. Any ideas?

    Read the article

  • How do I return an array from a method?

    - by dwwilson66
    I'm trying to create a deck of cards for my homework. Code is posted below. I need to create four sets of cards (the four suits) and am creating a multidimensional array. When I print the results instead of trying to pass the array, I can see that the data in the array is as expected. However, when I try to pass the array card, I get an error: cannot find symbol. I've got this modeled after textbook and Java tutorial examples, and I need some help figuring out what I'm missing. I've over-documented to give an idea of how I'm thinking this SHOULD work... please let me know where I've gone horribly wrong in my understanding.
        import java.util.*;
        import java.lang.*;

        public class CardGame {
            public static int[][] main(String[] args) {
                int[][] startDeck = deckOfCards();   /* cast new deck as int[][], calling method deckOfCards */
                System.out.println(" /// from array: " + Arrays.deepToString(startDeck));
            }

            public static int[][] deckOfCards()      /* method to return a multi-dimensional array */
            {
                int rank;
                int suit;
                for (rank = 1; rank < 14; rank++)    /* cards 1 - 13 .... */
                {
                    for (suit = 1; suit < 5; suit++) /* suits 1 - 4 .... */
                    {
                        int[][] card = new int[][]   /* define a new card... */
                        {
                            {rank, suit}             /* with rank/suit from for... loops */
                        };
                        System.out.println(" /// from array: " + Arrays.deepToString(card));
                    }
                }
                return card;                         /* Error: cannot find symbol */
            }
        }
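
    For comparison, here is a hedged sketch of one way the method could be restructured so the compiler error goes away: the array is declared once, outside the loops (so it is still in scope at the return statement), and main gets its usual void return type. This is only an illustration, not necessarily what the assignment asks for.

        import java.util.Arrays;

        public class CardGameSketch {
            public static void main(String[] args) {
                int[][] startDeck = deckOfCards();
                System.out.println(" /// from array: " + Arrays.deepToString(startDeck));
            }

            /* Build all 52 cards into one array declared outside the loops,
             * so it is still in scope when we return it. */
            public static int[][] deckOfCards() {
                int[][] deck = new int[52][2];
                int i = 0;
                for (int rank = 1; rank < 14; rank++) {      /* cards 1 - 13 */
                    for (int suit = 1; suit < 5; suit++) {   /* suits 1 - 4 */
                        deck[i][0] = rank;
                        deck[i][1] = suit;
                        i++;
                    }
                }
                return deck;
            }
        }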

    Read the article

  • javascript: "Object doesn't support this property or method" when an ActiveX object is called

    - by agnieszka
    I've got simple HTML on Login.aspx with an ActiveX object:
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head><title></title>
        <script language="javaScript" type="text/javascript">
            function getUserInfo() {
                var userInfo = MyActiveX.GetInfo();
                form1.info.value = userInfo;
                form1.submit();
            }
        </script>
        </head>
        <body onload="javascript:getUserInfo()">
            <object id="MyActiveX" name="MyActiveX" codebase="MyActiveX.cab" classid="CLSID:C63E6630-047E-4C31-H457-425C8412JAI25"></object>
            <form name="form1" method="post" action="Login.aspx">
                <input type="hidden" id="info" name="info" value="" />
            </form>
        </body>
        </html>
    The code works perfectly fine on my machine (edit: hosted and run); it doesn't work on the other: there is an error "Object doesn't support this property or method" in the first line of the javascript function. The cab file is in the same folder as the page file. I don't know javascript at all and have no idea why the problem is occurring. Googling didn't help. Do you have any idea? Edit: on both machines IE was used and ActiveX was enabled. Edit 2: I also added if (document.MyActiveX) at the beginning of the function and I still get the error in the same line of code - I mean, it looks like document.MyActiveX is true but calling the method still fails.

    Read the article

  • RSpec: Can't convert Image to String when using Nested Resource.

    - by darrint
    I'm having trouble with an RSpec view test. I'm using nested resources and a model with a belongs_to association. Here's what I have so far:
        describe "/images/edit.html.erb" do
          include ImagesHelper

          before(:each) do
            @image_pool = stub_model(ImagePool,
              :new_record => false,
              :base_path => '/')
            assigns[:image] = @image = stub_model(Image,
              :new_record? => false,
              :source_name => "value for source_name",
              :image_pool => @image_pool)
          end

          it "renders the edit image form" do
            render
            response.should have_tag("form[action=#{image_path(@image)}][method=post]") do
              with_tag('input#image_source_name[name=?]', "image[source_name]")
            end
          end
        end
    The error I'm receiving:
        ActionView::TemplateError in '/images/edit.html.erb renders the edit image form'
        can't convert Image into String
        On line #3 of app/views/images/edit.html.erb

        1: <h1>Editing image</h1>
        2:
        3: <% form_for(@image) do |f| %>
        4:   <%= f.error_messages %>
        5:
        6: <p>

        app/views/images/edit.html.erb:3
        /opt/dtcm/railstest/lib/ruby/gems/1.9.1/gems/rspec-rails-1.3.2/lib/spec/rails/extensions/action_view/base.rb:27:in `render_with_mock_proxy'
        /opt/dtcm/railstest/lib/ruby/gems/1.9.1/gems/rspec-rails-1.3.2/lib/spec/rails/example/view_example_group.rb:170:in `render'
    Looking at the Rails code where the exception occurs is not very revealing. Any ideas on how I can narrow down what is going on here? One thing I tried was calling form_for directly from the example, and I got a different error griping about the lack of 'polymorphic_path' defined on Spec::Rails::Example::ViewExampleGroup::Subclass_4:0xblah. Not sure if that actually means anything.

    Read the article

  • How do I run NUnit in debug mode from Visual Studio?

    - by Jon Cage
    I've recently been building a test framework for a bit of C# I've been working on. I have NUnit set up and a new project within my workspace to test the component. All works well if I load up my unit tests from NUnit (v2.4), but I've got to the point where it would be really useful to run in debug mode and set some breakpoints. I've tried the suggestions from several guides, which all suggest changing the 'Debug' properties of the test project:
        Start external program: C:\Program Files\NUnit 2.4.8\bin\nunit-console.exe
        Command line arguments: /assembly: <full-path-to-solution>\TestDSP\bin\Debug\TestDSP.dll
    I'm using the console version there, but have tried calling the GUI as well. Both give me the same error when I try to start debugging:
        Cannot start test project 'TestDSP' because the project does not contain any tests.
    Is this because I normally load \DSP.nunit into the NUnit GUI and that's where the tests are held? I'm beginning to think the problem may be that VS wants to run its own test framework and that's why it's failing to find the NUnit tests. [Edit] To those asking about test fixtures, one of my .cs files in the TestDSP project looks roughly like this:
        namespace Some.TestNamespace
        {
            // Testing framework includes
            using NUnit.Framework;

            [TestFixture]
            public class FirFilterTest
            {
                /// <summary>
                /// Tests that a FirFilter can be created
                /// </summary>
                [Test]
                public void Test01_ConstructorTest()
                {
                    ...some tests...
                }
            }
        }
    ...I'm pretty new to C# and the NUnit test framework, so it's entirely possible I've missed some crucial bit of information ;-) [FINAL SOLUTION] The big problem was the project type I'd used. If you pick Other Languages -> Visual C# -> Test -> Test Project when you're choosing the project type, Visual Studio will try to use its own testing framework as far as I can tell. You should pick a normal C# class library project instead, and then the instructions in my selected answer will work.

    Read the article

  • Log in via curl, then open that page logged in

    - by user207022
    I'm trying the following code to send POST data to the login form, then reload that page in the browser as a logged-in user. Somehow it's not saving the cookie and reusing it for the header() function. Can the same thing as header() be done by calling curl again after sending the login details?
        ..
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_HEADER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 60);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);
        curl_setopt($ch, CURLOPT_USERAGENT, $defined_vars['HTTP_USER_AGENT']);
        //curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie);
        curl_setopt($ch, CURLOPT_MAXREDIRS, 1);
        // Apply the XML to our curl call
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
        $data = curl_exec($ch);
        setcookie($cookie);
        header('location: ' . $url);
        die();
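
    On the last question (whether a second curl call can stand in for header()): a hedged sketch of that approach is below; URLs and field names are placeholders. Note that the session cookie ends up in curl's jar on the web server, not in the visitor's browser, so redirecting the browser with header('Location: ...') would not carry the login along; the fetched page has to be output (or its Set-Cookie headers forwarded) by the script itself.

        <?php
        // Sketch: log in with curl, then fetch the target page with the same cookie jar.
        $loginUrl  = 'http://example.com/login.php';   // placeholder
        $targetUrl = 'http://example.com/members.php'; // placeholder
        $postData  = array('user' => 'name', 'pass' => 'secret');

        $cookieJar = tempnam(sys_get_temp_dir(), 'cj'); // writable file for cookies

        $ch = curl_init($loginUrl);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieJar);  // write cookies here...
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieJar); // ...and send them back
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $postData);
        curl_exec($ch);                                   // performs the login

        // Second request on the same handle reuses the session cookie.
        curl_setopt($ch, CURLOPT_URL, $targetUrl);
        curl_setopt($ch, CURLOPT_HTTPGET, true);
        $page = curl_exec($ch);
        curl_close($ch);

        echo $page; // show the logged-in page to the visitor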

    Read the article

  • Server returns 500 error only when called by Java client using urlConnection/httpUrlConnection

    - by user455889
    Hi - I'm having a very strange problem. I'm trying to call a servlet (JSP) with an HTTP GET and a few parameters (http://mydomain.com/method?param1=test&param2=123). If I call it from the browser or via wget in a bash session, it works fine. However, when I make the exact same call from a Java client using URLConnection or HttpURLConnection, the server returns a 500 error. I've tried everything I have found online, including:
        urlConn.setRequestProperty("Accept-Language", "en-us,en;q=0.5");
    Nothing I've tried, however, has worked. Unfortunately, I don't have access to the server I'm calling, so I can't see the logs. Here's the latest code:
        private String testURLConnection() {
            String ret = "";
            String url = "http://localhost:8080/TestService/test";
            String query = "param1=value1&param2=value2";
            try {
                URLConnection connection = new URL(url + "?" + query).openConnection();
                connection.setRequestProperty("Accept-Charset", "UTF-8");
                connection.setRequestProperty("Accept-Language", "en-us,en;q=0.5");
                BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
                String line;
                StringBuilder content = new StringBuilder();
                while ((line = bufferedReader.readLine()) != null) {
                    content.append(line + "\n");
                }
                bufferedReader.close();
                metaRet = content.toString();
                log.debug(methodName + " return = " + metaRet);
            } catch (Exception ex) {
                log.error("Exception: " + ex);
                log.error("stack trace: " + getStackTrace(ex));
            }
            return metaRet;
        }
    Any help would be greatly appreciated!
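
    One detail the posted code cannot reveal is the response body that accompanies the 500, which often names the actual server-side complaint; on error statuses it arrives on the error stream, not the input stream. Below is a sketch that reads it and also sends a browser-like User-Agent (a common difference between URLConnection and the browser/wget requests that succeed); the URL is a placeholder.

        import java.io.BufferedReader;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class Http500Probe {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://localhost:8080/TestService/test?param1=value1&param2=value2");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                // Some servers reject Java's default "Java/1.x" User-Agent; sending a
                // browser-like one rules that cause in or out.
                conn.setRequestProperty("User-Agent", "Mozilla/5.0");

                int status = conn.getResponseCode();
                System.out.println("HTTP status: " + status);

                // On errors the body is on the error stream, not the input stream.
                InputStream body = (status >= 400) ? conn.getErrorStream() : conn.getInputStream();
                if (body != null) {
                    BufferedReader reader = new BufferedReader(new InputStreamReader(body, "UTF-8"));
                    for (String line; (line = reader.readLine()) != null; ) {
                        System.out.println(line);
                    }
                    reader.close();
                }
                conn.disconnect();
            }
        }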

    Read the article

  • Android NDK import-module / code reuse

    - by Graeme
    Morning! I've created a small NDK project which allows dynamic serialisation of objects between Java and C++ through JNI. The logic works like this: Bean - JavaCInterface.java - JavaCInterface.cpp - JavaCInterface.java - Bean. The problem is I want to use this functionality in other projects. I separated out the test code from the project and created a "Tester" project. The Tester project sends a Java object through to C++, which then echoes it back to the Java layer. I thought linking would be pretty simple ("simple" in terms of NDK/JNI is usually a day of frustration). I added the JNIBridge project as a source project and included the following lines in Android.mk:
        NDK_MODULE_PATH=.../JNIBridge/jni/

        JNIBridge/jni/JavaCInterface/Android.mk:
        ...
        include $(BUILD_STATIC_LIBRARY)

        JNITester/jni/Android.mk:
        ...
        include $(BUILD_SHARED_LIBRARY)
        $(call import-module, JavaCInterface)
    This all works fine. The C++ files which rely on headers from the JavaCInterface module work fine. Also, the Java classes can happily use interfaces from the JNIBridge project. All the linking is happy. Unfortunately JavaCInterface.java, which contains the native method declarations, cannot see the JNI methods located in the static library. (Logically they are in the same project, but both are imported into the project where you wish to use them through the above mechanism.) My current solutions are as follows; I'm hoping someone can suggest something that will preserve the modular nature of what I'm trying to achieve. My first option is to include the JavaCInterface cpp files in the calling project like so:
        LOCAL_SRC_FILES := FunctionTable.cpp $(PATH_TO_SHARED_PROJECT)/JavaCInterface.cpp
    But I'd rather not do this, as it would lead to me needing to update each depending project if I changed the JavaCInterface architecture. Alternatively, I could create a new set of JNI method signatures in each local project which then link to the imported modules. Again, this binds the implementations too tightly.
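
    A third option that is sometimes suggested for exactly this symptom (JNI entry points living in a static library never making it into the final .so, so the Java side cannot resolve them) is to pull the static library in whole, so the linker cannot discard the unreferenced Java_* symbols. A hedged sketch of the consumer's Android.mk, with module names guessed from the description above:

        # JNITester/jni/Android.mk (sketch, module names assumed)
        LOCAL_PATH := $(call my-dir)

        include $(CLEAR_VARS)
        LOCAL_MODULE    := JNITester
        LOCAL_SRC_FILES := FunctionTable.cpp

        # Force every object of the imported static library into the shared
        # library, so its JNI functions stay visible to System.loadLibrary().
        LOCAL_WHOLE_STATIC_LIBRARIES := JavaCInterface

        include $(BUILD_SHARED_LIBRARY)

        $(call import-module,JavaCInterface)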

    Read the article

  • Classic ASP application-wide initializations and object caching

    - by slack3r
    In classic ASP (which I am forced to use), I have a few factory functions, that is, functions that return classes. I use JScript. In one include file I use these factory functions to create some classes that are used throughout the application. This include file is included with the #include directive in all pages. These factory functions do some "heavy lifting" and I don't want them to be executed on every page load. So, to make this clear, I have something like this:
        // factory.inc
        function make_class(arg1, arg2) {
            function klass() {
                //...
            }
            // ... Some heavy stuff
            return klass;
        }

        // init.inc, included everywhere
        <!-- #include FILE="factory.inc" -->
        // ...
        MyClass1 = make_class(myarg01, myarg02);
        MyClass2 = make_class(myarg11, myarg12);
        //...
    How can I achieve the same effect without calling make_class on every page load? I know that:
        - I can't cache the classes in the Application object
        - I can't use the Application_OnStart hook in Global.asa
        - I could probably create a scripting component, but I really don't want to do that
    So, is there something else I can do? Maybe some way to achieve caching of these classes, which are really objects in JScript. PS: [further clarification] In the above code "heavy stuff" is not so heavy, but I just want to know if there's a way to avoid it being executed all the time. It reads database meta information, builds a table of the primary keys in the database and another table that resolves strings to classes, etc.

    Read the article

  • Zend Partial + Zend Action Helper causes an additional request to bootstrap?

    - by AndreLiem
    I've been profiling some Zend Framework code with webgrind to see where some bottlenecks are, and I'm noticing some very odd behavior. Using the Zend partial, for example: if I pass a variable value that comes from a Zend action helper, it results in two requests being made. In sample.phtml:
        echo $this->partial('partial/embed.phtml', array('url' => $this->url));
    In indexcontroller.php:
        $this->view->url = $this->_helper->Embed()->url;
    But if I don't pass the value from the helper to the partial, but still run the helper, it only makes one request in webgrind, e.g.:
        $this->view->url = 'test';
        $this->_helper->Embed()->url;
    Does anybody know why this could be happening? Am I potentially interpreting webgrind incorrectly, or is it really calling the bootstrap twice when an action helper value is tied to a partial? I'm starting to realize how inefficient some components of Zend are. Thanks

    Read the article
