Search Results

Search found 2333 results on 94 pages for 'x64'.


  • Flash movies in inactive browser tabs pause or don't execute in real time

    - by ZenBlender
    I'm noticing some unexpected behavior. Some time in the last few months, a change in either Firefox, the Flash player, or both, has made it so that Flash movies in inactive browser tabs no longer execute in real time. They appear to still execute, but only in bursts, and not in a predictable way.

    This is a problem because I develop a Flash-based (ActionScript 2.0, Flash CS3) multiplayer game that maintains a network connection, allows players to chat, etc. Many of our players complain about Firefox crashing while playing the game. I have noticed it too; not too frequently, but it crashes several times a week. (Firefox crashes; I do not get a message from the Flash player indicating an infinite loop or a problem in my code.)

    My theory is that this new behavior causes crashes when there is a lot of activity in my game, leading to lots of unhandled network traffic for my game getting buffered before Firefox/Flash gives it a chance to execute. Maybe this leads to a buffer overflow or missing packets, and as a result, something crashes. At times I will switch back to the tab that is running my game and discover a display bug, which looks as though Flash has simply failed to execute something that it was supposed to.

    I would assume this new behavior is on purpose, for example to prevent all the Flash-based advertisements in inactive tabs from executing and therefore killing performance. In a quick test on Chrome (5.0.342.9 beta), this "pausing" of Flash seems to be present as well, but somehow it seems much less of a problem. My users have only complained about Firefox crashing, not other browsers.

    My machine:

    - Windows 7 x64
    - Firefox 3.6.3
    - Flash Player 10.1.50.426

    My game: triplejack.com

    Any ideas? Ideally I'd like to disable this behavior for my Flash game so it can execute in real time even when in an inactive tab. Thanks for any help!


  • C++ thread safety - exchange data between worker and controller

    - by peterchen
    I still feel a bit unsafe about the topic and hope you folks can help me. For passing data (configuration or results) between a worker thread polling something and a controlling thread interested in the most recent data, I've ended up using more or less the following pattern repeatedly:

        Mutex m;
        tData * stage;   // temporary, accessed concurrently

        // send data, gives up ownership, receives old stage if any
        tData * Send(tData * newData) {
            ScopedLock lock(m);
            swap(newData, stage);
            return newData;
        }

        // receiving thread fetches latest data here
        tData * Fetch(tData * prev) {
            ScopedLock lock(m);
            if (stage != 0) {
                // ... release prev
                prev = stage;
                stage = 0;
            }
            return prev; // now current
        }

    Note: this is not supposed to be a full producer-consumer queue; only the most recent data is relevant. Also, I've skimped on resource management somewhat here. When necessary I'm using two such stages: one to send config changes to the worker, and one for sending back results.

    Now, my questions, assuming that ScopedLock implements a full memory barrier:

    - Do stage and/or workerData need to be volatile?
    - Is volatile necessary for tData members?
    - Can I use smart pointers instead of the raw pointers, say boost::shared_ptr?
    - Anything else that can go wrong?

    I am basically trying to avoid "volatile infection" spreading into tData, and to minimize lock contention (a lock-free implementation seems possible, too). However, I'm not sure if this is the easiest solution.

    Since all this is more or less platform dependent, let's say Visual C++ x86 or x64, though differences/notes for other platforms are welcome, too. (A preliminary "thanks, but" for recommending libraries such as Intel TBB; I am trying to understand the platform issues here.)
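
    As an illustration of the smart-pointer variant being asked about, here is a minimal C++11 sketch of the same single-slot exchange; it is not from the original post, the names (tData's contents, LatestValueSlot) are hypothetical, and std::shared_ptr stands in for boost::shared_ptr. The consumer keeps using its previous pointer whenever Fetch returns null:

        #include <memory>
        #include <mutex>
        #include <utility>

        struct tData { /* payload fields would go here */ };

        class LatestValueSlot {
        public:
            // Producer: publish the newest data, replacing any unfetched value.
            void Send(std::shared_ptr<tData> newData) {
                std::lock_guard<std::mutex> lock(m_);
                stage_ = std::move(newData); // the old staged value is released here
            }

            // Consumer: take the staged value if present, otherwise get nullptr
            // (the caller keeps its previous pointer in that case).
            std::shared_ptr<tData> Fetch() {
                std::lock_guard<std::mutex> lock(m_);
                return std::move(stage_); // leaves stage_ empty
            }

        private:
            std::mutex m_;
            std::shared_ptr<tData> stage_; // only the most recent value is kept
        };

        int main() {
            LatestValueSlot slot;
            slot.Send(std::make_shared<tData>());
            std::shared_ptr<tData> latest = slot.Fetch(); // non-null: value staged above
            std::shared_ptr<tData> again = slot.Fetch();  // null: nothing new since
            return (latest && !again) ? 0 : 1;
        }

    Reference counting makes the ownership hand-off explicit, which sidesteps the "release prev" bookkeeping in the raw-pointer version.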


  • theoretical and practical matrix multiplication FLOP

    - by mjr
    I wrote a traditional matrix multiplication in C++ and tried to measure and compare its theoretical and practical FLOPs. As I know, the inner loop of MM has 2 operations, therefore the simple MM theoretical FLOP count is 2*n*n*n (2n^3). But in practice I get something like 4n^3 plus the number of operations, which is 2, i.e. 6n^3. Also, if I just try to increment only one array element (a[i][j]++), the practical FLOPs then come out to about 3n^3 and not n^3; as you see, again it is 2n^3 + 1 operation, and not 1 operation * n^3. In the case where I use a 1D array in three nested loops as the matrix multiplication and compare FLOPs, the practical FLOP count is (nearly) the same as the theoretical one and depends exactly on the number of operations in the inner loop. I could not find the reason for this behaviour. What is the reason in both cases? I know that the theoretical FLOP count is not the same as the practical one, because of some operations like loads etc.

    System specification: Intel Core 2 Duo E4500, 3700g memory, L2 cache 2M, x64 Fedora 17.

    Sample results, matrix-matrix multiplication 512*512:

        Real_time: 1.718368  Proc_time: 1.227672  Total flpops: 807,107,072  MFLOPS: 657.429016
        Real_time: 3.608078  Proc_time: 3.042272  Total flpops: 807,024,448  MFLOPS: 265.270355

    Theoretical FLOPs: 2*512*512*512 = 268,435,456. Practical FLOPs: 6*512^3 = 807,107,072.

    Using a 1-dimensional array, float d[size][size], size = 512 (or any size):

        for (int j = 0; j < size; ++j) {
            for (int k = 0; k < size; ++k) {
                d[k] = d[k] + e[k] + f[k] + g[k] + r;
            }
        }

        Real_time: 0.002288  Proc_time: 0.002260  Total flpops: 1,048,578  MFLOPS: 464.027161

    Theoretical FLOPs: 4n^2 = 4*512^2 = 1,048,576. Practical FLOPs: 4n^2 + overhead (other operations?) = 1,048,578.

    3-loop version:

        Real_time: 1.282257  Proc_time: 1.155990  Total flpops: 536,872,000  MFLOPS: 464.426117

    Theoretical FLOPs: 4n^3 = 536,870,912. Practical FLOPs: 4*512^3 + overheads (other operations?) = 536,872,000.

    Thank you.
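
    As a cross-check on the theoretical figure quoted above (this derivation is added for reference, not part of the original post): the inner statement of the classic triple loop, c[i][j] += a[i][k] * b[k][j], does one multiply and one add and executes n^3 times, so

        \[
        \mathrm{FLOPs}(n) = 2n^{3},
        \qquad
        \mathrm{FLOPs}(512) = 2 \cdot 512^{3} = 268{,}435{,}456,
        \]

    which matches the 2*512*512*512 value stated above.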


  • HTTP Error: 400 when sending msmq message over http

    - by dontera
    I am developing a solution which will utilize MSMQ to transmit data between two machines. Due to the separation of said machines, we need to use HTTP transport for the messages.

    In my test environment I am using a Windows 7 x64 development machine, which is attempting to send messages using a homebrew app to any of several test machines I have control over. All machines are either Windows Server 2003 or Server 2008 with MSMQ and MSMQ HTTP support installed.

    For any test destination, I can use the following queue path name with success:

        FORMATNAME:DIRECT=TCP:[machine_name_or_ip]\private$\test_queue

    But for any test destination, the following always fails:

        FORMATNAME:DIRECT=HTTP://[machine_name_or_ip]/msmq/private$/test_queue

    I have used all permutations of machine names/IPs available. I have created mappings using the method described at this blog post. All result in the same HTTP Error: 400.

    The following is the code used to send messages:

        MessageQueue mq = new MessageQueue(queuepath);
        System.Messaging.Message msg = new System.Messaging.Message
        {
            Priority = MessagePriority.Normal,
            Formatter = new XmlMessageFormatter(),
            Label = "test"
        };
        msg.Body = txtMessageBody.Text;
        msg.UseDeadLetterQueue = true;
        msg.UseJournalQueue = true;
        msg.AcknowledgeType = AcknowledgeTypes.FullReachQueue | AcknowledgeTypes.FullReceive;
        msg.AdministrationQueue = new MessageQueue(@".\private$\Ack");

        if (SendTransactional)
            mq.Send(msg, MessageQueueTransactionType.Single);
        else
            mq.Send(msg);

    Additional information: in the IIS logs on the destination machines I can see each message I send being recorded as a POST with a status code of 200. I am open to any suggestions.


  • Test Results window in VS2008 not showing results

    - by TimK
    I have an existing solution that has been working for a long time, containing around 600 tests in a couple of test projects. I recently moved to a new PC (Win7 x64) and installed a fresh copy of VS2008.

    When I first opened the solution on the new machine, the Test List Editor was completely empty. Trying to create a new test list caused the editor to refresh, and now it shows my test lists, but they're acting funny. I can select tests in the lists, and run them, but the results window doesn't usually update automatically to show the results of the latest test. It has done this when running a single test a couple of times, but even that is not consistent.

    The only way I can view the results is by manually going to the Test Runs window and connecting to individual test runs. When I do that, the results show up in the results list, but I can't check them to re-run the failed tests; the check boxes are all disabled.

    I guess I should describe the way it used to work, in case that was unusual: I used to select some tests from the Test Lists window, tell it to run them, and the results window would clear itself and then display the results from the current run. I could then check any tests that I wanted to re-run, and use the run/debug button in the results window to do so.

    Any ideas what's going on here?


  • Error when starting Windows Media Encoder

    - by George2
    Hello everyone, I am using the following code snippet on Windows Server 2003 x64 edition, with Windows Media Encoder 9. I get the following error when invoking the encoder.Start method:

        System.Runtime.InteropServices.COMException 0xC00D1B67

    My code snippet is below; does anyone have any ideas what is wrong?

        IWMEncSourceGroup SrcGrp;
        IWMEncSourceGroupCollection SrcGrpColl;
        SrcGrpColl = encoder.SourceGroupCollection;
        SrcGrp = (IWMEncSourceGroup)SrcGrpColl.Add("SG_1");

        IWMEncVideoSource2 SrcVid;
        IWMEncSource SrcAud;
        SrcVid = (IWMEncVideoSource2)SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_VIDEO);
        SrcAud = SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_AUDIO);
        SrcVid.SetInput("ScreenCap://ScreenCapture1", "", "");
        SrcAud.SetInput("Device://Default_Audio_Device", "", "");

        // Specify a file object in which to save encoded content.
        IWMEncFile File = encoder.File;
        string CurrentFileName = Guid.NewGuid().ToString();
        File.LocalFileName = CurrentFileName;
        CurrentFileName = File.LocalFileName;

        // Choose a profile from the collection.
        IWMEncProfileCollection ProColl = encoder.ProfileCollection;
        IWMEncProfile Pro;
        for (int i = 0; i < ProColl.Count; i++)
        {
            Pro = ProColl.Item(i);
            if (Pro.Name == "Screen Video/Audio High (CBR)")
            {
                SrcGrp.set_Profile(Pro);
                break;
            }
        }

        encoder.Start();

    Thanks in advance, George


  • Java Server Client Program I/O Exception

    - by AjayP
    I made this program: http://java.sun.com/docs/books/tutorial/networking/sockets/clientServer.html

    It works perfectly if I set the server's hostname to 127.0.0.1 or my computer's name (Ajay-PC). However, these two methods are LAN/local only, not internet. So I changed it to my internet IP (70.128.xxx.xxx etc.), but it didn't work. I checked canyouseeme.org and it said port 4444 was CLOSED. So I did a quick port forward:

        Name: My Java Program
        Start Port: 4444
        End Port: 4444
        Server IP: 10.0.0.12  (yes, this is my local IP, I checked)

    Then I tried canyouseeme.org again, and it said 4444 was OPEN. I ran my server/client program, but it has yet to work.

    So my problem is that the client/server program is not working over the internet, just locally. Something is blocking it and I don't know what.

    Computer:

    - Windows Vista x64
    - Norton AntiVirus 2010

    Thanks! I'll give best answer or whatever to whoever answers the best ;) :)


  • SUA + Visual Studio + pthreads

    - by vasek7
    Hi, I cannot compile this code under SUA:

        #include <unistd.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <pthread.h>

        void *thread_function(void *arg)
        {
            printf("thread_function started. Arg was %s\n", (char *)arg);
            // pause for 3 seconds
            sleep(3);
            // exit and return a message to another thread
            // that may be waiting for us to finish
            pthread_exit("thread one all done");
        }

        int main()
        {
            pthread_t a_thread;
            void *thread_result;
            // create a thread that starts to run 'thread_function'
            pthread_create(&a_thread, NULL, thread_function, (void *)"thread one");
            printf("Waiting for thread to finish...\n");
            // now wait for the new thread to finish
            // and get any returned message in 'thread_result'
            pthread_join(a_thread, &thread_result);
            printf("Thread joined, it returned %s\n", (char *)thread_result);
            exit(0);
        }

    I'm running on Windows 7 Ultimate x64 with Visual Studio 2008 and 2010, and I have installed:

    - Windows Subsystem for UNIX Utilities and SDK for Subsystem for UNIX-based Applications in Microsoft Windows 7 and Windows Server 2008 R2

    The include directories property of the Visual Studio project is set to "C:\Windows\SUA\usr\include".

    What do I have to configure in order to compile, run, and possibly debug pthreads programs in Visual Studio 2010 (or 2008)?


  • Surprising results with .NET multi-threading algorithm

    - by Myles J
    Hi, I recently wrote a C# console timetabling algorithm that is based on a combination of a genetic algorithm with a few brute-force routines thrown in. The initial results were promising, but I figured I could improve the performance by splitting the brute-force routines up to run in parallel on multi-processor architectures.

    To do this I used the well documented producer/consumer model (as documented in this fantastic article: http://www.albahari.com/threading/part2.aspx#_ProducerConsumerQWaitHandle). I changed my code to create one thread per logical processor during the brute-force routines. The performance gains on my workstation were very pleasing. I am running Windows XP on the following hardware:

    - Intel Core 2 Quad CPU 2.33 GHz
    - 3.49 GB RAM

    Initial tests indicated average performance gains of approx 40% when using 4 threads. The next step was to deploy the new multi-threading version of the algorithm to our higher-spec UAT server. Here is the spec of our UAT server:

    - Windows 2003 Server R2 Enterprise x64
    - 8 CPU (quad-core) AMD Opteron 2.70 GHz
    - 255 GB RAM

    After running the first round of tests we were all extremely surprised to find that the algorithm actually runs slower on the high-spec W2003 server than on my local XP workstation! In fact, the tests seem to indicate that it doesn't matter how many threads are generated (tests were run with the app spawning between 2 and 32 threads). The algorithm always runs significantly slower on the UAT W2003 server.

    How could this be? Surely the app should run faster on an 8-CPU (quad-core) machine than on my Core 2 Quad workstation? Why are we seeing no performance gains with the multi-threading on the W2003 server, whilst the XP workstation tests show gains of up to 40%?

    Any help or pointers would be appreciated. Regards, Myles
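
    For reference, a minimal sketch of the pattern described above (one worker per logical processor draining a shared queue); this is an illustration in C++11 rather than the poster's C# code, and all names are hypothetical:

        #include <condition_variable>
        #include <cstdio>
        #include <mutex>
        #include <queue>
        #include <thread>
        #include <vector>

        // A single shared queue fed by a producer and drained by N workers.
        class WorkQueue {
        public:
            void Push(int job) {
                { std::lock_guard<std::mutex> lock(m_); jobs_.push(job); }
                cv_.notify_one();
            }
            // Blocks until a job arrives or Finish() has been called and the
            // queue is drained; returns false in the latter case.
            bool Pop(int &job) {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                if (jobs_.empty()) return false;
                job = jobs_.front(); jobs_.pop();
                return true;
            }
            void Finish() {
                { std::lock_guard<std::mutex> lock(m_); done_ = true; }
                cv_.notify_all();
            }
        private:
            std::mutex m_;
            std::condition_variable cv_;
            std::queue<int> jobs_;
            bool done_ = false;
        };

        int main() {
            WorkQueue q;
            unsigned n = std::thread::hardware_concurrency(); // logical processors
            if (n == 0) n = 1;
            std::vector<std::thread> workers;
            for (unsigned i = 0; i < n; ++i)
                workers.emplace_back([&q] {
                    int job;
                    while (q.Pop(job)) { /* brute-force work on 'job' here */ }
                });
            for (int job = 0; job < 100; ++job) q.Push(job);
            q.Finish();
            for (auto &t : workers) t.join();
            std::printf("ran with %u workers\n", n);
        }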


  • Why can't nvcc find my Visual C++ installation?

    - by Jack Lloyd
    I'm running Windows 7 Pro x64 on a Core i5 with an NVIDIA 3100M, which is CUDA compatible. I've tried installing both the 32-bit and 64-bit CUDA toolkits from NVIDIA; unfortunately, with either of them I cannot compile anything. nvcc says "cannot find a supported cl version. Only MSVC 8.0 and MSVC 9.0 are supported".

    I have the x86 and x86-64 compilers installed via the Windows 7 SDK (compiler version 15.00.30729.01 for both arches). Both compilers are operating correctly; I've built and tested C and C++ code using them. I've tried running nvcc from command shells set up for both 32-bit and 64-bit compilation, and using the -ccbin command line option to nvcc to point it at the Visual C++ install directory.

    What is the right way of handling this setup? Is there some way I can make nvcc be more verbose about what is going on? The -v flag isn't terribly helpful; ideally there would be some way to make it show what it is finding versus what it's expecting to find. Will this work better if I install Visual C++ Express instead? Or is only a commercial version of VC++ supported for use with CUDA?


  • Linux 2.6.31 Scheduler and Multithreaded Jobs

    - by dsimcha
    I run massively parallel scientific computing jobs on a shared Linux computer with 24 cores. Most of the time my jobs are capable of scaling to 24 cores when nothing else is running on this computer. However, it seems that when even one single-threaded job that isn't mine is running, my 24-thread jobs (which I set to high nice values) only manage to get ~1800% CPU (using Linux notation). Meanwhile, about 500% of the CPU cycles (again, using Linux notation) are idle.

    Can anyone explain this behavior, and what I can do about it to get all of the 23 cores that aren't being used by someone else?

    Notes:

    - In case it's relevant, I have observed this on slightly different kernel versions, though I can't remember which off the top of my head.
    - The CPU architecture is x64. Is it at all possible that the fact that my 24-core jobs are 32-bit and the other jobs I'm competing with are 64-bit is relevant?

    Edit: One thing I just noticed is that going up to 30 threads seems to alleviate the problem to some degree. It gets me up to ~2100% CPU.


  • Linq to SQL Problem System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager

    - by luckyluke
    I have a really tricky thing going on here. My project has around 100 tables and they are all mapped by LINQ. Everything works fine in the dev and test environments. These environments are MS Win 2008 R2 servers with SQL 2008 SP1 databases; IIS and SQL are on different machines.

    Now, on the production environment, which is an MS Win 2003 x64 web farm plus a geoclustered SQL 2008, IT DOES NOT work. All I get is this exception:

        System.IndexOutOfRangeException: Index was outside the bounds of the array.
        at System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager3.TryCreateKeyFromValues(Object[] values, MultiKey& k)
        at System.Data.Linq.IdentityManager.StandardIdentityManager.IdentityCache2.Find(Object[] keyValues)
        at System.Data.Linq.ChangeProcessor.GetOtherItem(MetaAssociation assoc, Object instance)
        at System.Data.Linq.ChangeProcessor.BuildEdgeMaps()
        at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
        at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
        at ERS.IIMP.Services.ExposuresSrv.Update(Int32 ExpID, Int32 AssID) Services\ExposuresSrv.cs

    My question is: what the hell? They have precisely the same DBML, the DB has exactly THE SAME structure (when I take the DB from prod to test and mount it, everything works just great), and the binaries on the web server are the same. I seriously do not know what to do. Has anyone found that LINQ works on one environment and does not on a second? I am really lost here. I really hope you can help me :)


  • VERY simple C program won't compile with VC 64

    - by Paperflyer
    Here is a very simple C program:

        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            printf("sizeof(short) = %d\n", (int)sizeof(short));
            printf("sizeof(int) = %d\n", (int)sizeof(int));
            printf("sizeof(long) = %d\n", (int)sizeof(long));
            printf("sizeof(long long) = %d\n", (int)sizeof(long long));
            printf("sizeof(float) = %d\n", (int)sizeof(float));
            printf("sizeof(double) = %d\n", (int)sizeof(double));
            return 0;
        }

    While it compiles fine on Win32 (command line: cl main.c), it does not with the Win64 compiler ("c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\amd64\cl.exe" main.c). Specifically, it says:

        error LNK2019: unresolved external symbol printf referenced in function main

    As far as I understand this, it cannot link to printf, right? Obviously, I have the Microsoft Visual C++ 2008 compiler (Standard enu) installed for both x86 and x64, and am using the 64-bit flavor of Windows (7). What is the problem here?


  • Client unable to authenticate when connecting to WCF service

    - by davecoulter
    I have a WCF service hosted in a Windows service. The application is an intranet app, and I have programmatically set the bindings on both the service and the client as:

        NetTcpBinding aBinding = new NetTcpBinding(SecurityMode.Transport);
        aBinding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;
        aBinding.Security.Transport.ProtectionLevel = System.Net.Security.ProtectionLevel.EncryptAndSign;

    Both the service and client have endpoints configured with SPNs:

        EndpointAddress = new EndpointAddress(uri, EndpointIdentity.CreateSpnIdentity("Service1"));

    As far as I know, I have set up the bindings correctly, and I am usually able to connect to the service just fine. I did however run into a case where, on a server running Windows Server 2003 R2 x64 SP2, I get the following exception immediately when the client tries to connect:

        INNEREXCEPTION -- Exception Message:
        InvalidCredentialException: Either the target name is incorrect or the server has rejected the client credentials.
        Stack Trace:
        at System.Net.Security.NegoState.ProcessAuthentication(LazyAsyncResult lazyResult)
        at System.Net.Security.NegotiateStream.AuthenticateAsClient(NetworkCredential credential, String targetName, ProtectionLevel requiredProtectionLevel, TokenImpersonationLevel allowedImpersonationLevel)
        at System.ServiceModel.Channels.WindowsStreamSecurityUpgradeProvider.WindowsStreamSecurityUpgradeInitiator.OnInitiateUpgrade(Stream stream, SecurityMessageProperty& remoteSecurity)

    I get the exception when I try to connect to the service from another machine in the domain, but if I connect to the service on the same machine running the service, it works fine. The hosting service itself is running as a domain user account, but I have tried running the service as Local System and Network Service to no avail. I have checked the Local Security Policies for the server and didn't see anything amiss (i.e. 'Access this computer from the network' includes 'Everyone').

    Anyone have an idea of what could resolve this? I am wondering if I need to do something in Active Directory with respect to the service's SPN. I have read some about using setspn.exe to register or refresh SPNs, but I haven't needed to do this before. Why would this be working with other configurations but not the one above?


  • Alternate cause of BadImageFormatException in .NET Assembly?

    - by Phillip Knauss
    I'm working on a .NET 3.5 console application in C# which uses a VC++ unmanaged DLL. It ran without a problem when I worked on it a few weeks ago, but I'm coming back to it today and am now getting a BadImageFormatException ("An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)").

    My development workstation is running 64-bit Windows 7, and I do a fair amount of work with unmanaged code, so I immediately checked that the .NET assembly and the VC++ library both had x86 targets. They did. Just to be sure, I cleaned and rebuilt the VC++ library and the .NET assembly, to no avail.

    Neither system is doing anything particularly unusual. The VC++ library loads a binary data file and does some mathematical processing on its contents. The .NET assembly has the DllImports for the library and some code to wire it up. This all worked a few weeks ago.

    So now I'm left wondering if there's some other cause of BadImageFormatException that's less common than an x86/x64 conflict that I might be running into. Thanks.


  • Stack Overflow on Marshal.PtrToStructure reading wmv files

    - by Nick Udell
    Hi, I'm using a frame grabber class in order to capture and process each frame in a video. The class can be found here: http://www.codeproject.com/KB/graphics/FrameGrabber.aspx

    I'm having issues with running it, however. When loading the file, it attempts to marshal a video format pointer into a VideoInfoHeader (I'm using DirectShow.Net). The code that does this is as follows:

        videoInfo = (VideoInfoHeader)Marshal.PtrToStructure(mediaType.formatPtr, typeof(VideoInfoHeader));

    When I run this it immediately crashes out of the debugging environment, probably with a stack overflow. When stepping through, I can see that formatPtr always equals 93, though I do not know what to make of this, as I am fairly new to marshalling. I have checked that the video runs fine in Windows Media Player.

    This step is essential for finding the dimensions of the video and also the size of the header, which needs to be skipped before the frames can be read. I am running Windows 7 x64.

    Any help on this would be much appreciated; I must've tried fifteen different frame-grabbing techniques.


  • Visual C++ Assembly link library troubles

    - by Sanarothe
    Hi. I'm having a problem getting my projects built in VC++ Express 2008. I'm using a library, Irvine32.inc/lib. INCLUDE Irvine32.inc works for me at school (on already-configured VS environments) by default, but at home (Windows 7 x64) I'm having a boatload of issues.

    My original problem was that a file referenced by Irvine32.inc, in the same folder, 'could not be opened.' I added the Irvine folder to the include path for the specific project; progress. Then I was getting an error with mt.exe, but a suggestion on MSDN said to turn off antivirus, and now the project does build. But when I run a program that does NOT reference anything in Irvine32, it tells me repeatedly that my project has triggered a breakpoint, and allows me to continue or break. Continue just pops up the same window; break loads another popup telling me that "No symbols are loaded for any call stack frame. Source code cannot be displayed." This popup lets me view the disassembly. I tested it with and without working statements; it just throws the same breakpoint on the first line of code.

    Now, if I run a program that DOES require something from the include file, in this case DumpRegs:

        INCLUDE Irvine32.inc
        .data
        .code
        main PROC
            mov ebx,1000h
            mov eax,1000h
            add eax,ebx
            call DumpRegs
        main ENDP
        END main

    it gives me:

        1>main.obj : error LNK2019: unresolved external symbol _DumpRegs@0 referenced in function _main@0
        1>C:\Users\Cameron\csis165\Lab8_CCarroll\Debug\Lab8_CCarroll.exe : fatal error LNK1120: 1 unresolved externals

    This does NOT happen when I build a project from the book author's examples, which has the same INCLUDE statement. I'm baffled. :(


  • Microsoft OLE DB Provider for SQL Server error '80040e14' Could not find stored procedure

    - by BBlake
    I am migrating a classic ASP web app to new servers. The database back end is migrating from SQL Server 2000 to SQL Server 2008, and the app is moving from Win2000 x86 to Win2003 R2 x64. I am getting the above error on every single stored procedure call within the application.

    I have verified:

    - Yes, the SQL user is set up, using the correct username and password
    - Yes, the SQL user has execute permissions on the stored procedures in the database
    - Yes, I have updated the TypeLib references to the new UUID
    - Yes, I have logged into the database via SSMS with the SQL user id, and it can see and execute the stored procedures just fine in SSMS, but not from the web app
    - Yes, the SQL user has the database set as its default database

    The most frustrating thing is it works fine on the DEV server, but not on the production server. I have gone through every IIS setting 5 or 6 times, and the web app is set up precisely the same in both environments. The only difference is the database server name in the connection string (DEV vs prod).

    EDIT: I have also tried pointing the prod web box at the dev database server and get the same error, so I'm fairly sure the issue isn't on the database side.


  • Using PHP SoapClient with Java JAX-WS RI (Webservice)

    - by Kau-Boy
    For a new project, we want to build a web service in Java using JAX-WS RI, and for the web service client we want to use PHP. In a small tutorial about JAX-WS RI I found this example web service:

        package webservice;

        import javax.jws.WebService;
        import javax.jws.soap.SOAPBinding;
        import javax.jws.soap.SOAPBinding.Style;

        @WebService
        @SOAPBinding(style = Style.RPC)
        public class Calculator {
            public long addValues(int val1, int val2) {
                return val1 + val2;
            }
        }

    and for the web server:

        package webservice;

        import javax.xml.ws.Endpoint;
        import webservice.Calculator;

        public class CalculatorServer {
            public static void main(String args[]) {
                Calculator server = new Calculator();
                Endpoint endpoint = Endpoint.publish("http://localhost:8080/calculator", server);
            }
        }

    Starting the server and browsing the WSDL with the URL "http://localhost:8080/calculator?wsdl" works perfectly. But calling the web service from PHP fails. My very simple PHP call looks like this:

        $client = new SoapClient('http://localhost:8080/calculator?wsdl', array('trace' => 1));
        echo 'Sum: '.$client->addValues(4, 5);

    But then I get either a "Fatal error: Maximum execution time of 60 seconds exceeded..." or an "Uncaught SoapFault exception: [WSDL] SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://localhost:8080/calculator?wsdl' ..." exception.

    I have tested the PHP SoapClient() with other web services and they work without any issues. Is there a known issue with JAX-WS RI in combination with PHP, or is there an error in my web service I didn't see? I have found this bug report, but even updating to PHP 5.3.2 didn't solve the problem. Can anyone tell me what to do?

    By the way, my Java version, running on Windows 7 x64, is the following:

        java version "1.6.0_17"
        Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
        Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)


  • Trying to build the basic Python extension example fails (Windows)

    - by Alexandros
    Hello, I have Python 2.6 and Visual Studio 2008 running on a Win7 x64 machine. When I try to build the basic Python extension example in C ("example_nt", as found in the Python 2.6 source distribution), it fails:

        python setup.py build

    This results in:

        running build
        running build_ext
        building 'aspell' extension
        Traceback (most recent call last):
          File "setup.py", line 7, in <module>
            ext_modules = [module1])
          File "C:\Python26\lib\distutils\core.py", line 152, in setup
            dist.run_commands()
          File "C:\Python26\lib\distutils\dist.py", line 975, in run_commands
            self.run_command(cmd)
          File "C:\Python26\lib\distutils\dist.py", line 995, in run_command
            cmd_obj.run()
          File "C:\Python26\lib\distutils\command\build.py", line 134, in run
            self.run_command(cmd_name)
          File "C:\Python26\lib\distutils\cmd.py", line 333, in run_command
            self.distribution.run_command(command)
          File "C:\Python26\lib\distutils\dist.py", line 995, in run_command
            cmd_obj.run()
          File "C:\Python26\lib\distutils\command\build_ext.py", line 343, in run
            self.build_extensions()
          File "C:\Python26\lib\distutils\command\build_ext.py", line 469, in build_extensions
            self.build_extension(ext)
          File "C:\Python26\lib\distutils\command\build_ext.py", line 534, in build_extension
            depends=ext.depends)
          File "C:\Python26\lib\distutils\msvc9compiler.py", line 448, in compile
            self.initialize()
          File "C:\Python26\lib\distutils\msvc9compiler.py", line 358, in initialize
            vc_env = query_vcvarsall(VERSION, plat_spec)
          File "C:\Python26\lib\distutils\msvc9compiler.py", line 274, in query_vcvarsall
            raise ValueError(str(list(result.keys())))
        ValueError: [u'path']

    What can I do to fix this? Any help will be appreciated.


  • Visual Studio 2008 / ASP.NET 3.5 / C# -- issues with intellisense, references, and builds

    - by goober
    Hey all, hoping you can help me; the strangest thing seems to have happened with my VS install.

    System config: Windows 7 Pro x64, Visual Studio 2008 SP1, C#, ASP.NET 3.5.

    I have two web site projects in a solution. I am referencing NUnit / NHibernate (I did this by right-clicking on the project and selecting "Add Reference"; I've done this for several projects in the past). Things were working fine but recently stopped working, and I can't figure out why.

    IntelliSense completely disappears for any files in my App_Code directory, and none of the references are recognized there (they are recognized in any file in the root directory of the web site project). Additionally, pretty simple commands like the following (in Page_Load) fail (assume TextBox1 is definitely an element on the page):

        if (Page.IsPostBack)
        {
            string test1;
            test1 = TextBox1.Text;
        }

    It says that all the page elements are null or that it can't access them. At first I thought it was me, but due to the combination of issues, it seems to be Visual Studio itself.

    I've tried clearing the temp directories and rebuilding the solution. I've also tried Tools -> Options -> Text Editor settings to ensure IntelliSense is turned on.

    I'd appreciate any help you can give! Thanks, Sean


  • Why use bin2hex when inserting binary data from PHP into MySQL?

    - by Atli
    I heard a rumor that when inserting binary data (files and such) into MySQL, you should use the bin2hex() function and send it as a HEX-coded value, rather than just using mysql_real_escape_string on the binary string and using that.

        // What you supposedly should do
        $hex = bin2hex($raw_bin);
        $sql = "INSERT INTO `table`(`file`) VALUES (X'{$hex}')";

        // Rather than
        $bin = mysql_real_escape_string($raw_bin);
        $sql = "INSERT INTO `table`(`file`) VALUES ('{$bin}')";

    It is supposedly for performance reasons: something to do with how MySQL handles large strings versus how it handles HEX-coded values.

    However, I am having a hard time confirming this. All my tests indicate the exact opposite: that the bin2hex method is ~85% slower and uses ~24% more memory. (I am testing this on PHP 5.3, MySQL 5.1, Win7 x64, using a fairly simple insert loop.) For instance, I have a graph of the private memory usage of the mysqld process while the test code was running.

    Does anybody have any explanations or resources that would clarify this? Thanks.


  • Visual Studio 2010 / ASP.NET MVC / Publish

    - by SevenCentral
    I just did a clean install on Windows 7 x64 Professional with the final release of Visual Studio 2010 Premium. In order to duplicate what I'm experiencing, do the following:

    1. Create a new ASP.NET MVC 2 Web Application
    2. Right-click the project and select Properties
    3. On the Web tab, select "Use Local IIS Web Server"
    4. Click on Create Virtual Directory
    5. Save all
    6. Unload the project
    7. Edit the project file
    8. Change MvcBuildViews to true
    9. Save all
    10. Reload the project
    11. Right-click the project and select Publish
    12. Choose the file system publish method
    13. Enter a target location
    14. Choose Delete all existing files
    15. Select Publish
    16. Right-click the project
    17. Select Publish

    Each time I do the above I get the following error:

        It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level...

    The error originates from obj\debug\package\packagetmp\web.config, relative to the project directory. I can repeat this all day long with any MVC 2 project I've built. In order to fix this problem, I need to set MvcBuildViews to false in the project file. That's not really an option.

    This wasn't a problem in Visual Studio 2008, and it seems to be an issue with the way the Publish command stages files beneath the project directory. Can anyone else duplicate this error? Is this a bug or by design? Is there a fix, workaround, etc...? Thanks.


  • Global.asax parser errors when deploying MVC 1 application to remote server.

    - by mannish
    So we're having some issues deploying an ASP.NET MVC app to a client site. Basically, when we try to test the app from localhost, we get the dreaded Global.asax parser error indicating it could not load the application global.

    Research indicates there are basically 4 possible reasons for the exception we're seeing:

    1. The solution hasn't been built. This clearly isn't the case, since we can deploy it here and it runs fine on any machine we deploy to, AND we had to build and publish the darn thing to deploy it anyway.
    2. The Global.asax namespace inheritance does not match the application global code file. Again, we double-checked this, and since it runs just fine here, that can't be the issue.
    3. Miscellaneous non-descript IIS/VS.NET mischief. Basically, something gets wonky in IIS or VS.NET and the web server won't behave correctly for this application. We've done cleans and rebuilds, we've deleted the virtual dir and recreated it, and we've performed all of the IIS munging that we've found elsewhere online, plus various combinations of IIS bounces, server reboots, virtual dir/application recreation, etc.
    4. Code-level permissions issue. We've verified full trust in machine/web config in the framework directory, we've set .NET trust to full in IIS, we've granted Everyone full control on the directories just to hit it with the security hammer, etc.

    The pertinent details:

    - Windows Server 2008 x64
    - IIS 7, 32-bit compatible app pool (app was written on a 32-bit OS, compiled for Any CPU)
    - App pool identity set to NetworkService
    - Microsoft ASP.NET MVC 1.0
    - XCopy deployment

    We deployed another read-only app just fine. The significant difference in this app is the use of NHibernate and Log4Net, which require full trust. Additionally, the actual project name of the web project differs from the default namespace; however, the Inherits namespace in Global.asax and the Global.asax.cs files match, so this shouldn't be an issue.

    Anybody have any bright ideas? We're officially down to just the dim ones.


  • Port Win32 DLL hook to Linux

    - by peachykeen
    I have a program (NWShader) which hooks into a second program's OpenGL calls (NWN) to do post-processing effects and whatnot. NWShader was originally built for Windows, generally modern versions (Win32), and uses both DLL exports (to get Windows to load it and grab some OpenGL functions) and Detours (to hook into other functions). I'm using the trick where Windows will look in the current directory for any DLLs before checking the system dir, so it loads mine. I have one DLL that redirects with this method:

        #pragma comment(linker, "/export:oldFunc=nwshader.newFunc")

    to send calls to a differently named function in my own DLL. I then do any processing and call the original function from the system DLL.

    I need to port NWShader to Linux (NWN exists in both flavors). As far as I can tell, what I need to make is a shared library (.so file). If this is preloaded before the NWN executable (I found a shell script to handle this), my functions will be called. The only problems are that I need to call the original function (I would use various dynamic loading methods for this, I think) and that I need to be able to do Detours-like hooking of internal functions.

    At the moment I'm building on Ubuntu 9.10 x64 (with the 32-bit compiler flags). I haven't been able to find much on Google to help with this, but I don't know exactly what the *nix community calls it. I can code C++, but I'm more used to Windows. Being OpenGL, the only part that needs modifying to be compatible with Linux is the hooking code and the calls.

    Is there a simple and easy way to do this, or will it involve recreating Detours and dynamically loading the original function addresses?
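
    As an illustration of the preload approach described above, a minimal sketch of an LD_PRELOAD interposer that resolves the original function with dlsym(RTLD_NEXT, ...); this is not NWShader's actual code, it assumes the OpenGL development headers are installed, and glClear is used purely as an example of a hooked call:

        // hook.cpp -- a minimal LD_PRELOAD interposer sketch.
        // Build (32-bit, since NWN is a 32-bit binary):
        //   g++ -shared -fPIC -m32 -o libhook.so hook.cpp -ldl
        // Run:
        //   LD_PRELOAD=./libhook.so ./nwmain
        #ifndef _GNU_SOURCE
        #define _GNU_SOURCE   // for RTLD_NEXT
        #endif
        #include <dlfcn.h>
        #include <GL/gl.h>    // assumes the OpenGL dev headers are present
        #include <cstdio>

        // Because this .so is preloaded, the dynamic linker resolves glClear here
        // first; dlsym(RTLD_NEXT, ...) skips this library and finds the real
        // implementation in libGL, which plays the role the system DLL did on Windows.
        extern "C" void glClear(GLbitfield mask) {
            typedef void (*glClear_fn)(GLbitfield);
            static glClear_fn real_glClear =
                (glClear_fn)dlsym(RTLD_NEXT, "glClear");

            // ... pre-call work (state capture, post-processing setup) ...
            real_glClear(mask);   // forward to the original implementation
            // ... post-call work ...
        }

    This covers calls that cross the shared-library boundary; hooking a program's internal (non-exported) functions still needs a Detours-style trampoline, since the dynamic linker never sees those calls.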

