Search Results

Search found 18842 results on 754 pages for 'the machine'.


  • Possible Data Execution Prevention (DEP) problem in Windows 7

    - by Joel in Gö
    I have a serious problem with my .NET program. It calls a native DLL, and then crashes instantly because it can't find a native method. This is behaviour we have seen before, whereby the C# compiler, in its infinite wisdom, sets the flag that says the program is DEP compatible, even if it calls a native DLL which patently is not. We have the standard workaround for this, where the flag is set to Not DEP Compatible in a post-build step, and this works fine. Everywhere except on my machine.

    I have Windows 7 32-bit, and the program works fine on the Win 7 64-bit machines that we have, as well as on Vista and XP; we have not yet been able to check on another Win7 32-bit machine. However, on my machine the DataExecutionPrevention_SupportPolicy is 0, i.e. we have successfully switched DEP off. Does anyone know whether there is some situation in which it can still act? Or any other mechanism which could have the same effect? The DLL in question also works fine when called from a native program. We are running out of ideas... any help would be much appreciated!
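    A note on the workaround mentioned above: the usual way to clear the DEP flag in a post-build step is editbin from the Visual Studio tools. A minimal sketch, assuming editbin.exe is on the path and MyProgram.exe stands in for the real output name:

        editbin.exe /NXCOMPAT:NO MyProgram.exe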

    Read the article

  • Cassandra random read speed

    - by Jody Powlette
    We're still evaluating Cassandra for our data store. As a very simple test, I inserted a value for 4 columns into the Keyspace1/Standard1 column family on my local machine, amounting to about 100 bytes of data. Then I read it back as fast as I could by row key. I can read it back at 160,000/second. Great.

    Then I put in a million similar records, all with keys in the form of X.Y where X in (1..10) and Y in (1..100,000), and I queried for a random record. Performance fell to 26,000 queries per second. This is still well above the number of queries we need to support (about 1,500/sec).

    Finally I put ten million records in, from 1.1 up through 10.1000000, and randomly queried for one of the 10 million records. Performance is abysmal at 60 queries per second and my disk is thrashing around like crazy. I also verified that if I ask for a subset of the data, say the 1,000 records between 3,000,000 and 3,001,000, it returns slowly at first and then as they cache, it speeds right up to 20,000 queries per second and my disk stops going crazy.

    I've read all over that people are storing billions of records in Cassandra and fetching them at 5-6k per second, but I can't get anywhere near that with only 10mil records. Any idea what I'm doing wrong? Is there some setting I need to change from the defaults? I'm on an overclocked Core i7 box with 6 gigs of RAM, so I don't think it's the machine.

    Here's my code to fetch records, which I'm spawning into 8 threads to ask for one value from one column via row key:

        ColumnPath cp = new ColumnPath();
        cp.Column_family = "Standard1";
        cp.Column = utf8Encoding.GetBytes("site");
        string key = (1 + sRand.Next(9)) + "." + (1 + sRand.Next(1000000));
        ColumnOrSuperColumn logline = client.get("Keyspace1", key, cp, ConsistencyLevel.ONE);

    Thanks for any insights.

    Read the article

  • postgresql table for storing automation test results

    - by Martin
    I am building an automation test suite which is running on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests, for which we will store the following information:

    - test ID (a GUID)
    - test name
    - test description
    - status (running, done, waiting to be run)
    - progress (%)
    - start time of test
    - end time of test
    - test result
    - latest screenshot of the running test (updated every 30 seconds)

    The number of tests isn't huge (say a few thousand) and each machine (say, 50 of them) has a service which checks the database and figures out if it's time to start a new automated test on that machine.

    How should I organize my SQL table to store all the information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and create a new table with more columns), how should I proceed? Should the new attributes just be in a different table?

    I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I just store the screenshots in their own table to simplify the replication? Thanks!
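    As a concrete starting point, here is a minimal sketch of the single-table layout described above, with the screenshot split into its own table so the main table replicates cheaply; all names and types are assumptions, not a recommendation from the thread:

        CREATE TABLE test_run (
            test_id     uuid PRIMARY KEY,
            name        text NOT NULL,
            description text,
            status      text NOT NULL,       -- running / done / waiting
            progress    integer,             -- percent complete
            started_at  timestamptz,
            ended_at    timestamptz,
            result      text
        );

        -- screenshots kept separately; can be excluded from replication
        CREATE TABLE test_screenshot (
            test_id     uuid PRIMARY KEY REFERENCES test_run (test_id),
            captured_at timestamptz,
            image       bytea
        );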

    Read the article

  • Problem calling web service from within JBOSS EJB Service

    - by Rob Goodwin
    I have a simple web service sitting on our internal network. I used SoapUI to do a bit of testing, generated the service classes from the WSDL, and wrote some Java code to access the service. All went as expected, as I was able to create the service proxy classes and make calls. Pretty simple stuff. The only speed bump was getting Java to trust the certificate from the machine providing the web service. That was not a technical problem, but rather my lack of experience with SSL-based web services.

    Now onto my problem. I coded up a simple EJB service and deployed it into JBoss Application Server 4.3, and now get the following error in the code that previously worked:

        12:21:50,235 WARN [ServiceDelegateImpl] Cannot access wsdlURL: https://WS-Test/TestService/v2/TestService?wsdl

    I can access the WSDL file from a web browser running on the same machine as the application server, using the URL in the error message. I am at a loss as to where to go from here. I turned on the debug logs in JBoss and got nothing more than what I showed above. I have done some searching on the net and found the same error in some questions, but those questions had no answers.

    The web service classes were generated with JAX-WS 2.2 using the wsimport ant task and placed in a jar that is included in the EJB package. JBoss is deployed on RHEL 5.4, while the standalone testing was done on Windows XP.
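    One difference worth checking, offered as a guess rather than a confirmed diagnosis: the certificate trust that was set up for the standalone tests lives in that JVM's trust store, and the JBoss JVM on RHEL may be using a different one. A minimal sketch of pointing a JVM at a specific trust store (paths and password are assumptions), e.g. via JAVA_OPTS before starting JBoss:

        JAVA_OPTS="$JAVA_OPTS -Djavax.net.ssl.trustStore=/opt/certs/internal.jks \
                              -Djavax.net.ssl.trustStorePassword=changeit"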

    Read the article

  • Access problems with System.Diagnostics.Process in webservice

    - by Martin
    Hello everyone. I have problems with executing a process in a web service method on a Windows Server 2003 machine. Here is the code:

        Dim target As String = "C:\dnscmd.exe"
        Dim fileInfo As System.IO.FileInfo = New System.IO.FileInfo(target)
        If Not fileInfo.Exists Then
            Throw New System.IO.FileNotFoundException("The requested file was not found: " + fileInfo.FullName)
        End If
        Dim startinfo As New System.Diagnostics.ProcessStartInfo("C:\dnscmd.exe")
        startinfo.UseShellExecute = False
        startinfo.Arguments = "\\MYCOMPUTER /recordadd mydomain.no " & dnsname & " CNAME myhost.mydomain.no"
        startinfo.UserName = "user1"
        Dim password As New SecureString()
        For Each c As Char In "secretpassword".ToCharArray()
            password.AppendChar(c)
        Next
        startinfo.Password = password
        Process.Start(startinfo)

    I know the process is being executed, because I use Process Monitor (procmon.exe) on the server and it tells me that C:\dnscmd.exe is called with the right parameters. The full command line value from procmon is:

        "C:\dnscmd.exe" \\MYCOMPUTER /recordadd mydomain.no mysubdomain CNAME myhost.mydomain.no

    The user of the created process on the server is NT AUTHORITY\SYSTEM. BUT, the DNS entry will not be added! Here is the weird thing: I KNOW the user (user1) I authenticate with has administrative rights (it's the same user I use to log on to the machine with), but the owner of the process in procmon says NT AUTHORITY\SYSTEM (which is not the user I use to authenticate). Weirder still: if I log on to the server, copy the command line value read from the procmon logging, and paste it into a command line window, it works! Reading procmon after this shows that user1 owns the dnscmd process.

    Why doesn't user1 become the owner of the process started with System.Diagnostics.Process? Is that the reason why the command doesn't work?

    Read the article

  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into two parts: a Front End that holds all the objects except tables, and a Back End that holds the tables. The msdn page links to the article Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability, which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on the server/shared folder while the Front End is distributed to each user. That implies that each time any changes are made to the Front End, they need to be deployed to every user machine.

    My question is: assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers, like possible corruption etc.? Thank you.

    EDIT: Just to clarify, the scenario specified in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about what the dangers are if that is not done. E.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case?

    It is a follow-up to my previous question, From SQL Server to MS Access 2007.

    Read the article

  • Photoshop CS5 not recognising activeDocument

    - by Max Kielland
    I wrote a quite big script for Photoshop CS5.1 on my 64-bit Vista machine. Now when I run the very same script on my new 64-bit Windows 7 machine, the Adobe ExtendScript Toolkit complains about activeDocument (no such element) in this simple script:

        #target photoshop
        var pDoc = app.activeDocument;
        alert("Done!");

    I have tried both with and without #target, and choosing the target in the ExtendScript Toolkit. Is there something I have missed, or do I need to install something more? I only installed the 64-bit version of Photoshop. Is it that the 32-bit Photoshop has the script extensions? I don't see why I need to install both the 32-bit and 64-bit versions if I'm only going to use the 64-bit version.

    EDIT: I installed the 32-bit version as well. Tried the same script against 32 and 64 bit, still no difference.

    SOLVED: The mystery is solved. It is embarrassingly simple if you interpret the error message more carefully. Of course I can't get an activeDocument if there are no documents open in Photoshop, duh!?! I interpreted it as the statement activeDocument not being recognised, but of course if I have no document there is no such element (as a Photoshop document) to give me. I'm used to C++ and would expect the result to be a NULL value or similar if there is a problem getting the document... excuses, excuses ;) Well, if someone else should run into the same problem, here is the answer at my expense :D
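    Given the resolution above, a guard along these lines (a minimal sketch in the same ExtendScript dialect) avoids the misleading error when no document is open:

        #target photoshop
        if (app.documents.length > 0) {
            var pDoc = app.activeDocument;   // safe: at least one document exists
            alert("Done!");
        } else {
            alert("Open a document first.");
        }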

    Read the article

  • Qt moc failure without an error message

    - by Robert Parker
    So I'm pretty new to Qt, and I've just inherited a project from someone else who is also new to Qt. He isn't around this week btw. We are using Visual Studio 2008, and have the latest version of Qt installed (4.6.2). The project builds on my coworker's machine fine, and I can get the project from svn and build it directly. But under any other circumstances it refuses to build on my machine, and it doesn't give me much of an explanation why. Even if I just do a 'build clean' and then a 'build' it doesn't work. Any slight modification will make it fail.

    When I try to build the entire project I get the error message:

        1>Moc'ing MatrixTypeInterface.h...
        1>moc: Cannot create .\GeneratedFiles\Debug\moc_MatrixTypeInterface.cpp;.\GeneratedFiles\Debug\moc_matrixtypeinterface.cpp
        1>Project : error PRJ0019: A tool returned an error code from "Moc'ing MatrixTypeInterface.h..."

    The moc tool doesn't give any sort of error message as to why it isn't working, and I wasted most of yesterday trying to figure out why. I got the command that VS was using to call moc, and I entered it on the command line myself. It didn't write anything to the screen. Any ideas?

    Read the article

  • Applets failing to load

    - by Roy Tang
    While testing our setup for user acceptance testing, we got some reports that Java applets in our web application would occasionally fail to load. The environment where it was reported was WinXP/IE6, and there were no errors found in the Java console. Obviously we'd like to avoid this. What sort of things should we be checking for here? On our local servers, everything seems fine. There's some turnaround time when sending questions to the on-site guy, so I'd like to cover as many possible causes as possible.

    Some more info:

    - We have multiple applets; in the instance that they fail loading, all of them fail loading.
    - The applet jar files vary in size from 2MB to 8MB.
    - I'm told it seems more likely to happen if the applet isn't cached yet, i.e. if they've been able to load the applets once on a given machine, further runs on that machine go smoothly.

    I'm wondering if there's some sort of network transfer error when downloading the applets, but I don't know how to verify that. Any advice is welcome!

    Read the article

  • Autologin for web application

    - by Maulin
    We want an AutoLogin feature to allow users to log in directly, using a link, into our web application. What is the best way to achieve this? We have the following approaches in mind:

    1) Store user credentials (username/password) in a cookie and send the cookie for authentication, e.g. http://www.mysite.com/AutoLogin (here username/password will be passed in the cookie). Or pass the user credentials in the link URL: http://www.mysite.com/AutoLogin?userid=<userid>&password=<password>

    2) Generate a random token and store the token and the user's IP in the server-side database. When the user logs in using the link, validate the token and user IP on the server, e.g. http://www.mysite.com/AutoLogin?token=<token>

    The problem with the 1st approach is that if a hacker copies the link/cookie from the user's machine to another machine, he can log in. The problem with the 2nd approach is that the user IP will be the same for all users of the same organization behind a proxy.

    Which one is better from a security perspective? If there is a better solution other than those mentioned above, please let us know.
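    For the token approach, a minimal C# sketch of generating an unguessable token (the names are assumptions; the mapping of token to user and IP would live in the server-side database):

        using System;
        using System.Security.Cryptography;

        static string NewLoginToken()
        {
            byte[] bytes = new byte[32];              // 256 bits of randomness
            using (var rng = new RNGCryptoServiceProvider())
            {
                rng.GetBytes(bytes);                  // cryptographically strong RNG
            }
            return Convert.ToBase64String(bytes);     // store alongside user id + IP, with an expiry
        }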

    Read the article

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a mono-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do.

    I have a list of tests, with the following attributes:

    - uri: a URI to test (could be HTTP/HTTPS/SSH/local);
    - depends: an associative array of tests/values that this test depends on;
    - join: a list of DB joins to be added when selecting items to process in this test;
    - depends_db: additional conditions to add to the DB request when selecting items to process in this test.

    The program builds a dependency tree, beginning with the tests that have no dependencies. For each test:

    - a list of items is selected from the database using the conditions (results of depending tests, joins and depends_db);
    - the list of items is sent to the URI (using POST or stdin);
    - the result is retrieved as a YAML file listing the state and comments for the test for each tested item;
    - the results are stored in the DB;
    - the test returns, allowing depending tests to be performed.

    Finally, the program generates reports (CSV, DB, graphviz) of the performed tests.

    The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:

    - backup: hosted on the backup machine(s), called through HTTP, checks if the machines' backup went well;
    - DNS: hosted on the local machine, called via stdin, checks if the machines' fqdn have a valid DNS entry.

    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?
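    Since a Python rewrite is being considered, here is a minimal sketch of the dependency-ordered, parallel scheduling core; it assumes a modern Python (graphlib is standard library from 3.9 on) and leaves out the DB selection and URI dispatch entirely:

        from concurrent.futures import ThreadPoolExecutor
        from graphlib import TopologicalSorter

        def run_all(tests, run_one, workers=8):
            # tests: dict mapping test name -> set of names it depends on
            ts = TopologicalSorter(tests)
            ts.prepare()
            with ThreadPoolExecutor(max_workers=workers) as pool:
                while ts.is_active():
                    ready = ts.get_ready()                    # deps all satisfied
                    futures = {pool.submit(run_one, t): t for t in ready}
                    for future, name in futures.items():
                        future.result()                       # re-raise test failures
                        ts.done(name)                         # unblock dependents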

    Read the article

  • Can't read from RSOP_RegistryPolicySetting WMI class in root\RSOP namespace

    - by JCCyC
    The class is documented at http://msdn.microsoft.com/en-us/library/aa375050%28VS.85%29.aspx and from this page it seems it's not an abstract class: http://msdn.microsoft.com/en-us/library/aa375084%28VS.85%29.aspx

    But whenever I run the code below I get an "Invalid Class" exception in ManagementObjectSearcher.Get(). So, does this class exist or not?

        ManagementScope scope;
        ConnectionOptions options = new ConnectionOptions();
        options.Username = tbUsername.Text;
        options.Password = tbPassword.Password;
        options.Authority = String.Format("ntlmdomain:{0}", tbDomain.Text);
        scope = new ManagementScope(String.Format("\\\\{0}\\root\\RSOP", tbHost.Text), options);
        scope.Connect();
        ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope,
            new ObjectQuery("SELECT * FROM RSOP_RegistryPolicySetting"));
        foreach (ManagementObject queryObj in searcher.Get())
        {
            wmiResults.Text += String.Format("id={0}\n", queryObj["id"]);
            wmiResults.Text += String.Format("precedence={0}\n", queryObj["precedence"]);
            wmiResults.Text += String.Format("registryKey={0}\n", queryObj["registryKey"]);
            wmiResults.Text += String.Format("valueType={0}\n", queryObj["valueType"]);
        }

    In the first link above, it lists as a requirement something called a "MOF": "Rsopcls.mof". Is this something I should have but have not? How do I obtain it? Is it necessary on the querying machine or the queried machine? Or both? I do have two copies of this file:

        C:\Windows>dir rsop*.mof /s
         Volume in drive C has no label.
         Volume Serial Number is 245C-A6EF

         Directory of C:\Windows\System32\wbem
        02/11/2006  05:22    100.388 rsop.mof
                    1 File(s)    100.388 bytes

         Directory of C:\Windows\winsxs\x86_microsoft-windows-grouppolicy-base-mof_31bf3856ad364e35_6.0.6001.18000_none_f2c4356a12313758
        19/01/2008  07:03    100.388 rsop.mof
                    1 File(s)    100.388 bytes

         Total Files Listed:
                    2 File(s)    200.776 bytes
                    0 Dir(s)   6.625.456.128 bytes free

    Read the article

  • 32 bit dllimport generating incorrect format error (0x8007000b) on win7 x64 platform

    - by DFP
    Hello, I'm trying to install and run a 32-bit application on a Win7 x64 machine. The application is built as a Win32 app. It runs fine on 32-bit platforms. On the x64 machine it installs correctly in the Programs (x86) directory and runs fine until I make a call into a 32-bit DLL. At that time I get the incorrect format error (0x8007000B), indicating it is trying to load a DLL of the wrong bitness. Indeed, it is trying to load the 64-bit DLL from the System32 directory rather than the 32-bit version in the SysWOW64 directory.

    Another 32-bit application provided by the DLL vendor runs correctly, and it does load the 32-bit DLL from the SysWOW64 directory. I do not have source to their application to see how they are accessing the DLL. I'm using DllImport as shown below to access the DLL. Is there a way to decorate the DllImport calls to force it to load the 32-bit version? Any thoughts appreciated. Thanks, DP

        public static class Micronas
        {
            [DllImport(@"UAC2.DLL")]
            public static extern short UacBuildDeviceList(uint uFlags);

            [DllImport(@"UAC2.DLL")]
            public static extern short UacGetNumberOfDevices();

            [DllImport(@"UAC2.DLL")]
            public static extern uint UacGetFirstDevice();

            [DllImport(@"UAC2.DLL")]
            public static extern uint UacGetNextDevice(uint handle);

            [DllImport(@"UAC2.DLL")]
            public static extern uint UacSetXDFP(uint handle, short adr, uint data);

            [DllImport(@"UAC2.DLL")]
            public unsafe static extern uint UacGetXDFP(uint handle, short adr, IntPtr data);
        }
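    One hedged observation, not a confirmed diagnosis: on x64 Windows a 32-bit process asking for System32 is transparently redirected to SysWOW64, so a process that genuinely loads the 64-bit DLL from System32 is usually running as 64-bit, which for .NET code typically means an executable compiled as AnyCPU. If that is the case here, forcing the executable to 32-bit would make the DllImport resolve the 32-bit DLL; a sketch using the corflags tool from the SDK (MyApp.exe stands in for the real name):

        corflags MyApp.exe /32BIT+

    The same effect comes from setting the project's Platform Target to x86 and rebuilding.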

    Read the article

  • When is a>a true ?

    - by Cricri
    Right, I think I really am living a dream. I have the following piece of code which I compile and run on an AIX machine (AIX 3 5, PowerPC_POWER5 processor type, IBM XL C/C++ for AIX, V10.1, Version: 10.01.0000.0003):

        #include <stdio.h>
        #include <math.h>

        #define RADIAN(x) ((x) * acos(0.0) / 90.0)

        double nearest_distance(double radius, double lon1, double lat1, double lon2, double lat2)
        {
            double rlat1 = RADIAN(lat1);
            double rlat2 = RADIAN(lat2);
            double rlon1 = lon1;
            double rlon2 = lon2;
            double a = 0, b = 0, c = 0;

            a = sin(rlat1)*sin(rlat2) + cos(rlat1)*cos(rlat2)*cos(rlon2-rlon1);
            printf("%lf\n", a);
            if (a > 1) {
                printf("aaaaaaaaaaaaaaaa\n");
            }
            b = acos(a);
            c = radius * b;
            return radius*(acos(sin(rlat1)*sin(rlat2) + cos(rlat1)*cos(rlat2)*cos(rlon2-rlon1)));
        }

        int main(int argc, char** argv)
        {
            nearest_distance(6367.47, 10, 64, 10, 64);
            return 0;
        }

    Now, the value of 'a' after the calculation is reported as being '1'. And, on this AIX machine, it looks like 1 > 1 is true, as my 'if' is entered!!! And my acos of what I think is '1' returns NaNQ, since 1 is bigger than 1. May I ask how that is even possible? I do not know what to think anymore!

    The code works just fine on other architectures, where 'a' really takes the value of what I think is 1 and acos(a) is 0.
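    For what it's worth, a value that prints as 1.000000 with %lf can still be marginally greater than 1.0 (on PowerPC, fused multiply-add is a classic way for sin*sin + cos*cos*cos to land a few ulps above 1), and the usual defensive pattern is to clamp before calling acos. A minimal sketch of that guard:

        /* clamp a into the legal domain of acos() before calling it */
        if (a > 1.0)  a = 1.0;
        if (a < -1.0) a = -1.0;
        b = acos(a);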

    Read the article

  • WCF Discovery finds endpoint but host is "localhost"

    - by Flo
    I am trying to use the Discovery feature in WCF, using http://msdn.microsoft.com/en-us/library/dd456783(v=VS.100).aspx as a starting point. It works fine on my machine, but then I wanted to run the service on a different machine. The service was discovered properly, but the hostname of the found service is always "localhost", which is of course not much use.

    Service endpoint:

        var endpointAddress = new EndpointAddress(new UriBuilder { Scheme = Uri.UriSchemeNetTcp, Port = port }.Uri);
        var endpoint = new ServiceEndpoint(ContractDescription.GetContract(typeof(IServiceInterface)),
            new NetTcpBinding(), endpointAddress);

    Client:

        static EndpointAddress FindServiceAddress<T>()
        {
            Stopwatch stopwatch = new Stopwatch();
            stopwatch.Start();
            DiscoveryClient discoveryClient = new DiscoveryClient(new UdpDiscoveryEndpoint());

            // Find endpoints
            FindResponse findResponse = discoveryClient.Find(new FindCriteria(typeof(T)));

            Console.WriteLine(string.Format("Searched for {0} seconds. Found {1} Endpoint(s).",
                stopwatch.ElapsedMilliseconds / 1000, findResponse.Endpoints.Count));

            if (findResponse.Endpoints.Count > 0)
            {
                return findResponse.Endpoints[0].Address;
            }
            return null;
        }

    Should I simply set the Host to System.Environment.MachineName?
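    A hedged note on the closing question: UriBuilder's Host defaults to "localhost" when it is never set, so the service is genuinely advertising a localhost address. Setting the host explicitly, as the question itself suggests, would look like this sketch:

        var endpointAddress = new EndpointAddress(new UriBuilder
        {
            Scheme = Uri.UriSchemeNetTcp,
            Host = Environment.MachineName,   // or the machine's FQDN if cross-domain resolution matters
            Port = port
        }.Uri);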

    Read the article

  • I'm trying to grasp the concept of creating a program that uses a MS SQL database, but I'm used to r

    - by Sergio Tapia
    How can I make a program use an MS SQL server, and have that program work on whatever computer it's installed on? If you've been following my string of questions today, you'd know that I'm making an open source and free Help Desk suite for small and medium businesses.

    The client application: a Windows Forms app. On installation and first launch on every client machine, it'll ask for the address of the main Help Desk server.

    The server: here I plan to handle all incoming help requests, show them to the IT guys, and provide WCF services for the client application to consume.

    My dilemma lies in that I know how to make the program run on my local machine, but I'm really stumped on how to make this work for everyone who wants to download and install the server bit on their Windows Server. Would I have to make an SQL script and have it run on the MS SQL server when a user wants to install the 'server' application?

    Many thanks to all for your valuable time and effort to teach me. It's really, really appreciated. :)
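    One common pattern for the "work on whatever computer it's installed on" part is to keep the server address out of the code and in app.config; a minimal sketch with assumed names (the server name would be whatever the installer or first-launch dialog collects):

        <!-- app.config -->
        <configuration>
          <connectionStrings>
            <add name="HelpDesk"
                 connectionString="Data Source=SQLSERVER01;Initial Catalog=HelpDesk;Integrated Security=True" />
          </connectionStrings>
        </configuration>

    and reading it from the application:

        using System.Configuration;      // reference System.Configuration.dll
        using System.Data.SqlClient;

        string cs = ConfigurationManager.ConnectionStrings["HelpDesk"].ConnectionString;
        using (var conn = new SqlConnection(cs))
        {
            conn.Open();                 // works wherever the config points
        }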

    Read the article

  • Analyzing Windows crash dumps generated on XP/32 machines with Win7/64 ?

    - by Martin
    We have a problem with analyzing our Windows crash dumps, which were created on customer Windows XP/32 boxes, on our development machines. Many of our development machines are now Win7/64 boxes, but it appears that the crash dumps generated under Windows XP cannot fully resolve their binary dependencies, thereby leading to warnings when displaying the call stacks in Visual Studio (2005).

    For example, msvcr80.dll cannot be resolved when loaded from a Win7 machine when the dump was generated on Windows XP. On XP, the WinSxS path appears to be:

        C:\WINDOWS\WinSxS\x86_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.4053_x-ww_e6967989\msvcr80.dll

    On Win7, the WinSxS path to the same DLL version seems to be:

        x86_microsoft.vc80.crt_1fc8b3b9a1e18e3b_8.0.50727.4053_none_d08d7da0442a985d

    (I got this info from a forum thread on CodeGuru that links to an MSDN article.) Visual Studio (2005) can now no longer correctly resolve the binaries for the crash dump. How can I get Visual Studio to resolve all the correct binaries for my dump file?

    Note: I have already correctly set up the symbol server. The public symbols for most system DLLs (kernel32.dll, etc.) and the symbols of our own DLLs are correctly loaded. It is just that the symbols of DLLs that reside in the WinSxS folder are not loaded, because it appears that Vista/7 uses a different path scheme for these DLLs than XP does, and therefore Visual Studio cannot find the DLL (not the PDB) on the local dev machine and so cannot load the corresponding symbols for the dump file.

    Read the article

  • VS2008 - Find and Replace - Searches too many files.

    - by Pam Bullock
    I've used VS2008 a lot and have never had this problem. However, I started a new job and am using a new machine. Ever since I've gotten here, the VS Find feature has been acting funny.

    I first noticed it when I did a Replace All for "All Open Files". The project wouldn't build, because the values had actually been replaced in other files within the solution that were not open, and didn't even open after I pressed Replace All. I have found that I can never use Replace All on this machine, because I never know what it is going to do. Even if I just do a Find on "Current Document", once it's done with the document and I should get the message that says "No more matches found", it actually OPENS another random file from my solution where there is a match and keeps on going. It never seems to make any difference which "Look in" option I've chosen.

    My coworker has an install off the same disk and claims not to be experiencing this. We're in the middle of a stressful, huge project with a close deadline, so I know my boss won't let me do a reinstall. Has anyone else ever had this happen? Anyone know a fix?? Thanks, Pam

    Read the article

  • How do I find the module dependencies of my Perl script?

    - by zoul
    I want another developer to run a Perl script I have written. The script uses many CPAN modules that have to be installed before the script can be run. Is it possible to make the script (or the perl binary) dump a list of all the missing modules? Perl prints out the missing modules' names when I attempt to run the script, but this is verbose and does not list all the missing modules at once. I'd like to do something like:

        $ cpan -i `said-script --list-deps`

    Or even:

        $ list-deps said-script > required-modules   # on my machine
        $ cpan -i `cat required-modules`             # on his machine

    Is there a simple way to do it? This is not a show stopper, but I would like to make the other developer's life easier. (The required modules are sprinkled across several files, so that it's not easy for me to make the list by hand without missing anything. I know about PAR, but it seems a bit too complicated for what I want.)

    Update: Thanks, Manni, that will do. I did not know about %INC, I only knew about @INC. I settled with something like this:

        print join("\n", map { s|/|::|g; s|\.pm$||; $_ } keys %INC);

    Which prints out:

        Moose::Meta::TypeConstraint::Registry
        Moose::Meta::Role::Application::ToClass
        Class::C3
        List::Util
        Imager::Color
        …

    Looks like this will work.

    Read the article

  • Linking problems using libcurl with Visual C++ 2005: "unresolved external symbol __imp__curl_easy_se

    - by user88595
    Hi, I am planning to use libcurl in my project. I downloaded the library source, built it, and integrated it in a small POC application. I am able to build and run the application without any issues with the generated libcurl.dll and libcurl_imp.lib files. Now when I integrate the same library in my project I get linker errors:

        6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_setopt
        6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_perform
        6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_cleanup
        6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_global_init
        6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_init

    I have researched and tried all manner of workarounds, like adding CURL_STATICLIB definitions, additional libraries, changing to /MT, even copying the libs to the release directory, but nothing seems to work. As far as I can see, the only difference between approach #1 and #2 in my steps is that #1 is a console application using libcurl.dll, while in my main project it is another DLL which is trying to link to libcurl.dll. Would that necessitate any change in approach? Can I use the same generated multi-threaded DLL /MD file for both (tried /MT also with no success)? Any other ideas?

    Following are the linker options.

    Working:

        /OUT:"C:\SampleFTP\Release\SampleFTP.exe" /INCREMENTAL:NO /NOLOGO
        /LIBPATH:"C:\SampleFTP\SampleFTP\Release" /MANIFEST
        /MANIFESTFILE:"Release\SampleFTP.exe.intermediate.manifest" /DEBUG
        /PDB:"c:\SampleFTP\release\SampleFTP.pdb" /SUBSYSTEM:CONSOLE /OPT:REF /OPT:ICF /LTCG
        /MACHINE:X86 /ERRORREPORT:PROMPT
        libcurl_imp.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib
        shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib

    Not working:

        /OUT:".......\nt\Win32\Release/foo__tests.dll" /INCREMENTAL:NO /NOLOGO
        /LIBPATH:"C:\FullLibPath\libcurl_libs" /LIBPATH:"......\nt\Win32\Release" /DLL /MANIFEST
        /MANIFESTFILE:".\foo_tests\Win32\Release\foo_tests.dll.intermediate.manifest" /DEBUG
        /PDB:".......\nt\Win32\Release/foo_tests.pdb" /OPT:REF /OPT:ICF /LTCG
        /IMPLIB:".......\nt\Win32\Release/foo_tests.lib" /MACHINE:X86 /ERRORREPORT:PROMPT
        odbc32.lib odbccp32.lib util_process.lib wsock32.lib Version.lib libcurl_imp.lib
        kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib
        ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib
        "......\nt\win32\release\otherlib1.lib" "......\nt\win32\release\otherlib2.lib"

    Read the article

  • Request for the permission of type 'System.Web.AspNetHostingPermission' when compiling web site

    - by ahsteele
    I have been using Windows 7 for a while, but have not had to work with a particular legacy intranet application since my upgrade. Unfortunately, this application is set up as an ASP.NET Website project hosted on a remote server. When I have the website open in Visual Studio 2008 and try to debug it:

        Request for the permission of type 'System.Web.AspNetHostingPermission' failed

    To resolve this issue on Windows Vista machines, I would change the machine's .NET Security Configuration to trust the local intranet. I believe this configuration utility relied upon mscorcfg.msc, which from some cursory research appears to be a part of the .NET 2.0 SDK. I have tried to follow the instructions from this Microsoft Support article, running the command below to no avail:

        Drive:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\caspol.exe -m -ag 1 -url "file:////\\computername\sharename\*" FullTrust -exclusive on

    Presently, I have the following .NET and ASP.NET components installed on my machine:

    - Microsoft .NET Compact Framework 2.0 SP2
    - Microsoft .NET Compact Framework 3.5
    - Microsoft .NET Framework 4 Client Profile
    - Microsoft .NET Framework 4 Extended
    - Microsoft .NET Framework 4 Multi-Targeting Pack
    - Microsoft ASP.NET MVC 1.0
    - Microsoft ASP.NET MVC 2
    - Microsoft ASP.NET MVC 2 - Visual Studio 2008 Tools
    - Microsoft ASP.NET MVC 2 - Visual Studio 2010 Tools

    Do I need to install the .NET 2.0 SDK? Am I issuing the caspol command incorrectly? Is there something else that I am missing?

    Read the article

  • How to handle "Remember me" in the Asp.Net Membership Provider

    - by RemotecUk
    I've written a custom membership provider for my ASP.NET website. I'm using the default FormsAuthentication redirect, where you simply pass true to the method to tell it to "Remember me" for the current user. I presume that this function simply writes a cookie to the local machine containing some login credential of the user.

    What does ASP.NET put in this cookie? Is it possible, if the format of my usernames was known (e.g. sequential numbering), that someone could easily copy this cookie and, by putting it on their own machine, be able to access the site as another user?

    Additionally, I need to be able to intercept the authentication of the user who has the cookie. Since the last time they logged in, their account may have been cancelled, they may need to change their password, etc., so I need the option to intercept the authentication and, if everything is still OK, allow them to continue, or else redirect them to the proper login page.

    I would be grateful for guidance on both of these two points. I gather for the second I can possibly put something in global.asax to intercept the authentication? Thanks in advance.
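    On the first point: the forms cookie carries an encrypted, tamper-protected FormsAuthenticationTicket (keyed by the machineKey configuration), not the raw username/password, so it cannot be forged just by knowing the username format, though a cookie copied off a machine remains valid until it expires. On the second point, a minimal global.asax sketch of the kind of interception described (IsAccountStillValid is a hypothetical helper standing in for the membership check):

        protected void Application_AuthenticateRequest(object sender, EventArgs e)
        {
            HttpCookie cookie = Request.Cookies[FormsAuthentication.FormsCookieName];
            if (cookie == null) return;                          // anonymous request

            FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);
            if (ticket == null || ticket.Expired) return;

            // Re-check the account on every request: cancelled accounts or
            // forced password changes are sent back to the login page.
            if (!IsAccountStillValid(ticket.Name))
            {
                FormsAuthentication.SignOut();
                Response.Redirect(FormsAuthentication.LoginUrl);
            }
        }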

    Read the article

  • Strange File Upload issue with asp.net site on a web farm

    - by Coov
    I have a basic ASP.NET file upload page. When I test file uploads from my local machine, it works fine. When I test file uploads from our dev machine, it works fine. When I deploy the site to our production web farm, it behaves strangely. If I access the site from off the network, I can load file after file without issue. If I access the site from within our network, I can load the first file just fine, but any subsequent files result in a "bad sequence of commands" error. I'm not sure if this is a web farm issue, a network issue, or something else. It feels like a connection is not being disposed of properly, but it doesn't make sense why everything works fine remotely.

    Markup:

        <asp:FileUpload ID="FileUpload1" runat="server" Width="350px" />
        <asp:Button ID="btnSubmit" runat="server" Text="Upload" onclick="btnSubmit_Click" />

    Code:

        if (FileUpload1.HasFile)
        {
            FtpWebRequest ftpRequest;
            FtpWebResponse ftpResponse;

            ftpRequest = (FtpWebRequest)FtpWebRequest.Create(new Uri("ftp://ftp.myftpsite.com/" + FileUpload1.FileName));
            ftpRequest.Method = WebRequestMethods.Ftp.UploadFile;
            ftpRequest.Proxy = null;
            ftpRequest.UseBinary = true;
            ftpRequest.Credentials = new NetworkCredential("username", "password");
            ftpRequest.KeepAlive = false;

            byte[] fileContents = new byte[FileUpload1.PostedFile.ContentLength];
            using (Stream fr = FileUpload1.PostedFile.InputStream)
            {
                fr.Read(fileContents, 0, FileUpload1.PostedFile.ContentLength);
            }
            using (Stream writer = ftpRequest.GetRequestStream())
            {
                writer.Write(fileContents, 0, fileContents.Length);
            }

            ftpResponse = (FtpWebResponse)ftpRequest.GetResponse();
            Response.Write(ftpResponse.StatusDescription);
        }
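    One hedged observation on the snippet rather than a confirmed diagnosis of the farm behaviour: the FtpWebResponse is never closed, which matches the "connection not being disposed" suspicion above. The same tail with the response disposed would look like:

        using (FtpWebResponse ftpResponse = (FtpWebResponse)ftpRequest.GetResponse())
        {
            Response.Write(ftpResponse.StatusDescription);   // connection released on dispose
        }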

    Read the article

  • What are practical guidelines for evaluating a language's "Turing Completeness"?

    - by AShelly
    I've read "what-is-turing-complete" and the Wikipedia page, but I'm less interested in a formal proof than in the practical implications of being Turing complete. What I'm actually trying to decide is whether the toy language I've just designed could be used as a general-purpose language. I know I can prove it can if I can write a Turing machine with it. But I don't want to go through that exercise until I'm fairly certain of success.

    Is there a minimum set of features without which Turing completeness is impossible? Is there a set of features which virtually guarantees completeness? (My guess is that conditional branching and a readable/writeable memory store will get me most of the way there.)

    EDIT: I think I've gone off on a tangent by saying "Turing complete". I'm trying to guess with reasonable confidence that a newly invented language with a certain feature set (or alternately, a VM with a certain instruction set) would be able to compute anything worth computing. I know proving you can build a Turing machine with it is one way, but not the only way. What I was hoping for was a set of guidelines like: "if it can do X, Y, and Z, it can probably do anything".

    Read the article

  • Unknown user 'app' with capistrano

    - by trobrock
    This is my first time trying to set up Capistrano to deploy a Rails application. I am deploying from my local machine to my remote server, which has the repo, web, app, and mysql servers all on the same machine. I am following this walkthrough: http://www.capify.org/index.php/From_The_Beginning

    I get to the command:

        cap deploy:start

    Then I get this error:

        *** [err :: example.com] sudo: unknown user: app
        command finished
        failed: "sh -c 'cd /var/www/example/current && sudo -p '\\''sudo password: '\\'' -u app nohup script/spin'" on example.com

    Am I supposed to add an 'app' user, or is there a way of changing which user the command runs as? This is my deploy.rb:

        set :application, "example"
        set :repository, "[email protected]:example.git"
        set :user, "trobrock"
        set :branch, 'master'
        set :deploy_to, "/var/www/example"
        set :scm, :git
        # Or: `accurev`, `bzr`, `cvs`, `darcs`, `git`, `mercurial`, `perforce`, `subversion` or `none`

        role :web, "example.com"                   # Your HTTP server, Apache/etc
        role :app, "example.com"                   # This may be the same as your `Web` server
        role :db,  "example.com", :primary => true # This is where Rails migrations will run

    And obviously everywhere it says example.com is my server's hostname, and everywhere it just says example is the app name.
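    For what it's worth, the `-u app` in the failing command comes from Capistrano's :runner variable, which the stock deploy recipes default to "app" for the deploy:start family of tasks; a hedged sketch of overriding it in deploy.rb (assuming the deploy user is allowed to start the app itself):

        set :runner, user        # run `script/spin` as the deploy user instead of "app"
        # or skip sudo entirely:
        # set :use_sudo, false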

    Read the article
