Search Results

Search found 16544 results on 662 pages for 'sys path'.


  • GMail appearing to ignore Reply-To.

    - by Samuurai
    I'm using a gmail account to send emails from my website. I'm using the same account to pick up emails which are generated by the contact facility on my site. I'm using the Reply-To field to attempt to make it easier to hit reply and easily get back to people. The message comes up with the 'from' address and ignores the 'reply-to' address. Here's my header:

        Return-Path: <[email protected]>
        Received: from svr1 (ec2-79-125-266-266.eu-west-1.compute.amazonaws.com [79.125.266.266])
                by mx.google.com with ESMTPS id u14sm23273123gvf.17.2010.03.10.14.33.24
                (version=TLSv1/SSLv3 cipher=RC4-MD5); Wed, 10 Mar 2010 14:33:25 -0800 (PST)
        Received: from localhost ([127.0.0.1] helo=www.rds.com)
                by aquacouture with esmtp (Exim 4.69) (envelope-from <[email protected]>)
                id 1NpUSx-0001dK-JM for [email protected]; Wed, 10 Mar 2010 22:33:23 +0000
        User-Agent: CodeIgniter
        Date: Wed, 10 Mar 2010 22:33:23 +0000
        From: "New Inquiry" <[email protected]>
        Reply-To: "Beren" <[email protected]>
        To: [email protected]
        Subject: =?utf-8?Q?Test?=
        X-Sender: [email protected]
        X-Mailer: CodeIgniter
        X-Priority: 3 (Normal)
        Message-ID: <[email protected]>
        Mime-Version: 1.0
        Content-Type: multipart/alternative; boundary="B_ALT_4b981e3390ccd"

        This is a multi-part message in MIME format. Your email application may not support this format.

        --B_ALT_4b981e3390ccd
        Content-Type: text/plain; charset=utf-8
        Content-Transfer-Encoding: 8bit

        test

        --B_ALT_4b981e3390ccd
        Content-Type: text/html; charset=utf-8
        Content-Transfer-Encoding: quoted-printable

        test

        --B_ALT_4b981e3390ccd--
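    For reference, a minimal Python sketch of the same idea (illustrative only, not the asker's CodeIgniter setup; all addresses and credentials are placeholders), showing that Reply-To is just another header set alongside From:

        # Minimal sketch: a message whose From and Reply-To differ.
        # All addresses and credentials below are placeholders.
        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = '"New Inquiry" <[email protected]>'
        msg["Reply-To"] = '"Beren" <[email protected]>'   # where replies should go
        msg["To"] = "[email protected]"
        msg["Subject"] = "Test"
        msg.set_content("test")

        with smtplib.SMTP("smtp.gmail.com", 587) as smtp:
            smtp.starttls()
            smtp.login("[email protected]", "app-password")
            smtp.send_message(msg)

    A mail client that follows the headers should address replies to the Reply-To value rather than From, so if replies still go to the From address it is worth checking how the receiving client interprets the header, not just how the message is generated.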

    Read the article

  • centos: linking lib that doesn't have a .pc file for pkg-config

    - by Paulie
    I'm trying to compile svg2pdf on centos. I think I've managed to get the required dependencies installed using yum:

        sudo yum install librsvg2
        sudo yum install cairo

    The Makefile contains:

        MYCFLAGS=`pkg-config --cflags librsvg-2.0 cairo-pdf` -Wall -Wpointer-arith -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-strict-aliasing
        MYLDFLAGS=`pkg-config --libs librsvg-2.0 cairo-pdf`

    After typing 'make', the first couple of lines of output are:

        cc `pkg-config --cflags librsvg-2.0 cairo-pdf` -Wall -Wpointer-arith -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-strict-aliasing `pkg-config --libs librsvg-2.0 cairo-pdf` svg2pdf.c -o svg2pdf
        Package librsvg-2.0 was not found in the pkg-config search path.
        Perhaps you should add the directory containing `librsvg-2.0.pc'
        to the PKG_CONFIG_PATH environment variable
        No package 'librsvg-2.0' found

    There is no librsvg-2.0.pc on this system (but there is when I installed this using macports on my macbookpro). How should I get this package linked? Should I change the Makefile, and if so, to what? Should I generate the .pc files, and if so, how? Or is there another way to resolve this? Or, has anyone been successful at installing svg2pdf on centos, and, if so, how did you manage it? I don't have much experience on linux, so I might be missing something obvious.

    Read the article

  • Rails: Custom template for email "deliver_" method?

    - by neezer
    I'm building an email system that stores my different emails in the database and calls the appropriate "deliver_" method via method_missing (I can't explicitly declare the methods since they're user-generated). My problem is that my Rails app still tries to render the template for whatever the generated email is, though those templates don't exist. I want to force all emails to use the same template (views/test_email.html.haml), which will be set up to draw their formatting from my database records. How can I accomplish this? I tried adding render :template => 'test_email' in the test_email method in emailer_controller with no luck.

    models/emailer.rb:

        class Emailer < ActionMailer::Base
          def method_missing(method, *args)
            # not been implemented yet
            logger.info "method missing was called!!"
          end
        end

    controller/emailer_controller.rb:

        class EmailerController < ApplicationController
          def test_email
            @email = Email.find(params[:id])
            Emailer.send("deliver_#{@email.name}")
          end
        end

    views/emails/index.html.haml:

        %h1 Listing emails
        %table{ :cellspacing => 0 }
          %tr
            %th Name
            %th Subject
          - @emails.each do |email|
            %tr
              %td=h email.name
              %td=h email.subject
              %td= link_to 'Show', email
              %td= link_to 'Edit', edit_email_path(email)
              %td= link_to 'Send Test Message', :controller => 'emailer', :action => 'test_email', :params => { :id => email.id }
              %td= link_to 'Destroy', email, :confirm => 'Are you sure?', :method => :delete
        %p= link_to 'New email', new_email_path

    Error I'm getting with the above:

        Template is missing
        Missing template emailer/name_of_email_in_database.erb in view path app/views

    Read the article

  • Mouse bugginess - SWFObject, Firefox 3 for Mac, and Flash

    - by justinbach
    I'm pulling my hair out over a problem I'm encountering on Firefox 3.5 & 3.6 on OS X. I'm using SWFObject to embed an AmMap of the US, which has rollover tooltips for various states. The rollovers are working fine in every other browser I've tested, but they're very buggy on FF for Mac: most of the time they don't show up at all, but if I persistently click a state that's supposed to have a hover event, I might catch a glimpse of the tooltip. Here's the code for the SWFObject embed (incidentally, this isn't being done in the document head due to templating reasons). The reason the SWFObject initialization is wrapped in jQuery's document.ready handler is that the swf wasn't even appearing in FF 3.5.9 for Mac until I added that in:

        $(document).ready(function() {
            var params = {
                quality: "high",
                scale: "noscale",
                allowscriptaccess: "always",
                allowfullscreen: "true",
                bgcolor: "#FFFFFF",
                base: "/<?php print LANG . "/locations/" ?>"
            };
            var flashvars = {
                path: "",
                settings_file: "mapsettings",
                data_file: "mapdata"
            };
            var attributes = {
                id: "flashmap",
                name: "flashmap"
            };
            swfobject.embedSWF("/assets/flash/ammap.swf", "flashmap", "470", "300", "8", null, flashvars, params, attributes);
        });

    Any feedback would be greatly appreciated... the site goes live in 48 hours! Thanks!

    Read the article

  • Embedding Perl Interpreter

    - by cam
    Hi, I just downloaded ActivePerl. I want to embed the Perl interpreter in a C# application (or at least call the Perl interpreter from C#). I need to be able to send data out to Perl from C#, then receive the output back into C#. I just installed ActivePerl and added MS Script Control 1.0 as a reference. I found this code on the internet, but am having trouble getting it to work:

        MSScriptControl.ScriptControlClass Interpreter = new MSScriptControl.ScriptControlClass();
        Interpreter.Language = @"ActivePerl";
        string Program = @"reverse 'abcde'";
        string Results = (string)Interpreter.Eval(Program);
        return Results;

    Originally it had 'PerlScript' instead of 'ActivePerl', but neither worked for me. I'm not entirely sure what Interpreter.Language expects. Does it require the path to the interpreter? Solved... I'm not sure how, but when I changed it back to PerlScript it works now. Still, I would like to know whether MS Script Control is using ActivePerl or another interpreter.

    Read the article

  • NSString's stringByAppendingPathComponent: removes a '/' in http://

    - by Jasarien
    I've been modifying some code to work between Mac OS X and iPhone OS. I came across some code that was using NSURL's URLByAppendingPathComponent: (added in 10.6), which, as some may know, isn't available in the iPhone SDK. My solution to make this code work between OSes is to use:

        NSString *urlString = [myURL absoluteString];
        urlString = [urlString stringByAppendingPathComponent:@"helloworld"];
        myURL = [NSURL URLWithString:urlString];

    The problem with this is that NSString's stringByAppendingPathComponent: seems to remove one of the slashes from the http:// part of the URL. Is this intended behaviour or a bug?

    Edit: OK, so I was a bit too quick in asking the question above. I re-read the documentation and it does say: "Note that this method only works with file paths (not, for example, string representations of URLs)." However, it doesn't give any pointers in the right direction for what to do if you need to append a path component to a URL on the iPhone... I could always just do it manually, adding a / if necessary and then the extra string, but I was looking to keep it as close to the original Mac OS X code as possible...
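    Not the Cocoa API, but as a sketch of the general idea (append a path segment with plain string handling that leaves the scheme's "//" untouched), here is a small Python illustration; the URL and helper name are made up:

        # Append a path segment to a URL treated as a string, not a file path,
        # so the "//" in "http://" survives. Purely illustrative.
        def append_path_component(url, component):
            return url.rstrip("/") + "/" + component.lstrip("/")

        print(append_path_component("http://example.com/api", "helloworld"))
        # -> http://example.com/api/helloworld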

    Read the article

  • XCTest.framework build error

    - by user2703123
    I am using the Dropbox Core API in my app and therefore I must include the XCTest framework: when I haven't added the XCTest framework, my app can't connect to Dropbox, but when I do add the framework, I get an error while building for the simulator. There is nothing wrong with my code! Here is the error:

        Ld /Users/Zach/Library/Developer/Xcode/DerivedData/SnapDrop!-fchnxyvnqyeefscfhmohrzxtiqeb/Build/Products/Debug-iphonesimulator/SnapDrop!.app/SnapDrop! normal i386
            cd "/Users/Zach/Desktop/SnapDrop!"
            setenv IPHONEOS_DEPLOYMENT_TARGET 6.1
            setenv PATH "/Applications/Xcode5-DP6.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Applications/Xcode5-DP6.app/Contents/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
            /Applications/Xcode5-DP6.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -arch i386
                -isysroot /Applications/Xcode5-DP6.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator7.0.sdk
                -L/Users/Zach/Library/Developer/Xcode/DerivedData/SnapDrop!-fchnxyvnqyeefscfhmohrzxtiqeb/Build/Products/Debug-iphonesimulator
                -F/Users/Zach/Library/Developer/Xcode/DerivedData/SnapDrop!-fchnxyvnqyeefscfhmohrzxtiqeb/Build/Products/Debug-iphonesimulator
                -F/Users/Zach/Downloads/dropbox-ios-sdk-1.3.5 -F/Users/Zach/Downloads/dropbox-ios-sync-sdk-1-1.1.0
                -F/Applications/Xcode5-DP6.app/Contents/Developer/Library/Frameworks -F/Users/Zach/Desktop
                -filelist /Users/Zach/Library/Developer/Xcode/DerivedData/SnapDrop!-fchnxyvnqyeefscfhmohrzxtiqeb/Build/Intermediates/SnapDrop!.build/Debug-iphonesimulator/SnapDrop!.build/Objects-normal/i386/SnapDrop!.LinkFileList
                -Xlinker -objc_abi_version -Xlinker 2 -fobjc-arc -fobjc-link-runtime -Xlinker -no_implicit_dylibs -mios-simulator-version-min=6.1
                -framework iAd -framework AssetsLibrary -framework QuartzCore -framework SystemConfiguration -framework Security -framework CFNetwork
                -framework XCTest -framework Dropbox -framework DropboxSDK -framework CoreGraphics -framework UIKit -framework Foundation
                -Xlinker -dependency_info -Xlinker /Users/Zach/Library/Developer/Xcode/DerivedData/SnapDrop!-fchnxyvnqyeefscfhmohrzxtiqeb/Build/Intermediates/SnapDrop!.build/Debug-iphonesimulator/SnapDrop!.build/Objects-normal/i386/SnapDrop!_dependency_info.dat
                -o /Users/Zach/Library/Developer/Xcode/DerivedData/SnapDrop!-fchnxyvnqyeefscfhmohrzxtiqeb/Build/Products/Debug-iphonesimulator/SnapDrop!.app/SnapDrop!

        ld: building for iOS Simulator, but linking against dylib built for MacOSX file '/Applications/Xcode5-DP6.app/Contents/Developer/Library/Frameworks/XCTest.framework/XCTest' for architecture i386
        clang: error: linker command failed with exit code 1 (use -v to see invocation)

    What should I do? If my framework is corrupt, can you tell me how to reinstall it? I have tried deleting and reinstalling Xcode with no luck.

    Read the article

  • Porting an IBXpress Interbase 6 app to the current Firebird platform, on Delphi 7?

    - by robsoft
    Just wondering if there are any gotchas to be wary of here. We have a legacy D7 app that we developed several years ago for a client, which uses IBXpress to talk to the open source Interbase 6 build. We're having a number of issues with that platform these days (very slow to connect/start-up on new hardware being the chief one) and the client has okayed spending some time/money moving the database over to Firebird. We really DON'T want to embark upon moving it to D2010 (or D2007 which would be my preference right now) as we figure that we might have to move the database layer from IBXpress to something else to best suit Firebird anyway. And at the end of the day, the client is only looking to lessen the database pain, not overhaul/upgrade/rewrite the app. Given the ancestry of Firebird, is it a fairly painless, well-understood path from IBXpress Interbase 6 to (whatever) with Firebird? We have quite a number of sprocs, triggers (and even datatypes) etc in the existing IB database already (and the client has a number of paying customers all using this platform) so we felt that going to Firebird was more likely to be a smoother move than moving to SQL Express (or another flavour of DB entirely). Note that we're not looking for 'embedded' DB advocacy - in many of our client's customers' installations, the software is used in a multi-user client-server way so keeping that kind of approach is important.

    Read the article

  • Getting DirectoryNotFoundException when trying to Connect to Device with CoreCon API

    - by ageektrapped
    I'm trying to use the CoreCon API in Visual Studio 2008 to programmatically launch device emulators. When I call device.Connect(), I inexplicably get a DirectoryNotFoundException. I get it if I try it in PowerShell or in a C# console application. Here's the code I'm using:

        static void Main(string[] args)
        {
            DatastoreManager dm = new DatastoreManager(1033);
            Collection<Platform> platforms = dm.GetPlatforms();

            foreach (var p in platforms)
            {
                Console.WriteLine("{0} {1}", p.Name, p.Id);
            }

            Platform platform = platforms[3];
            Console.WriteLine("Selected {0}", platform.Name);

            Device device = platform.GetDevices()[0];
            device.Connect();
            Console.WriteLine("Device Connected");

            SystemInfo info = device.GetSystemInfo();
            Console.WriteLine("System OS Version:{0}.{1}.{2}", info.OSMajor, info.OSMinor, info.OSBuildNo);

            Console.ReadLine();
        }

    My question: does anyone know why I'm getting this error? I'm running this on WinXP 32-bit, plain jane Visual Studio 2008 Pro. I imagine it's some config issue since I can't do it from a Console app or PowerShell. Here's the stack trace as requested:

        System.IO.DirectoryNotFoundException was unhandled
          Message="The system cannot find the path specified.\r\n"
          Source="Device Connection Manager"
          StackTrace:
            at Microsoft.VisualStudio.DeviceConnectivity.Interop.ConManServerClass.ConnectDevice()
            at Microsoft.SmartDevice.Connectivity.Device.Connect()
            at ConsoleApplication1.Program.Main(String[] args) in C:\Documents and Settings\Thomas\Local Settings\Application Data\Temporary Projects\ConsoleApplication1\Program.cs:line 23
            at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
            at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
            at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
            at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
            at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
            at System.Threading.ThreadHelper.ThreadStart()
          InnerException:

    Read the article

  • Java EE6 App + EJB in Glassfish 3.0/Netbeans 6.8?

    - by egbokul
    Has anyone got this configuration working? Latest Netbeans, latest Glassfish, I created an EJB project, also an EE Application. The EJB in itself builds & deploys to Glassfish OK. Now when I want to reference the EJB, I have to add the EJB jar to the EE Application path, if I don't do this the code does not compile. But, the EJB jar gets packaged in the App jar and as a result when I try to deploy the app to Glassfish it says: "java.lang.IllegalArgumentException: Sniffers with type [ejb] and type [appclient] should not claim the archive at the same time. Please check the packaging of your archive" How do I tell Netbeans NOT TO package the EJB in the App jar? Or is the problem somewhere else? btw. if I remove the EJB manually from the JAR then the app deploys successfully (with asadmin deploy), but when I try to run it with appclient, I get a NullPointerException. Surely there must be a solution to this, I thought Netbeans was for web application development after all...

    Read the article

  • Makefile : Build in a separate directory tree

    - by Simone Margaritelli
    My project (an interpreted language) has a standard library composed of multiple files, each of which is built into an .so dynamic library that the interpreter loads upon user request (with an import directive). Each source file is located in a subdirectory representing its "namespace". The build process has to create a "build" directory, then, when each file is compiled, create its namespace directory inside the "build" one; for instance, when compiling std/io/network/tcp.cc it runs an mkdir command:

        mkdir -p build/std/io/network

    The Makefile snippet is:

        STDSRC=stdlib/std/hashing/md5.cc \
               stdlib/std/hashing/crc32.cc \
               stdlib/std/hashing/sha1.cc \
               stdlib/std/hashing/sha2.cc \
               stdlib/std/io/network/http.cc \
               stdlib/std/io/network/tcp.cc \
               stdlib/std/io/network/smtp.cc \
               stdlib/std/io/file.cc \
               stdlib/std/io/console.cc \
               stdlib/std/io/xml.cc \
               stdlib/std/type/reflection.cc \
               stdlib/std/type/string.cc \
               stdlib/std/type/matrix.cc \
               stdlib/std/type/array.cc \
               stdlib/std/type/map.cc \
               stdlib/std/type/type.cc \
               stdlib/std/type/binary.cc \
               stdlib/std/encoding.cc \
               stdlib/std/os/dll.cc \
               stdlib/std/os/time.cc \
               stdlib/std/os/threads.cc \
               stdlib/std/os/process.cc \
               stdlib/std/pcre.cc \
               stdlib/std/math.cc

        STDOBJ=$(STDSRC:.cc=.so)

        all: stdlib

        stdlib: $(STDOBJ)

        .cc.so:
                mkdir -p `dirname $< | sed -e 's/stdlib/stdlib\/build/'`
                $(CXX) $< -o `dirname $< | sed -e 's/stdlib/stdlib\/build/'`/`basename $< .cc`.so $(CFLAGS) $(LDFLAGS)

    I have two questions:

    1 - The make command, and I really don't know why, doesn't check whether a file was modified and launches the build process on ALL the files no matter what, so if I need to build only one file I have to build them all or use the command:

        make path/to/single/file.so

    Is there any way to solve this?

    2 - Is there any way to do this in a "cleaner" way, without having to distribute all the build directories with the sources?

    Thanks

    Read the article

  • How do I create Twitter style URLs for my app - Using existing application or app redesign - Ruby on Rails

    - by bgadoci
    I have developed a blog application of sorts that I am trying to allow other users to take advantage of (for free and mostly for family). I'm wondering if the authentication I have set up will allow for such a thing. Here is the scenario: currently the application allows users to sign up for an account, and when they do so they can create blog posts and organize those posts via tags. The application displays no data publicly (in other words, you have to log in to see anything). To gain access you have to create an account, and even after you do, you cannot see anyone else's information, as the application filters using the current_user method and displays in the /posts/index.html.erb page. This would be great if a user only wanted to blog and share it with themselves, but it's not really what I am looking for. My question has two parts (hopefully I won't make anyone mad by not putting these into two questions):

    1. Is it possible for a particular user's data to live at www.myapplication.com/user without moving everything to the /user/show.html.erb file?

    2. Is it possible to make some of that information (living at the URL) public but still require login for create and destroy actions, essentially exactly like Twitter?

    I am just curious if I can get from where I am (using the current_user methods across controllers to display in /posts/index.html.erb) to where I want to be. My fear is that I have to redesign the app such that the user data lives in the /user/show.html.erb page. Thoughts?

    UPDATE: I am using Clearance for authentication, by Thoughtbot. I wonder if there is something I can set in the vendored gem path to represent the /posts/index.html.erb code as the /user/id code and replace id with the user name.

    Read the article

  • Database version is zero on initial install

    - by mahesh
    I have released an app (World Time) with an initial database. Now I want to update the app with a database upgrade. I have put the upgrade code in onUpgrade() and I'm checking for the newVersion, but it was not being called in my local testing... So I put in a debug statement to get the database version, and it is zero. Any idea why it is not being versioned? Following is the code that copies the database from my Assets folder:

        InputStream myInput = myContext.getAssets().open(DB_NAME);

        // Path to the just created empty db
        String outFileName = DB_PATH + DB_NAME;

        // Open the empty db as the output stream
        OutputStream myOutput = new FileOutputStream(outFileName);

        // transfer bytes from the inputfile to the outputfile
        byte[] buffer = new byte[1024];
        int length;
        while ((length = myInput.read(buffer)) > 0) {
            myOutput.write(buffer, 0, length);
        }

        // Close the streams
        myOutput.flush();
        myOutput.close();
        myInput.close();

    -- Mahesh
    http://android.maheshdixit.com
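    One detail that may be relevant (offered as an assumption, not a confirmed diagnosis of this app): Android's SQLiteOpenHelper tracks the schema version in SQLite's user_version pragma, and a database file copied byte-for-byte from assets keeps whatever user_version it was created with, which is often 0. A quick way to inspect or stamp that value, sketched with Python's sqlite3 module purely for illustration (the file name is hypothetical):

        # Inspect and set SQLite's user_version on a database file.
        # "worldtime.db" is a placeholder name.
        import sqlite3

        conn = sqlite3.connect("worldtime.db")
        version = conn.execute("PRAGMA user_version").fetchone()[0]
        print("current user_version:", version)   # a freshly copied asset DB often reports 0

        conn.execute("PRAGMA user_version = 2")   # stamp the version the app expects
        conn.commit()
        conn.close()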

    Read the article

  • Download file in coldfusion and read its content

    - by Deepak
    I'm using cfhttp with a get to download the files. Does anyone have an example of cfhttp working? Are there special settings that need to be set up on the server side to get this tag to work? When I try the following code, nothing comes back to my computer:

        <CFHTTP METHOD="get"
                URL="http://data.bls.gov/PDQ/servlet/SurveyOutputServlet?series_id=LNU04032231&years_option=specific_years&to_year=2010&from_year=2009&delimiter=comma&output_view&output_format=excelTable"
                path="/Users/Deepak"
                file="testfile.xls">

    How do you get it to pop up the "where do you want to save the file" dialogue box? I am submitting a form in ColdFusion by hitting this link:

        http://data.bls.gov/PDQ/servlet/SurveyOutputServlet?series_id=LNU04032231&years_option=specific_years&to_year=2010&from_year=2009&delimiter=comma&output_view&output_format=excelTable

    I am getting an Excel file as a result. How can I save this file on my local box? Or is it possible to directly read the content of the file, without saving it on my local box, through ColdFusion using cfftp or cfhttp? cfhttp.mimeType is application/vnd.ms-excel in this case. Thanks!!
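    Not ColdFusion, but to illustrate the general "read the response straight into memory instead of saving a file" idea, here is a small Python sketch (illustrative only) using the URL from the question:

        # Fetch the report and keep the bytes in memory rather than writing a file.
        import urllib.request

        url = ("http://data.bls.gov/PDQ/servlet/SurveyOutputServlet"
               "?series_id=LNU04032231&years_option=specific_years"
               "&to_year=2010&from_year=2009&delimiter=comma"
               "&output_view&output_format=excelTable")

        with urllib.request.urlopen(url) as resp:
            content_type = resp.headers.get("Content-Type")  # e.g. application/vnd.ms-excel
            data = resp.read()                               # the file's bytes, never touching disk

        print(content_type, len(data), "bytes")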

    Read the article

  • What Happens to Commit Logs on a Branch After Merging?

    - by Levi Hackwith
    Scenario:

    1. A programmer creates a branch for project 'foo' called 'my_foo' at revision 5.
    2. The programmer makes multiple changes to multiple files as he works on the 'my_foo' feature. At the end of each major step, say adding several new functions to a class, he does an svn commit on the appropriate files, therefore committing them to the branch.
    3. Several weeks and many commits later (each commit having a commit log describing what he did), the programmer merges the branch back into the trunk:

        # Assume the following is being done from inside a working copy of the trunk:
        svn merge -r 5:15 file:///path/to/repo/branches/my_foo

    Hazzah! He's merged all his changes back into trunk! There's much rejoicing and drinking of Mountain Dew. Now let's say another programmer comes along a week later and updates their working copy from revision 5 to revision 15. "Wow", they say. "I wonder what's changed since revision 5". The programmer then runs svn log on their working copy and they get something like this:

        ------------------------------------------------------------------------
        r15 | programmer1 | 2010-03-20 21:27:04 -0400 (Sat, 20 Mar 2010) | 1 line

        Merging Version 2.0 Changes into trunk
        ------------------------------------------------------------------------
        r5 | programmer2 | 2010-02-15 10:59:55 -0500 (Mon, 15 Feb 2010) | 1 line

        Added assets/images/tumblr_icon.png to trunk

    What the heck happened to all the notes that the other programmer put in with all of his commits on his branch? Do those not get pulled over during a merge? Am I crazy or just forgetting something?

    Read the article

  • Flash AS3 load file xml

    - by Elias
    Hello, I'm just trying to load an XML file which can be anywhere on the hdd. This is what I have done to browse for it, but later, when I try to load the file, it only looks in the same path as the swf file. Here is the code:

        package {
            import flash.display.Sprite;
            import flash.events.*;
            import flash.net.*;

            public class cargadorXML extends Sprite {
                public var cuadro:Sprite = new Sprite();
                public var file:FileReference;
                public var req:URLRequest;
                public var xml:XML;
                public var xmlLoader:URLLoader = new URLLoader();

                public function cargadorXML() {
                    cuadro.graphics.beginFill(0xFF0000);
                    cuadro.graphics.drawRoundRect(0, 0, 100, 100, 10);
                    cuadro.graphics.endFill();
                    cuadro.addEventListener(MouseEvent.CLICK, browser);
                    addChild(cuadro);
                }

                public function browser(e:Event) {
                    file = new FileReference();
                    file.addEventListener(Event.SELECT, bien);
                    file.browse();
                }

                public function bien(e:Event) {
                    xmlLoader.addEventListener(Event.COMPLETE, loadXML);
                    req = new URLRequest(file.name);
                    xmlLoader.load(req);
                }

                public function loadXML(e:Event) {
                    xml = new XML(e.target.data);
                    //xml.name = file.name;
                    trace(xml);
                }
            }
        }

    When I open an XML file that isn't in the same directory as the swf, it gives me a file-not-found error. Is there anything I can do? For mp3, for example, there is a special class for loading the file, see http://www.flexiblefactory.co.uk/flexible/?p=46. Thanks

    Read the article

  • Silverlight Data Access - how to keep the gruntwork on the server

    - by akaphenom
    What technologies are used / recommended for HTTP RPC calls from Silverlight? My server-side stack is JBoss (servlets / json_rpc [jabsorb]), and we have a ton of business logic (object creation, validation, persistence, server-side events) in place that I still want to take advantage of. This is our first attempt at bringing an applet-style RIA to our product, and ideally we keep both HTML and Silverlight versions. For better or worse the powers that be have pushed us down the Silverlight path, and while Flex / JavaFX / Silverlight is an interesting debate, that question is removed from the equation. We just have to find a way to get Silverlight to behave with our classes.

    Should I be defining .NET class representations of our JSON objects and the methodology to serialize / deserialize access to those objects? I.e. "blah.com/dispenseRpc?servlet=xxxx&p1=blah&p2=blahblah", creating functions that invoke the web request and convert the incoming response string to objects? Another way would be to reverse engineer the .NET WCF (or whatever) communications and implement a handler on the Java side that invokes the correct server-side code and returns what .NET expects back. But that sounds much trickier. T

    Read the article

  • How do I stop cpan from reconfiguring each time? + More

    - by Leonard
    I'm running on a Mac (version 10.6.3) and am struggling to understand what is going on with my Perl installation. I let the system do a copy from my previous mac, and I appear to have a second perl installed, which appears earlier in my path. I can't tell (or remember) if I might have installed it with fink, macports or CPAN or what.

        type -a cpan
        cpan is /opt/local/bin/cpan
        cpan is /usr/bin/cpan

    I'm seeing two oddities (to start with!). When I run cpan, and let it configure in ~lcuff/.cpan, each time I run it, it wants to reconfigure, giving the message:

        Sorry, we have to rerun the configuration dialog for CPAN.pm due to some missing parameters...

    Also, when I try to install File::Find::Rule (so I can list my CPAN modules, per the FAQ) I end up with an error message that I can't decipher or Google a solution for:

        Use of inherited AUTOLOAD for non-method Digest::SHA::shaopen() is deprecated at /opt/local/lib/perl5/vendor_perl/5.8.9/darwin-2level/Digest/SHA.pm line 55.
        Catching error: "Can't locate auto/Digest/SHA/shaopen.al in \@INC (\@INC contains: /sw/lib/perl5 /sw/lib/perl5/darwin /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level /opt/local/lib/perl5/site_perl/5.8.9 /opt/local/lib/perl5/site_perl /opt/local/lib/perl5/vendor_perl/5.8.9/darwin-2level /opt/local/lib/perl5/vendor_perl/5.8.9 /opt/local/lib/perl5/vendor_perl /opt/local/lib/perl5/5.8.9/darwin-2level /opt/local/lib/perl5/5.8.9 /Users/lcuff) at /opt/local/lib/perl5/vendor_perl/5.8.9/darwin-2level/Digest/SHA.pm line 55\cJ" at /opt/local/lib/perl5/5.8.9/CPAN.pm line 359
        CPAN::shell() called at /opt/local/bin/cpan line 198

    Read the article

  • How to handle media kept on a separate server (PHP)

    - by Sandman
    So, I have three servers, and the idea was to keep all media (images, files, movies) on a media server. I never got around to doing it, but I think I probably should. These are the three servers:

        WWW server
        DB server
        Media server

    Visitors obviously connect to the WWW server, and currently image resizing and caching is done on the WWW server, as the original files are kept there. The idea is that my image functions, which do all the image compositing, resizing and caching, would just pipe the command over to the media server, which would return the path to the finished file. What I don't know is how to handle functions such as file_exists() and figuring out image dimensions when needed, before any image management comes into play. Do I pipe all these commands to the other server via HTTP? I was thinking along the lines of doing it this way:

        function image(##ARGS##){
            if ($GLOBALS["media_host"] != "localhost"){
                list ($src, $width, $height) = file('http://$GLOBALS[media_host]/imgfunc.php?args=##ARGS##');
                return "<img src='$src' height and width >";
            }
            .... do other stuff here
        }

    Am I approaching this the wrong way? Is there a better way to do this?
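    Not PHP, but as a sketch of the "ask the media server over HTTP" idea, here is a small Python example that tests whether a remote file exists with a HEAD request; the host and image path are made up:

        # Check whether a file exists on a remote media server without downloading it.
        # media.example.com and the image path are placeholders.
        import urllib.request
        import urllib.error

        def remote_file_exists(url):
            req = urllib.request.Request(url, method="HEAD")
            try:
                with urllib.request.urlopen(req, timeout=5) as resp:
                    return 200 <= resp.status < 300
            except urllib.error.HTTPError:
                return False   # 404 and friends land here
            except urllib.error.URLError:
                return False   # unreachable host, DNS failure, etc.

        print(remote_file_exists("http://media.example.com/images/photo.jpg"))

    The same kind of round trip could return image dimensions or a resized file's path, which is essentially what piping the image functions over to the media server amounts to.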

    Read the article

  • python list mysteriously getting set to something within my django/piston handler

    - by Anverc
    To start, I'm very new to Python, let alone Django and Piston. Anyway, I've created a new BaseHandler class "class BaseApiHandler(BaseHandler)" so that I can extend some of the stuff that BaseHandler does. This has been working fine until I added a new filter that could limit results to the first or last result. Now I can refresh the api page over and over and sometimes it will limit the result even if I don't include /limit/whatever in my URL... I've added some debug info into my return value to see what is happening, and that's when it gets more weird. This return value will make more sense after you see the code, but here they are for reference.

    When the results are correct:

        "statusmsg": "2 hours_detail found with query: {'empid':'22','datestamp':'2009-03-02',}"

    When the results are incorrect (once you read the code you'll notice two things wrong: first, it doesn't have 'limit':'None'; secondly, it shouldn't even get this far to begin with):

        "statusmsg": "1 hours_detail found with query: {'empid':'22','datestamp':'2009-03-02',with limit[0,1](limit,None),}"

    It may be important to note that I'm the only person with access to the server running this right now, so even if it was a cache issue, it doesn't make sense that I can just refresh and get different results by hitting F5 while viewing: http://localhost/api/hours_detail/datestamp/2009-03-02/empid/22

    Here's the code broken into urls.py and handlers.py so that you can see what I'm doing.

    URLS.PY

        urlpatterns = patterns('',
            #hours_detail/id/{id}/empid/{empid}/projid/{projid}/datestamp/{datestamp}/daterange/{fromdate}to{todate}/limit/{first|last}/exact
            #empid is required
            # id, empid, projid, datestamp, daterange can be in any order
            url(r'^api/hours_detail/(?:' + \
                r'(?:[/]?id/(?P<id>\d+))?' + \
                r'(?:[/]?empid/(?P<empid>\d+))?' + \
                r'(?:[/]?projid/(?P<projid>\d+))?' + \
                r'(?:[/]?datestamp/(?P<datestamp>\d{4,}[-/\.]\d{2,}[-/\.]\d{2,}))?' + \
                r'(?:[/]?daterange/(?P<daterange>(?:\d{4,}[-/\.]\d{2,}[-/\.]\d{2,})(?:to|/-)(?:\d{4,}[-/\.]\d{2,}[-/\.]\d{2,})))?' + \
                r')+' + \
                r'(?:/limit/(?P<limit>(?:first|last)))?' + \
                r'(?:/(?P<exact>exact))?$', hours_detail_resource),

    HANDLERS.PY

        # inherit from BaseHandler to add the extra functionality i need to process the possibly null URL params
        class BaseApiHandler(BaseHandler):
            # keep track of the handler so the data is represented back to me correctly
            post_name = 'base'

            # THIS IS THE LIST IN QUESTION - SOMETIMES IT IS GETTING SET TO [0,1] MYSTERIOUSLY
            # this gets set to a list when the results are to be limited
            limit = None

            def has_limit(self):
                return (isinstance(self.limit, list) and len(self.limit) == 2)

            def process_kwarg_read(self, key, value, d_post, b_exact):
                """ this should be overridden in the derived classes to process kwargs """
                pass

            # override 'read' so we can better handle our api's searching capabilities
            def read(self, request, *args, **kwargs):
                d_post = {'status':0,'statusmsg':'Nothing Happened'}
                try:
                    # setup the named response object
                    # select all employees then filter - querysets are lazy in django
                    # the actual query is only done once data is needed, so this may
                    # seem like some memory hog slow beast, but it's actually not.
                    d_post[self.post_name] = self.queryset(request)

                    # this is a string that holds debug information...
                    # it's the string I mentioned before pasting this code
                    s_query = ''

                    b_exact = False
                    if 'exact' in kwargs and kwargs['exact'] <> None:
                        b_exact = True
                        s_query = '\'exact\':True,'

                    for key,value in kwargs.iteritems():
                        # the regex url possibilities will push None into the kwargs dictionary
                        # if not specified, so just continue looping through if that's the case
                        if value == None or key == 'exact':
                            continue

                        # write to the s_query string so we have a nice error message
                        s_query = '%s\'%s\':\'%s\',' % (s_query, key, value)

                        # now process this key/value kwarg
                        self.process_kwarg_read(key=key, value=value, d_post=d_post, b_exact=b_exact)

                    # end of the kwargs for loop
                    else:
                        if self.has_limit():
                            # THIS SEEMS TO GET HIT SOMETIMES IF YOU CONSTANTLY REFRESH THE API PAGE, EVEN THOUGH
                            # THE LINE IN THE FOR LOOP WHICH UPDATES s_query DOESN'T GET HIS AND THUS self.process_kwarg_read ALSO
                            # DOESN'T GET HIT SO NEITHER DOES limit = [0,1]
                            s_query = '%swith limit[%s,%s](limit,%s),' % (s_query, self.limit[0], self.limit[1], kwargs['limit'])
                            d_post[self.post_name] = d_post[self.post_name][self.limit[0]:self.limit[1]]

                    if d_post[self.post_name].count() == 0:
                        d_post['status'] = 0
                        d_post['statusmsg'] = '%s not found with query: {%s}' % (self.post_name, s_query)
                    else:
                        d_post['status'] = 1
                        d_post['statusmsg'] = '%s %s found with query: {%s}' % (d_post[self.post_name].count(), self.post_name, s_query)
                except:
                    e = sys.exc_info()[1]
                    d_post['status'] = 0
                    d_post['statusmsg'] = 'error: %s' % e
                    d_post[self.post_name] = []

                return d_post

        class HoursDetailHandler(BaseApiHandler):
            #allowed_methods = ('GET',)
            model = HoursDetail
            exclude = ()
            post_name = 'hours_detail'

            def process_kwarg_read(self, key, value, d_post, b_exact):
                if ...
                    # I have several if/elif statements here that check for other things...
                    # 'self.limit =' only shows up in the following elif:
                elif key == 'limit':
                    order_by = 'clock_time'
                    if value == 'last':
                        order_by = '-clock_time'
                    d_post[self.post_name] = d_post[self.post_name].order_by(order_by)
                    # TO GET HERE, THE ONLY PLACE IN CODE WHERE self.limit IS SET, YOU MUST HAVE GONE THROUGH
                    # THE value == None CHECK????
                    self.limit = [0, 1]
                else:
                    raise NameError

            def read(self, request, *args, **kwargs):
                # empid is required, so make sure it exists before running BaseApiHandler's read method
                if not('empid' in kwargs and kwargs['empid'] <> None and kwargs['empid'] >= 0):
                    return {'status':0,'statusmsg':'empid cannot be empty'}
                else:
                    return BaseApiHandler.read(self, request, *args, **kwargs)

    Does anyone have a clue how else self.limit might be getting set to [0, 1]? Am I misunderstanding kwargs or loops or anything in Python?
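    One thing worth checking (an assumption based on how django-piston resources are usually wired up, not a confirmed diagnosis): the handler object is typically created once and reused for many requests, so anything stored on self, such as self.limit = [0, 1], can survive into a later request and make an unlimited URL look limited. A self-contained Python toy that reproduces that kind of leak:

        # Toy example of per-request state leaking on a shared handler instance:
        # one object serves every request, so a value set while handling
        # request A is still there while handling request B.
        class Handler(object):
            def __init__(self):
                self.limit = None          # lives as long as the handler does

            def read(self, kwargs):
                if kwargs.get('limit'):
                    self.limit = [0, 1]    # sticks around after this request ends
                return self.limit

        handler = Handler()                        # built once, reused for every request
        print(handler.read({'limit': 'first'}))    # -> [0, 1]
        print(handler.read({}))                    # -> [0, 1]  stale state from the previous call

    If that is what is happening here, keeping the limit in a local variable inside read() (or resetting self.limit at the top of read()) should make the behaviour deterministic again.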

    Read the article

  • Getting Safari document title/location with Scripting Bridge does not work in full-screen mode

    - by Mark
    I'm trying to get the URL and document title from the topmost Safari document/tab. I have an AppleScript and an Objective-C version using Apple's Scripting Bridge framework. Both versions work fine for most web pages; however, when I open a YouTube video in full-screen mode, the Scripting Bridge based version fails. The AppleScript works fine for "normal" and full-screen Safari windows. Can anyone see what is wrong with the Scripting Bridge code below that causes it to fail for full-screen Safari windows? Here is the code (I omitted error checking for brevity):

    AppleScript:

        tell application "Safari"
            # Give us some time to open video in full-screen mode
            delay 10
            do JavaScript "document.title" in document 0
        end tell

    Scripting Bridge:

        SafariApplication* safari = [SBApplication applicationWithBundleIdentifier:@"com.apple.Safari"];
        SBElementArray* windows = [safari windows];
        SafariTab* currentTab = [[windows objectAtIndex: 0] currentTab];

        // This fails when in full-screen mode:
        id result = [safari doJavaScript: @"document.title" in: currentTab];
        NSLog(@"title: %@", result);

    Scripting Bridge error (with added line breaks):

        Apple event returned an error. Event = 'sfri'\'dojs'{
            '----':'utxt'("document.title"),
            'dcnm':'obj '{
                'want':'prop',
                'from':'obj '{
                    'want':'cwin',
                    'from':'null'(),
                    'form':'indx',
                    'seld':1
                },
                'form':'prop',
                'seld':'cTab'
            }
        }
        Error info = {
            ErrorNumber = -1728;
            ErrorOffendingObject = <SBObject @0x175c2de0: currentTab of SafariWindow 0 of application "Safari" (238)>;
        }

    I could not find details about the given error code. It complains about 'currentTab', which shows that the JavaScript event at least made it all the way to Safari. I assume that the current tab receives the event but refuses to run the JS code because it is in full-screen mode. However, why does this work for an AppleScript? Don't they use the same code path eventually? Any suggestions are greatly appreciated. Thanks!

    Read the article

  • Updating a Minimum spanning tree when a new edge is inserted

    - by Lynette
    Hello, I've been presented the following problem at university:

    Let G = (V, E) be an (undirected) graph with costs c_e ≥ 0 on the edges e ∈ E. Assume you are given a minimum-cost spanning tree T in G. Now assume that a new edge is added to G, connecting two nodes v, w ∈ V with cost c.

    a) Give an efficient algorithm to test if T remains the minimum-cost spanning tree with the new edge added to G (but not to the tree T). Make your algorithm run in time O(|E|). Can you do it in O(|V|) time? Please note any assumptions you make about what data structure is used to represent the tree T and the graph G.

    b) Suppose T is no longer the minimum-cost spanning tree. Give a linear-time algorithm (time O(|E|)) to update the tree T to the new minimum-cost spanning tree.

    This is the solution I found:

        Let e1 = (a, b) be the new edge added.
        Find in T the shortest path from a to b (BFS).
        If e1 is the most expensive edge in the resulting cycle, then T remains the MST;
        else T is not the MST.

    It seems to work, but I can easily make this run in O(|V|) time, while the problem asks for O(|E|) time. Am I missing something? By the way, we are authorized to ask for help from anyone, so I'm not cheating :D Thanks in advance
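    A small self-contained Python sketch of the approach described above (BFS for the tree path between the new edge's endpoints, then compare the new cost against the heaviest edge on that path). Since T is a tree with |V| - 1 edges, the path search only ever touches O(|V|) edges, which is why the bound comes out as O(|V|) rather than O(|E|):

        from collections import deque

        def tree_path(tree_adj, a, b):
            """Edges (u, v, cost) on the unique a-b path in the spanning tree."""
            parent = {a: None}
            queue = deque([a])
            while queue:
                u = queue.popleft()
                if u == b:
                    break
                for v, cost in tree_adj[u]:
                    if v not in parent:
                        parent[v] = (u, cost)
                        queue.append(v)
            path, node = [], b
            while parent[node] is not None:
                prev, cost = parent[node]
                path.append((prev, node, cost))
                node = prev
            return path

        def still_mst_after_adding(tree_adj, a, b, c):
            """(True, None) if T stays minimum after adding edge (a, b) with cost c,
            otherwise (False, tree_edge_to_drop)."""
            heaviest = max(tree_path(tree_adj, a, b), key=lambda e: e[2])
            if c >= heaviest[2]:
                return True, None          # new edge is at least as costly as every tree edge on the cycle
            return False, heaviest         # swap: drop `heaviest`, add (a, b, c)

        # Tiny made-up tree: edges 0-1 (4), 1-2 (7), 2-3 (2)
        T = {0: [(1, 4)], 1: [(0, 4), (2, 7)], 2: [(1, 7), (3, 2)], 3: [(2, 2)]}
        print(still_mst_after_adding(T, 0, 3, 5))   # the cost-5 edge beats the cost-7 edge on the path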

    Read the article

  • Strange Recurrent Excessive I/O Wait

    - by Chris
    I know quite well that I/O wait has been discussed multiple times on this site, but all the other topics seem to cover constant I/O latency, while the I/O problem we need to solve on our server occurs at irregular (short) intervals, but is ever-present with massive spikes of up to 20k ms await and service times of 2 seconds. The disk affected is /dev/sdb (Seagate Barracuda, for details see below). A typical iostat -x output would at times look like this, which is an extreme sample but by no means rare:

    iostat (Oct 6, 2013):

        tps     rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await     svctm    %util
        0.00    0.00      0.00      0.00      0.00      0.00      0.00     0.00
        0.00    0.00      0.00      0.00      0.00      0.00      0.00     0.00
        16.00   0.00      156.00    9.75      21.89     288.12    36.00    57.60
        5.50    0.00      44.00     8.00      48.79     2194.18   181.82   100.00
        2.00    0.00      16.00     8.00      46.49     3397.00   500.00   100.00
        4.50    0.00      40.00     8.89      43.73     5581.78   222.22   100.00
        14.50   0.00      148.00    10.21     13.76     5909.24   68.97    100.00
        1.50    0.00      12.00     8.00      8.57      7150.67   666.67   100.00
        0.50    0.00      4.00      8.00      6.31      10168.00  2000.00  100.00
        2.00    0.00      16.00     8.00      5.27      11001.00  500.00   100.00
        0.50    0.00      4.00      8.00      2.96      17080.00  2000.00  100.00
        34.00   0.00      1324.00   9.88      1.32      137.84    4.45     59.60
        0.00    0.00      0.00      0.00      0.00      0.00      0.00     0.00
        22.00   44.00     204.00    11.27     0.01      0.27      0.27     0.60

    Let me provide you with some more information regarding the hardware. It's a Dell 1950 III box with Debian as OS, where uname -a reports the following:

        Linux xx 2.6.32-5-amd64 #1 SMP Fri Feb 15 15:39:52 UTC 2013 x86_64 GNU/Linux

    The machine is a dedicated server that hosts an online game without any databases or I/O heavy applications running. The core application consumes about 0.8 of the 8 GBytes RAM, and the average CPU load is relatively low. The game itself, however, reacts rather sensitively to I/O latency and thus our players experience massive ingame lag, which we would like to address as soon as possible.

    iostat:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   1.77    0.01    1.05    1.59    0.00   95.58

        Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read    Blk_wrtn
        sdb      13.16  25.42       135.12      504701011   2682640656
        sda      1.52   0.74        20.63       14644533    409684488

    Uptime is:

        19:26:26 up 229 days, 17:26, 4 users, load average: 0.36, 0.37, 0.32

    Harddisk controller:

        01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04)

    Harddisks:

        Array 1, RAID-1, 2x Seagate Cheetah 15K.5 73 GB SAS
        Array 2, RAID-1, 2x Seagate ST3500620SS Barracuda ES.2 500GB 16MB 7200RPM SAS

    Partition information from df:

        Filesystem     1K-blocks       Used  Available  Use%  Mounted on
        /dev/sdb1      480191156   30715200  425083668    7%  /home
        /dev/sda2        7692908     437436    6864692    6%  /
        /dev/sda5       15377820    1398916   13197748   10%  /usr
        /dev/sda6       39159724   19158340   18012140   52%  /var

    Some more data samples generated with iostat -dx sdb 1 (Oct 11, 2013):

        Device:  rrqm/s  wrqm/s  r/s   w/s     rsec/s  wsec/s   avgrq-sz  avgqu-sz  await    svctm    %util
        sdb      0.00    15.00   0.00  70.00   0.00    656.00   9.37      4.50      1.83     4.80     33.60
        sdb      0.00    0.00    0.00  2.00    0.00    16.00    8.00      12.00     836.00   500.00   100.00
        sdb      0.00    0.00    0.00  3.00    0.00    32.00    10.67     9.96      1990.67  333.33   100.00
        sdb      0.00    0.00    0.00  4.00    0.00    40.00    10.00     6.96      3075.00  250.00   100.00
        sdb      0.00    0.00    0.00  0.00    0.00    0.00     0.00      4.00      0.00     0.00     100.00
        sdb      0.00    0.00    0.00  2.00    0.00    16.00    8.00      2.62      4648.00  500.00   100.00
        sdb      0.00    0.00    0.00  0.00    0.00    0.00     0.00      2.00      0.00     0.00     100.00
        sdb      0.00    0.00    0.00  1.00    0.00    16.00    16.00     1.69      7024.00  1000.00  100.00
        sdb      0.00    74.00   0.00  124.00  0.00    1584.00  12.77     1.09      67.94    6.94     86.00

    Characteristic charts generated with rrdtool can be found here:

        iostat plot 1, 24 min interval: http://imageshack.us/photo/my-images/600/yqm3.png/
        iostat plot 2, 120 min interval: http://imageshack.us/photo/my-images/407/griw.png/

    As we have a rather large cache of 5.5 GBytes, we thought it might be a good idea to test if the I/O wait spikes would perhaps be caused by cache miss events. Therefore, we did a sync and then this to flush the cache and buffers:

        echo 3 > /proc/sys/vm/drop_caches

    Directly afterwards the I/O wait and service times virtually went through the roof, and everything on the machine felt like slow motion. During the next few hours the latency recovered and everything was as before: small to medium lags in short, unpredictable intervals.

    Now my question is: does anybody have any idea what might cause this annoying behaviour? Is it the first indication of the disk array or the raid controller dying, or something that can be easily mended by rebooting? (At the moment we're very reluctant to do this, however, because we're afraid that the disks might not come back up again.) Any help is greatly appreciated. Thanks in advance, Chris.

    Edited to add: we do see one or two processes go to 'D' state in top, one of which seems to be kjournald rather frequently. If I'm not mistaken, however, this does not indicate the processes causing the latency, but rather those affected by it; correct me if I'm wrong. Does the information about uninterruptibly sleeping processes help us in any way to address the problem?

    @Andy Shinn requested smartctl data, here it is.

    smartctl -a -d megaraid,2 /dev/sdb yields:

        smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Device: SEAGATE ST3500620SS Version: MS05
        Serial number:
        Device type: disk
        Transport protocol: SAS
        Local Time is: Mon Oct 14 20:37:13 2013 CEST
        Device supports SMART and is Enabled
        Temperature Warning Disabled or Not Supported
        SMART Health Status: OK

        Current Drive Temperature: 20 C
        Drive Trip Temperature: 68 C
        Elements in grown defect list: 0

        Vendor (Seagate) cache information
          Blocks sent to initiator = 1236631092
          Blocks received from initiator = 1097862364
          Blocks read from cache and sent to initiator = 1383620256
          Number of read and write commands whose size <= segment size = 531295338
          Number of read and write commands whose size > segment size = 51986460

        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 36556.93
          number of minutes until next internal SMART test = 32

        Error counter log:
                   Errors Corrected by           Total      Correction   Gigabytes     Total
                   ECC         rereads/          errors     algorithm    processed     uncorrected
                   fast | delayed   rewrites     corrected  invocations  [10^9 bytes]  errors
        read:      509271032   47    0           509271079  509271079    20981.423     0
        write:     0           0     0           0          0            5022.039      0
        verify:    1870931090  196   0           1870931286 1870931286   100558.708    0

        Non-medium error count: 0

        SMART Self-test log
        Num  Test              Status     segment  LifeTime  LBA_first_err [SK ASC ASQ]
             Description                  number   (hours)
        # 1  Background short  Completed  16       36538     - [- - -]
        # 2  Background short  Completed  16       36514     - [- - -]
        # 3  Background short  Completed  16       36490     - [- - -]
        # 4  Background short  Completed  16       36466     - [- - -]
        # 5  Background short  Completed  16       36442     - [- - -]
        # 6  Background long   Completed  16       36420     - [- - -]
        # 7  Background short  Completed  16       36394     - [- - -]
        # 8  Background short  Completed  16       36370     - [- - -]
        # 9  Background long   Completed  16       36364     - [- - -]
        #10  Background short  Completed  16       36361     - [- - -]
        #11  Background long   Completed  16       2         - [- - -]
        #12  Background short  Completed  16       0         - [- - -]

        Long (extended) Self Test duration: 6798 seconds [113.3 minutes]

    smartctl -a -d megaraid,3 /dev/sdb yields:

        smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Device: SEAGATE ST3500620SS Version: MS05
        Serial number:
        Device type: disk
        Transport protocol: SAS
        Local Time is: Mon Oct 14 20:37:26 2013 CEST
        Device supports SMART and is Enabled
        Temperature Warning Disabled or Not Supported
        SMART Health Status: OK

        Current Drive Temperature: 19 C
        Drive Trip Temperature: 68 C
        Elements in grown defect list: 0

        Vendor (Seagate) cache information
          Blocks sent to initiator = 288745640
          Blocks received from initiator = 1097848399
          Blocks read from cache and sent to initiator = 1304149705
          Number of read and write commands whose size <= segment size = 527414694
          Number of read and write commands whose size > segment size = 51986460

        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 36596.83
          number of minutes until next internal SMART test = 28

        Error counter log:
                   Errors Corrected by           Total      Correction   Gigabytes     Total
                   ECC         rereads/          errors     algorithm    processed     uncorrected
                   fast | delayed   rewrites     corrected  invocations  [10^9 bytes]  errors
        read:      610862490   44    0           610862534  610862534    20470.133     0
        write:     0           0     0           0          0            5022.480      0
        verify:    2861227413  203   0           2861227616 2861227616   100872.443    0

        Non-medium error count: 1

        SMART Self-test log
        Num  Test              Status     segment  LifeTime  LBA_first_err [SK ASC ASQ]
             Description                  number   (hours)
        # 1  Background short  Completed  16       36580     - [- - -]
        # 2  Background short  Completed  16       36556     - [- - -]
        # 3  Background short  Completed  16       36532     - [- - -]
        # 4  Background short  Completed  16       36508     - [- - -]
        # 5  Background short  Completed  16       36484     - [- - -]
        # 6  Background long   Completed  16       36462     - [- - -]
        # 7  Background short  Completed  16       36436     - [- - -]
        # 8  Background short  Completed  16       36412     - [- - -]
        # 9  Background long   Completed  16       36404     - [- - -]
        #10  Background short  Completed  16       36401     - [- - -]
        #11  Background long   Completed  16       2         - [- - -]
        #12  Background short  Completed  16       0         - [- - -]

        Long (extended) Self Test duration: 6798 seconds [113.3 minutes]

    Read the article

  • decompressing .gZ file from Document directory?

    - by senthilmuthu
    Hi, I have a .gz file in the Documents directory and I want to unzip it. I am using the libz.dylib framework. Will it decompress and save all the data to that file path? How can I get the extracted data? Has anyone experience in doing this? Any help? When I use the method below and put a breakpoint in, it returns a data error, Z_DATA_ERROR (seen via NSLog):

        - (id)initWithGzippedData: (NSData *)gzippedData;
        {
            [gzippedData retain];
            if ([gzippedData length] == 0) return nil;

            unsigned full_length = [gzippedData length];
            unsigned half_length = [gzippedData length] / 2;
            NSMutableData *decompressed = [[NSMutableData alloc] initWithLength:(full_length + half_length)];
            BOOL done = NO;
            int status;

            z_stream strm;
            strm.next_in = (Bytef *)[gzippedData bytes];
            strm.avail_in = [gzippedData length];
            strm.total_out = 0;
            strm.zalloc = Z_NULL;
            strm.zfree = Z_NULL;

            if (inflateInit2(&strm, (15+32)) != Z_OK) {
                [gzippedData release];
                [decompressed release];
                return nil;
            }

            while (!done) {
                // Make sure we have enough room and reset the lengths.
                if (strm.total_out >= [decompressed length])
                    [decompressed increaseLengthBy: half_length];
                strm.next_out = [decompressed mutableBytes] + strm.total_out;
                strm.avail_out = [decompressed length] - strm.total_out;

                // Inflate another chunk.
                status = inflate (&strm, Z_SYNC_FLUSH);
                if (status == Z_DATA_ERROR) {
                    NSLog(@"data error");
                }
                if (status == Z_STREAM_END)
                    done = YES;
                else if (status != Z_OK)
                    break;
            }

            if (inflateEnd (&strm) != Z_OK) {
                [decompressed release];
                return nil;
            }

            // Set real length.
            [decompressed setLength: strm.total_out];

            id newObject = [self initWithBytes:[decompressed bytes] length:[decompressed length]];
            [decompressed release];
            [gzippedData release];
            return newObject;
        }
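    For comparison, the same windowBits trick in Python's zlib (purely illustrative): passing 32 + MAX_WBITS asks inflate to auto-detect a gzip or zlib header, which is what the (15+32) in the code above does. A Z_DATA_ERROR from inflate generally means the bytes being fed in are not actually a valid gzip/zlib stream, so it is also worth checking what is really being read from the file path.

        # Decompress gzip data held in memory; wbits = 32 + MAX_WBITS matches
        # inflateInit2(&strm, 15 + 32) in the Objective-C code above.
        import gzip
        import zlib

        original = b"hello gzip" * 100
        gz_bytes = gzip.compress(original)      # stand-in for the .gz file's bytes

        restored = zlib.decompress(gz_bytes, 32 + zlib.MAX_WBITS)
        assert restored == original
        print(len(gz_bytes), "compressed ->", len(restored), "decompressed")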

    Read the article

  • Visual C++ Assembly link library troubles

    - by Sanarothe
    Hi. I'm having a problem getting my projects to build in VC++ Express 2008... I'm using a library, Irvine32.inc/lib. INCLUDE Irvine32.inc works for me at school (on already configured VS environments) by default, but at home (Windows 7 x64) I'm having a boatload of issues.

    My original problem was that a file referenced by Irvine32.inc, in the same folder, 'could not be opened.' I added the Irvine folder to the include path for the specific project: progress. Then I was getting an error with mt.exe, but a suggestion on MSDN said to turn off antivirus, and now the project does build. But when I run a program that does NOT reference anything in Irvine32, it tells me repeatedly that my project has triggered a breakpoint, and allows me to continue or break. Continue just pops up the same window; break loads another popup telling me that "No symbols are loaded for any call stack frame. Source code cannot be displayed." This popup lets me view the disassembly. I tested it with and without working statements; it just throws the same breakpoint on the first line of code.

    Now, if I run a program that DOES require something from the include file, in this case DumpRegs:

        INCLUDE Irvine32.inc
        .data
        .code
        main PROC
            mov ebx,1000h
            mov eax,1000h
            add eax,ebx
            call DumpRegs
        main ENDP
        END main

    it gives me:

        1>main.obj : error LNK2019: unresolved external symbol _DumpRegs@0 referenced in function _main@0
        1>C:\Users\Cameron\csis165\Lab8_CCarroll\Debug\Lab8_CCarroll.exe : fatal error LNK1120: 1 unresolved externals

    This does NOT happen when I build a project from the book author's examples, which has the same INCLUDE statement. I'm baffled. :(

    Read the article
