Search Results

Search found 17579 results on 704 pages for 'mercurial build'.

Page 607/704 | < Previous Page | 603 604 605 606 607 608 609 610 611 612 613 614  | Next Page >

  • xcode project-/target-settings-syntax for linker flag force_load on iPhone

    - by Kaiserludi
    Hi all. I am confronted with a double bind: on the one hand, for one of the 3rd-party static libraries my iPhone application uses, the linker flag -all_load has to be set in the application's project or target settings, otherwise the app crashes at runtime because it cannot find some symbols called internally from the lib; on the other hand, for another 3rd-party static lib, -all_load must not be set at the application level, or the app won't build because of a "duplicate symbols" linker error. To solve this issue I now want to use -force_load instead of -all_load, as according to the documentation it does the same as -all_load, but only for the passed path or lib file instead of all libs. The problem with -force_load is that I don't have a clue how to pass a path or file as its parameter when setting it via the Xcode project or target settings. All syntax variants coming to my mind either lead to Xcode treating it as another linker flag instead of a parameter to the previous one, or the linker throws syntax-related errors, or the flag simply does nothing at all compared to not being set. I also opened the .pbxproj file in a text editor to enter the correct command-line syntax manually, but when reloading the project, Xcode changes the syntax back and again interprets the parameter to -force_load as a separate flag. Anyone having an idea on this issue? Thx, Kaiserludi.

    Read the article

  • How to update maven local repository with newer artifacts from a remote repository?

    - by Richard
    My maven module A has a dependency on another maven module B provided by other people. When I run "mvn install" under A for the first time, maven downloads B-1.0.jar from a remote repository to my local maven repository. My module A builds fine. In the meantime, other people are deploying newer B-1.0.jar files to the remote repository. When I run "mvn install" under A again, maven does not download the newer B-1.0.jar from the remote repository to my local repository. As a result, my module A build fails due to API changes in B-1.0.jar. I could manually delete B-1.0.jar from my local repository. Then maven would download the latest B-1.0.jar from the remote repository the next time I run "mvn install". My question is how I can automatically make maven download the latest artifacts from a remote repository. I tried setting updatePolicy to "always", but that did not do the trick.

    Read the article

  • invasive vs non-invasive ref-counted pointers in C++

    - by anon
    For the past few years, I've generally accepted that if I am going to use ref-counted smart pointers, invasive smart pointers are the way to go. However, I'm starting to like non-invasive smart pointers due to the following: I only use smart pointers (so no Foo* lying around, only Ptr), and I'm starting to build custom allocators for each class (so Foo would overload operator new). Now, if Foo has a list of all Ptr (as it easily can with non-invasive smart pointers), then I can avoid memory fragmentation issues, since class Foo can move the objects around (and just update the corresponding Ptr). The only reason moving objects around is easier with non-invasive smart pointers than with invasive ones is this: with non-invasive smart pointers, there is only one pointer that points to each Foo; with invasive smart pointers, I have no idea how many objects point to each Foo. Now, the only cost of non-invasive smart pointers ... is the double indirection. [Perhaps this screws up the caches]. Does anyone have a good study of how expensive this extra layer of indirection is?

    Read the article

  • Using IAM for user authentication

    - by mdavis6890
    I've read lots and lots of posts that touch on what I think should be a very common use case - but without finding exactly what I want, or a simple reason why it can't be done. I have some files on S3. I want to be able to grant certain users access to certain files, via a front end that I build. So far, I've made it work this way: I built the front end in Django, using its built-in Users and Groups. I have a model for Buckets, in which I mirror my S3 buckets. I have an m2m relationship from groups to buckets representing the S3 permissions. The user logs in and authenticates against Django's users. I grab from Django the list of buckets that the user is allowed to see. I use boto to grab a list of links to files from those buckets and display them to the user. This works, but isn't ideal, and also just doesn't feel right. I've got to keep a mirror of the buckets, and I also have to maintain my own list of users/passwords and permissions, when AWS already has all that built in. What I really want is to simply create the users in IAM and use group permissions in IAM to control access to the S3 buckets. No duplication of data or function. My app would request a UN/PW from the user and use that to connect to IAM/S3 to pull the list of buckets and files, then display links to the user. Simple. How can I, or why can't I? Am I looking at this the wrong way? What's the "right" way to address this (I assume) very common use case?
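
    A minimal sketch (not from the original post) of the approach described at the end - handing per-user AWS credentials straight to the API and letting IAM decide what is visible. It assumes boto3 rather than the original boto, and the bucket name is hypothetical:

      import boto3
      from botocore.exceptions import ClientError

      def list_visible_objects(access_key, secret_key, bucket):
          """List keys in a bucket using the caller's own IAM credentials.

          The IAM policies attached to the user's group decide whether the
          call succeeds, so no permission data has to be mirrored in the app.
          """
          session = boto3.session.Session(
              aws_access_key_id=access_key,
              aws_secret_access_key=secret_key,
          )
          s3 = session.client("s3")
          try:
              response = s3.list_objects_v2(Bucket=bucket)
          except ClientError:
              # AccessDenied (and similar) means IAM rejected the request.
              return []
          return [obj["Key"] for obj in response.get("Contents", [])]

      # Hypothetical usage with credentials collected by the front end:
      # print(list_visible_objects(user_key, user_secret, "example-bucket"))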

    Read the article

  • How to solve generic algebra using a solver/library programmatically? Matlab, Mathematica, Wolfram, etc.?

    - by DevDevDev
    I'm trying to build an algebra trainer for students. I want to construct a representative problem, define constraints and relationships on the parameters, and then generate a bunch of LaTeX-formatted problems from the representation. As an example: A specific question might be: If y < 0 and (x+3)(y-5) = 0, what is x? Answer (x = -3) I would like to encode this as a LaTeX-formatted problem like: If $y<0$ and $(x+constant_1)(y+constant_2)=0$ what is the value of x? Answer = -constant_1 And plug into my problem solver constant_1 > 0, constant_1 < 60, constant_1 = INTEGER constant_2 < 0, constant_2 > -60, constant_2 = INTEGER Then it will randomly construct pairs of (constant_1, constant_2) that I can feed into my LaTeX generator. Obviously this is an extremely simple example with no real "solving", but hopefully it gets the point across. Things I'm looking for, ideally in priority order: * Solves algebra problems * Definition of relationships is relatively straightforward * Rich support for LaTeX formatting (not just writing encoded strings) Thanks!
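
    A sketch (not from the original post) of how a symbolic math library can both solve a constructed instance and emit the LaTeX for it; SymPy is assumed here, and the constant ranges are the ones listed above:

      import random
      from sympy import symbols, Eq, solve, latex

      x, y = symbols("x y")

      # Draw constants under the stated constraints.
      c1 = random.randint(1, 59)     # constant_1: integer, 0 < constant_1 < 60
      c2 = -random.randint(1, 59)    # constant_2: integer, -60 < constant_2 < 0

      # (x + c1)(y + c2) = 0; since y < 0 and c2 < 0, the y factor never vanishes.
      equation = Eq((x + c1) * (y + c2), 0)
      answer = solve(equation.subs(y, -1), x)   # any y < 0 gives the same x

      problem_tex = r"If $y<0$ and $" + latex(equation) + r"$, what is the value of $x$?"
      print(problem_tex)
      print("Answer:", answer)   # [-c1]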

    Read the article

  • How to inherit the current path when invoking Maven's exec-maven-plugin?

    - by wishihadabettername
    I have an <exec-maven-plugin> which calls an external command (in this case, svnversion). The command is in the path for the current user. However, when a separate shell is spawned by the plugin, the path is not initialized. I don't want to hardcode or define a variable for each external command (there would be too many to maintain, especially since there are both Windows and *nix users). My pom.xml contains the following: <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.1</version> <executions> <execution> <id>svnversion-exec</id> <phase>process-resources</phase> <goals> <goal>exec</goal> </goals> <configuration> <executable>svnversion</executable> <arguments> <argument><![CDATA[ >version.txt ]]></argument> </arguments> </configuration> </execution> </executions> </plugin> Currently I get the following output: [INFO] [exec:exec {execution: svnversion-exec}] 'svnversion' is not recognized as an internal or external command, operable program or batch file. [ERROR] BUILD ERROR: Result of cmd.exe /X /C "svnversion >version.txt" execution is: '1'. Thank you!

    Read the article

  • Excess errors on model from somewhere

    - by gmile
    I have a User model, and use acts_as_authentic (from authlogic) on it. My User model has 3 validations on username and looks as follows: class User < ActiveRecord::Base acts_as_authentic validates_presence_of :username validates_length_of :username, :within => 4..40 validates_uniqueness_of :username end I'm writing a test to see my validations in action. Somehow, I get 2 errors instead of one when validating the uniqueness of a name. To see the excess error, I do the following test: describe User do before(:each) do @user = Factory.build(:user) end it "should have a username longer then 3 symbols" do @user2 = Factory(:user) @user.username = @user2.username @user.save puts @user.errors.inspect end end I got 2 errors on username: @errors={"username"=>["has already been taken", "has already been taken"]}. Somehow the validation runs twice. I think authlogic causes that, but I don't have a clue on how to avoid it. Another case of the problem is when I set username to nil. Somehow I get four validation errors instead of three: @errors={"username"=>["is too short (minimum is 3 characters)", "should use only letters, numbers, spaces, and .-_@ please.", "can't be blank", "is too short (minimum is 4 characters)"]} I think authlogic is the one that causes this strange behaviour. But I can't even imagine how to solve that. Any ideas?

    Read the article

  • File descriptor limits and default stack sizes

    - by Charles
    Where I work we build and distribute a library and a couple of complex programs built on that library. All code is written in C and is available on most 'standard' systems like Windows, Linux, Aix, Solaris, Darwin. I started in the QA department and while running tests recently I have been reminded several times that I need to remember to set the file descriptor limits and default stack sizes higher or bad things will happen. This is particularly the case with Solaris and now Darwin. Now this is very strange to me because I am a believer in 0 required environment fiddling to make a product work. So I am wondering if there are times where this sort of requirement is a necessary evil, or if we are doing something wrong. Edit: Great comments that describe the problem and a little background. However I do not believe I worded the question well enough. Currently, we require customers, and hence, us the testers, to set these limits before running our code. We do not do this programmatically. And this is not a situation where they MIGHT run out; under normal load our programs WILL run out and seg fault. So rewording the question: is requiring the customer to change these ulimit values to run our software to be expected on some platforms, i.e., Solaris, Aix, or are we as a company making it too difficult for these users to get going? Bounty: I added a bounty to hopefully get a little more information on what other companies are doing to manage these limits. Can you set these programmatically? Should we? Should our programs even be hitting these limits or could this be a sign that things might be a bit messy under the covers? That is really what I want to know; as a perfectionist, a seemingly dirty program really bugs me.
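
    Not from the original question, but on the "can you set these programmatically" part: on POSIX systems a process can raise its own soft limits up to the hard limit at startup (setrlimit() in C). A minimal sketch using Python's resource module, with arbitrary target values:

      import resource

      def raise_soft_limit(which, wanted):
          """Raise a soft limit toward `wanted`, never above the hard limit."""
          soft, hard = resource.getrlimit(which)
          if hard != resource.RLIM_INFINITY:
              wanted = min(wanted, hard)
          resource.setrlimit(which, (max(soft, wanted), hard))
          return resource.getrlimit(which)

      # File descriptors and stack size, the two limits mentioned above.
      print(raise_soft_limit(resource.RLIMIT_NOFILE, 8192))
      print(raise_soft_limit(resource.RLIMIT_STACK, 64 * 1024 * 1024))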

    Read the article

  • Efficiently sending protocol buffer messages with http on an android platform

    - by Ben Griffiths
    I'm trying to send messages generated by Google Protocol Buffer code via a simple HTTP scheme to a server. What I currently have implemented is here (forgive the obvious incompleteness...): HttpClient client = new DefaultHttpClient(); String url = "http://192.168.1.69:8888/sdroidmarshal"; HttpPost postRequest = new HttpPost(url); String proto = offers.build().toString(); List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(1); nameValuePairs.add(new BasicNameValuePair("sdroidmsg", proto)); postRequest.setEntity(new UrlEncodedFormEntity(nameValuePairs)); try { ResponseHandler<String> responseHandler = new BasicResponseHandler(); String responseBody = client.execute(postRequest, responseHandler); } catch (Throwable t) { } I'm not that experienced with communications over the internet, and no more so with HTTP - though I do understand the basics... So my question, before I blindly develop the rest of the application around this, is whether or not this is particularly efficient. Ideally I would like to keep messages small, and I assume toString() adds some unnecessary formatting.
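
    Not from the original post, and sketched in Python rather than Android Java, but it illustrates the size concern: posting the binary wire format (SerializeToString here, toByteArray() in Java) is considerably more compact than sending the message's text rendering. The offers_pb2 module and its Offers message are hypothetical stand-ins for the poster's generated classes:

      import requests      # assumed available; any HTTP client works the same way
      import offers_pb2    # hypothetical module generated by protoc

      offers = offers_pb2.Offers()
      # ... populate the message fields here ...

      payload = offers.SerializeToString()   # compact binary wire format
      text_form = str(offers)                # the (much larger) text rendering

      response = requests.post(
          "http://192.168.1.69:8888/sdroidmarshal",
          data=payload,
          headers={"Content-Type": "application/x-protobuf"},
      )
      print(len(payload), "bytes as binary vs", len(text_form), "bytes as text")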

    Read the article

  • Memory Leak in Windows Page file when calling a shell command

    - by Arno
    I have an issue on our Windows 2003 x64 Build Server when invoking shell commands from a script. Each call causes a "memory leak" in the page file so it grows quite rapidly until it reaches the maximum and the machine stops working. I can reproduce the problem very nicely by running a perl script like for ($count=1; $count<5000; $count++) { system "echo huhu"; } It is independent of the scripting language as the same happens with lua: for i=1,5000 do os.execute("echo huhu") end I found somebody describing the same issue with php at http://www.issociate.de/board/post/454835/Memory_leak_occurs_when_exec%28%29_function_is_used_on_Windows_platform.html His solution (firewall/virus scanner) does not apply; neither is running on the machine. We can also reproduce the issue on other developer machines running XP 64, but not on XP 32 Bit. I also found an article describing a leak situation in the page file at http://www.programfragment.com/ The culprit for the allocation is C:\WINDOWS\System32\svchost.exe -k netsvcs which runs all the basic Windows services. Does anybody know this issue and how to resolve it?

    Read the article

  • What is your favorite API developer community site? And why? [closed]

    - by whatupwilly
    There are a lot of great sites out there that offer good documentation, tools, tips, best practices, sample code, etc. for the APIs they are publishing. A sample: http://apiwiki.twitter.com http://developer.netflix.com/ http://developers.facebook.com/ https://affiliate-program.amazon.com/gp/advertising/api/detail/main.html http://code.google.com/ http://remix.bestbuy.com/ http://www.flickr.com/services/api/misc.overview.html http://products.wolframalpha.com/api/webserviceapi.html There are some no-brainers that I think a good developer site should have: High-level introduction Quick start guide API-specific details - showing example requests and responses Links to sample code and/or 3rd party libraries Developer registration (e.g. get an API key) Blog But what about some other things: Online forum or message board vs. Google Group (or similar) Galleries/showcases - spotlighting great apps built on the API - who has done nice galleries? Community wiki - how do people feel about letting the community have edit rights on API documentation pages? Online testing tools (like Facebook has a lot of nice interactive tools to simulate requests/responses) What are some packages that you would recommend to put this all together: pbwiki Google Group pages MediaWiki API vendor package such as Sonoa Systems that offers a customizable developer portal So, to summarize: What are some other great API developer portals out there? What are some nice features you like on them? Any recommendations on what to use to build these features out? Thanks, Will Zappos.com Public API (soon to launch) Product Manager

    Read the article

  • SQLAlchemy Expression Language problem

    - by Torkel
    I'm trying to convert this to something sqlalchemy expression language compatible; I don't know if it's possible out of the box, and I'm hoping someone more experienced can help me along. The backend is PostgreSQL, and if I can't make it as an expression I'll create a string instead. SELECT DISTINCT date_trunc('month', x.x) as date, COALESCE(b.res1, 0) AS res1, COALESCE(b.res2, 0) AS res2 FROM generate_series( date_trunc('year', now() - interval '1 years'), date_trunc('year', now() + interval '1 years'), interval '1 months' ) AS x LEFT OUTER JOIN( SELECT date_trunc('month', access_datetime) AS when, count(NULLIF(resource_id != 1, TRUE)) AS res1, count(NULLIF(resource_id != 2, TRUE)) AS res2 FROM tracking_entries GROUP BY date_trunc('month', access_datetime) ) AS b ON (date_trunc('month', x.x) = b.when) First of all, I have a class TrackingEntry mapped to tracking_entries; the select statement within the outer join can be converted to something like (pseudocode): from sqlalchemy.sql import func, select from datetime import datetime, timedelta stmt = select([ func.date_trunc('month', TrackingEntry.resource_id).label('when'), func.count(func.nullif(TrackingEntry.resource_id != 1, True)).label('res1'), func.count(func.nullif(TrackingEntry.resource_id != 2, True)).label('res2') ], group_by=[func.date_trunc('month', TrackingEntry.access_datetime), ]) Considering the outer select statement, I have no idea how to build it; my guess is something like: outer = select([ func.distinct(func.date_trunc('month', ?)).label('date'), func.coalesce(?.res1, 0).label('res1'), func.coalesce(?.res2, 0).label('res2') ], from_obj=[ func.generate_series( datetime.now(), datetime.now() + timedelta(days=365), timedelta(days=1) ).label(x) ]) Then I suppose I have to link those statements together without using foreign keys: outer.outerjoin(stmt???).??(func.date_trunc('month', ?.?), ?.when) Anyone got any suggestions, or even better, a solution?
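
    Not an answer from the original post, but a sketch of how the series source and the outer join might be wired together. It assumes a recent SQLAlchemy (1.4+) where a table-valued function can be selected from via column_valued(), and it reuses the poster's TrackingEntry mapping and inner statement as a subquery; the exact join/FROM spelling may need adjusting for your version:

      from sqlalchemy import func, literal_column, select

      # Series of month starts as a FROM-able column (sketch, assumes SQLAlchemy 1.4+).
      x = func.generate_series(
          func.date_trunc("year", func.now() - literal_column("interval '1 year'")),
          func.date_trunc("year", func.now() + literal_column("interval '1 year'")),
          literal_column("interval '1 month'"),
      ).column_valued("x")

      inner = (
          select(
              func.date_trunc("month", TrackingEntry.access_datetime).label("when"),
              func.count(func.nullif(TrackingEntry.resource_id != 1, True)).label("res1"),
              func.count(func.nullif(TrackingEntry.resource_id != 2, True)).label("res2"),
          )
          .group_by(func.date_trunc("month", TrackingEntry.access_datetime))
          .subquery("b")
      )

      outer = (
          select(
              func.date_trunc("month", x).label("date"),
              func.coalesce(inner.c.res1, 0).label("res1"),
              func.coalesce(inner.c.res2, 0).label("res2"),
          )
          .distinct()
          .outerjoin(inner, func.date_trunc("month", x) == inner.c.when)
      )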

    Read the article

  • Unable to create PDB file

    - by Ryan Smith
    For some reason this error started popping up today on one of my projects. Error 1 Unable to write to output file 'C:\MyProject\Release\MyProject.pdb': Unspecified error If I go into advanced compile options and change it to not generate any debug info, my project compiles fine. I have tried setting the permissions on the Release folder to full for everyone, so I would assume it's not a permissions issue. Also, I don't see anything in my log files that would provide me with more information about the issue. Does anyone know why this error would just start showing up or a way to fix it? Thanks. Update: I have rebooted my machine, restarted VS several times and have even completely deleted the existing OBJ file where the issue is happening. It's still giving me the same error. This is a simple one-project solution that was working fine just last week. It appears to be an issue with VS trying to build the PDB file because I can delete them out of the Release and Debug folders without issue. When I try rebuilding them VS will start creating the file (about 1.4 MB in size) but I still get the error.

    Read the article

  • Are Dynamic Prepared Statements Bad? (with php + mysqli)

    - by John
    I like the flexibility of Dynamic SQL and I like the security + improved performance of Prepared Statements. So what I really want is Dynamic Prepared Statements, which is troublesome to make because bind_param and bind_result accept "fixed" number of arguments. So I made use of an eval() statement to get around this problem. But I get the feeling this is a bad idea. Here's example code of what I mean // array of WHERE conditions $param = array('customer_id'=>1, 'qty'=>'2'); $stmt = $mysqli->stmt_init(); $types = ''; $bindParam = array(); $where = ''; $count = 0; // build the dynamic sql and param bind conditions foreach($param as $key=>$val) { $types .= 'i'; $bindParam[] = '$p'.$count.'=$param["'.$key.'"]'; $where .= "$key = ? AND "; $count++; } // prepare the query -- SELECT * FROM t1 WHERE customer_id = ? AND qty = ? $sql = "SELECT * FROM t1 WHERE ".substr($where, 0, strlen($where)-4); $stmt->prepare($sql); // assemble the bind_param command $command = '$stmt->bind_param($types, '.implode(', ', $bindParam).');'; // evaluate the command -- $stmt->bind_param($types,$p0=$param["customer_id"],$p1=$param["qty"]); eval($command); Is that last eval() statement a bad idea? I tried to avoid code injection by encapsulating values behind the variable name $param. Does anyone have an opinion or other suggestions? Are there issues I need to be aware of?
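
    Not from the original post, but the same shape of problem is simpler when the driver's execute() call accepts a parameter sequence, because the dynamically built placeholder string and the value list travel together and no eval() is needed. A Python DB-API sketch (table and column names taken from the example above):

      # Hedged sketch: build a WHERE clause from a dict without eval(), letting the
      # driver bind the values (placeholder style shown is the MySQLdb/PyMySQL one).
      def build_select(table, conditions):
          where = " AND ".join("{} = %s".format(column) for column in conditions)
          sql = "SELECT * FROM {} WHERE {}".format(table, where)
          return sql, list(conditions.values())

      params = {"customer_id": 1, "qty": 2}
      sql, values = build_select("t1", params)
      print(sql, values)
      # cursor.execute(sql, values)  # values are bound by the driver, never eval'd
      # Note: table/column names are interpolated, so they must come from a
      # trusted whitelist, just as in the PHP version.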

    Read the article

  • When I zip up my demo FlashDevelop project..why does it break?

    - by Ryan
    I built an AS3 image gallery using FlashDevelop. Before I zip up the application, I can run the image gallery in my browser by simply opening the index.html for the project. Everything works perfectly. I then zip up the project as proj-0.1.2.zip using winrar. I then unzip this newly created zip and try to load the application using the project index.html like above. The gallery doesn't function properly. From seeing what happens, it appears as though the image metadata is not present (but I'm not sure, see below). There are other applications as well that are broken. Videos don't load. If an application doesn't depend on any external assets then everything looks fine. Another thing: if I then build the FlashDevelop project and republish the SWF, then it works in the index.html like I want. What is going on here? I want people to be able to fire up my demo apps out of the box by just running the index.html. If that doesn't always work and they have to figure out that they need to rebuild the SWF, then that's pretty bad.

    Read the article

  • Will Learning C++ Help for Building Fast/No-Additional-Requirements Desktop Applications?

    - by vito
    Will learning C++ help me build native applications with good speed? Will it help me as a programmer, and what are the other benefits? The reason why I want to learn C++ is because I'm disappointed with the UI performance of applications built on top of the JVM and .NET. They feel slow, and start slow too. Of course, a really bad programmer can create a slow and sluggish application using C++ too, but I'm not considering that case. One of my favorite Windows utility applications is Launchy. And in the Readme.pdf file, the author of the program wrote this: 0.6 This is the first C++ release. As I became frustrated with C#’s large .NET framework requirements and users lack of desire to install it, I decided to switch back to the faster language. I totally agree with the author of Launchy about the .NET framework requirement or even a JRE requirement for desktop applications, let alone the specific version of them. And some of the best and my favorite desktop applications don't need .NET or Java to run. They just run after installing. Are they mostly built using C++? Is C++ the only option for good and fast GUI-based applications? And, I'm also very interested in hearing the other benefits of learning C++.

    Read the article

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot (up to 60 TB) of big files (usually 30 to 40 GB each) into tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, once for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I'd need some way to read a file, feeding a checksumming tool on one side, and building a tar to tape on the other side, something along the lines of: tar cf - files | tee tarfile.tar | md5sum - Except that I don't want the checksum of the whole archive (this sample shell code does just this) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax, and Star options. I've looked at the source from Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc simply won't cut it performance-wise, and the various tar programs lack the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
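
    Not from the original post, and the poster doubts a scripting language can sustain the LTO-4 rate, but as an illustration of the single-read idea: hash each file's data as the archiver pulls it through a wrapping file object. A Python sketch with tarfile and hashlib, streaming the archive to stdout so it can be piped onward (file names come from the command line):

      import hashlib
      import sys
      import tarfile

      class HashingReader(object):
          """Wrap a file object and hash every block the archiver reads."""
          def __init__(self, fileobj, digest):
              self.fileobj = fileobj
              self.digest = digest

          def read(self, size=-1):
              data = self.fileobj.read(size)
              self.digest.update(data)
              return data

      def tar_with_checksums(paths, out):
          checksums = {}
          with tarfile.open(fileobj=out, mode="w|") as tar:   # streaming tar
              for path in paths:
                  info = tar.gettarinfo(path)
                  with open(path, "rb") as f:
                      digest = hashlib.md5()
                      tar.addfile(info, HashingReader(f, digest))  # one read per file
                      checksums[path] = digest.hexdigest()
          return checksums

      if __name__ == "__main__":
          for name, md5 in tar_with_checksums(sys.argv[1:], sys.stdout.buffer).items():
              sys.stderr.write("%s  %s\n" % (md5, name))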

    Read the article

  • Mapping an instance of IList in NHibernate

    - by Martin Kirsche
    I'm trying to map a parent-child relationship using NHibernate (2.1.2), MySql.Data (6.2.2) and MySQL Server (5.1). I figured out that this must be done using a <bag> in the mapping file. I built a test app which runs without yielding any errors and does an insert for each entry, but somehow the foreign key inside the children table (ParentId) is always empty (null). Here are the important parts of my code... Parent public class Parent { public virtual int Id { get; set; } public virtual IList<Child> Children { get; set; } } <class name="Parent"> <id name="Id"> <generator class="native"/> </id> <bag name="Children" cascade="all"> <key column="ParentId"/> <one-to-many class="Child"/> </bag> </class> Child public class Child { public virtual int Id { get; set; } } <class name="Child"> <id name="Id"> <generator class="native"/> </id> </class> Program using (ISession session = sessionFactory.OpenSession()) { session.Save( new Parent() { Children = new List<Child>() { new Child(), new Child() } }); } Any ideas what I did wrong?

    Read the article

  • What if a large number of objects are passed to my SwingWorker.process() method?

    - by Trejkaz
    I just found an interesting situation. Suppose you have some SwingWorker (I've made this one vaguely reminiscent of my own): public class AddressTreeBuildingWorker extends SwingWorker<Void, NodePair> { private DefaultTreeModel model; public AddressTreeBuildingWorker(DefaultTreeModel model) { this.model = model; } @Override protected Void doInBackground() { // Omitted; performs variable processing to build a tree of address nodes. } @Override protected void process(List<NodePair> chunks) { for (NodePair pair : chunks) { // Actually the real thing inserts in order. model.insertNodeInto(pair.child, pair.parent, pair.parent.getChildCount()); } } private static class NodePair { private final DefaultMutableTreeNode parent; private final DefaultMutableTreeNode child; private NodePair(DefaultMutableTreeNode parent, DefaultMutableTreeNode child) { this.parent = parent; this.child = child; } } } If the work done in the background is significant then things work well - process() is called with relatively small lists of objects and everything is happy. The problem is, if the work done in the background is suddenly insignificant for whatever reason, process() receives a huge list of objects (I have seen 1,000,000, for instance), and by the time you have processed each object, you have spent 20 seconds on the Event Dispatch Thread - exactly what SwingWorker was designed to avoid. In case it isn't clear, both of these occur on the same SwingWorker class for me - it depends on the input data and the type of processing the caller wanted. Is there a proper way to handle this? Obviously I can intentionally delay or yield the background processing thread so that a smaller number might arrive each time, but this doesn't feel like the right solution to me.

    Read the article

  • Maven POM: how to insist property is not overridden

    - by Joe Thomas
    I have a parent POM that uses a gmaven script to do some stuff: <plugin> <groupId>org.codehaus.gmaven</groupId> <artifactId>gmaven-plugin</artifactId> <version>1.4</version> <configuration combine.children="override"> <providerSelection>2.0</providerSelection> <scriptPath>${basedir}/build/groovy</scriptPath> </configuration> <executions> <execution> <id>groovy-properties-script</id> <phase>validate</phase> <goals> <goal>execute</goal> </goals> <configuration> <source>computeProperties.groovy</source> </configuration> </execution> <!-- ... --> All of the children are supposed to run this script as well, but they try to resolve the scriptpath based on their OWN basedir. Usually this is exactly what you want with properties, but here it doesn't work, and I can't figure out any way around it.

    Read the article

  • Visual C++ overrides/mock objects for unit testing?

    - by Mark
    When I'm running unit tests, I want to be able to "stub out" or create a mock object, but I'm running into DLL Hell. For example: There are two DLL libraries built: A.dll and B.dll -- Classes in A.dll have calls to classes in B.dll, so when A.dll was built, the link line was using B.lib for the definitions. My test driver (Foo.exe) is testing classes in A.dll, so it links against A.lib. However, I want to "stub out" some of the calls A.dll makes to B.dll with simple versions (return a basic value, no DB look-up, etc). I can't build an Override.dll that just overrides the needed methods (not entire classes) and replace B.dll because Foo.exe will A) complain that B.dll is missing if I just remove it and put Override.dll in its place, or B) if I rename Override.dll to B.dll, Foo.exe complains that there are unresolved symbols because Override.dll is not a complete implementation of B.dll. Is there a way to do this? Is there a way to statically link Foo.exe with A.lib, B.lib and Override.lib such that it will work without having to completely rebuild A.lib and B.lib to remove the __declspec(dllexport)? Is there another option?

    Read the article

  • ObjectiveC - Releasing objects added as parameters

    - by NobleK
    Ok, here goes. Being a Java developer, I'm still struggling with the memory management in Objective-C. I have all the basics covered, but once in a while I encounter a challenge. What I want to do is something which in Java would look like this: MyObject myObject = new MyObject(new MyParameterObject()); The constructor of the MyObject class takes a parameter of type MyParameterObject which I initiate on the fly. In Objective-C I tried to do this using the following code: MyObject *myObject = [[MyObject alloc] init:[[MyParameterObject alloc] init]]; However, running the Build and Analyze tool, this gives me a "Potential leak of an object" warning for the MyParameter object, which indeed occurs when I test it using Instruments. I do understand why this happens, since I am taking ownership of the object with the alloc method and not relinquishing it; I just don't know the correct way of doing it. I tried using MyObject *myObject = [[MyObject alloc] init:[[[MyParameterObject alloc] init] autorelease]]; but then the Analyze tool told me that "Object sent -autorelease too many times". I could solve the issue by modifying the init method of MyParameterObject to say return [self autorelease]; instead of just return self;. Analyze still warns about a potential leak, but it doesn't actually occur. However, I believe that this approach violates the convention for managing memory in Objective-C and I really want to do it the right way. Thanx in advance.

    Read the article

  • Route WCF ServiceHost to another computer

    - by I2nfo
    Good day, I'm not a guru when it comes to WCF, but I do know the basics. My question is: how do I create a ServiceHost on machine X while the code is on machine Y? If I build and run this code on my dev machine (localhost): servicehost = new ServiceHost(typeof(MyService1)); servicehost.AddServiceEndpoint(typeof(IMyService1), new NetTcpBinding(),"net.tcp://my.datacenter.com/MyApp/MyService1"); //This is normally set to localhost. What implementation must be done on the datacenter server, so that if I had to point to http://my.datacenter.com/MyApp/MyService1 , it will route the service operation to my dev machine (localhost)? However, the datacenter should not be accessible via the internet. It is a possible infrastructure that we are researching to see if we can create a service-bus-type architecture, so that all our customers can invoke other customers' services running on their respective machines just by calling our datacenter URL. We have looked at Windows Azure, but we have our own datacenter infrastructure that we wish to leverage. Come to think of it, we are kind of building our own Azure, on a very, very basic scale. How does one go about creating this? Thanks in advance

    Read the article

  • jboss cache as hibernate 2nd level - cluster node doesn't persist replicated data

    - by Sergey Grashchenko
    I'm trying to build an architecture basically described in the user guide http://www.jboss.org/file-access/default/members/jbosscache/freezone/docs/3.2.1.GA/userguide_en/html/cache_loaders.html#d0e3090 (replicated caches with each cache having its own store), but with JBoss Cache configured as the Hibernate second-level cache. I've read the manual for several days and played with the settings, but could not achieve the result - the data in memory (JBoss Cache) gets replicated across the hosts, but it's not persisted in the datasource/database of the target (not original) cluster host. I hoped that a node might be persisted at eviction, so I created a cache listener and attached it to the @NodeEvicted event. I found that though I could adjust the eviction policy to fully control it, no persistence takes place. Then I thought I could try to modify the CacheLoader to set "passivate" to true, but I found that in my case (Hibernate 2nd level cache) I don't have a way to access a loader. I wonder if replicated data persistence is possible at all by configuration tuning? If not, will it work for me to create some manual persistence in the CacheListener (I could check whether the eviction event is local, and if not, persist it to the Hibernate datasource somehow)? I've used the mvcc-entity configuration with one modification: cacheMode set to REPL_ASYNC. I've also played with the eviction policy configuration. The last thing to mention is that I've tested entity persistence and replication in a project that was generated with Seam. I guess it's not important though.

    Read the article

  • Flex : providing data with a PHP Class

    - by Tristan
    Hello, I'm a very new user of Flex (I have never used Flex, Flash Builder, or ActionScript before), but I want to learn this language because of the beautiful RIAs and charts it can do. I watched the video on Adobe: "1 hour to build your first program", but I'm stuck. In the video it says that we have to provide a PHP class for accessing data, and I used the example that Flash Builder gave (with Zend Framework and mysqli). I have never used those and it's a lot to learn if I count Zend + mysqli. My question is: can I use a PHP class like this one? What does Flash Builder expect in return? I heard that was automatic. Example (it may be wrong, I'm not very familiar with classes when accessing a database): <?php class DBConnection { protected $server = "localhost"; protected $username = "root"; protected $password = "root"; protected $dbname = "something"; protected $connection; function __construct() { $this->connection = mysql_connect($this->server, $this->username, $this->password); mysql_select_db($this->dbname,$this->connection); mysql_query("SET NAMES 'utf8'", $this->connection); } function query($query) { $result = mysql_query($query, $this->connection); if (!$result) { echo 'request error ' . mysql_error($this->connection); exit; } return $result; } function getAll() { $req = "select * from servers"; $result = $this->query($req); return $result; } function num_rows($result) { return mysql_num_rows($result); } function end() { mysql_close($this->connection); } } ?> Thank you,

    Read the article
