Search Results

Search found 9035 results on 362 pages for 'common misunderstandings'.


  • Java conditional compilation: how to prevent code chunks from being compiled?

    - by khachik
    My project requires Java 1.6 for compilation and running. Now I have a requirement (from the marketing side) to make it work with Java 1.5. I want to replace a method body (the return type and arguments remain the same) so that it compiles with Java 1.5 without errors. Details: I have a utility class called OS which encapsulates all OS-specific things. It has a method public static void openFile(java.io.File file) throws java.io.IOException { // open the file using java.awt.Desktop ... } to open files as if they were double-clicked (the Windows start command or the Mac OS X open command equivalent). Since it cannot be compiled with Java 1.5, I want to exclude it during compilation and replace it with another method which calls rundll32 for Windows or open for Mac OS X using Runtime.exec. Question: How can I do that? Can annotations help here? Note: I use ant, and I could make two java files OS4J5.java and OS4J6.java which would contain the OS class with the desired code for Java 1.5 and 1.6 and copy one of them to OS.java before compiling (or, an ugly way, replace the content of OS.java conditionally depending on the java version), but I don't want to do that if there is another way. Elaborating more: in C I could use ifdef/ifndef, in Python there is no compilation and I could check a feature using hasattr or something else, in Common Lisp I could use #+feature. Is there something similar for Java? Found this post but it doesn't seem to be helpful. Any help is greatly appreciated. kh.
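    A minimal sketch of the runtime-detection alternative hinted at above: compile against Java 1.5 and touch java.awt.Desktop only via reflection, falling back to Runtime.exec when the class is absent. This is an illustration under assumptions, not the poster's code; the class name FileOpener and the exact rundll32 arguments are invented for the example.
        import java.io.File;
        import java.io.IOException;

        // Illustrative helper: compiles under Java 1.5 because java.awt.Desktop is
        // only referenced by reflection; Java 6 gets Desktop.open, Java 5 gets exec.
        public final class FileOpener {

            public static void openFile(File file) throws IOException {
                try {
                    Class<?> desktopClass = Class.forName("java.awt.Desktop");
                    Object desktop = desktopClass.getMethod("getDesktop").invoke(null);
                    desktopClass.getMethod("open", File.class).invoke(desktop, file);
                } catch (Exception desktopUnavailable) {
                    // Java 5 fallback via Runtime.exec (commands assumed, adjust per platform).
                    String os = System.getProperty("os.name").toLowerCase();
                    if (os.contains("windows")) {
                        Runtime.getRuntime().exec(new String[] {
                                "rundll32", "url.dll,FileProtocolHandler", file.getAbsolutePath()});
                    } else if (os.contains("mac")) {
                        Runtime.getRuntime().exec(new String[] {"open", file.getAbsolutePath()});
                    } else {
                        throw new IOException("No known way to open files on " + os);
                    }
                }
            }
        }
    The ant build then ships a single source tree; the version check happens at runtime instead of compile time, which is usually simpler than swapping OS4J5.java / OS4J6.java source files.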

    Read the article

  • Some questions about dotnetopenauth

    - by chobo2
    Hi, I have a couple of outstanding questions, mainly regarding Twitter and Facebook. In the FacebookGraph class there are properties such as Id, name, etc. I am wondering how do I add to this list? For example, what happens if I want a user's hometown? I tried to add a property called hometown but it is always null. What should I store for lookup later in my db to grab their data and to see if they have an account with my site: their id (1418) or the whole url (http://www.facebook.com/profile.php?id=1418)? Is it actually a good idea to use this id, as it seems to be common knowledge? Can't someone just find the profile id or whatever and make a fake request against my site? How do you set up dotnetopenauth to deal with the case when a user goes to Facebook and deletes access to my website? I know you can send a deauthorization code to your site and then delete their account, but I don't know how to do that through dotnetopenauth. Twitter: Is it possible to do the deauthorization handling above with Twitter? Ajax: Is it possible to make the openid stuff ajax? I don't see a sample anywhere in the dotnetopenauth samples.

    Read the article

  • Why won't WPF databindings show text when ToString() has a collaborating object?

    - by Jay
    In a simple form, I bind to a number of different objects: some go in listboxes; some in textblocks. A couple of these objects have collaborating objects that the ToString() method calls on when doing its work, typically a formatter of some kind. When I step through the code I see that when the databinding is being set up, ToString() is called; the collaborating object is not null and returns the expected result when inspected in the debugger; the objects return the expected result from ToString(); BUT the text does not show up in the form. The only common thread I see is that these use a collaborating object, whereas the other bindings that show up as expected simply work from properties and methods of the containing object. If this is confusing, here is the gist in code:
        public class ThisThingWorks {
            private SomeObject some_object;
            public ThisThingWorks(SomeObject s) {
                some_object = s;
            }
            public override string ToString() {
                return some_object.name;
            }
        }
        public class ThisDoesntWork {
            private Formatter formatter;
            private SomeObject some_object;
            public ThisDoesntWork(SomeObject o, Formatter f) {
                formatter = f;
                some_object = o;
            }
            public override string ToString() {
                return formatter.Format(some_object.name);
            }
        }
    Again, let me reiterate: the ToString() method works in every other context, but when I bind to the object in WPF and expect it to display the result of ToString(), I get nothing. Update: The issue seems to be what I see as buggy behaviour in the TextBlock binding. If I bind the Text property to a property of the DataContext that is declared as an interface type, ToString() is never called. If I change the property declaration to an implementation of the interface, it works as expected. Other controls, like Label, work fine when binding the Content property to a DataContext property declared as either the implementation or the interface. Because this is so far removed from the title and content of this question, I've created a new question here: http://stackoverflow.com/questions/2917878/why-doesnt-textblock-databinding-call-tostring-on-a-property-whose-compile-tim

    Read the article

  • How to DRY on CRUD parts of my Rails app?

    - by kolrie
    I am writing an app which, similarly to many apps out there, is 90% regular CRUD things and 10% "juice", where we need nasty business logic and more flexibility and customization. Regarding this 90%, I was trying to stick to the DRY principle as much as I can. As far as controllers go, I have found resource_controller to really work, and I could get rid of all the controllers in that area, replacing them with a generic one. Now I'd like to know how to do the same with the views. In this app I have an overall application.html.erb layout, and then I need another layout layer, common to all CRUD views, and finally a "core" part: On index.html.erb all I need is to generate a simple table with the fields and labels I indicate. For new and edit, likewise generic form editing, indicating labels and fields (with the possibility of providing custom fields if needed). I am not sure I will need show, but if I do it would be the same as new and edit. What plugins and tools (or even articles and general pointers) would help me get that done? Thanks, Felipe.

    Read the article

  • What does it mean for an OS to "execute within user processes"? Do any modern OSes use that approach?

    - by Chris Cooper
    I have recently become interested in operating systems, and a friend of mine lent me a book called Operating Systems: Internals and Design Principles (I have the third edition), published in 1998. It's been a very interesting book so far, but I have come to the part dealing with process control, and it uses UNIX System V as one of its examples of an operating system that executes within user processes. This concept has struck me as a little strange. First of all, does this mean that OS instructions and data are stored in each of the user processes? Probably not, because that would be an absurdly redundant scheme. But if not, then what does it mean to "execute within" a user process? Do any modern operating systems use this approach? It seems much more logical to have the operating system execute as its own process, or even independently of all processes if you're short on memory. All the inter-accessibility of process data required for this layout seems to greatly complicate things. (But maybe that's just because I don't quite get the concept ;D) Here is what the book says: "Execution within User Processes: An alternative that is common with operating systems on smaller machines is to execute virtually all operating system software in the context of a user process. ... "

    Read the article

  • In mysql, is "explain ..." always safe?

    - by tye
    If I allow a group of users to submit "explain $whatever" to mysql (via Perl's DBI using DBD::mysql), is there anything that a user could put into $whatever that would make any database changes, leak non-trivial information, or even cause significant database load? If so, how? I know that via "explain $whatever" one can figure out what tables / columns exist (you have to guess names, though) and roughly how many records are in a table or how many records have a particular value for an indexed field. I don't expect one to be able to get any information about the contents of unindexed fields. DBD::mysql should not allow multiple statements, so I don't expect it to be possible to run any query (just explain one query). Even subqueries should not be executed, just explained. But I'm not a mysql expert and there are surely features of mysql that I'm not even aware of. In trying to come up with a query plan, might the optimizer actually execute an expression in order to come up with the value that an indexed field is going to be compared against? Consider explain select * from atable where class = somefunction(...), where atable.class is indexed and not unique, class='unused' would find no records, and class='common' would find a million records. Might 'explain' evaluate somefunction(...)? And then could somefunction(...) be written such that it modifies data?

    Read the article

  • Running unittest with typical test directory structure.

    - by Major Major
    The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:
        new_project/
            antigravity/
                antigravity.py
            test/
                test_antigravity.py
            setup.py
            etc.
    for example see this Python project howto. My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail because the module is not on the path. I know I could modify PYTHONPATH and use other search-path-related tricks, but I can't believe that's the simplest way - it's fine if you're the developer, but not realistic to expect your users to use it if they just want to check that the tests are passing. The other alternative is just to copy the test file into the other directory, but that seems a bit dumb and misses the point of having them in a separate directory to start with. So, if you had just downloaded the source to my new project, how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests do X."

    Read the article

  • Findbugs and comparing

    - by Rob Goodwin
    I recently started using the findbugs static analysis tool in a java build I was doing. The first report came back with loads of High Priority warnings. Being the obsessive type of person, I was ready to go knock them all out. However, I must be missing something. I get most of the warnings when comparing things, such as in the following code:
        public void setSpacesPerLevel(int value) {
            if( value >= 0) {
            ...
    which produces a high priority warning at the if statement that reads:
        File: Indenter.java, Line: 60, Type: BIT_AND_ZZ, Priority: High, Category: CORRECTNESS
        Check to see if ((...) & 0) == 0 in sample.Indenter.setSpacesPerLevel(int)
    I am comparing an int to an int, which seems like a common thing. I get quite a few of that type of error with similar simple comparisons. I have a lot of other high priority warnings on what appear to be simple code blocks. Am I missing something here? I realize that static analysis can produce false positives, but the errors I am seeing seem too trivial a case to be a false positive. This one has me scratching my head as well:
        for(int spaces = 0;spaces < spacesPerLevel;spaces++){...
    which gives the following findbugs warning:
        File: Indenter.java, Line: 160, Type: IL_INFINITE_LOOP, Priority: High, Category: CORRECTNESS
        There is an apparent infinite loop in sample.Indenter.indent()
        This loop doesn't seem to have a way to terminate (other than by perhaps throwing an exception).
    Any ideas? So basically I have a handful of files and 50-60 high priority warnings similar to the ones above. I am using findbugs 1.3.9 and calling it from the findbugs ant task.

    Read the article

  • MVVM and avoiding Monolithic God object

    - by bufferz
    I am in the completion stage of a large project that has several large components: image acquisition, image processing, data storage, factory I/O (automation project) and several others. Each of these components is reasonably independent, but for the project to run as a whole I need at least one instance of each component. Each component also has a ViewModel and View (WPF) for monitoring status and changing things. My question is: what is the safest, most efficient, and most maintainable method of instantiating all of these objects, subscribing one class to an Event in another, and having a common ViewModel and View for all of this? Would it be best if I have a class called God that has a private instance of all of these objects? I've done this in the past and regretted it. Or would it be better if God relied on Singleton instances of these objects to get the ball rolling? Alternatively, should Program.cs (or wherever Main(...) is) instantiate all of these components, and pass them to God as parameters and then let Him (snicker) and His ViewModel deal with the particulars of running this project? Any other suggestions I would love to hear. Thank you!

    Read the article

  • How I May Have Taken A Wrong Path in Programming

    - by Ygam
    I am at a major stumbling point right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I observed that I have the following attitudes in programming:
    - I tend to be more of a purist, scorning inelegant approaches to solving problems using code
    - I tend to look at anything on a large scale, planning everything before I start coding, either in simple flowcharts or complex UML charts
    - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development times
    - I am obsessed with good directory structures, file naming conventions, and class, method, and variable naming conventions
    - I tend to always want to study something new, even, as I said, at the cost of missing deadlines
    - I tend to see software development as something to engineer, to architect; that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling), i.e. the OOP way of thinking
    - I tend to combine OOP and procedural coding whenever I see fit
    - I want my code to execute fast (thus the elegant approaches and refactoring)
    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they started programming in our first year of college). By the other way around I mean: they fire up coding and get the job done much faster, because they don't have to really look at how clean their code is or how elegant their algorithms are; they don't bother with OOP however big their projects are; they mostly use web APIs, piece them together and voila! Working code! Clients are happy, they get paid fast, at the expense of really unmaintainable or hard-to-read code that lacks structure and conventions, or slow execution of certain actions (the common reasoning against this being that internet connections are much faster these days and hardware is more powerful). The excuse I often receive is that clients don't care about how you write the code, but they do care about how long it takes you to deliver it. If it works then all is good. Now, might my "purist" approach to programming have been the wrong way to start programming? Should I just dump these purist concepts and just code the hell up, because I have seen it: clients don't really care how beautifully coded it is?

    Read the article

  • cocoa/c++ relative path to load resources

    - by moka
    Hi, I am currently working directly with cocoa for the first time, to build a screen saver. Now I have come across a problem when trying to load resources from within the .saver bundle. I basically have a small c++ wrapper class to load .exr files using freeImage. That works as long as I use absolute paths, but that's not very useful, is it? So basically I tried everything: putting the .exr file on the level of the .saver bundle itself, inside the bundle's Resources folder, and so on. Then I simply tried to load the .exr like this, without success:
        particleTex = [self loadExrTexture: "ball.exr"];
    I also tried making it go to the .saver bundle's location like this:
        particleTex = [self loadExrTexture: "../../../ball.exr"];
    to maybe load the .exr from that location, but without success. I then came across this:
        NSString * path = [[NSBundle mainBundle] pathForResource:@"ball" ofType:@"exr"];
        const char * pChar = [path UTF8String];
    which seems to be a common way to find resources in cocoa, but for some reason it's empty in my case. Any ideas about that? I really tried everything that came to my mind without success, so I would be glad about some input!

    Read the article

  • NHibernate + Fluent long startup time

    - by PaRa
    Hi all, I am new to NHibernate. The test below took 11.2 seconds (debug mode); I am seeing this large startup time in all my tests (basically creating the first session takes a ton of time). Setup = Windows 2003 SP2 / Oracle10gR2 latest CPU / ODP.net 2.111.7.20 / FNH 1.0.0.636 / NHibernate 2.1.2.4000 / NUnit 2.5.2.9222 / VS2008 SP1
        using System;
        using System.Collections;
        using System.Data;
        using System.Globalization;
        using System.IO;
        using System.Text;
        using NUnit.Framework;
        using System.Collections.Generic;
        using System.Data.Common;
        using NHibernate;
        using log4net.Config;
        using System.Configuration;
        using FluentNHibernate;

        [Test()]
        public void GetEmailById()
        {
            Email result;
            using (EmailRepository repository = new EmailRepository())
            {
                result = repository.GetById(1111);
            }
            Assert.IsTrue(result != null);
        }

        public class EmailRepository : RepositoryBase<Email>
        {
            public EmailRepository() : base() { }
        }
    In my RepositoryBase:
        public T GetById(object id)
        {
            using (var session = sessionFactory.OpenSession())
            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    T returnVal = session.Get<T>(id);
                    transaction.Commit();
                    return returnVal;
                }
                catch (HibernateException ex)
                {
                    // Logging here
                    transaction.Rollback();
                    return null;
                }
            }
        }
    The query time is very small. The resulting entity is really small. Subsequent queries are fine. It seems to be getting the first session started that costs so much. Has anyone else seen something similar?

    Read the article

  • What Test Environment Setup do Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate and versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to:
    - Type one command to run the test suite for ANY project I find on Github.
    - Run autotest for ANY Github project so I can fork and make TESTABLE contributions.
    - Build gems from the ground up with Autotest and Shoulda.
    For one reason or another, I hardly ever run tests for projects I clone from Github. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are: What is your, the SO reader and Github project committer, test environment setup using autotest, so that whenever you git clone a gem, you can run the tests and autotest-develop them if desired? What are the guys who are writing the Paperclip tests and Authlogic tests doing? What is their setup? Thanks for the insight. Looking for answers that will make me a more effective tester.

    Read the article

  • Distributed Cache with Serialized File as DataStore in Oracle Coherence

    - by user226295
    Weird, but I am investigating Oracle Coherence as a substitute for a distributed cache. My primary problem is that we don't have a distributed cache as such in our app as of now. That's my major concern, and that's what I want to implement. So, let's say I take a machine and start a new (3rd) reading process; it will be able to connect to the cache, listen to it, and hold a full copy of the cache, triplicated (as of now it's duplicated). Now that's a waste from a common person's standpoint too. The size of the cache is 2 GB, and without going distributed it's limiting us. That brings me to Coherence. But we don't have a database as a persistent store either; we have archival processes as our persistent store (90 days' worth of data). OK, now multiply that by somewhere around 2 GB * 90 (that's the bare minimum we want to keep). During a preliminary/intermediate analysis of Coherence as a solution, a (supposedly) brilliant thought crossed my mind: why not have this as persistent storage with my distributed cache? Does Oracle Coherence support that? I would get rid of the archiving infrastructure too (I hate daemon archiving processes). For some strange reason, I don't want to go to the DB to replace those flat files. What say you? Can Coherence be my savior? Any other stable alternatives too? (Coherence is imposed on me by the big guys, FYI)

    Read the article

  • DefaultTableCellRenderer getTableCellRendererComponent never gets called

    - by Dean Schulze
    I need to render a java.util.Date in a JTable. I've implemented a custom renderer that extends DefaultTableCellRenderer (below). I've set it as the renderer for the column, but the method getTableCellRendererComponent() never gets called. This is a pretty common problem, but none of the solutions I've seen work.
        public class DateCellRenderer extends DefaultTableCellRenderer {
            String sdfStr = "yyyy-MM-dd HH:mm:ss.SSS";
            SimpleDateFormat sdf = new SimpleDateFormat(sdfStr);
            public Component getTableCellRendererComponent(JTable table, Object value,
                    boolean isSelected, boolean hasFocus, int row, int column) {
                super.getTableCellRendererComponent(table, value, isSelected, hasFocus, row, column);
                if (value instanceof Date) {
                    this.setText(sdf.format((Date) value));
                } else
                    logger.info("class: " + value.getClass().getCanonicalName());
                return this;
            }
        }
    I've installed the custom renderer (and verified that it has been installed) like this:
        DateCellRenderer dcr = new DateCellRenderer();
        table.getColumnModel().getColumn(2).setCellRenderer(dcr);
    I've also tried:
        table.setDefaultRenderer( java.util.Date.class, dcr );
    Any idea why the renderer gets installed, but never called?
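    For what it's worth, a frequent reason getTableCellRendererComponent() is never reached is that the TableModel still reports Object.class for the column, so the per-class lookup behind setDefaultRenderer(Date.class, ...) never matches (and a rebuilt column model can likewise discard a per-column renderer). The sketch below only illustrates that interaction; DateColumnDemo and its column layout are invented, and this may not be the poster's actual cause.
        import java.util.Date;
        import javax.swing.JTable;
        import javax.swing.table.DefaultTableModel;

        public class DateColumnDemo {
            public static JTable buildTable() {
                // DefaultTableModel answers Object.class unless getColumnClass is overridden;
                // without the override, a Date-specific default renderer is never consulted.
                DefaultTableModel model = new DefaultTableModel(
                        new Object[] {"Name", "Count", "Modified"}, 0) {
                    @Override
                    public Class<?> getColumnClass(int columnIndex) {
                        return columnIndex == 2 ? Date.class : Object.class;
                    }
                };
                model.addRow(new Object[] {"example", Integer.valueOf(1), new Date()});

                JTable table = new JTable(model);
                // With the column class reported correctly, this renderer is actually used.
                table.setDefaultRenderer(Date.class, new DateCellRenderer());
                return table;
            }
        }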

    Read the article

  • Need advice on unit testing using mock objects

    - by Andree
    Hi there, I just recently read about "mocking objects" for unit testing, and currently I'm having difficulties implementing this approach in my application. Please let me explain my problem. I have a User model class which is dependent on 2 data sources (a database and the Facebook web service). The controller class simply uses this User model as an interface to access data and doesn't care about where the data came from. Currently I have never unit tested this User model because it is dependent on an external web service. But just a while ago I read about object mocking, and now I know that it is a common approach to unit test a class that depends on external resources (like in my case). Now I want to create a unit test for the User model, but I have encountered a design issue: in order for the User model to use a mocked Facebook SDK, I have to inject this mocked Facebook SDK into the User object (probably using a setter). Therefore I can't construct the Facebook SDK inside the User object; I have to construct it outside the User object and inject the SDK into it. The real client of my User model is the application's controller, so I would have to construct the Facebook SDK inside the controller and inject it into the User object. Well, this is a problem, because I want my controller to be as clean as possible. I want my controller to be ignorant about the application's data sources. I'm not good at explaining things systematically, so you'll probably be asleep before reading this last paragraph. But anyway, I want to ask if anyone here has ever encountered the same problem as mine. How do you solve this problem? Regards, Andree
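    A hedged sketch of the usual way out of this bind: the User model depends on a small interface instead of the concrete SDK, the test passes in a stub (or a mock from a mocking library), and a single composition root at application start-up decides what gets handed to User, so the controller itself never knows which data source is behind it. FacebookGateway, User.hometownOf and StubFacebookGateway are invented names for illustration, written in Java although the poster's code may well be in another language.
        // Narrow interface owned by the model layer; the real SDK is wrapped behind it.
        interface FacebookGateway {
            String fetchHometown(String facebookId);
        }

        class User {
            private final FacebookGateway facebook;

            User(FacebookGateway facebook) {   // injected, never constructed inside User
                this.facebook = facebook;
            }

            String hometownOf(String facebookId) {
                return facebook.fetchHometown(facebookId);
            }
        }

        // In a unit test, a hand-rolled stub (or a generated mock) replaces the web service.
        class StubFacebookGateway implements FacebookGateway {
            public String fetchHometown(String facebookId) {
                return "Testville";            // canned answer, no network call
            }
        }
    Construction of the real gateway then lives in one factory or dependency-injection spot rather than in the controllers, which keeps the controllers ignorant of the data sources while still letting tests swap in mocks.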

    Read the article

  • Splitting a set of objects into several subsets of 'similar' objects

    - by doublep
    Suppose I have a set of objects, S. There is an algorithm f that, given a set S, builds a certain data structure D on it: f(S) = D. If S is large and/or contains vastly different objects, D becomes large, to the point of being unusable (i.e. not fitting in the allotted memory). To overcome this, I split S into several non-intersecting subsets: S = S1 + S2 + ... + Sn and build Di for each subset. Using n structures is less efficient than using one, but at least this way I can fit into the memory constraints. Since the size of f(S) grows faster than S itself, the combined size of the Di is much less than the size of D. However, it is still desirable to reduce n, i.e. the number of subsets, or to reduce the combined size of the Di. For this, I need to split S in such a way that each Si contains "similar" objects, because then f will produce a smaller output structure if the input objects are "similar enough" to each other. The problem is that while "similarity" of objects in S and the size of f(S) do correlate, there is no way to compute the latter other than just evaluating f(S), and f is not quite fast. The algorithm I currently have is to iteratively add each next object from S into one of the Si, so that this results in the least possible (at this stage) increase in the combined Di size:
        for x in S:
            i = such i that size(f(Si + {x})) - size(f(Si)) is min
            Si = Si + {x}
    This gives practically useful results, but is certainly pretty far from the optimum (i.e. the minimal possible combined size). It is also slow. To speed things up somewhat, I compute size(f(Si + {x})) - size(f(Si)) only for those i where x is "similar enough" to the objects already in Si. Is there any standard approach to such kinds of problems? I know of the branch and bound algorithm family, but it cannot be applied here because it would be prohibitively slow. My guess is that it is simply not possible to compute the optimal distribution of S into the Si in reasonable time. But is there some common iteratively improving algorithm?
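    A sketch in Java of the greedy loop described above, mainly to make the pruning idea concrete. sizeOf() stands in for evaluating size(f(Si)) and similarEnough() for the domain-specific similarity test; both, along with the per-subset size cap, are assumptions of the sketch rather than part of the original problem statement.
        import java.util.ArrayList;
        import java.util.List;

        final class GreedyPartitioner<T> {

            interface CostModel<T> {
                long sizeOf(List<T> subset);                       // proxy for size(f(Si))
                boolean similarEnough(List<T> subset, T candidate);
            }

            List<List<T>> partition(List<T> objects, CostModel<T> cost, long maxSubsetSize) {
                List<List<T>> subsets = new ArrayList<List<T>>();
                for (T x : objects) {
                    List<T> best = null;
                    long bestIncrease = Long.MAX_VALUE;
                    for (List<T> subset : subsets) {
                        if (!cost.similarEnough(subset, x)) continue;  // skip expensive evaluations
                        long before = cost.sizeOf(subset);
                        subset.add(x);                                 // trial insertion
                        long after = cost.sizeOf(subset);
                        subset.remove(subset.size() - 1);              // undo the trial
                        if (after <= maxSubsetSize && after - before < bestIncrease) {
                            bestIncrease = after - before;
                            best = subset;
                        }
                    }
                    if (best == null) {                                // nothing similar, or everything full
                        best = new ArrayList<T>();
                        subsets.add(best);
                    }
                    best.add(x);
                }
                return subsets;
            }
        }
    As the question notes, this stays a heuristic; caching each subset's last computed size and re-evaluating only the subset that actually changed keeps the number of calls to f roughly proportional to |S| times the number of subsets considered per object.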

    Read the article

  • C++/Win32 : XP Visual Styles - no controls are showing up?

    - by mrl33t
    Okay, so I'm pretty new to C++ and the Windows API, and I'm just writing a small application. I wanted my application to make use of visual styles in XP, Vista and Windows 7, so I added this line to the top of my code:
        #pragma comment(linker,"\"/manifestdependency:type='win32' name='Microsoft.Windows.Common-Controls' version='6.0.0.0' processorArchitecture='*' publicKeyToken='6595b64144ccf1df' language='*'\"")
    It seemed to work perfectly on my Windows 7 machine and also on a Vista machine. But when I tried the application on XP, it wouldn't load any controls (e.g. buttons, labels etc.) - not even messageboxes would display. This image shows a small test application which I've just put together to demonstrate what I'm trying to explain: http://img704.imageshack.us/img704/2250/myapp.png In this test application I'm not using any particularly fancy or complicated code. I've effectively just taken the most basic sample code from the MSDN Library (http://msdn.microsoft.com/en-us/library/ff381409.aspx) and added a section to the WM_CREATE message handler to create a button:
        MyBtn = CreateWindow(L"Button", L"My Button", BS_PUSHBUTTON | WS_CHILD | WS_VISIBLE, 25, 25, 100, 30, hWnd, NULL, hInst, 0);
    But I just can't figure out what's going on and why it's not working. Any ideas guys? Thank you in advance. (By the way, the application works in XP if I remove the manifest section from the top - obviously without visual styles though. I should also probably mention that the app was built using Visual C++ 2010 Express on a Windows 7 machine - if that makes a difference?)

    Read the article

  • What is the best / proper idiom in django for modifying a field during a .save() where you need to o

    - by MDBGuy
    Hi, say I've got:
        class LogModel(models.Model):
            message = models.CharField(max_length=512)

        class Assignment(models.Model):
            someperson = models.ForeignKey(SomeOtherModel)

            def save(self, *args, **kwargs):
                super(Assignment, self).save()
                old_person = # ?????
                LogModel(message="%s is no longer assigned to %s" % (old_person, self)).save()
                LogModel(message="%s is now assigned to %s" % (self.someperson, self)).save()
    My goal is to save to LogModel some messages about who the Assignment was assigned to. Notice that I need to know the old, pre-save value of this field. I have seen code that suggests, before super().save(), retrieving the instance from the database via its primary key and grabbing the old value from there. This could work, but is a bit messy. In addition, I plan to eventually split this code out of the .save() method via signals - namely pre_save() and post_save(). Trying to use the above logic (retrieve from the db in pre_save, make the log entry in post_save) seemingly fails here, as pre_save and post_save are two separate methods. Perhaps in pre_save I can retrieve the old value and stick it on the model as an attribute? I was wondering if there is a common idiom for this. Thanks.

    Read the article

  • How to implement Administrator rights in Java Application?

    - by Yatendra Goel
    I am developing data modeling software that is implemented in Java. This application converts textual data (stored in a database) to graphical form so that users can interpret the data more efficiently. Now, this application will be accessed by 3 kinds of people:
    1. Managers (who can fill the database with data and can also view the visual form of the data after entering it into the database)
    2. Viewers (who can only view the visual form of the data that has been filled in by managers)
    3. Administrators (who can create and manage other administrators, managers and viewers)
    Now, how do I implement 3 different views of the same application? Note: Managers, Viewers and Administrators can be located in any part of the world and should access the application through the internet. One idea that came to my mind is as follows:
    Step 1: Code all the business logic in EJBs so that it can be used in a distributed environment (meaning it can be accessed by several users through the internet).
    Step 2: Code 3 Swing GUI clients: one for administrators, one for managers and one for viewers. These 3 GUI clients can access the business logic written in the EJBs.
    Step 3: Distribute the clients to their corresponding users. For instance, the manager client to managers.
    Questions:
    Q1. Is the above approach correct?
    Q2. This is very common functionality that various software products have. So, do they implement this kind of functionality this way or in some other way?
    Q3. If another approach would be better, what is that approach?
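    Q3 usually comes down to role-based access control: many teams ship one client that adapts to the authenticated user's role, while the EJB layer re-checks the role on every call so a modified client cannot gain extra rights. The sketch below is only an illustration of that idea; Role, Permission and AccessPolicy are invented names, not part of any framework.
        import java.util.EnumSet;
        import java.util.Set;

        enum Role { VIEWER, MANAGER, ADMINISTRATOR }

        enum Permission { VIEW_DIAGRAMS, EDIT_DATA, MANAGE_USERS }

        final class AccessPolicy {
            static Set<Permission> permissionsFor(Role role) {
                switch (role) {
                    case ADMINISTRATOR:
                        return EnumSet.allOf(Permission.class);
                    case MANAGER:
                        return EnumSet.of(Permission.VIEW_DIAGRAMS, Permission.EDIT_DATA);
                    default:
                        return EnumSet.of(Permission.VIEW_DIAGRAMS);
                }
            }

            // Called at the start of every sensitive server-side (EJB) method; the GUI merely
            // hides or disables the same features for a friendlier experience.
            static void require(Role role, Permission permission) {
                if (!permissionsFor(role).contains(permission)) {
                    throw new SecurityException(role + " is not allowed to " + permission);
                }
            }
        }
    In a real EJB deployment the same effect is often achieved declaratively with container-managed security (for example @RolesAllowed on session bean methods); the enum version above is only meant to show the "one application, three behaviours" idea as an alternative to shipping three separate clients.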

    Read the article

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example:
    1) The user's browser requests http://www.example.com/files/file1.zip
    2) The request goes to server A, based on the DNS A record for example.com.
    3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
    4) Server A forwards the request to server B.
    5) Server B returns file1.zip directly to the user without going through server A.
    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user, as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC Address Translation and then pass the request back onto the network, and for servers outside the network of server A, tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load balancing setup. Namely, server B should have server A's IP address on the loopback interface and it should not answer any ARP requests for that IP address. My problem is how to actually achieve step 4. I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing / forwarding at the web server level, i.e. in PHP or C# / ASP.net. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything.

    Read the article

  • Where does the delete control go in my Cocoa user interface?

    - by Graham Lee
    Hi, I have a Cocoa application managing a collection of objects. The collection is presented in an NSCollectionView, with a "new object" button nearby so users can add to the collection. Of course, I know that having a "delete object" button next to that button would be dangerous, because people might accidentally knock it when they mean to create something. I don't like having "are you sure you want to..." dialogues, so I dispensed with the "delete object". There's a menu item under Edit for removing an object, and you can hit Cmd-backspace to do the same. The app supports undoing delete actions. Now I'm getting support emails ranging from "does it have to be so hard to delete things" to "why can't I delete objects?". That suggests I've made it a bit too hard, so what's the happy middle ground? I see applications from Apple that do it my way, or with the add/remove buttons next to each other, but I hate that latter option. Is there another good (and preferably common) convention for delete controls? I thought about an action menu but I don't think I have any other actions that would go in it, rendering the menu a bit thin.

    Read the article

  • In Mercurial, can I apply changes from one file to another file in the same branch?

    - by Stephen
    In the good old days of Subversion, I would sometimes derive a new file from an existing one using svn copy. Then if something changed in sections they had in common, I could still use svn merge to update the derived version. To use the example from hginit.com, say the "guac" recipe already exists, and I want to create a "superguac" that includes instructions on how to serve guacamole to 1000 raving soccer fans. Using the process I just described, I could:
        svn cp guac superguac
        svn ci -m "Created superguac by copying guac"
        (edit superguac)
        svn ci -m "Added instructions for serving 1000 raving soccer fans to superguac"
        (edit guac)
        svn ci -m "Fixed a typo in guac"
        svn merge -r3:4 guac superguac
    and thus the typo fix would be applied to superguac. Mercurial provides an hg copy command that marks a file as a copy of the original, but I'm not sure the repository structure supports a similar workflow. Here's the same example, and I carefully only edit a single file in the commit I want to use in the merge:
        hg cp guac superguac
        hg ci -m "Created superguac by copying guac"
        (edit superguac)
        hg ci -m "Added instructions for serving 1000 raving soccer fans to superguac"
        (edit guac)
        hg ci -m "Fixed a typo in guac"
    I now want to apply the change in guac to superguac. Is that possible? If so, what's the right command? Is there a different workflow in Mercurial that achieves the same results (limited to a single branch)?

    Read the article

  • Select return dynamic columns

    - by Ascalonian
    I have two tables: Standards and Service Offerings. A Standard can have multiple Service Offerings, and each Standard can have a different number of Service Offerings associated with it. What I need to be able to do is write a view that will return some common data and then list the service offerings on one line. For example:
        Standard Id | Description | SO #1 | SO #2 | SO #3 | ... | SO #21 | SO Count
        1           | One         | A     | B     | C     | ... | G      | 21
        2           | Two         | A     |       |       | ... |        | 1
        3           | Three       | B     | D     | E     | ... |        | 3
    I have no idea how to write this. The number of SO columns is capped at a specific number (21 in this case), so we cannot exceed that. Any ideas on how to approach this? A place I started is below. It just returned multiple rows for each Service Offering, when they need to be on one row.
        SELECT * FROM SERVICE_OFFERINGS
        WHERE STANDARD_KEY IN (SELECT STANDARD_KEY FROM STANDARDS)

    Read the article

  • JPA - Real primary key vs. generated ID for references

    - by Val
    I have ~10 classes, each of them, have composite key, consist of 2-4 values. 1 of the classes is a main one (let's call it "Center") and related to other as one-to-one or one-to-many. Thinking about correct way of describing this in JPA I think I need to describe all the primary keys using @Embedded / @PrimaryKey annotations. Question #1: My concern is - does it mean that on the database level I will have # of additional columns in each table referring to the "Center" equal to number of column in "Center" PK? If yes, is it possible to avoid it by using some artificial unique key for references? Could you please give an idea how real PK and the artificial one needs to be described in this case? Note: The reason why I would like to keep the real PK and not just use the unique id as PK is - my application have some data loading functionality from external data sources and sometimes they may return records which I already have in local database. If unique ID will be used as PK - for new records I won't be able to do data update, since the unique ID will not be available for just downloaded ones. At the same time it is normal case scenario for application and it just need to update of insert new records depends on if the real composite primary key matches. Question #2: All of the 10 classes have common field "date" which I described in an abstract class which each of them extends. The "date" itself is never a key, but it always a part of composite key for each class. Composite key is different for each class. To be able to use this field as a part of PK should I describe it in each class or is there any way to use it as is? I experimented with @Embedded and @PrimaryKey annotations and always got an error that eclipselink can't find field described in an abstract class. Thank you in advance! PS. I'm using latest version of eclipselink & H2 database.

    Read the article
