Search Results

Search found 13180 results on 528 pages for 'non interactive'.

  • Is this scenario in compliance with GPLv3?

    - by Sean Kinsey
    For argument's sake, say that we create a web application that depends on a GPLv3-licensed component, let's say Ext JS. Based on Section 0 of the license, the common notion is that the entire web application (the client-side JavaScript) falls under the definition of a covered work: "A 'covered work' means either the unmodified Program or a work based on the Program", and that it will therefore have to be distributed under the same license.

    Ok, so here comes the fun part. This is a short 'program' that is based on Ext JS:

        var myPanel = new Ext.Panel();

    The question that arises is: have I now violated the GPL by not including the source of Ext JS and its license? Ok, so let's take another example:

        <!doctype html>
        <html>
        <head>
          <title>my title</title>
          <script type="text/javascript" src="http://extjs.cachefly.net/ext-3.2.1/ext-all.js"></script>
          <link rel="stylesheet" type="text/css" href="http://extjs.cachefly.net/ext-3.2.1/resources/css/ext-all.css" />
          <script type="text/javascript">
            var myPanel = new Ext.Panel();
          </script>
        </head>
        <body>
        </body>
        </html>

    Have I now violated the terms of the GPL? The code conveyed by me to you is in a non-functional state: it will have to be combined with the actual source of Ext JS, which you (your browser) will have to retrieve, from a source made public by someone else, to be usable. Now, if the answer to the above is no, how does me conveying this code in visible form differ from the 'invisible' form conveyed by my web server?

    As a side note, a very similar thing is done in Linux with many projects that depend on less permissive licenses: the user has to retrieve these on their own and make them available to the primary lib/executable. How is this not the same, if the user is informed beforehand that he (the browser) will have to retrieve the needed resources from a different source?

    Just to make it clear, I'm pro FLOSS, and I have also published a number of projects licensed under more permissive licenses. The reason I'm asking this is that I still haven't found anyone offering a definitive answer.

  • Creating a spam list with a web crawler in python

    - by user313623
    Hey guys, I'm not trying to do anything malicious here, I just need to do some homework. I'm a fairly new programmer, I'm using Python 3.0, and I'm having difficulty using recursion for problem-solving. I've been stuck on this question for quite a while. Here's the assignment:

        Write a recursive method spam(url, n) that takes a url of a web page as input and a
        non-negative integer n, collects all the email addresses contained in the web page and
        adds them to a global dictionary variable spam_dict, and then recursively calls itself
        on every http hyperlink contained in the web page. You will use a dictionary so only
        one copy of every email address is saved; your dictionary will store (key, value)
        pairs (email, email). The recursive call should use the parameter n-1 instead of n.
        If n = 0, you should collect the email addresses but no recursive calls should be
        made. The parameter n is used to limit the recursion to at most depth n. You will need
        to use the solutions of the two above problems; your method spam() will call the
        methods links2() and emails() and possibly other functions as well.

        Notes: 1. Running spam() directly will produce no output on the screen; to find your
        spam_dict, you will need to read the value of spam_dict, and you will also need to
        reset it to the empty dictionary before every run of spam. 2. Recall how global
        variables are used.

        Usage:
        spam_dict = {}
        spam('http://reed.cs.depaul.edu/lperkovic/csc242/test1.html', 0)
        spam_dict.keys()
        dict_keys([])
        spam_dict = {}
        spam('http://reed.cs.depaul.edu/lperkovic/csc242/test1.html', 1)
        spam_dict.keys()
        dict_keys(['[email protected]', '[email protected]'])

    So far, I've written a function that traverses web pages and puts all the links in a nice little list, and what I wanted to do was call that function. And why would I use recursion on a dictionary? And how? I don't understand how n ties into all of this.

        from urllib.request import urlopen   # imports the snippet needs
        from urllib.parse import urljoin     # (MyHTMLParser is my helper, defined elsewhere)

        def links2(url):
            content = str(urlopen(url).read())
            myparser = MyHTMLParser()
            myparser.feed(content)
            lst = myparser.get()
            mergelst = []
            for link in lst:
                mergelst.append(urljoin(lst[0], link))
            print(mergelst)

    Any input (except why spam is bad) would be greatly appreciated. Also, I realize that the above function could probably look better; if you have a way to do it, I'm all ears. However, all I need is for the program to produce the proper output.
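
    A minimal sketch of how the assignment's pieces might fit together, assuming links2() is changed to return its list of absolute links and emails() is the address-extracting helper the assignment refers to (both are assumptions, not code from the question):

        spam_dict = {}

        def spam(url, n):
            for addr in emails(url):      # collect addresses on this page
                spam_dict[addr] = addr    # dict keys de-duplicate for free
            if n == 0:                    # depth limit reached: no recursive calls
                return
            for link in links2(url):      # each link is crawled one level shallower
                spam(link, n - 1)

    The depth parameter n is the whole trick: every recursive call passes n - 1, so a call started with n = 1 visits the start page plus one layer of links and then stops.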

  • One-to-many relationship with JDO in Google App Engine

    - by Marvin
    I've followed the GAE docs on setting up a one-to-many relationship in JDO, but I'm still having trouble retrieving the collection data back. I have no problem getting the other, non-collection fields back. Here are my classes:

        @PersistenceCapable
        public class User {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;
            @Persistent
            private String uniqueId;
            @Persistent
            private String email;
            @Persistent
            private List<Address> addresses = new ArrayList<Address>();
            ...
        }

        @PersistenceCapable
        public class Phone {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;
            @Persistent
            private String number;
            ...
        }

        public class UserDaoImpl implements UserDao {
            public void insertUser(User user) {
                if (user.getKey() == null) {
                    com.google.appengine.api.datastore.Key key =
                        KeyFactory.createKey(User.class.getSimpleName(), user.getEmail());
                    user.setKey(key);
                }
                PersistenceManager pm = PersistenceManagerWrapper.getPersistenceManager();
                notNull(user);
                try {
                    pm.makePersistent(user);
                } finally {
                    pm.close();
                }
            }

            @SuppressWarnings("unchecked")
            public User getUser(String uniqueId) {
                PersistenceManager pm = PersistenceManagerWrapper.getPersistenceManager();
                Query query = pm.newQuery(User.class);
                query.setFilter("uniqueId == uniqueIdParam");
                query.declareParameters("String uniqueIdParam");
                User user = null;
                try {
                    List<User> users = (List<User>) (query.execute(uniqueId)); // TODO abstract this
                    if (users.size() > 0)
                        user = users.get(0);
                } finally {
                    pm.close();
                }
                return user;
            }
        }

        public class UserDaoImplTest {
            @Test
            public void getUserTest() {
                User user = createTestUser();
                assertNotNull("The user object should not be null", user);
                userDao.insertUser(user);
                User returnedUser = userDao.getUser(TEST_USER_ID);
                assertNotNull("The returnedUser object should not be null", returnedUser);
                Assert.assertPropertyEqualsExcludeProperties("User Object", user, returnedUser, "");
            }
        }

    When I run the test, all the properties of User are populated, but the list of Phone objects I get back is empty.

  • .NET: Avoidance of custom exceptions by utilising existing types, but which?

    - by Mr. Disappointment
    Consider the following code (ASP.NET/C#):

        private void Application_Start(object sender, EventArgs e)
        {
            if (!SetupHelper.SetUp())
            {
                throw new ShitHitFanException();
            }
        }

    I've never been too hesitant to simply roll my own exception type, basically because I have found (bad practice, or not) that mostly a reasonably descriptive type name gives us enough as developers to go by in order to know what happened and why something might have happened. Sometimes the existing .NET exception types even accommodate these needs, regardless of the message.

    In this particular scenario, for demonstration purposes only, the application should die a horrible, disgraceful death should SetUp not complete properly (as dictated by its return value), but I can't find an already existing exception type in .NET which would seem to suffice; though I'm sure one will be there and I simply don't know about it. Brad Abrams posted this article that lists some of the available exception types. I say some because the article is from 2005, and, although I try to keep up to date, it's a more than plausible assumption that more have been added in later framework versions that I am still unaware of. Of course, Visual Studio gives you a nicely formatted, scrollable list of exceptions via Intellisense, but even on analysing those, I find none which would seem to suffice for this situation...

    - ApplicationException: "...when a non-fatal application error occurs". The name seems reasonable, but the error is very definitely fatal - the app is dead.
    - ExecutionEngineException: "...when there is an internal error in the execution engine of the CLR". Again, sounds reasonable, superficially; but this has a very definite purpose, and helping me out here certainly isn't it.
    - HttpApplicationException: "...when there is an error processing an HTTP request". Well, we're running an ASP.NET application! But we're also just pulling at straws here.
    - InvalidOperationException: "...when a call is invalid for the current state of an instance". This isn't right, but I'm adding it to the list of 'possible should you put a gun to my head, yes'.
    - OperationCanceledException: "...upon cancellation of an operation the thread was executing". Maybe I wouldn't feel so bad using this one, but I'd still be hijacking the damn thing with little right.

    You might even ask why on earth I would want to raise an exception here, but the idea is to find out: if I were to do so, do you know of an appropriate exception for such a scenario? And basically, to what extent can we piggy-back on .NET while keeping in line with rationality?

  • Why does the rename() syscall prohibit moving a directory that I can't write to a different directory?

    - by Daniel Papasian
    I am trying to understand why this design decision was made with the rename() syscall in 4.2BSD. There's nothing I'm trying to solve here, just understand the rationale for the behavior itself. 4.2BSD saw the introduction of the rename() syscall for the purpose of allowing atomic renames/moves of files. From 4.3BSD-Reno/src/sys/ufs/ufs_vnops.c:

        /*
         * If ".." must be changed (ie the directory gets a new
         * parent) then the source directory must not be in the
         * directory heirarchy above the target, as this would
         * orphan everything below the source directory. Also
         * the user must have write permission in the source so
         * as to be able to change "..". We must repeat the call
         * to namei, as the parent directory is unlocked by the
         * call to checkpath().
         */
        if (oldparent != dp->i_number)
            newparent = dp->i_number;
        if (doingdirectory && newparent) {
            VOP_LOCK(fndp->ni_vp);
            error = ufs_access(fndp->ni_vp, VWRITE, tndp->ni_cred);
            VOP_UNLOCK(fndp->ni_vp);

    So clearly this check was added intentionally. My question is: why? Is this behavior supposed to be intuitive? The effect of this is that one cannot atomically move a directory (located in a directory that one can write) that one cannot write to another directory that one can write. You can, however, create a new directory, move the links over (assuming one has read access to the directory), and then remove one's write bit on the directory. You just can't do so atomically.

        % cd /tmp
        % mkdir stackoverflow-question
        % cd stackoverflow-question
        % mkdir directory-1
        % mkdir directory-2
        % mkdir directory-1/directory-i-cant-write
        % echo "foo" > directory-1/directory-i-cant-write/contents
        % chmod 000 directory-1/directory-i-cant-write/contents
        % chmod 000 directory-1/directory-i-cant-write
        % mv directory-1/directory-i-cant-write directory-2
        mv: rename directory-1/directory-i-cant-write to directory-2/directory-i-cant-write: Permission denied

    We now have a directory I can't write, with contents I can't read, that I can't move atomically. I can, however, achieve the same effect non-atomically by changing permissions, making the new directory, using ln to create the new links, and changing permissions. (Left as an exercise to the reader.)

    . and .. are special-cased already, so I don't particularly buy that it is intuitive that if I can't write a directory I can't "change ..", which is what the source suggests. Is there any reason for this besides it being the perceived correct behavior by the author of the code? Is there anything bad that can happen if we let people atomically move directories (that they can't write) between directories that they can write?

  • Write Scheme data structures so they can be eval-d back in, or alternative

    - by Jesse Millikan
    I'm writing an application (a juggling pattern animator) in PLT Scheme that accepts Scheme expressions as values for some fields. I'm attempting to write a small text editor that will let me "explode" expressions into expressions that can still be eval'd but contain the data as literals for manual tweaking. For example, (4hss->sexp "747") is a function call that generates a legitimate pattern. If I eval and print that, it becomes

        (((7 3) - - -) (- - (4 2) -) (- (7 2) - -) (- - - (7 1))
         ((4 0) - - -) (- - (7 0) -) (- (7 2) - -) (- - - (4 3))
         ((7 3) - - -) (- - (7 0) -) (- (4 1) - -) (- - - (7 1)))

    which can be "read" as a string, but will not "eval" the same as the function. For this statement, of course, what I need would be as simple as (quote (((7 3..., but other examples are non-trivial. This one, for example, contains structs which print as vectors:

        pair-of-jugglers
        ; -->
        (#(struct:hand #(struct:position -0.35 2.0 1.0) #(struct:position -0.6 2.05 1.1) 1.832595714594046)
         #(struct:hand #(struct:position 0.35 2.0 1.0) #(struct:position 0.6 2.0500000000000003 1.1) 1.308996938995747)
         #(struct:hand #(struct:position 0.35 -2.0 1.0) #(struct:position 0.6 -2.05 1.1) -1.3089969389957472)
         #(struct:hand #(struct:position -0.35 -2.0 1.0) #(struct:position -0.6 -2.05 1.1) -1.8325957145940461))

    I've thought of at least three possible solutions, none of which I like very much.

    Solution A is to write a recursive eval-able output function myself for a reasonably large subset of the values that I might be using. There (probably...) won't be any circular references by the nature of the data structures used, so that wouldn't be such a long job. The output would end up looking like

        `(((3 0) (...                       ; ex 1
        `(,(make-hand (make-position ...    ; ex 2

    or even worse if I couldn't figure out how to do it properly with quasiquoting.

    Solution B would be to write out everything as

        (read (open-input-string "(big-long-s-expression)"))

    which, technically, solves the problem I'm bringing up but is... ugly.

    Solution C might be a different approach of giving up eval and using only read for parsing input, or an uglier approach where the s-expression is used directly as data if eval fails, but those both seem unpleasant compared to using Scheme values directly.

    Undiscovered Solution D would be a PLT Scheme option, function or library I haven't located that would match Solution A.

    Help me out before I start having bad recursion dreams again.
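
    (The same round-trip gap exists in most languages, so as a rough analogy only, in Python rather than Scheme: plain data has an eval-able printed form, structs/objects do not, and Solution A amounts to walking the value and emitting constructor calls. Hand here is a stand-in for the question's hand struct, not real code from the project.)

        class Hand:
            def __init__(self, x):
                self.x = x

        def explode(v):
            # emit an eval-able expression for v, the way Solution A would
            if isinstance(v, Hand):
                return "Hand(%s)" % explode(v.x)
            if isinstance(v, list):
                return "[%s]" % ", ".join(explode(e) for e in v)
            return repr(v)   # numbers, strings: repr already round-trips

        value = [Hand(5), [1, 2, "three"]]
        print(explode(value))   # [Hand(5), [1, 2, 'three']]  (eval-able)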

  • How best to store Subversion version information in EAR's?

    - by Rene
    When receiving a bug report or an it-doesn't-work message, one of my initial questions is always: what version? With different builds being at many stages of testing, planning and deploying, this is often a non-trivial question. In the case of releasing Java JAR (ear, jar, rar, war) files, I would like to be able to look in/at the JAR and switch to the same branch, version or tag that was the source of the released JAR.

    How can I best adjust the ant build process so that the version information in the svn checkout remains in the created build? I was thinking along the lines of:

    - adding a VERSION file, but with what content?
    - storing information in the META-INF file, but under what property with which content?
    - copying sources into the result archive
    - adding svn:properties to all sources with keywords in places the compiler leaves them be

    I ended up using the svnversion approach (the accepted answer), because it scans the entire subtree as opposed to svn info, which just looks at the current file/directory. For this I defined the SVN task in the ant file to make it more portable.

        <taskdef name="svn" classname="org.tigris.subversion.svnant.SvnTask">
          <classpath>
            <pathelement location="${dir.lib}/ant/svnant.jar"/>
            <pathelement location="${dir.lib}/ant/svnClientAdapter.jar"/>
            <pathelement location="${dir.lib}/ant/svnkit.jar"/>
            <pathelement location="${dir.lib}/ant/svnjavahl.jar"/>
          </classpath>
        </taskdef>

    Not all builds result in webservices. The ear file before deployment must keep the same name because of updating in the application server. Making the file executable is still an option, but until then I just include a version information file.

        <target name="version">
          <svn><wcVersion path="${dir.source}"/></svn>
          <echo file="${dir.build}/VERSION">${revision.range}</echo>
        </target>

    Refs:
    - svnversion: http://svnbook.red-bean.com/en/1.1/re57.html
    - svn info: http://svnbook.red-bean.com/en/1.1/re13.html
    - subclipse svn task: http://subclipse.tigris.org/svnant/svn.html
    - svn client: http://svnkit.com/

  • Is it possible to reliably auto-decode user files to Unicode? [C#]

    - by NVRAM
    I have a web application that allows users to upload their content for processing. The processing engine expects UTF-8 (and I'm composing XML from multiple users' files), so I need to ensure that I can properly decode the uploaded files. Since I'd be surprised if any of my users even knew their files were encoded, I have very little hope they'd be able to correctly specify the encoding (decoder) to use. And so, my application is left with the task of detecting before decoding.

    This seems like such a universal problem that I'm surprised not to find either a framework capability or a general recipe for the solution. Can it be I'm not searching with meaningful search terms? I've implemented BOM-aware detection (http://en.wikipedia.org/wiki/Byte_order_mark), but I'm not sure how often files will be uploaded without a BOM to indicate encoding, and this isn't useful for most non-UTF files. My questions boil down to:

    - Is BOM-aware detection sufficient for the vast majority of files?
    - In the case where BOM detection fails, is it possible to try different decoders and determine if they are "valid"? (My attempts indicate the answer is "no.")
    - Under what circumstances will a "valid" file fail with the C# encoder/decoder framework?
    - Is there a repository anywhere that has a multitude of files with various encodings to use for testing?

    While I'm specifically asking about C#/.NET, I'd like to know the answer for Java, Python and other languages for the next time I have to do this.

    So far I've found:

    - A "valid" UTF-16 file with Ctrl-S characters has caused encoding to UTF-8 to throw an exception (illegal character?). (That was an XML encoding exception.)
    - Decoding a valid UTF-16 file with UTF-8 succeeds but gives text with null characters. Huh?

    Currently, I only expect UTF-8, UTF-16 and probably ISO-8859-1 files, but I want the solution to be extensible if possible. My existing set of input files isn't nearly broad enough to uncover all the problems that will occur with live files. Although the files I'm trying to decode are "text", I think they are often created with methods that leave garbage characters in the files. Hence "valid" files may not be "pure". Oh joy.

    Thanks.
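
    (Since the question asks how this would look in Python too, here is a minimal sketch of BOM-aware detection plus a trial decode there. It is a heuristic under the same caveats the question raises: many byte sequences decode "successfully" in several encodings, so a clean decode is evidence, not proof.)

        import codecs

        # Known BOMs, longest first so UTF-32 BOMs are not mistaken for
        # their UTF-16 prefixes (BOM_UTF32_LE starts with BOM_UTF16_LE).
        BOMS = [
            (codecs.BOM_UTF32_LE, "utf-32-le"),
            (codecs.BOM_UTF32_BE, "utf-32-be"),
            (codecs.BOM_UTF8, "utf-8-sig"),
            (codecs.BOM_UTF16_LE, "utf-16-le"),
            (codecs.BOM_UTF16_BE, "utf-16-be"),
        ]

        def sniff_encoding(raw):
            """Best-guess codec for raw bytes: BOM first, then trial decode."""
            for bom, name in BOMS:
                if raw.startswith(bom):
                    return name
            try:
                raw.decode("utf-8")
                return "utf-8"         # strict UTF-8 rarely passes by accident
            except UnicodeDecodeError:
                return "iso-8859-1"    # accepts any byte sequence: a guess, not proof

    Note that iso-8859-1 decodes every possible byte sequence, which mirrors the question's observation that trial decoding alone cannot distinguish "valid" from merely "decodable".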

  • Python - Polymorphism in wxPython, What's wrong?

    - by Wallter
    I am trying to write a simple custom button in wxPython. My code is as follows; an error is thrown on line 19 of my Custom_Button.py. What is going on? I can find no help online for this error and have a suspicion that it has to do with the polymorphism. (As a side note: I am relatively new to Python, having come from C++ and C#; any help on syntax and function of the code would be great - knowing that, it could be a simple error. Thanks!)

    Error:

        def __init__(self, parent, id=-1, NORM_BMP, PUSH_BMP, MOUSE_OVER_BMP, **kwargs):
        SyntaxError: non-default argument follows default argument

    Main.py:

        class MyFrame(wx.Frame):
            def __init__(self, parent, ID, title):
                wxFrame.__init__(self, parent, ID, title, wxDefaultPosition, wxSize(400, 400))
                self.CreateStatusBar()
                self.SetStatusText("Program testing custom button overlays")
                menu = wxMenu()
                menu.Append(ID_ABOUT, "&About", "More information about this program")
                menu.AppendSeparator()
                menu.Append(ID_EXIT, "E&xit", "Terminate the program")
                menuBar = wxMenuBar()
                menuBar.Append(menu, "&File");
                self.SetMenuBar(menuBar)
                self.Button1 = Custom_Button(self, parent, -1,
                                             "D:/Documents/Python/Normal.bmp",
                                             "D:/Documents/Python/Clicked.bmp",
                                             "D:/Documents/Python/Over.bmp",
                                             "None", wx.Point(200,200), wx.Size(300,100))
                EVT_MENU(self, ID_ABOUT, self.OnAbout)
                EVT_MENU(self, ID_EXIT, self.TimeToQuit)

            def OnAbout(self, event):
                dlg = wxMessageDialog(self, "Testing the functions of custom "
                                      "buttons using pyDev and wxPython",
                                      "About", wxOK | wxICON_INFORMATION)
                dlg.ShowModal()
                dlg.Destroy()

            def TimeToQuit(self, event):
                self.Close(true)

        class MyApp(wx.App):
            def OnInit(self):
                frame = MyFrame(NULL, -1, "wxPython | Buttons")
                frame.Show(true)
                self.SetTopWindow(frame)
                return true

        app = MyApp(0)
        app.MainLoop()

    Custom_Button.py:

        import wx
        from wxPython.wx import *

        class Custom_Button(wx.PyControl):
            #################################################
            ## THE ERROR IS BEING THROWN SOMEWHERE IN HERE ##
            #################################################
            # The BMPs
            Mouse_over_bmp = wx.Bitmap(0)   # When the mouse is over
            Norm_bmp = wx.Bitmap(0)         # The normal BMP
            Push_bmp = wx.Bitmap(0)         # The down BMP
            Pos_bmp = wx.Point(0,0)         # The position of the button

            def __init__(self, parent, id=-1, NORM_BMP, PUSH_BMP, MOUSE_OVER_BMP,
                         text="", pos, size, **kwargs):
                wx.PyControl.__init__(self, parent, id, **kwargs)
                # Set the BMPs to the ones given in the constructor
                self.Mouse_over_bmp = wx.Bitmap(MOUSE_OVER_BMP)
                self.Norm_bmp = wx.Bitmap(NORM_BMP)
                self.Push_bmp = wx.Bitmap(PUSH_BMP)
                self.Pos_bmp = pos
                #################################################
                ## THE ERROR IS BEING THROWN SOMEWHERE IN HERE ##
                #################################################

                self.Bind(wx.EVT_LEFT_DOWN, self._onMouseDown)
                self.Bind(wx.EVT_LEFT_UP, self._onMouseUp)
                self.Bind(wx.EVT_LEAVE_WINDOW, self._onMouseLeave)
                self.Bind(wx.EVT_ENTER_WINDOW, self._onMouseEnter)
                self.Bind(wx.EVT_ERASE_BACKGROUND, self._onEraseBackground)
                self.Bind(wx.EVT_PAINT, self._onPaint)
                self._mouseIn = self._mouseDown = False

            def _onMouseEnter(self, event):
                self._mouseIn = True

            def _onMouseLeave(self, event):
                self._mouseIn = False

            def _onMouseDown(self, event):
                self._mouseDown = True

            def _onMouseUp(self, event):
                self._mouseDown = False
                self.sendButtonEvent()

            def sendButtonEvent(self):
                event = wx.CommandEvent(wx.wxEVT_COMMAND_BUTTON_CLICKED, self.GetId())
                event.SetInt(0)
                event.SetEventObject(self)
                self.GetEventHandler().ProcessEvent(event)

            def _onEraseBackground(self, event):
                # reduce flicker
                pass

            def _onPaint(self, event):
                dc = wx.BufferedPaintDC(self)
                dc.SetFont(self.GetFont())
                dc.SetBackground(wx.Brush(self.GetBackgroundColour()))
                dc.Clear()
                dc.DrawBitmap(self.Norm_bmp)
                # draw whatever you want to draw
                # draw glossy bitmaps e.g. dc.DrawBitmap
                if self._mouseIn:    # If the mouse is over the button
                    dc.DrawBitmap(self, self.Mouse_over_bmp, self.Pos_bmp, useMask=False)
                if self._mouseDown:  # If the mouse clicks the button
                    dc.DrawBitmap(self, self.Push_bmp, self.Pos_bmp, useMask=False)
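
    (The SyntaxError itself is purely about parameter order: in Python, a parameter without a default may not follow one with a default, and the signature above has NORM_BMP, PUSH_BMP, MOUSE_OVER_BMP, pos and size after id=-1 and text="". A hedged sketch of one way to reorder it; the call sites would need to pass arguments in this order too:)

        def __init__(self, parent, NORM_BMP, PUSH_BMP, MOUSE_OVER_BMP,
                     id=-1, text="", pos=wx.DefaultPosition,
                     size=wx.DefaultSize, **kwargs):
            # required (no-default) parameters first, defaulted ones after
            wx.PyControl.__init__(self, parent, id, pos=pos, size=size, **kwargs)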

  • CLR 4.0 inlining policy? (maybe bug with MethodImplOptions.NoInlining)

    - by ControlFlow
    I've been testing some new CLR 4.0 behavior in method inlining (cross-assembly inlining) and found some strange results.

    Assembly ClassLib.dll:

        using System.Diagnostics;
        using System;
        using System.Reflection;
        using System.Security;
        using System.Runtime.CompilerServices;

        namespace ClassLib
        {
          public static class A
          {
            static readonly MethodInfo GetExecuting =
              typeof(Assembly).GetMethod("GetExecutingAssembly");

            public static Assembly Foo(out StackTrace stack) // 13 bytes
            {
              // explicit call to GetExecutingAssembly()
              stack = new StackTrace();
              return Assembly.GetExecutingAssembly();
            }

            public static Assembly Bar(out StackTrace stack) // 25 bytes
            {
              // reflection call to GetExecutingAssembly()
              stack = new StackTrace();
              return (Assembly) GetExecuting.Invoke(null, null);
            }

            public static Assembly Baz(out StackTrace stack) // 9 bytes
            {
              stack = new StackTrace();
              return null;
            }

            public static Assembly Bob(out StackTrace stack) // 13 bytes
            {
              // call of non-inlinable method!
              return SomeSecurityCriticalMethod(out stack);
            }

            [SecurityCritical, MethodImpl(MethodImplOptions.NoInlining)]
            static Assembly SomeSecurityCriticalMethod(out StackTrace stack)
            {
              stack = new StackTrace();
              return Assembly.GetExecutingAssembly();
            }
          }
        }

    Assembly ConsoleApp.exe:

        using System;
        using ClassLib;
        using System.Diagnostics;

        class Program
        {
          static void Main()
          {
            Console.WriteLine("runtime: {0}", Environment.Version);
            StackTrace stack;
            Console.WriteLine("Foo: {0}\n{1}", A.Foo(out stack), stack);
            Console.WriteLine("Bar: {0}\n{1}", A.Bar(out stack), stack);
            Console.WriteLine("Baz: {0}\n{1}", A.Baz(out stack), stack);
            Console.WriteLine("Bob: {0}\n{1}", A.Bob(out stack), stack);
          }
        }

    Results:

        runtime: 4.0.30128.1
        Foo: ClassLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
          at ClassLib.A.Foo(StackTrace& stack)
          at Program.Main()
        Bar: ClassLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
          at ClassLib.A.Bar(StackTrace& stack)
          at Program.Main()
        Baz:
          at Program.Main()
        Bob: ClassLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
          at Program.Main()

    So the questions are:

    - Why does the JIT not inline the Foo and Bar calls as it does Baz? They are lower than 32 bytes of IL and are good candidates for inlining.
    - Why did the JIT inline the call of Bob and the inner call of SomeSecurityCriticalMethod, which is marked with the [MethodImpl(MethodImplOptions.NoInlining)] attribute?
    - Why does GetExecutingAssembly return a valid assembly when called from the inlined Baz and SomeSecurityCriticalMethod methods? I'd expect it to perform a stack walk to detect the executing assembly, but the stack would contain only the Program.Main() call and no methods of the ClassLib assembly, so ConsoleApp should be returned.

  • Twisted: why is it that passing a deferred callback to a deferred thread makes the thread blocking again?

    - by surtyaarthoughts
    I unsuccessfully tried using txredis (the non-blocking Twisted API for Redis) for a persistent message queue I'm trying to set up with a Scrapy project I am working on. I found that although the client was not blocking, it became much slower than it could have been, because what should have been one event in the reactor loop was split up into thousands of steps.

    So instead, I tried making use of redis-py (the regular blocking Redis API) and wrapping the call in a deferred thread. It works great; however, I want to perform an inner deferred when I make a call to Redis, as I would like to set up connection pooling in an attempt to speed things up further.

    Below is my interpretation of some sample code taken from the Twisted docs for a deferred thread, to illustrate my use case:

        #!/usr/bin/env python
        from twisted.internet import reactor, threads
        from twisted.internet.task import LoopingCall
        import time

        def main_loop():
            print 'doing stuff in main loop.. do not block me!'

        def aBlockingRedisCall():
            print 'doing lookup... this may take a while'
            time.sleep(10)
            return 'results from redis'

        def result(res):
            print res

        def main():
            lc = LoopingCall(main_loop)
            lc.start(2)
            d = threads.deferToThread(aBlockingRedisCall)
            d.addCallback(result)
            reactor.run()

        if __name__=='__main__':
            main()

    And here is my alteration for connection pooling, which makes the code in the deferred thread blocking:

        #!/usr/bin/env python
        from twisted.internet import reactor, defer
        from twisted.internet.task import LoopingCall
        import time

        def main_loop():
            print 'doing stuff in main loop.. do not block me!'

        def aBlockingRedisCall(x):
            if x < 5:  # all connections are busy, try later
                print '%s is less than 5, get a redis client later' % x
                x += 1
                d = defer.Deferred()
                d.addCallback(aBlockingRedisCall)
                reactor.callLater(1.0, d.callback, x)
                return d
            else:
                print 'got a redis client; doing lookup.. this may take a while'
                time.sleep(10)  # this is now blocking.. any ideas?
                d = defer.Deferred()
                d.addCallback(gotFinalResult)
                d.callback(x)
                return d

        def gotFinalResult(x):
            return 'final result is %s' % x

        def result(res):
            print res

        def aBlockingMethod():
            print 'going to sleep...'
            time.sleep(10)
            print 'woke up'

        def main():
            lc = LoopingCall(main_loop)
            lc.start(2)
            d = defer.Deferred()
            d.addCallback(aBlockingRedisCall)
            d.addCallback(result)
            reactor.callInThread(d.callback, 1)
            reactor.run()

        if __name__=='__main__':
            main()

    So my question is: does anyone know why my alteration causes the deferred thread to be blocking, and/or can anyone suggest a better solution?
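
    (A hedged reading of what changes between the two versions: in the first, the whole of aBlockingRedisCall runs in a worker thread via deferToThread. In the second, only the very first call runs in a thread via reactor.callInThread; the retries are scheduled with reactor.callLater, and callLater fires its callbacks in the reactor thread, so by the time time.sleep(10) runs, it is stalling the event loop. A sketch of one way to keep the retry loop non-blocking while pushing only the blocking lookup back onto a thread each time; blockingLookup is an illustrative stand-in, not a real pool implementation:)

        from twisted.internet import reactor, threads
        from twisted.internet.task import deferLater

        def aBlockingRedisCall(x):
            if x < 5:
                # no free client yet: wait in the reactor (non-blocking) and retry
                return deferLater(reactor, 1.0, aBlockingRedisCall, x + 1)
            # got a client: run only the blocking part in a worker thread
            return threads.deferToThread(blockingLookup, x)

        def blockingLookup(x):   # stand-in for the real redis-py call
            import time
            time.sleep(10)
            return 'final result is %s' % x

    Returning a Deferred from inside a callback chain is fine; Twisted waits for it to fire before calling the next callback, so result() still receives the final value.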

  • How to combine designable components with dependency injection

    - by Wim Coenen
    When creating a designable .NET component, you are required to provide a default constructor. From the IComponent documentation:

        To be a component, a class must implement the IComponent interface and provide a
        basic constructor that requires no parameters or a single parameter of type
        IContainer.

    This makes it impossible to do dependency injection via constructor arguments. (Extra constructors could be provided, but the designer would ignore them.) Some alternatives we're considering:

    - Service locator: don't use dependency injection; instead use the service locator pattern to acquire dependencies. This seems to be what IComponent.Site.GetService is for. I guess we could create a reusable ISite implementation (ConfigurableServiceLocator?) which can be configured with the necessary dependencies. But how does this work in a designer context?
    - Dependency injection via properties: inject dependencies via properties. Provide default instances if they are necessary to show the component in a designer. Document which properties need to be injected.
    - Inject dependencies with an Initialize method: this is much like injection via properties, but it keeps the list of dependencies that need to be injected in one place. This way the list of required dependencies is documented implicitly, and the compiler assists you with errors when the list changes.

    Any idea what the best practice is here? How do you do it?

    edit: I have removed "(e.g. a WinForms UserControl)" since I intended the question to be about components in general. Components are all about inversion of control (see section 8.3.1 of the UMLv2 specification), so I don't think that "you shouldn't inject any services" is a good answer.

    edit 2: It took some playing with WPF and the MVVM pattern to finally "get" Mark's answer. I see now that visual controls are indeed a special case. As for using non-visual components on designer surfaces, I think the .NET component model is fundamentally incompatible with dependency injection. It appears to be designed around the service locator pattern instead. Maybe this will start to change with the infrastructure that was added in .NET 4.0 in the System.ComponentModel.Composition namespace.

  • symfony/zend integration - blank screen

    - by user142176
    Hi, I need to use ZendAMF on a symfony project and I'm currently working on integrating the two. I have a frontend app with two modules, one of which is 'gateway', the AMF gateway. In my frontend app config, I have the following in the configure function:

        // load symfony autoloading first
        parent::initialize();

        // Integrate Zend Framework
        require_once('[MY PATH TO ZEND]\Loader.php');
        spl_autoload_register(array('Zend_Loader', 'autoload'));

    The executeIndex function in my gateway actions.class.php looks like this:

        // No Layout
        $this->setLayout(false);

        // Set MIME Type
        $this->getResponse()->setContentType('application/x-amf; charset='.sfConfig::get('sf_charset'));

        // Disable because this is a non-html page
        sfConfig::set('sf_web_debug', false);

        // Create AMF Server
        $server = new Zend_Amf_Server();
        $server->setClass('MYCLASS');
        echo $server->handle();

        return sfView::NONE;

    Now when I try to visit the url for the gateway module, or even the other module, which was working perfectly fine until this attempt, I only see a blank screen, with not even the symfony dev bar loaded. Oddly enough, my symfony logs are not being updated either, which suggests that symfony is not even being 'reached'. So presumably the error has something to do with Zend, but I have no idea how to figure out what the error could be.

    One thing I do know for sure is that this is not a file path error, because if I change the path in the following line (a part of frontendConfiguration, as shown above), I get a Zend_Amf_Server not found error. So the path must be correct. Also, if I comment out this very same line, the second module returns to normality and my gateway broadcasts a blank x-amf stream.

        spl_autoload_register(array('Zend_Loader', 'autoload'));

    Does anyone have any tips on how I could attack this problem? Thanks

    P.S. I'm currently running an older version of Zend, which is why I am using Zend_Loader instead of Zend_autoLoader (I think). But I've tried switching to the new lib and the error still remains. So it's not a version problem either.

  • How do you access a secure website within a SharePoint webpart?

    - by Bill
    How do you access a secure website within a SharePoint webpart? The following code works fine as a console application, but if you run it in a webpart you will get an access violation:

        WebRequest request = WebRequest.Create("https://somesecuresite.com");
        WebResponse firstResponse = null;
        try
        {
            firstResponse = request.GetResponse();
        }
        catch (WebException ex)
        {
            writer.WriteLine("Error: " + ex.ToString());
            return;
        }

    If you access a non-secure site, it also works. Any ideas?

        Error: System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
           at System.Net.UnsafeNclNativeMethods.NativePKI.CertVerifyCertificateChainPolicy(IntPtr policy, SafeFreeCertChain chainContext, ChainPolicyParameter& cpp, ChainPolicyStatus& ps)
           at System.Net.PolicyWrapper.VerifyChainPolicy(SafeFreeCertChain chainContext, ChainPolicyParameter& cpp)
           at System.Net.Security.SecureChannel.VerifyRemoteCertificate(RemoteCertValidationCallback remoteCertValidationCallback)
           at System.Net.Security.SslState.CompleteHandshake()
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)
           at System.Net.TlsStream.CallProcessAuthentication(Object state)
           at System.Threading.ExecutionContext.runTryCode(Object userData)
           at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
           at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result)
           at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.ConnectStream.WriteHeaders(Boolean async)
           --- End of inner exception stack trace ---
           at System.Net.HttpWebRequest.GetResponse()

  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case, so that I can avoid the overhead. I have a table that looks something like this:

        drop table Data

        create table Data
        (
            Id BIGINT NOT NULL,
            Date BIGINT NOT NULL,
            Value BIGINT,
            constraint Cx primary key (Date, Id)
        )

        create nonclustered index Ix on Data (Id, Date)

    There are no updates to the table, ever. Deletes can occur, but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular, and page splits to the (Id, Date) index are extremely common.

    I have a deadlock situation between a standard INSERT and a SELECT that looks like this:

        select top 1 Date, Value
        from Data
        where Id = @p0
        order by Date desc

    because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx.

    Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs.

    If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return:

    - More than one row, even though I asked for TOP 1?
    - No rows, even though one exists and has been committed?
    - Worst of all, a row that doesn't satisfy the WHERE clause?

    I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend:

        Secondly, if you read dirty data, the risk you run is of reading the entirely
        wrong row. For example, if your select reads an index to find your row, then
        the update changes the location of the rows (e.g.: due to a page split or an
        update to the clustered index), when your select goes to read the actual data
        row, it's either no longer there, or a different row altogether!

    Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous.

    Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?

  • Excel UDF calculation should return 'original' value

    - by LeChe
    Hi all, I've been struggling with a VBA problem for a while now and I'll try to explain it as thoroughly as possible.

    I have created a VSTO plugin with my own RTD implementation that I am calling from my Excel sheets. To avoid having to use the full-fledged RTD syntax in the cells, I have created a UDF that hides that API from the sheet. The RTD server I created can be enabled and disabled through a button in a custom Ribbon component.

    The behavior I want to achieve is as follows:

    - If the server is disabled and a reference to my function is entered in a cell, I want the cell to display Disabled.
    - If the server is disabled, but the function had been entered in a cell when it was enabled (and the cell thus displays a value), I want the cell to keep displaying that value.
    - If the server is enabled, I want the cell to display Loading.

    Sounds easy enough. Here is an example of the (non-functional) code:

        Public Function RetrieveData(id As Long)
            Dim result As String
            ' This returns either 'Disabled' or 'Loading'
            result = Application.WorksheetFunction.RTD("SERVERNAME", "", id)
            RetrieveData = result
            If result = "Disabled" Then
                ' Obviously, this recurses (and fails), so that's not an option
                If Not IsEmpty(Application.Caller.Value2) Then
                    ' So does this
                    RetrieveData = Application.Caller.Value2
                End If
            End If
        End Function

    The function will be called in thousands of cells, so storing the 'original' values in another data structure would be a major overhead, and I would like to avoid it. Also, the RTD server does not know the values, since it does not keep a history of them, more or less for the same reason.

    I was thinking that there might be some way to exit the function that would force it not to change the displayed value, but so far I have been unable to find anything like that. Any ideas on how to solve this are greatly appreciated!

    Thanks, Che

    EDIT: By popular demand, some additional info on why I want to do all this. The function will be called in thousands of cells and the RTD server needs to retrieve quite a bit of information. This can be quite hard on both network and CPU. To allow the user to decide for himself whether he wants this load on his machine, he or she can disable the updates from the server. In that case, he or she should still be able to calculate the sheets with the values currently in the fields, yet no updates are pushed into them. Once new data is required, the server can be enabled and the fields will be updated. Again, since we are talking about quite a bit of data here, I would rather not store it somewhere in the sheet. Plus, the data should be usable even if the workbook is closed and loaded again.

  • Linq to sql C# updating reference Tables

    - by Laurence Burke
    OK, re-clarification: I am adding a new address, and I know the structure as AddressID = PK and all other entities are non-nullable. Now, on insert of a new row, the AddressID PK is autogenerated, and I am wondering if I would have to get that to create a new row in the referencing table, or does that automatically get generated also? Also, I want to be able to repopulate the dropdownlist that lists the current employee's addresses with the newly created address.

        static uint _curEmpID;

        protected void btnAdd_Click(object sender, EventArgs e)
        {
            if (txtZip.Text != "" && txtAdd1.Text != "" && txtCity.Text != "")
            {
                TestDataClassDataContext dc = new TestDataClassDataContext();
                Address addr = new Address()
                {
                    AddressLine1 = txtAdd1.Text,
                    AddressLine2 = txtAdd2.Text,
                    City = txtCity.Text,
                    PostalCode = txtZip.Text,
                    StateProvinceID = Convert.ToInt32(ddlState.SelectedValue)
                };
                dc.Addresses.InsertOnSubmit(addr);
                lblSuccess.Visible = true;
                lblErrMsg.Visible = false;
                dc.SubmitChanges();
                //
                // TODO: add reference from new address to CurEmp table
                //
                SetAddrList();
            }
            else
            {
                lblErrMsg.Text = "Invalid Input";
                lblErrMsg.Visible = true;
            }
        }

        protected void ddlAddList_SelectedIndexChanged(object sender, EventArgs e)
        {
            lblErrMsg.Visible = false;
            lblSuccess.Visible = false;
            TestDataClassDataContext dc = new TestDataClassDataContext();
            dc.ObjectTrackingEnabled = false;
            if (ddlAddList.SelectedValue != "-1")
            {
                var addr = (from a in dc.Addresses
                            where a.AddressID == Convert.ToInt32(ddlAddList.SelectedValue)
                            select a).FirstOrDefault();
                txtAdd1.Text = addr.AddressLine1;
                txtAdd2.Text = addr.AddressLine2;
                txtCity.Text = addr.City;
                txtZip.Text = addr.PostalCode;
                ddlState.SelectedValue = addr.StateProvinceID.ToString();
                btnSubmit.Visible = true;
                btnAdd.Visible = false;
            }
            else
            {
                txtAdd1.Text = "";
                txtAdd2.Text = "";
                txtCity.Text = "";
                txtZip.Text = "";
                btnAdd.Visible = true;
                btnSubmit.Visible = false;
            }
        }

        protected void SetAddrList()
        {
            TestDataClassDataContext dc = new TestDataClassDataContext();
            dc.ObjectTrackingEnabled = false;
            var addList = from addr in dc.Addresses
                          from eaddr in dc.EmployeeAddresses
                          where eaddr.EmployeeID == _curEmpID
                                && addr.AddressID == eaddr.AddressID
                          select new
                          {
                              AddValue = addr.AddressID,
                              AddText = addr.AddressID,
                          };
            ddlAddList.DataSource = addList;
            ddlAddList.DataValueField = "AddValue";
            ddlAddList.DataTextField = "AddText";
            ddlAddList.DataBind();
            ddlAddList.Items.Add(new ListItem("<Add Address>", "-1"));
        }

    OK, I am hoping that I did not include too much code. I would also really appreciate any comments about how I could otherwise improve this code.

  • SQL efficiency argument, add a column or solvable by query?

    - by theTurk
    I am a recent college graduate and a new hire for software development. Things have been a little slow lately, so I was given a DB task. My DB skills are limited to pet projects with Rails and Django. So I was a little surprised with my latest task.

    I have been asked by my manager to subclass Person with a 'Parent' table and add a reference to their custodian in the Person table. This is to facilitate going from Parent to Form when the custodian, not the Parent, is the FormContact.

    Here is a simplified, mock structure of the SQL DB I am working with (I would have drawn the relationship tables if I had access to Visio): we have a table 'Person' and we have a table 'Form'. There is a table, 'FormContact', that relates a Person to a Form; not all Persons are related to a Form. There is a relationship table for Person-to-Person relationships (Employer, Parent, etc.).

    I've asked, "Why couldn't this be handled by a query?" Response: inefficient. (Really!?!)

    So I ask, "Why not have a reference to the Form? That would be more efficient, since you wouldn't be querying the FormContacts table with the reference from child/custodian." Response: this would essentially make the Parent a FormContact. (Fair enough.)

    I went ahead and wrote a query to get from a non-FormContact Parent to a Form, and tested it on the production server. The response time was instantaneous. *SOME_VALUE* is the Parent's FK ID.

        SELECT FormID FROM FormContact
        WHERE FormContact.ContactID IN
            (SELECT SourceContactID FROM ContactRelationship
             WHERE (ContactRelationship.RelatedContactID = *SOME_VALUE*)
               AND (ContactRelationship.Relationship = 'Parent'));

    If I am right, this is an unnecessary change. What should I do: defend my position, or should I concede to the manager's request? If I am wrong, what is my error? Is there a better solution than the manager's?

  • Adding rows with linq trouble with reference table

    - by Laurence Burke
    I am adding a new address, and I know the structure as AddressID = PK and all other entities are non-nullable. Now, on insert of a new row, the AddressID PK is autogenerated, and I am wondering if I would have to get that to create a new row in the referencing table EmployeeAddress, or does that automatically get generated also? Also, I want to be able to repopulate the dropdownlist that lists the current employee's addresses with the newly created address.

        static uint _curEmpID;

        protected void btnAdd_Click(object sender, EventArgs e)
        {
            if (txtZip.Text != "" && txtAdd1.Text != "" && txtCity.Text != "")
            {
                TestDataClassDataContext dc = new TestDataClassDataContext();
                Address addr = new Address()
                {
                    AddressLine1 = txtAdd1.Text,
                    AddressLine2 = txtAdd2.Text,
                    City = txtCity.Text,
                    PostalCode = txtZip.Text,
                    StateProvinceID = Convert.ToInt32(ddlState.SelectedValue)
                };
                dc.Addresses.InsertOnSubmit(addr);
                lblSuccess.Visible = true;
                lblErrMsg.Visible = false;
                dc.SubmitChanges();
                //
                // TODO: insert new row in EmployeeAddress to reference CurEmp
                // to the newly created address
                //
                SetAddrList();
            }
            else
            {
                lblErrMsg.Text = "Invalid Input";
                lblErrMsg.Visible = true;
            }
        }

        protected void ddlAddList_SelectedIndexChanged(object sender, EventArgs e)
        {
            lblErrMsg.Visible = false;
            lblSuccess.Visible = false;
            TestDataClassDataContext dc = new TestDataClassDataContext();
            dc.ObjectTrackingEnabled = false;
            if (ddlAddList.SelectedValue != "-1")
            {
                var addr = (from a in dc.Addresses
                            where a.AddressID == Convert.ToInt32(ddlAddList.SelectedValue)
                            select a).FirstOrDefault();
                txtAdd1.Text = addr.AddressLine1;
                txtAdd2.Text = addr.AddressLine2;
                txtCity.Text = addr.City;
                txtZip.Text = addr.PostalCode;
                ddlState.SelectedValue = addr.StateProvinceID.ToString();
                btnSubmit.Visible = true;
                btnAdd.Visible = false;
            }
            else
            {
                txtAdd1.Text = "";
                txtAdd2.Text = "";
                txtCity.Text = "";
                txtZip.Text = "";
                btnAdd.Visible = true;
                btnSubmit.Visible = false;
            }
        }

        protected void SetAddrList()
        {
            TestDataClassDataContext dc = new TestDataClassDataContext();
            dc.ObjectTrackingEnabled = false;
            var addList = from addr in dc.Addresses
                          from eaddr in dc.EmployeeAddresses
                          where eaddr.EmployeeID == _curEmpID
                                && addr.AddressID == eaddr.AddressID
                          select new
                          {
                              AddValue = addr.AddressID,
                              AddText = addr.AddressID,
                          };
            ddlAddList.DataSource = addList;
            ddlAddList.DataValueField = "AddValue";
            ddlAddList.DataTextField = "AddText";
            ddlAddList.DataBind();
            ddlAddList.Items.Add(new ListItem("<Add Address>", "-1"));
        }

    OK, I am hoping that I did not include too much code. I would also really appreciate any comments about how I could otherwise improve this code.

  • Multiprogramming in Django, writing to the Database

    - by Marcus Whybrow
    Introduction

    I have the following code, which checks to see if a similar model exists in the database, and if it does not, it creates the new model:

        class BookProfile():
            # ...
            def save(self, *args, **kwargs):
                uniqueConstraint = {'book_instance': self.book_instance,
                                    'collection': self.collection}
                # Test for other objects with identical values
                profiles = BookProfile.objects.filter(Q(**uniqueConstraint) & ~Q(pk=self.pk))
                # If none are found create the object, else fail.
                if len(profiles) == 0:
                    super(BookProfile, self).save(*args, **kwargs)
                else:
                    raise ValidationError('A Book Profile for that book instance in that collection already exists')

    I first build my constraints, then search for a model with those values which I am enforcing must be unique, Q(**uniqueConstraint). In addition, I ensure that if the save method is updating and not inserting, we do not find this object when looking for other similar objects, ~Q(pk=self.pk). I should mention that I am implementing soft delete (with a modified objects manager which only shows non-deleted objects), which is why I must check for myself rather than relying on unique_together errors.

    Problem

    Right, that's the introduction out of the way. My problem is that when multiple identical objects are saved in quick (or as near as simultaneous) succession, sometimes both get added, even though the first being added should prevent the second. I have tested the code in the shell and it succeeds every time I run it.

    Thus my assumption is: if, say, we have two objects being added, Object A and Object B, then Object A runs its check upon save() being called. Then the process saving Object B gets some time on the processor. Object B runs that same test, but Object A has not yet been added, so Object B is added to the database. Then Object A regains control of the processor; it has already run its test, so even though identical Object B is in the database, it adds itself regardless.

    My Thoughts

    The reason I suspect multiprogramming could be involved is that each of Object A and Object B is being added through an API save view, so a request to the view is made for each save; it is not a single request with multiple sequential saves on objects. It might be the case that Apache is creating a process for each request, and thus causing the problems I think I am seeing. As you would expect, the problem only occurs sometimes, which is characteristic of multiprogramming or multiprocessing errors.

    If this is the case, is there a way to make the test and set parts of the save() method a critical section, so that a process switch cannot happen between the test and the set?
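
    (A hedged sketch of one way to get that critical section: let the database serialise the savers instead of Python. Locking the parent collection row with select_for_update, inside a transaction, on a backend with row locks, means the duplicate check and the insert happen while the lock is held, so two concurrent Apache processes cannot interleave between test and set. Collection is an assumed parent model name, and the APIs shown are from newer Django versions than the question's:)

        from django.db import transaction

        def save(self, *args, **kwargs):
            with transaction.atomic():
                # Lock the collection row: a second saver blocks here until
                # the first one commits, closing the test-then-set window.
                Collection.objects.select_for_update().get(pk=self.collection_id)
                clash = BookProfile.objects.filter(
                    book_instance=self.book_instance,
                    collection=self.collection,
                ).exclude(pk=self.pk)
                if clash.exists():
                    raise ValidationError('A Book Profile for that book instance '
                                          'in that collection already exists')
                super(BookProfile, self).save(*args, **kwargs)

    A database-level unique constraint would be simpler still, but as the question notes, soft delete rules out a plain unique_together; a partial unique index (one that only covers non-deleted rows) could restore that option on databases that support it.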

  • How does the rsync algorithm correctly identify repeating blocks?

    - by Kai
    I'm on a personal quest to learn how the rsync algorithm works. After some reading and thinking, I've come up with a situation where I think the algorithm fails. I'm trying to figure out how this is resolved in an actual implementation.

    Consider this example, where A is the receiver and B is the sender:

        A = abcde1234512345fghij
        B = abcde12345fghij

    As you can see, the only change is that 12345 has been removed.

    Now, to make this example interesting, let's choose a block size of 5 bytes (chars). Hashing the values on the sender's side using the weak checksum gives the following values list:

        abcde|12345|fghij

        abcde -> 495
        12345 -> 255
        fghij -> 520

        values = [495, 255, 520]

    Next, we check to see if any hash values differ in A. If there's a matching block, we can skip to the end of that block for the next check. If there's a non-matching block, then we've found a difference. I'll step through this process:

    1. Hash the first block. Does this hash exist in the values list? abcde -> 495 (yes, so skip)
    2. Hash the second block. Does this hash exist in the values list? 12345 -> 255 (yes, so skip)
    3. Hash the third block. Does this hash exist in the values list? 12345 -> 255 (yes, so skip)
    4. Hash the fourth block. Does this hash exist in the values list? fghij -> 520 (yes, so skip)
    5. No more data, we're done.

    Since every hash was found in the values list, we conclude that A and B are the same. Which, in my humble opinion, isn't true. It seems to me this will happen whenever there is more than one block that shares the same hash. What am I missing?
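
    (One piece the walkthrough inverts, stated hedged: in rsync the receiver hashes its own blocks and sends those checksums to the sender; the sender then scans its file against them, and the product of the scan is a list of reconstruction instructions, copy-receiver-block-i or literal bytes, never a bare same-or-different verdict. Duplicate weak hashes are also confirmed with a second, strong checksum before a match is accepted. A toy sketch of the sender-side scan; weak_checksum is a stand-in for the real rolling checksum, though it happens to reproduce the 495/255/520 values above:)

        def weak_checksum(block):
            # toy stand-in for the rolling checksum; collisions are expected,
            # which is why real rsync confirms matches with a strong hash
            return sum(block) % 65521

        def delta(sender_data, receiver_hashes, size=5):
            """Scan the sender's data, emitting copy/literal instructions."""
            out, i = [], 0
            while i < len(sender_data):
                block = sender_data[i:i + size]
                h = weak_checksum(block)
                if h in receiver_hashes:             # (strong-hash check elided)
                    out.append(('copy', receiver_hashes[h]))
                    i += size                        # jump past the matched block
                else:
                    out.append(('literal', block[:1]))
                    i += 1                           # slide the window one byte
            return out

        # Receiver A's blocks: abcde(0) 12345(1) 12345(2) fghij(3)
        hashes = {weak_checksum(b): idx
                  for idx, b in enumerate([b"abcde", b"12345", b"12345", b"fghij"])}
        print(delta(b"abcde12345fghij", hashes))
        # [('copy', 0), ('copy', 2), ('copy', 3)]  -- reconstructs B, not A

    Because the output is instructions rather than a verdict, the receiver rebuilds exactly the sender's 15 bytes; the second 12345 block in A simply never gets referenced.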

  • should I ever put a major version number into a C#/Java namespace?

    - by Andrew Patterson
    I am designing a set of 'service' layer objects (data objects and interface definitions) for a WCF web service (that will be consumed by third-party clients, i.e. not in-house, so outside my direct control). I know that I am not going to get the interface definition exactly right, and am wanting to prepare for the time when I know that I will have to introduce a breaking set of new data objects. However, the reality of the world I am in is that I will also need to run my first version simultaneously for quite a while.

    The first version of my service will have the URL http://host/app/v1service.svc, and when the time comes my new version will live at http://host/app/v2service.svc.

    However, when it comes to the data objects and interfaces, I am toying with putting the 'major' version of the interface number into the actual namespace of the classes:

        namespace Company.Product.V1
        {
            [DataContract(Namespace = "company-product-v1")]
            public class Widget
            {
                [DataMember]
                string widgetName;
            }

            public interface IFunction
            {
                Widget GetWidgetData(int code);
            }
        }

    When the time comes for a fundamental change to the service, I will introduce some classes like:

        namespace Company.Product.V2
        {
            [DataContract(Namespace = "company-product-v2")]
            public class Widget
            {
                [DataMember]
                int widgetCode;
                [DataMember]
                int widgetExpiry;
            }

            public interface IFunction
            {
                Widget GetWidgetData(int code);
            }
        }

    The advantages as I see them are that I will be able to have a single set of code serving both interface versions, sharing functionality where possible. This is because I will be able to reference both interface versions as a distinct set of C# objects. Similarly, clients may use both interface versions simultaneously, perhaps using V1.Widget in some legacy code whilst new bits move on to V2.Widget.

    Can anyone tell me why this is a stupid idea? I have a nagging feeling that this is a bit smelly.

    Notes: I am obviously not proposing that every single new version of the service would be in a new namespace. Presumably I will do as many non-breaking interface changes as possible, but I know that I will hit a point where all the data modelling will probably need a significant rewrite. I understand assembly versioning etc., but I think this question is tangential to that type of versioning. But I could be wrong.

    Read the article

  • deleted gen folder, eclipse isn't generating it now :(

    - by LuxuryMode
    I accidentally deleted my gen folder and now, predictably, my resources are all messed up. I created a gen folder myself and tried Project > Clean; that didn't work. Tried right-clicking the project and going to Android Tools > Fix Project Properties; didn't work. Tried unchecking "Build Automatically"; didn't work. Cleaned, closed the project, closed Eclipse, restarted, etc., etc. Nothing is working and I keep seeing this error:

        gen already exists but is not a source folder. Convert to a source folder or rename it.

    EDIT: OK, I was able to generate R.java, but now I'm getting crazy stuff in the console:

        [2011-06-14 17:06:11 - fastapp] Conversion to Dalvik format failed with error 1
        [2011-06-14 17:06:42 - fastapp] Dx trouble processing "java/awt/font/NumericShaper.class":

        Ill-advised or mistaken usage of a core class (java.* or javax.*)
        when not building a core library.

        This is often due to inadvertently including a core library file
        in your application's project, when using an IDE (such as Eclipse).
        If you are sure you're not intentionally defining a core class,
        then this is the most likely explanation of what's going on.

        However, you might actually be trying to define a class in a core
        namespace, the source of which you may have taken, for example,
        from a non-Android virtual machine project. This will most
        assuredly not work. At a minimum, it jeopardizes the compatibility
        of your app with future versions of the platform. It is also often
        of questionable legality.

        If you really intend to build a core library -- which is only
        appropriate as part of creating a full virtual machine
        distribution, as opposed to compiling an application -- then use
        the "--core-library" option to suppress this error message. If you
        go ahead and use "--core-library" but are in fact building an
        application, then be forewarned that your application will still
        fail to build or run, at some point. Please be prepared for angry
        customers who find, for example, that your application ceases to
        function once they upgrade their operating system. You will be to
        blame for this problem.

        If you are legitimately using some code that happens to be in a
        core package, then the easiest safe alternative you have is to
        repackage that code. That is, move the classes in question into
        your own package namespace. This means that they will never be in
        conflict with core system classes. JarJar is a tool that may help
        you in this endeavor. If you find that you cannot do this, then
        that is an indication that the path you are on will ultimately
        lead to pain, suffering, grief, and lamentation.

        [2011-06-14 17:06:42 - fastapp] Dx 1 error; aborting
        [2011-06-14 17:06:42 - fastapp] Conversion to Dalvik format failed with error 1

    And Eclipse can't resolve the import of my resources:

        import com.me.fastapp.R;

    Read the article

  • Ruby on Rails check box not updating on form submission

    - by user284194
    I have an entries controller that allows users to add contact information to the website. The user-submitted information isn't visible to users until an administrator checks a check box and submits the form. My problem is this: if I check the check box as an administrator while initially creating an entry (entries#new), the entry is publicly visible as expected. But if a non-admin user creates an entry (the normal user view doesn't include the 'live' check box, only the admin one does), that entry is stuck in limbo, because the entries#edit view for some reason doesn't update the boolean check box value when I'm logged in as an admin.

    entries#new view:

        <% form_for(@entry) do |f| %>
          <%= f.error_messages %>
          Name<br />
          <%= f.text_field :name %>
          Mailing Address<br />
          <%= f.text_field :address %>
          #...
          <%- if current_user -%>
            <%= f.label :live %><br />
            <%= f.check_box :live %>
          <%- end -%>
          <%= f.submit 'Create' %>
        <% end %>

    entries#edit view (only accessible by an admin):

        <% form_for(@entry) do |f| %>
          <%= f.error_messages %>
          <%= f.label :name %><br />
          <%= f.text_field :name %>
          Mailing Address<br />
          <%= f.text_field :address %>
          <%= f.label :live %><br />
          <%= f.check_box :live %>
          <%= f.submit 'Update' %>
        <% end %>

    Any ideas as to why an administrator can't update the :live check box from the edit view? I would greatly appreciate any suggestions. I'm new to Rails. I can post more code if it's needed. Thanks for reading my question.

    Read the article

  • How to know when a user has really released a key in Java?

    - by Luis Soeiro
    (Edited for clarity.) I want to detect when a user presses and releases a key in Java Swing, ignoring the keyboard auto-repeat feature. I would also like a pure Java approach that works on Linux, Mac OS, and Windows. Requirements:

    1. When the user presses some key, I want to know which key it is.
    2. When the user releases some key, I want to know which key it is.
    3. I want to ignore the system auto-repeat settings: I want to receive exactly one key-press event for each physical press and exactly one key-release event for each physical release.
    4. If possible, I would use items 1 to 3 to know whether the user is holding more than one key at a time (i.e., she hits 'a' and, without releasing it, hits Enter).

    The problem I'm facing in Java is that under Linux, when the user holds a key down, many keyPressed and keyReleased events are fired (because of the keyboard repeat feature). I've tried some approaches with no success:

    1. Look at the time of the last key event: on Linux the gap seems to be zero for key repeats, but on Mac OS it is not.
    2. Consider an event only if the current keyCode differs from the last one: this way the user can't hit the same key twice in a row.

    Here is the basic (non-working) piece of code:

        import java.awt.event.KeyEvent;
        import java.awt.event.KeyListener;

        public class Example implements KeyListener {
            public void keyTyped(KeyEvent e) {
            }

            public void keyPressed(KeyEvent e) {
                System.out.println("KeyPressed: " + e.getKeyCode() + ", ts=" + e.getWhen());
            }

            public void keyReleased(KeyEvent e) {
                System.out.println("KeyReleased: " + e.getKeyCode() + ", ts=" + e.getWhen());
            }
        }

    When a user holds a key (e.g., 'p') the system shows:

        KeyPressed: 80, ts=1253637271673
        KeyReleased: 80, ts=1253637271923
        KeyPressed: 80, ts=1253637271923
        KeyReleased: 80, ts=1253637271956
        KeyPressed: 80, ts=1253637271956
        KeyReleased: 80, ts=1253637271990
        KeyPressed: 80, ts=1253637271990
        KeyReleased: 80, ts=1253637272023
        KeyPressed: 80, ts=1253637272023
        ...

    At least under Linux, the JVM keeps resending all the key events while a key is being held. To make things more difficult, on my system (Kubuntu 9.04, Core 2 Duo) the timestamps keep changing, and the JVM sends each key release and the following key press with the same timestamp. This makes it hard to know when a key is really released. Any ideas? Thanks.
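
    One workaround that is often suggested for this pattern (not from the question itself): treat a keyReleased as real only if no keyPressed for the same key code arrives within a few milliseconds, since on Linux the auto-repeat release/press pair shares a timestamp. A minimal Swing sketch of that idea follows; the 20 ms window is a guess and would need tuning per system.

        import java.awt.event.KeyAdapter;
        import java.awt.event.KeyEvent;
        import java.util.HashMap;
        import java.util.Map;
        import javax.swing.Timer;

        public class RepeatFilter extends KeyAdapter {
            // Releases waiting to be confirmed, keyed by key code.
            private final Map<Integer, Timer> pending = new HashMap<>();

            @Override
            public void keyPressed(KeyEvent e) {
                Timer t = pending.remove(e.getKeyCode());
                if (t != null) {
                    t.stop();   // auto-repeat: swallow the release/press pair
                    return;
                }
                System.out.println("real press: " + e.getKeyCode());
            }

            @Override
            public void keyReleased(KeyEvent e) {
                final int code = e.getKeyCode();
                Timer t = new Timer(20, ev -> {   // 20 ms window: an assumption
                    pending.remove(code);
                    System.out.println("real release: " + code);
                });
                t.setRepeats(false);
                pending.put(code, t);
                t.start();
            }
        }

    Because both key events and javax.swing.Timer callbacks run on the event dispatch thread, no extra synchronization is needed; the trade-off is that every genuine release is reported roughly one window-length late.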

    Read the article
