Search Results

Search found 16987 results on 680 pages for 'second'.

Page 519/680 | < Previous Page | 515 516 517 518 519 520 521 522 523 524 525 526  | Next Page >

  • Does a VCL OrgChart component with decent features exist? Is there a viable alternative?

    - by user193655
    I am using the DevExpress OrgChart component, which is still maintained but has not been developed since 2003 (fortunately bugs are fixed, but nothing more). Honestly, even if it is starting to look old, this component still meets my requirements except for two things:

    1) It doesn't support the staff feature at all; to understand what I mean, see this image (where the items in staff are Administration, Communication, IT, Special Projects).
    2) It arranges the items without optimizing the space. For example, if there are 3 items at the top level and only the second item has 2 children, the top items are drawn farther apart because of those children; there is no option for "shrinking" the diagram.

    Of course the component misses tons of the features one would expect from an OrgChart tool, but in my case those two, and especially (1), are important; the rest is a lack of eye candy. I am looking for VCL components, but if (as I fear, since I have never found one) such a component doesn't exist, I can see the following alternatives:

    i) use Hydra with .NET WinForms components;
    ii) use ActiveX components. Between the two I would prefer ActiveX because of the .NET deployment hell (what I like about Delphi is that you ship the exe to the customer on Win2k and it works). However, I have never used an ActiveX control and I don't know what the deployment issues are, but I fear I would lose the ability to just replace an exe to upgrade the software;
    iii) hire a Delphi component developer who can customize the DevExpress component by adding feature (1) and maybe (2).

    I am stuck.

    Read the article

  • ListView Final Column Autosize creates scrollbar

    - by Courtney de Lautour
    Hi there, I am implementing a custom control that derives from ListView. I would like the final column to fill the remaining space (quite a common task). I have gone about this by overriding the OnResize method:

        protected override void OnResize(EventArgs e)
        {
            base.OnResize(e);
            if (Columns.Count == 0) return;
            Columns[Columns.Count - 1].Width = -2; // -2 = Fill remaining space
        }

    or via another method:

        protected override void OnResize(EventArgs e)
        {
            base.OnResize(e);
            if (!_autoFillLastColumn) return;
            if (Columns.Count == 0) return;
            int TotalWidth = 0;
            int i = 0;
            for (; i < Columns.Count - 1; i++) { TotalWidth += Columns[i].Width; }
            Columns[i].Width = this.DisplayRectangle.Width - TotalWidth;
        }

    Edit: This works fine until I dock the ListView into a parent container and resize via that control. Every second time the control size shrinks (i.e. dragging the border one pixel), I get a scroll bar on the bottom which can't move at all (not even one pixel). The result is that when I drag the parent's size I am left with a flickering scroll bar in the ListView, and a 50% chance it will still be there when the dragging stops.

    Read the article

  • How to load JPG file into NSBitmapImageRep?

    - by Adam
    Objective-C / Cocoa: I need to load the image from a JPG file into a two-dimensional array so that I can access each pixel. I am trying (unsuccessfully) to load the image into an NSBitmapImageRep. I have tried several variations on the following two lines of code:

        NSString *filePath = [NSString stringWithFormat: @"%@%@",@"/Users/adam/Documents/phoneimages/", [outLabel stringValue]]; //this coming from a window control
        NSImageRep *controlBitmap = [[NSImageRep alloc] imageRepWithContentsOfFile:filePath];

    With the code shown, I get a runtime error: -[NSImageRep imageRepWithContentsOfFile:]: unrecognized selector sent to instance 0x100147070. I have tried replacing the second line of code with:

        NSImage *controlImage = [[NSImage alloc] initWithContentsOfFile:filePath];
        NSBitmapImageRep *controlBitmap = [[NSBitmapImageRep alloc] initWithData:controlImage];

    but this yields an 'incompatible type' compiler error saying that initWithData wants an NSData variable, not an NSImage. I have also tried various other ways to get this done, but all are unsuccessful due to either compiler or runtime errors. Can someone help me with this? I will eventually need to load some PNG files in the same way (so it would be nice to have a consistent technique for both). And if you know of an easier / simpler way to accomplish what I am trying to do (i.e., get the images into a two-dimensional array) rather than using NSBitmapImageRep, then please let me know! By the way, I know the path is valid (confirmed with fileExistsAtPath) and the filename in outLabel is a file with a .jpg extension. Thanks for any help!

    Read the article

  • Is there only one distribution certificate per team leader or one per application?

    - by dbonneville
    I'm confused by the dev and dist certs. I have one app in the store, but I named my certs after my first app. Was this a mistake? I'm ready to go on my second app, but Xcode is selecting the dist cert with the old app name on it. It built without error. Even though I named it wrong, will it still work? Xcode is automatically picking this cert for me. Is this right? You need a new App ID for each app so you can 1) put it in the plist and 2) put it in the code signing section on the Build tab of the Target info, but you don't need a new dev and dist cert for each new app. Therefore, is this right too? For each app you develop, you only need your original dev and dist cert, but a new App ID for each app. This is so obscure! I wish Apple had done a better job!

    Read the article

  • Appropriate wx.Sizer(s) for the job?

    - by MetaHyperBolic
    I have a space in which I would like certain elements (represented here by A, B, D, and G) to each sit in its own "corner" of the design. The corners ought to line up as if each of the four elements were repelling the others: a rectangle. This is to be contained within a non-resizable panel. I will have several similar panels and want to keep the location of the elements as identical as possible. (I needed something a little more complex than a wx.Wizard, but with the same general idea.)

        AAAAAAAAAA BB
        CCCCCCCCCCCCCCCCCC
        CCCCCCCCCCCCCCCCCC
        CCCCCCCCCCCCCCCC
        CCCCCCCCCCCCCC
        CCCCCCCCCCCCCCCCCC
        CCCCCCCCCCCCCCCC
        CCCCCCCCCCCCCCCCCC
        DDD EEE FFF GGG

    A represents text in a large font. B represents a numeric progress meter (e.g. "1 of 7") in a small font. C represents a large block of text. D, E, F, and G are buttons. The G button is separated from the others for functionality. I have attempted nested wx.BoxSizers (horizontal boxes inside of one vertical box) without luck. My first problem with wx.BoxSizer is that the .SetMinSize on my last row is not being honored. The second problem is that I have no idea how to make the G button "take up space" without growing comically large, or how I can jam it up against the right edge and bottom edge. I have tried to use a wx.GridBagSizer, but ran into entirely different issues. After plowing through the various online tutorials and wxPython in Action, I'm a little frustrated. The relevant forums appear to see activity once every two weeks. "Playing around with it" has gotten me nowhere; I feel as if I am trying to smooth out a hump in ill-laid carpet.
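
    A minimal wxPython sketch of the nested-BoxSizer idea described above; the widget labels, fonts and panel size are illustrative assumptions, not taken from the original post. The stretch spacer in the bottom row is one way to push the last button against the right edge without letting it grow:

        import wx

        class CornerPanel(wx.Panel):
            """Fixed-size panel: A/B on top, C in the middle, D-E-F bottom-left, G bottom-right."""
            def __init__(self, parent):
                super(CornerPanel, self).__init__(parent, size=(400, 300))
                outer = wx.BoxSizer(wx.VERTICAL)

                # Top row: large title (A) on the left, small progress label (B) on the right.
                top = wx.BoxSizer(wx.HORIZONTAL)
                title = wx.StaticText(self, label="Step title")
                title.SetFont(wx.Font(14, wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_BOLD))
                top.Add(title, 1)
                top.Add(wx.StaticText(self, label="1 of 7"), 0)
                outer.Add(top, 0, wx.EXPAND | wx.ALL, 8)

                # Middle: the large text block (C) takes all remaining vertical space.
                body = wx.StaticText(self, label="Long explanatory text...")
                outer.Add(body, 1, wx.EXPAND | wx.LEFT | wx.RIGHT, 8)

                # Bottom row: D, E, F on the left; a stretch spacer pushes G to the right edge.
                bottom = wx.BoxSizer(wx.HORIZONTAL)
                for label in ("Back", "Next", "Help"):
                    bottom.Add(wx.Button(self, label=label), 0, wx.RIGHT, 4)
                bottom.AddStretchSpacer(1)
                bottom.Add(wx.Button(self, label="Cancel"), 0)
                outer.Add(bottom, 0, wx.EXPAND | wx.ALL, 8)

                self.SetSizer(outer)

        if __name__ == "__main__":
            app = wx.App(False)
            frame = wx.Frame(None, title="Sizer demo")
            CornerPanel(frame)
            frame.Show()
            app.MainLoop()

    Because the bottom row is added with proportion 0, it keeps its natural height, while the spacer (not the G button) absorbs the leftover horizontal space.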

    Read the article

  • use proxy in python to fetch a webpage

    - by carmao
    I am trying to write a function in Python to use a public anonymous proxy and fetch a webpage, but I am getting a rather strange error. The code (I have Python 2.4):

        import urllib2

        def get_source_html_proxy(url, pip, timeout):
            # timeout in seconds (maximum number of seconds willing for the code to wait in
            # case there is a proxy that is not working, then it gives up)
            proxy_handler = urllib2.ProxyHandler({'http': pip})
            opener = urllib2.build_opener(proxy_handler)
            opener.addheaders = [('User-agent', 'Mozilla/5.0')]
            urllib2.install_opener(opener)
            req=urllib2.Request(url)
            sock=urllib2.urlopen(req)
            timp=0
            # a counter that is going to measure the time until the result (webpage) is
            # returned
            while 1:
                data = sock.read(1024)
                timp=timp+1
                if len(data) < 1024:
                    break
                timpLimita=50000000 * timeout
                if timp==timpLimita:
                    # 5 millions is about 1 second
                    break
            if timp==timpLimita:
                print IPul + ": Connection is working, but the webpage is fetched in more than 50 seconds. This proxy returns the following IP: " + str(data)
                return str(data)
            else:
                print "This proxy " + IPul + "= good proxy. " + "It returns the following IP: " + str(data)
                return str(data)

        # Now, I call the function to test it for one single proxy (IP:port) that does not
        # support user and password (a public high anonymity proxy)
        # (I put a proxy that I know is working - slow, but is working)
        rez=get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50)
        print rez

    The error:

        Traceback (most recent call last):
          File "./public_html/cgi-bin/teste5.py", line 43, in ?
            rez=get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50)
          File "./public_html/cgi-bin/teste5.py", line 18, in get_source_html_proxy
            sock=urllib2.urlopen(req)
          File "/usr/lib64/python2.4/urllib2.py", line 130, in urlopen
            return _opener.open(url, data)
          File "/usr/lib64/python2.4/urllib2.py", line 358, in open
            response = self._open(req, data)
          File "/usr/lib64/python2.4/urllib2.py", line 376, in _open
            '_open', req)
          File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain
            result = func(*args)
          File "/usr/lib64/python2.4/urllib2.py", line 573, in
            lambda r, proxy=url, type=type, meth=self.proxy_open: \
          File "/usr/lib64/python2.4/urllib2.py", line 580, in proxy_open
            if '@' in host:
        TypeError: iterable argument required

    I do not know why the "@" character is an issue (I have no such character in my code - should I?). Thanks in advance for your valuable help.
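
    For comparison, a stripped-down sketch of the same fetch with two commonly suggested adjustments: passing the proxy as a full "http://host:port" URL (so urllib2 can split the host out of it) and using a socket-level default timeout instead of counting read iterations. This is an assumption about the cause, not a confirmed fix:

        import socket
        import urllib2

        def fetch_via_proxy(url, proxy_host_port, timeout_seconds):
            # Give the proxy an explicit scheme; urllib2 expects "http://host:port".
            proxy_handler = urllib2.ProxyHandler({'http': 'http://' + proxy_host_port})
            opener = urllib2.build_opener(proxy_handler)
            opener.addheaders = [('User-agent', 'Mozilla/5.0')]

            # Python 2.4's urlopen has no timeout argument, so set it at the socket level.
            socket.setdefaulttimeout(timeout_seconds)

            sock = opener.open(url)   # using the opener directly avoids install_opener()
            try:
                return sock.read()
            finally:
                sock.close()

        print fetch_via_proxy("http://www.whatismyip.com/automation/n09230945.asp",
                              "93.84.221.248:3128", 50)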

    Read the article

  • How do I combine two arrays in PHP based on a common key?

    - by Eoghan O'Brien
    Hi, I'm trying to join two associative arrays together based on an entry_id key. Both arrays come from separate database resources; the first stores entry titles, the second stores entry authors. The key => value pairs are as follows:

        array ( 'entry_id' => 1, 'title' => 'Test Entry' )
        array ( 'entry_id' => 1, 'author_id' => 2 )

    I'm trying to achieve an array structure like:

        array ( 'entry_id' => 1, 'author_id' => 2, 'title' => 'Test Entry' )

    Currently, I've solved the problem by looping through each array and formatting the array the way I want, but I think this is a bit of a memory hog:

        $entriesArray = array();
        foreach ($entryNames as $names) {
            foreach ($entryAuthors as $authors) {
                if ($names['entry_id'] === $authors['entry_id']) {
                    $entriesArray[] = array(
                        'id' => $names['entry_id'],
                        'title' => $names['title'],
                        'author_id' => $authors['author_id']
                    );
                }
            }
        }

    I'd like to know: is there an easier, less memory-intensive method of doing this?

    Read the article

  • iphone SDK buttons control views

    - by mangnv
    Hi all, I've just started to learn the SDK, and I have several questions. First, I am going to do a project for a university app. When you open the app, you see several buttons on one page, and each button (e.g. events, BBS, courses, map...) has a specific function and connects to another view. Are those just buttons on the main page? Second, how do I handle one view that has several buttons, with each button connecting to another view? Third, how do I deal with the controller classes? I'd like to make an app like this: the main page has 6 buttons (events, community, directory...). Take community, for example: community has two functions (Notices and BBS). If I tap Notices, I can read notices, and if I tap BBS, I can read the BBS. The thing that I don't get is how to deal with the classes. I mean, does the main page have one controller class that controls all 6 buttons? If my question is not clear, let me know... I really need help!

    Read the article

  • Could Python's logging SMTP Handler be freezing my thread for 2 minutes?

    - by Oddthinking
    A rather confusing sequence of events happened, according to my log file, and I am about to put a lot of the blame on the Python logger, which is a bold claim. I thought I should get some second opinions about whether what I am saying could be true. I am trying to explain why there are several large gaps in my log file (around two minutes at a time) during stressful periods for my application, when it is missing deadlines. I am using Python's logging module on a remote server and have set it up, with a configuration file, so that all logs of severity ERROR or higher are emailed to me. Typically, only one error will be sent at a time, but during periods of sustained problems I might get a dozen in a minute - annoying, but nothing that should stress SMTP. I believe that, after a short spurt of such messages, the Python logging system (or perhaps the SMTP system it is sitting on) is encountering errors or congestion. The call to Python's log is then BLOCKING for two minutes, causing my thread to miss its deadlines. (I was smart enough to move the logging until after the critical path of the application - so I don't care if logging takes me a few seconds, but two minutes is far too long.) This seems like a rather awkward architecture (both for a logging system that can freeze up, and for an SMTP system (Ubuntu, sendmail) that cannot handle dozens of emails in a minute**), so this surprises me, but it exactly fits the symptoms. Has anyone had any experience with this? Can anyone describe how to stop it from blocking? ** EDIT: I actually counted. A little under 4000 short emails in two hours - so far more than I suggested, sorry, but enough to overfill Sendmail's buffers?
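
    One way to take SMTP latency out of the calling thread entirely is to hand records to a queue and let a listener thread do the emailing. A minimal sketch with the standard library's QueueHandler/QueueListener (available from Python 3.2, so this illustrates the idea rather than being a drop-in fix for an older setup; the mail host and addresses are made-up placeholders):

        import logging
        import logging.handlers
        import queue

        log_queue = queue.Queue(-1)  # unbounded; the listener thread drains it

        # The handler that actually talks to sendmail lives behind the listener thread.
        smtp_handler = logging.handlers.SMTPHandler(
            mailhost="localhost",
            fromaddr="app@example.com",        # hypothetical addresses
            toaddrs=["me@example.com"],
            subject="Application error")
        smtp_handler.setLevel(logging.ERROR)

        listener = logging.handlers.QueueListener(
            log_queue, smtp_handler, respect_handler_level=True)
        listener.start()

        # The application thread only ever touches the queue, so logging never blocks on SMTP.
        root = logging.getLogger()
        root.addHandler(logging.handlers.QueueHandler(log_queue))
        root.setLevel(logging.INFO)

        root.error("This record is queued and emailed from the listener thread")

        # At shutdown, flush whatever is still queued.
        listener.stop()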

    Read the article

  • Authentication using cookie key with asynchronous callback

    - by greg
    I need to write an authentication function with an asynchronous callback to a remote Auth API. Simple authentication with login is working well, but authorization via a cookie key does not work. It should check whether the "lp_login" key is present in the cookies, fetch the API URL asynchronously and execute the on_response function. The code almost works, but I see two problems. First, in the on_response function I need to set the secure cookie for the authorized user on every page. In the code, user_id holds the correct ID, but the line self.set_secure_cookie("user", user_id) doesn't work. Why could that be? And the second problem: during the async fetch of the API URL, the user's page has already loaded before on_response sets the "user" cookie, so the page shows an unauthorized section with a link to log in or sign up. That will be confusing for users. To solve it, I could stop loading the page for a user who is trying to load the first page of the site. Is that possible, and how? Or maybe there is a more correct way to solve the problem?

        class BaseHandler(tornado.web.RequestHandler):
            @tornado.web.asynchronous
            def get_current_user(self):
                user_id = self.get_secure_cookie("user")
                user_cookie = self.get_cookie("lp_login")
                if user_id:
                    self.set_secure_cookie("user", user_id)
                    return Author.objects.get(id=int(user_id))
                elif user_cookie:
                    url = urlparse("http://%s" % self.request.host)
                    domain = url.netloc.split(":")[0]
                    try:
                        username, hashed_password = urllib.unquote(user_cookie).rsplit(',',1)
                    except ValueError:
                        # check against malicious clients
                        return None
                    else:
                        url = "http://%s%s%s/%s/" % (domain, "/api/user/username/", username, hashed_password)
                        http = tornado.httpclient.AsyncHTTPClient()
                        http.fetch(url, callback=self.async_callback(self.on_response))
                else:
                    return None

            def on_response(self, response):
                answer = tornado.escape.json_decode(response.body)
                username = answer['username']
                if answer["has_valid_credentials"]:
                    author = Author.objects.get(email=answer["email"])
                    user_id = str(author.id)
                    print user_id  # It returns the needed id
                    self.set_secure_cookie("user", user_id)  # but the cookie can't be set up
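
    One pattern that avoids doing asynchronous work inside get_current_user is to resolve the cookie in an asynchronous request method and only render once the callback has fired, so the cookie is in place before the page is sent. A rough sketch against the Tornado 1.x-style API used above; the handler name and template are made up, and Author is the poster's own model:

        import urllib

        import tornado.escape
        import tornado.httpclient
        import tornado.web

        class CookieLoginHandler(tornado.web.RequestHandler):
            """Resolve the lp_login cookie before rendering, instead of in get_current_user()."""

            @tornado.web.asynchronous
            def get(self):
                user_cookie = self.get_cookie("lp_login")
                if self.get_secure_cookie("user") or not user_cookie:
                    self.finish_page()          # already signed in, or nothing to check
                    return
                try:
                    username, hashed_password = urllib.unquote(user_cookie).rsplit(',', 1)
                except ValueError:
                    self.finish_page()
                    return
                url = "http://%s/api/user/username/%s/%s/" % (self.request.host, username, hashed_password)
                http = tornado.httpclient.AsyncHTTPClient()
                http.fetch(url, callback=self.async_callback(self.on_auth_response))

            def on_auth_response(self, response):
                answer = tornado.escape.json_decode(response.body)
                if answer.get("has_valid_credentials"):
                    # Author comes from the poster's models; the page renders only after this,
                    # so the secure cookie is set before the user sees the navigation header.
                    author = Author.objects.get(email=answer["email"])
                    self.set_secure_cookie("user", str(author.id))
                self.finish_page()

            def finish_page(self):
                self.render("index.html")   # render() calls finish() for us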

    Read the article

  • How does CDI injection work in MDBs and @Scheduled beans?

    - by Nils-Petter Nilsen
    I'm working on a large Java EE 6 application that is deployed on JBoss 6 Final. My current tasks involve using @Inject consistently instead of @EJB, but I'm running into some problems on some types of beans, specifically @MessageDriven beans and beans with @Scheduled methods. What happens is that if I'm unlucky with the timing (for @Schedule) or if there are messages in the MDBs' queues at startup, instantiation of the beans will fail because the injected resources (which are EJBs themselves) are not bound yet. Because I use @Inject, I'm guessing that the EJB container considers my beans to be ready, since the container itself does not care about @Inject; it probably simply assumes that since there are no @EJB injections, the beans are ready for use. The injected CDI proxies will then fail because the resources to inject aren't actually bound yet. Tiny example: @Stateless @LocalBean public class MySupportingBean { public void doSomething() { ... } } @Singleton public class MyScheduledBean { @Inject private MySupportingBean supportingBean; @Schedule(second = "*/1", hour = "*", minute = "*", persistent = false) public void onTimeout() { supportingBean.doSomething(); } } The above example will probably not fail often because there are only two beans, but the project I'm working on binds lots of EJBs, which will amplify the problem. But it might fail because there is no guarantee that MySupportingBean is bound first, and if onTimeout is invoked before MySupportingBean is bound, then instantiation of MyScheduledBean will fail. If I used @EJB instead, MyScheduledBean wouldn't be bound until the dependency to MySupportingBean was satisfied. Note that the example will not fail in onTimeout itself, but when CDI attempts to inject MySupportingBean. I've read a lot of posts on different forums where many people argue that @Inject is always better. Generally, I agree, but how do they handle @Schedule or @MessageDriven combined with @Inject? In my experience, it comes down to dumb luck whether the beans will work or not in those cases, and the beans will fail arbitrarily, depending on which order the EJBs are deployed in, and when @Schedule or onMessage are invoked.

    Read the article

  • Trimming a bit off the beginning of a recorded waveform

    - by Lowgain
    I've got a Flash 10.1 app that lets me record microphone input to a wav without a media server, which I am saving to an Amazon S3 bucket. I have another process running on a server which gets wavs from this bucket, converts them to mp3 using LAME and puts them into another bucket. This all works fine, but in converting wav to mp3, about 0.1 sec or so of silence is added to my sound. In the application these are being used in, perfect sync is critical, so I need to trim off that little bit. If I have to trim it off the original waveform that is okay; I don't expect anything important to happen in that first fraction of a second. What is the best way to go about this? I am using Adobe's WavWriter to convert my ByteArray into a proper waveform. Is there a way I can easily trim off the first few samples from my ByteArray without invalidating the structure? Alternatively, is there a good server-side tool I can use to trim the wav before running it through LAME, or an argument I can give LAME? Or could I even trim that sound off the mp3 after it has been converted? Thanks!
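
    If trimming the wav on the server before LAME is acceptable, Python's standard-library wave module can drop a fixed number of leading frames. A rough sketch; the 0.1-second figure and the file names are assumptions, and this addresses the "trim before LAME" option rather than the ByteArray or mp3 routes:

        import wave

        def trim_leading_silence(src_path, dst_path, seconds=0.1):
            """Copy src to dst, dropping the first `seconds` of audio."""
            src = wave.open(src_path, 'rb')
            try:
                params = src.getparams()
                frames_to_skip = int(src.getframerate() * seconds)
                src.readframes(frames_to_skip)          # discard the leading chunk
                remainder = src.readframes(src.getnframes() - frames_to_skip)
            finally:
                src.close()

            dst = wave.open(dst_path, 'wb')
            try:
                dst.setparams(params)                   # same channels / width / rate as the source
                dst.writeframes(remainder)
            finally:
                dst.close()

        trim_leading_silence('recording.wav', 'recording-trimmed.wav')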

    Read the article

  • A more elegant way to start a multithread alarm in Rebol VID? (What's the equivalent of a load event?)

    - by Rebol Tutorial
    I want to trigger an alarm when the GUI starts. I can't see what the equivalent of other languages' load event is in Rebol VID, so I put it in the periodic handler, which is quite convoluted. How can I do this more cleanly?

        alarm-data: none

        set-alarm: func [
            "Set alarm for future time."
            seconds "Seconds from now to ring alarm."
            message [string! unset!] "Message to print on alarm."
        ] [
            alarm-data: reduce [now/time + seconds message]
        ]

        ring: func [
            "Action for when alarm comes due."
            message [string! unset!]
        ] [
            set-face monitor either message [message]["RIIIING!"]
            ; Your sound playing can also go here (my computer doesn't have speakers).
        ]

        periodic: func [
            "Called every second, checks alarms."
            fact action event
        ] [
            either alarm-data [
                ; Update alarm countdown.
                set-face monitor rejoin [
                    "Alarm will ring in " to integer! alarm-data/1 - now/time " seconds."
                ]
                ; Check alarm.
                if now/time > alarm-data/1 [
                    ring alarm-data/2
                    ;alarm-data: none ; Reset once fired.
                ]
            ][
                either value? 'message [
                    set-alarm seconds message
                ][
                    set-alarm seconds "Alarm triggered!"
                ]
            ]
        ]

        alarm: func [seconds message [string! unset!]][
            system/words/seconds: seconds
            if value? 'message [
                system/words/message: message
            ]
            view layout [
                monitor: text 256 "" rate 1 feel [engage: :periodic]
                button 256 "re/start countdown" [
                    either value? 'message [
                        set-alarm seconds message
                    ][
                        set-alarm seconds "Alarm triggered!"
                    ]
                    set-face monitor "Alarm set."
                ]
            ]
        ]

    Read the article

  • how to parametrize an import in a View?

    - by Gianluca Colucci
    Hello everybody, I am looking for some help and I hope that some good soul out there will be able to give me a hint :) In the app I am building, I have a View. When an instance of this View gets created, it "imports" (MEF) the corresponding ViewModel. Here is some code:

        public partial class ContractEditorView : Window
        {
            public ContractEditorView()
            {
                InitializeComponent();
                CompositionInitializer.SatisfyImports(this);
            }

            [Import(ViewModelTypes.ContractEditorViewModel)]
            public object ViewModel
            {
                set { DataContext = value; }
            }
        }

    and here is the export for the ViewModel:

        [PartCreationPolicy(CreationPolicy.NonShared)]
        [Export(ViewModelTypes.ContractEditorViewModel)]
        public class ContractEditorViewModel : ViewModelBase
        {
            public ContractEditorViewModel()
            {
                _contract = new Models.Contract();
            }
        }

    Now, this works if I want to open a new window to create a new contract, but it does not if I want to use the same window to edit an existing contract. In other words, I would like to add a second constructor both to the View and to the ViewModel: the original constructor would accept no parameters and therefore create a new contract; to the new constructor - the one I'd like to add - I'd like to pass an ID so that I can load a given entity. What I don't understand is how to pass this ID to the ViewModel import... Thanks in advance, Cheers, Gianluca.

    Read the article

  • Alternate widgets and logic for ManyToManyField with Django forms

    - by Jaearess
    In my Django project, I have a simple ticket system. When creating a ticket, certain users have the ability to assign the ticket to other users, and to email the ticket to other users as well (this is used as an FYI for those users, so they're aware of the ticket, even though it's not assigned to them.) At the moment, the form for adding a ticket is simply the default Django form, with the "assigned_to" and "email_to" fields being ManyToManyFields, and therefore displayed as MultipleSelect widgets, each with a list of all users. Due to the relatively large number of users, and general awkwardness of the MultipleSelect widget, and alternate layout is now required. The desired layout is a pair of simple Select widgets side-by-side. The first has the option of "Assign to" or "Email to" and the second is a list of the users. Essentially, like this: [Assign to] [John Doe] [Email to] [Jane Roe] [Jack Smith], etc. Of course, since an arbitrary number of users can be assigned or emailed a ticket, there's a simple button that runs some Javascript to add another set of widgets, to allow the user to assign and email as many people as they need to. So far all of that is fairly simple and straight forward. However, the problem I have is using this widget setup/logic setup with Django forms. Instead of lists of users to assign to and email, instead we're getting back pairs of information, one a user and the other which list that user should be placed in. What I'm looking for, but have yet to find, is a way to offload the translation between how the user uses the form, and how Django understands the model to the form itself, so I don't have to manually do the processing of the data before passing it to the form in each place this form is used. Additionally, there's a review screen with the option to go back and change the form before submitting it, so a way to have the form translate both to and from this format would be extremely helpful.
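
    A rough sketch of one way to express the (action, user) rows as a Django formset and fold them back into the two lists the ManyToManyFields expect, so the translation lives next to the form rather than in each view; the field, choice and model names here are assumptions, not taken from the project:

        from django import forms
        from django.contrib.auth.models import User
        from django.forms import formset_factory

        ACTION_CHOICES = (("assign", "Assign to"), ("email", "Email to"))

        class TicketRecipientForm(forms.Form):
            """One (action, user) row; the page holds a variable number of these."""
            action = forms.ChoiceField(choices=ACTION_CHOICES)
            user = forms.ModelChoiceField(queryset=User.objects.all())

        # A formset provides the "add another row" behaviour the JavaScript drives.
        TicketRecipientFormSet = formset_factory(TicketRecipientForm, extra=1)

        def split_recipients(formset):
            """Translate validated rows back into the two lists for assigned_to / email_to."""
            assigned, emailed = [], []
            for form in formset:
                if not form.cleaned_data:
                    continue  # skip blank extra rows
                target = assigned if form.cleaned_data["action"] == "assign" else emailed
                target.append(form.cleaned_data["user"])
            return assigned, emailed

    The same formset can be re-bound with initial data built from the saved ticket, which covers the review screen's "go back and change" round trip.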

    Read the article

  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks, we have a Linux system (Kubuntu 7.10) that runs a number of CORBA server processes. The server software uses the glibc libraries for memory allocation. The Linux PC has 4G of physical memory. Swap is disabled for speed reasons. Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2G bytes. It can be up to about 1.9G bytes. When the request has completed, the buffer is released using 'delete'. This works fine for several consecutive requests that allocate buffers of the same size, or if a request allocates a smaller size than the previous one. The memory appears to be freed OK - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard etc. The problem arises when a request requires a buffer larger than the previous one. In this case, operator 'new' throws an exception. It's as if the memory that has been freed from the first allocation cannot be re-allocated, even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds; i.e. killing the process appears to fully release the freed memory back to the system. Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is being released to the system. BTW - I'm not sure if it's relevant to our problem, but the server uses pthreads that get created and destroyed on each processing request. Cheers, Brian.

    Read the article

  • Volatile fields in C#

    - by Danny Chen
    From the specification, 10.5.3 Volatile fields: the type of a volatile field must be one of the following: a reference-type; the type byte, sbyte, short, ushort, int, uint, char, float, bool, System.IntPtr, or System.UIntPtr; or an enum-type having an enum base type of byte, sbyte, short, ushort, int, or uint. First I want to confirm that my understanding is correct: I guess the above types can be volatile because they are stored as a 4-byte unit in memory (reference types because of their address), which guarantees that read/write operations are atomic. A double/long/etc. type can't be volatile because reads and writes of it are not atomic, since it takes more than 4 bytes in memory. Is my understanding correct? And second, if the first guess is correct, why can't a user-defined struct with only one int field in it (or something similar, 4 bytes is fine) be volatile? Theoretically it's atomic, right? Or is it not allowed simply because all user-defined structs (which could be more than 4 bytes) are disallowed from being volatile by design?

    Read the article

  • GAE datastore querying integer fields

    - by ParanoidAndroid
    I notice strange behavior when querying the GAE datastore. Under certain circumstances Filter does not work for integer fields. The following java code reproduces the problem: log.info("start experiment"); DatastoreService datastore = DatastoreServiceFactory.getDatastoreService(); int val = 777; // create and store the first entity. Entity testEntity1 = new Entity(KeyFactory.createKey("Test", "entity1")); Object value = new Integer(val); testEntity1.setProperty("field", value); datastore.put(testEntity1); // create the second entity by using BeanUtils. Test test2 = new Test(); // just a regular bean with an int field test2.setField(val); Entity testEntity2 = new Entity(KeyFactory.createKey("Test", "entity2")); Map<String, Object> description = BeanUtilsBean.getInstance().describe(test2); for(Entry<String,Object> entry:description.entrySet()){ testEntity2.setProperty(entry.getKey(), entry.getValue()); } datastore.put(testEntity2); // now try to retrieve the entities from the database... Filter equalFilter = new FilterPredicate("field", FilterOperator.EQUAL, val); Query q = new Query("Test").setFilter(equalFilter); Iterator<Entity> iter = datastore.prepare(q).asIterator(); while (iter.hasNext()) { log.info("found entity: " + iter.next().getKey()); } log.info("experiment finished"); the log looks like this: INFO: start experiment INFO: found entity: Test("entity1") INFO: experiment finished For some reason it only finds the first entity even though both entities are actually stored in the datastore and both 'field' values are 777 (I see it in the Datastore Viewer)! Why does it matter how the entity is created? I would like to use BeanUtils, because it is convenient. The same problem occurs on the local devserver and when deployed to GAE.

    Read the article

  • Strange Behaviour in Swift: constant defined with LET but behaving like a variable defined with VAR

    - by Sam
    Stuck on the below for a day! Any insight would be greatly appreciated. The constant in the first block match0 behaves as expected. The constant defined in the second block does not behave as nicely in the face of a change to its "source": var str = "+y+z*1.0*sum(A1:A3)" if let range0 = str.rangeOfString("^\\+|^\\-|^\\*|^\\/", options: NSStringCompareOptions.RegularExpressionSearch){ let match0 = str[range0] println(match0) //yields "+" - as expexted str.removeRange(range0) println(match0) //yields "+" - as expected str.removeRange(range0) println(match0) //yields "+" - as expected } if let range1 = str.rangeOfString("^\\+|^\\-|^\\*|^\\/", options: NSStringCompareOptions.RegularExpressionSearch){ let match1 = str[range1] println(match1) //yields "+" as expected str.removeRange(range1) println(match1) //!@#$ OMG!!!!!!!!!!! a constant variable has changed! This prints "z" } The following are the options I can see: match1 has somehow obtained a reference to its source instead of being copied by value [Problem: Strings are value types in Swift] match1 has somehow obtained a closure to its source instead of just being a normal constant/variable? [Problem: sounds like science fiction & then why does match0 behave so well?] Could there be a bug in the Swift compiler? [Problem: Experience has taught me that this is very very very rarely the solution to your problem...but it is still in beta]

    Read the article

  • Is it legal to have different SOAP namespaces/versions between the request and response?

    - by Lord Torgamus
    THIRD EDIT: I now believe that this problem is due to a SOAP version mismatch (1.1 request, 1.2 response) masquerading as a namespace problem. Is it illegal to mix versions, or just bad style? Am I completely out of luck if I can't change my SOAP version or the service's? SECOND EDIT: Clarified error message, and tried to reduce "tl;dr"-ness. EDIT: [Link deleted, not related] Using soapUI, I'm sending a request that starts with: <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" ... and getting a response that starts with: <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" ... I know the service is getting the info, because processes down the line are working. However, my soapUI teststep fails. It has two active assertions: "SOAP Response" and "Not SOAP Fault." The failure marker is next to "SOAP Response," with the following message: line -1: Element Envelope@http://www.w3.org/2003/05/soap-envelope is not a valid Envelope@http://schemas.xmlsoap.org/soap/envelope/ document or a valid substitution. I have tried mixing and matching the namespace prefixes and schema URLs. Changing prefixes seems to have no effect; changing URLs causes a VersionMismatch error. I have also tried to use a substitution group, but that doesn't seem to be legal.

    Read the article

  • Parse numbers in single textbox for query

    - by Joshua Slocum
    I’ve built a webform in Visual Web Developer Express 2008 to help me with my work. I use the webform to run query requests that are emailed to me. The inputs are in this format:

        12312 12312
        12312 12312
        12312 12312
        12312 12312

    I enter the first number in a textbox and the second number in another textbox, then click a button that runs a query and returns the results in a GridView (single row):

        string strConn, strSQL;
        strConn = AppConfig.Connection;
        strSQL = "select fields from table where FirstNum=:FirstNum and SecondNum=:SecondNum";
        using (OracleConnection cn = new OracleConnection(strConn))
        {
            OracleCommand cmd = new OracleCommand(strSQL, cn);
            cmd.Parameters.AddWithValue(":FirstNum", txtFirstNum.Text);
            cmd.Parameters.AddWithValue(":SecondNum", txtSecondNum.Text);
            cn.Open();
            using (OracleDataReader rdr = cmd.ExecuteReader())
            {
                dgResults.DataSource = rdr;
                dgResults.DataBind();
            }
            cn.Close();
        }

    I had an idea to help me speed up my work. I’d like to be able to paste both numbers into a single textbox (like this: 12312 12312) and have the code parse out the numbers for the query. Even better would be to paste all of them into a multiline textbox like this:

        12312 12312
        12312 12312
        12312 12312
        12312 12312

    and have them all parsed, the query run for each line, and the results all output to one GridView. I’m just not sure how to approach this. Any suggestions would be appreciated. Thank you.

    Read the article

  • Using the Apple Scripting Bridge with Mail to send an attachment causes the message background to go black

    - by Naym
    Hello, When I use the Apple Scripting Bridge to send a message with an attachment the background of the message is set to black which is a problem because the text is also black. The code in question is: MailApplication *mail = [SBApplication applicationWithBundleIdentifier:@"com.apple.Mail"]; /* create a new outgoing message object */ MailOutgoingMessage *emailMessage = [[[mail classForScriptingClass:@"outgoing message"] alloc] initWithProperties: [NSDictionary dictionaryWithObjectsAndKeys: emailSubject, @"subject", [self composeEmailBody], @"content", nil]]; /* add the object to the mail app */ [[mail outgoingMessages] addObject: emailMessage]; /* set the sender, show the message */ emailMessage.sender = [NSString stringWithFormat:@"%@ <%@>",[[[mail accounts] objectAtIndex:playerOptions.mailAccount] fullName],[[[[mail accounts] objectAtIndex:playerOptions.mailAccount] emailAddresses] objectAtIndex:0]]; emailMessage.visible = YES; /* create a new recipient and add it to the recipients list */ MailToRecipient *theRecipient = [[[mail classForScriptingClass:@"to recipient"] alloc] initWithProperties: [NSDictionary dictionaryWithObjectsAndKeys: opponentEmail, @"address", nil]]; [emailMessage.toRecipients addObject: theRecipient]; /* add an attachment, if one was specified */ if ( [playerInfo.gameFile length] > 0 ) { /* create an attachment object */ MailAttachment *theAttachment = [[[mail classForScriptingClass:@"attachment"] alloc] initWithProperties: [NSDictionary dictionaryWithObjectsAndKeys: playerInfo.gameFile, @"fileName", nil]]; /* add it to the list of attachments */ [[emailMessage.content attachments] addObject: theAttachment]; } /* send the message */ [emailMessage send]; The actual change to the background colour occurs on the second last line, which is: [[emailMessage.content attachments] addObject: theAttachment]; The code sections above are essentially lifted from the SBSendMail example code from Apple. At this stage I've really only made the changes necessary to integrate with the data from my application. If I build and run the SBSendMail example after freshly downloading it from Apple the message background is also changed to black with execution of the same line. It does not appear to matter which type of file is attached, where it is located, or on which computer or operating system is used. This could be a bug in Apple's scripting bridge but has anyone come across this problem and found a solution? ALternatively does anyone know if the background colour of a MailOutgoingMessage instance can be changed with the scripting bridge? Thanks for any assistance with this, Naym.

    Read the article

  • Asynchronous event loop design and issues.

    - by Artyom
    Hello, I'm designing an event loop for asynchronous socket IO using epoll/devpoll/kqueue/poll/select (including Windows select). I have two options for performing an IO operation:

    Non-blocking mode, poll on EAGAIN: Set the socket to non-blocking mode. Read/write to the socket. If the operation succeeds, post a completion notification to the event loop. If I get EAGAIN, add the socket to the "select list" and poll the socket.

    Polling mode: poll and then execute: Add the socket to the select list and poll it. Wait for the notification that it is readable or writable. Read/write. Post a completion notification to the event loop if it succeeds.

    To me it looks like the first would require fewer system calls when used in normal mode, especially for writing to a socket (buffers are quite big). Also it looks like it would be possible to reduce the overhead in the number of "select" executions, which is especially nice when you do not have something that scales as well as epoll/devpoll/kqueue. Questions: Are there any advantages to the second approach? Are there any portability issues with non-blocking operations on sockets/file descriptors across the numerous operating systems: Linux, FreeBSD, Solaris, MacOSX, Windows? Notes: Please do not suggest using existing event-loop/socket-api implementations.
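
    A small sketch of the first option (write optimistically, fall back to readiness polling on EAGAIN) using Python's selectors module, just to make the control flow concrete; sockets are assumed to already be in non-blocking mode, and buffering and error handling are stripped down:

        import selectors
        import socket

        sel = selectors.DefaultSelector()   # epoll/kqueue/poll/select, whichever the OS offers

        def send_nonblocking(sock, data):
            """Try to write immediately; only register with the poller if the kernel pushes back."""
            try:
                sent = sock.send(data)
            except BlockingIOError:          # EAGAIN / EWOULDBLOCK
                sent = 0
            if sent < len(data):
                # Couldn't finish in one call: remember the leftover and poll for writability.
                sel.register(sock, selectors.EVENT_WRITE, data[sent:])
            else:
                on_write_complete(sock)

        def on_write_complete(sock):
            print("write finished on fd", sock.fileno())

        def event_loop():
            while True:
                for key, _events in sel.select(timeout=1.0):
                    sock, pending = key.fileobj, key.data
                    sel.unregister(sock)
                    send_nonblocking(sock, pending)   # retry; re-registers if it blocks again

    The point of the sketch is that the common case (socket buffer has room) costs one send() and no poller round trip at all; the poller is only consulted for sockets that actually reported EAGAIN.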

    Read the article

  • Optimization in Python - do's, don'ts and rules of thumb.

    - by JV
    Well, I was reading this post and then I came across this code:

        jokes=range(1000000)
        domain=[(0,(len(jokes)*2)-i-1) for i in range(0,len(jokes)*2)]

    I thought, wouldn't it be better to calculate the value of len(jokes) once outside the list comprehension? Well, I tried it and timed three versions of the code:

        jv@Pioneer:~$ python -m timeit -s 'jokes=range(1000000);domain=[(0,(len(jokes)*2)-i-1) for i in range(0,len(jokes)*2)]'
        10000000 loops, best of 3: 0.0352 usec per loop
        jv@Pioneer:~$ python -m timeit -s 'jokes=range(1000000);l=len(jokes);domain=[(0,(l*2)-i-1) for i in range(0,l*2)]'
        10000000 loops, best of 3: 0.0343 usec per loop
        jv@Pioneer:~$ python -m timeit -s 'jokes=range(1000000);l=len(jokes)*2;domain=[(0,l-i-1) for i in range(0,l)]'
        10000000 loops, best of 3: 0.0333 usec per loop

    Observing the marginal difference of 2.55% between the first and the second made me think - is the first list comprehension

        domain=[(0,(len(jokes)*2)-i-1) for i in range(0,len(jokes)*2)]

    optimized internally by Python? Or is 2.55% a big enough optimization (given that len(jokes)=1000000)? If so - what are the other implicit/internal optimizations in Python? What are the developer's rules of thumb for optimization in Python?

    Edit1: Since most of the answers are "don't optimize, do it later if it's slow" and I got some tips and links from Triptych and Ali A for the do's, I will change the question a bit and ask for the don'ts. Can we have some experiences from people who faced the 'slowness' - what was the problem and how was it corrected?

    Edit2: For those who haven't, here is an interesting read.

    Edit3: Incorrect usage of timeit in the question - please see dF's answer for the correct usage, and hence the timings for the three pieces of code.
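
    As the last edit notes, the commands above put everything in -s (setup), so the timed statement was empty and the reported times don't measure the comprehension at all. A corrected form of the three measurements, as a sketch of the intended comparison, keeps only the list construction in the timed statement:

        python -m timeit -s 'jokes=range(1000000)' \
            'domain=[(0,(len(jokes)*2)-i-1) for i in range(0,len(jokes)*2)]'

        python -m timeit -s 'jokes=range(1000000)' \
            'l=len(jokes); domain=[(0,(l*2)-i-1) for i in range(0,l*2)]'

        python -m timeit -s 'jokes=range(1000000)' \
            'l=len(jokes)*2; domain=[(0,l-i-1) for i in range(0,l)]'

    With the statement separated from the setup, each run actually builds the two-million-element list, so the per-loop times (and any difference from hoisting len() out) become meaningful.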

    Read the article

  • iPhone and OpenCV

    - by gn-mithun
    Hello, I am writing an application to create a movie file from a bunch of images on an iPhone. I am using OpenCV. I downloaded the OpenCV static libraries for ARM (the iPhone's native instruction architecture) and the libraries were generated just fine. There were no problems linking to them. As a first step, I was trying to create a .avi file using one image, to see if it works, but cvCreateVideoWriter always returns a NULL value. I did some searching and I believe it's due to the codec not being present. I am trying this on the iPhone simulator. This is what I do:

        - (void)viewDidLoad {
            [super viewDidLoad];
            UIImage *anImage = [UIImage imageNamed:@"1.jpg"];
            IplImage *img_color = [self CreateIplImageFromUIImage:anImage];
            //The image gets created just fine
            CvVideoWriter *writer = cvCreateVideoWriter("out.avi", CV_FOURCC('P','I','M','1'), 25, cvSize(320,480), 1);
            //writer is always null
            int result = cvWriteFrame(writer, img_color);
            NSLog(@"\n%d", result);
            //hence this is also 0 all the time
            cvReleaseVideoWriter(&writer);
        }

    I am not sure about the second parameter. What sort of codec is it, or what exactly does it do? I am a noob at this. Any suggestions?

    Read the article
