Search Results


  • Testing subpackage modules in Python 3

    - by Mitchell Model
    I have been experimenting with various uses of hierarchies like this and the differences between absolute and relative imports, and I can't figure out how to do routine things with the package, subpackages, and modules without simply putting everything on sys.path. I have a two-level package hierarchy:

        MyApp
            __init__.py
            Application
                __init__.py
                Module1
                Module2
                ...
            Domain
                __init__.py
                Module1
                Module2
                ...
            UI
                __init__.py
                Module1
                Module2
                ...

    I want to be able to do the following:

    1. Run test code in a module's "if main" section when the module imports from other modules in the same directory.
    2. Have one or more test modules in each subpackage that run unit tests on the modules in that subpackage.
    3. Have a set of unit tests that reside someplace reasonable but outside the subpackages, either in a sibling package, at the top-level package, or outside the top-level package (though these might end up just running the tests in each subpackage).
    4. "Enter" the structure from any of the three subpackage levels: run code that uses only Domain modules; run code that uses Application modules, where Application uses code from both Application and Domain; and run code from UI that uses code from both UI and Application. For instance, Application test code would import Application modules but not Domain modules.
    5. Continue developing and testing after organizing the modules into this hierarchy, having developed the bulk of the code without subpackages.

    I know how to use relative imports so that external code that puts MyApp on its sys.path can import MyApp, import any subpackages it wants, and import things from their modules, while the modules in each subpackage can import other modules from the same subpackage or from sibling packages. However, the development needs listed above seem incompatible with subpackage structuring. In other words, I can't have it both ways: a well-structured multi-level package hierarchy used from the outside and also used from within, in particular for testing, but also because modules from one design level (in particular the UI) should not import modules from a design level below the next one down. Sorry for the long essay, but I think it fairly represents the struggles a lot of people have had adapting to the new relative import mechanisms.
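    A sketch of one pattern that covers points 1 and 2 above (my suggestion, not part of the question): make each per-subpackage test file a module that is run with python -m from the directory containing MyApp, so package-relative imports resolve without touching sys.path. Everything here except the package names from the question is hypothetical.

        # MyApp/Application/test_application.py -- hypothetical test module.
        # Run from the directory that contains MyApp:
        #     python -m MyApp.Application.test_application
        import unittest

        from . import Module1  # relative import works because -m sets __package__

        class Module1Test(unittest.TestCase):
            def test_imported(self):
                self.assertTrue(hasattr(Module1, "__name__"))

        if __name__ == "__main__":
            unittest.main()

    The same invocation style works for a module's own "if main" block, sidestepping the usual failure where running a file inside a package directly breaks its relative imports.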


  • SVN supports historical merges so how is Mercurial better?

    - by radman
    Hi, I'm a long-time SVN user and have been hearing a lot of brouhaha about Mercurial and decentralized version control systems in general. The main touted feature that I am aware of is that merging in Mercurial is much easier because it records information for each merge, so each successive merge is aware of the previous ones. Now, as stated in the red book in the section on merging, SVN already supports this with mergeinfo. I have not actually used this feature (although I wanted to, our repository version wasn't recent enough), but is this SVN feature particularly different from what Mercurial offers? For anyone who is not aware, the suggested workflow for historical merging in SVN is this:

    1. Branch from the development trunk to do your own thing.
    2. Regularly merge changes from trunk into your branch to stay up to date.
    3. Merge back when you're done, with the mergeinfo to smooth the process.

    Without historical merge data this is a nightmare, because the comparison is strictly on the differences in the files and does not take into account the steps taken along the way. So each change in the development trunk puts you further into possible conflict when you merge back. Now what I would like to know is:

    1. Does merging in Mercurial provide a significant advantage compared with mergeinfo in SVN, or is this just a lot of hot air about nothing?
    2. Has anyone used the mergeinfo feature in SVN, and how good is it actually in practice?
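    For concreteness, a sketch of that workflow with mergeinfo-aware commands (Subversion 1.5+ for mergeinfo; the ^/ repository-root shorthand needs a 1.6 client, and the branch name is made up):

        # 1. branch from trunk
        svn copy ^/trunk ^/branches/my-feature -m "Create feature branch"

        # 2. from a working copy of the branch, sync up regularly;
        #    mergeinfo remembers which trunk revisions were already merged
        svn merge ^/trunk

        # 3. from a working copy of trunk, fold the finished branch back in
        svn merge --reintegrate ^/branches/my-feature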


  • Using nohup mysqldump from a PHP script inserts a '!' and breaks to a new line

    - by Aglystas
    I'm trying to run a mysqldump from PHP using the nohup command to prevent the script from hanging. Here's the command (the database is mc6_erik_test; everything else is just a table list until you get to the end):

        exec("mysqldump -u root -pPassword -h vfmy1-dev.mountainmedia.com mc6_erik_test access_log admin affiliate affiliate_2_product authorized_ip category category_2_product claim_code claim_code_log country_exclude customer customer_2_subscription customer_account_log customer_address customer_bill customer_discount customer_ip customer_key email_bulk_log email_draft email_queue email_queue_log email_template endicia_log gift_wrap image_bulk_upload log mailing_list manufacturer merchant merchant_checkout merchant_ip merchant_ship merchant_ship_conf new_account_temp order_dest order_item order_item_2_dest order_item_2_package order_item_log order_item_registrant order_note order_package order_package_label orders package package_2_product pref product product_2_supplier product_also product_event_date product_image product_option product_related product_review product_review_helpful product_ship_disable report search_log subscription supplier temp_product transaction_account transactions wish_list wish_list_fill wish_list_item --opt --where='merchant_id=\'6\'' > /tmp/sync_db_card_20100519105358.sql");

    As you can see, it's really long, because I have to specifically include only the tables I want to dump. The command works great from the command line. However, when I run it through a web script, towards the end the following is being used as the command:

        supplier temp_product transaction_account transactio!
        ns wish_list wish_list_fill wish_list_item --opt --where='merchant_id="6"' > /tmp/sync_db_card_20100519105358.sql

    So the table name 'transactions' is being split by an exclamation point and a newline. The rest of the command is exactly the same. If I run this through the php-cli interface it doesn't happen; it only occurs when I run it via the webserver using nohup. I'm wondering if there is some inherent string-length limit when using the exec command within a PHP script, or really if anyone has any general idea what is going on here.
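    A hedged workaround sketch (mine, not from the post): rather than pushing one very long command line through exec(), write it to a temporary shell script and run that, which sidesteps any line-length or wrapping behaviour in the invoking shell. The table list is elided here:

        <?php
        // Hypothetical workaround: run the long command from a temp script.
        $tables = 'access_log admin affiliate ...'; // full table list from the question
        $cmd = "mysqldump -u root -pPassword -h vfmy1-dev.mountainmedia.com "
             . "mc6_erik_test $tables --opt --where='merchant_id=\"6\"' "
             . "> /tmp/sync_db_card_20100519105358.sql";
        $script = tempnam(sys_get_temp_dir(), 'dump');
        file_put_contents($script, "#!/bin/sh\n$cmd\n");
        chmod($script, 0700);
        exec("/bin/sh $script 2>&1", $output, $status);
        unlink($script);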


  • Combinatorial optimisation of a distance metric

    - by Jose
    I have a set of trajectories, made up of points along the trajectory, with coordinates associated with each point. I store these in a 3D array (trajectory, point, param). I want to find the set of r trajectories that have the maximum accumulated distance between the possible pairwise combinations of those trajectories. My first attempt, which I think is working, looks like this:

        max_dist = 0
        for h in itertools.combinations(xrange(num_traj), r):
            accum = 0.
            for (m, l) in itertools.combinations(h, 2):
                for (i, j) in itertools.izip(range(k), range(k)):
                    A = [(my_mat[m, i, z] - my_mat[l, j, z])**2
                         for z in xrange(k)]
                    A = numpy.array(numpy.sqrt(A)).sum()
                    accum += A
            if max_dist < accum:
                max_dist = accum
                selected_trajectories = h

    This takes forever, as num_traj can be around 500-1000 and r can be around 5-20. k is arbitrary, but can typically be up to 50. Trying to be super-clever, I have put everything into two nested list comprehensions, making heavy use of itertools:

        chunk = [[numpy.sqrt((my_mat[m, i, :] - my_mat[l, j, :])**2).sum()
                  for ((m, l), i, j) in itertools.product(
                      itertools.combinations(h, 2), range(k), range(k))]
                 for h in itertools.combinations(range(num_traj), r)]

    Apart from being quite unreadable (!!!), it is also taking a long time. Can anyone suggest any ways to improve on this?
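    One standard speed-up to suggest (not in the question): the distance between two trajectories never changes, so compute the pairwise-distance matrix once and score every r-combination by lookup. The combination count itself still explodes for large r, so past that point a greedy or stochastic search over pair_dist would be the next step. A sketch, assuming my_mat is a numpy array:

        import itertools
        import numpy as np

        # Pairwise trajectory distance as defined in the question:
        # elementwise |difference| summed over points and parameters.
        num_traj = my_mat.shape[0]
        pair_dist = np.zeros((num_traj, num_traj))
        for m, l in itertools.combinations(range(num_traj), 2):
            d = np.sqrt((my_mat[m] - my_mat[l]) ** 2).sum()
            pair_dist[m, l] = pair_dist[l, m] = d

        # Each candidate set is now scored by table lookups alone.
        best_score, best = 0.0, None
        for h in itertools.combinations(range(num_traj), r):
            score = sum(pair_dist[m, l]
                        for m, l in itertools.combinations(h, 2))
            if score > best_score:
                best_score, best = score, h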


  • Python and a "time value of money" problem.

    - by jamieb
    (I asked this question earlier today, but I did a poor job of explaining myself. Let me try again.) I have a client that is an industrial maintenance company. They sell service agreements that are prepaid 20-hour blocks of a technician's time. Some of their larger customers might burn through that agreement in two weeks, while customers with fewer problems might go eight months on the same contract. I would like to use Python to help model projected sales revenue and determine how many billable hours per month they'll be on the hook for. If each customer only ever bought a single service contract (never renewed), it would be easy to figure sales as:

        monthly_revenue = contract_value * qty_contracts_sold

    Billable hours would also be easy:

        billable_hrs = hrs_per_contract * qty_contracts_sold

    However, how do I account for renewals? Assuming that 90% (or some other arbitrary share) of customers renew, their monthly revenue ought to grow geometrically. Another important variable is how long the average customer takes to burn through a contract. How do I determine what the revenue and billable hours will be 3, 6, or 12 months from now, based on various renewal and burn rates? I assume I'd use some type of recursive function, but math was never one of my strong points. Any suggestions, please?

    Edit: I'm thinking the best way to approach this is as a "time value of money" problem. I've retitled the question as such. The problem is probably a lot more common if you think of "monthly sales" as something similar to annuity payments.
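    A minimal modelling sketch (my construction, all parameter values made up): treat each month's expiring contracts as active/burn_months, renew a fixed share of them, and spread each contract's 20 hours evenly over its burn time:

        def project(months, new_sales_per_month, contract_value,
                    hours_per_contract=20.0, renewal_rate=0.9, burn_months=4.0):
            """Yield (month, revenue, billable hours), assuming contracts
            renew with probability renewal_rate when they burn out and a
            contract's hours are consumed evenly over burn_months."""
            active = 0.0
            for month in range(1, months + 1):
                expiring = active / burn_months      # contracts finishing this month
                renewals = expiring * renewal_rate
                active = active - expiring + renewals + new_sales_per_month
                revenue = (renewals + new_sales_per_month) * contract_value
                billable = active * hours_per_contract / burn_months
                yield month, revenue, billable

        for month, revenue, hours in project(12, new_sales_per_month=10,
                                             contract_value=2000):
            print("%2d  $%9.2f  %7.1f hrs" % (month, revenue, hours))

    With a renewal rate below 100%, the active-contract count converges toward a steady state rather than growing without bound, which is what makes the 3/6/12-month projections stable.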


  • tipfy for Google App Engine: Is it stable? Can auth/session components of tipfy be used with webapp?

    - by cv12
    I am building a web application on Google App Engine that requires users to register with the application and subsequently authenticate with it and maintain sessions. I don't want to force users to have Google accounts. Also, the target audience for the application is the average non-geek, so I'm not very keen on using OpenID or OAuth. I need something simple like: a user registers with an e-mail and password, and then can log back in with those credentials. I understand that this approach does not provide the security benefits of Google or OpenID authentication, but I am prepared to trade foolproof security for end-user convenience and a hassle-free experience.

    I explored Django, but decided that consecutive deprecations from appengine-helper to app-engine-patch to django-nonrel may signal that that path is a bit risky in the long term. I'd like to use a code base that is likely to be maintained consistently. I also explored standalone session/auth packages like gaeutilities and suas. GAEUtilities looked a bit immature (e.g., the code wasn't pythonic in places, in my opinion) and SUAS did not give me a lot of comfort with its cookie-only sessions. I could be wrong in my assessment of these two, so I would appreciate input on them (or others that may serve my objective).

    Finally, I recently came across tipfy. It appears to be based on Werkzeug, and Alex Martelli spoke highly of it here on Stack Overflow. I have two primary questions related to tipfy:

    1. As a framework, is it as mature as webapp? Is it stable and likely to be maintained for some time?
    2. Since my primary interest is the auth/session components, can those components of the tipfy framework be used with webapp, independent of the broader tipfy framework? If yes, I would appreciate a few pointers on how I could go about doing that.


  • Display image background fully for last repeat in a div

    - by Stiggler
    I have a 700x300 background repeating seamlessly under the main content div. Now I'd like to attach a div at the bottom of the content div containing a continuation-to-end of the background image, connecting seamlessly with the background above it. Due to the nature of the pattern, unless the full 300px height of the background image is visible in the last repeat of the content div's background, the background in the div below won't connect seamlessly. Basically, I need the content div's height to be a multiple of 300px under all circumstances. What's a good approach to this sort of problem? I've tried resizing the content div on page load, but this only works as long as the content div doesn't contain any resizing, dynamic content, which is not my case:

        function adjustContentHeight() {
            // Setting content div's height to nearest upper multiple of the
            // background's height, forcing it not to be cut off when repeated.
            var contentBgHeight = 300;
            var contentHeight = $("#content").height();
            var adjustedHeight = Math.ceil(contentHeight / contentBgHeight);
            $("#content").height(adjustedHeight * contentBgHeight);
        }

        $(document).ready(adjustContentHeight);

    What I'm looking for here is a way to respond to a div resize event, but there doesn't seem to be such a thing. Also, please assume I have no access to the JS controlling the resizing of content in the content div, though this is potentially a way of solving the problem. Another potential solution I was thinking of was to offset the background image in the bottom div by a certain amount depending on the height of the content div. Again, the missing piece seems to be the ability to respond to a resize event.
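    Since ordinary divs don't fire resize events (only window does), one crude but dependency-free fallback is to poll; a hedged sketch that re-snaps whenever the real content outgrows the pinned height:

        // Hypothetical watcher: scrollHeight sees the actual content size even
        // after an explicit CSS height has been set on the div (assumes no
        // vertical padding on #content).
        setInterval(function () {
            var el = $("#content")[0];
            if (el.scrollHeight > $("#content").height()) {
                $("#content").height(300 * Math.ceil(el.scrollHeight / 300));
            }
        }, 250);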


  • Python: slow read & write for millions of small files

    - by Jami
    I am building a directory tree which has tons of subdirectories and files. The total directory count is somewhere along 256^32 subdirectories, with 256 files at each end, each only a few bytes long. I did this so I would have fast access to these files (since I'm not searching, and I'm just directly accessing them via a known file path). I have a Python script that builds this filesystem and reads & writes those files. The problem is that when I reach more than 1 GB of total file size, the read and write methods become extremely slow. Here's the function I have that reads the contents of a file (the file contains an integer string), adds a certain number to it, then writes it back to the original file:

        def addInFile(path, scoreToAdd):
            num = scoreToAdd
            try:
                shutil.copyfile(path, '/tmp/tmp.txt')
                fp = open('/tmp/tmp.txt', 'r')
                num += int(fp.readlines()[0])
                fp.close()
            except:
                pass
            fp = open('/tmp/tmp.txt', 'w')
            fp.write(str(num))
            fp.close()
            shutil.copyfile('/tmp/tmp.txt', path)

    I previously tried performing Linux console commands, but that was slower. I copy the file to a temporary file first, then access/modify it, then copy it back, because I found this was faster than directly accessing the file. I think the cause of the slowdown is that there are tons of files. Calling this function 1,000 times sometimes takes a minute now, whereas before, when there were only a few files, 1,000 calls completed in less than a second. How do you suggest I fix this?
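    For comparison, a sketch (my suggestion) that drops the temp-file round trip, so each call costs one read and one write on the target path instead of two extra whole-file copies; the real bottleneck may still be the directory tree itself:

        def add_in_file(path, score_to_add):
            # Read the current value directly; missing or malformed files count as 0.
            num = score_to_add
            try:
                with open(path, 'r') as fp:
                    num += int(fp.read())
            except (IOError, ValueError):
                pass
            with open(path, 'w') as fp:
                fp.write(str(num))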


  • How to increase the performance of a loop that runs every 'n' minutes

    - by GustlyWind
    Hi. Some background on my requirement and what I have accomplished so far: there are 18 scheduler tasks that run at regular intervals (the shortest being 30 minutes). Each takes as input nearly 5,000 eligible employees, runs them through a static method that iterates over them, generates mail content for each employee, and sends it. An average task takes about 9 minutes; multiplied by 18 that is roughly 162 minutes, during which the next tasks will be queuing up (I assume). So my plan is something like the loop below:

        try {
            // Handle ArrayList of alert-eligible employees
            Iterator employee = empIDs.iterator();
            while (employee.hasNext()) {
                ScheduledImplementation.getInstance()
                    .sendAlertInfoToEmpForGivenAlertType((Long) employee.next(),
                                                         configType, schedType);
            }
        } catch (Exception vEx) {
            _log.error("Exception Caught During sending " + configType
                + " messages:" + configType, vEx);
        }

    Since I know how many employees will come to my method, I will divide the while loop into two and perform simultaneous operations on two or three employees at a time. Is this possible? Or are there other ways I can improve the performance? Some of the things I have implemented so far:

    1. Wherever possible, made methods and variables static.
    2. Didn't bother to catch exceptions and send them back, because these are background tasks (and I assume this improves performance).
    3. Get the DB values in one query instead of multiple hits.

    If I am successful in optimizing the while loop, I think I can save a couple of minutes. Thanks.
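    A hedged sketch of that split-the-loop idea using a fixed thread pool (java.util.concurrent, Java 5+); it assumes sendAlertInfoToEmpForGivenAlertType is safe to call from several threads at once, which is worth verifying first:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        // Hypothetical rework: a small pool works the employee list in parallel.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (Object id : empIDs) {
            final Long empId = (Long) id;
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        ScheduledImplementation.getInstance()
                            .sendAlertInfoToEmpForGivenAlertType(empId, configType, schedType);
                    } catch (Exception ex) {
                        _log.error("Failed sending " + configType + " to " + empId, ex);
                    }
                }
            });
        }
        pool.shutdown();                              // no new tasks
        try {
            pool.awaitTermination(1, TimeUnit.HOURS); // wait for the batch to finish
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }

    Whether this helps depends on where the 9 minutes go; if the mail server or database serializes the work, more threads will not buy much.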


  • Why is my GUI unresponsive while a SwingWorker thread runs?

    - by Starchy
    Hello, I have a SwingWorker thread with an IO-bound task which is totally locking up the interface while it runs. Swapping out the normal workload for a counter loop has the same result. The SwingWorker looks basically like this:

        public class BackupWorker extends SwingWorker<String, String> {
            private static String uname = null;
            private static String pass = null;
            private static String filename = null;
            static String status = null;

            BackupWorker(String uname, String pass, String filename) {
                this.uname = uname;
                this.pass = pass;
                this.filename = filename;
            }

            @Override
            protected String doInBackground() throws Exception {
                BackupObject bak = newBackupObject(uname, pass, filename);
                return "Done!";
            }
        }

    The code that kicks it off lives in a class that extends JFrame:

        public void actionPerformed(ActionEvent event) {
            String cmd = event.getActionCommand();
            if (BACKUP.equals(cmd)) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        final StatusFrame statusFrame = new StatusFrame();
                        statusFrame.setVisible(true);
                        SwingUtilities.invokeLater(new Runnable() {
                            public void run() {
                                statusFrame.beginBackup(uname, pass, filename);
                            }
                        });
                    }
                });
            }
        }

    Here's the interesting part of StatusFrame:

        public void beginBackup(final String uname, final String pass,
                                final String filename) {
            worker = new BackupWorker(uname, pass, filename);
            worker.execute();
            try {
                System.out.println(worker.get());
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        }

    As far as I can see, everything "long-running" is handled by the worker, and everything that touches the GUI runs on the EDT. Have I tangled things up somewhere, or am I expecting too much of SwingWorker?
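    One thing worth flagging (my reading, not part of the original post): SwingWorker.get() blocks the calling thread until doInBackground completes, so calling it straight after execute() inside beginBackup stalls the EDT for the whole backup. A sketch of the usual non-blocking shape, overriding done():

        // Hypothetical rework: let the worker report back on the EDT via done()
        // instead of blocking the EDT with get() right after execute().
        worker = new BackupWorker(uname, pass, filename) {
            @Override
            protected void done() {            // runs on the EDT when finished
                try {
                    System.out.println(get()); // get() no longer blocks here
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        };
        worker.execute();                      // returns immediately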


  • Method for defining simultaneous has-many and has-one associations between two models in CakePHP?

    - by Hobonium
    One thing with which I have long had problems within the CakePHP framework is defining simultaneous hasOne and hasMany relationships between two models. For example:

        BlogEntry hasMany Comment
        BlogEntry hasOne MostRecentComment

    (where MostRecentComment is the Comment with the most recent created field). Defining these relationships in the BlogEntry model properties is problematic. CakePHP's ORM implements a has-one relationship as an INNER JOIN, so as soon as there is more than one Comment, BlogEntry::find('all') calls return multiple results per BlogEntry. I've worked around these situations in the past in a few ways:

    1. Using a model callback (or, sometimes, even the controller or view!), I've simulated a MostRecentComment with:

        $this->data['MostRecentComment'] = $this->data['Comment'][0];

    This gets ugly fast if, say, I need to order the Comments any way other than by Comment.created. It also doesn't allow Cake's built-in pagination features to sort by MostRecentComment fields (e.g. sorting BlogEntry results reverse-chronologically by MostRecentComment.created).

    2. Maintaining an additional foreign key, BlogEntry.most_recent_comment_id. This is annoying to maintain, and breaks Cake's ORM: the implication is BlogEntry belongsTo MostRecentComment. It works, but just looks... wrong.

    These solutions left much to be desired, so I sat down with this problem the other day and worked on a better solution. I've posted my eventual solution below, but I'd be thrilled (and maybe just a little mortified) to discover there is some mind-blowingly simple solution that has escaped me this whole time. Or any other solution that meets my criteria:

    - it must be able to sort by MostRecentComment fields at the Model::find level (i.e. not just a massage of the results);
    - it shouldn't require additional fields in the comments or blog_entries tables;
    - it should respect the 'spirit' of the CakePHP ORM.

    (I'm also not sure the title of this question is as concise/informative as it could be.)


  • I can't figure out why this fps counter is inaccurate.

    - by rmetzger
    I'm trying to track frames per second in my game. I don't want the FPS to show as an average; I want to see how the frame rate is affected when I push keys, add models, etc. So I am using variables to store the current and previous times, and when they differ by one second I update the FPS. My problem is that it shows around 33 fps, but when I move the mouse around really fast, the FPS jumps up to 49. Other times, if I change a simple line of code elsewhere unrelated to the frame counter, or close the project and open it later, the FPS will be around 60. Vsync is on, so I can't tell if the mouse is still affecting the FPS. Here is my code, which is in an update function that runs every frame:

        FrameCount++;
        currentTime = timeGetTime();
        static unsigned long prevTime = currentTime;
        TimeDelta = (currentTime - prevTime) / 1000;
        if (TimeDelta > 1.0f) {
            fps = FrameCount / TimeDelta;
            prevTime = currentTime;
            FrameCount = 0;
            TimeDelta = 0;
        }

    Here are the variable declarations:

        int FrameCount;
        double fps, currentTime, prevTime, TimeDelta, TimeElapsed;

    Please let me know what is wrong here and how to fix it, or if you have a better way to count FPS. I am using DirectX 9, by the way, but I doubt that is relevant, and I am using PeekMessage. Should I be using an if-else statement instead? Here is my message processing loop:

        MSG msg;
        ZeroMemory(&msg, sizeof(MSG));
        while (msg.message != WM_QUIT) {
            if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            Update();
            RenderFrame();
        }

    Thanks!
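    A side note and a hedged alternative (mine, not the asker's): timeGetTime's default resolution can be as coarse as 10-15 ms unless timeBeginPeriod(1) is in effect, and the static unsigned long prevTime shadows the double declared elsewhere, which makes the arithmetic easy to get wrong. A sketch using the high-resolution counter instead:

        #include <windows.h>

        // Hypothetical FPS sampler using QueryPerformanceCounter; call once per frame.
        double SampleFps()
        {
            static LARGE_INTEGER freq = {0}, prev = {0};
            static int frames = 0;
            static double fps = 0.0;

            if (freq.QuadPart == 0) {
                QueryPerformanceFrequency(&freq);
                QueryPerformanceCounter(&prev);
            }

            ++frames;
            LARGE_INTEGER now;
            QueryPerformanceCounter(&now);
            double elapsed = double(now.QuadPart - prev.QuadPart) / double(freq.QuadPart);
            if (elapsed >= 1.0) {
                fps = frames / elapsed;   // average over the last ~second
                frames = 0;
                prev = now;
            }
            return fps;
        }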


  • How to set the Delphi WebBrowser base directory different from the HTML location

    - by Steve
    I have a Delphi program that creates HTML files. Later, when a button is pressed, a TWebBrowser is created and WebBrowser.Navigate causes the HTML page to display. Is there any way to set the WebBrowser's "default directory" so it will always be the location of the Delphi executable, not the HTML file? I do not want to set the base value in the HTML to a hardcoded value, because then when the HTML is opened from another Delphi exe the resources are not found. For example:

        if the exe is run from D:\data\delphi\pgm.exe,
        the base location is D:\data\delphi\ and the JPGs are at D:\data\delphi\jpgs\

        but if the exe is run from C:\stuff\pgm.exe,
        I want the base location to be C:\stuff\ and the JPGs to be at C:\stuff\jpgs\

    So I cannot write a line in the HTML with the base location, since when it is opened from another exe it would point to the wrong location for that exe. I either need to set the base location when I create the WebBrowser, before I read the HTML, or I need a way to pass into the WebBrowser the location where I can then set the base location. Sorry for being so long-winded, but I could not figure out how to say what I needed.
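    For what it's worth, a hedged sketch of one way to get this effect (my suggestion; the BaseHref helper and the HTML-generation step are assumptions about the surrounding program): compute the base at run time from the running exe and write it into the generated HTML as a base tag, so each exe points the browser at its own directory:

        // Hypothetical helper: file:/// URL for the directory of the running exe.
        uses SysUtils, Forms;

        function BaseHref: string;
        begin
          Result := 'file:///' + StringReplace(ExtractFilePath(Application.ExeName),
            '\', '/', [rfReplaceAll]);
        end;

        // ...when generating the HTML file:
        // HtmlLines.Add('<base href="' + BaseHref + '">');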


  • Improving the performance of XSL

    - by Rachel
    I am using the XSL 2.0 code below to find the IDs of the text nodes that contain the list of indices that I give as input. The code works perfectly, but in terms of performance it takes a long time for huge files. Even for huge files, if the index values are small, the result comes quickly, in a few milliseconds. I am using the Saxon 9 HE Java processor to execute the XSL.

        <xsl:variable name="insert-data" as="element(data)*">
            <xsl:for-each-group select="doc($insert-file)/insert-data/data"
                                group-by="xsd:integer(@index)">
                <xsl:sort select="current-grouping-key()"/>
                <data index="{current-grouping-key()}"
                      text-id="{generate-id(
                          $main-root/descendant::text()[
                              sum((preceding::text(), .)/string-length(.))
                                  ge current-grouping-key()][1])}">
                    <xsl:copy-of select="current-group()/node()"/>
                </data>
            </xsl:for-each-group>
        </xsl:variable>

    With the above solution, if the index value is large, say 270962, the XSL takes 83427 ms to execute. In huge files, with large index values such as 4605415 or 4605431, it takes several minutes. The computation of the variable "insert-data" seems to take the time, though it is a global variable and computed only once. Should the XSL be addressed, or the processor? How can I improve the performance of the XSL?
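    A hedged restructuring idea (an untested sketch, not from the question): the predicate recomputes sum((preceding::text(), .)/string-length(.)) from scratch for every candidate node and every index, which is quadratic; computing each text node's running total once and reusing it across all indices should cut the repeated work:

        <!-- Hypothetical: all text nodes and their cumulative lengths, computed once. -->
        <xsl:variable name="texts" select="$main-root/descendant::text()"/>
        <xsl:variable name="cumulative" as="xs:integer*"
            select="for $i in 1 to count($texts)
                    return sum($texts[position() le $i]/string-length(.))"/>

        <!-- Inside the for-each-group: first position whose running total
             reaches the key, then fetch that node by position. -->
        <xsl:variable name="pos"
            select="(for $p in 1 to count($cumulative)
                     return $p[$cumulative[$p] ge current-grouping-key()])[1]"/>
        <data index="{current-grouping-key()}"
              text-id="{generate-id($texts[$pos])}">
            <xsl:copy-of select="current-group()/node()"/>
        </data>

    Building $cumulative as written is still quadratic in the number of text nodes; if that step dominates, a recursive running-total template would be the next refinement.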


  • Do you feel threatened by younger developers?

    - by bobsmith123
    When an opportunity for a new project in my company presented itself, I volunteered with a little bit of skepticism but a lot of excitement. I have been working at this company for 5 years, on a different project where I was a team lead for several releases. I have a consistent track record of delivering good-quality projects. The new project has a team that was built from the ground up, and management decided to hire new developers. The technologies used are the latest ones, different from those in the previous project I worked on (although both are in .NET). The newer and younger developers on the team have more experience with the latest technologies from their previous gigs. The younger developers have a lot more energy, can work 14 hours a day (they remind me of my early 20s), and live on ramen noodles. Since they are new, I feel they are obligated to show what they are made of and prove their mettle. I feel like I am sidelined in this project, and management obviously likes the younger developers, as they bring more to the table. I know ageism is rampant in our field, and the more I think about this, the more it makes me feel that there are very few transferable skills from one project to the next: one day you could be a team lead managing a number of developers, and the next day someone you hired becomes your boss. Do you see yourself in situations like this very often? What plans do you have to make yourself valuable from a long-term perspective?


  • UriMapper Problem

    - by jsop
    I have the following XAML (nonessential markup removed in the interest of brevity):

        <navigation:Frame x:Name="ContentFrame">
            <navigation:Frame.UriMapper>
                <uriMapper:UriMapper>
                    <uriMapper:UriMapping Uri="/{pageName}"
                                          MappedUri="/Views/{pageName}.xaml"/>
                    <uriMapper:UriMapping Uri="/FMChart/{metricID}/{orgID}"
                                          MappedUri="/Views/FMChart.xaml?metricID={metricID}&amp;orgID={orgID}"/>
                </uriMapper:UriMapper>
            </navigation:Frame.UriMapper>
        </navigation:Frame>

    I'm creating the HyperlinkButton objects dynamically (in code), like so:

        int metricID = 1;
        int orgID = 1;
        HyperlinkButton button = new HyperlinkButton();
        button.Name = Guid.NewGuid().ToString();
        button.TargetName = "ContentFrame";

        // this string doesn't work
        string url = string.Format("/FMChart/{0}/{1}", metricID, orgID);
        button.NavigateUri = new Uri(url, UriKind.Relative);

    When I click the button, the browser renders a blank page and eventually presents me with a REALLY long stack trace (an InvalidOperation exception). If I take the parameters out of the indicated line:

        string url = "/FMChart";

    ...it works as expected (brings up the desired page). I've also tried the following strings:

        /FMChart/{0}&{1}
        /FMChart/{0}&amp;{1}

    What am I doing wrong?


  • Downloading attachments to a directory with IMAP in PHP randomly works

    - by Nick
    I found PHP code online to download attachments to a directory using IMAP, from here: http://www.nerdydork.com/download-pop3imap-email-attachments-with-php.html. I modified it slightly, changing

        $structure = imap_fetchstructure($mbox, $jk);
        $parts = ($structure->parts);

    to

        $structure = imap_fetchstructure($mbox, $jk);
        $parts = ($structure);

    to get it to run properly, as otherwise I got an error about how stdClass doesn't define a property called $parts. Doing that, I was able to download all the attachments. I tested it again recently, though, and it didn't work. Well, it didn't work six times, worked the seventh, and then hasn't worked since. I'm thinking it has something to do with me screwing up the parts handling, since count($parts) keeps returning 1 for each message, so I think it's not finding any attachments. Since it downloaded the attachments at one point with no issues, I feel confident that this is the area where things are getting screwed up. Before this block of code is a for loop that goes through each message in the box, and after it is a loop that just goes through $parts for each IMAP structure. Thanks for any help you can provide. I looked at the imap_fetchstructure page on php.net and can't figure out what I'm doing wrong.

    Edit: I just double-checked the folder after typing up my question, and all the attachments popped up. I feel like I'm going nuts. I hadn't run the code since a few minutes before I started typing this, and it doesn't make sense to me that it would take this long to trigger. I have some 800 messages in the mailbox, but I figured that since it printed my statement at the very end of the PHP, all of the file creation work was done.
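    A hedged guess at the root cause (mine, not from the post): imap_fetchstructure only exposes a parts property for multipart messages, so the right fix is a fallback for single-part messages rather than changing the line for every message. A sketch:

        <?php
        // Handle both multipart and single-part messages: fall back to treating
        // the whole structure as the only "part" when ->parts is absent.
        $structure = imap_fetchstructure($mbox, $jk);
        $parts = isset($structure->parts) ? $structure->parts : array($structure);

    With the original modification, every multipart message was also being treated as a single part, which would explain count($parts) always being 1.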


  • ClassLoader exceptions being memoized

    - by Jim
    Hello, I am writing a classloader for a long-running server instance. If a user has not yet uploaded a class definition, I throw a ClassNotFoundException; that seems reasonable. The problem is this: there are three classes (C1, C2, and C3). C1 depends on C2, and C2 depends on C3. C1 and C2 are resolvable; C3 isn't (yet).

    1. C1 is loaded.
    2. C1 subsequently performs an action that requires C2, so C2 is loaded.
    3. C2 subsequently performs an action that requires C3, so the classloader attempts to load C3, but can't resolve it, and an exception is thrown.
    4. Now C3 is added to the classpath, and the process is restarted (starting from the originally-loaded C1).

    The issue is, C2 seems to remember that C3 couldn't be loaded, and doesn't bother asking the classloader to find the class... it just rethrows the memoized exception. Clearly I can't reload C1 or C2, because other classes may have linked to them (as C1 has already linked to C2). I tried throwing different types of errors, hoping the class might not memoize them. Unfortunately, no such luck. Is there a way to prevent the loaded class from binding to the exception? That is, I want the classloader to be allowed to keep trying if it didn't succeed the first time. Thanks!
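    For what it's worth, a hedged sketch of the usual escape hatch (my suggestion, not from the question): once a class's resolution of a reference has failed, that failure is sticky for the lifetime of the class, so the retry has to happen in a fresh loader. If the design permits, the user classes can live in a disposable child loader that is rebuilt when new definitions arrive:

        import java.net.URL;
        import java.net.URLClassLoader;

        // Hypothetical fragment: userClassDir points at the uploaded class files.
        // Rebuild the child loader after C3 appears; C1 and C2 are then
        // re-linked from scratch against the now-resolvable C3.
        URL[] userClasspath = { userClassDir.toURI().toURL() };
        ClassLoader fresh = new URLClassLoader(userClasspath, serverLoader);
        Class<?> c1 = Class.forName("C1", true, fresh);

    Old instances tied to the previous loader keep their stale linkage, which is the usual price of this approach.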


  • Returning a variable in a public void...

    - by James Rattray
    Hello, I'm a bit new to programming Android apps, and I have come across a problem: I can't find a way to make global variables, unlike in other languages such as PHP or VB.NET. Are global variables possible? If not, can someone find a way (and if possible implement it in the code I provide below) to get the value of the variable 'songtoplay' so I can use it in another method? Here is the code:

        final Spinner hubSpinner = (Spinner) findViewById(R.id.myspinner);
        ArrayAdapter adapter = ArrayAdapter.createFromResource(
                this, R.array.colours, android.R.layout.simple_spinner_item);
        adapter.setDropDownViewResource(
                android.R.layout.simple_spinner_dropdown_item);
        hubSpinner.setAdapter(adapter);

        hubSpinner.setOnItemSelectedListener(new OnItemSelectedListener() {
            public void onItemSelected(AdapterView<?> parentView,
                    View selectedItemView, int position, long id) {
                Object ttestt = hubSpinner.getSelectedItem();
                final String test2 = ttestt.toString();
                Toast message1 = Toast.makeText(Textbox.this, test2,
                        Toast.LENGTH_LONG);
                message1.show();
                String songtoplay = test2;
                // Need songtoplay to be available in another method
            }

            public void onNothingSelected(AdapterView<?> parentView) {
                // Code
            }
        });

    Basically, it gets the value from the spinner 'hubSpinner' and displays it in a Toast. I then want it to return a value for the string variable 'songtoplay', or find a way to make it global or usable in another method (which will be a button handler, loading the song to be played). Please help me. Thanks a lot. James
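    A minimal sketch of the usual Java answer (the helper and field names here are made up): state shared between methods belongs in a field of the enclosing Activity rather than in a local variable:

        public class Textbox extends Activity {
            // Instance field: visible to every method of this Activity,
            // including the spinner listener and the button handler.
            private String songToPlay;

            // inside onItemSelected(...):
            //     songToPlay = test2;

            // inside the button's onClick(...):
            //     playSong(songToPlay);   // hypothetical helper
        }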


  • SQL Server search filter and order by performance issues

    - by John Leidegren
    We have a table-valued function that returns a list of people you may access, and we have a relation between a search and a person called SearchResult. What we want to do is select all people from the search and present them. The query looks like this:

        SELECT qm.PersonID, p.FullName
        FROM QueryMembership qm
        INNER JOIN dbo.GetPersonAccess(1)
            ON GetPersonAccess.PersonID = qm.PersonID
        INNER JOIN Person p ON p.PersonID = qm.PersonID
        WHERE qm.QueryID = 1234

    There are only 25 rows with QueryID = 1234, but there are almost 5 million rows total in the QueryMembership table. The Person table has about 40K people in it. QueryID is not a PK, but it is an index. The query plan tells me 97% of the total cost is spent doing a "Key Lookup" with the seek predicate

        QueryMembershipID = Scalar Operator (QueryMembership.QueryMembershipID as QM.QueryMembershipID)

    Why is the PK in there when it's not used in the query at all? And why is it taking so long? The total number of people is 25; with the index, this should be a scan over just the QueryMembership rows that have QueryID = 1234, and then a JOIN on the 25 people that exist in the table-valued function, which, by the way, only has to be evaluated once and completes in less than one second.
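    A hedged sketch of the usual fix for key-lookup-dominated plans: make the nonclustered index on QueryID covering, so the matching rows are served from the index alone (the INCLUDE form assumes SQL Server 2005+; the index name is made up):

        -- Covering index: the seek on QueryID no longer needs a key lookup
        -- into the clustered index to fetch PersonID.
        CREATE NONCLUSTERED INDEX IX_QueryMembership_QueryID_PersonID
            ON QueryMembership (QueryID)
            INCLUDE (PersonID);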


  • Mac CoreLocation Services does not ask for permissions

    - by Ryan Nichols
    I'm writing a Mac app that needs to use Core Location services. The code and location work fine, as long as I manually authorize the service inside the Security preference pane. However, the framework does not automatically pop up a permission dialog. The documentation states:

        Important: The user has the option of denying an application's access to
        the location service data. During its initial uses by an application, the
        Core Location framework prompts the user to confirm that using the
        location service is acceptable. If the user denies the request, the
        CLLocationManager object reports an appropriate error to its delegate
        during future requests.

    I do get an error to my delegate, and the value of +locationServicesEnabled on CLLocationManager is correct. The only part missing is the prompt to the user about permissions. This occurs on my development MBP and a friend's MBP; neither of us can figure out what's wrong. Has anyone run into this? Relevant code:

        _locationManager = [CLLocationManager new];
        [_locationManager setDelegate:self];
        [_locationManager setDesiredAccuracy:kCLLocationAccuracyKilometer];
        ...
        [_locationManager startUpdatingLocation];

    UPDATE -- Answer: It seems there is a problem with sandboxing, in which the Core Location framework is not allowed to talk to com.apple.CoreLocation.agent. I suspect this agent is responsible for prompting the user for permissions. If you add the Location Services entitlement (com.apple.security.personal-information.location), it only gives your app the ability to use the Core Location framework; you also need access to the Core Location agent to ask the user for permissions. You can give your app access by adding the entitlement 'com.apple.security.temporary-exception.mach-lookup.global-name' with a value of 'com.apple.CoreLocation.agent'. Users will then be prompted for access automatically, as you would expect. I've already filed a bug with Apple on this.
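    For reference, a sketch of how the two entitlements described in the update would look in the app's .entitlements plist (the keys and the agent name are taken from the question itself; verify against current sandboxing documentation):

        <key>com.apple.security.personal-information.location</key>
        <true/>
        <key>com.apple.security.temporary-exception.mach-lookup.global-name</key>
        <array>
            <string>com.apple.CoreLocation.agent</string>
        </array>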


  • Why is execution of a portion of code loaded from an external file not halted by the OS?

    - by menjaraz
    I've harnessed a project released on the internet a long time ago. Here are the details, with all irrelevant things stripped off for the sake of concision and clarity. A binary file whose content is described below:

        HEX DUMP:
        55 89 E5 83 EC 08 C7 45 FC 00 00 00 00 8B 45 FC
        3B 45 10 72 02 EB 19 8B 45 FC 8B 55 0C 01 C2 8B
        45 FC 03 45 08 8A 00 88 02 8D 45 FC FF 00 EB DD
        C6 45 FA 00 83 7D 10 01 76 6C 80 7D FA 00 74 02
        EB 64 C6 45 FA 01 C7 45 FC 00 00 00 00 8B 45 10
        48 39 45 FC 72 02 EB E2 8B 45 FC 8B 4D 0C 01 C1
        8B 45 FC 03 45 0C 8D 50 01 8A 01 3A 02 73 30 8B
        45 FC 03 45 0C 8A 00 88 45 FB 8B 45 FC 8B 55 0C
        01 C2 8B 45 FC 03 45 0C 40 8A 00 88 02 8B 45 FC
        03 45 0C 8D 50 01 8A 45 FB 88 02 C6 45 FA 00 8D
        45 FC FF 00 EB A7 C9 C2 0C 00 90 90 90 90 90 90

    is loaded into memory and executed using the following method snippet:

        var
          MySrcArray, MyDestArray: array [1 .. 15] of Byte;
          // ...
          MyBuffer: Pointer;
          TheProc: procedure;
          SortIt: procedure(ASrc, ADest: Pointer; ASize: LongWord); stdcall;
        begin
          // Initialization of MySrcArray with random bytes and display here ...
          // Loading of the binary file into MyBuffer using merely GetMem here ...
          @SortIt := MyBuffer;
          try
            SortIt(@MySrcArray, @MyDestArray, 15);
            // Display of MyDestArray (the outcome of the processing!)
          except
            // Invalid code error handling
          end;
          // Cleaning code here ...
        end;

    It works like a charm on my box. My question: how come it works without using VirtualAlloc and/or VirtualProtect?
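    A hedged note (my reading, not from the question): this most likely works only because DEP is not being enforced for the process, so the heap pages returned by GetMem happen to be executable. A sketch of the explicit approach, using standard Win32 calls from the Windows unit (CodeSize is a hypothetical name for the file's length):

        // Hypothetical sketch: request memory that is legitimately executable
        // instead of relying on DEP settings, then release it when done.
        uses Windows;

        var
          Buf: Pointer;
        begin
          Buf := VirtualAlloc(nil, CodeSize, MEM_COMMIT or MEM_RESERVE,
                              PAGE_EXECUTE_READWRITE);
          if Buf <> nil then
          begin
            // ... read the binary file into Buf, assign @SortIt := Buf, call it ...
            VirtualFree(Buf, 0, MEM_RELEASE);
          end;
        end;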


  • How to publish internal data to the internet - as simple as possible

    - by mlarsen
    We have a .NET two-tier application where a desktop program talks to a database. We support MS SQL Server 2000, 2005, and 2008 and Oracle 9, 10, and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important to us that installation and configuration be as easy as possible, as installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission-critical by the IT department, so we need to keep their work down to a minimum.

    Now we are starting to get requests for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. The challenge is how to move data back and forth between the web application and the customer's internal database. As I see it, we have some requirements:

    - We must be ready to handle the situation where the customer's database is not accessible from the DMZ. I guess the easiest solution is for all communication to be initiated from inside the customer's LAN.
    - As little firewall configuration as possible. Best would be if we can run without any special configuration as long as outgoing traffic from the customer's LAN is not blocked. If we need something changed in the firewall, we must be able to document that the change is secure.
    - It doesn't have to be real time. Moving data in batches every ten minutes or so is OK.
    - Data moves both ways, but not in the same tables, so we don't have to support merges.
    - It would be nice if we didn't have to roll our own framework completely.

    Looking forward to hearing your suggestions.


  • Fill in a Word form field with more than 255 characters

    - by user1308743
    I am trying to programmatically fill in a Microsoft Word form. I am able to do so successfully if the string is under 255 characters, with the code below; however, it says the string is too long if I try to use a string over 255 characters. How do I get past this limitation? If I open the document in Word, I can type in more than 255 characters without a problem. Does anyone know how to input more characters via C# code?

        object fileName = strFileName;
        object readOnly = false;
        object isVisible = true;
        object missing = System.Reflection.Missing.Value;

        // open doc
        _oDoc = _oWordApplic.Documents.Open(ref fileName, ref missing,
            ref readOnly, ref missing, ref missing, ref missing, ref missing,
            ref missing, ref missing, ref missing, ref missing, ref isVisible,
            ref missing, ref missing, ref missing, ref missing);
        _oDoc.Activate();

        // write string
        _oDoc.FormFields[oBookMark].Result = value;

        // save and close
        _oDoc.SaveAs(ref fileName, ref missing, ref missing, ref missing,
            ref missing, ref missing, ref missing, ref missing, ref missing,
            ref missing, ref missing, ref missing, ref missing, ref missing,
            ref missing, ref missing);
        _oWordApplic.Application.Quit(ref missing, ref missing, ref missing);
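    One workaround that is often suggested for the 255-character limit of FormField.Result (hedged: I have not verified it across Word versions, so treat it as a hypothesis to test) is to write through the underlying FORMTEXT field's result range instead:

        // Hypothetical workaround: set the text of the FORMTEXT field's
        // result range directly, bypassing FormField.Result's cap.
        Microsoft.Office.Interop.Word.FormField ff = _oDoc.FormFields[oBookMark];
        ff.Range.Fields[1].Result.Text = value;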


  • How to structure an index for type ahead for extremely large dataset using Lucene or similar?

    - by Pete
    I have a dataset of 200 million+ records and am looking to build a dedicated backend to power a type-ahead solution. Lucene is of interest given its popularity and license type, but I'm open to other open-source suggestions as well. I am looking for advice, tales from the trenches, or, even better, direct instruction on what I will need as far as amount of hardware and structure of software. Requirements:

    Must have:

    - The ability to do starts-with substring matching (I type in 'st' and it should match 'Stephen').
    - The ability to return results very quickly; I'd say 500 ms is an upper bound.

    Nice to have:

    - The ability to feed relevance information into the indexing process, so that, for example, more popular terms would be returned ahead of others rather than in alphabetical order, Google style.
    - In-word substring matching (so, for example, 'st' would match 'bestseller').

    Note: this index will be used purely for type-ahead and does not need to serve standard search queries. I am not worried about getting advice on how to set up the front end or AJAX, as long as the index can be queried as a service or directly via Java code. Upvotes for any useful information that allows me to get closer to an enterprise-level type-ahead solution.
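    Independent of Lucene (whose PrefixQuery and edge n-gram analyzers are the usual tools here), the must-have requirement reduces to prefix lookup over sorted terms. A toy baseline sketch of that idea (my construction, not a 200M-record design): binary-search for the first match, then scan the contiguous run:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        // Toy baseline: all terms starting with a prefix form a contiguous
        // run in a sorted list, found via one binary search.
        public final class PrefixIndex {
            private final List<String> terms;

            public PrefixIndex(List<String> input) {
                terms = new ArrayList<String>(input);
                Collections.sort(terms);
            }

            public List<String> startingWith(String prefix, int limit) {
                List<String> hits = new ArrayList<String>();
                int i = Collections.binarySearch(terms, prefix);
                if (i < 0) i = -i - 1;                // insertion point
                while (i < terms.size() && hits.size() < limit
                        && terms.get(i).startsWith(prefix)) {
                    hits.add(terms.get(i++));
                }
                return hits;
            }
        }

    Popularity-ordered results and in-word matching are where a real engine earns its keep, since they break the single-sorted-run property this sketch relies on.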

