Search Results

Search found 6638 results on 266 pages for 'boost range'.

  • String Index Out Of Bound Exception error

    - by Fd Fehfhd
    I'm not really sure why I am getting this error. My code is meant to test palindromes while disregarding punctuation. Here it is: import java.util.Scanner; public class PalindromeTester { public static void main(String [] args) { Scanner kb = new Scanner(System.in); String txt = ""; int left; int right; int cntr = 0; do { System.out.println("Enter a word, phrase, or sentence (blank line to stop):"); txt = kb.nextLine(); txt = txt.toLowerCase(); char yP; String noP = ""; for (int i = 0; i < txt.length(); i++) { yP = txt.charAt(i); if (Character.isLetterOrDigit(txt.charAt(yP))) { noP += yP; } } txt = noP; left = 0; right = txt.length() -1; while (txt.charAt(left) == txt.charAt(right) && right > left) { left++; right--; } if (left > right) { System.out.println("Palindrome"); cntr++; } else { System.out.println("Not a palindrome"); } } while (!txt.equals("")); System.out.println("You found " + cntr + " palindromes. Thank you for using palindromeTester."); } } When I test it and then press Enter on a blank line so it will tell me how many palindromes were found, the error I get is java.lang.StringIndexOutOfBoundsException: String index out of range: 0 at PalindromeTester.main(PalindromeTester.java:38), and line 28 is while (txt.charAt(left) == txt.charAt(right) && right > left). Thanks for the help in advance.

  • Is there any better way to capture the screen than PIL.ImageGrab.grab()?

    - by user1474837
    I am making a screen capture program with Python. My current problem is that PIL.ImageGrab.grab() keeps giving me the same output, even two seconds later. In case I am not being clear: in the following session, almost all the images are the same and have the same Image.tostring() value, even though I was moving things around on my screen while the PIL.ImageGrab.grab loop was executing. >>> from PIL.ImageGrab import grab >>> l = [] >>> import time >>> for a in l: l.append(grab()) time.sleep(0.01) >>> for a in range(0, 30): l.append(grab()) time.sleep(0.01) >>> b = [] >>> for a in l: b.append(a.tostring()) >>> len(b) 30 >>> del l >>> last = [] >>> a = 0 >>> a = -1 >>> last = "" >>> same = -1 >>> for pic in b: if b == last: same = same + 1 last = b >>> same 28 >>> This is a problem: all the images but one are the same, so only 1 out of 30 is different. That would make for an absolutely horrible quality video. Please tell me if there is any better quality alternative to PIL.ImageGrab.grab(). I need to capture the whole screen. Thanks!
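
    One thing worth noting about the test above: the loop "for pic in b: if b == last: ... last = b" compares the whole list b to last instead of comparing pic, so same comes out as 28 regardless of what the frames contain. A re-test that compares consecutive frames directly is sketched below; it assumes Pillow/PIL with ImageGrab support on this platform and uses tobytes() in place of the older tostring(). If the frames really are identical, faster third-party grabbers (the mss package, for example) are the usual suggestion for video-rate capture.

        # Minimal sketch: grab 30 frames and count how many consecutive pairs
        # are byte-for-byte identical, comparing frame to frame rather than
        # comparing the whole list to a sentinel value.
        import time
        from PIL import ImageGrab

        frames = []
        for _ in range(30):
            frames.append(ImageGrab.grab().tobytes())   # .tostring() on old PIL
            time.sleep(0.01)

        identical = sum(1 for a, b in zip(frames, frames[1:]) if a == b)
        print("identical consecutive frames:", identical, "of", len(frames) - 1)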

  • Search for the maximum

    - by peril brain
    I need code that will automatically: search for a specific word in Excel and note its row or column number (depending on the data arrangement); search for numeric values in the respective row or column (suppose a[7][0] or a[0][7]); compare all the other values of the respective row or column (i.e. a[i][0] or a[0][i]); and set that cell to the highest value only if IT HAS GOT NO FORMULA FOR DERIVATION. I know most of the coding, but in a few places I got stuck. Here is the part of my program I have written so far: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; using System.Threading; using Microsoft.Office.Interop; using Excel = Microsoft.Office.Interop.Excel; Excel.Application oExcelApp; namespace a{ class b{ static void main(){ try { oExcelApp = (Excel.Application)System.Runtime.InteropServices.Marshal.GetActiveObject("Excel.Application"); ; if(oExcelApp.ActiveWorkbook != null) {Excel.Workbook xlwkbook = (Excel.Workbook)oExcelApp.ActiveWorkbook; Excel.Worksheet ws = (Excel.Worksheet)xlwkbook.ActiveSheet; Excel.Range rn; rn = ws.Cells.Find("maximum", Type.Missing, Excel.XlFindLookIn.xlValues, Excel.XlLookAt.xlPart,Excel.XlSearchOrder.xlByRows, Excel.XlSearchDirection.xlNext, false, Type.Missing, Type.Missing); }}} Beyond this point I only know that I have to use the cell.Value2 and cell.HasFormula members, and I have no further ideas. Can anyone help me with this?

  • Problem with DB2 Over clause

    - by silent1mezzo
    I'm trying to do pagination with a very old version of DB2, and the only way I could figure out selecting a range of rows was to use the OVER clause. This query provides the correct results (the results that I want to paginate over). select MIN(REFID) as REFID, REFGROUPID from ARMS_REFERRAL where REFERRAL_ID<>'Draft' and REFERRAL_ID not like 'Demo%' group by REFGROUPID order by REFID desc Results: REFID REFGROUPID 302 242 301 241 281 221 261 201 225 142 221 161 ... ... SELECT * FROM ( SELECT row_number() OVER () AS rid, MIN(REFID) AS REFID, REFGROUPID FROM arms_referral where REFERRAL_ID<>'Draft' and REFERRAL_ID not like 'Demo%' group by REFGROUPID order by REFID desc ) AS t WHERE t.rid BETWEEN 1 and 5 Results: REFID REFGROUPID 26 12 22 11 14 8 11 7 6 4 As you can see, it does select the first five rows, but it's obviously not selecting the latest. If I add an ORDER BY clause to the OVER() it gets closer, but still not totally correct. SELECT * FROM ( SELECT row_number() OVER (ORDER BY REFGROUPID desc) AS rid, MIN(REFID) AS REFID, REFGROUPID FROM arms_referral where REFERRAL_ID<>'Draft' and REFERRAL_ID not like 'Demo%' group by REFGROUPID order by REFID desc ) AS t WHERE t.rid BETWEEN 1 and 5 REFID REFGROUPID 302 242 301 241 281 221 261 201 221 161 It's really close, but the 5th result isn't correct (it is actually the 6th result). How do I make this query correct so it can group by REFGROUPID and then order by REFID?

  • Adding fields to Django form dynamically (and cleanly)

    - by scott
    Hey guys, I know this question has been brought up numerous times, but I'm not quite getting the full implementation. As you can see below, I've got a form that I can dynamically tell how many rows to create. How can I create an "Add Row" link that tells the view how many rows to create? I would really like to do it without augmenting the url... # views.py def myView(request): if request.method == "POST": form = MyForm(request.POST, num_rows=1) if form.is_valid(): return render_to_response('myform_result.html', context_instance=RequestContext(request)) else: form = MyForm(num_rows=1) return render_to_response('myform.html', {'form':form}, context_instance=RequestContext(request)) # forms.py class MyForm(forms.Form): def __init__(self, *args, **kwargs): num_rows = kwargs.pop('num_rows',1) super(MyForm, self).__init__(*args, **kwargs) for row in range(0, num_rows): field = forms.CharField(label="Row") self.fields[str(row)] = field # myform.html http://example.com/myform <form action="." method="POST" accept-charset="utf-8"> <ul> {% for field in form %} <li style="margin-top:.25em"> <span class="normal">{{ field.label }}</span> {{ field }} <span class="formError">{{ field.errors }}</span> </li> {% endfor %} </ul> <input type="submit" value="Save"> </form> <a href="ADD_ANOTHER_ROW?">+ Add Row</a>
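
    One way to let "Add Row" drive the row count without changing the URL, sketched below with made-up names (the hidden field "num_rows", the import path, and the submit button name "save" are assumptions, not part of the question): keep the count in a hidden input that a line of JavaScript increments before re-submitting the form, and have the view read it back out of request.POST.

        # Hedged sketch of the view side only; MyForm is the form from the question.
        from django.shortcuts import render          # render() used here for brevity
        from myapp.forms import MyForm               # hypothetical import path

        def my_view(request):
            try:
                num_rows = int(request.POST.get('num_rows', 1))   # hidden field, assumed name
            except (TypeError, ValueError):
                num_rows = 1
            form = MyForm(request.POST or None, num_rows=num_rows)
            # Only treat the post as final when the real submit button (assumed to be
            # named "save") was pressed, so an "Add Row" re-post simply re-renders the
            # form with one more row.
            if request.method == 'POST' and 'save' in request.POST and form.is_valid():
                return render(request, 'myform_result.html', {})
            return render(request, 'myform.html', {'form': form, 'num_rows': num_rows})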

  • Using classes for the first time, help in debugging

    - by kaushik
    Here is my code. This is not the entire program, but it is enough to explain my doubt; please disregard any code line you find irrelevant. saving_tree={} isLeaf=False class tree: global saving_tree rootNode=None lispTree=None def __init__(self,x): file=x string=file.readlines() #print string self.lispTree=S_expression(string) self.rootNode=BinaryDecisionNode(0,'Root',self.lispTree) class BinaryDecisionNode: global saving_tree def __init__(self,ind,name,lispTree,parent=None): self.parent=parent nodes=lispTree.getNodes(ind) print nodes self.isLeaf=(nodes[0]==1) nodes=nodes[1]#Nodes are stored self.name=name self.children=[] if self.isLeaf: #Leaf Node print nodes #Set the leaf data self.attribute=nodes print "LeafNode is ",nodes else: #Set the question self.attribute=lispTree.getString(nodes[0]) self.attribute=self.attribute.split() print "Question: ",self.attribute,self.name tree={} tree={str(self.name):self.attribute} saving_tree=tree #Add the children for i in range(1,len(nodes)):#Since node 0 is a question # print "Adding child ",nodes[i]," who has ",len(nodes)-1," siblings" self.children.append(BinaryDecisionNode(nodes[i],self.name+str(i),lispTree,self)) print saving_tree I wanted to save some data in saving_tree{}, which I have declared previously, and I want to use that saving_tree in another function outside the class. When I print saving_tree it prints, but only for that instance; I want saving_tree{} to hold the data of all instances and to be accessible outside. When I print saving_tree outside the class it prints an empty {}. Please tell me the required modification to get the output I need and to use saving_tree{} outside the class.
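
    The immediate cause of the empty {} is that saving_tree=tree inside __init__ creates a new local name; the global statement sitting in the class body does not reach into the method, so the module-level dict is never touched, and each node would overwrite the previous one even if it were. A minimal sketch of the usual fix, stripped of the poster's other classes: keep one shared dict and mutate it in place (item assignment needs no global declaration), then read it from anywhere, including another module via an import.

        # Stripped-down sketch, not the original classes: every instance writes
        # into the same module-level dict, so the data accumulates and stays
        # visible to code outside the class.
        saving_tree = {}

        class BinaryDecisionNode(object):
            def __init__(self, name, attribute):
                self.name = name
                self.attribute = attribute
                saving_tree[name] = attribute   # mutate in place; no rebinding

        BinaryDecisionNode('Root', ['question 1'])
        BinaryDecisionNode('Root1', ['question 2'])
        print(saving_tree)   # {'Root': ['question 1'], 'Root1': ['question 2']}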

  • Use `require()` with `node --eval`

    - by rentzsch
    When utilizing node.js's newish support for --eval, I get an error (ReferenceError: require is not defined) when I attempt to use require(). Here's an example of the failure: $ node --eval 'require("http");' undefined:1 ^ ReferenceError: require is not defined at eval at <anonymous> (node.js:762:36) at eval (native) at node.js:762:36 $ Here's a working example of using require() typed into the REPL: $ node > require("http"); { STATUS_CODES: { '100': 'Continue' , '101': 'Switching Protocols' , '102': 'Processing' , '200': 'OK' , '201': 'Created' , '202': 'Accepted' , '203': 'Non-Authoritative Information' , '204': 'No Content' , '205': 'Reset Content' , '206': 'Partial Content' , '207': 'Multi-Status' , '300': 'Multiple Choices' , '301': 'Moved Permanently' , '302': 'Moved Temporarily' , '303': 'See Other' , '304': 'Not Modified' , '305': 'Use Proxy' , '307': 'Temporary Redirect' , '400': 'Bad Request' , '401': 'Unauthorized' , '402': 'Payment Required' , '403': 'Forbidden' , '404': 'Not Found' , '405': 'Method Not Allowed' , '406': 'Not Acceptable' , '407': 'Proxy Authentication Required' , '408': 'Request Time-out' , '409': 'Conflict' , '410': 'Gone' , '411': 'Length Required' , '412': 'Precondition Failed' , '413': 'Request Entity Too Large' , '414': 'Request-URI Too Large' , '415': 'Unsupported Media Type' , '416': 'Requested Range Not Satisfiable' , '417': 'Expectation Failed' , '418': 'I\'m a teapot' , '422': 'Unprocessable Entity' , '423': 'Locked' , '424': 'Failed Dependency' , '425': 'Unordered Collection' , '426': 'Upgrade Required' , '500': 'Internal Server Error' , '501': 'Not Implemented' , '502': 'Bad Gateway' , '503': 'Service Unavailable' , '504': 'Gateway Time-out' , '505': 'HTTP Version not supported' , '506': 'Variant Also Negotiates' , '507': 'Insufficient Storage' , '509': 'Bandwidth Limit Exceeded' , '510': 'Not Extended' } , IncomingMessage: { [Function: IncomingMessage] super_: [Function: EventEmitter] } , OutgoingMessage: { [Function: OutgoingMessage] super_: [Function: EventEmitter] } , ServerResponse: { [Function: ServerResponse] super_: [Circular] } , ClientRequest: { [Function: ClientRequest] super_: [Circular] } , Server: { [Function: Server] super_: { [Function: Server] super_: [Function: EventEmitter] } } , createServer: [Function] , Client: { [Function: Client] super_: { [Function: Stream] super_: [Function: EventEmitter] } } , createClient: [Function] , cat: [Function] } > Is there a way to use require() with node's --eval? I'm on node 0.2.6 on Mac OS X 10.6.5.

  • Python: Removing particular character (u"\u2610") from string

    - by duhaime
    I have been wrestling with decoding and encoding in Python, and I can't quite figure out how to resolve my problem. I am looping over xml text files (sample) that are apparently coded in utf-8, using Beautiful Soup to parse each file, then looking to see if any sentence in the file contains one or more words from two different list of words. Because the xml files are from the eighteenth century, I need to retain the em dashes that are in the xml. The code below does this just fine, but it also retains a pesky box character that I wish to remove. I believe the box character is this character. (You can find an example of the character I wish to remove in line 3682 of the sample file above. On this webpage, the character looks like an 'or' pipe, but when I read the xml file in Komodo, it looks like a box. When I try to copy and paste the box into a search engine, it looks like an 'or' pipe. When I print to console, though, the character looks like an empty box.) To sum up, the code below runs without errors, but it prints the empty box character that I would like to remove. for work in glob.glob(pathtofiles): openfile = open(work) readfile = openfile.read() stringfile = str(readfile) decodefile = stringfile.decode('utf-8', 'strict') #is this the dodgy line? soup = BeautifulSoup(decodefile) textwithtags = soup.findAll('text') textwithtagsasstring = str(textwithtags) #this method strips everything between anglebrackets as it should textwithouttags = stripTags(textwithtagsasstring) #clean text nonewlines = textwithouttags.replace("\n", " ") noextrawhitespace = re.sub(' +',' ', nonewlines) print noextrawhitespace #the boxes appear I tried to remove the boxes by using noboxes = noextrawhitespace.replace(u"\u2610", "") But Python threw an error flag: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 280: ordinal not in range(128) Does anyone know how I can remove the boxes from the xml files? I would be grateful for any help others can offer.
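
    The traceback suggests the replace is being run on a byte string (the str() calls turn the soup back into bytes), so Python attempts an implicit ASCII decode and dies on the 0xe2 byte. A sketch of the unicode-only version of the pipeline is below, written for Python 2 like the question; 'sample.xml' is a placeholder path, and it assumes the unwanted character really is U+2610 (calling unicodedata.name() on one of the offending characters is an easy way to check).

        # Sketch: decode the bytes once, stay in unicode for every later step,
        # strip the unwanted code point, and encode only when writing output.
        import codecs

        with codecs.open('sample.xml', encoding='utf-8') as handle:
            text = handle.read()                  # already unicode

        text = text.replace(u"\u2610", u"")       # drop the box character
        cleaned = u" ".join(text.split())         # collapse newlines and extra spaces
        print cleaned.encode('utf-8')             # Python 2 print statement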

  • Silverlight 4 caching issue?

    - by DavidS
    I am currently experiencing a weird caching problem it would seem. When I load my data intially, I return all the data within given dates and my graph looks as follows: Then I filter the data to return a subset of the original data for the same date range (not that it matters) and I get the following view of my data: However, I intermittently get the following when I refresh the same filterd view of the data: One can see that not all the data gets cached but only some of it i.e. for 12 Dec 2010 and 5 dec 2010(not shown here). I've looked at my queries and the correct data is getting pulled out. It is only on the presentation layer i.e. on Mainpage.xaml.cs that this erroneous data seems to exist. I've stepped through the code and the data is corect through all the layers except on the presentation layer. Has anyone experienced this before? Is there some sort of caching going in the background that is keeping that data in the background as I've got browser caching off? I am using the LoadOperation in the callback method within the Load method of the DomainContext if that helps...

  • Capture data from jQuery Mobile GUI / form elements like sliders, or the flip switch/toggle, and store in a var?

    - by Carl Foggin
    I have this code, which executes without error; it shows "** slider change **" in the debug console every time the slider changes. But I cannot figure out how to capture the value of the slider so I can store it in a var. Can you help? I'm hoping it's a simple thing. $( '#page4_Options' ).live( 'pageinit', function(event){ var slideTime = userOptions.getSlideTime() / 1000; // userOptions is my Object to get/set params from localStorage. $("input[id='slider']").val(slideTime).slider("refresh"); // set default slide time when page init's. console.log("userOptions.getSlideTime()", userOptions.getSlideTime() ); $( "#slider" ).bind( "slidestop", function(event, ui) { console.log("** slider change **"); // How do I capture the new slider value into a var? }); }); Here's the HTML with the slider; it's inside a data-role="fieldcontain" div: <div data-role="fieldcontain"> <label for="slider">Slide Duration (seconds):</label> <input type="range" name="slider" id="slider" value="2" min="0" max="60" /> </div>

  • Python script not working when run from browser directly

    - by splatterdash
    I'm trying to run this script: import re, os def build_pool(cwd): global xtn_pool, file_pool xtn, xtn_pool = re.compile('\\.[0-9a-zA-Z]{1,4}$'), [] file_pool = [files for files in os.listdir(cwd) if os.path.isfile(files) and xtn.search(files)] # Lists all the file extension in the folder for file in file_pool: if not xtn_pool.__contains__(xtn.search(file).group()): xtn_pool.append(xtn.search(file).group()) return xtn_pool.sort(), file_pool if __name__ == '__main__': import sys #if path is given, change working directory to path if len(sys.argv) >= 2: os.chdir(sys.argv[1]) build_pool(os.getcwd()) #if no path is given when running, do renaming in current folder else: build_pool(os.getcwd()) print('The folder contains the following extensions: ') for i in range(0, len(xtn_pool)): print(repr(i+1) + '. ' + xtn_pool[i][1:]) opt = int(input('Which one would you like to replace? ')) xtn_pick = xtn_pool[opt-1] # Lists all the file with the chosen extension xtn_file_pool = [file for file in file_pool if file.endswith(xtn_pick)] print('There are {0} files with the {1} extension.'.format(len(xtn_file_pool), xtn_pick)) xtn_new = input('Input replacement extension: ') # The actual renaming process for file in xtn_file_pool: os.rename(file, file[:-len(xtn_pick)+1] + xtn_new) directly from my file browser (Nautilus), but for some reason it's not working. When I run it from terminal (python3 scriptname.py) it works fine as intended. But when I just click the script file in Nautilus, choose 'Run in Terminal', it always stops after asking 'Input replacement extension: '. How can I make this script run without using the terminal?
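
    A guess rather than a certain diagnosis: with no shebang line, Nautilus's "Run in Terminal" may hand the file to a different interpreter than the python3 used on the command line, for example Python 2, where input() eval()s whatever is typed and fails on a bare extension such as txt, which would explain it dying at exactly that prompt. The sketch below shows the two small changes usually suggested: a python3 shebang (plus chmod +x scriptname.py once) and a final pause so the terminal window stays open long enough to show any traceback.

        #!/usr/bin/env python3
        # Force python3 regardless of how the file is launched, print the
        # interpreter as a sanity check, and keep the window open at the end.
        import sys

        print("Running under:", sys.version)

        # ... the extension-renaming logic from the question goes here, unchanged ...

        input("Done - press Enter to close this window")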

  • Why can't we just use a hash of passphrase as the encryption key (and IV) with symmetric encryption algorithms?

    - by TX_
    Inspired by my previous question, now I have a very interesting idea: Do you really ever need to use Rfc2898DeriveBytes or similar classes to "securely derive" the encryption key and initialization vector from the passphrase string, or will just a simple hash of that string work equally well as a key/IV, when encrypting the data with symmetric algorithm (e.g. AES, DES, etc.)? I see tons of AES encryption code snippets, where Rfc2898DeriveBytes class is used to derive the encryption key and initialization vector (IV) from the password string. It is assumed that one should use a random salt and a shitload of iterations to derive secure enough key/IV for the encryption. While deriving bytes from password string using this method is quite useful in some scenarios, I think that's not applicable when encrypting data with symmetric algorithms! Here is why: using salt makes sense when there is a possibility to build precalculated rainbow tables, and when attacker gets his hands on hash he looks up the original password as a result. But... with symmetric data encryption, I think this is not required, as the hash of password string, or the encryption key, is never stored anywhere. So, if we just get the SHA1 hash of password, and use it as the encryption key/IV, isn't that going to be equally secure? What is the purpose of using Rfc2898DeriveBytes class to generate key/IV from password string (which is a very very performance-intensive operation), when we could just use a SHA1 (or any other) hash of that password? Hash would result in random bit distribution in a key (as opposed to using string bytes directly). And attacker would have to brute-force the whole range of key (e.g. if key length is 256bit he would have to try 2^256 combinations) anyway. So either I'm wrong in a dangerous way, or all those samples of AES encryption (including many upvoted answers here at SO), etc. that use Rfc2898DeriveBytes method to generate encryption key and IV are just wrong.

  • Quantifying the Performance of Garbage Collection vs. Explicit Memory Management

    - by EmbeddedProg
    I found this article here: Quantifying the Performance of Garbage Collection vs. Explicit Memory Management http://www.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf In the conclusion section, it reads: Comparing runtime, space consumption, and virtual memory footprints over a range of benchmarks, we show that the runtime performance of the best-performing garbage collector is competitive with explicit memory management when given enough memory. In particular, when garbage collection has five times as much memory as required, its runtime performance matches or slightly exceeds that of explicit memory management. However, garbage collection’s performance degrades substantially when it must use smaller heaps. With three times as much memory, it runs 17% slower on average, and with twice as much memory, it runs 70% slower. Garbage collection also is more susceptible to paging when physical memory is scarce. In such conditions, all of the garbage collectors we examine here suffer order-of-magnitude performance penalties relative to explicit memory management. So, if my understanding is correct: if I have an app written in native C++ requiring 100 MB of memory, to achieve the same performance with a "managed" (i.e. garbage collector based) language (e.g. Java, C#), the app should require 5*100 MB = 500 MB? (And with 2*100 MB = 200 MB, the managed app would run 70% slower than the native app?) Do you know if current (i.e. latest Java VM's and .NET 4.0's) garbage collectors suffer the same problems described in the aforementioned article? Has the performance of modern garbage collectors improved? Thanks.

  • Is a python dictionary the best data structure to solve this problem?

    - by mikip
    Hi, I have a number of processes running which are controlled by remote clients. A TCP server controls access to these processes, one client per process. The processes are given an id number in the range 0 to n-1, where 'n' is the number of processes. I use a dictionary to map this id to the client socket's file descriptor. On startup I populate the dictionary with the ids as keys and a socket fd of 'None' for the values, i.e. no clients and all processes available. When a client connects, I map the id to the socket's fd. When a client disconnects I set the value for this id back to None, i.e. the process is available. So every time a client connects I have to check each entry in the dictionary for a process whose socket fd entry is None; if there is one, the client is allowed to connect. This solution does not seem very elegant. Are there other data structures which would be more suitable for solving this? Thanks
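
    A small sketch of an alternative bookkeeping scheme (names are illustrative, not taken from the server code): keep the id-to-socket dictionary, but also keep a set of free ids, so finding an available process is a single pop() instead of a scan over every entry.

        # Sketch: dict for id -> socket fd, plus a set of ids with no client attached.
        n = 8                                       # number of controlled processes
        clients = {pid: None for pid in range(n)}   # None means "no client attached"
        free_ids = set(range(n))

        def attach(sock_fd):
            """Claim a free process for a new client, or return None if all are busy."""
            if not free_ids:
                return None
            pid = free_ids.pop()
            clients[pid] = sock_fd
            return pid

        def detach(pid):
            """Release a process when its client disconnects."""
            clients[pid] = None
            free_ids.add(pid)

        print(attach("fd-1"), attach("fd-2"))       # two ids handed out, order arbitrary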

  • Optimize SQL query (Facebook-like application)

    - by fabriciols
    My application is similar to Facebook, and I'm trying to optimize the query that gets a user's mural records. A user's records are those where he appears as the src or as a dst. The src is stored in usermuralentry directly; the dst list is in usermuralentry_user. So an entry can have one src and many dst. I have these tables: mysql> desc usermuralentry ; +-----------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-----------------+------------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | user_src_id | int(11) | NO | MUL | NULL | | | private | tinyint(1) | NO | | NULL | | | content | longtext | NO | | NULL | | | date | datetime | NO | | NULL | | | last_update | datetime | NO | | NULL | | +-----------------+------------------+------+-----+---------+----------------+ 10 rows in set (0.10 sec) mysql> desc usermuralentry_user ; +-------------------+---------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------------+---------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | usermuralentry_id | int(11) | NO | MUL | NULL | | | userinfo_id | int(11) | NO | MUL | NULL | | +-------------------+---------+------+-----+---------+----------------+ 3 rows in set (0.00 sec) And here is the query that retrieves the entries for two users: mysql> explain SELECT * FROM usermuralentry AS a , usermuralentry_user AS b WHERE a.user_src_id IN ( 1, 2 ) OR ( a.id = b.usermuralentry_id AND b.userinfo_id IN ( 1, 2 ) ); +----+-------------+-------+------+-------------------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+------+-------------------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------+ | 1 | SIMPLE | b | ALL | usermuralentry_id,usermuralentry_user_bcd7114e,usermuralentry_user_6b192ca7 | NULL | NULL | NULL | 147188 | | | 1 | SIMPLE | a | ALL | PRIMARY | NULL | NULL | NULL | 1371289 | Range checked for each record (index map: 0x1) | +----+-------------+-------+------+-------------------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------+ 2 rows in set (0.00 sec) But it is taking A LOT of time... Any tips on how to optimize it? Could the table schema be better for my application?

  • Iterate over defined elements of a JS array

    - by sibidiba
    I'm using a JS array to map IDs to actual elements, i.e. a key-value store. I would like to iterate over all elements. I tried several methods, but all have their caveats: for (var item in map) {...} iterates over all properties of the array, so it will also include functions and extensions to Array.prototype. For example, someone dropping in the Prototype library in the future will break existing code. var length = map.length; for (var i = 0; i < length; i++) { var item = map[i]; ... } does work, but just like $.each(map, function(index, item) {...}); it iterates over the whole range of indexes 0..max(id), which has horrible drawbacks: var x = []; x[1]=1; x[10]=10; $.each(x, function(i,v) {console.log(i+": "+v);}); 0: undefined 1: 1 2: undefined 3: undefined 4: undefined 5: undefined 6: undefined 7: undefined 8: undefined 9: undefined 10: 10 Of course my IDs won't resemble a continuous sequence either. Moreover there can be huge gaps between them, so skipping undefined in the latter case is unacceptable for performance reasons. How is it possible to safely iterate over only the defined elements of an array (in a way that works in all browsers and IE)?

  • How can I have a Label change dynamically based on a Slider Value?

    - by duney
    I'm writing a grade calculator and I currently have a slider with a textbox beside it which displays the current value of the slider: <Slider Name="gradeSlider" Grid.Row="3" Grid.Column="2" VerticalAlignment="Center" Minimum="40" Maximum="100" IsSnapToTickEnabled="True" TickFrequency="5" TickPlacement="BottomRight"/> <TextBox Name="targetGrade" Grid.Row="3" Grid.Column="3" Width="30" Height="23" Text="{Binding ElementName=gradeSlider, Path=Value}" TextAlignment="Center"/> However, I'm struggling to add a label which will display a different grade classification based on the slider's value range. I'd have thought that I could create the label: <Label Name="gradeClass" Grid.Row="2" Grid.Column="2" HorizontalAlignment="Center" VerticalAlignment="Bottom"/> And then use code: string gradeText; if (gradeSlider.Value >= 40 && gradeSlider.Value < 50) { gradeText = "Pass"; gradeClass.Content = gradeText; } else if (gradeSlider.Value >= 50 && gradeSlider.Value < 60) { gradeText = "2:2"; gradeClass.Content = gradeText; } else { gradeText = "so on..."; gradeClass.Content = gradeText; } But the label just stays as "Pass" whatever the slider value. Could somebody please advise me as to where I'm going wrong? I tried using Content = "{Binding Source = gradeText}" in the Label XAML and removing the gradeClass.Content assignments in the code, but it complained that gradeText was declared but never used. Many thanks to anyone who can help.

  • Error In VB.Net code

    - by Michael
    I am getting a "FormatException was unhandled" error at the Do While objectReader.Peek <> -1 loop and the lines after it (marked below). Any help would be wonderful. 'Date: Class 03/20/2010 'Program Purpose: When code is executed data will be pulled from a text file 'that contains the named storms to find the average number of storms during the time 'period chosen by the user and to find the most active year. between the range of 'years 1990 and 2008 Option Strict On Public Class frmHurricane Private _intNumberOfHuricanes As Integer = 5 Public Shared _intSizeOfArray As Integer = 7 Public Shared _strHuricaneList(_intSizeOfArray) As String Private _strID(_intSizeOfArray) As String Private _decYears(_intSizeOfArray) As Decimal Private _decFinal(_intSizeOfArray) As Decimal Private _intNumber(_intSizeOfArray) As Integer Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load 'The frmHurricane load event reads the Hurricane text file and 'fill the combotBox object with the data. 'Initialize an instance of the StreamReader Object and declare variable page 675 on the book Dim objectReader As IO.StreamReader Dim strLocationAndNameOfFile As String = "C:\huricanes.txt" Dim intCount As Integer = 0 Dim intFill As Integer Dim strFileError As String = "The file is not available. Please restart application when available" 'This is where we code the file if it exist. If IO.File.Exists(strLocationAndNameOfFile) Then objectReader = IO.File.OpenText(strLocationAndNameOfFile) 'Read the file line by line until the file is complete Do While objectReader.Peek <> -1 **_strHuricaneList(intCount) = objectReader.ReadLine() _strID(intCount) = objectReader.ReadLine() _decYears(intCount) = Convert.ToDecimal(objectReader.ReadLine()) _intNumber(intCount) = Convert.ToInt32(objectReader.ReadLine()) intCount += 1** Loop objectReader.Close() 'With any luck the data will go to the Data Box For intFill = 0 To (_strID.Length - 1) Me.cboByYear.Items.Add(_strID(intFill)) Next Else MsgBox(strFileError, , "Error") Me.Close() End If End Sub

  • User input being limited to the alphabet in python

    - by Danger Cat
    I am SUPER new to programming and have my first assignment coming up in Python. I am writing a hangman-type game, where users are required to guess the word inputted by the other user. I have written most of the code, but the one problem I am having is, when a user inputs the word, making sure it is limited to the alphabet. The code I have so far is: word = str.lower(raw_input("Type in your secret word! Shhhh... ")) answer = True while answer == True: for i in range(len(word)): if word[i] not in ("abcdefghijklmnopqrstuvwxyz"): word = raw_input("Sorry, words only contain letters. Please enter a word ") break else: answer = False This works for a few tries, but eventually it will either exit the loop or display an error. Is there an easier way to do this? We've really only covered topics up to loops in class, and break and continue are also very new to me. Thank you! (Pardon me if the code is sloppy, but as I said I am very new to this....)
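
    A compact way to express the check, sketched under the same Python 2 assumptions as the code above (raw_input): str.isalpha() covers "every character is a letter" in one call, so the nested for/break/else bookkeeping is not needed, and an empty line can still be allowed through so the outer game's "blank line to stop" rule keeps working.

        # Sketch: keep asking until the input is empty (stop signal) or all letters.
        word = raw_input("Type in your secret word! Shhhh... ").lower()
        while word != "" and not word.isalpha():
            word = raw_input("Sorry, words only contain letters. Please enter a word ").lower()
        print "Accepted:", word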

  • iOS: Best Way to do This w/o Calling Method 32 Times?

    - by user1886754
    I'm currently retrieving the Top 100 Scores for one of my leaderboards the following way: - (void) retrieveTop100Scores { __block int totalScore = 0; GKLeaderboard *myLB = [[GKLeaderboard alloc] init]; myLB.identifier = [Team currentTeam]; myLB.timeScope = GKLeaderboardTimeScopeAllTime; myLB.playerScope = GKLeaderboardPlayerScopeGlobal; myLB.range = NSMakeRange(1, 100); [myLB loadScoresWithCompletionHandler:^(NSArray *scores, NSError *error) { if (error != nil) { NSLog(@"%@", [error localizedDescription]); } if (scores != nil) { for (GKScore *score in scores) { NSLog(@"%lld", score.value); totalScore += score.value; } NSLog(@"Total Score: %d", totalScore); [self loadingDidEnd]; } }]; } The problem is I want to do this for 32 leaderboards. What's the best way of achieving this? Using a third party tool (Game Center Manager), I can get the following line to return a dictionary with leaderboard ID's as keys and the Top 1 highest score as values NSDictionary *highScores = [[GameCenterManager sharedManager] highScoreForLeaderboards:leaderboardIDs]; So my question is, how can I combine those 2 segments of code to pull in the 100 values for each leaderboard, resulting in a dictionary with all of the leaderboard names as keys, and the Total (of each 100 scores) of the leaderboards for values.

  • C/C++ Bit Array or Bit Vector

    - by MovieYoda
    Hi, I am learning C/C++ programming and have encountered the usage of 'bit arrays' or 'bit vectors'. I am not able to understand their purpose, so here are my doubts: Are they used as boolean flags? Can one use int arrays instead (more memory, of course, but..)? What is this concept of bit masking? If bit masking consists of simple bit operations to get an appropriate flag, how does one program it? Is it not difficult to do this operation in one's head to see what the flag would be, as opposed to working with decimal numbers? I am looking for applications so that I can understand better. For example: Q. You are given a file containing integers in the range 1 to 1 million. There are some duplicates, and hence some numbers are missing. Find the fastest way of finding the missing numbers. For the above question, I have read solutions telling me to use bit arrays. How would one store each integer in a bit?
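
    For the "missing numbers" example, the bit-array idea is: reserve one bit per possible value, set a value's bit when it is read, then report every bit still clear. It is sketched below in Python for brevity; in C or C++ the same thing is an array of unsigned integers (or std::bitset / std::vector<bool>) with the shift and mask done explicitly, which is exactly the bit masking being asked about.

        # Sketch: one bit per integer in [1, N]; value >> 3 picks the byte,
        # value & 7 picks the bit inside that byte.
        N = 1000000
        seen = bytearray(N // 8 + 1)

        def mark(value):
            seen[value >> 3] |= 1 << (value & 7)

        def present(value):
            return bool(seen[value >> 3] & (1 << (value & 7)))

        for value in [3, 1, 5, 3, 1]:       # stand-in for reading the file of ints
            mark(value)

        print([v for v in range(1, 10) if not present(v)])   # -> [2, 4, 6, 7, 8, 9]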

  • Data Structure / Hash Function to link Sets of Ints to Value

    - by Gaminic
    Given n integer id's, I wish to link all possible sets of up to k id's to a constant value. What I'm looking for is a way to translate sets (e.g. {1, 5}, {1, 3, 5} and {1, 2, 3, 4, 5, 6, 7}) to unique values. Guarantees: n < 100 and k < 10 (again: set sizes will range in [1, k]). The order of id's doesn't matter: {1, 5} == {5, 1}. All combinations are possible, but some may be excluded. All sets and values are constant and made only once. No deletes or inserts, no value updates. Once generated, the only operations taking place will be look-ups. Look-ups will be frequent and one-directional (given set, look up value). There is no need to sort (or otherwise organize) the values. Additionally, it would be nice (but not obligatory) if "neighboring" sets (drop one id, add one id, swap one id, etc) are easy to reach, as well as "all sets that include at least this set". Any ideas?
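
    A sketch of the most direct mapping in Python (the question is language-agnostic, so take this as an illustration of the idea rather than a prescription): a frozenset is hashable and ignores order, so it can key an ordinary dict, and the "neighbouring set" and "sets that include this set" queries fall out of normal set operations. With n < 100 and k < 10 this stays tiny; if a single integer key is ever preferred, a bitmask such as sum(1 << id for id in s) works as well.

        # Sketch: frozenset keys map each id-set to its constant value.
        table = {
            frozenset({1, 5}): 42,
            frozenset({1, 3, 5}): 7,
        }

        print(table[frozenset({5, 1})])                # 42, since {1, 5} == {5, 1}

        key = frozenset({1, 3, 5})
        drop_one = [key - {x} for x in key]            # neighbours with one id removed
        contains_key = [k for k in table if key <= k]  # stored sets that include key
        print(drop_one, contains_key)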

  • Slowing process creation under Java?

    - by oconnor0
    I have a single, large heap (up to 240GB, though in the 20-40GB range for most of this phase of execution) JVM [1] running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable & then load the data created by those executables back into the JVM. Each executable produces about half a megabyte of data (on disk) that when read right in, after the process finishes, is, of course, larger. Our first implementation was to have each executable handle only a single object. This involved the spawning of twice as many executables as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily 100%, and slowly worsen. As we began measuring to see what was happening we noticed that the process creation time [3] continually slows. While starting at sub-second times it would eventually grow to take a minute or more. The actual processing done by the executable usually takes less than 10 seconds. Next we changed the executable to take a list of objects to process in an attempt to reduce the number of processes created. With batch sizes of a few hundred (~1% of our current sample size), the process creation times start out around 2 seconds & grow to around 5-6 seconds. Basically, why is it taking so long to create these processes as execution continues? [1] Oracle JDK 1.6.0_22 [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.

  • Database PK-FK design for future-effective-date entries?

    - by Scott Balmos
    Ultimately I'm going to convert this into a Hibernate/JPA design. But I wanted to start out from purely a database perspective. We have various tables containing data that is future-effective-dated. Take an employee table with the following pseudo-definition: employee id INT AUTO_INCREMENT ... data fields ... effectiveFrom DATE effectiveTo DATE employee_reviews id INT AUTO_INCREMENT employee_id INT FK employee.id Very simplistic. But let's say Employee A has id = 1, effectiveFrom = 1/1/2011, effectiveTo = 1/1/2099. That employee is going to be changing jobs in the future, which would in theory create a new row, id = 2 with effectiveFrom = 7/1/2011, effectiveTo = 1/1/2099, and id = 1's effectiveTo updated to 6/30/2011. But now, my program would have to go through any table that has a FK relationship to employee every night, and update those FK to reference the newly-effective employee entry. I have seen various postings in both pure SQL and Hibernate forums that I should have a separate employee_versions table, which is where I would have all effective-dated data stored, resulting in the updated pseudo-definition below: employee id INT AUTO_INCREMENT employee_versions id INT AUTO_INCREMENT employee_id INT FK employee.id ... data fields ... effectiveFrom DATE effectiveTo DATE employee_reviews id INT AUTO_INCREMENT employee_id INT FK employee.id Then to get any actual data, one would have to actually select from employee_versions with the proper employee_id and date range. This feels rather unnatural to have this secondary "versions" table for each versioned entity. Anyone have any opinions, suggestions from your own prior work, etc? Like I said, I'm taking this purely from a general SQL design standpoint first before layering in Hibernate on top. Thanks!

  • Generating all unique crossword puzzle grids

    - by heydenberk
    I want to generate all unique crossword puzzle grids of a certain grid size (4x4 is a good size). All possible puzzles, including non-unique puzzles, are represented by a binary string with the length of the grid area (16 in the case of 4x4), so all possible 4x4 puzzles are represented by the binary forms of all numbers in the range 0 to 2^16. Generating these is easy, but I'm curious if anyone has a good solution for how to programmatically eliminate invalid and duplicate cases. For example, all puzzles with a single column or single row are functionally identical, hence eliminating 7 of those 8 cases. Also, according to crossword puzzle conventions, all squares must be contiguous. I've had success removing all duplicate structures, but my solution took several minutes to execute and probably was not ideal. I'm at something of a loss for how to detect contiguity so if anyone has ideas on this it'd be much appreciated. I'd prefer solutions in python but write in whichever language you prefer. If anyone wants, I can post my python code for generating all grids and removing duplicates, slow as it may be.
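
    For the contiguity part of the question, a flood fill is the standard check, sketched below for the 4x4 case: treat the 16-bit number as a grid of filled squares, start a depth-first fill from any filled square, and the layout is contiguous exactly when the fill reaches every filled square. (Whether an all-empty grid counts as valid is a judgment call; it is rejected here.)

        # Sketch: flood-fill contiguity test for a 4x4 grid packed into 16 bits.
        SIZE = 4

        def is_contiguous(bits):
            filled = {(r, c) for r in range(SIZE) for c in range(SIZE)
                      if bits >> (r * SIZE + c) & 1}
            if not filled:
                return False
            stack, seen = [next(iter(filled))], set()
            while stack:
                r, c = stack.pop()
                if (r, c) in seen:
                    continue
                seen.add((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    if (r + dr, c + dc) in filled:
                        stack.append((r + dr, c + dc))
            return seen == filled

        print(is_contiguous(0b0000000011110000))   # one full row: True
        print(is_contiguous(0b1000000000000001))   # two opposite corners: False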
