Search Results

Search found 4 results on 1 page for 'johnthexiii'.

  • IE Tab 2 vs IE Tab Plus

    - by johnthexiii
    I'm wondering what the differences are, especially as they pertain to SharePoint. Why should I pick one over the other, or does it matter? Also, are there any Linux browser solutions for SharePoint besides Wine + IE 6?

  • ColdFusion's cfquery failing silently

    - by johnthexiii
    I have a query that retrieves a large amount of data:

        <cfsetting requesttimeout="9999999">
        <cfquery name="randomething" datasource="ds" timeout="9999999">
            SELECT col1, col2
            FROM table
        </cfquery>
        <cfdump var="#randomething.recordCount#" /> <!--- should be about 5 million rows --->

    I can successfully retrieve the data with Python's cx_Oracle, and sys.getsizeof on the resulting Python list returns 22621060, so about 21 megabytes. ColdFusion does not return an error on the page, and I can't find anything in any of the logs. Why is cfdump not showing the number of rows?

    Additional information: the reason for doing it this way is that I have about 8000 smaller queries to run against the randomething query. When I run those 8000 queries against the database directly, it takes hours for the process to complete; I suspect this is because I am competing with several other database users and the database is getting bogged down. The 8000 smaller queries get counts of col1 over a period of col2:

        SELECT count(col1) AS count
        FROM table
        WHERE col2 < 20121109 AND col2 > 20121108

    Following Adam Cameron's suggestions: cflog suggests that the query isn't finishing. I tried changing the query's timeout both in the code and in the CFIDE administrator; apparently CF9 no longer respects the timeout attribute, and regardless of what I tried I couldn't get the query to time out. I also started playing around with the maxrows attribute to see if I could discern any information that way:

    - when maxrows is set to 1300000, everything works fine
    - when maxrows is 1400000 or greater, I get an error
    - when maxrows is 2000000, I observe my original problem

    Update: this isn't a limit of cfquery. By using QueryNew and then looping over it to add data, I can get well past the 2 million mark without any problems. I also created a ThinClient datasource using the information in a related question, but didn't observe any change in behavior. The messages on the database end are "SQL*Net message from client" and "SQL*Net more data to client". I just discovered that by using the thin client along with blockfactor="100" I can retrieve more rows (approx. 3,000,000).
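
    (A minimal Python sketch of the cx_Oracle check mentioned above; the connection details "user"/"password"/"dsn" are placeholders, not from the original post:)

        import sys
        import cx_Oracle

        # Placeholder credentials; substitute real connection details.
        conn = cx_Oracle.connect("user", "password", "dsn")
        cursor = conn.cursor()
        cursor.execute("SELECT col1, col2 FROM table")
        rows = cursor.fetchall()   # materialize the full result set as a list

        print len(rows)            # about 5 million rows
        print sys.getsizeof(rows)  # ~22621060 bytes; measures the list object only, not the row tuples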

  • Parsing names with pyparsing

    - by johnthexiii
    I have a file of names and ages:

        john 25
        bob 30
        john bob 35

    Here is what I have so far:

        from pyparsing import *

        data = '''
        john 25
        bob 30
        john bob 35
        '''

        name = Word(alphas + Optional(' ') + alphas)
        rowData = Group(name + Suppress(White(" ")) + Word(nums))
        table = ZeroOrMore(rowData)

        print table.parseString(data)

    The output I am expecting is:

        [['john', 25], ['bob', 30], ['john bob', 35]]

    Here is the stack trace:

        Traceback (most recent call last):
          File "C:\Users\mccauley\Desktop\client.py", line 11, in <module>
            eventType = Word(alphas + Optional(' ') + alphas)
          File "C:\Python27\lib\site-packages\pyparsing.py", line 1657, in __init__
            self.name = _ustr(self)
          File "C:\Python27\lib\site-packages\pyparsing.py", line 122, in _ustr
            return str(obj)
          File "C:\Python27\lib\site-packages\pyparsing.py", line 1743, in __str__
            self.strRepr = "W:(%s)" % charsAsStr(self.initCharsOrig)
          File "C:\Python27\lib\site-packages\pyparsing.py", line 1735, in charsAsStr
            if len(s)>4:
        TypeError: object of type 'And' has no len()
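
    (A minimal working sketch, not from the original post: the TypeError occurs because Word expects a plain string of allowed characters, while alphas + Optional(' ') + alphas builds a composed And expression. One way to match multi-word names is to combine repeated words, assuming ages are always digits:)

        from pyparsing import Word, alphas, nums, Group, OneOrMore, ZeroOrMore, Combine

        data = '''
        john 25
        bob 30
        john bob 35
        '''

        # A name is one or more alphabetic words, rejoined with single spaces.
        name = Combine(OneOrMore(Word(alphas)), joinString=" ", adjacent=False)

        # Convert the matched digits to an int so rows come out as ['john', 25].
        age = Word(nums).setParseAction(lambda t: int(t[0]))

        table = ZeroOrMore(Group(name + age))

        print table.parseString(data).asList()
        # [['john', 25], ['bob', 30], ['john bob', 35]]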

  • Need help specifying a ending while condition

    - by johnthexiii
    I have written a Python script to download all of the xkcd comic images. The only problem is I can't tell it to stop when it gets to the last one. Here is what I have so far:

        import re, mechanize
        from urllib import urlretrieve
        from BeautifulSoup import BeautifulSoup as bs

        baseUrl = "http://xkcd.com/1/"   # Specify the first comic page
        br = mechanize.Browser()         # Create a browser
        response = br.open(baseUrl)      # Create an initial response
        x = 1                            # Assign an initial file name

        while (SomeCondition):
            soup = bs(response.get_data())                       # Create an instance of bs that contains the response data
            img = soup.findAll('img')[1]                         # Get the online file path of the image
            localFile = "C:\\Comics\\xkcd\\" + str(x) + ".jpg"   # Come up with a local file name
            urlretrieve(img["src"], localFile)                   # Download the image file
            response = br.follow_link(text = "Next >")           # Store the response of the next button
            x += 1                                               # Increase x by 1

        print "All xkcd comics downloaded"   # Let the user know the images have been downloaded

    Initially what I had was something like

        while br.follow_link(text = "Next >") != br.follow_link(text = ">|"):

    but by doing this I actually skip to the last page before the script has a chance to perform its intended purpose.
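
    (One possible ending condition, sketched below; it is not from the original post. It assumes mechanize raises LinkNotFoundError when no "Next >" link exists, and treats a "Next >" link that leads back to the same page, as on the newest comic, as the end; both assumptions should be checked against the live site:)

        import mechanize
        from urllib import urlretrieve
        from BeautifulSoup import BeautifulSoup as bs

        br = mechanize.Browser()
        response = br.open("http://xkcd.com/1/")
        x = 1

        while True:
            soup = bs(response.get_data())
            img = soup.findAll('img')[1]
            urlretrieve(img["src"], "C:\\Comics\\xkcd\\" + str(x) + ".jpg")
            x += 1

            previousUrl = br.geturl()
            try:
                response = br.follow_link(text = "Next >")   # advance to the next comic
            except mechanize.LinkNotFoundError:
                break                                        # no "Next >" link at all
            if br.geturl().split('#')[0] == previousUrl.split('#')[0]:
                break                                        # link loops back to the same page: last comic

        print "All xkcd comics downloaded"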
