Search Results

Search found 10391 results on 416 pages for 'sys dm exec requests'.


  • Installing rtorrent on my ubuntu server

    - by Shishant
    Hello, I am try to install rtorrent on my ubuntu server. I ran these commands and they worked fine. ./autogen.sh ./configure --with-xmlrpc-c make and then when i tried to use make install i guess it didnt get install because no .rtorrent.rc' was created in home directory and running rtorrent returned this error rtorrent: error while loading shared libraries: libtorrent.so.11: cannot open shared object file: No such file or directory below is the log of my make install. root@ubuntu:~/rtorrent-0.8.6# make install Making install in doc make[1]: Entering directory `/root/rtorrent-0.8.6/doc' make[2]: Entering directory `/root/rtorrent-0.8.6/doc' make[2]: Nothing to be done for `install-exec-am'. test -z "/usr/local/share/man/man1" || /bin/mkdir -p "/usr/local/share/man/man1" /usr/bin/install -c -m 644 './rtorrent.1' '/usr/local/share/man/man1/rtorrent.1 ' make[2]: Leaving directory `/root/rtorrent-0.8.6/doc' make[1]: Leaving directory `/root/rtorrent-0.8.6/doc' Making install in src make[1]: Entering directory `/root/rtorrent-0.8.6/src' Making install in core make[2]: Entering directory `/root/rtorrent-0.8.6/src/core' make[3]: Entering directory `/root/rtorrent-0.8.6/src/core' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/core' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/core' Making install in display make[2]: Entering directory `/root/rtorrent-0.8.6/src/display' make[3]: Entering directory `/root/rtorrent-0.8.6/src/display' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/display' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/display' Making install in input make[2]: Entering directory `/root/rtorrent-0.8.6/src/input' make[3]: Entering directory `/root/rtorrent-0.8.6/src/input' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/input' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/input' Making install in rpc make[2]: Entering directory `/root/rtorrent-0.8.6/src/rpc' make[3]: Entering directory `/root/rtorrent-0.8.6/src/rpc' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/rpc' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/rpc' Making install in ui make[2]: Entering directory `/root/rtorrent-0.8.6/src/ui' make[3]: Entering directory `/root/rtorrent-0.8.6/src/ui' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/ui' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/ui' Making install in utils make[2]: Entering directory `/root/rtorrent-0.8.6/src/utils' make[3]: Entering directory `/root/rtorrent-0.8.6/src/utils' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. 
make[3]: Leaving directory `/root/rtorrent-0.8.6/src/utils' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/utils' make[2]: Entering directory `/root/rtorrent-0.8.6/src' make[3]: Entering directory `/root/rtorrent-0.8.6/src' test -z "/usr/local/bin" || /bin/mkdir -p "/usr/local/bin" /bin/bash ../libtool --mode=install /usr/bin/install -c 'rtorrent' '/usr/loc al/bin/rtorrent' libtool: install: /usr/bin/install -c rtorrent /usr/local/bin/rtorrent make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src' make[2]: Leaving directory `/root/rtorrent-0.8.6/src' make[1]: Leaving directory `/root/rtorrent-0.8.6/src' make[1]: Entering directory `/root/rtorrent-0.8.6' make[2]: Entering directory `/root/rtorrent-0.8.6' make[2]: Nothing to be done for `install-exec-am'. make[2]: Nothing to be done for `install-data-am'. make[2]: Leaving directory `/root/rtorrent-0.8.6' make[1]: Leaving directory `/root/rtorrent-0.8.6' Thank You.
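    Two notes that may help here. First, `make install` never creates `~/.rtorrent.rc`; that is a user configuration file you have to create yourself, so its absence is not a sign of a failed install. Second, the "libtorrent.so.11: cannot open shared object file" error usually means libtorrent was installed into /usr/local/lib but the dynamic linker cache was never refreshed. A minimal sketch of the usual checks, assuming libtorrent was built from source with the default /usr/local prefix (paths are assumptions):

      # Is the library installed, and does the runtime linker know about it?
      ls -l /usr/local/lib/libtorrent.so.11*
      ldconfig -p | grep libtorrent

      # /usr/local/lib is normally already listed under /etc/ld.so.conf.d/ on Ubuntu,
      # so refreshing the linker cache is usually all that is needed:
      ldconfig

      # One-off alternative while testing:
      LD_LIBRARY_PATH=/usr/local/lib rtorrent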

    Read the article

  • Unicorn installation error on Debian 5

    - by Luc
    I am running ruby1.9 on Debian 5, and did not manage to install 'unicorn' with rubygems. I got this error and do not really know how to solve it. Do you have any idea of the possible root cause ? > gem install unicorn Building native extensions. This could take a while... ERROR: Error installing unicorn: ERROR: Failed to build gem native extension. /usr/bin/ruby1.9 extconf.rb checking for CLOCK_MONOTONIC in time.h... yes checking for clockid_t in time.h... yes checking for clock_gettime() in -lrt... yes checking for t_open() in -lnsl... no checking for socket() in -lsocket... no checking for poll() in poll.h... yes checking for getaddrinfo() in sys/types.h,sys/socket.h,netdb.h... yes checking for getnameinfo() in sys/types.h,sys/socket.h,netdb.h... yes checking for struct sockaddr_storage in sys/types.h,sys/socket.h... yes checking for accept4() in sys/socket.h... no checking for sys/select.h... yes checking for ruby/io.h... yes checking for rb_io_t.fd in ruby.h,ruby/io.h... yes checking for rb_io_t.mode in ruby.h,ruby/io.h... yes checking for rb_io_t.pathv in ruby.h,ruby/io.h... no checking for struct RFile in ruby.h,ruby/io.h... yes checking size of struct RFile in ruby.h,ruby/io.h... 24 checking for struct RObject... no checking size of int... 4 checking for rb_io_ascii8bit_binmode()... no checking for rb_thread_blocking_region()... yes checking for rb_thread_io_blocking_region()... no checking for rb_str_set_len()... yes checking for rb_time_interval()... yes checking for rb_wait_for_single_fd()... no creating Makefile make cc -I. -I/usr/include/ruby-1.9.0/x86_64-linux -I/usr/include/ruby-1.9.0 -I. -DHAVE_TYPE_CLOCKID_T -DHAVE_POLL -DHAVE_GETADDRINFO -DHAVE_GETNAMEINFO -DHAVE_TYPE_STRUCT_SOCKADDR_STORAGE -DHAVE_SYS_SELECT_H -DHAVE_RUBY_IO_H -DHAVE_RB_IO_T_FD -DHAVE_ST_FD -DHAVE_RB_IO_T_MODE -DHAVE_ST_MODE -DHAVE_TYPE_STRUCT_RFILE -DSIZEOF_STRUCT_RFILE=24 -DSIZEOF_INT=4 -DHAVE_RB_THREAD_BLOCKING_REGION -DHAVE_RB_STR_SET_LEN -DHAVE_RB_TIME_INTERVAL -D_GNU_SOURCE -DPOSIX_C_SOURCE=1-D_POSIX_C_SOURCE=200112L -fPIC -fno-strict-aliasing -g -g -O2 -O2 -g -Wall -Wno-parentheses -fPIC -o kgio_ext.o -c kgio_ext.c cc -I. -I/usr/include/ruby-1.9.0/x86_64-linux -I/usr/include/ruby-1.9.0 -I. -DHAVE_TYPE_CLOCKID_T -DHAVE_POLL -DHAVE_GETADDRINFO -DHAVE_GETNAMEINFO -DHAVE_TYPE_STRUCT_SOCKADDR_STORAGE -DHAVE_SYS_SELECT_H -DHAVE_RUBY_IO_H -DHAVE_RB_IO_T_FD -DHAVE_ST_FD -DHAVE_RB_IO_T_MODE -DHAVE_ST_MODE -DHAVE_TYPE_STRUCT_RFILE -DSIZEOF_STRUCT_RFILE=24 -DSIZEOF_INT=4 -DHAVE_RB_THREAD_BLOCKING_REGION -DHAVE_RB_STR_SET_LEN -DHAVE_RB_TIME_INTERVAL -D_GNU_SOURCE -DPOSIX_C_SOURCE=1-D_POSIX_C_SOURCE=200112L -fPIC -fno-strict-aliasing -g -g -O2 -O2 -g -Wall -Wno-parentheses -fPIC -o autopush.o -c autopush.c cc -I. -I/usr/include/ruby-1.9.0/x86_64-linux -I/usr/include/ruby-1.9.0 -I. -DHAVE_TYPE_CLOCKID_T -DHAVE_POLL -DHAVE_GETADDRINFO -DHAVE_GETNAMEINFO -DHAVE_TYPE_STRUCT_SOCKADDR_STORAGE -DHAVE_SYS_SELECT_H -DHAVE_RUBY_IO_H -DHAVE_RB_IO_T_FD -DHAVE_ST_FD -DHAVE_RB_IO_T_MODE -DHAVE_ST_MODE -DHAVE_TYPE_STRUCT_RFILE -DSIZEOF_STRUCT_RFILE=24 -DSIZEOF_INT=4 -DHAVE_RB_THREAD_BLOCKING_REGION -DHAVE_RB_STR_SET_LEN -DHAVE_RB_TIME_INTERVAL -D_GNU_SOURCE -DPOSIX_C_SOURCE=1-D_POSIX_C_SOURCE=200112L -fPIC -fno-strict-aliasing -g -g -O2 -O2 -g -Wall -Wno-parentheses -fPIC -o wait.o -c wait.c cc -I. -I/usr/include/ruby-1.9.0/x86_64-linux -I/usr/include/ruby-1.9.0 -I. 
-DHAVE_TYPE_CLOCKID_T -DHAVE_POLL -DHAVE_GETADDRINFO -DHAVE_GETNAMEINFO -DHAVE_TYPE_STRUCT_SOCKADDR_STORAGE -DHAVE_SYS_SELECT_H -DHAVE_RUBY_IO_H -DHAVE_RB_IO_T_FD -DHAVE_ST_FD -DHAVE_RB_IO_T_MODE -DHAVE_ST_MODE -DHAVE_TYPE_STRUCT_RFILE -DSIZEOF_STRUCT_RFILE=24 -DSIZEOF_INT=4 -DHAVE_RB_THREAD_BLOCKING_REGION -DHAVE_RB_STR_SET_LEN -DHAVE_RB_TIME_INTERVAL -D_GNU_SOURCE -DPOSIX_C_SOURCE=1-D_POSIX_C_SOURCE=200112L -fPIC -fno-strict-aliasing -g -g -O2 -O2 -g -Wall -Wno-parentheses -fPIC -o connect.o -c connect.c cc -I. -I/usr/include/ruby-1.9.0/x86_64-linux -I/usr/include/ruby-1.9.0 -I. -DHAVE_TYPE_CLOCKID_T -DHAVE_POLL -DHAVE_GETADDRINFO -DHAVE_GETNAMEINFO -DHAVE_TYPE_STRUCT_SOCKADDR_STORAGE -DHAVE_SYS_SELECT_H -DHAVE_RUBY_IO_H -DHAVE_RB_IO_T_FD -DHAVE_ST_FD -DHAVE_RB_IO_T_MODE -DHAVE_ST_MODE -DHAVE_TYPE_STRUCT_RFILE -DSIZEOF_STRUCT_RFILE=24 -DSIZEOF_INT=4 -DHAVE_RB_THREAD_BLOCKING_REGION -DHAVE_RB_STR_SET_LEN -DHAVE_RB_TIME_INTERVAL -D_GNU_SOURCE -DPOSIX_C_SOURCE=1-D_POSIX_C_SOURCE=200112L -fPIC -fno-strict-aliasing -g -g -O2 -O2 -g -Wall -Wno-parentheses -fPIC -o poll.o -c poll.c poll.c:11:18: error: st.h: No such file or directory poll.c: In function 'do_poll': poll.c:148: error: 'RUBY_UBF_IO' undeclared (first use in this function) poll.c:148: error: (Each undeclared identifier is reported only once poll.c:148: error: for each function it appears in.) make: *** [poll.o] Error 1 Gem files will remain installed in /usr/lib/ruby/gems/1.9.0/gems/kgio-2.5.0 for inspection. Results logged to /usr/lib/ruby/gems/1.9.0/gems/kgio-2.5.0/ext/kgio/gem_make.out
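    The failing compile is in the kgio native extension, and the missing st.h plus the undeclared RUBY_UBF_IO point at the Ruby 1.9.0 headers shipped with Debian 5: 1.9.0 was a development release, while unicorn and kgio target 1.8.6+ or 1.9.1+ MRI. A hedged sketch of how one might verify and work around this; the package and command names below are assumptions and may only be available from backports or a source build:

      # Confirm the header really is missing from the 1.9.0 include tree
      find /usr/include/ruby-1.9.0 -name 'st.h'

      # Option 1: install a 1.9.1-series interpreter and its headers, then retry
      apt-get install ruby1.9.1 ruby1.9.1-dev    # assumed package names
      gem1.9.1 install unicorn                   # assumed gem binary name

      # Option 2: build a current 1.9.x MRI from an unpacked source tree and use its gem
      ./configure --prefix=/opt/ruby && make && make install
      /opt/ruby/bin/gem install unicorn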

    Read the article

  • Parallel processing slower than sequential?

    - by zebediah49
    EDIT: For anyone who stumbles upon this in the future: ImageMagick uses an MP (multi-processing) library. It is faster to use the available cores if they are idle, but if you already have parallel jobs it is unhelpful. Do one of the following: run your jobs serially (with ImageMagick in parallel mode), or set MAGICK_THREAD_LIMIT=1 for the invocation of the ImageMagick binary in question. Making ImageMagick use only one thread slows it down by 20-30% in my test cases, but it meant I could run one job per core without issues, for a significant net increase in performance.

    Original question: While converting some images using ImageMagick, I noticed a somewhat strange effect. Using xargs was significantly slower than a standard for loop. Since xargs limited to a single process should behave like a for loop, I tested that and found it to be about the same. Thus, we have this demonstration:

      Quad core (AMD Athlon X4, 2.6 GHz)
      Working entirely on a tmpfs (16 GB RAM total; no swap)
      No other major loads

    Results:

      /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
      real 0m3.784s   user 0m2.240s   sys 0m0.230s

      /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level
      real 0m9.097s   user 0m28.020s  sys 0m0.910s

      /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level
      real 0m9.844s   user 0m33.200s  sys 0m1.270s

    Can anyone think of a reason why running two instances of this program takes more than twice as long in real time, and more than ten times as long in processor time, to complete the same task? After that initial hit, more processes do not seem to have as significant an effect. I thought it might have to do with disk seeking, so I did the test entirely in RAM. Could it have something to do with how convert works, and having more than one copy at once meaning it cannot use the processor cache as efficiently?

    EDIT: When done with 1000 x 769 KB files, performance is as expected. Interesting.

      /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
      real 3m37.679s  user 5m6.980s   sys 0m6.340s

      /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
      real 3m37.152s  user 5m6.140s   sys 0m6.530s

      /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level
      real 2m7.578s   user 5m35.410s  sys 0m6.050s

      /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 4 convert -auto-level
      real 1m36.959s  user 5m48.900s  sys 0m6.350s

      /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level
      real 1m36.392s  user 5m54.840s  sys 0m5.650s
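    Putting the workaround from the edit above next to the timing commands from the question: cap ImageMagick's own threading so that the only parallelism comes from xargs -P. A minimal sketch, reusing the question's filenames and options; the -P 4 value is simply one process per core on the quad-core box described above:

      export MAGICK_THREAD_LIMIT=1   # one worker thread per convert process
      time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 4 convert -auto-level
      # (convert -limit thread 1 ... is another way to cap threading, if your build supports it)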

    Read the article

  • Returning a list from a function in Python

    - by Jasper
    Hi, I'm creating a game for my sister, and I want a function to return a list variable, so I can pass it to another variable. The relevant code is as follows: def startNewGame(): while 1: #Introduction: print print """Hello, You will now be guided through the setup process. There are 7 steps to this. You can cancel setup at any time by typing 'cancelSetup' Thankyou""" #Step 1 (Name): print print """Step 1 of 7: Type in a name for your PotatoHead: """ inputPHName = raw_input('|Enter Name:|') if inputPHName == 'cancelSetup': sys.exit() #Step 2 (Gender): print print """Step 2 of 7: Choose the gender of your PotatoHead: input either 'm' or 'f' """ inputPHGender = raw_input('|Enter Gender:|') if inputPHGender == 'cancelSetup': sys.exit() #Step 3 (Colour): print print """Step 3 of 7: Choose the colour your PotatoHead will be: Only Red, Blue, Green and Yellow are currently supported """ inputPHColour = raw_input('|Enter Colour:|') if inputPHColour == 'cancelSetup': sys.exit() #Step 4 (Favourite Thing): print print """Step 4 of 7: Type your PotatoHead's favourite thing: """ inputPHFavThing = raw_input('|Enter Favourite Thing:|') if inputPHFavThing == 'cancelSetup': sys.exit() # Step 5 (First Toy): print print """Step 5 of 7: Choose a first toy for your PotatoHead: """ inputPHFirstToy = raw_input('|Enter First Toy:|') if inputPHFirstToy == 'cancelSetup': sys.exit() #Step 6 (Check stats): while 1: print print """Step 6 of 7: Check the following details to make sure that they are correct: """ print print """Name:\t\t\t""" + inputPHName + """ Gender:\t\t\t""" + inputPHGender + """ Colour:\t\t\t""" + inputPHColour + """ Favourite Thing:\t""" + inputPHFavThing + """ First Toy:\t\t""" + inputPHFirstToy + """ """ print print "Enter 'y' or 'n'" inputMCheckStats = raw_input('|Is this information correct?|') if inputMCheckStats == 'cancelSetup': sys.exit() elif inputMCheckStats == 'y': break elif inputMCheckStats == 'n': print "Re-enter info: ..." print break else: "The value you entered was incorrect, please re-enter your choice" if inputMCheckStats == 'y': break #Step 7 (Define variables for the creation of the PotatoHead): MFCreatePH = [] print print """Step 7 of 7: Your PotatoHead will now be created... Creating variables... """ MFCreatePH = [inputPHName, inputPHGender, inputPHColour, inputPHFavThing, inputPHFirstToy] time.sleep(1) print "inputPHName" print time.sleep(1) print "inputPHFirstToy" print return MFCreatePH print "Your PotatoHead varibles have been successfully created!" Then it is passed to another function that was imported from another module from potatohead import * ... 
welcomeMessage() MCreatePH = startGame() myPotatoHead = PotatoHead(MCreatePH) the code for the PotatoHead object is in the potatohead.py module which was imported above, and is as follows: class PotatoHead: #Initialise the PotatoHead object: def __init__(self, data): self.data = data #Takes the data from the start new game function - see main.py #Defines the PotatoHead starting attributes: self.name = data[0] self.gender = data[1] self.colour = data[2] self.favouriteThing = data[3] self.firstToy = data[4] self.age = '0.0' self.education = [self.eduScience, self.eduEnglish, self.eduMaths] = '0.0', '0.0', '0.0' self.fitness = '0.0' self.happiness = '10.0' self.health = '10.0' self.hunger = '0.0' self.tiredness = 'Not in this version' self.toys = [] self.toys.append(self.firstToy) self.time = '0' #Sets data lists for saving, loading and general use: self.phData = (self.name, self.gender, self.colour, self.favouriteThing, self.firstToy) self.phAdvData = (self.name, self.gender, self.colour, self.favouriteThing, self.firstToy, self.age, self.education, self.fitness, self.happiness, self.health, self.hunger, self.tiredness, self.toys) However, when I run the program this error appears: Traceback (most recent call last): File "/Users/Jasper/Documents/Programming/Potato Head Game/Current/main.py", line 158, in <module> myPotatoHead = PotatoHead(MCreatePH) File "/Users/Jasper/Documents/Programming/Potato Head Game/Current/potatohead.py", line 15, in __init__ self.name = data[0] TypeError: 'NoneType' object is unsubscriptable What am i doing wrong? -----EDIT----- The program finishes as so: Step 7 of 7: Your PotatoHead will now be created... Creating variables... inputPHName inputPHFirstToy Then it goes to the Tracback -----EDIT2----- This is the EXACT code I'm running in its entirety: #+--------------------------------------+# #| main.py |# #| A main module for the Potato Head |# #| Game to pull the other modules |# #| together and control through user |# #| input |# #| Author: |# #| Date Created / Modified: |# #| 3/2/10 | 20/2/10 |# #+--------------------------------------+# Tested: No #Import the required modules: import time import random import sys from potatohead import * from toy import * #Start the Game: def welcomeMessage(): print "----- START NEW GAME -----------------------" print "==Print Welcome Message==" print "loading... \t loading... \t loading..." time.sleep(1) print "loading..." time.sleep(1) print "LOADED..." print; print; print; print """Hello, Welcome to the Potato Head Game. In this game you can create a Potato Head, and look after it, like a Virtual Pet. This game is constantly being updated and expanded. Please look out for updates. """ #Choose whether to start a new game or load a previously saved game: def startGame(): while 1: print "--------------------" print """ Choose an option: New_Game or Load_Game """ startGameInput = raw_input('>>> >') if startGameInput == 'New_Game': startNewGame() break elif startGameInput == 'Load_Game': print "This function is not yet supported" print "Try Again" print else: print "You must have mistyped the command: Type either 'New_Game' or 'Load_Game'" print #Set the new game up: def startNewGame(): while 1: #Introduction: print print """Hello, You will now be guided through the setup process. There are 7 steps to this. 
You can cancel setup at any time by typing 'cancelSetup' Thankyou""" #Step 1 (Name): print print """Step 1 of 7: Type in a name for your PotatoHead: """ inputPHName = raw_input('|Enter Name:|') if inputPHName == 'cancelSetup': sys.exit() #Step 2 (Gender): print print """Step 2 of 7: Choose the gender of your PotatoHead: input either 'm' or 'f' """ inputPHGender = raw_input('|Enter Gender:|') if inputPHGender == 'cancelSetup': sys.exit() #Step 3 (Colour): print print """Step 3 of 7: Choose the colour your PotatoHead will be: Only Red, Blue, Green and Yellow are currently supported """ inputPHColour = raw_input('|Enter Colour:|') if inputPHColour == 'cancelSetup': sys.exit() #Step 4 (Favourite Thing): print print """Step 4 of 7: Type your PotatoHead's favourite thing: """ inputPHFavThing = raw_input('|Enter Favourite Thing:|') if inputPHFavThing == 'cancelSetup': sys.exit() # Step 5 (First Toy): print print """Step 5 of 7: Choose a first toy for your PotatoHead: """ inputPHFirstToy = raw_input('|Enter First Toy:|') if inputPHFirstToy == 'cancelSetup': sys.exit() #Step 6 (Check stats): while 1: print print """Step 6 of 7: Check the following details to make sure that they are correct: """ print print """Name:\t\t\t""" + inputPHName + """ Gender:\t\t\t""" + inputPHGender + """ Colour:\t\t\t""" + inputPHColour + """ Favourite Thing:\t""" + inputPHFavThing + """ First Toy:\t\t""" + inputPHFirstToy + """ """ print print "Enter 'y' or 'n'" inputMCheckStats = raw_input('|Is this information correct?|') if inputMCheckStats == 'cancelSetup': sys.exit() elif inputMCheckStats == 'y': break elif inputMCheckStats == 'n': print "Re-enter info: ..." print break else: "The value you entered was incorrect, please re-enter your choice" if inputMCheckStats == 'y': break #Step 7 (Define variables for the creation of the PotatoHead): MFCreatePH = [] print print """Step 7 of 7: Your PotatoHead will now be created... Creating variables... """ MFCreatePH = [inputPHName, inputPHGender, inputPHColour, inputPHFavThing, inputPHFirstToy] time.sleep(1) print "inputPHName" print time.sleep(1) print "inputPHFirstToy" print return MFCreatePH print "Your PotatoHead varibles have been successfully created!" 
#Run Program: welcomeMessage() MCreatePH = startGame() myPotatoHead = PotatoHead(MCreatePH) The potatohead.py module is as follows: #+--------------------------------------+# #| potatohead.py |# #| A module for the Potato Head Game |# #| Author: |# #| Date Created / Modified: |# #| 24/1/10 | 24/1/10 |# #+--------------------------------------+# Tested: Yes (24/1/10) #Create the PotatoHead class: class PotatoHead: #Initialise the PotatoHead object: def __init__(self, data): self.data = data #Takes the data from the start new game function - see main.py #Defines the PotatoHead starting attributes: self.name = data[0] self.gender = data[1] self.colour = data[2] self.favouriteThing = data[3] self.firstToy = data[4] self.age = '0.0' self.education = [self.eduScience, self.eduEnglish, self.eduMaths] = '0.0', '0.0', '0.0' self.fitness = '0.0' self.happiness = '10.0' self.health = '10.0' self.hunger = '0.0' self.tiredness = 'Not in this version' self.toys = [] self.toys.append(self.firstToy) self.time = '0' #Sets data lists for saving, loading and general use: self.phData = (self.name, self.gender, self.colour, self.favouriteThing, self.firstToy) self.phAdvData = (self.name, self.gender, self.colour, self.favouriteThing, self.firstToy, self.age, self.education, self.fitness, self.happiness, self.health, self.hunger, self.tiredness, self.toys) #Define the phStats variable, enabling easy display of PotatoHead attributes: def phDefStats(self): self.phStats = """Your Potato Head's Stats are as follows: ---------------------------------------- Name: \t\t""" + self.name + """ Gender: \t\t""" + self.gender + """ Colour: \t\t""" + self.colour + """ Favourite Thing: \t""" + self.favouriteThing + """ First Toy: \t""" + self.firstToy + """ Age: \t\t""" + self.age + """ Education: \t""" + str(float(self.eduScience) + float(self.eduEnglish) + float(self.eduMaths)) + """ -> Science: \t""" + self.eduScience + """ -> English: \t""" + self.eduEnglish + """ -> Maths: \t""" + self.eduMaths + """ Fitness: \t""" + self.fitness + """ Happiness: \t""" + self.happiness + """ Health: \t""" + self.health + """ Hunger: \t""" + self.hunger + """ Tiredness: \t""" + self.tiredness + """ Toys: \t\t""" + str(self.toys) + """ Time: \t\t""" + self.time + """ """ #Change the PotatoHead's favourite thing: def phChangeFavouriteThing(self, newFavouriteThing): self.favouriteThing = newFavouriteThing phChangeFavouriteThingMsg = "Your Potato Head's favourite thing is " + self.favouriteThing + "." #"Feed" the Potato Head i.e. Reduce the 'self.hunger' attribute's value: def phFeed(self): if float(self.hunger) >=3.0: self.hunger = str(float(self.hunger) - 3.0) elif float(self.hunger) < 3.0: self.hunger = '0.0' self.time = str(int(self.time) + 1) #Pass time #"Exercise" the Potato Head if between the ages of 5 and 25: def phExercise(self): if float(self.age) < 5.1 or float(self.age) > 25.1: print "This Potato Head is either too young or too old for this activity!" else: if float(self.fitness) <= 8.0: self.fitness = str(float(self.fitness) + 2.0) elif float(self.fitness) > 8.0: self.fitness = '10.0' self.time = str(int(self.time) + 1) #Pass time #"Teach" the Potato Head: def phTeach(self, subject): if subject == 'Science': if float(self.eduScience) <= 9.0: self.eduScience = str(float(self.eduScience) + 1.0) elif float(self.eduScience) > 9.0 and float(self.eduScience) < 10.0: self.eduScience = '10.0' elif float(self.eduScience) == 10.0: print "Your Potato Head has gained the highest level of qualifications in this subject! 
It cannot learn any more!" elif subject == 'English': if float(self.eduEnglish) <= 9.0: self.eduEnglish = str(float(self.eduEnglish) + 1.0) elif float(self.eduEnglish) > 9.0 and float(self.eduEnglish) < 10.0: self.eduEnglish = '10.0' elif float(self.eduEnglish) == 10.0: print "Your Potato Head has gained the highest level of qualifications in this subject! It cannot learn any more!" elif subject == 'Maths': if float(self.eduMaths) <= 9.0: self.eduMaths = str(float(self.eduMaths) + 1.0) elif float(self.eduMaths) > 9.0 and float(self.eduMaths) < 10.0: self.eduMaths = '10.0' elif float(self.eduMaths) == 10.0: print "Your Potato Head has gained the highest level of qualifications in this subject! It cannot learn any more!" else: print "That subject is not an option..." print "Please choose either Science, English or Maths" self.time = str(int(self.time) + 1) #Pass time #Increase Health: def phGoToDoctor(self): self.health = '10.0' self.time = str(int(self.time) + 1) #Pass time #Sleep: Age, change stats: #(Time Passes) def phSleep(self): self.time = '0' #Resets time for next 'day' (can do more things next day) #Increase hunger: if float(self.hunger) <= 5.0: self.hunger = str(float(self.hunger) + 5.0) elif float(self.hunger) > 5.0: self.hunger = '10.0' #Lower Fitness: if float(self.fitness) >= 0.5: self.fitness = str(float(self.fitness) - 0.5) elif float(self.fitness) < 0.5: self.fitness = '0.0' #Lower Health: if float(self.health) >= 0.5: self.health = str(float(self.health) - 0.5) elif float(self.health) < 0.5: self.health = '0.0' #Lower Happiness: if float(self.happiness) >= 2.0: self.happiness = str(float(self.happiness) - 2.0) elif float(self.happiness) < 2.0: self.happiness = '0.0' #Increase the Potato Head's age: self.age = str(float(self.age) + 0.1) The game is still under development - There may be parts of modules that aren't complete, but I don't think they're causing the problem
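    For what it is worth, the traceback above is consistent with startGame() rather than startNewGame() being the problem: startGame() calls startNewGame() but discards its return value and then breaks, so it implicitly returns None, MCreatePH becomes None, and PotatoHead(None) fails on data[0]. A minimal sketch of the fix, using the function as posted (Python 2, as in the question):

      #Choose whether to start a new game or load a previously saved game:
      def startGame():
          while 1:
              print "--------------------"
              print """ Choose an option: New_Game or Load_Game """
              startGameInput = raw_input('>>> >')
              if startGameInput == 'New_Game':
                  return startNewGame()   # return the setup list instead of discarding it
              elif startGameInput == 'Load_Game':
                  print "This function is not yet supported"
                  print "Try Again"
                  print
              else:
                  print "You must have mistyped the command: Type either 'New_Game' or 'Load_Game'"
                  print

    On a related note, the print "Your PotatoHead varibles have been successfully created!" statement in startNewGame() sits after the return, so it can never execute.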

    Read the article

  • SQL Server – Undelete a Table and Restore a Single Table from Backup

    - by Mladen Prajdic
    This post is part of the monthly community event called T-SQL Tuesday started by Adam Machanic (blog|twitter) and hosted by someone else each month. This month the host is Sankar Reddy (blog|twitter) and the topic is Misconceptions in SQL Server. You can follow posts for this theme on Twitter by looking at #TSQL2sDay hashtag. Let me start by saying: This code is a crazy hack that is to never be used unless you really, really have to. Really! And I don’t think there’s a time when you would really have to use it for real. Because it’s a hack there are number of things that can go wrong so play with it knowing that. I’ve managed to totally corrupt one database. :) Oh… and for those saying: yeah yeah.. you have a single table in a file group and you’re restoring that, I say “nay nay” to you. As we all know SQL Server can’t do single table restores from backup. This is kind of a obvious thing due to different relational integrity (RI) concerns. Since we have to maintain that we have to restore all tables represented in a RI graph. For this exercise i say BAH! to those concerns. Note that this method “works” only for simple tables that don’t have LOB and off rows data. The code can be expanded to include those but I’ve tried to leave things “simple”. Note that for this to work our table needs to be relatively static data-wise. This doesn’t work for OLTP table. Products are a perfect example of static data. They don’t change much between backups, pretty much everything depends on them and their table is one of those tables that are relatively easy to accidentally delete everything from. This only works if the database is in Full or Bulk-Logged recovery mode for tables where the contents have been deleted or truncated but NOT when a table was dropped. Everything we’ll talk about has to be done before the data pages are reused for other purposes. After deletion or truncation the pages are marked as reusable so you have to act fast. The best thing probably is to put the database into single user mode ASAP while you’re performing this procedure and return it to multi user after you’re done. How do we do it? We will be using an undocumented but known DBCC commands: DBCC PAGE, an undocumented function sys.fn_dblog and a little known DATABASE RESTORE PAGE option. All tests will be on a copy of Production.Product table in AdventureWorks database called Production.Product1 because the original table has FK constraints that prevent us from truncating it for testing. -- create a duplicate table. This doesn't preserve indexes!SELECT *INTO AdventureWorks.Production.Product1FROM AdventureWorks.Production.Product   After we run this code take a full back to perform further testing.   First let’s see what the difference between DELETE and TRUNCATE is when it comes to logging. With DELETE every row deletion is logged in the transaction log. With TRUNCATE only whole data page deallocations are logged in the transaction log. Getting deleted data pages is simple. All we have to look for is row delete entry in the sys.fn_dblog output. But getting data pages that were truncated from the transaction log presents a bit of an interesting problem. I will not go into depths of IAM(Index Allocation Map) and PFS (Page Free Space) pages but suffice to say that every IAM page has intervals that tell us which data pages are allocated for a table and which aren’t. 
If we deep dive into the sys.fn_dblog output we can see that once you truncate a table all the pages in all the intervals are deallocated and this is shown in the PFS page transaction log entry as deallocation of pages. For every 8 pages in the same extent there is one PFS page row in the transaction log. This row holds information about all 8 pages in CSV format which means we can get to this data with some parsing. A great help for parsing this stuff is Peter Debetta’s handy function dbo.HexStrToVarBin that converts hexadecimal string into a varbinary value that can be easily converted to integer tus giving us a readable page number. The shortened (columns removed) sys.fn_dblog output for a PFS page with CSV data for 1 extent (8 data pages) looks like this: -- [Page ID] is displayed in hex format. -- To convert it to readable int we'll use dbo.HexStrToVarBin function found at -- http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx -- This function must be installed in the master databaseSELECT Context, AllocUnitName, [Page ID], DescriptionFROM sys.fn_dblog(NULL, NULL)WHERE [Current LSN] = '00000031:00000a46:007d' The pages at the end marked with 0x00—> are pages that are allocated in the extent but are not part of a table. We can inspect the raw content of each data page with a DBCC PAGE command: -- we need this trace flag to redirect output to the query window.DBCC TRACEON (3604); -- WITH TABLERESULTS gives us data in table format instead of message format-- we use format option 3 because it's the easiest to read and manipulate further onDBCC PAGE (AdventureWorks, 1, 613, 3) WITH TABLERESULTS   Since the DBACC PAGE output can be quite extensive I won’t put it here. You can see an example of it in the link at the beginning of this section. Getting deleted data back When we run a delete statement every row to be deleted is marked as a ghost record. A background process periodically cleans up those rows. A huge misconception is that the data is actually removed. It’s not. Only the pointers to the rows are removed while the data itself is still on the data page. We just can’t access it with normal means. To get those pointers back we need to restore every deleted page using the RESTORE PAGE option mentioned above. This restore must be done from a full backup, followed by any differential and log backups that you may have. This is necessary to bring the pages up to the same point in time as the rest of the data.  However the restore doesn’t magically connect the restored page back to the original table. It simply replaces the current page with the one from the backup. After the restore we use the DBCC PAGE to read data directly from all data pages and insert that data into a temporary table. To finish the RESTORE PAGE  procedure we finally have to take a tail log backup (simple backup of the transaction log) and restore it back. We can now insert data from the temporary table to our original table by hand. Getting truncated data back When we run a truncate the truncated data pages aren’t touched at all. Even the pointers to rows stay unchanged. Because of this getting data back from truncated table is simple. we just have to find out which pages belonged to our table and use DBCC PAGE to read data off of them. No restore is necessary. Turns out that the problems we had with finding the data pages is alleviated by not having to do a RESTORE PAGE procedure. Stop stalling… show me The Code! 
This is the code for getting back deleted and truncated data back. It’s commented in all the right places so don’t be afraid to take a closer look. Make sure you have a full backup before trying this out. Also I suggest that the last step of backing and restoring the tail log is performed by hand. USE masterGOIF OBJECT_ID('dbo.HexStrToVarBin') IS NULL RAISERROR ('No dbo.HexStrToVarBin installed. Go to http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx and install it in master database' , 18, 1) SET NOCOUNT ONBEGIN TRY DECLARE @dbName VARCHAR(1000), @schemaName VARCHAR(1000), @tableName VARCHAR(1000), @fullBackupName VARCHAR(1000), @undeletedTableName VARCHAR(1000), @sql VARCHAR(MAX), @tableWasTruncated bit; /* THE FIRST LINE ARE OUR INPUT PARAMETERS In this case we're trying to recover Production.Product1 table in AdventureWorks database. My full backup of AdventureWorks database is at e:\AW.bak */ SELECT @dbName = 'AdventureWorks', @schemaName = 'Production', @tableName = 'Product1', @fullBackupName = 'e:\AW.bak', @undeletedTableName = '##' + @tableName + '_Undeleted', @tableWasTruncated = 0, -- copy the structure from original table to a temp table that we'll fill with restored data @sql = 'IF OBJECT_ID(''tempdb..' + @undeletedTableName + ''') IS NOT NULL DROP TABLE ' + @undeletedTableName + ' SELECT *' + ' INTO ' + @undeletedTableName + ' FROM [' + @dbName + '].[' + @schemaName + '].[' + @tableName + ']' + ' WHERE 1 = 0' EXEC (@sql) IF OBJECT_ID('tempdb..#PagesToRestore') IS NOT NULL DROP TABLE #PagesToRestore /* FIND DATA PAGES WE NEED TO RESTORE*/ CREATE TABLE #PagesToRestore ([ID] INT IDENTITY(1,1), [FileID] INT, [PageID] INT, [SQLtoExec] VARCHAR(1000)) -- DBCC PACE statement to run later RAISERROR ('Looking for deleted pages...', 10, 1) -- use T-LOG direct read to get deleted data pages INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec]) EXEC('USE [' + @dbName + '];SELECT FileID, PageID, ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExecFROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID, CONVERT(VARCHAR(100), ' + 'CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageIDFROM sys.fn_dblog(NULL, NULL)WHERE AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'' ' + 'AND Context IN (''LCX_MARK_AS_GHOST'', ''LCX_HEAP'') AND Operation in (''LOP_DELETE_ROWS''))t');SELECT *FROM #PagesToRestore -- if upper EXEC returns 0 rows it means the table was truncated so find truncated pages IF (SELECT COUNT(*) FROM #PagesToRestore) = 0 BEGIN RAISERROR ('No deleted pages found. Looking for truncated pages...', 10, 1) -- use T-LOG read to get truncated data pages INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec]) -- dark magic happens here -- because truncation simply deallocates pages we have to find out which pages were deallocated. -- we can find this out by looking at the PFS page row's Description column. -- for every deallocated extent the Description has a CSV of 8 pages in that extent. -- then it's just a matter of parsing it. 
-- we also remove the pages in the extent that weren't allocated to the table itself -- marked with '0x00-->00' EXEC ('USE [' + @dbName + '];DECLARE @truncatedPages TABLE(DeallocatedPages VARCHAR(8000), IsMultipleDeallocs BIT);INSERT INTO @truncatedPagesSELECT REPLACE(REPLACE(Description, ''Deallocated '', ''Y''), ''0x00-->00 '', ''N'') + '';'' AS DeallocatedPages, CHARINDEX('';'', Description) AS IsMultipleDeallocsFROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID, CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageID, DescriptionFROM sys.fn_dblog(NULL, NULL)WHERE Context IN (''LCX_PFS'') AND Description LIKE ''Deallocated%'' AND AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'') t;SELECT FileID, PageID , ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExecFROM (SELECT LEFT(PageAndFile, 1) as WasPageAllocatedToTable , SUBSTRING(PageAndFile, 2, CHARINDEX('':'', PageAndFile) - 2 ) as FileID , CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING(PageAndFile, CHARINDEX('':'', PageAndFile) + 1, LEN(PageAndFile))))) as PageIDFROM ( SELECT SUBSTRING(DeallocatedPages, delimPosStart, delimPosEnd - delimPosStart) as PageAndFile, IsMultipleDeallocs FROM ( SELECT *, CHARINDEX('';'', DeallocatedPages)*(N-1) + 1 AS delimPosStart, CHARINDEX('';'', DeallocatedPages)*N AS delimPosEnd FROM @truncatedPages t1 CROSS APPLY (SELECT TOP (case when t1.IsMultipleDeallocs = 1 then 8 else 1 end) ROW_NUMBER() OVER(ORDER BY number) as N FROM master..spt_values) t2 )t)t)tWHERE WasPageAllocatedToTable = ''Y''') SELECT @tableWasTruncated = 1 END DECLARE @lastID INT, @pagesCount INT SELECT @lastID = 1, @pagesCount = COUNT(*) FROM #PagesToRestore SELECT @sql = 'Number of pages to restore: ' + CONVERT(VARCHAR(10), @pagesCount) IF @pagesCount = 0 RAISERROR ('No data pages to restore.', 18, 1) ELSE RAISERROR (@sql, 10, 1) -- If the table was truncated we'll read the data directly from data pages without restoring from backup IF @tableWasTruncated = 0 BEGIN -- RESTORE DATA PAGES FROM FULL BACKUP IN BATCHES OF 200 WHILE @lastID <= @pagesCount BEGIN -- create CSV string of pages to restore SELECT @sql = STUFF((SELECT ',' + CONVERT(VARCHAR(100), FileID) + ':' + CONVERT(VARCHAR(100), PageID) FROM #PagesToRestore WHERE ID BETWEEN @lastID AND @lastID + 200 ORDER BY ID FOR XML PATH('')), 1, 1, '') SELECT @sql = 'RESTORE DATABASE [' + @dbName + '] PAGE = ''' + @sql + ''' FROM DISK = ''' + @fullBackupName + '''' RAISERROR ('Starting RESTORE command:' , 10, 1) WITH NOWAIT; RAISERROR (@sql , 10, 1) WITH NOWAIT; EXEC(@sql); RAISERROR ('Restore DONE' , 10, 1) WITH NOWAIT; SELECT @lastID = @lastID + 200 END /* If you have any differential or transaction log backups you should restore them here to bring the previously restored data pages up to date */ END DECLARE @dbccSinglePage TABLE ( [ParentObject] NVARCHAR(500), [Object] NVARCHAR(500), [Field] NVARCHAR(500), [VALUE] NVARCHAR(MAX) ) DECLARE @cols NVARCHAR(MAX), @paramDefinition NVARCHAR(500), @SQLtoExec VARCHAR(1000), @FileID VARCHAR(100), @PageID VARCHAR(100), @i INT = 1 -- Get deleted table columns from information_schema view -- Need sp_executeSQL because database name can't be passed in as variable SELECT @cols = 'select @cols = STUFF((SELECT '', ['' + COLUMN_NAME + '']''FROM ' + @dbName + '.INFORMATION_SCHEMA.COLUMNSWHERE TABLE_NAME = ''' + @tableName + ''' AND TABLE_SCHEMA = ''' + @schemaName + '''ORDER BY ORDINAL_POSITIONFOR XML 
PATH('''')), 1, 2, '''')', @paramDefinition = N'@cols nvarchar(max) OUTPUT' EXECUTE sp_executesql @cols, @paramDefinition, @cols = @cols OUTPUT -- Loop through all the restored data pages, -- read data from them and insert them into temp table -- which you can then insert into the orignial deleted table DECLARE dbccPageCursor CURSOR GLOBAL FORWARD_ONLY FOR SELECT [FileID], [PageID], [SQLtoExec] FROM #PagesToRestore ORDER BY [FileID], [PageID] OPEN dbccPageCursor; FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec; WHILE @@FETCH_STATUS = 0 BEGIN RAISERROR ('---------------------------------------------', 10, 1) WITH NOWAIT; SELECT @sql = 'Loop iteration: ' + CONVERT(VARCHAR(10), @i); RAISERROR (@sql, 10, 1) WITH NOWAIT; SELECT @sql = 'Running: ' + @SQLtoExec RAISERROR (@sql, 10, 1) WITH NOWAIT; -- if something goes wrong with DBCC execution or data gathering, skip it but print error BEGIN TRY INSERT INTO @dbccSinglePage EXEC (@SQLtoExec) -- make the data insert magic happen here IF (SELECT CONVERT(BIGINT, [VALUE]) FROM @dbccSinglePage WHERE [Field] LIKE '%Metadata: ObjectId%') = OBJECT_ID('['+@dbName+'].['+@schemaName +'].['+@tableName+']') BEGIN DELETE @dbccSinglePage WHERE NOT ([ParentObject] LIKE 'Slot % Offset %' AND [Object] LIKE 'Slot % Column %') SELECT @sql = 'USE tempdb; ' + 'IF (OBJECTPROPERTY(object_id(''' + @undeletedTableName + '''), ''TableHasIdentity'') = 1) ' + 'SET IDENTITY_INSERT ' + @undeletedTableName + ' ON; ' + 'INSERT INTO ' + @undeletedTableName + '(' + @cols + ') ' + STUFF((SELECT ' UNION ALL SELECT ' + STUFF((SELECT ', ' + CASE WHEN VALUE = '[NULL]' THEN 'NULL' ELSE '''' + [VALUE] + '''' END FROM ( -- the unicorn help here to correctly set ordinal numbers of columns in a data page -- it's turning STRING order into INT order (1,10,11,2,21 into 1,2,..10,11...21) SELECT [ParentObject], [Object], Field, VALUE, RIGHT('00000' + O1, 6) AS ParentObjectOrder, RIGHT('00000' + REVERSE(LEFT(O2, CHARINDEX(' ', O2)-1)), 6) AS ObjectOrder FROM ( SELECT [ParentObject], [Object], Field, VALUE, REPLACE(LEFT([ParentObject], CHARINDEX('Offset', [ParentObject])-1), 'Slot ', '') AS O1, REVERSE(LEFT([Object], CHARINDEX('Offset ', [Object])-2)) AS O2 FROM @dbccSinglePage WHERE t.ParentObject = ParentObject )t)t ORDER BY ParentObjectOrder, ObjectOrder FOR XML PATH('')), 1, 2, '') FROM @dbccSinglePage t GROUP BY ParentObject FOR XML PATH('') ), 1, 11, '') + ';' RAISERROR (@sql, 10, 1) WITH NOWAIT; EXEC (@sql) END END TRY BEGIN CATCH SELECT @sql = 'ERROR!!!' 
+ CHAR(10) + CHAR(13) + 'ErrorNumber: ' + ERROR_NUMBER() + '; ErrorMessage' + ERROR_MESSAGE() + CHAR(10) + CHAR(13) + 'FileID: ' + @FileID + '; PageID: ' + @PageID RAISERROR (@sql, 10, 1) WITH NOWAIT; END CATCH DELETE @dbccSinglePage SELECT @sql = 'Pages left to process: ' + CONVERT(VARCHAR(10), @pagesCount - @i) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13), @i = @i+1 RAISERROR (@sql, 10, 1) WITH NOWAIT; FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec; END CLOSE dbccPageCursor; DEALLOCATE dbccPageCursor; EXEC ('SELECT ''' + @undeletedTableName + ''' as TableName; SELECT * FROM ' + @undeletedTableName)END TRYBEGIN CATCH SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage IF CURSOR_STATUS ('global', 'dbccPageCursor') >= 0 BEGIN CLOSE dbccPageCursor; DEALLOCATE dbccPageCursor; ENDEND CATCH-- if the table was deleted we need to finish the restore page sequenceIF @tableWasTruncated = 0BEGIN -- take a log tail backup and then restore it to complete page restore process DECLARE @currentDate VARCHAR(30) SELECT @currentDate = CONVERT(VARCHAR(30), GETDATE(), 112) RAISERROR ('Starting Log Tail backup to c:\Temp ...', 10, 1) WITH NOWAIT; PRINT ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') EXEC ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') RAISERROR ('Log Tail backup done.', 10, 1) WITH NOWAIT; RAISERROR ('Starting Log Tail restore from c:\Temp ...', 10, 1) WITH NOWAIT; PRINT ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') EXEC ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') RAISERROR ('Log Tail restore done.', 10, 1) WITH NOWAIT;END-- The last step is manual. Insert data from our temporary table to the original deleted table The misconception here is that you can do a single table restore properly in SQL Server. You can't. But with little experimentation you can get pretty close to it. One way to possible remove a dependency on a backup to retrieve deleted pages is to quickly run a similar script to the upper one that gets data directly from data pages while the rows are still marked as ghost records. It could be done if we could beat the ghost record cleanup task.
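    As the script notes, the very last step is manual. A hedged sketch of that final insert for the worked example in this post; the global temp table name follows the script's '##' + @tableName + '_Undeleted' convention, and IDENTITY_INSERT is needed here only because SELECT INTO copied the identity property of ProductID from Production.Product:

      USE AdventureWorks;

      SET IDENTITY_INSERT Production.Product1 ON;

      INSERT INTO Production.Product1 (ProductID, Name, ProductNumber /* ...remaining columns... */)
      SELECT ProductID, Name, ProductNumber /* ...remaining columns... */
      FROM ##Product1_Undeleted;

      SET IDENTITY_INSERT Production.Product1 OFF;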

    Read the article

  • Seeking on a Heap, and Two Useful DMVs

    - by Paul White
    So far in this mini-series on seeks and scans, we have seen that a simple ‘seek’ operation can be much more complex than it first appears.  A seek can contain one or more seek predicates – each of which can either identify at most one row in a unique index (a singleton lookup) or a range of values (a range scan).  When looking at a query plan, we will often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is.  As you saw in the first post in this series, the number of hidden seeking operations can have an appreciable impact on performance. Measuring Seeks and Scans I mentioned in my last post that there is no way to tell from a graphical query plan whether you are seeing a singleton lookup or a range scan.  You can work it out – if you happen to know that the index is defined as unique and the seek predicate is an equality comparison, but there’s no separate property that says ‘singleton lookup’ or ‘range scan’.  This is a shame, and if I had my way, the query plan would show different icons for range scans and singleton lookups – perhaps also indicating whether the operation was one or more of those operations underneath the covers. In light of all that, you might be wondering if there is another way to measure how many seeks of either type are occurring in your system, or for a particular query.  As is often the case, the answer is yes – we can use a couple of dynamic management views (DMVs): sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats. Index Usage Stats The index usage stats DMV contains counts of index operations from the perspective of the Query Executor (QE) – the SQL Server component that is responsible for executing the query plan.  It has three columns that are of particular interest to us: user_seeks – the number of times an Index Seek operator appears in an executed plan user_scans – the number of times a Table Scan or Index Scan operator appears in an executed plan user_lookups – the number of times an RID or Key Lookup operator appears in an executed plan An operator is counted once per execution (generating an estimated plan does not affect the totals), so an Index Seek that executes 10,000 times in a single plan execution adds 1 to the count of user seeks.  Even less intuitively, an operator is also counted once per execution even if it is not executed at all.  I will show you a demonstration of each of these things later in this post. Index Operational Stats The index operational stats DMV contains counts of index and table operations from the perspective of the Storage Engine (SE).  It contains a wealth of interesting information, but the two columns of interest to us right now are: range_scan_count – the number of range scans (including unrestricted full scans) on a heap or index structure singleton_lookup_count – the number of singleton lookups in a heap or index structure This DMV counts each SE operation, so 10,000 singleton lookups will add 10,000 to the singleton lookup count column, and a table scan that is executed 5 times will add 5 to the range scan count. The Test Rig To explore the behaviour of seeks and scans in detail, we will need to create a test environment.  The scripts presented here are best run on SQL Server 2008 Developer Edition, but the majority of the tests will work just fine on SQL Server 2005.  A couple of tests use partitioning, but these will be skipped if you are not running an Enterprise-equivalent SKU.  
Ok, first up we need a database: USE master; GO IF DB_ID('ScansAndSeeks') IS NOT NULL DROP DATABASE ScansAndSeeks; GO CREATE DATABASE ScansAndSeeks; GO USE ScansAndSeeks; GO ALTER DATABASE ScansAndSeeks SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE ScansAndSeeks SET AUTO_CLOSE OFF, AUTO_SHRINK OFF, AUTO_CREATE_STATISTICS OFF, AUTO_UPDATE_STATISTICS OFF, PARAMETERIZATION SIMPLE, READ_COMMITTED_SNAPSHOT OFF, RESTRICTED_USER ; Notice that several database options are set in particular ways to ensure we get meaningful and reproducible results from the DMVs.  In particular, the options to auto-create and update statistics are disabled.  There are also three stored procedures, the first of which creates a test table (which may or may not be partitioned).  The table is pretty much the same one we used yesterday: The table has 100 rows, and both the key_col and data columns contain the same values – the integers from 1 to 100 inclusive.  The table is a heap, with a non-clustered primary key on key_col, and a non-clustered non-unique index on the data column.  The only reason I have used a heap here, rather than a clustered table, is so I can demonstrate a seek on a heap later on.  The table has an extra column (not shown because I am too lazy to update the diagram from yesterday) called padding – a CHAR(100) column that just contains 100 spaces in every row.  It’s just there to discourage SQL Server from choosing table scan over an index + RID lookup in one of the tests. The first stored procedure is called ResetTest: CREATE PROCEDURE dbo.ResetTest @Partitioned BIT = 'false' AS BEGIN SET NOCOUNT ON ; IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ; -- Test table is a heap -- Non-clustered primary key on 'key_col' CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ; IF @Partitioned = 'true' BEGIN -- Enterprise, Trial, or Developer -- required for partitioning tests IF SERVERPROPERTY('EngineEdition') = 3 BEGIN EXECUTE (' DROP TABLE dbo.Example ; IF EXISTS ( SELECT 1 FROM sys.partition_schemes WHERE name = N''PS'' ) DROP PARTITION SCHEME PS ; IF EXISTS ( SELECT 1 FROM sys.partition_functions WHERE name = N''PF'' ) DROP PARTITION FUNCTION PF ; CREATE PARTITION FUNCTION PF (INTEGER) AS RANGE RIGHT FOR VALUES (20, 40, 60, 80, 100) ; CREATE PARTITION SCHEME PS AS PARTITION PF ALL TO ([PRIMARY]) ; CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ON PS (key_col); '); END ELSE BEGIN RAISERROR('Invalid SKU for partition test', 16, 1); RETURN; END; END ; -- Non-unique non-clustered index on the 'data' column CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ; -- Add 100 rows INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ; END; GO The second stored procedure, ShowStats, displays information from the Index Usage Stats and Index Operational Stats DMVs: CREATE PROCEDURE dbo.ShowStats @Partitioned BIT = 'false' AS BEGIN -- Index Usage Stats DMV (QE) SELECT index_name = ISNULL(I.name, I.type_desc), scans = IUS.user_scans, seeks = IUS.user_seeks, lookups = IUS.user_lookups FROM sys.dm_db_index_usage_stats AS IUS JOIN sys.indexes AS I ON 
I.object_id = IUS.object_id AND I.index_id = IUS.index_id WHERE IUS.database_id = DB_ID(N'ScansAndSeeks') AND IUS.object_id = OBJECT_ID(N'dbo.Example', N'U') ORDER BY I.index_id ; -- Index Operational Stats DMV (SE) IF @Partitioned = 'true' SELECT index_name = ISNULL(I.name, I.type_desc), partitions = COUNT(IOS.partition_number), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; ELSE SELECT index_name = ISNULL(I.name, I.type_desc), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; END; The final stored procedure, RunTest, executes a query written against the example table: CREATE PROCEDURE dbo.RunTest @SQL VARCHAR(8000), @Partitioned BIT = 'false' AS BEGIN -- No execution plan yet SET STATISTICS XML OFF ; -- Reset the test environment EXECUTE dbo.ResetTest @Partitioned ; -- Previous call will throw an error if a partitioned -- test was requested, but SKU does not support it IF @@ERROR = 0 BEGIN -- IO statistics and plan on SET STATISTICS XML, IO ON ; -- Test statement EXECUTE (@SQL) ; -- Plan and IO statistics off SET STATISTICS XML, IO OFF ; EXECUTE dbo.ShowStats @Partitioned; END; END; The Tests The first test is a simple scan of the heap table: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example'; The top result set comes from the Index Usage Stats DMV, so it is the Query Executor’s (QE) view.  The lower result is from Index Operational Stats, which shows statistics derived from the actions taken by the Storage Engine (SE).  We see that QE performed 1 scan operation on the heap, and SE performed a single range scan.  Let’s try a single-value equality seek on a unique index next: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 32'; This time we see a single seek on the non-clustered primary key from QE, and one singleton lookup on the same index by the SE.  Now for a single-value seek on the non-unique non-clustered index: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32'; QE shows a single seek on the non-clustered non-unique index, but SE shows a single range scan on that index – not the singleton lookup we saw in the previous test.  That makes sense because we know that only a single-value seek into a unique index is a singleton seek.  A single-value seek into a non-unique index might retrieve any number of rows, if you think about it.  The next query is equivalent to the IN list example seen in the first post in this series, but it is written using OR (just for variety, you understand): EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32 OR data = 33'; The plan looks the same, and there’s no difference in the stats recorded by QE, but the SE shows two range scans.  Again, these are range scans because we are looking for two values in the data column, which is covered by a non-unique index.  I’ve added a snippet from the Properties window to show that the query plan does show two seek predicates, not just one.  
Now let’s rewrite the query using BETWEEN: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data BETWEEN 32 AND 33'; Notice the seek operator only has one predicate now – it’s just a single range scan from 32 to 33 in the index – as the SE output shows.  For the next test, we will look up four values in the key_col column: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col IN (2,4,6,8)'; Just a single seek on the PK from the Query Executor, but four singleton lookups reported by the Storage Engine – and four seek predicates in the Properties window.  On to a more complex example: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WITH (INDEX([PK dbo.Example key_col])) WHERE key_col BETWEEN 1 AND 8'; This time we are forcing use of the non-clustered primary key to return eight rows.  The index is not covering for this query, so the query plan includes an RID lookup into the heap to fetch the data and padding columns.  The QE reports a seek on the PK and a lookup on the heap.  The SE reports a single range scan on the PK (to find key_col values between 1 and 8), and eight singleton lookups on the heap.  Remember that a bookmark lookup (RID or Key) is a seek to a single value in a ‘unique index’ – it finds a row in the heap or cluster from a unique RID or clustering key – so that’s why lookups are always singleton lookups, not range scans. Our next example shows what happens when a query plan operator is not executed at all: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 8 AND @@TRANCOUNT < 0'; The Filter has a start-up predicate which is always false (if your @@TRANCOUNT is less than zero, call CSS immediately).  The index seek is never executed, but QE still records a single seek against the PK because the operator appears once in an executed plan.  The SE output shows no activity at all.  This next example is 2008 and above only, I’m afraid: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WHERE key_col BETWEEN 1 AND 30', @Partitioned = 'true'; This is the first example to use a partitioned table.  QE reports a single seek on the heap (yes – a seek on a heap), and the SE reports two range scans on the heap.  SQL Server knows (from the partitioning definition) that it only needs to look at partitions 1 and 2 to find all the rows where key_col is between 1 and 30 – the engine seeks to find the two partitions, and performs a range scan seek on each partition. The final example for today is another seek on a heap – try to work out the output of the query before running it! EXECUTE dbo.RunTest @SQL = 'SELECT TOP (2) WITH TIES * FROM Example WHERE key_col BETWEEN 1 AND 50 ORDER BY $PARTITION.PF(key_col) DESC', @Partitioned = 'true'; Notice the lack of an explicit Sort operator in the query plan to enforce the ORDER BY clause, and the backward range scan. © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi
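    If you just want the two sets of counters without installing the whole test rig, the post's ShowStats procedure trims down to these two queries; the database and table names are the post's own examples:

      -- Query Executor view: one count per operator appearance in an executed plan
      SELECT  index_id, user_seeks, user_scans, user_lookups
      FROM    sys.dm_db_index_usage_stats
      WHERE   database_id = DB_ID(N'ScansAndSeeks')
      AND     object_id   = OBJECT_ID(N'ScansAndSeeks.dbo.Example');

      -- Storage Engine view: one increment per range scan or singleton lookup actually performed
      SELECT  index_id, range_scan_count, singleton_lookup_count
      FROM    sys.dm_db_index_operational_stats
              (DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'ScansAndSeeks.dbo.Example'), NULL, NULL);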

    Read the article

  • Understanding LINQ to SQL (11) Performance

    - by Dixin
[LINQ via C# series] LINQ to SQL has a lot of great features, like strong typing, query compilation, deferred execution, a declarative paradigm, etc., which are very productive. Of course, these cannot be free, and one price is performance. O/R mapping overhead Because LINQ to SQL is based on O/R mapping, one obvious overhead is that changing data usually requires retrieving it first:private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { Product product = database.Products.Single(item => item.ProductID == id); // SELECT... product.UnitPrice = unitPrice; // UPDATE... database.SubmitChanges(); } } Before updating an entity, that entity has to be retrieved by an extra SELECT query. This is slower than a direct data update via ADO.NET:private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (SqlConnection connection = new SqlConnection( "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True")) using (SqlCommand command = new SqlCommand( @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID", connection)) { command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id; command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice; connection.Open(); command.Transaction = connection.BeginTransaction(); command.ExecuteNonQuery(); // UPDATE... command.Transaction.Commit(); } } The above imperative code specifies the "how to do it" details and has better performance. For the same reason, some articles on the Internet insist that, when updating data via LINQ to SQL, the declarative code above should be replaced by:private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.ExecuteCommand( "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}", unitPrice, id); } } Or just create a stored procedure:CREATE PROCEDURE [dbo].[UpdateProductUnitPrice] ( @ProductID INT, @UnitPrice MONEY ) AS BEGIN BEGIN TRANSACTION UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID COMMIT TRANSACTION END and map it as a method of NorthwindDataContext (explained in this post):private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.UpdateProductUnitPrice(id, unitPrice); } } As a normal trade-off for O/R mapping, a decision has to be made between performance overhead and programming productivity according to the case. From a developer's perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable. Data retrieving overhead Having talked about the O/R-mapping-specific issue, now look into the LINQ to SQL specific issues, for example, performance of the data retrieving process. The previous post explained that translating and executing the SQL is complex. Actually, the LINQ to SQL pipeline is similar to a compiler pipeline. 
It consists of about 15 steps to translate a C# expression tree into a SQL statement, which can be categorized as: Convert: invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes; Bind: use the visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.; Flatten: figure out the hierarchy of the query; Rewrite: for SQL Server 2000, if needed; Reduce: remove the unnecessary information from the tree; Format: generate the SQL statement string; Parameterize: figure out the parameters, for example, a reference to a local variable should become a parameter in SQL; Materialize: execute the reader and convert the results back into typed objects. So for each data retrieval, even one which looks simple: private static Product[] RetrieveProducts(int productId) { using (NorthwindDataContext database = new NorthwindDataContext()) { return database.Products.Where(product => product.ProductID == productId) .ToArray(); } } LINQ to SQL goes through the above steps to translate and execute the query. Fortunately, there is a built-in way to cache the translated query. Compiled query When such a LINQ to SQL query is executed repeatedly, CompiledQuery can be used to translate the query once and execute it multiple times:internal static class CompiledQueries { private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts = CompiledQuery.Compile((NorthwindDataContext database, int productId) => database.Products.Where(product => product.ProductID == productId).ToArray()); internal static Product[] RetrieveProducts( this NorthwindDataContext database, int productId) { return _retrieveProducts(database, productId); } } The new version of RetrieveProducts() gets better performance, because only when _retrieveProducts is invoked for the first time does it internally invoke SqlProvider.Compile() to translate the query expression. It also uses a lock to make sure the translation happens only once in multi-threading scenarios. Static SQL / stored procedures without translating Another way to avoid the translation overhead is to use static SQL or stored procedures, just as in the above examples. Because this is a functional programming series, this article does not dive into them. For the details, Scott Guthrie already has some excellent articles: LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures), LINQ to SQL (Part 7: Updating our Database using Stored Procedures), LINQ to SQL (Part 8: Executing Custom SQL Expressions). Data changing overhead Looking into the data updating process, it also needs a lot of work: begins a transaction; processes the changes (ChangeProcessor); walks through the objects to identify the changes; determines the order of the changes; executes the changes (LINQ queries may be needed to execute them: as in the first example in this article, an object needs to be retrieved before it is changed, so the whole data retrieval process above is gone through); if there is user customization, it will be executed (for example, a table's INSERT / UPDATE / DELETE can be customized in the O/R designer). It is important to keep this overhead in mind. 
Bulk deleting / updating Another thing to be aware of is bulk deleting:private static void DeleteProducts(int categoryId) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.Products.DeleteAllOnSubmit( database.Products.Where(product => product.CategoryID == categoryId)); database.SubmitChanges(); } } The expected SQL would be something like:BEGIN TRANSACTION exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9 COMMIT TRANSACTION However, as mentioned before, the actual SQL retrieves the entities, and then deletes them one by one:-- Retrieves the entities to be deleted: exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9 -- Deletes the retrieved entities one by one: BEGIN TRANSACTION exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0 exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0 -- ... COMMIT TRANSACTION The same goes for bulk updating. This is really not efficient and needs to be kept in mind. There are already some solutions on the Internet, like this one. The idea is to wrap the above SELECT statement into an INNER JOIN:exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0] INNER JOIN ( SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0) AS [j1] ON ([j0].[ProductID] = [j1].[ProductID])', -- The Primary Key N'@p0 int',@p0=9 Query plan overhead The last thing is about the SQL Server query plan. Before .NET 4.0, LINQ to SQL had an issue (not sure if it is a bug). LINQ to SQL internally uses ADO.NET, but it does not set SqlParameter.Size for a variable-length argument, like an argument of NVARCHAR type, etc. So for two queries with the same SQL but different argument lengths:using (NorthwindDataContext database = new NorthwindDataContext()) { database.Products.Where(product => product.ProductName == "A") .Select(product => product.ProductID).ToArray(); // The same SQL and argument type, different argument length. 
database.Products.Where(product => product.ProductName == "AA") .Select(product => product.ProductID).ToArray(); } Pay attention to the argument length in the translated SQL:exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A' exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA' Here is the overhead: the first query's cached plan is not reused by the second one:SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql] FROM sys.syscacheobjects INNER JOIN sys.dm_exec_cached_plans ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid; They actually use different query plans. Again, pay attention to the argument length in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)). Fortunately, in .NET 4.0 this is fixed:internal static class SqlTypeSystem { private abstract class ProviderBase : TypeSystemProvider { protected int? GetLargestDeclarableSize(SqlType declaredType) { SqlDbType sqlDbType = declaredType.SqlDbType; if (sqlDbType <= SqlDbType.Image) { switch (sqlDbType) { case SqlDbType.Binary: case SqlDbType.Image: return 8000; } return null; } if (sqlDbType == SqlDbType.NVarChar) { return 4000; // Max length for NVARCHAR. } if (sqlDbType != SqlDbType.VarChar) { return null; } return 8000; } } } In the above example, the translated SQL becomes:exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A' exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA' So they reuse the same cached query plan: now the [usecounts] column is 2.
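If you want to check plan reuse yourself, a sketch like the following shows use counts and statement text side by side, using only the documented plan cache DMVs rather than the older sys.syscacheobjects view joined above (the LIKE filter is just an assumption to narrow the output to the example statements):
SELECT cp.cacheobjtype, cp.objtype, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%[[]dbo].[[]Products]%'  -- narrow to the example queries; adjust as needed
ORDER BY cp.usecounts DESC;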

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant; these are simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this dmv can be executed as: SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips The results from executing this contain a record for every index in every database but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of 4 values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step. DECLARE @Start DATETIME DECLARE @First DATETIME DECLARE @Second DATETIME DECLARE @Third DATETIME DECLARE @Finish DATETIME SET @Start = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips SET @First = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips SET @Second = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips SET @Third = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips SET @Finish = GETDATE() SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT] , DATEDIFF(ms, @First, @Second) AS [SAMPLED] , DATEDIFF(ms, @Second, @Third) AS [LIMITED] , DATEDIFF(ms, @Third, @Finish) AS [DETAILED] Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id. 
These are pretty self-explanatory and we can wrap them in some code to make things a little easier to read: SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName] … FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips  gives us   SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName], [i].[name] AS [IndexName] , ….. FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id] AND [ddips].[object_id] = [i].[object_id]     These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g. SELECT * FROM [sys].[dm_db_index_physical_stats] (DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address') , 1, NULL, NULL) AS ddips   Note: despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the parameters as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function: DECLARE @db_id SMALLINT; DECLARE @object_id INT; SET @db_id = DB_ID(N'AdventureWorks_2008'); SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address'); IF @db_id IS NULL BEGIN PRINT N'Invalid database'; END ELSE IF @object_id IS NULL BEGIN PRINT N'Invalid object'; END ELSE BEGIN SELECT * FROM sys.dm_db_index_physical_stats (@db_id, @object_id, NULL, NULL , 'LIMITED'); END; GO In cases where the results of querying this dmv don't have any effect on other processes (i.e. you are simply viewing the results in the SSMS results area) it will be noticed when the results are not consistent with the expected results, and in the case of this blog this is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the dmv are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are: avg_fragmentation_in_percent, the amount that the index is logically fragmented (it will show NULL when the dmv is queried in SAMPLED mode); fragment_count, the number of pieces that the index is broken into (NULL in SAMPLED mode); avg_fragment_size_in_pages, the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit (NULL in SAMPLED mode); and page_count, the total number of index or data pages in use. OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9). 
This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue as each piece would have a significant piece of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, which means the amount of work for the disks to do in order to retrieve the data to satisfy the query increases, and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to another. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (page_count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8) * 1024*(Col2/100))/Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: a detail of the smallest and largest records in the index. 
Purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently the DBA should potentially consider: the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly; and whether the columns used in the index should be analysed to avoid new records needing to be inserted in the middle of the index, so that they are always added to the end. * It's approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly. Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford, a free ebook or paperback from Simple Talk. Disclaimer: Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
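To make the 'intelligent process' idea above a little more concrete, here is a simple sketch of a fragmentation check that suggests an action per index. The 5% / 30% thresholds follow the commonly quoted guidance (and the Books Online example mentioned above); treat them, and the page_count filter, as starting assumptions to tune for your own systems, and run it on a test server first, as per the disclaimer.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count,
       CASE
           WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'ALTER INDEX ... REBUILD'
           WHEN ips.avg_fragmentation_in_percent >= 5  THEN 'ALTER INDEX ... REORGANIZE'
           ELSE 'No action needed'
       END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0        -- ignore heaps
AND ips.page_count > 100      -- ignore very small indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;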

    Read the article

  • Android: ActivityThread.performLaunchActivity error

    - by fordays
    Hi, I'm getting an ActivityThread.performLaunchActivity(ActivityThread$ActivityRecord,Intent) error each time I boot up my program in the debugger. The program won't even start up! Any help would be greatly appreciated! I'm very new to this environment. Let me know if you need anymore information/code to help me out. Here is my logcat: 06-09 11:16:26.848: ERROR/vold(27): Error opening switch name path '/sys/class/switch/test2' (No such file or directory) 06-09 11:16:26.848: ERROR/vold(27): Error bootstrapping switch '/sys/class/switch/test2' (No such file or directory) 06-09 11:16:26.848: ERROR/vold(27): Error opening switch name path '/sys/class/switch/test' (No such file or directory) 06-09 11:16:26.848: ERROR/vold(27): Error bootstrapping switch '/sys/class/switch/test' (No such file or directory) 06-09 11:16:37.887: ERROR/MemoryHeapBase(53): error opening /dev/pmem: No such file or directory 06-09 11:16:37.887: ERROR/SurfaceFlinger(53): Couldn't open /sys/power/wait_for_fb_sleep or /sys/power/wait_for_fb_wake 06-09 11:16:37.927: ERROR/libEGL(53): couldn't load <libhgl.so> library (Cannot load library: load_library[984]: Library 'libhgl.so' not found) 06-09 11:16:38.407: ERROR/libEGL(64): couldn't load <libhgl.so> library (Cannot load library: load_library[984]: Library 'libhgl.so' not found) 06-09 11:16:41.358: ERROR/BatteryService(53): Could not open '/sys/class/power_supply/usb/online' 06-09 11:16:41.367: ERROR/BatteryService(53): Could not open '/sys/class/power_supply/battery/batt_vol' 06-09 11:16:41.367: ERROR/BatteryService(53): Could not open '/sys/class/power_supply/battery/batt_temp' 06-09 11:16:41.667: ERROR/EventHub(53): could not get driver version for /dev/input/mouse0, Not a typewriter 06-09 11:16:41.667: ERROR/EventHub(53): could not get driver version for /dev/input/mice, Not a typewriter 06-09 11:16:41.797: ERROR/System(53): Failure starting core service 06-09 11:16:41.797: ERROR/System(53): java.lang.SecurityException 06-09 11:16:41.797: ERROR/System(53): at android.os.BinderProxy.transact(Native Method) 06-09 11:16:41.797: ERROR/System(53): at android.os.ServiceManagerProxy.addService(ServiceManagerNative.java:146) 06-09 11:16:41.797: ERROR/System(53): at android.os.ServiceManager.addService(ServiceManager.java:72) 06-09 11:16:41.797: ERROR/System(53): at com.android.server.ServerThread.run(SystemServer.java:162) 06-09 11:16:41.797: ERROR/AndroidRuntime(53): Crash logging skipped, no checkin service 06-09 11:16:42.777: ERROR/LockPatternKeyguardView(53): Failed to bind to GLS while checking for account 06-09 11:16:46.557: ERROR/ActivityThread(111): Failed to find provider info for com.google.settings 06-09 11:16:46.577: ERROR/ActivityThread(111): Failed to find provider info for com.google.settings 06-09 11:16:49.087: ERROR/ApplicationContext(53): Couldn't create directory for SharedPreferences file shared_prefs/wallpaper-hints.xml 06-09 11:16:51.146: ERROR/ActivityThread(108): Failed to find provider info for android.server.checkin 06-09 11:16:54.266: ERROR/ActivityThread(108): Failed to find provider info for android.server.checkin 06-09 11:16:54.416: ERROR/ActivityThread(108): Failed to find provider info for android.server.checkin 06-09 11:16:56.336: ERROR/MediaPlayerService(31): Couldn't open fd for content://settings/system/notification_sound 06-09 11:16:56.356: ERROR/MediaPlayer(53): Unable to to create media player 06-09 11:16:56.637: ERROR/AndroidRuntime(201): Uncaught handler: thread main exiting due to uncaught exception 06-09 11:16:56.757: 
ERROR/AndroidRuntime(201): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.svgeeks.kidneytest/com.svgeeks.kidneytest.KidneyTest}: java.lang.ClassCastException: android.widget.EditText 06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2401) 06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2417) 06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread.access$2100(ActivityThread.java:116) 06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1794) 06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.os.Handler.dispatchMessage(Handler.java:99) 06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.os.Looper.loop(Looper.java:123) 06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread.main(ActivityThread.java:4203) 06-09 11:16:56.757: ERROR/AndroidRuntime(201): at java.lang.reflect.Method.invokeNative(Native Method) 06-09 11:16:56.757: ERROR/AndroidRuntime(201): at java.lang.reflect.Method.invoke(Method.java:521) 06-09 11:16:56.757: ERROR/AndroidRuntime(201): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791) 06-09 11:16:56.757: ERROR/AndroidRuntime(201): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549) 06-09 11:16:56.757: ERROR/AndroidRuntime(201): at dalvik.system.NativeStart.main(Native Method) 06-09 11:16:56.757: ERROR/AndroidRuntime(201): Caused by: java.lang.ClassCastException: android.widget.EditText 06-09 11:16:56.757: ERROR/AndroidRuntime(201): at com.svgeeks.kidneytest.KidneyTest.onCreate(KidneyTest.java:57) 06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123) 06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2364) 06-09 11:16:56.757: ERROR/AndroidRuntime(201): ... 11 more 06-09 11:16:56.876: ERROR/dalvikvm(201): Unable to open stack trace file '/data/anr/traces.txt': Permission denied

    Read the article

  • Best style for Python programs: what do you suggest?

    - by Noctis Skytower
    A friend of mine wanted help learning to program, so he gave me all the programs that he wrote for his previous classes. The last program that he wrote was an encryption program, and after rewriting all his programs in Python, this is how his encryption program turned out (after adding my own requirements). #! /usr/bin/env python ################################################################################ """\ CLASS INFORMATION ----------------- Program Name: Program 11 Programmer: Stephen Chappell Instructor: Stephen Chappell for CS 999-0, Python Due Date: 17 May 2010 DOCUMENTATION ------------- This is a simple encryption program that can encode and decode messages.""" ################################################################################ import sys KEY_FILE = 'Key.txt' BACKUP = '''\ !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNO\ PQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ _@/6-UC'GzaV0%5Mo9g+yNh8b">Bi=<Lx [sQn#^R.D2Xc(\ Jm!4e${lAEWud&t7]H\`}pvPw)FY,Z~?qK|3SOfk*:1;jTrI''' ################################################################################ def main(): "Run the program: loads key, runs processing loop, and saves key." encode_map, decode_map = load_key(KEY_FILE) try: run_interface_loop(encode_map, decode_map) except SystemExit: pass save_key(KEY_FILE, encode_map) def run_interface_loop(encode_map, decode_map): "Shows the menu and runs the appropriate command." print('This program handles encryption via a customizable key.') while True: print('''\ MENU ==== (1) Encode (2) Decode (3) Custom (4) Finish''') switch = get_character('Select: ', tuple('1234')) FUNC[switch](encode_map, decode_map) def get_character(prompt, choices): "Gets a valid menu option and returns it." while True: sys.stdout.write(prompt) sys.stdout.flush() line = sys.stdin.readline()[:-1] if not line: sys.exit() if line in choices: return line print(repr(line), 'is not a valid choice.') ################################################################################ def load_key(filename): "Gets the key file data and returns encoding/decoding dictionaries." plain, cypher = open_file(filename) return dict(zip(plain, cypher)), dict(zip(cypher, plain)) def open_file(filename): "Load the keys and tries to create it when not available." while True: try: with open(filename) as file: plain, cypher = file.read().split('\n') return plain, cypher except: with open(filename, 'w') as file: file.write(BACKUP) def save_key(filename, encode_map): "Dumps the map into two buffers and saves them to the key file." plain = cypher = str() for p, c in encode_map.items(): plain += p cypher += c with open(filename, 'w') as file: file.write(plain + '\n' + cypher) ################################################################################ def encode(encode_map, decode_map): "Encodes message for the user." print('Enter your message to encode (EOF when finished).') message = get_message() for char in message: sys.stdout.write(encode_map[char] if char in encode_map else char) def decode(encode_map, decode_map): "Decodes message for the user." print('Enter your message to decode (EOF when finished).') message = get_message() for char in message: sys.stdout.write(decode_map[char] if char in decode_map else char) def custom(encode_map, decode_map): "Allows user to edit the encoding/decoding dictionaries." 
plain, cypher = get_new_mapping() for p, c in zip(plain, cypher): encode_map[p] = c decode_map[c] = p ################################################################################ def get_message(): "Gets and returns text entered by the user (until EOF)." buffer = [] while True: line = sys.stdin.readline() if line: buffer.append(line) else: return ''.join(buffer) def get_new_mapping(): "Prompts for strings to edit encoding/decoding maps." while True: plain = get_unique_chars('What do you want to encode from?') cypher = get_unique_chars('What do you want to encode to?') if len(plain) == len(cypher): return plain, cypher print('Both lines should have the same length.') def get_unique_chars(prompt): "Gets strings that only contain unique characters." print(prompt) while True: line = input() if len(line) == len(set(line)): return line print('There were duplicate characters: please try again.') ################################################################################ # This map is used for dispatching commands in the interface loop. FUNC = {'1': encode, '2': decode, '3': custom, '4': lambda a, b: sys.exit()} ################################################################################ if __name__ == '__main__': main() For all those Python programmers out there, your help is being requested. How should the formatting (not necessarily the coding) be altered to fit Python's style guide? My friend does not need to be learning things that are not correct. If you have suggestions on the code, feel free to post them to this wiki as well.

    Read the article

  • Service Broker, not ETL

    - by jamiet
    I have been very quiet on this blog of late and one reason for that is I have been very busy on a client project that I would like to talk about a little here. The client that I have been working for has a website that runs on a distributed architecture utilising a messaging infrastructure for communication between different endpoints. My brief was to build a system that could consume these messages and produce analytical information in near-real-time. More specifically I basically had to deliver a data warehouse however it was the real-time aspect of the project that really intrigued me. This real-time requirement meant that using an Extract transformation, Load (ETL) tool was out of the question and so I had no choice but to write T-SQL code (i.e. stored-procedures) to process the incoming messages and load the data into the data warehouse. This concerned me though – I had no way to control the rate at which data would arrive into the system yet we were going to have end-users querying the system at the same time that those messages were arriving; the potential for contention in such a scenario was pretty high and and was something I wanted to minimise as much as possible. Moreover I did not want the processing of data inside the data warehouse to have any impact on the customer-facing website. As you have probably guessed from the title of this blog post this is where Service Broker stepped in! For those that have not heard of it Service Broker is a queuing technology that has been built into SQL Server since SQL Server 2005. It provides a number of features however the one that was of interest to me was the fact that it facilitates asynchronous data processing which, in layman’s terms, means the ability to process some data without requiring the system that supplied the data having to wait for the response. That was a crucial feature because on this project the customer-facing website (in effect an OLTP system) would be calling one of our stored procedures with each message – we did not want to cause the OLTP system to wait on us every time we processed one of those messages. This asynchronous nature also helps to alleviate the contention problem because the asynchronous processing activity is handled just like any other task in the database engine and hence can wait on another task (such as an end-user query). Service Broker it was then! The stored procedure called by the OLTP system would simply put the message onto a queue and we would use a feature called activation to pick each message off the queue in turn and process it into the warehouse. At the time of writing the system is not yet up to full capacity but so far everything seems to be working OK (touch wood) and crucially our users are seeing data in near-real-time. By near-real-time I am talking about latencies of a few minutes at most and to someone like me who is used to building systems that have overnight latencies that is a huge step forward! So then, am I advocating that you all go out and dump your ETL tools? Of course not, no! What this project has taught me though is that in certain scenarios there may be better ways to implement a data warehouse system then the traditional “load data in overnight” approach that we are all used to. Moreover I have really enjoyed getting to grips with a new technology and even if you don’t want to use Service Broker you might want to consider asynchronous messaging architectures for your BI/data warehousing solutions in the future. 
This has been a very high level overview of my use of Service Broker and I have deliberately left out much of the minutiae of what has been a very challenging implementation. Nonetheless I hope I have caused you to reflect upon your own approaches to BI and question whether other approaches may be more tenable. All comments and questions gratefully received! Lastly, if you have never used Service Broker before and want to kick the tyres I have provided below a very simple “Service Broker Hello World” script that will create all of the objects required to facilitate Service Broker communications and then send the message “Hello World” from one place to anther! This doesn’t represent a “proper” implementation per se because it doesn’t close down down conversation objects (which you should always do in a real-world scenario) but its enough to demonstrate the capabilities! @Jamiet ----------------------------------------------------------------------------------------------- /*This is a basic Service Broker Hello World app. Have fun! -Jamie */ USE MASTER GO CREATE DATABASE SBTest GO --Turn Service Broker on! ALTER DATABASE SBTest SET ENABLE_BROKER GO USE SBTest GO -- 1) we need to create a message type. Note that our message type is -- very simple and allowed any type of content CREATE MESSAGE TYPE HelloMessage VALIDATION = NONE GO -- 2) Once the message type has been created, we need to create a contract -- that specifies who can send what types of messages CREATE CONTRACT HelloContract (HelloMessage SENT BY INITIATOR) GO --We can query the metadata of the objects we just created SELECT * FROM   sys.service_message_types WHERE name = 'HelloMessage'; SELECT * FROM   sys.service_contracts WHERE name = 'HelloContract'; SELECT * FROM   sys.service_contract_message_usages WHERE  service_contract_id IN (SELECT service_contract_id FROM sys.service_contracts WHERE name = 'HelloContract') AND        message_type_id IN (SELECT message_type_id FROM sys.service_message_types WHERE name = 'HelloMessage'); -- 3) The communication is between two endpoints. 
Thus, we need two queues to -- hold messages CREATE QUEUE SenderQueue CREATE QUEUE ReceiverQueue GO --more querying metatda SELECT * FROM sys.service_queues WHERE name IN ('SenderQueue','ReceiverQueue'); --we can also select from the queues as if they were tables SELECT * FROM SenderQueue   SELECT * FROM ReceiverQueue   -- 4) Create the required services and bind them to be above created queues CREATE SERVICE Sender   ON QUEUE SenderQueue CREATE SERVICE Receiver   ON QUEUE ReceiverQueue (HelloContract) GO --more querying metadata SELECT * FROM sys.services WHERE name IN ('Receiver','Sender'); -- 5) At this point, we can begin the conversation between the two services by -- sending messages DECLARE @conversationHandle UNIQUEIDENTIFIER DECLARE @message NVARCHAR(100) BEGIN   BEGIN TRANSACTION;   BEGIN DIALOG @conversationHandle         FROM SERVICE Sender         TO SERVICE 'Receiver'         ON CONTRACT HelloContract WITH ENCRYPTION=OFF   -- Send a message on the conversation   SET @message = N'Hello, World';   SEND  ON CONVERSATION @conversationHandle         MESSAGE TYPE HelloMessage (@message)   COMMIT TRANSACTION END GO --check contents of queues SELECT * FROM SenderQueue   SELECT * FROM ReceiverQueue   GO -- Receive a message from the queue RECEIVE CONVERT(NVARCHAR(MAX), message_body) AS MESSAGE FROM ReceiverQueue GO --If no messages were received and/or you can't see anything on the queues you may wish to check the following for clues: SELECT * FROM sys.transmission_queue -- Cleanup DROP SERVICE Sender DROP SERVICE Receiver DROP QUEUE SenderQueue DROP QUEUE ReceiverQueue DROP CONTRACT HelloContract DROP MESSAGE TYPE HelloMessage GO USE MASTER GO DROP DATABASE SBTest GO
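The Hello World script above reads the queue manually with RECEIVE; the activation feature mentioned earlier in the post is what picks messages off the queue automatically. A minimal sketch of that wiring is below. It assumes the SBTest objects from the script are still in place (i.e. run it before the cleanup section), the procedure name is made up for illustration, and a production version would also end conversations and handle the error and end-dialog message types.
-- An activated procedure that drains the queue
CREATE PROCEDURE dbo.ProcessReceiverQueue
AS
BEGIN
    DECLARE @handle UNIQUEIDENTIFIER, @body NVARCHAR(MAX);
    WHILE 1 = 1
    BEGIN
        WAITFOR
        (
            RECEIVE TOP (1)
                @handle = conversation_handle,
                @body = CONVERT(NVARCHAR(MAX), message_body)
            FROM ReceiverQueue
        ), TIMEOUT 5000;
        IF @@ROWCOUNT = 0 BREAK;
        PRINT @body;   -- process the message here (e.g. load it into the warehouse)
    END
END
GO
-- Tell the broker to run the procedure whenever messages arrive
ALTER QUEUE ReceiverQueue
WITH ACTIVATION
(
    STATUS = ON,
    PROCEDURE_NAME = dbo.ProcessReceiverQueue,
    MAX_QUEUE_READERS = 1,
    EXECUTE AS OWNER
);
GO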

    Read the article

  • Compiling examples for consuming the REST Endpoints for WCF Service using Agatha

    - by REA_ANDREW
    I recently made two contributions to the Agatha Project by Davy Brion over on Google Code, and one of the things I wanted to follow up with was a post showing examples and some, seemingly required tid bits.  The contributions which I made where: To support StructureMap To include REST (JSON and XML) support for the service contract The examples which I have made, I want to format them so they fit in with the current format of examples over on Agatha and hopefully create and submit a third patch which will include these examples to help others who wish to use these additions. Whilst building these examples for both XML and JSON I have learnt a couple of things which I feel are not really well documented, but are extremely good practice and once known make perfect sense.  I have chosen a real basic e-commerce context for my example Requests and Responses, and have also made use of the excellent tool AutoMapper, again on Google Code. Setting the scene I have followed the Pipes and Filters Pattern with the IQueryable interface on my Repository and exposed the following methods to query Products: IQueryable<Product> GetProducts(); IQueryable<Product> ByCategoryName(this IQueryable<Product> products, string categoryName) Product ByProductCode(this IQueryable<Product> products, String productCode) I have an interface for the IProductRepository but for the concrete implementation I have simply created a protected getter which populates a private List<Product> with 100 test products with random data.  Another good reason for following an interface based approach is that it will demonstrate usage of my first contribution which is the StructureMap support.  Finally the two Domain Objects I have made are Product and Category as shown below: public class Product { public String ProductCode { get; set; } public String Name { get; set; } public Decimal Price { get; set; } public Decimal Rrp { get; set; } public Category Category { get; set; } }   public class Category { public String Name { get; set; } }   Requirements for the REST Support One of the things which you will notice with Agatha is that you do not have to decorate your Request and Response objects with the WCF Service Model Attributes like DataContract, DataMember etc… Unfortunately from what I have seen, these are required if you want the same types to work with your REST endpoint.  I have not tried but I assume the same result can be achieved by simply decorating the same classes with the Serializable Attribute.  Without this the operation will fail. Another surprising thing I have found is that it did not work until I used the following Attribute parameters: Name Namespace e.g. [DataContract(Name = "GetProductsRequest", Namespace = "AgathaRestExample.Service.Requests")] public class GetProductsRequest : Request { }   Although I was surprised by this, things kind of explained themselves when I got round to figuring out the exact construct required for both the XML and the REST.  One of the things which you already know and are then reminded of is that each of your Requests and Responses ultimately inherit from an abstract base class respectively. This information needs to be represented in a way native to the format being used.  I have seen this in XML but I have not seen the format which is required for the JSON. JSON Consumer Example I have used JQuery to create the example and I simply want to make two requests to the server which as you will know with Agatha are transmitted inside an array to reduce the service calls.  
I have also used a tool called json2 which is again over at Google Code simply to convert my JSON expression into its string format for transmission.  You will notice that I specify the type of Request I am using and the relevant Namespace it belongs to.  Also notice that the second request has a parameter so each of these two object are representing an abstract Request and the parameters of the object describe it. <script type="text/javascript"> var bodyContent = $.ajax({ url: "http://localhost:50348/service.svc/json/processjsonrequests", global: false, contentType: "application/json; charset=utf-8", type: "POST", processData: true, data: JSON.stringify([ { __type: "GetProductsRequest:AgathaRestExample.Service.Requests" }, { __type: "GetProductsByCategoryRequest:AgathaRestExample.Service.Requests", CategoryName: "Category1" } ]), dataType: "json", success: function(msg) { alert(msg); } }).responseText; </script>   XML Consumer Example For the XML Consumer example I have chosen to use a simple Console Application and make a WebRequest to the service using the XML as a request.  I have made a crude static method which simply reads from an XML File, replaces some value with a parameter and returns the formatted XML.  I say crude but it simply shows how XML Templates for each type of Request could be made and then have a wrapper utility in whatever language you use to combine the requests which are required.  The following XML is the same Request array as shown above but simply in the XML Format. <?xml version="1.0" encoding="utf-8" ?> <ArrayOfRequest xmlns="http://schemas.datacontract.org/2004/07/Agatha.Common" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <Request i:type="a:GetProductsRequest" xmlns:a="AgathaRestExample.Service.Requests"/> <Request i:type="a:GetProductsByCategoryRequest" xmlns:a="AgathaRestExample.Service.Requests"> <a:CategoryName>{CategoryName}</a:CategoryName> </Request> </ArrayOfRequest>   It is funny because I remember submitting a question to StackOverflow asking whether there was a REST Client Generation tool similar to what Microsoft used for their RestStarterKit but which could be applied to existing services which have REST endpoints attached.  I could not find any but this is now definitely something which I am going to build, as I think it is extremely useful to have but also it should not be too difficult based on the information I now know about the above.  Finally I thought that the Strategy Pattern would lend itself really well to this type of thing so it can accommodate for different languages. I think that is about it, I have included the code for the example Console app which I made below incase anyone wants to have a mooch at the code.  
As I said above I want to reformat these to fit in with the current examples over on the Agatha project, but also now thinking about it, make a Documentation Web method…{brain ticking} :-) Cheers for now and here is the final bit of code: static void Main(string[] args) { var request = WebRequest.Create("http://localhost:50348/service.svc/xml/processxmlrequests"); request.Method = "POST"; request.ContentType = "text/xml"; using(var writer = new StreamWriter(request.GetRequestStream())) { writer.WriteLine(GetExampleRequestsString("Category1")); } var response = request.GetResponse(); using(var reader = new StreamReader(response.GetResponseStream())) { Console.WriteLine(reader.ReadToEnd()); } Console.ReadLine(); } static string GetExampleRequestsString(string categoryName) { var data = File.ReadAllText(Path.Combine(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), "ExampleRequests.xml")); data = data.Replace("{CategoryName}", categoryName); return data; } }

    Read the article

  • ASP.NET request queue priority

    - by dan
    I'm on IIS 7 and .NET 4.0. My understanding is that IIS takes requests and passes them off to ASP.NET worker threads. If all the threads are in use, the request goes into a queue and is processed once a thread becomes available. If the queue goes over a certain size, all new requests get a 503 until there is room in the queue again. Is there a way to prioritize the order in which queued requests are served? For example, I have consumer traffic and infrastructure traffic coming to the same server. If there are no available threads, I'd like for the consumer requests to be served first, even if they have arrived after infrastructure requests. Basically I want to replace the request queue with a priority queue. Is this possible with IIS?

    Read the article

  • Throttling in OSB

    - by Knut Vatsendvik
    Technorati Tags: soa,integration,osb,throttling,overload protection A common problem with integration is the risk of overloading a particular web service. When the capacity of a web service is reached and it continues to accept connections, it will most likely start to deteriorate. Fortunately there are 2 techniques, with Oracle Service Bus, that you can apply for protecting this from happening. You can either limit the concurrent number of requests for a Business Service (outbound requests) or you can limit the number of threads processing the requests for a Proxy Service (inbound requests). Limiting the Concurrent Number of Requests Limiting the concurrent requests for a Business Service cannot be set at design time so you have to use the built-in Oracle Service Bus Administration Console to do it (/sbconsole). Follow these steps to enable it: In Change Center, click Create to start a new Session Select Project Explorer, and navigate to the Business Service you want to limit Select the Operational Settings tab of the View a Business Service page In this tab, under Throttling, select the Enable check box. By enabling throttling you Specify a value for Maximum Concurrency Specify a positive integer value for Throttling Queue to backlog messages that has exceeded the message concurrency limit Specify the maximum time in milliseconds for Message Expiration a message can spend in Throttling Queue Click Update Click Active in Change Center to active the new settings If you re-publish the service, it will not overwrite the settings. Only if the resource is renamed or moved, it will. Please note that a throttling queue is an in-memory queue. Messages that are placed in this queue are not recoverable when a server fails or when you restart a server. Limiting the Number of Threads A better approach, in my opinion, is to limit the number of threads that can work with request. Follow these steps to do it: Open the WebLogic Server Console (/console) In Change Center, click Create to start a new Session In the left pane expand Environment and select Work Managers In the Global Work Managers page, click New    Click the Work Manager radio button, then click Next Enter a Name for the new Work Manager, and click Next In the Available Targets list, select server instances or clusters on which you will deploy applications that reference the Work Manager Click Finish. The new Work Manager now appears in the Global Work Managers page. Select the new Work Manager Right next to the Maximum Threads Constraint drop-down box, click New   Click the Maximum Threads Constraint radio button, then click Next Enter a Name and a thread Count to be the maximum size to allocate for requests. Click Next  In the Available Targets list, select server instances or clusters on which you will deploy applications that reference the Work Manager Click Finish Click Save Click Active in Change Center to active your changes.  A restart may be necessary.   Puh! Almost there. Start a new session. Go to the Service Bus Console (/sbconsole) and find your consuming Proxy Service. Click the Edit button of the Transport Configuration tab. Click Next Set the Dispatch Policy to the new Work Manager Click Last Click Save Click Active in Change Center to active your changes. 

    Read the article

  • How do I get a Dane-Elec mp3/mp4 player working?

    - by user40432
    My MP3/MP4 player is not recognized when I plug it in, so I cannot transfer any files to it. It is a Dane-Elec "Music My Touch" with 8 GB of memory (probably model ZT1), with radio and a microSDHC card slot; the complete information is on this site: http://www.danedigital.com/8-Music-Media-Players/2-music-touch.html. The technical specifications there describe it as: "MP3 Player: TOUCH MY MUSIC. The MP4 player has a very classy look. It allows its users to play music and view photos and video. Its fluent interface, its touch-pad, its radio with RDS and its Micro SDHC reader make it a very complete device that will become the ideal musical companion." I am on Ubuntu 11.10 with kernel 3.0.0-14-generic, the latest. I tried to install many applications but nothing worked. With Disk Utility I can see that Ubuntu recognizes something: peripheral devices named "RockChip USBDISK User" and "RockChip USBDISK SD". I can plug and play other devices; only this MP3/MP4 player does not connect to the computer under Ubuntu, and the device itself works fine when it is not connected to a computer. I tried it on Windows and it works there: I can see the device and transfer files to its folders (it uses FAT32). So why can't I do this on Ubuntu? What can I do, and what is wrong with it? Here are the logs: Jan 4 17:27:34 a-ubuntu kernel: [ 141.948863] init: apport pre-start process (1970) terminated with status 1 Jan 4 17:27:34 a-ubuntu kernel: [ 141.963202] init: apport post-stop process (1994) terminated with status 1 Jan 4 17:30:02 a-ubuntu kernel: [ 289.564049] usb 2-4: new high speed USB device number 3 using ehci_hcd Jan 4 17:30:02 a-ubuntu kernel: [ 289.988706] usbcore: registered new interface driver uas Jan 4 17:30:02 a-ubuntu kernel: [ 289.992056] Initializing USB Mass Storage driver... Jan 4 17:30:02 a-ubuntu kernel: [ 289.992272] scsi6 : usb-storage 2-4:1.0 Jan 4 17:30:02 a-ubuntu kernel: [ 289.993082] usbcore: registered new interface driver usb-storage Jan 4 17:30:02 a-ubuntu kernel: [ 289.993088] USB Mass Storage support registered. 
Jan 4 17:30:03 a-ubuntu kernel: [ 290.996887] scsi 6:0:0:0: Direct-Access RockChip USBDISK User 1.00 PQ: 0 ANSI: 0 Jan 4 17:30:03 a-ubuntu kernel: [ 290.997372] scsi 6:0:0:1: Direct-Access RockChip USBDISK SD 1.00 PQ: 0 ANSI: 0 Jan 4 17:30:03 a-ubuntu kernel: [ 290.997478] scsi: killing requests for dead queue Jan 4 17:30:03 a-ubuntu kernel: [ 291.002712] scsi: killing requests for dead queue Jan 4 17:30:03 a-ubuntu kernel: [ 291.002880] scsi: killing requests for dead queue Jan 4 17:30:04 a-ubuntu kernel: [ 291.016249] scsi: killing requests for dead queue Jan 4 17:30:04 a-ubuntu kernel: [ 291.032252] scsi: killing requests for dead queue Jan 4 17:30:04 a-ubuntu kernel: [ 291.048182] scsi: killing requests for dead queue Jan 4 17:30:04 a-ubuntu kernel: [ 291.060178] scsi: killing requests for dead queue Jan 4 17:30:04 a-ubuntu kernel: [ 291.060357] scsi: killing requests for dead queue Jan 4 17:30:04 a-ubuntu kernel: [ 291.080381] sd 6:0:0:0: Attached scsi generic sg2 type 0 Jan 4 17:30:04 a-ubuntu kernel: [ 291.080646] sd 6:0:0:1: Attached scsi generic sg3 type 0 Jan 4 17:30:04 a-ubuntu kernel: [ 291.088381] sd 6:0:0:0: [sdb] 16015360 512-byte logical blocks: (8.19 GB/7.63 GiB) Jan 4 17:30:04 a-ubuntu kernel: [ 291.088988] sd 6:0:0:1: [sdc] Attached SCSI removable disk Jan 4 17:30:04 a-ubuntu kernel: [ 291.200050] usb 2-4: reset high speed USB device number 3 using ehci_hcd Jan 4 17:30:04 a-ubuntu kernel: [ 291.448044] usb 2-4: reset high speed USB device number 3 using ehci_hcd Jan 4 17:30:04 a-ubuntu kernel: [ 291.696055] usb 2-4: reset high speed USB device number 3 using ehci_hcd Jan 4 17:30:04 a-ubuntu kernel: [ 291.832046] sd 6:0:0:0: [sdb] Test WP failed, assume Write Enabled Jan 4 17:30:04 a-ubuntu kernel: [ 291.832994] sd 6:0:0:0: [sdb] Asking for cache data failed Jan 4 17:30:04 a-ubuntu kernel: [ 291.833001] sd 6:0:0:0: [sdb] Assuming drive cache: write through Jan 4 17:30:04 a-ubuntu kernel: [ 291.834378] sdb: detected capacity change from 8199864320 to 0 Jan 4 17:30:04 a-ubuntu kernel: [ 291.835367] sd 6:0:0:0: [sdb] Attached SCSI removable disk Jan 4 17:30:06 a-ubuntu kernel: [ 293.004741] sd 6:0:0:0: [sdb] 16015360 512-byte logical blocks: (8.19 GB/7.63 GiB) Jan 4 17:30:06 a-ubuntu kernel: [ 293.116051] usb 2-4: reset high speed USB device number 3 using ehci_hcd Jan 4 17:30:21 a-ubuntu kernel: [ 308.228043] usb 2-4: device descriptor read/64, error -110 Jan 4 17:30:36 a-ubuntu kernel: [ 323.444072] usb 2-4: device descriptor read/64, error -110 Jan 4 17:30:36 a-ubuntu kernel: [ 323.660047] usb 2-4: reset high speed USB device number 3 using ehci_hcd Jan 4 17:30:51 a-ubuntu kernel: [ 338.772085] usb 2-4: device descriptor read/64, error -110 Jan 4 17:31:06 a-ubuntu kernel: [ 353.988064] usb 2-4: device descriptor read/64, error -110 Jan 4 17:31:07 a-ubuntu kernel: [ 354.204058] usb 2-4: reset high speed USB device number 3 using ehci_hcd Jan 4 17:31:12 a-ubuntu kernel: [ 359.224115] usb 2-4: device descriptor read/8, error -110 Jan 4 17:31:17 a-ubuntu kernel: [ 364.344136] usb 2-4: device descriptor read/8, error -110 Jan 4 17:31:17 a-ubuntu kernel: [ 364.560037] usb 2-4: reset high speed USB device number 3 using ehci_hcd Jan 4 17:31:22 a-ubuntu kernel: [ 369.580132] usb 2-4: device descriptor read/8, error -110 Jan 4 17:31:27 a-ubuntu kernel: [ 374.700126] usb 2-4: device descriptor read/8, error -110 Jan 4 17:31:27 a-ubuntu kernel: [ 374.804121] usb 2-4: USB disconnect, device number 3 Jan 4 17:31:27 a-ubuntu kernel: [ 374.804518] sd 6:0:0:0: Device offlined - not ready 
after error recovery Jan 4 17:31:27 a-ubuntu kernel: [ 374.804600] sd 6:0:0:0: [sdb] No Caching mode page present Jan 4 17:31:27 a-ubuntu kernel: [ 374.804606] sd 6:0:0:0: [sdb] Assuming drive cache: write through Jan 4 17:31:27 a-ubuntu kernel: [ 374.804693] sd 6:0:0:0: [sdb] READ CAPACITY failed Jan 4 17:31:27 a-ubuntu kernel: [ 374.804698] sd 6:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK Jan 4 17:31:27 a-ubuntu kernel: [ 374.804704] sd 6:0:0:0: [sdb] Sense not available. Jan 4 17:31:27 a-ubuntu kernel: [ 374.804744] sd 6:0:0:0: [sdb] No Caching mode page present Jan 4 17:31:27 a-ubuntu kernel: [ 374.804748] sd 6:0:0:0: [sdb] Assuming drive cache: write through Jan 4 17:31:27 a-ubuntu kernel: [ 374.804754] sdb: detected capacity change from 8199864320 to 0 Jan 4 17:31:27 a-ubuntu kernel: [ 374.820273] scsi: killing requests for dead queue Jan 4 17:31:27 a-ubuntu kernel: [ 374.852240] scsi: killing requests for dead queue Jan 4 17:31:27 a-ubuntu kernel: [ 374.980054] usb 2-4: new high speed USB device number 4 using ehci_hcd Jan 4 17:31:43 a-ubuntu kernel: [ 390.092059] usb 2-4: device descriptor read/64, error -110 Jan 4 17:31:58 a-ubuntu kernel: [ 405.308070] usb 2-4: device descriptor read/64, error -110 Jan 4 17:31:58 a-ubuntu kernel: [ 405.524078] usb 2-4: new high speed USB device number 5 using ehci_hcd and the other post is: http://pastebin.ubuntu.com/792915/ and the other bDeviceSubClass 2 ? bDeviceProtocol 1 Interface Association bMaxPacketSize0 64 idVendor 0x04f2 Chicony Electronics Co., Ltd idProduct 0xb008 USB 2.0 Camera bcdDevice 93.27 iManufacturer 2 Chicony Electronics Co., Ltd. iProduct 1 Chicony USB 2.0 Camera iSerial 3 SN0001 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 565 bNumInterfaces 2 bConfigurationValue 1 iConfiguration 0 bmAttributes 0x80 (Bus Powered) MaxPower 500mA Interface Association: bLength 8 bDescriptorType 11 bFirstInterface 0 bInterfaceCount 2 bFunctionClass 14 Video bFunctionSubClass 3 Video Interface Collection bFunctionProtocol 0 iFunction 1 Chicony USB 2.0 Camera Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 14 Video bInterfaceSubClass 1 Video Control bInterfaceProtocol 0 iInterface 1 Chicony USB 2.0 Camera VideoControl Interface Descriptor: bLength 13 bDescriptorType 36 bDescriptorSubtype 1 (HEADER) bcdUVC 1.00 wTotalLength 77 dwClockFrequency 15.000000MHz bInCollection 1 baInterfaceNr( 0) 1 VideoControl Interface Descriptor: bLength 9 bDescriptorType 36 bDescriptorSubtype 3 (OUTPUT_TERMINAL) bTerminalID 2 wTerminalType 0x0101 USB Streaming bAssocTerminal 0 bSourceID 4 iTerminal 0 VideoControl Interface Descriptor: bLength 26 bDescriptorType 36 bDescriptorSubtype 6 (EXTENSION_UNIT) bUnitID 4 guidExtensionCode {7033f028-1163-2e4a-ba2c-6890eb334016} bNumControl 1 bNrPins 1 baSourceID( 0) 3 bControlSize 1 bmControls( 0) 0x01 iExtension 0 VideoControl Interface Descriptor: bLength 18 bDescriptorType 36 bDescriptorSubtype 2 (INPUT_TERMINAL) bTerminalID 1 wTerminalType 0x0201 Camera Sensor bAssocTerminal 0 iTerminal 0 wObjectiveFocalLengthMin 0 wObjectiveFocalLengthMax 0 wOcularFocalLength 0 bControlSize 3 bmControls 0x00000000 VideoControl Interface Descriptor: bLength 11 bDescriptorType 36 bDescriptorSubtype 5 (PROCESSING_UNIT) Warning: Descriptor too short bUnitID 3 bSourceID 1 wMaxMultiplier 0 bControlSize 2 bmControls 0x0000053f Brightness Contrast Hue Saturation Sharpness Gamma Backlight Compensation Power 
Line Frequency iProcessing 0 bmVideoStandards 0x a NTSC - 525/60 SECAM - 625/50 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0010 1x 16 bytes bInterval 6 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 0 bInterfaceClass 14 Video bInterfaceSubClass 2 Video Streaming bInterfaceProtocol 0 iInterface 0 VideoStreaming Interface Descriptor: bLength 14 bDescriptorType 36 bDescriptorSubtype 1 (INPUT_HEADER) bNumFormats 1 wTotalLength 345 bEndPointAddress 129 bmInfo 0 bTerminalLink 2 bStillCaptureMethod 0 bTriggerSupport 1 bTriggerUsage 0 bControlSize 1 bmaControls( 0) 27 VideoStreaming Interface Descriptor: bLength 27 bDescriptorType 36 bDescriptorSubtype 4 (FORMAT_UNCOMPRESSED) bFormatIndex 1 bNumFrameDescriptors 7 guidFormat {59555932-0000-1000-8000-00aa00389b71} bBitsPerPixel 16 bDefaultFrameIndex 1 bAspectRatioX 0 bAspectRatioY 0 bmInterlaceFlags 0x00 Interlaced stream or variable: No Fields per frame: 2 fields Field 1 first: No Field pattern: Field 1 only bCopyProtect 0 VideoStreaming Interface Descriptor: bLength 46 bDescriptorType 36 bDescriptorSubtype 5 (FRAME_UNCOMPRESSED) bFrameIndex 1 bmCapabilities 0x00 Still image unsupported wWidth 640 wHeight 480 dwMinBitRate 614400 dwMaxBitRate 18432000 dwMaxVideoFrameBufferSize 614400 dwDefaultFrameInterval 333333 bFrameIntervalType 5 dwFrameInterval( 0) 333333 dwFrameInterval( 1) 500000 dwFrameInterval( 2) 666666 dwFrameInterval( 3) 1000000 dwFrameInterval( 4) 2000000 VideoStreaming Interface Descriptor: bLength 46 bDescriptorType 36 bDescriptorSubtype 5 (FRAME_UNCOMPRESSED) bFrameIndex 2 bmCapabilities 0x00 Still image unsupported wWidth 352 wHeight 288 dwMinBitRate 202752 dwMaxBitRate 6082560 dwMaxVideoFrameBufferSize 202752 dwDefaultFrameInterval 333333 bFrameIntervalType 5 dwFrameInterval( 0) 333333 dwFrameInterval( 1) 500000 dwFrameInterval( 2) 666666 dwFrameInterval( 3) 1000000 dwFrameInterval( 4) 2000000 VideoStreaming Interface Descriptor: bLength 46 bDescriptorType 36 bDescriptorSubtype 5 (FRAME_UNCOMPRESSED) bFrameIndex 3 bmCapabilities 0x00 Still image unsupported wWidth 320 wHeight 240 dwMinBitRate 153600 dwMaxBitRate 4608000 dwMaxVideoFrameBufferSize 153600 dwDefaultFrameInterval 333333 bFrameIntervalType 5 dwFrameInterval( 0) 333333 dwFrameInterval( 1) 500000 dwFrameInterval( 2) 666666 dwFrameInterval( 3) 1000000 dwFrameInterval( 4) 2000000 VideoStreaming Interface Descriptor: bLength 46 bDescriptorType 36 bDescriptorSubtype 5 (FRAME_UNCOMPRESSED) bFrameIndex 4 bmCapabilities 0x00 Still image unsupported wWidth 176 wHeight 144 dwMinBitRate 50688 dwMaxBitRate 1520640 dwMaxVideoFrameBufferSize 50688 dwDefaultFrameInterval 333333 bFrameIntervalType 5 dwFrameInterval( 0) 333333 dwFrameInterval( 1) 500000 dwFrameInterval( 2) 666666 dwFrameInterval( 3) 1000000 dwFrameInterval( 4) 2000000 VideoStreaming Interface Descriptor: bLength 46 bDescriptorType 36 bDescriptorSubtype 5 (FRAME_UNCOMPRESSED) bFrameIndex 5 bmCapabilities 0x00 Still image unsupported wWidth 160 wHeight 120 dwMinBitRate 38400 dwMaxBitRate 1152000 dwMaxVideoFrameBufferSize 38400 dwDefaultFrameInterval 333333 bFrameIntervalType 5 dwFrameInterval( 0) 333333 dwFrameInterval( 1) 500000 dwFrameInterval( 2) 666666 dwFrameInterval( 3) 1000000 dwFrameInterval( 4) 2000000 VideoStreaming Interface Descriptor: bLength 34 bDescriptorType 36 bDescriptorSubtype 5 (FRAME_UNCOMPRESSED) bFrameIndex 6 bmCapabilities 
0x00 Still image unsupported wWidth 1280 wHeight 800 dwMinBitRate 2048000 dwMaxBitRate 18432000 dwMaxVideoFrameBufferSize 2048000 dwDefaultFrameInterval 1333333 bFrameIntervalType 2 dwFrameInterval( 0) 1333333 dwFrameInterval( 1) 2000000 VideoStreaming Interface Descriptor: bLength 34 bDescriptorType 36 bDescriptorSubtype 5 (FRAME_UNCOMPRESSED) bFrameIndex 7 bmCapabilities 0x00 Still image unsupported wWidth 1280 wHeight 1024 dwMinBitRate 2621440 dwMaxBitRate 23592960 dwMaxVideoFrameBufferSize 2621440 dwDefaultFrameInterval 1333333 bFrameIntervalType 2 dwFrameInterval( 0) 1333333 dwFrameInterval( 1) 2000000 VideoStreaming Interface Descriptor: bLength 6 bDescriptorType 36 bDescriptorSubtype 13 (COLORFORMAT) bColorPrimaries 1 (BT.709,sRGB) bTransferCharacteristics 1 (BT.709) bMatrixCoefficients 4 (SMPTE 170M (BT.601)) Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 1 bNumEndpoints 1 bInterfaceClass 14 Video bInterfaceSubClass 2 Video Streaming bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 5 Transfer Type Isochronous Synch Type Asynchronous Usage Type Data wMaxPacketSize 0x0080 1x 128 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 2 bNumEndpoints 1 bInterfaceClass 14 Video bInterfaceSubClass 2 Video Streaming bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 5 Transfer Type Isochronous Synch Type Asynchronous Usage Type Data wMaxPacketSize 0x0100 1x 256 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 3 bNumEndpoints 1 bInterfaceClass 14 Video bInterfaceSubClass 2 Video Streaming bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 5 Transfer Type Isochronous Synch Type Asynchronous Usage Type Data wMaxPacketSize 0x0320 1x 800 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 4 bNumEndpoints 1 bInterfaceClass 14 Video bInterfaceSubClass 2 Video Streaming bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 5 Transfer Type Isochronous Synch Type Asynchronous Usage Type Data wMaxPacketSize 0x0b20 2x 800 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 5 bNumEndpoints 1 bInterfaceClass 14 Video bInterfaceSubClass 2 Video Streaming bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 5 Transfer Type Isochronous Synch Type Asynchronous Usage Type Data wMaxPacketSize 0x1320 3x 800 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 6 bNumEndpoints 1 bInterfaceClass 14 Video bInterfaceSubClass 2 Video Streaming bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 5 Transfer Type Isochronous Synch Type Asynchronous Usage Type Data wMaxPacketSize 0x13e8 3x 1000 bytes bInterval 1 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 239 Miscellaneous Device bDeviceSubClass 2 ? 
bDeviceProtocol 1 Interface Association bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0000 (Bus Powered) Bus 006 Device 002: ID 04d9:1503 Holtek Semiconductor, Inc. Shortboard Lefty Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 8 idVendor 0x04d9 Holtek Semiconductor, Inc. idProduct 0x1503 Shortboard Lefty bcdDevice 3.10 iManufacturer 1 iProduct 2 USB Keyboard iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 59 bNumInterfaces 2 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xa0 (Bus Powered) Remote Wakeup MaxPower 100mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 1 Keyboard iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 62 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 0 No Subclass bInterfaceProtocol 0 None iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 101 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Device Status: 0x0000 (Bus Powered)

    Read the article

  • Why is the UpdatePanel Response size changing on alternate requests?

    - by Decker
    We are using UpdatePanel in a small portion of a large page and have noticed a performance problem where IE7 becomes CPU bound and the control within the UpdatePanel takes a long time (upwards of 30 seconds) to render. We also noticed that Firefox does not seem to suffer from these delays. We ran both Fiddler (for IE) and Firebug (for Firefox) and noticed that the real problem lay with the amount of data being returned in UpdatePanel responses. Within the UpdatePanel control there is a table that contains a number of ListBox controls. The real problem is that EVERY OTHER TIME, the response (from making ListBox selections) alternates between 30K and 430K. Firefox handles the 400+K response in a reasonable amount of time. For whatever reason, IE7 goes CPU bound while it is presumably processing this data. So irrespective of whether we should be using an UpdatePanel at all, we'd like to figure out why every other async postback response is more than 10 times larger than the previous one. When the response is in the 30K range, IE updates the display within a second. On the alternate requests, the response time is well over 10 times longer. Any idea why this alternating behavior should be happening with an UpdatePanel?

    Read the article

  • How to provide a fileDownloadName only if the user requests to save the file in ASP.NET MVC?

    - by davekaro
    I've got a controller action that returns a FileResult like this return this.File("file.pdf", "application/pdf"); for the URL "/Download/322" - where 322 is the id of the file. This works great, so that if a user clicks on a link to the PDF - it will open in their web browser as long as they have a PDF plugin installed. But, what if they right-click the link and choose "Save as..."? The browser pops up with the filename as "322." I'd like to have a better filename at this point, by doing something like this: return this.File("file.pdf", "application/pdf", "file.pdf"); But if I change the controller to return like that, then it will always pop up the download box, since MVC is setting the Content-Disposition header to attachment (so I can't embed the file). In summary, can I somehow detect that the user is trying to download the file vs. the file is just being embedded in something on the page?
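
    The server generally cannot tell that a user chose "Save as..." on an ordinary link, so one common workaround, sketched below as an assumption rather than something from the original post, is to expose an explicit download flag and only pass the file-download name (which is what makes MVC emit Content-Disposition: attachment) when that flag is set. The parameter name and the path/name lookups are hypothetical.

    using System.Web.Mvc;

    public class DownloadController : Controller
    {
        // /Download/322 renders inline (PDF plugin); /Download/322?download=true forces Save As with a friendly name.
        public ActionResult Index(int id, bool? download)
        {
            string path = GetFilePath(id);   // hypothetical lookup
            string name = GetFileName(id);   // e.g. "file.pdf"

            if (download == true)
            {
                // The third argument sets Content-Disposition: attachment; filename=...
                return File(path, "application/pdf", name);
            }

            // No download name, so no attachment header: the browser can embed the PDF.
            return File(path, "application/pdf");
        }

        private string GetFilePath(int id) { return Server.MapPath("~/App_Data/" + id + ".pdf"); }
        private string GetFileName(int id) { return "file.pdf"; }
    }

    A "Download" link on the page would then point at the ?download=true URL, while plain links and embeds keep using the original URL.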

    Read the article

  • Anti-Forgery Request in ASP.NET MVC and AJAX

    - by Dixin
    Background To secure websites from cross-site request forgery (CSRF, or XSRF) attack, ASP.NET MVC provides an excellent mechanism: The server prints tokens to cookie and inside the form; When the form is submitted to server, token in cookie and token inside the form are sent by the HTTP request; Server validates the tokens. To print tokens to browser, just invoke HtmlHelper.AntiForgeryToken():<% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %> which writes to token to the form:<form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form> and the cookie: __RequestVerificationToken_Lw__=J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP When the above form is submitted, they are both sent to server. [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions to validate them:[HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... } This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, I encountered 2 problems: It is expected to add [ValidateAntiForgeryToken] to each controller, but actually I have to add it for each POST actions, which is a little crazy; After anti-forgery validation is turned on for server side, AJAX POST requests will consistently fail. Specify validation on controller (not on each action) Problem For the first problem, usually a controller contains actions for both HTTP GET and HTTP POST requests, and usually validations are expected for HTTP POST requests. So, if the [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become always invalid:[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { [HttpGet] public ActionResult Index() // Index page cannot work at all. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... } If user sends a HTTP GET request from a link: http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is, [ValidateAntiForgeryToken] attribute must be distributed to each HTTP POST action in the application:public class SomeController : Controller { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ... } // ... 
} Solution To avoid a large number of [ValidateAntiForgeryToken] attributes (one attribute for one HTTP POST action), I created a wrapper class of ValidateAntiForgeryTokenAttribute, where HTTP verbs can be specified:[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } When this attribute is declared on controller, only HTTP requests with the specified verbs are validated:[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // Actions for HTTP GET requests are not affected. // Only HTTP POST requests are validated. } Now one single attribute on controller turns on validation for all HTTP POST actions. Submit token via AJAX Problem For AJAX scenarios, when request is sent by JavaScript instead of form:$.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback); This kind of AJAX POST requests will always be invalid, because server side code cannot see the token in the posted data. Solution The token must be printed to browser then submitted back to server. So first of all, HtmlHelper.AntiForgeryToken() must be called in the page where the AJAX POST will be sent. Then jQuery must find the printed token in the page, and post it:$.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback); To be reusable, this can be encapsulated in a tiny jQuery plugin:(function ($) { $.getAntiForgeryToken = function () { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. return $("input[type='hidden'][name='__RequestVerificationToken']").val(); }; var addToken = function (data) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } data = data ? data + "&" : ""; return data + "__RequestVerificationToken=" + encodeURIComponent($.getAntiForgeryToken()); }; $.postAntiForgery = function (url, data, callback, type) { return $.post(url, addToken(data), callback, type); }; $.ajaxAntiForgery = function (settings) { settings.data = addToken(settings.data); return $.ajax(settings); }; })(jQuery); Then in the application just replace $.post() invocation with $.postAntiForgery(), and replace $.ajax() instead of $.ajaxAntiForgery():$.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted. This solution looks hard coded and stupid. If you have more elegant solution, please do tell me.

    Read the article

  • What is the fastest way to send 100,000 HTTP requests in Python?

    - by Igor G.
    Hello, I am opening a file which has 100,000 URLs. I need to send an HTTP request to each URL and print the status code. I am using Python 2.6, and so far have looked at the many confusing ways Python implements threading/concurrency. I have even looked at the Python concurrence library, but cannot figure out how to write this program correctly. Has anyone come across a similar problem? I guess generally I need to know how to perform thousands of tasks in Python as fast as possible - I suppose that means 'concurrently'. Thank you, Igor

    Read the article

  • AJAX: how to get progress feedback in web apps, and to avoid timeouts on long requests?

    - by David Dombrowsky
    This is a general design question about how to make a web application that will receive a large amount of uploaded data, process it, and return a result, all without the dreaded spinning beach-ball for 5 minutes or a possible HTTP timeout. Here are the requirements:

    - Make a web form where you can upload a CSV file containing a list of URLs.
    - When the user clicks "submit", the server fetches the file and checks each URL to see if it's alive, and what the title tag of the page is.
    - The result is a downloadable CSV file containing each URL and the resulting HTTP code.
    - The input CSV can be very large (100,000 rows), so the fetch process might take 5-30 minutes.

    My solution so far is to have a spinning JavaScript loop on the client side, which queries the server every second to determine the overall progress of the job. This seems kludgy to me, and I'm hesitant to accept this as the best solution. I'm using perl, template toolkit, and jquery, but any solution using any web technology would be acceptable.
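
    Since the question allows any web technology, here is a minimal, hypothetical sketch (in C#, not the asker's perl stack) of the usual shape of this design: the upload kicks off a background job keyed by an id, the job records its progress in shared state, and a lightweight progress endpoint is polled by the page until the job is done and the result CSV can be downloaded. The names and the storage choice are assumptions.

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    public static class UrlCheckJobs
    {
        // jobId -> fraction complete (0.0 to 1.0); a real app might also store status text or errors.
        public static readonly ConcurrentDictionary<Guid, double> Progress =
            new ConcurrentDictionary<Guid, double>();

        public static Guid Start(string[] urls)
        {
            var jobId = Guid.NewGuid();
            Progress[jobId] = 0.0;

            Task.Factory.StartNew(() =>
            {
                for (int i = 0; i < urls.Length; i++)
                {
                    // A CheckUrl(urls[i]) call would fetch the URL and record its HTTP status and title.
                    Progress[jobId] = (i + 1) / (double)urls.Length;
                }
            });

            return jobId;
        }
    }

    The form handler returns the jobId to the page, a progress endpoint simply returns Progress[jobId] as JSON, and the client polls it every second or two, which is essentially the "spinning JavaScript loop" already described, just backed by explicit job state on the server.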

    Read the article

  • Why can't I send SOAP requests to Ebay finding API with this php?

    - by Jay
    This is my code:

    <?php
    error_reporting(E_ALL);
    //new instance of soapClient pointing to Ebay finding api
    $client = new SoapClient("http://developer.ebay.com/webservices/finding/latest/FindingService.wsdl");
    //attach required parameters to soap message header
    $header_arr = array();
    $header_arr[] = new SoapHeader("X-EBAY-SOA-MESSAGE-PROTOCOL", "SOAP11");
    $header_arr[] = new SoapHeader("X-EBAY-SOA-SERVICE-NAME", "FindingService");
    $header_arr[] = new SoapHeader("X-EBAY-SOA-OPERATION-NAME", "findItemsByKeywords");
    $header_arr[] = new SoapHeader("X-EBAY-SOA-SERVICE-VERSION", "1.0.0");
    $header_arr[] = new SoapHeader("X-EBAY-SOA-GLOBAL-ID", "EBAY-GB");
    $header_arr[] = new SoapHeader("X-EBAY-SOA-SECURITY-APPNAME", "REMOVED");
    $header_arr[] = new SoapHeader("X-EBAY-SOA-REQUEST-DATA-FORMAT", "XML");
    $header_arr[] = new SoapHeader("X-EBAY-SOA-MESSAGE-PROTOCOL", "XML");
    $test = $client->__setSoapHeaders($header_arr);
    $client->__setLocation("http://svcs.ebay.com/services/search/FindingService/v1"); //endpoint
    $FindItemsByKeywordsRequest = array( "keywords" => "potter" );
    $result = $client->__soapCall("findItemsByKeywords", $FindItemsByKeywordsRequest);
    //print_r($client->__getFunctions());
    //print_r($client->__getTypes());
    //print_r($result);
    ?>

    And this is the error I receive:

    Fatal error: Uncaught SoapFault exception: [axis2ns2:Server] Missing SOA operation name header in C:\xampplite\htdocs\OOP\newfile.php:25 Stack trace: #0 C:\xampplite\htdocs\OOP\newfile.php(25): SoapClient->__soapCall('findItemsByKeyw...', Array) #1 {main} thrown in C:\xampplite\htdocs\OOP\newfile.php on line 25

    It doesn't make sense; I have already set the operation name in the header of the request... Does anyone know what is wrong here?

    Read the article

  • Does JSONP scale? How many JSONP requests can I send?

    - by Cheeso
    Based on Please explain JSONP, I understand that JSONP can be used to get around the same-origin policy. But in order to do that, the page must use a <script> tag. I know that pages can dynamically emit new script tags, such as with: <script type="text/javascript" language='javascript'> document.write('<script type="text/javascript" ' + 'id="contentloadtag" defer="defer" ' + 'src="javascript:void(0)"><\/script>'); var contentloadtag=document.getElementById("contentloadtag"); contentloadtag.onreadystatechange=function(){ if (this.readyState=="complete") { init(); } } </script> (the above works in IE, don't think it works in FF). ... but does this mean, effectively, that every JSONP call requires me to emit another <script> tag into the document? Can I remove the <script> tags that are done?

    Read the article

  • Linq2Sql relationships and WCF serialization problem

    - by devmania
    Hi, here is my scenario. I have two tables with a one-to-many relationship set between Table1.id and Table2.fid:

    Table1: id, name
    Table2: id, family, fid

    Here is my WCF service code:

    [OperationContract]
    public List<Table1> GetCustomers(string numberToFetch)
    {
        using (DataClassesDataContext context = new DataClassesDataContext())
        {
            return context.Table1s.Take(int.Parse(numberToFetch)).ToList();
        }
    }

    And my ASPX page:

    <body xmlns:sys="javascript:Sys" xmlns:dataview="javascript:Sys.UI.DataView">
      <div id="CustomerView" class="sys-template" sys:attach="dataview" dataview:autofetch="true"
           dataview:dataprovider="Service2.svc" dataview:fetchParameters="{{ {numberToFetch: 2} }}"
           dataview:fetchoperation="GetCustomers">
        <ul>
          <li>{{family}}</li>
        </ul>
      </div>

    Though I set the serialization mode to Unidirectional in the Linq2Sql designer, I am not able to get the family value; all I get is this in Firebug:

    {"d":[{"__type":"Table1:#","id":1,"name":"asd"},{"__type":"Table1:#","id":2,"name":"wewe"}]}

    Any help would be totally appreciated.
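
    A minimal sketch of one common way to address this, assuming the designer-generated association from Table1 to Table2 is named Table2s (adjust to whatever your DBML actually generated): LINQ to SQL only serializes child rows that were loaded before the DataContext was disposed, so the child collection can be loaded eagerly with DataLoadOptions. This is an assumption about the cause, not a confirmed diagnosis.

    using System.Collections.Generic;
    using System.Data.Linq;   // DataLoadOptions
    using System.Linq;
    using System.ServiceModel;

    [OperationContract]
    public List<Table1> GetCustomers(string numberToFetch)
    {
        using (DataClassesDataContext context = new DataClassesDataContext())
        {
            // Eagerly load the related Table2 rows so they are populated
            // before the context is disposed and the result is serialized.
            DataLoadOptions options = new DataLoadOptions();
            options.LoadWith<Table1>(t => t.Table2s);   // association name is an assumption
            context.LoadOptions = options;

            return context.Table1s.Take(int.Parse(numberToFetch)).ToList();
        }
    }

    With the children loaded (and Serialization Mode still set to Unidirectional), the JSON for each Table1 should include its Table2s collection, and the template can then bind to the family values inside that collection.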

    Read the article
